
Evolving with the robots

Live demonstration of Social Robotics at the Wellcome Collection's Friday Late Spectacular - Body Language event, 4 Nov. 2016 (19:00-23:00), Euston, London, UK

Fear-mongering and myth-making about human-like and social robots is stopping us from engaging with the technology behind them and from having an input into how they - and we - evolve, says Hatice Gunes, Associate Professor at the University of Cambridge's Computer Laboratory.

Dr Gunes, who will be speaking about her research in the Cambridge Series at the Hay Festival on 1 June, says we need to move beyond sensationalist portrayals of human-like robots and understand how they work.

Her Hay talk will centre on human-robot interaction (HRI) and how it can be used for our benefit, for instance to help children with autism learn how to read expressions, or to stimulate the senses of elderly people in care.

Dr Gunes will outline how HRI works. She says it has to be believable in order to be effective, which means robots’ appearance is very important. This is what has driven the development of humanoid robots with arms and aspects of a human face that can behave in a human-like way, for instance by moving their arms, legs and eyes. More important than appearance, however, are behaviour and emotional expressivity. Dr Gunes points to the way we relate to Disney’s animated characters. “People believe in them because they can portray emotion,” she says.

Achieving expressivity requires an understanding of how human emotions are portrayed and triggered. Scientists have been working on artificial emotional intelligence, which enables new technology such as embodied agents and robots both to express and to detect emotions by understanding non-verbal cues. Dr Gunes cites the work of Charles Darwin on the visual nature of emotions and how they can be mapped to various changes in facial expressions.

Her research investigates how humanoids can be programmed not only to extract and respond to facial cues to emotions, but also to understand the context in which those emotions are expressed. That means they will be able to offer a response that is sensitive to specific contexts.

Will robots ever be able to have emotions themselves though? Dr Gunes says there is no reason why not and questions what emotions are. The process of working with robots on artificial emotional intelligence unpicks the nature of our emotions, showing them to be a layering of different goals, experiences and stimuli.

Another area scientists are exploring in their quest to improve humanoids’ believability is personality. Dr Gunes has done a lot of work on personality in telepresence robotics - robots controlled remotely by a human, a kind of 3D avatar. These can be used in many ways, for instance by medical staff to offer remote home care: the medical professional can be based anywhere and operate the robot through a virtual headset. Dr Gunes is interested in how people react to the teleoperator (the human controlling the robot remotely) who is present in robot form. Once again, both the robot’s physical appearance and its behaviour are important, and research shows that the robot’s personality needs to be task dependent.

Dr Gunes says there remain some big challenges for scientists working on HRI, including how to process and combine all the different data robots gather, how to modify robots’ appearance and behaviour dynamically, and how to keep them powered around the clock. The major challenges, however, are to do with breaking down some of the myths and fears people have about humanoids.

Part of this stems from people not understanding the benefits humanoid robots can bring and why, for instance, they need to take on a human form and understand emotions. She says humanoids can be positive in terms of increasing trust and engagement among certain groups, such as the elderly; that humans tend to anthropomorphise technology in any event; and that robots can be programmed to be limited to positive emotions that promote altruism.

“People tend to love or hate robots, but they don’t really know a lot about them,” says Dr Gunes. “They mainly know of them from sci-fi movies or Netflix. They need to be demystified and opened up so people understand them and are able to question the science, code robots and see for themselves how they work. In the future people will be able to adapt and personalise their robots like they do their phones. They will be as common as smartphones and will operate with humans, predicting their needs. There will be a form of co-evolution.”

She adds: “Understanding robots will empower people so they can help to shape them to do good. The public is usually on the receiving end of new technology. Demystifying robots gives people back the power to push for change and create the robots they want.”

Demystifying how social and human-like robots work is vital so that we can understand and shape how they will affect our future, Dr Hatice Gunes will tell the Hay Festival next week.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. Images, including our videos, are Copyright ©University of Cambridge and licensors/contributors as identified.  All rights reserved. We make our image and video content available in a number of ways – as here, on our main website under its Terms and conditions, and on a range of channels including social media that permit your use and sharing of our content under their respective Terms.

