Off-beat perceptions and life tips of the world and all its players.
Keep it clean, keep it honest and as a great friend told me, keep swimming!
Posting articles here is my hobby. No advertisements on this page, although linked pages may have some. No copyright infringement intended.
Can You Teach a Robot to Love?
Researchers are using their knowledge of how human emotion develops to try to build robots that can feel.
From the sci-fi classic Blade Runner to the recent films Her and Ex Machina, pop culture is filled with stories demonstrating our simultaneous fascination with and fear of artificial intelligence (AI).
This interest is rooted in questions about where the line between human and artificial intelligence will be drawn, and whether that line might one day disappear. Will robots eventually be able to not only think but also feel and behave like us? Could a robot ever be fully human?
A new multidisciplinary field called developmental robotics is paving the way to some answers. Rather than writing programs that try to mimic specific human behaviors like love, developmental roboticists build machines that learn and develop the way humans do as they grow from newborn infants to adults. The goal is to model human learning and then create machines that can learn in similar ways.
My research at Kyoto University focused on building robots with a human-like emotional architecture that learn emotional behavior from the people they interact with, particularly their human caregivers. It offers insights into how we might one day be able to create machines with a full range of emotions comparable to our own.
How do humans develop emotion?
For a developmental roboticist, the first step in tackling the problem of robot emotion is understanding how humans develop the capacity for emotion. Though this process is still a bit of a mystery, the field of developmental psychology is beginning to unlock some of its secrets.
Around the age of two, when toddlers start to speak, they begin to learn the emotional names for their internal states. The word “sad,” for instance, refers to a certain set of physiological and psychological feelings, along with associated expressions of these feelings through tone of voice, facial appearance, and body movement. Sadness is often linked to slower-paced speech, a frowning mouth, and sluggish body movement. Anger, on the other hand, is generally associated with intense, abrupt speech; downturned eyebrows; and quick, aggressive movements.
As we get older, we use these behaviors to express our internal states and to recognize emotion in others. We even see emotion in non-human objects, such as a sad piece of music or an excited pet. We may also inspect our own behavior to deduce our emotions—for example, someone noticing her voice rising as a way to identify when she is feeling frustrated. All of this emotional expression and perception happens quickly, involuntarily, and subconsciously, conveying a great deal of information in a concise way.
How do we develop these forms of emotional expression? Are they learned or innate (or some combination of both)? For a long time, the prevailing view was that human emotional expressions are biologically determined, particularly when it comes to basic emotions like happiness, sadness, anger, fear, disgust, and surprise. However, new research suggests that how humans express emotion may, at least in part, depend on how they are taught to do so by their caregivers and peers.
Cross-cultural studies suggest that cultural environment plays a role in the development of emotion. According to research by Stanford psychologist Jeanne Tsai, emotional expression and ideals tend to differ across Eastern and Western cultures. Individuals in Western cultures tend to identify "feeling good" with high-arousal positive emotions, whereas individuals in Eastern cultures identify it with low-arousal positive emotions. In other words, Western cultures favor high-arousal emotions such as excited joy and elation, whereas Eastern cultures favor low-arousal emotions such as calm joy and bliss.
To illustrate, one study found that Asian Canadians prefer smiles between 20 and 60 percent intensity, whereas European Canadians prefer smiles from 80 to 100 percent intensity. Research has also demonstrated that people have a harder time identifying the emotions connected to the facial expressions and vocal cues of people from other cultures than those from their own culture.
Interactions with caregivers at a young age may play a particularly important role in the development of emotion. Research shows that when a rhesus monkey is separated early in life from its mother, its genes are expressed differently in brain regions controlling socioemotional behaviors. This primate study suggests that early parental care—or the absence of it—can profoundly change an infant's future emotional behavior, even at the genetic level.
Though studies of development in humans are rarer due to ethical issues, observations of children raised in emotionally deprived institutional environments show that early life experiences can have lasting effects on emotional intelligence. For example, individuals who grew up in Eastern European orphanages with little social interaction or attention from caregivers had difficulty later in life matching appropriate faces to happy, sad, and fearful scenarios (though they were able to match angry faces).
Building emotional robots
How can we use our knowledge of how human emotion develops to build robots with the capacity for emotion? The idea behind developmental robotics is to create robots that learn behaviors the same way human children do. Typically, a software model is programmed to represent a part of the robot’s “brain.” Then, the robot is exposed to an environment to stimulate the training of that model—for example, through interactions with a human caregiver. In my research, I tested the idea that caregivers can play a role in helping robots develop emotion, just as they play a role in emotional development for human infants.
First, we must ask: What would it mean for a robot to have emotion, and how would we know if it did? Neuroscientist Antonio Damasio defines emotion as “the expression of human flourishing or human distress, as they occur in the mind and body.” I have proposed that we define flourishing for a robot as a state of “all-systems-go” or homeostasis, where the battery, motors, and other parts are in working order and the core temperature is normal. We can imagine this as similar to a human infant being well fed, rested, and in good health. Distress is when something is wrong, which could result from a hot motor or CPU, low battery, or the saturation of microphone sensors with loud noises or vision sensors with extremely bright light. This parallels a newborn feeling distress from hunger, a wet diaper, or a loud sound.
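The flourishing/distress distinction above could be sketched as a simple homeostasis check. The sensor names and thresholds below are illustrative assumptions, not values from the research itself:

```python
# Hypothetical sketch of a robot "homeostasis" check: flourishing is an
# all-systems-go state, distress is any departure from it (a hot motor,
# low battery, or saturated microphone, mirroring an infant's hunger or
# a loud noise). Thresholds are made up for illustration.
from dataclasses import dataclass


@dataclass
class SensorReadings:
    battery_level: float   # 0.0-1.0, fraction of charge remaining
    motor_temp_c: float    # motor temperature in degrees Celsius
    cpu_temp_c: float      # CPU temperature in degrees Celsius
    mic_level_db: float    # current sound level at the microphone
    light_level: float     # 0.0-1.0, normalized brightness at the camera


def internal_state(s: SensorReadings) -> str:
    """Return 'flourishing' when every subsystem is in its comfort range,
    otherwise 'distressed'."""
    ok = (
        s.battery_level > 0.2
        and s.motor_temp_c < 70
        and s.cpu_temp_c < 80
        and s.mic_level_db < 90
        and s.light_level < 0.95
    )
    return "flourishing" if ok else "distressed"


print(internal_state(SensorReadings(0.8, 40, 55, 60, 0.5)))  # flourishing
print(internal_state(SensorReadings(0.1, 40, 55, 60, 0.5)))  # distressed (low battery)
```

In a real system each of these readings would feed a continuous internal-state model rather than a binary label, but the binary version captures the parallel with a well-fed versus distressed infant.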
In my research, I had human caregivers interact with robots in a variety of ways, expressing emotions such as happiness, sadness, and anger, while the robots were in both flourishing and distressed states. The caregiver behaviors parallel ways in which developmental psychologists have observed parents interacting with human infants. For example, when the robot is in a flourishing state, the caregiver plays with the robot in a joyous way, modeling happiness. When the robot is in a physically distressed state, the caregiver may display empathy, showing sadness while comforting the robot.
The result? The robot learned to express its internal states based on whatever models it was taught by its caregivers. Changing how the caregivers behave affects how the robots later express their internal states—in other words, how they show emotion. If the caregiver spoke to the robot in an empathic way when it showed distress, like saying “poor robot” in a slow and sorrowful voice, the robot would learn to express a distressed state as something similar to sadness, using a slow voice and movements. If the caregiver scolded the robot when it was in distress, expressing frustration or anger, the robot would later express a distressed state using the aggressive, intense patterns we typically associate with anger.
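The learning loop described above can be caricatured in a few lines: the robot records the expressive features its caregiver displays while it is in a given internal state, then later expresses that state using the style it observed. The class, feature names, and numbers below are hypothetical simplifications of this idea, not the actual architecture:

```python
# Minimal sketch of caregiver-taught emotional expression: each internal
# state is associated with the caregiver's observed expressive features
# (here just speech rate and movement speed, each on a 0-1 scale), and the
# robot expresses the state using the average of what it has seen.
from collections import defaultdict


class ExpressionLearner:
    def __init__(self):
        # internal state -> list of (speech_rate, movement_speed) observations
        self.observations = defaultdict(list)

    def observe(self, state, speech_rate, movement_speed):
        """Record the caregiver's expressive style while the robot is in `state`."""
        self.observations[state].append((speech_rate, movement_speed))

    def express(self, state):
        """Express `state` using the average style the caregiver modeled for it."""
        obs = self.observations[state]
        if not obs:
            return None  # nothing learned yet for this state
        n = len(obs)
        return tuple(sum(o[i] for o in obs) / n for i in range(2))


robot = ExpressionLearner()
# An empathic caregiver models slow, sorrowful behavior during robot distress...
robot.observe("distressed", speech_rate=0.25, movement_speed=0.5)
robot.observe("distressed", speech_rate=0.75, movement_speed=0.25)
# ...so the robot comes to express distress slowly, i.e. sadness-like.
print(robot.express("distressed"))  # (0.5, 0.375)
```

Swap in a scolding caregiver with fast, intense features and the same code yields an anger-like expression of distress, which is the core point: the mapping from internal state to expression is learned, not hard-coded.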
We could conduct similar experiments with various types of positive emotions. If a caregiver expresses calm and peaceful happiness to a robot in a flourishing state, this might lead the robot to express flourishing in the same relaxed, calm way. A caregiver who expresses more energetic, boisterous joy could produce a robot that expresses flourishing in a more intense, high-energy manner. Looking at the world around us, we can see families, households, and even cultures that demonstrate how human emotional expression can vary in similar ways.
Can a robot love?
In an article entitled “Can Robots Fall in Love, and Why Would They?,” leading AI philosopher Daniel Dennett described two possibilities for creating robots with emotions. The first is that an AI could be programmed to act like it was in love and, on the surface, appear to have emotions. Essentially, “a robot could fake love.”
The second and less obvious route is to create an architecture less like current computers and more like the human brain. This system would not be a hierarchy controlled from the top down; instead, behaviors would emerge “democratically” from low-level, competing elements, much as they do in biological nervous systems. With this structure, Dennett writes, you could potentially create a computer that truly loved, though doing so would be “beyond hard.”
While still in its early stages, my research offers an approach to building emotional robots that follows Dennett’s “emergent” model. Rather than hard-coding emotions into a robot using fixed rules, we might be able to create a robot with an emotional architecture similar to a human’s, wherein firsthand experiences with emotions like happiness and love teach the robot how to express these emotions in the future.
Emotions color every human interaction and are the foundation for living in a social world. As robots become a more integral part of our daily lives, we will benefit if they can understand and respond to our emotional states. Emotional robots may be able to communicate with us in ways we intuitively understand, like showing a sluggish walk when their battery needs recharging, instead of a confusing panel of lights and beeps. The ultimate goal is not necessarily to create robots that can fall in love or fulfill all our human emotional needs, but to build machines that can interact with us in a more human way, rather than requiring us to behave more like machines.