Phase 1: Imagined encounters with near-human agents
Subjective ratings of near-human agents
My first PhD study was a data-gathering exercise. I wanted to find out what it is about almost-human faces that causes people to find them uncanny; however, beyond the images themselves, I had no empirical measure of the uncanniness of specific faces. Such measures were necessary so that I could use the faces in experiments, with initial ratings to draw on when I came to compare how people perceived them.
In August 2007 I sent out surveys to eight judges, who gave up their time to rate twenty faces on their eeriness, human likeness and strangeness, and to add a comment on how each face made them feel. From these judgements I assigned a mean rating to each face and plotted the means to see whether the ratings traced the classic uncanny curve.
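As an illustration of this step, here is a minimal sketch in Python of computing and plotting the mean ratings. The file name, column names and data layout are my assumptions for the example, not the study's actual format.

```python
# Minimal sketch of the mean-rating step. Assumes a hypothetical CSV
# with one row per (judge, face) pair and the columns named below; the
# real data layout is not described in this post.
import pandas as pd
import matplotlib.pyplot as plt

ratings = pd.read_csv("judge_ratings.csv")  # columns: judge, face, eeriness, human_likeness, strangeness

# Average the eight judges' scores for each of the twenty faces.
means = ratings.groupby("face")[["eeriness", "human_likeness", "strangeness"]].mean()

# Plot eeriness against human likeness: if the ratings follow the
# classic uncanny curve, eeriness should peak near, but short of,
# full human likeness.
plt.scatter(means["human_likeness"], means["eeriness"])
plt.xlabel("Mean human likeness")
plt.ylabel("Mean eeriness")
plt.title("Mean ratings per face (8 judges, 20 faces)")
plt.show()
```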
I was surprised to find that the images actually clustered into three groups rather than lying neatly on the curve.
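The post does not say how the three groups were identified; they may simply have been visible by eye in the plot. Continuing from the sketch above, k-means with k = 3 is one way such a grouping could be recovered computationally, offered purely as an illustration.

```python
# Illustrative only: k-means with k = 3 on the per-face means from the
# sketch above. This is a stand-in for however the clusters were
# actually identified.
from sklearn.cluster import KMeans

features = means[["human_likeness", "eeriness"]].to_numpy()
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
for cluster in range(3):
    print(f"Cluster {cluster}:", list(means.index[labels == cluster]))
```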
I selected one image from each cluster, added a human face and an artificial face, and used those five faces for my next study.
Imagined encounters with near-human agents
The second study was carried out online via a custom-designed website hosted at the Open University. 212 participants were recruited during August 2009 through four main channels: a link in an article about the uncanny valley in an online animation and graphics magazine, the Open University’s internal message board, a private social networking forum, and the open social networking sites Facebook and Twitter. Participants were mostly female (66%), and the largest age group was 31 to 35 (51%).
Participants saw five different faces and were asked questions on three key themes: how they would describe each face so that someone else could pick it out of a crowd, how they would feel about sharing a home with someone who looked like that, and how they would rate the face on the three measures used previously (human likeness, strangeness and eeriness).
Ratings
I found that the five faces were rated significantly differently in terms of human likeness, strangeness and eeriness. As you can see from the chart above, the doll face was rated the most eerie and the CGI face the least eerie. These differences supported the findings from the earlier rating study.
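The post does not name the statistical test used. As one plausible within-subjects analysis (every participant rated all five faces), here is a sketch of a Friedman test on the eeriness ratings; the file name and face labels are placeholders.

```python
# Hypothetical layout: one row per participant, one column per face.
# A Friedman test asks whether the five faces' eeriness ratings differ
# beyond what chance would produce for repeated measures.
import pandas as pd
from scipy.stats import friedmanchisquare

scores = pd.read_csv("eeriness_by_participant.csv")
faces = ["human", "cgi", "robot", "doll", "artificial"]  # placeholder column names
stat, p = friedmanchisquare(*(scores[face] for face in faces))
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4g}")
```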
Descriptions
I was interested in the words participants used to describe each face: they could write as much or as little as they wanted, and I categorised their text according to which facial features were mentioned. The chart above shows the number of references made to key facial components for each face. It was immediately clear that eyes were mentioned most often, and that the robot and doll faces attracted the highest numbers of references to eyes.
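In the study this coding was done by hand; a simple keyword count like the sketch below captures the gist of it. The feature keyword lists are illustrative, not the actual coding scheme.

```python
# Count how many descriptions mention each facial feature, using
# illustrative keyword lists (not the study's actual coding scheme).
import re
from collections import Counter

FEATURES = {
    "eyes": {"eye", "eyes", "gaze"},
    "mouth": {"mouth", "lips", "smile"},
    "skin": {"skin", "complexion"},
    "hair": {"hair"},
}

def feature_counts(descriptions):
    """Count, per feature, how many descriptions mention it."""
    counts = Counter()
    for text in descriptions:
        words = set(re.findall(r"[a-z]+", text.lower()))
        for feature, keywords in FEATURES.items():
            if words & keywords:
                counts[feature] += 1
    return counts

print(feature_counts([
    "Her eyes are glassy and her smile is fixed",
    "Pale skin, staring eyes",
]))
# Counter({'eyes': 2, 'mouth': 1, 'skin': 1})
```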
Reactions
I wanted to explore how people would feel about interacting closely with each of the entities in the pictures. This part of the experiment did not work as well as the others: as you can see from the chart above, when I categorised the emotions people mentioned as positive, negative or neutral, I found that a very large proportion of participants either mentioned no emotions at all in their descriptions or used emotions from conflicting categories. I am therefore developing a different method for exploring the emotional component of the uncanny valley effect.
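To make that coding problem concrete, here is a sketch of the positive/negative/neutral coding and the failure mode it ran into; the emotion word lists are placeholders, not the scheme I actually used.

```python
# Code each response as positive, negative, conflicting, or none.
# Responses with words from both categories, or with no emotion words
# at all, resist a single label: exactly the problem described above.
import re

POSITIVE = {"happy", "comfortable", "curious", "friendly"}
NEGATIVE = {"scared", "creepy", "uneasy", "disturbed"}

def code_response(text):
    words = set(re.findall(r"[a-z]+", text.lower()))
    pos, neg = words & POSITIVE, words & NEGATIVE
    if pos and neg:
        return "conflicting"
    if pos:
        return "positive"
    if neg:
        return "negative"
    return "no emotion mentioned"

print(code_response("I would feel uneasy but also curious"))  # -> conflicting
```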
Overall, the findings suggest that something interesting happens when people perceive faces that vary in human likeness: it may be that the standard mechanisms for face processing do not operate in the same way when viewing something that has many of the qualities of a human face but is not quite fully human.
My next research phase explored this question in more detail.