Soon we will be sharing a large part of our social lives with various kinds of artificial agents. However, many philosophical notions describing socio-cognitive abilities are rather restrictive, and we lack appropriate concepts for human-machine interactions that cannot be reduced to tool use. In this talk, I will demonstrate how one can develop a notion of joint action that is applicable to human-machine interactions, and in doing so I question whether our conception of sociality should be limited to living beings. In the second part of my talk, I will argue that even if one were to interpret human-machine interactions in a purely instrumental way, there are reasons to consider social norms regulating our interactions with artificial agents.
Hollywood movies such as Blade Runner 2049 and Ex Machina present visions of virtual agents that can charm us, flirt with us, reason with us and deceive us. But what would it take to actually build machines with these capacities? I will begin with a rather pessimistic viewpoint, suggesting that we are a long way from building human-like machines, and that we actually know very little about how humans interact. Without a robust understanding of real-world human social interaction, it will be very challenging to build virtual agents that can interact in these ways. To move forward, I suggest a recipe for building artificial agents and provide an example from our lab in the domain of mimicry. Our data show how the detailed study of human mimicry behaviour can translate into an algorithm for building realistic mimicry behaviour into virtual agents. I will consider what these kinds of results could mean for our theories of the computational processes underlying interactions in both humans and virtual agents. I will leave the question of whether we should try to build the agents that the movies show us as one for the audience to consider.
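To make the idea of translating observed mimicry into agent behaviour concrete, here is a minimal sketch of one possible rule: the agent copies a partner's movement after a fixed delay, with damping and motor noise. The delay, gain and noise values, and the function itself, are illustrative assumptions for demonstration, not the model described in the talk.

```python
# Illustrative sketch only: a time-lagged, damped mimicry rule for a virtual
# agent. All parameter values are assumptions, not results from the study.
import random
from collections import deque

def mimic_stream(partner_poses, delay_frames=30, gain=0.8, noise_sd=0.02):
    """Yield agent poses that loosely imitate the partner's pose after a delay."""
    buffer = deque(maxlen=delay_frames)
    agent_pose = 0.0  # e.g., head yaw in radians
    for pose in partner_poses:
        buffer.append(pose)
        if len(buffer) == delay_frames:
            target = buffer[0]  # the partner's pose roughly `delay_frames` ago
            # Move part of the way toward the delayed target, plus motor noise.
            agent_pose += gain * (target - agent_pose) + random.gauss(0.0, noise_sd)
        yield agent_pose

# Example: the agent gradually follows a partner who slowly turns their head.
partner = [0.01 * t for t in range(120)]
trajectory = list(mimic_stream(partner))
```

The appeal of such a rule is that each parameter (delay, gain, noise) can in principle be estimated from recordings of human dyads, which is the kind of data-to-algorithm translation the talk describes.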
As robots advance from the pages and screens of science fiction into our homes, hospitals, and schools, they are poised to take on increasingly social roles. Consequently, the need to understand the mechanisms supporting human-machine interactions is becoming increasingly pressing, and will require contributions from the social, cognitive and brain sciences in order to make progress. In this talk, we introduce a framework for studying the cognitive and brain mechanisms that support human-machine interactions, leveraging advances made in social cognition and cognitive neuroscience to link different levels of description with relevant theory and methods. Also highlighted are unique features that make this endeavour particularly challenging (and rewarding) for brain and behavioural scientists. Overall, the framework offers a way to conceptualize and study the cognitive science of human-machine interactions that respects the diversity of social machines, individuals' expectations and experiences, and the structure and function of multiple cognitive and brain systems.
In this talk, I will highlight three intersecting research themes in the study of humanoid AI that have been a focus of my work in recent years: i) using machines, in particular humanoid robots, to study humans; ii) enabling machines with a sense of touch; and iii) purpose-based learning from humans for AI.
Cooperation in human groups is challenging, and various mechanisms are required to sustain it, although it nevertheless usually decays over time. Here, we perform theoretically informed experiments involving networks of humans playing a public-goods game to which we sometimes added autonomous agents (bots) programmed to use only local knowledge. This experiment shows that cooperation can not only be stabilized, but even promoted, when the bots intervene in the partner selections made by the humans, reshaping social connections locally within a larger group. This network-intervention strategy outperformed other strategies, such as adding bots playing tit-for-tat. On the other hand, we also found that personalized intervention strategies did not work and sometimes even undermined human cooperation. Overall, this work sheds light on hybrid systems of humans and machines embedded in networks, and it shows that simple machine intelligence can be used to help humans to help themselves.
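As a concrete illustration of what "intervening in partner selections using only local knowledge" could look like, here is a minimal sketch of one possible bot rewiring rule. The rule, the graph representation and the function name are assumptions for demonstration, not the intervention used in the experiment.

```python
# Illustrative sketch only: one possible local rewiring rule for a bot in a
# networked public-goods game. Not taken from the experiment in the talk.

def bot_rewire(graph, bot, cooperated):
    """Suggest one local edge change using only the bot's neighbourhood.

    graph: dict mapping node -> set of neighbours (undirected).
    cooperated: dict mapping neighbour -> bool, last-round behaviour.
    Returns ('cut', u, v), ('add', u, v), or None.
    """
    neighbours = graph[bot]
    defectors = [n for n in neighbours if not cooperated.get(n, False)]
    cooperators = [n for n in neighbours if cooperated.get(n, False)]

    # Shield a cooperating neighbour from an adjacent defector, if any.
    for c in cooperators:
        for d in defectors:
            if d in graph[c]:
                return ("cut", c, d)

    # Otherwise, introduce two cooperating neighbours to each other.
    for i, c1 in enumerate(cooperators):
        for c2 in cooperators[i + 1:]:
            if c2 not in graph[c1]:
                return ("add", c1, c2)
    return None

# Example: the bot sits between a cooperator (A) and a defector (B) who are linked.
graph = {"bot": {"A", "B"}, "A": {"bot", "B"}, "B": {"bot", "A"}}
print(bot_rewire(graph, "bot", {"A": True, "B": False}))  # -> ('cut', 'A', 'B')
```

The key property such a rule shares with the bots in the talk is that it needs no global view of the network: every decision is made from the bot's own neighbourhood.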
Machine intelligence plays a growing role in our lives. How do we ensure that these machines will be trustworthy? This talk explores various psychological, social, cultural and political factors that shape our trust in machines. It will also propose an interdisciplinary agenda for understanding and improving our human-machine ecology.
Robotaxis will be tools for passengers who use them, but also agents with whom other humans will negotiate traffic on the road. Will we, humans, be willing to cooperate with them, or will we happily exploit them to serve our selfish goals? Recent developments in behavioural game theory suggest that we often cooperate with others because we recognize the need to reciprocally sacrifice some of our personal interests to attain mutually beneficial results. If this is true, and if we perceive machines to be strictly utility-maximizing entities that are unable to spontaneously alter their ultimate objectives, we should expect to cooperate with them less than we do with fellow humans. I will point to empirical studies that support this prediction and discuss policies that we may wish to consider to regulate our future interactions with autonomous machines on roads.
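A toy game-theoretic calculation makes the prediction tangible: if I believe my partner is unlikely to reciprocate, the expected value of cooperating drops. The payoff values and belief probabilities below are illustrative assumptions, not figures from the studies the talk refers to.

```python
# Illustrative sketch only: expected payoffs in a one-shot prisoner's dilemma
# under different beliefs about the partner. All numbers are assumptions.

# Standard payoff ordering: temptation > reward > punishment > sucker.
T, R, P, S = 5, 3, 1, 0

def expected_if_cooperate(p_partner_cooperates):
    return p_partner_cooperates * R + (1 - p_partner_cooperates) * S

def expected_if_defect(p_partner_cooperates):
    return p_partner_cooperates * T + (1 - p_partner_cooperates) * P

for label, belief in [("human partner", 0.6), ("machine partner", 0.1)]:
    print(f"{label}: E[cooperate]={expected_if_cooperate(belief):.2f}, "
          f"E[defect]={expected_if_defect(belief):.2f}")
```

The point of the comparison is only that beliefs about the partner's willingness to reciprocate change the value of cooperating, which is the mechanism the talk invokes for why machines perceived as strict utility maximizers may attract less cooperation.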
Socially assistive robots are those which should provide some helpful function through their social interactions. This is really all about social influence – designing socially persuasive and hence influential robots that can induce some desirable behaviour change in the user. Literally, using robots to make ‘better’ humans. However, it’s hard to do this well, and potentially at least a little bit ethically questionable. This is where the humans come in – (most) humans are pretty good at this social stuff and some (e.g. therapists, teachers) are expert at it. They can also help us roboticists figure out exactly what our robots should (not) do, and to think about the broader social impact of our work. So clearly, we should be working with humans to make better robots. In this talk I’ll draw on examples from my work to demonstrate how both of these things can be done in practice, and try to convince you that working on these two goals simultaneously is the best way forward for engineering effective and meaningful human-robot interactions.
The behavioral approach to ethics views morality not as the mere result of well-considered ethical intentions but as mediated by environmental influences. Relatedly, the post-phenomenological approach to the philosophy of technology considers artefacts not as merely instrumental but as mediators of human experience and behavior. I report the results of two experiments inspired by these approaches. The first examines people's aversion to non-human agents that make decisions in the moral domain. The second investigates a decision-maker's ability to shift blame to a non-human agent. Both experiments study the ethical consequences of having humans in or out of the loop.
This talk, which is aimed at non-experts, reviews a few unfortunate myths and misperceptions about Artificial Intelligence. What is AI? How does it relate to Machine Learning? How close are we to human-level AI? Are super-intelligent robots going to take over the world? What are real concerns about AI?
Artificial agents (e.g., animated avatars, robots) are expected to have an increasing presence in our lives, acting as assistants in consumer, education and healthcare settings. Much work attempting to maximise engagement with artificial agents has focused on how these agents look and behave, but very little has focused on understanding how the fundamental neurocognitive mechanisms of human social perception and expectation come into play. This is, in part, because we still know relatively little about these mechanisms, due to a dearth of experimental paradigms that can offer ecological validity, experimental control and objective measures of attention, behaviour and neural processing during dynamic social interactions. Artificial agents in virtual reality can offer a solution, by realistically simulating dyadic interactions in a context that offers experimental control. I combine neurophysiology, eye-tracking and motion capture measures across various virtual interaction paradigms to objectively measure social attention and behaviour during interactions with other humans and artificial agents. I also investigate how our beliefs and expectations about artificial agents influence our strategies for social information processing. In doing so, I hope to advance our understanding of the neurocognitive mechanisms of social interaction and inform how to best design and position artificial agents to promote intuitive interactions.
For over a decade, digital prediction tools have been changing policing. Their use has sparked critique about the way in which such tools challenge the presumption of innocence, the right to due process and non-discrimination, the proportional use of data, and the secrecy that surrounds the actual variables used in the algorithms. More recently, criticisms have been reformulated as a question of agency: the tool itself has the power to change society. This paper discusses this agency: Who acts in the production and implementation of predictive software? Which parts of the software are comprehensible, and which are harder to grasp? When and where is the data that feeds the algorithm generated, and who takes part in programming the software? A life-cycle methodology is used to capture these dynamics. By tracing the life cycle of data and of a prediction algorithm, this paper describes how tools, data and humans shape each other and how, together, they give rise to the logic of the pattern as a new agent within the prediction landscape.