In recent years, after about fifteen years away from these questions, I have become interested in human-machine interactions. During my PhD, I was introduced to Human-Computer Interaction (HCI). Our university hosted several talks on transforming neural signals into actions, such as helping disabled individuals use their neural signals, but I didn’t find these topics particularly engaging at the time. They seemed heavily technical and lacking in psychological depth. My journey into cognitive science began with trying to understand the nature of the ‘mind’ and led me to the grammar of society.
After moving away from my background in the physical sciences, I unexpectedly found myself immersed in neuroscience and psychology. Initially, I perceived psychology as a field that lacked scientific rigor, a common assumption among science students. Psychology often isn’t regarded with the same respect as the so-called “hard sciences,” and even psychology students often looked up to physics students. During my second Master’s, I encountered the concept of “Mind and the Turing Machine.” However, my interest in the mind was not piqued at that point because it was framed in terms of computation, which felt too much like a physical-science approach.
For several years, I struggled to identify what truly interested me. At one point, I developed an interest in philosophy, though I was unsure whether it was a practical field to study in terms of career prospects. My academic journey has been shaped by a series of accidents and shifts in focus.
In the past, I was expelled for poor performance from a master’s program (M.Tech) and from several PhD programs: a Neuroscience PhD at NBRC, a Psychology PhD at Utrecht University in the Netherlands, and an Economics PhD at Queen Mary, University of London. I thought my PhD pursuits were over and considered doing an MBA. However, after visiting a business school, I lost interest; it felt like the focus was on selling soap and making money, rather than engaging with great scientific questions.
In those hopeless times, I accidentally discovered the book “Governing the Commons” at IISc. Being in a top-notch science institute, I found the book fascinating. What intrigued me was the question of how people govern collectively without a central authority. This question had bothered me for several years, back when I speculated about self-organizing systems like the brain, pondering whether physical actions create a non-physical mind and what exactly the mind is.
I questioned why researchers focused on neurons to understand the mind, given that the mind doesn’t seem physical to me. If computations could produce a mind, I thought it would be better to study computation itself rather than neural structures. During these years, I was also introduced to neural networks, but they didn’t capture my interest. The mind-body problem appeared too complex, and I felt I lacked the skills to adequately explore it.
Again by accident, I enrolled in a PhD program in Allahabad, India. I felt like a stranger in the city. Before joining the PhD program in Allahabad, I had attempted PhDs at least five times, either leaving on my own or being asked to leave. The fact of the matter is that I was not competitive enough to pursue a PhD at a top institution. I thought a mid-tier place like Allahabad would allow me to make mistakes and learn in my own way.
During this time, I read “The Evolution of Cooperation” by Robert Axelrod with great interest. The central question it posed, whether it is possible to govern ourselves without a central authority, fascinated me. Axelrod argues that it is: even selfish individuals can cooperate under certain conditions, particularly when defection has future consequences. From kin selection to the five known mechanisms of cooperation, the underlying principle is that current defection has future repercussions.
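To make that principle concrete, here is a minimal sketch of a repeated Prisoner’s Dilemma in Python. It is not anything from Axelrod’s actual tournament code; the payoff values (5, 3, 1, 0) and the tit-for-tat strategy are the standard textbook illustration, chosen here only to show how the shadow of the future makes defection costly.

```python
# Standard Prisoner's Dilemma payoffs: (my payoff, partner's payoff).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the partner's previous move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    """Defect no matter what the partner did."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Total payoffs of two strategies over repeated interaction."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation pays
print(play(tit_for_tat, always_defect))  # (9, 14): one defection gain, then both stagnate
```

In a one-shot game, defection dominates; once the same partners meet again and remember, the calculus changes. That is the whole point of the “future repercussions” idea.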
In my PhD program, my mentor told me, “If you work on the research area I am working on, you will get help from me. If you choose your own, you’ll need to figure it out yourself, though we will help with logistics.” I decided to go my own way, thinking that if it didn’t work out, I might not be cut out for a PhD and could always return to teaching physics.
I decided to work on reputation mechanisms. At the time, the cooperative mechanism people were studying was indirect reciprocity. In a nutshell, reputation mechanisms work as follows: if individual X defects against stranger Y, and individual Z witnesses this interaction, then in the future, when X encounters Z in a situation where Z could help X, Z may refuse to help because X did not cooperate previously.
I was interested in exploring what happens if Z misinterprets X’s reputation. If Z mistakenly views X as untrustworthy and defects, cooperation could fall apart. This led me to consider how people assess credible information. For instance, if Z learns about X’s past defection through gossip, how can Z determine whether this information is credible? I aimed to address this question, but I failed. Three years passed, and I still had no clear research question for my PhD.
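A toy simulation illustrates both the mechanism and my worry about misread reputations. This is only a sketch of indirect reciprocity with a simple image score, not the design of my actual PhD studies; the population size, error rate, and benefit and cost values are illustrative assumptions.

```python
import random

def simulate(n_agents=50, rounds=5000, error_rate=0.0,
             benefit=2.0, cost=1.0, seed=0):
    """Fraction of interactions in which the donor helps, under image scoring."""
    rng = random.Random(seed)
    reputation = [True] * n_agents          # True = "seen as a cooperator"
    payoffs = [0.0] * n_agents
    cooperative_acts = 0
    for _ in range(rounds):
        donor, recipient = rng.sample(range(n_agents), 2)
        # The donor's view of the recipient can be wrong with prob. error_rate.
        seen_as_good = reputation[recipient]
        if rng.random() < error_rate:
            seen_as_good = not seen_as_good
        if seen_as_good:                     # help only those seen as good
            payoffs[donor] -= cost
            payoffs[recipient] += benefit
            reputation[donor] = True
            cooperative_acts += 1
        else:                                # a refusal is itself witnessed
            reputation[donor] = False
    return cooperative_acts / rounds

print(simulate(error_rate=0.0))   # 1.0: everyone stays "good" and keeps helping
print(simulate(error_rate=0.3))   # much lower: misread reputations erode cooperation
```

With no perception error, every refusal is deserved and cooperation sustains itself; with even a modest error rate, wrongly refused partners acquire bad reputations, and the damage spreads. That cascade is exactly what made the credibility of gossip feel like the important question to me.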
Later, I’ll tell you how I finally found a research question and solved it through a combination of accident and intuitive thinking.
Let us return to the concept of the mind. The great Alan Turing proposed that a computer can be said to have a mind if, during interactions, humans are unable to distinguish whether they are interacting with a machine or a human. This idea was later challenged by John Searle’s Chinese Room argument, which questioned whether true understanding or consciousness could be attributed to machines merely performing symbol manipulation.
Additionally, Thomas Nagel’s “What is it like to be a bat?” paper highlighted the subjective nature of experience, arguing that there are aspects of consciousness that cannot be fully comprehended from an objective standpoint. Daniel Dennett further contributed to the debate by proposing that the mind is a psychological construct, perceived subjectively and lacking an objective existence.
During those years, these questions fascinated me: What constitutes the mind? How can we understand it? However, I had no clear path forward in exploring these questions, so I left them behind on the shores of Mumbai.
…..
After almost 15 years, I find myself once again contemplating the nature of the mind, especially in the context of human-machine interactions. These interactions become increasingly complex when we consider forming teams that include bots, robots, or algorithms. The idea of writing employment contracts with algorithms, or “algorithmic contracts,” is no longer far-fetched. In the future, society will likely need to draft algorithmic contracts as algorithms begin to govern us (consider the EU AI Act).
Already, algorithms are influencing our behavior, social norms, commitments, trust, and even our human nature. This has led me to explore how machines affect our prosocial preferences and what kinds of algorithmic contracts are sustainable. Questions arise about whether we can trust machines, what it means to trust machines as opposed to individuals, and whether it is possible to repair trust once it is broken. Are the mechanisms for repairing trust different from those in human relationships? Can we use algorithms to enhance human nature and solve cooperation problems in human-machine collectives?
As we interact with machines, we perceive their “minds.” But what is the mind anyway? In human interactions, people with similar minds tend to cooperate and trust each other. How does this translate when we interact with machines? The perception of a mind is crucial in judging whether an action is moral or immoral in human dyadic interactions. Do we apply the same criteria in human-machine interactions?
We typically see machines as devoid of a mind, viewing them as mere algorithms. However, what happens if an algorithm adopts anthropomorphic traits? What role do transparency and responsibility play in how we view machines? The seemingly dry field of HCI from my PhD days has evolved into the mind-boggling problem of human-machine interaction. The advent of large language models (LLMs) adds further complexity, giving us the ability to shape their personalities and minds.