Humans, Machines, Theory of Mind, and Trust

baala
2 min read · Sep 4, 2024


Theory of mind, which is key to building trust.

The concept of trust is complex, and its meaning varies significantly across disciplines. In general, when X trusts Y, X invests resources or confidence in Y despite uncertainty about whether Y will reciprocate. Trust is not blind faith; it is a leap taken with incomplete information. When a lover says, “I trust you,” it signifies a belief that the partner will honor that trust even when safer options exist. Trust should be voluntary, not conditional: it is a choice to believe in someone’s reliability without complete certainty.
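
This structure can be made concrete with the investment game from behavioral economics, a standard formalization of trust under uncertainty (the game and all the numbers below are my illustration, not something the article specifies). X sends part of an endowment to Y, the amount grows in transit, and Y alone decides how much to return. A minimal Python sketch:

```python
# Minimal sketch of the investment (trust) game. The multiplier,
# endowment, and amounts are illustrative assumptions, not figures
# from the article.

MULTIPLIER = 3  # resources sent to Y grow in transit


def trust_game(sent: float, returned_fraction: float,
               endowment: float = 10.0) -> tuple[float, float]:
    """X sends `sent` out of `endowment`; Y receives sent * MULTIPLIER
    and voluntarily returns a fraction of it."""
    received_by_y = sent * MULTIPLIER
    returned = received_by_y * returned_fraction
    payoff_x = endowment - sent + returned
    payoff_y = received_by_y - returned
    return payoff_x, payoff_y


# If X trusts fully and Y reciprocates fairly, both end up ahead:
print(trust_game(sent=10, returned_fraction=0.5))  # (15.0, 15.0)

# If Y keeps everything, X is worse off than never trusting at all,
# which is exactly the risk that makes trust a leap:
print(trust_game(sent=10, returned_fraction=0.0))  # (0.0, 30.0)
```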

In human interactions, trustworthiness is often assessed based on various markers. For instance, banks evaluate credit history, and families might look at social and economic status when considering marriage alliances. In romantic relationships, trustworthiness could be inferred from stable jobs, good education, or personal qualities. One-shot interactions, like online shopping, make trust more challenging. Without a history to rely on, we assess trustworthiness based on other cues, such as brand reputation or product images.

In biology, honest signals — like a male peacock’s ornamental tail — demonstrate trustworthiness, as they require significant resources to maintain. Similarly, in human society, people might display trustworthiness through actions that signal commitment, like making substantial financial or personal investments.

When it comes to trusting machines, the challenge is making them appear as trustworthy as humans. Just as symmetry in human faces can evoke trust, machines need to signal trustworthiness through reliable, consistent behavior. Trust in machines is not just about reliability, though; it also depends on how well they adhere to social norms and whether they can simulate human-like understanding. In a factory, a machine may only need to demonstrate accuracy and reliability to earn trust. But in elderly care, or when teaching a course, the expectations are different: the machine needs to be perceived as having a mind, capable of empathy (the ability to experience emotions) and of control over its actions (agency), not just of executing tasks.

Our recent studies suggest that trust in machines can be enhanced when they adhere to human social norms and demonstrate a theory of mind — an ability to understand and predict human behavior. However, trust is highly context-dependent. In scenarios that demand reciprocity, such as collective action where individual and collective interests conflict, achieving voluntary cooperation without enforcement is particularly challenging. Additionally, machines must navigate the delicate balance between earning trust and respecting personal freedom and privacy.
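
To see why such collective-action settings are hard, consider a public goods game, a textbook model of the conflict between individual and collective interests (my illustration; the parameters are not taken from the studies mentioned above). Each player does best individually by free-riding, yet the group does best when everyone contributes:

```python
# Sketch of a public goods game: contributions to a common pool are
# multiplied and shared equally. All parameters are illustrative
# assumptions, not taken from the studies mentioned above.

ENDOWMENT = 10.0
FACTOR = 1.6  # pool multiplier; FACTOR / n < 1 makes free-riding pay


def payoffs(contributions: list[float]) -> list[float]:
    """Each player contributes part of ENDOWMENT; the pooled total is
    multiplied by FACTOR and split evenly among all players."""
    share = FACTOR * sum(contributions) / len(contributions)
    return [ENDOWMENT - c + share for c in contributions]


# Full cooperation: every player ends up with 16.0.
print(payoffs([10, 10, 10, 10]))

# One free-rider earns 22.0 while the cooperators drop to 12.0: this
# is the individual incentive that unravels voluntary cooperation
# unless trust (or enforcement) holds it together.
print(payoffs([10, 10, 10, 0]))
```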

A major hurdle is that humans are often less forgiving of machine errors than of human ones, a phenomenon known as algorithm aversion. Overcoming this “machine penalty” requires rethinking how machines can repair trust, particularly in sensitive roles where trust is critical.

In essence, building trust in machines requires not just technical accuracy but also aligning their behavior with human social expectations, so that they can act as reliable partners across many aspects of life.
