
AI Is a Modern Leviathan
The Illusion of Machine Intelligence
Artificial Intelligence (AI) has infiltrated every aspect of our lives, promising efficiency and innovation. Yet, many of us operate under a critical misunderstanding: we equate AI’s capabilities with human intelligence. This flawed perception overlooks a fundamental truth — AI lacks human agency. While AI can process information and make decisions based on programmed algorithms, it does so without the moral compass or emotional depth that define human thought. The implications of this oversight are profound, raising urgent questions about morality, responsibility, and the ethical frameworks that underpin our society.
AI is not a biological entity driven by instincts or emotions; rather, it is a construct meticulously designed to serve specific purposes and goals. The questions that arise from this are crucial: What intentions guide the designers of AI? What underlying assumptions shape the algorithms they produce? And what consequences do these assumptions have for the ethical practices that govern human interactions? As we lean increasingly on technology, we must confront unsettling inquiries about the erosion of human memory, attention span, and executive control. How will our capacities for social learning and community engagement be affected? These are not merely academic musings but urgent issues that demand our attention before the technologies we unleash spread like a virus, diminishing our ability to reclaim what is inherently human.
The Regulatory Framework: A High-Stakes Game
The AI Act attempts to provide clarity in this turbulent landscape, defining an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions.” This gives us a framework to assess AI’s place in our world, emphasizing its autonomy and adaptability. However, the Act also highlights two critical stakeholders in the AI ecosystem: the provider and the deployer. Providers develop AI systems and place them on the market, while deployers use these systems in real-world applications.
This delineation is pivotal. It underscores the complex interplay between creation and use in the AI landscape. Providers and deployers must critically evaluate potential harms and risks associated with their technologies. They are not just architects of innovation; they are also guardians of ethical standards, responsible for ensuring that safety measures are ingrained throughout the AI lifecycle.
AI’s Dual Nature: Narrow vs. General
Ask GPT itself and it offers the textbook taxonomy: AI can be broadly classified into two categories, Narrow AI and General AI. “Narrow AI, also termed Weak AI, excels at performing specific tasks, think of image recognition or language translation” (which, in fact, is not AI at all). Its brilliance is confined to a narrow task spectrum; it cannot generalize or adapt across diverse applications. “In contrast, General AI aspires to mimic human cognitive abilities, understanding, reasoning, and learning, with an ambition to be indistinguishable from humans” (in fact, it does not reason as humans do). While this lofty goal remains unrealized, it is the focus of ongoing research and heated debate.
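To make the point about narrowness concrete, consider a deliberately crude sketch: a toy classifier standing in for “Narrow AI.” The word lists and examples below are hypothetical, not a real system.

```python
# A toy stand-in for "Narrow AI": a keyword-based sentiment classifier.
# It does one task passably and has no capacity to do anything else.
POSITIVE = {"great", "excellent", "love", "wonderful"}
NEGATIVE = {"terrible", "awful", "hate", "broken"}

def classify_sentiment(text: str) -> str:
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify_sentiment("I love this wonderful product"))   # positive
print(classify_sentiment("The service was awful"))           # negative
# Outside its narrow task, the system has nothing to offer:
print(classify_sentiment("What is the capital of France?"))  # neutral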
Given the stakes, the necessity for robust oversight is clear. Current AI systems operate without agency, yet we deploy them in contexts that demand moral judgment, especially when they serve vulnerable populations. The urgency for rigorous testing methodologies, akin to the randomized controlled trials (RCTs) used in medical and economic research, has never been greater.
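What could RCT-style testing of an AI system look like? A minimal sketch, assuming a hypothetical deployment in which users are randomly assigned to an AI-assisted service or the existing process, with a welfare outcome measured for each; every number below is simulated purely for illustration.

```python
import random
import statistics

random.seed(42)

# Hypothetical outcome model: a baseline welfare score plus an assumed
# (and, in a real trial, unknown) effect of the AI-assisted service.
def simulate_outcome(treated: bool) -> float:
    baseline = random.gauss(50.0, 10.0)
    return baseline + (2.0 if treated else 0.0)

# Randomized assignment: each user has a 50% chance of the AI service.
assignments = [random.random() < 0.5 for _ in range(2000)]
treated = [simulate_outcome(True) for a in assignments if a]
control = [simulate_outcome(False) for a in assignments if not a]

# Estimated effect and Welch's t statistic: the difference in means
# divided by its standard error.
diff = statistics.mean(treated) - statistics.mean(control)
se = (statistics.variance(treated) / len(treated)
      + statistics.variance(control) / len(control)) ** 0.5
print(f"estimated effect: {diff:.2f}  (t = {diff / se:.2f})")
```

The point is not the statistics but the discipline: before deployment, the claim that a system helps rather than harms would have to survive the same kind of test we demand of a new drug.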
Accountability in Crisis: The Underbelly of AI Technology
Despite the sophistication of these definitions, the reality is stark: many interpretations of AI overlook its lack of agency. The notion that machine intelligence equates to human intelligence obscures critical distinctions. Today’s AI systems are built on engineering principles rather than cognitive frameworks, estimating conditional probabilities rather than achieving genuine understanding. Their outputs are governed by statistical correlations, which complicates any attempt to trace the reasoning behind their decisions.
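The conditional-probability point can be made concrete with the simplest possible language model, a bigram model. Modern systems are incomparably larger, but the underlying move, predicting the next token from observed co-occurrence statistics rather than from understanding, is the same. The corpus below is a toy.

```python
import random
from collections import Counter, defaultdict

# Toy corpus; real models are trained on billions of tokens, but the
# mechanism is the same: count co-occurrences, normalize, sample.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Estimate P(next word | current word) from bigram counts.
counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1

def next_word(word: str) -> str:
    candidates = counts[word]
    if not candidates:  # dead end: word never seen with a successor
        return random.choice(corpus)
    words, weights = zip(*candidates.items())
    return random.choices(words, weights=weights)[0]

# "Generation" is just repeated sampling from conditional probabilities.
word, generated = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    generated.append(word)
print(" ".join(generated))
```

The output can look fluent, yet nothing in the process involves reasons; tracing why a given word was produced bottoms out in frequency counts.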
The implications of this lack of traceability are alarming. The philosopher W.V. Quine’s notion of underdetermination, the idea that the same body of evidence can support many incompatible theories, poses a direct challenge to accountability: if many internal accounts are equally consistent with a system’s outputs, no single explanation of its decisions can be pinned down. In our current technological environment, AI is often treated as just another tool, like a knife or even a bomb, rather than an entity that harbors the potential for societal harm. As Joanna Bryson aptly points out, the regulatory landscape governing AI is lackluster. Unlike drugs, which are meticulously scrutinized for safety and ethical compliance, AI technologies frequently privilege profit margins over the protection of vulnerable communities.
Corporate giants predominantly drive AI development, wielding significant power to shape regulations and circumvent existing guidelines. This profit-centric model prioritizes financial gain over social responsibility, effectively blurring lines of moral accountability. When harm occurs — especially impacting at-risk populations — assigning responsibility becomes a tangled mess, stifling the creation of effective governance frameworks.
The Social and Ethical Landscape: An Individualism Crisis
Current economic paradigms have perpetuated a narrow view of individuals as rational, self-interested beings — a perspective harkening back to early 20th-century economic theories. This viewpoint has fueled a competitive corporate culture where profit often trumps empathy and community welfare. In this landscape, individualism has eclipsed collective cooperation.
Urgent Call for Accountability and Oversight
Algorithms, designed to maximize efficiency, frequently prioritize user engagement at the expense of our attentional resources. As we navigate the turbulent waters of technological advancement, the imperative for accountability in AI development and deployment grows ever more pressing. The current framework must evolve to safeguard against the ethical pitfalls we face, ensuring that technologies do not merely serve the interests of a privileged few but protect the broader societal fabric. The question looms large: as AI continues to shape our lives, will we foster a future where innovation benefits everyone, or will we stand by as profits concentrate in the hands of a select few while the harms ripple through society as a whole? The time for decisive action and reform is now; the stakes have never been higher.