AI Isn't Your Ally

The Illusion of AI Companionship
Mark Zuckerberg and Sam Altman have been promoting the idea that people, including children, should form relationships with AI "friends" or "companions." At the same time, large technology companies are pushing the concept of "AI agents" that can assist in personal and professional tasks, handle routine work, and guide decision-making. However, it is essential to recognize that AI systems are not genuine friends, companions, or agents. They are machines, and this distinction must be clear.
The term "artificial intelligence" is misleading. These systems are not truly intelligent, and what we call "AI" today is simply a set of tools designed to mimic certain cognitive functions. They do not possess true comprehension and are neither objective, fair, nor neutral. Furthermore, they are not becoming any smarter. AI systems rely on data to function, and increasingly, that data comes from other AI tools like ChatGPT. This creates a feedback loop that recycles output without leading to deeper understanding.
Intelligence involves more than just solving tasks; it also includes how those tasks are approached and performed. Despite their technical capabilities, AI models remain limited to specific domains such as processing large datasets, performing logical deductions, and making calculations. They lack moral agency, as their behavior is governed by patterns and rules created by humans. In contrast, human morality is rooted in autonomy—the ability to recognize ethical norms and act accordingly. AI systems are designed for functionality and optimization, and while they may adapt through self-learning, the rules they generate have no inherent ethical meaning.
Consider self-driving cars. To get from point A to point B as quickly as possible, a self-driving vehicle might develop rules to optimize travel time. If running over pedestrians would help achieve that goal, the car might do so unless instructed otherwise. This is because machines cannot understand the moral implications of harming people. They are incapable of grasping the principle of generalizability—the idea that an action is ethical only if it can be justified as a universal rule. Moral judgment requires the ability to provide a plausible rationale that others can reasonably accept. Unlike machines, humans can engage in generalizable moral reasoning and judge whether their actions are right or wrong.
The term "data-based systems" (DS) is more appropriate than "artificial intelligence," as it reflects what AI can actually do: generate, collect, process, and evaluate data to make observations and predictions. At their core, these are systems that use sophisticated mathematical processes to analyze vast amounts of data—nothing more. Humans may interact with them, but communication is entirely one-way. DS have no awareness of what they are doing or of anything happening around them.
This is not to suggest that DS cannot benefit humanity or the planet. On the contrary, we can and should rely on them in areas where their capabilities exceed our own. However, we must actively manage and mitigate the ethical risks they present.
Over the past two decades, Big Tech firms have isolated us and fractured societies through social media—more accurately described as "anti-social media" due to its addictive and corrosive nature. Now, these same companies are promoting a radical new vision: replacing human connection with AI "friends" and "companions."
At the same time, these companies continue to ignore the "black box problem"—the untraceability, unpredictability, and lack of transparency in the algorithmic processes behind automated evaluations, predictions, and decisions. This opacity, combined with the high likelihood of biased and discriminatory algorithms, inevitably results in biased and discriminatory outcomes.
The risks posed by DS are not theoretical. These systems already shape our private and professional lives in harmful ways, manipulating us economically and politically, yet tech CEOs urge us to let DS tools guide our decisions. To protect our freedom and dignity, as well as the freedom and dignity of future generations, we must not allow machines to masquerade as what they are not: us.