AI, power and the future of human autonomy – A growing divide


The rapid development of advanced AI systems, together with the pursuit of artificial general intelligence, has ushered in a new era of technological capability. These systems are becoming increasingly sophisticated, capable of performing tasks that were once thought to require uniquely human intelligence.

From diagnosing diseases to crafting legal arguments, AI is reshaping industries and redefining what it means to be productive in the modern world. However, with this progress comes a growing concern: the concentration of power in the hands of a select few – those who possess the expertise and resources to create, control, and manipulate AI systems for financial and political gain.

This dynamic creates a stark divide between two groups: the FEW, a small elite group of individuals or organisations with deep technical expertise, access to cutting-edge AI tools, and the ability to shape societal narratives through AI; and the MANY, the vast majority of users who rely on AI systems without fully understanding how they work or questioning their outputs.

Defining the problem: The divide between the FEW and the MANY

At its core, the divide between the FEW and the MANY stems from disparities in knowledge, resources, and intent. The FEW consist of technologists, corporate leaders, and policymakers who understand the intricacies of AI systems and have the means to deploy them strategically. They operate within a realm of complexity that is inaccessible to most people, leveraging AI to optimise profits, sway public opinion, or consolidate authority. For example, social media platforms use AI algorithms to curate content tailored to individual preferences, subtly influencing user behaviour and reinforcing echo chambers. Similarly, financial institutions employ AI to predict market trends, giving them an edge over ordinary investors.
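The preference-reinforcing dynamic described above can be made concrete with a toy simulation. Everything here is an illustrative assumption (the topics, the preference model, the update rule), not a description of any real platform's recommender:

```python
# Minimal sketch of a preference-reinforcing recommendation loop.
# Two topics, a user preference distribution, and a recommender that
# always serves whichever topic the user already prefers most.

def recommend(preferences):
    """Serve the topic the user currently prefers most."""
    return max(preferences, key=preferences.get)

def update(preferences, topic, rate=0.1):
    """Consuming recommended content nudges preferences further toward it."""
    new = dict(preferences)
    new[topic] += rate
    total = sum(new.values())
    return {t: v / total for t, v in new.items()}  # renormalise to sum to 1

# A mild initial lean: 55% vs 45%.
prefs = {"politics_a": 0.55, "politics_b": 0.45}
for _ in range(30):
    prefs = update(prefs, recommend(prefs))
```

Even a mild initial lean compounds: because the system only serves what the user already prefers, the minority topic's share shrinks geometrically toward zero, which is the echo-chamber effect in miniature.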

On the other hand, the MANY represent the general population – individuals who interact with AI systems daily, but lack the technical literacy to comprehend their inner workings. These users often trust AI-generated recommendations implicitly, whether it’s accepting search engine results as objective truth or relying on virtual assistants for decision-making. This blind reliance stems from a combination of convenience and ignorance; many people are content to delegate cognitive labour to machines, believing that AI will lead them to safer, more efficient outcomes.

The danger lies in the asymmetry of power. While the FEW shape the rules of engagement, the MANY remain unaware of the extent to which their choices are being manipulated. Consider the rise of deepfake technology, which allows malicious actors to fabricate convincing audiovisual content. Without robust safeguards, unsuspecting users may fall prey to misinformation campaigns orchestrated by those with ulterior motives. In this way, the divide perpetuates a cycle of dependency, where the MANY grow increasingly reliant on AI while ceding their autonomy to the FEW.

Analysing the risks: Short-term effects and long-term impacts

The consequences of this divide manifest in both immediate and enduring ways, posing significant threats to individual agency and societal stability. In the short term, the erosion of critical thinking skills stands out as a pressing concern. As AI assumes responsibility for tasks ranging from information retrieval to problem-solving, users may become complacent, outsourcing intellectual effort to machines. This phenomenon, often referred to as “cognitive offloading”, diminishes the capacity for independent reasoning. Over time, individuals may struggle to evaluate the validity of AI-generated outputs, leaving them vulnerable to manipulation.

Another short-term risk is the amplification of existing inequalities. The FEW, armed with superior access to AI technologies, can exploit these tools to widen economic and social divides. For instance, companies that harness AI for automation can reduce labour costs, displacing workers whose jobs are rendered obsolete. Meanwhile, those who lack the skills to adapt to an AI-driven economy face diminished prospects for upward mobility. This disparity exacerbates tensions between different socioeconomic groups, fuelling resentment and mistrust.

In the long term, the consolidation of power among the FEW poses even graver dangers. As AI becomes more integrated into governance, healthcare, education, and other critical sectors, the FEW gain unprecedented leverage over societal structures. Imagine a future where AI algorithms dictate resource allocation, determine eligibility for social services, or even influence electoral outcomes. Such scenarios raise troubling questions about accountability and transparency. Who ensures that these systems operate fairly? How do we prevent bias from creeping into algorithmic decision-making? Without adequate oversight, the FEW could entrench their dominance, creating a dystopian reality where the MANY are reduced to passive subjects of AI-mediated control.

Perhaps the most insidious long-term impact is the normalisation of passivity. As AI assumes greater responsibility for navigating life’s complexities, individuals may come to view critical thinking as unnecessary or burdensome. This cultural shift undermines the very foundation of democracy, which relies on informed citizens capable of making reasoned judgments. If the MANY lose the ability — or willingness — to question authority, they risk surrendering not only their autonomy, but also their collective voice.

Proposing solutions: Bridging the gap between the FEW and the MANY

Addressing the divide between the FEW and the MANY requires a multifaceted approach that spans education, policy, and technology. At the heart of any solution lies the imperative to empower individuals, fostering a culture of curiosity and scepticism that resists blind conformity to AI-driven narratives.

One crucial step is reforming education systems to prioritise digital literacy and critical thinking. Schools must equip students with the skills needed to navigate an AI-saturated world, teaching them how to evaluate sources, recognise biases, and interrogate assumptions. For example, courses on data science and ethics could introduce learners to the principles underlying AI algorithms, demystifying their operation and highlighting potential pitfalls. Beyond formal education, public awareness campaigns can promote lifelong learning, encouraging adults to stay informed about emerging technologies and their societal implications.

Policy interventions also play a vital role in mitigating the risks associated with AI. Governments should establish regulatory frameworks that hold AI developers accountable for the societal impacts of their creations. Transparency requirements, such as mandatory audits of algorithmic decision-making processes, can help ensure fairness and prevent abuse. Additionally, antitrust measures may be necessary to curb monopolistic practices among tech giants, fostering competition and innovation while safeguarding consumer interests.

Technological solutions offer another avenue for bridging the gap. Developers can design AI systems with built-in safeguards that promote user empowerment. For instance, explainable AI techniques enable users to understand the rationale behind machine-generated recommendations, fostering trust and enabling informed decision-making. Similarly, open-source initiatives can democratise access to AI tools, allowing grassroots innovators to contribute to the development of ethical and inclusive technologies.
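As a rough sketch of what "explainable" output can look like, consider a toy linear scoring model whose prediction can be decomposed into per-feature contributions. The feature names and weights below are invented for illustration; real explainability methods such as SHAP or LIME handle far more complex models, but the principle of attributing a decision to its inputs is the same:

```python
# Illustrative only: a toy linear scoring model with a built-in explanation.
# WEIGHTS, BIAS, and the applicant's features are assumed values, not a real system.

WEIGHTS = {"income": 0.4, "debt": -0.6, "years_employed": 0.2}
BIAS = 0.1

def score(applicant):
    """Overall score: bias plus the weighted sum of features."""
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant):
    """Each feature's signed contribution to the score, ranked by
    absolute impact, so a user can see *why* a decision was made."""
    contribs = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sorted(contribs.items(), key=lambda kv: -abs(kv[1]))

applicant = {"income": 0.8, "debt": 0.5, "years_employed": 0.3}
print(score(applicant))    # the opaque number a user would normally see
print(explain(applicant))  # the ranked reasons behind it
```

The point is the contrast between the two outputs: the bare score tells the user nothing, while the decomposition shows which inputs drove the decision and in which direction.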

Case studies and examples: Lessons from real-world scenarios

To illustrate these concepts, consider the case of Estonia, a country renowned for its embrace of digital governance. Estonia has built a comprehensive e-governance infrastructure, including internet voting and blockchain-backed integrity safeguards for healthcare records. By granting citizens direct access to their personal data and enabling them to track how it is used, Estonia demonstrates how technology can be harnessed to empower rather than alienate. This model underscores the importance of designing AI systems with user-centric principles in mind.

Conversely, the Cambridge Analytica scandal highlights the dangers of unchecked AI manipulation. By exploiting Facebook’s data-sharing policies, the firm harvested millions of users’ private information to craft targeted propaganda during elections. This episode reveals the vulnerabilities inherent in centralised AI ecosystems and underscores the need for stringent privacy protections.

Recommendations for individuals: Taking control in an AI-driven world

For everyday users, engaging responsibly with AI begins with cultivating a healthy dose of scepticism. Before accepting an AI-generated recommendation, ask yourself: What evidence supports this claim? Could alternative perspectives exist? Additionally, familiarise yourself with basic AI concepts, such as training data and bias, to better assess the reliability of machine-generated insights. Finally, advocate for ethical AI practices by supporting organisations and initiatives committed to promoting transparency and accountability.
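The notion of training-data bias mentioned above can be illustrated with a deliberately tiny example. The labels and counts are assumptions chosen only to show the mechanism: a model trained on an unrepresentative sample faithfully reproduces the sample's skew, not reality:

```python
# Illustrative sketch (all data invented): how a skewed training sample
# biases even the simplest possible "model", a majority-label predictor.

population = ["approve"] * 48 + ["deny"] * 52     # real-world majority: deny
biased_sample = ["approve"] * 48 + ["deny"] * 12  # collection over-represents "approve"

def majority_label(data):
    """Predict the most common label seen in the data."""
    return max(set(data), key=data.count)

print("population majority:", majority_label(population))                # deny
print("model trained on biased sample:", majority_label(biased_sample))  # approve
```

The disagreement between the two outputs is the whole lesson: the model is not "wrong" about its training data, but its training data was wrong about the world.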

This is a call to action: the divide between the FEW and the MANY represents one of the defining challenges of our time. If left unaddressed, it threatens to undermine the very foundations of democratic society, eroding individual autonomy and concentrating power in the hands of a privileged few. However, by taking proactive steps — reforming education, enacting sound policies, and designing ethical technologies — we can bridge this gap and ensure that AI serves as a force for good. The choice is ours: Will we allow ourselves to be led blindly into a future dictated by machines, or will we reclaim our agency and demand a world where technology empowers rather than controls? The time to act is now.

Yuri Koszarycz

Yuri Koszarycz was a Senior Lecturer in the School of Theology, Brisbane Campus, Australian Catholic University. He has degrees in philosophy, theology and education and lectured in bioethics, ethics and church history. He has now retired.