
We Have Rushed into the AI Era — But Why?

14 min read · Jun 19, 2025


Generative AI is being marketed as the solution to everything from healthcare to educational crises — but in practice, it’s largely used to mass-produce meaninglessness. Behind the promises of innovation lies a technology that exploits workers, depletes culture, drains language, and centralizes power. And yet, few dare to speak out because no one wants to be seen as standing in the way of “the future.” But precisely for that reason, we must begin to question what exactly we are building — and who it’s really for.

Generative Artificial Intelligence (genAI) — the latest concept in the broader AI field — is being heralded as a revolutionary technology with enormous potential. In practice, most people associate it with services like ChatGPT, DeepSeek, Claude, Gemini, DALL·E, and Midjourney — platforms that can “create” text, images, video, and music with a level of precision that would have been unthinkable just a few years ago. A range of other capabilities are also attributed to genAI, painting a picture of a tech-driven revolution to improve society.

The Halo Effect is a psychological phenomenon in which our overall judgment of something is disproportionately influenced by a single positive trait or association. The term was coined by psychologist Edward Thorndike in the early 20th century, after he observed that a soldier’s physical appearance affected how officers rated that soldier’s intelligence, competence, and discipline — despite no supporting evidence. In the tech world, the halo effect often arises when an innovation is linked to concepts like the future, improvement, intelligence, or transformative potential — making us more likely to overlook its flaws or ignore its consequences.

GenAI is surrounded by a powerful halo effect. It’s presented as the key to future healthcare, personalized education, efficient public services, and scientific breakthroughs. Because the technology could enhance such sectors, it’s implicitly assumed that all its uses are valuable. But in practice, genAI is mainly used to mass-produce text, images, and code that are fast and cheap — but rarely meaningful or valuable in the long run.

Combined with a widespread sense of urgency — the fear of “losing the AI race” or “falling behind” — this halo effect makes critical discussion difficult. Anyone who questions the technology risks being labeled regressive, anti-tech, or indifferent to societal needs. And that’s exactly why a critical perspective is necessary.

Beneath the glossy surface lies a dark side that’s rarely discussed — a reality where ecological, social, and economic sustainability issues intersect with geopolitical and cultural concerns, all of which are worsened by today’s largely uncritical pace of technological development.

Clarote & AI4Media / Better Images of AI / Labour/Resources / CC-BY 4.0

Economic Unsustainability — A Blow to Creative Professions

Writers, artists, photographers, and musicians have long relied on creativity and copyright as the foundation of their work. Generative AI undermines this by training on data scraped from the internet — often without permission or compensation for the original creators, whose work becomes raw material for the models. GenAI services built on large language models and related generative architectures can now produce art, music, and writing that imitate existing creators’ styles — raising profound questions about copyright and fair compensation.

Legislation is lagging. In many countries, using creative works for AI training without consent remains a legal gray area. Text and data mining often involve software bots scraping content from the internet. For creators, this means their work is exploited without payment — a development that threatens the entire creative sector.

The cultural damage is already significant. In 2023 alone, AI-generated images reportedly outnumbered all photographs taken in human history. More than half of today’s internet traffic is generated by bots, and a growing portion of all online content bears the marks of machine generation. The result is a media and information environment rapidly losing its human quality, where originality is replaced by statistical probabilities.

This trend risks discouraging a new generation of creators. Why invest time, money, and years of your life in becoming a writer, illustrator, or photographer if your work can be scraped, mimicked, and replaced by an AI model that quickly dominates your market? Instead of stimulating creativity, generative AI fosters an industry where typing a simple text prompt yields results — while the entire creative process is outsourced to a commercial statistical prediction engine. The “creative” task is reduced to placing an order, and we get an increasingly AI-fied media reality — characterized by sameness, reproduction, and commercial optimization.

Over time, this may lead to what could be called a cultural model collapse: as AI models are increasingly trained on content generated by other AI models, quality, variation, and depth diminish. The issue then becomes not just economic or legal, but cultural and existential: How do we preserve human expression in a world where machines are allowed to mimic people without limits, accountability, or resistance?

🔍 If you’re a writer, you can check whether your texts have been scraped and used in Meta’s latest models. The site haveibeentrained.com also lets you check if your images were used as training data.

Social Unsustainability — The Hidden Labor Behind AI

For generative AI models to work, vast amounts of human labor are required to curate training data. Behind every well-written text and generated image lie thousands of hours of low-paid work. A global shadow labor market has emerged in silence — where people in countries like Kenya, the Philippines, and Venezuela manually label, clean, and moderate the messy scraped datasets. Wages are low, protections are non-existent, and the psychological toll of daily exposure to violent or disturbing content is severe. The success of the AI industry quite literally rests on the shoulders of these people — but the labor in these “digital sweatshops” remains invisible to most end users.

Content moderators (as well as clickworkers, annotators, or taggers) who filter inappropriate material from social media and AI systems have reported PTSD, depression, and anxiety as a result of their work. Meanwhile, tech companies raking in billions often offer little support.

📕 The book “Feeding the Machine” (Cant et al., 2024) provides an in-depth look at the invisible labor and suffering behind AI content moderation.

Catherine Breslin & Team / Better Images of AI / Chipping Silicon / CC-BY 4.0

Ecological Unsustainability — An Energy-Hungry Industry

Training and operating large-scale AI models requires enormous amounts of electricity and water. Training a single model like OpenAI’s GPT-4 consumes as much energy as thousands of households use in a year. The data centers powering these systems are energy-intensive giants — the heat generated by all that computing demands extensive water for cooling. Water scarcity is already an acute problem in many regions, yet the AI industry’s water use is poorly documented, making its full impact difficult to assess.

Add to this the carbon footprint. Training foundation models can produce emissions equivalent to several long-haul flights, and even everyday use of AI consumes significant energy. This is often overlooked in climate discussions — it’s not just training that matters, but every single AI-generated response (the “inference”). The environmental impact of generative AI is a growing problem whose full consequences we have yet to understand. The issue is that accurate data on energy use, carbon emissions, and water consumption is difficult to obtain. This lack of transparency makes it easy to downplay the problem — and as long as tech companies succeed in withholding these figures, the environmental issues remain unaddressed.

📕 Kate Crawford’s “Atlas of AI” (2021) explores the environmental toll of artificial intelligence.

Image by Comuzi / © BBC / Better Images of AI / Surveillance View B / CC-BY 4.0

Geopolitical Instability — AI as Infrastructure and Dependency

AI hype is reshaping global geopolitics faster than previous tech shifts. One major concern is Europe’s — and Sweden’s — growing dependence on American tech giants for AI services. Companies like OpenAI, Google, and Meta dominate the generative AI landscape, and their models are increasingly used in both public and private sectors. This creates vulnerabilities in a world where geopolitical tensions and sanctions can quickly shift the rules of the game.

If AI is becoming a central part of future infrastructure — as it clearly is — we must ask what risks arise from relying on critical technologies controlled by a few U.S.-based corporations. In a future crisis, access to AI services could be restricted or shaped by decisions made beyond our control. This makes AI not just a commercial issue, but one of digital sovereignty and resilience.

📕 Carl Heath explores digital resilience from a Swedish perspective in a 2025 article (in Swedish).

Linguistic Impact — Sweden’s Language Landscape

One of the more underestimated but long-term effects of generative AI is its impact on language. The largest language models are optimized for American English and often carry an implicit Anglo-centric worldview that may influence how Swedish is used and evolves. AI is becoming an everyday part of language usage — in writing tools, customer service, education, and public communication. If Sweden falls behind in developing its own language models and instead relies on English-centric systems, we risk eroding Swedish as a cultural and communicative tool.

This isn’t just about syntax or grammar — it’s about language as a vessel for thought and identity. Language shapes how we see the world. If AI is dominated by a particular cultural context, it risks altering how we express ourselves and what perspectives we bring into our reasoning.

The situation is especially serious for Sweden’s minority languages — such as Sami, Meänkieli, and Romani Chib — which already struggle for survival. These languages are virtually absent in today’s AI models and risk further marginalization unless targeted efforts are made to include them in national language model development.

If we view AI as a form of language modeling — and as part of our society’s digital operating system — then we must also recognize the importance of preserving linguistic diversity and ensuring that future AI models strengthen, rather than undermine, our own languages and cultural identities.

AI and Information Warfare — A New Battleground for Manipulation

A growing threat with generative AI is its vulnerability to information manipulation by hostile actors. Foreign propaganda and disinformation networks have already begun infiltrating language models through a strategy known as “LLM grooming.” By mass-producing fake news and using search engine optimization, these actors can make AI models repeat and validate foreign narratives — even when the models attempt to reject them. For instance, the Russian “Pravda” network operates more than 50 domains that together produced 3.6 million tailored articles in 2024 alone. These articles are scraped and absorbed into chatbot responses and AI-generated news content.

This presents a serious national security risk for Sweden and Europe. If we rely on AI models trained on manipulated data, we risk building digital ecosystems where disinformation spreads in ways that are hard to detect or counter. That’s why there’s an urgent need to develop domestic language models with robust mechanisms for identifying and resisting information attacks. Sweden must establish protocols for testing and auditing AI systems used in critical societal functions — while also investing in open, transparent AI models for Swedish and minority languages.

The Legal Vacuum — Irresponsibility as a Business Model?

Generative AI has allowed tech companies to move directly into media, culture, and entertainment — without taking on the responsibilities traditionally carried by publishers. When AI systems produce text, images, and videos, tech firms are no longer just platforms — they become, in practice, publishers. Yet unlike newspapers, TV stations, or book publishers, these companies are not held to editorial standards or legal accountability. This mirrors what we’ve already seen with social media: a systemic disconnect between content and responsibility, which is already having serious consequences.

Today, generative AI is used to create defamation, deepfake pornography, fake news, and extortion material. AI-generated child abuse images have been discovered, and individuals have been falsely accused of crimes by language models. In none of these cases have the companies behind the models been held legally accountable. Distributing abuse imagery via zip files is (rightly) illegal — but training and distributing an AI model that enables the same harm is often met with a shrug and a link to a “terms of service” agreement.

Meanwhile, users have little ability to protect themselves. Tools like haveibeentrained.com have enabled over 1.5 billion images to be opted out of training — but companies like OpenAI and Stability AI have ignored these opt-outs in newer versions of DALL·E 3 and Flux. The same goes for text: the Books3 database — used by DeepSeek and OpenAI — contains copyrighted works from thousands of authors without permission. Even The Atlantic’s tool to help writers identify their works in these datasets is of little help when there’s no mechanism for compensation or meaningful opt-out.

ℹ️ The Authors’ Guild has an information page with tips on what actions you can take as a writer if your work has been used without permission.

The result is a system where the legal, social, and societal costs are offloaded onto users, creators, victims, and governments — while profits are concentrated in the hands of a few global players. If we are serious about using AI for public good, we cannot tolerate an ecosystem where irresponsibility is baked into the business model.

Janet Turra & Cambridge Diversity Fund / https://betterimagesofai.org

Who Profits from AI?

The key question is: Who actually benefits from the current trajectory of generative AI development?

Tech companies behind generative AI are reaping enormous profits, while the societal costs — environmental, economic, cultural — are shouldered by the rest of the world. The AI industry consumes energy, depletes natural resources, exploits labor in the Global South, shifts geopolitical power balances, and threatens the future of creative professions. At the same time, transparency from these companies is virtually nonexistent.

So why are we running?

Who really benefits from today’s AI? That’s the central question we must ask ourselves. We seem to have agreed to participate in an “AI race” where the goal is faster, bigger, and more integrated technology — but to what end, and for whose sake? Did we even choose this path, or have we simply accepted it as inevitable (“adapt or die”)?

OpenAI’s motto “when in doubt, scale it up” says something about the trajectory of these companies. The fixation on scale exposes a deeper problem: who can actually afford to keep building these huge models? Follow the money. Only the largest industries can bankroll this race: fossil fuels, surveillance, and the military. They alone have the scale and the incentives to sustain an arms race in compute and data over time. Naturally, they become the favored partners and clients of the model providers. And that has consequences.

In the grand scheme, it’s not about what these models can do for you or any individual organization. It’s about what the companies providing the models (e.g. OpenAI) have to become: even more extractive, centralized, and locked into endless growth just to stay in the game.

There’s a pervasive assumption that efficiency and technological innovation always equal progress. But is that actually true? New technology may be a necessary piece of the puzzle, but it is not sufficient in itself. What happens when the pace accelerates, but the direction remains unclear? Generative AI promises much — but it also introduces new vulnerabilities, dependencies, and forms of exploitation. Instead of asking how we can run faster, perhaps we should ask: Why are we running at all — and where is this sprint taking us?

The push for more efficiency, more scale, sounds good. But it’s a trap. At some point, faster and cheaper stops being progress. It becomes consolidation. True innovation isn’t just bigger, faster, cheaper (those are 20th-century metrics). True innovation in the 21st century needs to be accountable, decentralized, and regenerative.

📕 Karen Hao’s book “Empire of AI” (2025) explores the unsustainable business models and practices pushed by foundation model providers like OpenAI.

Hanna Barakat & Archival Images of AI + AIxDESIGN / Better Images of AI / Data Mining 1 / CC-BY 4.0

A Different Path — AI for the Public Good

If we look back at the history of technological development, we see that it has typically benefited a small economic elite first — not society as a whole. It’s only when laws and policies have been enacted to distribute technology’s effects more broadly that the general population begins to benefit. This point is often forgotten when people paint rosy pictures of the Industrial Revolution. It was only after workers organized and labor laws were established — with protections like minimum wage and regulated work hours — that the positive effects we now associate with industrialization came into play. The steam engine didn’t give us paid vacation — unions, laws, and political action did.

Where are those mechanisms in today’s AI race?

This opens the door to a very different conclusion from the one offered by AI utopians who insist we must join the “AI race” or be left behind. Instead of blindly accepting generative AI development as a natural law, we could ask: How can we guide this technology to serve democracy, workers, and the broader public — rather than a handful of corporate interests?

Generative AI, like the steam engine, electricity, and digitization, is not a neutral force. It is shaped by the rules we set, the incentives we create, the actors we empower, and the priorities we choose. The halo effect Thorndike described sustains a one-sided, optimistic story of democratizing AI, even though current development more closely resembles early industrialization: increasing inequality, eliminating jobs, and concentrating power.

But it doesn’t have to be this way.

We can choose a future where AI improves working life instead of replacing it — where it helps us build a sustainable world instead of devouring resources — where it enriches our language, culture, and society instead of diluting them. But this won’t happen by itself. Technology does not automatically generate positive outcomes. It requires rules, policies, and laws.

So the real question isn’t whether or not we should have generative AI.

It’s: How should it work — and for whose benefit?

Rushing ahead blindly is not progress. Real progress would be to pause and ask: What kind of future do we actually want?

Media theorist and author Neil Postman posed seven essential questions that we should always ask when facing a new technological shift:
1. What problem does the technology claim to solve?
2. Whose problem is it?
3. What new problems will be created by solving the old one?
4. Which people and institutions will be most harmed?
5. What changes in language are being promoted?
6. What shifts in economic and political power are likely?
7. What alternative media might emerge from this technology?

A serious and thoughtful exploration of these questions, in relation to generative AI, would be a good start — before we put on our running shoes and step into the starting blocks.

Pontus Wärnestål
Researcher, designer, and author specializing in AI, design, and digital transformation — working across industry, society, infrastructure, and culture, not only with generative AI but with broader digital applications. It may seem contradictory to be critical of something I actively work with — but that’s exactly the point. I want what we build to be good: sustainable, fair, and wise. We only get one shot at doing this right. And to make that happen, we need more perspectives, not fewer. Technology is never neutral — and it’s up to us to steer it in a direction we believe in.

Special thanks to Oskar Broberg and Johan Cedmar-Brandstedt for their feedback and contributions to this text.

➡ Join me on LinkedIn.


Written by Pontus Wärnestål

Designer at Ambition Group. Deputy Professor (PhD) at Halmstad University (Sweden). Author of "Designing AI-Powered Services". I ride my bike to work.
