The Real Promises and Limits of Generative AI

Mar 16, 2025

Generative AI is often described as a revolutionary force, transforming industries and reshaping human creativity, work, and learning. As someone who engages critically with these claims daily, I recognize that my perspective might be mistaken for skepticism or outright opposition. That is not the case. This post is not an argument against generative AI but a clear-eyed examination of its real capabilities — separating genuine progress from overblown expectations. Below, I analyze nine of the most common positive claims about generative AI, assessing what holds up under scrutiny and what is exaggerated.

[Image: Elise Racine & The Bigger Picture / Better Images of AI / Web of Influence I / CC-BY 4.0]

1. Enhanced Productivity and Efficiency

The claim: Generative AI automates repetitive tasks, accelerates workflows, and boosts efficiency across industries.

✅ Mostly accurate, but with trade-offs

  • AI does help automate repetitive tasks (email drafting, document summarization, code suggestions, etc.), but it often lacks nuance and reliability, requiring human oversight.
  • In coding, AI-assisted tools can speed up development, but they also produce errors and security vulnerabilities that developers must fix.
  • AI-generated summaries can be inaccurate or miss key context, leading to over-reliance on flawed outputs.
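To make the coding point concrete, here is a deliberately simple (and entirely hypothetical) sketch of the kind of vulnerability an AI coding assistant can introduce: a SQL query built by string interpolation, next to the parameterized query a reviewing developer would write instead.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# A pattern assistants sometimes suggest: building SQL by string
# interpolation, which is vulnerable to SQL injection.
def find_user_unsafe(name):
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'"
    ).fetchall()

# The human-reviewed fix: a parameterized query, where user input
# can never change the structure of the SQL statement.
def find_user_safe(name):
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

# Crafted input changes the meaning of the interpolated query:
print(find_user_unsafe("' OR '1'='1"))  # returns every row
print(find_user_safe("' OR '1'='1"))    # returns no rows
```

The fix is one line, but spotting that it is needed is exactly the human oversight the bullet points describe.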

Realistic framing: AI augments productivity but is not a replacement for human expertise, and it requires constant verification.

2. “Democratization” of Creativity

The claim: AI lowers barriers to artistic expression, allowing anyone to create professional-quality content without specialized skills.

❌ False. More like centralization of creativity

  • Generative AI shifts creative power from individuals to companies that control AI models.
  • Artists and designers are being undercut by AI-generated content that is based on human-created work but often provided at a lower cost (or free) without compensating original creators.
  • AI-generated images, music, and writing are mostly derivative, repeating patterns from existing works rather than creating truly original pieces.

Note: it helps if you think about “ordering” content, instead of “creating” content. Your prompt could just as well be an email to a human artist.

Realistic framing: AI makes content more convenient to generate, but it does not democratize creativity — it commodifies it under centralized control. If you want real democratization of arts and creativity, crayons and tax-supported art schools for all children are a much better bet.

3. Personalized and Scalable Education

The claim: AI tutors provide tailored learning experiences, adapting to individual needs and making education more accessible.

⚠️ Partially true, but with risks

  • AI-powered tutoring can adapt to different students’ needs, but current models still struggle with accuracy, bias, and deeper understanding beyond surface-level pattern recognition.
  • AI-generated educational content risks being unreliable, as models often produce misleading or outright incorrect information.
  • Over-reliance on AI tutors could reduce critical thinking skills if students simply accept AI-generated answers without questioning them.

In fact, generative AI in education presents a paradox: those who would benefit the most — students and laypeople — lack the expertise to discern AI’s hallucinations and factual errors, making them vulnerable to misinformation. Meanwhile, experts (who might need AI the least) can use it effectively to streamline work, generate ideas, and outline texts. Instead of democratizing knowledge, AI risks widening the gap between those who can critically assess its output and those who cannot, reinforcing existing inequalities rather than bridging them.

Realistic framing: AI can assist in education but must be supplemented by human educators to ensure accuracy and depth. It is not a replacement for real teaching.

4. Improved Healthcare Support

The claim: AI revolutionizes healthcare by assisting in diagnostics, medical research, and patient management, reducing human error.

⚠️ Partially true, but overhyped

  • AI can assist in medical documentation (e.g., summarizing patient notes), but it also makes mistakes that could have serious consequences if not reviewed carefully.
  • AI-generated drug discovery insights show promise but still require years of validation and human scientific expertise — AI does not “discover” cures independently.
  • AI cannot replace doctors, but it can act as a supplementary tool for professionals who know how to interpret its outputs. One study, a systematic analysis of medical hallucinations in large language models (LLMs), revealed that state-of-the-art models like GPT-4o, Claude 3.5, and Gemini 2.0 generate high-risk hallucinations in clinical decision-making. The study found that 91.8% of doctors encountered AI hallucinations in medical applications, with 84.7% believing they could impact patient health.

Realistic framing: AI assists in healthcare tasks but is not a revolutionary medical solution. It requires rigorous human oversight to be useful and safe.

5. Smarter Customer Support and Assistance

The claim: AI chatbots and virtual assistants offer instant, accurate responses, improving customer service and reducing costs.

✅ Mostly accurate, but limited in scope

  • AI chatbots do reduce response times and provide automated help for basic inquiries.
  • However, AI struggles with complex or emotionally nuanced interactions, often frustrating customers rather than solving their issues.
  • Many companies use AI to cut human support jobs, even when the AI is not actually good enough to replace them.

Realistic framing: AI improves efficiency for basic tasks but cannot fully replace human customer service without sacrificing quality.

6. Enhanced Accessibility

The claim: AI-powered speech recognition, translation, and other assistive technologies empower people with disabilities and bridge language gaps.

✅ Genuinely beneficial, but not without issues

  • AI-powered speech-to-text and text-to-speech have made digital content more accessible for people with disabilities.
  • Real-time translation is improving, though errors still occur, and AI cannot always capture cultural nuances.
  • AI is not universally available — it often requires internet access, costly subscriptions, or relies on Big Tech platforms that could restrict access or monetize these tools.

Realistic framing: AI genuinely enhances accessibility, but access to AI itself is often controlled by corporations, limiting its full democratic potential.

7. Augmented Scientific Research and Discovery

The claim: Generative AI accelerates breakthroughs by analyzing vast datasets, generating hypotheses, and aiding complex simulations.

⚠️ Partially true, but overestimated

  • AI can analyze large datasets to suggest patterns or hypotheses, but it does not “think” like a scientist — it just finds correlations in existing data. (And, by the way, this is exactly what classical machine learning and data analysis are good at; GPT-based generative models carry a risk of hallucination even in data analysis.)
  • AI-generated simulations (e.g., for climate models or protein folding) are useful tools but still require human interpretation.
  • Scientific discovery is about new ideas and critical reasoning, which AI does not possess on its own. Scientists can, however, get assistance from genAI tools.

Realistic framing: AI is a powerful research tool, but not a researcher — it still needs human scientists to verify and interpret findings.

8. Legal and Administrative Document Processing

The claim: AI streamlines contract analysis, regulatory compliance, and documentation, reducing legal bottlenecks.

✅ Mostly accurate, but not a full replacement

  • AI does help with contract analysis, summarization, and legal research, but it can misunderstand legal nuances.
  • AI-generated legal documents must be reviewed by professionals to avoid serious legal risks.
  • AI cannot replace lawyers, but it can speed up routine tasks like contract review and compliance tracking.

Realistic framing: AI is a useful assistant in legal work, but not a substitute for legal expertise.

9. Ethical AI and Bias Mitigation

The claim: AI can detect and eliminate biases, making automated systems fairer, more transparent, and more ethical.

⚠️ Mixed truth — AI can both help and worsen bias

  • AI can help detect bias in datasets, but it also inherits and amplifies biases from the data it’s trained on.
  • AI fairness tools are still flawed, and many AI systems reinforce systemic biases rather than eliminating them.
  • Companies often market AI as ethical while deploying biased models that harm marginalized communities.
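As a concrete illustration of what “detecting bias” can mean in practice, here is a minimal sketch (using made-up data and a hand-rolled function, not any real fairness library) of a demographic parity check: the gap in positive-outcome rates between groups in a set of automated decisions.

```python
# Demographic parity difference: the spread between groups'
# positive-outcome rates. 0.0 means all groups are treated
# identically on this metric; larger values indicate disparity.
def demographic_parity_diff(outcomes, groups):
    """outcomes: parallel list of 0/1 decisions; groups: group labels."""
    rates = {}
    for g in set(groups):
        decisions = [o for o, gr in zip(outcomes, groups) if gr == g]
        rates[g] = sum(decisions) / len(decisions)
    vals = list(rates.values())
    return max(vals) - min(vals)

# Hypothetical decisions: group "a" approved 3/4, group "b" approved 1/4.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_diff(outcomes, groups))  # 0.5
```

Note that the metric only surfaces the disparity; deciding whether it is acceptable, and what to do about it, remains a human judgment call — which is the point of this section.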

Realistic framing: AI can be used to identify bias, but humans are required to actively work to make it fair — it does not “solve” bias on its own.

Conclusion: Stripping Away the Hype

Here’s a realistic summary of AI’s potential, as I see it:

✅ Where AI is actually useful:

  • Automating repetitive tasks (writing assistance, coding suggestions, document summaries). Note: it then becomes another question what happens to deep knowledge and craft if you never have to deal with the “repetitiveness”. (Schön’s reflection-in-action and reflection-on-action come to mind. What happens if this gets stripped away in the name of efficiency?)
  • Accessibility improvements (speech-to-text, translation, assistive tools).
  • Customer service for basic inquiries.
  • Scientific research acceleration (data analysis, simulations, maybe help with idea generation).

⚠️ Where AI is overhyped — or plain wrong:

  • “Democratizing” creativity (it commodifies rather than empowers).
  • “Personalized learning” (useful but prone to errors and biases).
  • “Revolutionizing healthcare” (it assists but doesn’t replace medical professionals).
  • “Bias-free AI” (AI needs active human oversight to be fair).

❌ Where AI is misleadingly marketed:

  • “Replacing skilled jobs” (often leads to job losses while providing lower-quality outputs; it’s not a replacement so much as a degradation).
  • “Independent scientific breakthroughs” (AI finds patterns but doesn’t innovate on its own).
  • “Understanding human emotions” (AI mimics responses but doesn’t truly understand or empathize).

So, while generative AI does offer meaningful improvements in some areas, the utopian claims of full automation, “democratization,” or replacing skilled professionals are misleading at best and harmful at worst. The reality is that today’s generative AI is a set of powerful but flawed techniques: they can extend human capability in some areas, but they do not replace human expertise, judgment, or creativity.

If you like (or dislike!) this view, and need assistance in developing AI policies, human-centered AI-powered services, or just want to talk more about this, don’t hesitate to reach out: Look me up on LinkedIn or comment below.

Written by Pontus Wärnestål

Designer at Ambition Group. Deputy Professor (PhD) at Halmstad University (Sweden). Author of "Designing AI-Powered Services". I ride my bike to work.