Rethinking the Shape of Higher Education to Meet Future Challenges
Or: How do we prepare students for a world where fluent nonsense is everywhere?
This is the written version of my talk at Chandigarh University’s Global Education Summit in September 2025 on the future of higher education.
– Pontus Wärnestål, Deputy Professor of Information Technology, Halmstad University
Introduction
My theme across this conference is rethinking the very shape of higher education to meet future challenges. And of course, that raises the question: what shape are we in? What exactly is that shape? A circle, a spiral, an arrow? And are we in good shape?
Higher education is being reshaped by forces outside the campus walls:
- Shrinking enrollments and funding precarity.
- Rising disinformation, automation, and generative AI.
- A shifting geopolitical order.
- Strains on student wellbeing and mental health.
- And in many countries, growing public skepticism about the value of higher education itself.
Generative AI is just one of many flashpoints — but it tells us something important about all these changes.
Generative AI as signal
While we use the term “AI” a lot, I want to make a distinction. There is no one thing called “AI”. It’s a whole galaxy of approaches that are often not even related. But the dominant paradigm today is statistical machine learning (ML), and its most popular form is the large language model (LLM), which is at heart a token predictor.
Today’s AI tools, i.e. transformer-based LLMs packaged as chatbots, are fluent but unreliable. Monitoring efforts such as NewsGuard’s AI Tracking Center and AIMultiple report that leading chatbots and AI models now give false information up to 45% of the time, which is significantly worse than just a year ago. Refusal rates are in practice 0%: they answer everything, even when they are wrong. This is because LLMs don’t model truth; they stitch tokens together based on probabilities.
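To make that last point concrete, here is a deliberately tiny sketch in Python (a hand-made toy probability table, nothing like a real model in scale) of what stitching tokens together based on probabilities looks like: every word is chosen only because it is likely to follow the previous one, so the output reads fluently whether or not the claim it ends up making is true.

```python
import random

# Toy next-token probability table, invented purely for illustration.
# A real LLM learns billions of such weights from text, but the generation
# loop is conceptually the same: pick the next token by probability.
next_token_probs = {
    "<start>": {"The": 0.6, "A": 0.4},
    "The": {"study": 0.7, "survey": 0.3},
    "A": {"study": 0.5, "survey": 0.5},
    "study": {"shows": 0.8, "suggests": 0.2},
    "survey": {"shows": 0.6, "suggests": 0.4},
    "shows": {"a": 1.0},
    "suggests": {"a": 1.0},
    "a": {"significant": 1.0},
    "significant": {"improvement.": 0.5, "decline.": 0.5},
}

def generate(max_tokens=10):
    token, output = "<start>", []
    for _ in range(max_tokens):
        options = next_token_probs.get(token)
        if not options:  # no learned continuation; stop
            break
        # The only criterion is likelihood; truth never enters the picture.
        token = random.choices(list(options), weights=list(options.values()))[0]
        output.append(token)
    return " ".join(output)

print(generate())  # e.g. "The study shows a significant decline."
```

Run it a few times and it will confidently report either a significant improvement or a significant decline; nothing in the mechanism prefers the version that happens to be true. That, in miniature, is why polish and refusal-free fluency are no evidence of accuracy.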
This mirrors the wider challenge: our students are now surrounded by polished but hollow information – or fluent nonsense. Tools make things look correct while stripping away rigor. That’s not literacy — it’s a simulation of literacy.
The deeper problem: neoliberal “AI literacy”
Calls for “AI literacy” sound progressive, but they narrow education into workforce training: learning to operate tools, not to think, critique, or create knowledge. Literacy has traditionally meant participation in discourse and communication. Teaching students to prompt a chatbot is not literacy; at best, it’s scaffolding. And unless the scaffolding eventually comes down, students never learn to stand on their own.
When I switched flights in New Delhi on my way here to Chandigarh, I saw a huge billboard at the airport: a group of students around a tablet, smiling, under the banner of a major AI vendor. The message was clear: we provide educational licenses for our AI model.
This is the branding of AI in higher education: not about learning, but about market share. It packages tool access as educational progress. The irony, of course, is that the billboard doesn’t show students learning together; it shows students consuming a product together. This is exactly the neoliberal reframing I worry about, where education is reduced to licensing agreements and tool fluency rather than knowledge, critique, and communication.
What pedagogy can do
In one of the Methods classes I teach at Halmstad University, my colleague and I asked students to critique an AI-generated survey of 50 questions on student health. It looked elegant, but when they tested the survey, they saw it was useless — no research question, no analyzable data. They then rewrote the survey around a clear research question, ran it again, and discovered the difference between surface fluency and rigor.
I believe that is our role: design learning where students feel the gap between appearance and knowledge, then rebuild with judgment, communication, and critique. This is bigger than LLM-based AI products; the same approach applies to climate change education, student wellbeing, global migration, disinformation, and automation in labs and professions. Higher education’s future is not about keeping pace with tools. It’s about reaffirming universities’ role as stewards of knowledge, communication, and inquiry in society. The shape of higher education must be adaptive, humane, and grounded in dialogue, not in automation or content production for its own sake.
So how do we prepare students for a world where fluent nonsense is everywhere?
When it comes to generative AI, there are two main routes in every discussion:
- Ban it: Well, bans don’t work. They are symbolic and unenforceable, and they only drive use underground (“shadow AI”).
- Embrace it (i.e. ed-tech-first adoption): Even if not everyone says it this directly, in practice this means outsourcing “pedagogy” to vendors (recall the airport billboard above).
I would argue for a third way:
- Communication-first, pedagogy-first integration, with explicit de-scaffolding and accountability to sources.
This means we prepare students by reminding them that fluency isn’t truth. Generative AI shows us that polish and confidence are no guarantee of accuracy. So our task is not to train students to be better prompt engineers — it’s to make them critical readers, writers, researchers, and citizens.
That means teaching students:
- to test claims against evidence,
- to see through elegant form to methodological rigor, and
- to value communication and inquiry over automation and output.
We should not focus on teaching students to compete with fluent nonsense. We should prepare them to diagnose it, dismantle it, and replace it with knowledge that actually means something.
Our graduates should use technology without being used by it. And they should leave us able to tell the difference between fluency and truth. This means that we as teachers (not ed tech vendors) will have to design our pedagogy and assessments carefully and accordingly.
Thank you — and I look forward to continuing this conversation with you in this global context and forum.
