The Vaping of the Mind: Why Singapore should act, and how

At this year’s National Day Rally, Prime Minister Lawrence Wong made an important and striking intervention in the debate on Artificial Intelligence (AI) and Singapore’s Smart Nation future.

Learning, he stressed, is not just about mastering technology but about cultivating “the human qualities that machines cannot replicate – character, values, empathy and a sense of purpose.”

This timely reminder brings to mind the late French philosopher Gilbert Simondon, now enjoying a quiet revival among technologists in the West. Simondon argued that humans and machines are not opposed but co-evolving, and that technology should be integrated into human life with ethical intent. His insights prefigure today’s AI debates: the challenge is not to resist technological advance but to steer it towards human flourishing.

 

From Google to GenAI

Over the past quarter century, our relationship with technology has shifted dramatically. In the early 2000s, Google changed how people accessed information.

By the mid-2000s, social media platforms like Facebook and YouTube created an always-on ecosystem of connection, comparison and distraction.

As Jonathan Haidt shows in The Anxious Generation, smartphones have since rewired adolescent brains, with lasting effects on mental health and social skills. When search engines first appeared, critics feared a “Google effect” in which memory and deep learning would atrophy. Those fears eventually subsided: “Search” became a cognitive aid, requiring users to interpret and synthesise. But GenAI is different. It does not merely retrieve information; it generates arguments, summaries and reasoning structures on our behalf.

By stripping away the friction of thinking, it risks dulling the very faculties we most need, as users are less compelled to evaluate sources or stitch together meaning on their own.

 

Cognitive Offloading

Evidence is accumulating. A study last year by Dr Michael Gerlich of SBS Swiss Business School found a strong negative correlation between frequent AI use and critical thinking, mediated by “cognitive offloading.”

The phrase refers to delegating mental effort to external tools; younger users in particular showed greater dependence on AI and weaker critical thinking scores. In a separate study, researchers at the Massachusetts Institute of Technology (MIT) asked groups to draft essays with ChatGPT, a search engine, or no tools at all.

Brain scans showed the ChatGPT group had the lowest neural activation and weakest recall of their own writing; those writing unaided were the most cognitively engaged.

None of this is conclusive, of course, but the evidence is beginning to coalesce in one direction. Heavy reliance on Large Language Models (LLMs) erodes practice in independent reasoning and problem-solving. A society where too many lose these faculties risks becoming brittle and vulnerable.

 

The Vaping of the Mind

Before we debate how to distinguish the right uses of AI from the wrong ones, or how to instil AI literacy in schools, serious thinking needs to be done about the type of Smart Nation we want.

There is, of course, immense promise in an AI-infused society. That said, the cumulative effects of passive use, cultural and educational, are worrying. Repeated, seemingly “benign” uses, such as students turning to ChatGPT for homework or professionals outsourcing mundane writing, could reshape and denude skills across a generation in ways not captured by prevailing education assessment frameworks.

Early reliance may create learned helplessness, dulling critical faculties before they properly develop: hence my term for it, the vaping of the mind.

And the future will be not only generative but agentic: AI systems will increasingly act with initiative, pursuing tasks across domains and anticipating needs. Such tools may reshape how we understand knowledge, relationships, and even identity.

Microsoft’s Head of AI, Mustafa Suleyman, recently warned in an influential essay of the dangers of psychosis and unhealthy attachment. He observed that interacting with seemingly conscious AI can feel like a highly compelling and very real interaction, rich in feeling and experience. Some users reportedly come to believe their AI is God, or even fall in love with it.

 

Beyond safety teams

The AI companies are unlikely to solve this. Their trust and safety units are geared toward active, aggressive, and obvious existential threats – terrorism, bioweapons, child abuse, election interference, and toxic content.

They do not consider the slow erosion of critical faculties a “safety” issue. Instead, they frame it as a matter for pedagogy or academia.

Yet Singapore has never shied from bold action on online harms. Laws such as the Foreign Interference (Countermeasures) Act (FICA) and the Protection from Online Falsehoods and Manipulation Act (POFMA), and the creation of the Online Safety Commission, show we are prepared to move early.

No regulator today demands that firms mitigate the long-term cognitive effects of their products. That responsibility will fall to government, educators and civil society.

This means making hard choices now. The introduction of GenAI in schools should be delayed until students first master foundational skills: constructing arguments, reading deeply, and writing independently.

Globally, even before GenAI, these basics have been slipping. The Organisation for Economic Co-operation and Development’s (OECD) Programme for International Student Assessment (PISA) 2022 results showed that a quarter of Danish 15-year-olds could not understand a plain text. For Singapore – whose only real resource is its people – that is a cautionary tale.

 

Building resilience

Of course, students must eventually learn to work with AI. The question is when and how. Policymakers, academics, educators, and parents must decide together how to arm students with the resilience they need against AI’s corrosive effects.

Singapore’s Character and Citizenship Education (CCE) curriculum already emphasises cyber wellness, mental health, and social-emotional well-being. It should be updated alongside the Education Ministry’s National Digital Literacy Programme, launched five years ago, which trains students to “critically gather and evaluate information.”

That skill will be severely tested in the GenAI era, when plausible but wrong outputs can erode trust in knowledge itself.

Other models offer inspiration. Finland promotes “multiliteracy”: the ability to interpret and evaluate narratives across diverse sources and formats. This recognises that the skills needed to parse an always-on information stream differ from those needed to read a single text.

Singapore already has strong foundations: our students topped the world in creative thinking in PISA 2022, demonstrating originality and problem-solving imagination. Six in 10 of the students tested achieved the two highest proficiency levels, generating original and diverse ideas across a wide range of problem-solving tasks and contexts requiring expression and imagination.

This is worth considering, even as the ever-progressive and educationally imaginative Finns do not claim to have a silver bullet.

 

The Case for Books

One clear step is to promote reading – of actual books. Encouraging deeper reading is not nostalgia but a necessity. It builds resilience not only against disinformation but also against the erosion of critical faculties.

Teaching people to read a longform text in hardcopy, without frills, is a “heritage skill” that should be preserved. It should not be allowed to become a rarefied skill at which people gawp.

Various studies and meta-analyses show that students who read on paper score significantly higher in comprehension tests than those who read the same material on screens. Researchers call this the “screen inferiority effect”: digital reading yields lower retention and understanding.

Yet the National Library Board’s (NLB) survey last year found that while most Singaporeans report reading weekly, much of this is in the form of e-books or online content. Physical book reading among both teens and adults has declined markedly since 2016.

The NLB should balance its eye-catching innovations like StoryGen (an AI-driven story generator) with its core role as steward of reading culture. More initiatives to support physical book reading should be introduced.

Measures worth considering, none of them new, include subsidising bookshop rentals and scrapping the GST on books (as Denmark did after its poor PISA reading results).

 

Conclusion

This is not neo-Luddism, but a call to recognise that building resilience requires more than coping mechanisms or fatalism about technological inevitability.

PM Wong’s words are welcome. But Singapore must go further — to preserve the ability of citizens to think, reason and judge independently in an age when machines are eager to do the thinking for us.

 

Shashi Jayakumar is Executive Director of SJKGeostrategic Advisory Pte Ltd, a political and security risk consultancy.
