AI and the Temptation of "Educational Populism"
Calls for "mindful AI use" won't be enough

Thank you for reading The Coffee Parliament, a Substack on Thai politics and policy. Usually I analyze current events and I try not to stray too much from my lane. I wanted to take some time today, however, to do some thinking on paper about potential future policy directions on AI. But this comes with a necessary disclaimer that I’m not an expert on AI, and please feel free to skip this piece if you would prefer to stick with my usual areas of expertise.
Thailand’s Ministry of Digital Economy and Society (DES) recently announced a partnership with Google, where Thai students above the age of 18 (i.e. university students) will be able to access Google AI Pro for free.
This isn’t exactly unique to Thailand: this trial of Google AI Pro is also available in several other countries. The Bhumjaithai government is understandably promoting this as an important government achievement, stating in a party post that “Chaichanok Chidchob, the Minister for Digital Economy and Society, led the DES ministry team in partnering with Google Cloud Thailand.” They also said that the ministry will move forward with “expanding access to five million more Thais around the world.”
Seeing this made me wonder about the likelihood that AI will begin featuring more prominently in next year’s election campaign. The last general election was held in May 2023 — only a few months after the launch of ChatGPT, when perhaps it hadn’t quite made its way into the national consciousness to the extent that it has now. I imagine, however, that things will be very different this time around. I wanted to use this post to reflect on what an AI-oriented “educational populism,” as I will call it, could look like. (I use the term “populism” as it is commonly used in Thailand. The Thai term for populism is prachaniyom, literally “popular with the people,” which refers to the direct distribution of benefits to ordinary citizens.)
Cognitive Offloading in an Underperforming Education System
Chaichanok said as part of his statement:
It is important to use AI mindfully. If we use AI as a substitute for thinking, or let it work for us, then the loser will be the user. But if we use it mindfully, to effectively upgrade our knowledge, our thinking, our projects and our research, then it will be a tool that will help develop your future and the country’s future.
The government, of course, is aware of the potential negative effects of providing all university students with increasingly powerful AI tools. But are they applying any brakes beyond beseeching students to use AI mindfully?
When I teach undergraduates, my discussion section syllabus always contains a brief statement on responsible AI use, and I start off the semester with a short presentation on AI. I show a picture of math teachers protesting calculator use in the 1960s to make the point that AI is a tool like any other and that as a level-headed instructor who recognizes that times have changed, I have no interest in completely outlawing them. Like Chaichanok, I take pains to emphasize that you can use AI not as a substitute for writing and critical thinking, but as a tool to help with research and brainstorming.
And mindful use of AI is undoubtedly possible. I would actually encourage Thai students with this new access to Google AI Pro to watch this video from TDRI President Dr. Somkiat Tangkitvanich on how to use AI in ways that make us smarter rather than dumber. (He suggests asking these chatbots to pose criticisms and alternative takes on arguments we are making, for example).1
But I think we also have to be real. Everyone who has been involved in university teaching since 2022 knows that there is an epidemic of AI use, and it is certainly not mindful AI use. We have all seen students who are clearly having ChatGPT do most, if not all, of the thinking and writing involved in some assignments. Yes, sometimes it’s very easy to tell when something is LLM-written,2 but at best it’s almost always a guessing game because you can never really be sure. And there are countless horror stories out there of students who have become entirely dependent on AI, leading to what researchers call “cognitive offloading.”3
This is especially concerning because basic skills in Thailand are already lagging even in the pre-AI world. As I summarized here in Fulcrum:
Thai education, as it stands, is lagging behind (Table 1). Standardised tests administered by the Programme for International Student Assessment (PISA) demonstrated a consistent downward trend in the educational attainment of Thai students over the past decade. In 2022, PISA scores revealed that Thai students ranked 58th for maths and science, and 64th for reading. The PISA study assessed 81 OECD countries that participated in the examinations. A new test of creative thinking ability also showed that Thai students scored significantly lower than the OECD average, and also lower than neighbours Vietnam and Malaysia. Meanwhile, another study published by the World Bank in 2023 found that 64.7 per cent of Thais scored below threshold levels of foundational reading literacy while 74.1 per cent underperformed in foundational digital skills.
So the question is this: what happens to the abilities of Thai students to read and reason mathematically, already subpar compared to international peers, when it is now easier than ever to offload all of that cognitive work to an LLM?
Take a moment to think about the benefits listed with Google AI Pro: “The Pro version handles a much larger amount of data and multiple files in a single prompt (up to 1,500 pages of text/documents), making it ideal for summarizing large textbooks or research papers.” Or: “Gemini in Workspace: Use AI assistance directly within apps students use every day, including Gmail, Docs, Sheets, and Slides.” If we want to produce students who are still able to read and summarize materials, or indeed possess basic skills like writing emails, it is more crucial than ever to be intentional about guiding students on “smart” uses of AI.
Responsibly Promoting AI in Education
It’s important to note that we are seeing positive efforts to integrate AI into Thai education. This article in the Bangkok Post, for example, discusses the Bangkok Metropolitan Administration’s integration of AI learning tools to help students develop English proficiency. Meanwhile, Bangkok’s teachers are taking a “Digital Citizen Plus” curriculum aimed at helping teachers guide students in responsible technology use.
Unfortunately, however, I wonder if there will be such restraint from politicians. The temptation will instead be in the other direction: to build political popularity by making access to more powerful AI tools even easier for ever-larger segments of the Thai population, with little regard to its potential harms. It’s the current flashy thing, after all. One can easily imagine AI joining other classic populist initiatives at the next election. Educational populism is not a new phenomenon. Remember “One Tablet PC Per Child”? That was originally an initiative from the Yingluck Shinawatra government, but it was revived after the 2023 general election as well. Yet now we have an abundance of research showing the negative effects that screens have on children’s learning.4
But serious education and skill development simply does not make for sexy election slogans. It will be far more likely that we see a policy proposal along the lines of “ChatGPT Go for all students!” rather than “We will make sure students are literate!” Much more alluring for parties and politicians to ensure they are seen as promoting AI in education, while paying only lip service to responsible and constructive usage.
So far, I don’t think we’ve seen urgency on this issue from the government. The previous Minister for Education, Permpoon Chidchob, said after announcing a new partnership with Microsoft that he is not concerned that students will lack critical thinking skills because AI tools are “merely to be used as part of instruction,” such as in curriculum design. That is fair in reference to instructors’ adoption of AI, but I would not be so trusting with students. The current minister, Naruemon Pinyosinwat, did recently say that she would like the Office of the Education Council to brainstorm ways to manage AI in education. But I hope that the Education Ministry is doing more than requesting brainstorming from the Education Council. It needs to take leadership in pioneering thinking on how to use AI such that it grows, and not stunts, student capabilities.
I think that eventually many countries will have to make the bet that the key to future development is a workforce that is proficient in, but not over-reliant on, AI. This would require accomplishing two key tasks. Firstly, students must be educated on how to use AI mindfully in ways that prevent cognitive offloading. At the same time, schools must still develop students’ basic abilities in verbal and mathematical reasoning, along with creative thinking, without the use of AI.
Accomplishing both will require highly intentional approaches to instruction and curriculum design. I’m not a K-12 teacher, but if I were one right now, I would adopt something similar to the “Kumon method” — completely analog, pen and paper, no screens involved — on basic skills like reading and writing. And if I were in university leadership, I would encourage all instructors to read this MIT-published guide on AI use and think hard about its recommendations on how to design AI-proof checks of understanding (especially adding oral components to course assessments).
Perhaps I am worrying too much. The new education-focused Thai Kao Mai party recently shared a World Bank finding that only six percent of the population uses generative AI. “With our neighbors racing ahead,” they argued, “we are being left behind.” Given that we still need to ensure that less privileged students even have access to adequate learning technology to begin with, perhaps we are still some way off from having to worry about AI dependency. But I also don’t think that politicians (who are generally insufficiently proactive) will suffer from a bit more advance thinking about managing AI’s pitfalls in education.
And of course, I would never advocate for not using AI. I have certainly found uses of generative AI in my own work (but trust me when I say that you are reading a human-written Substack!). Nate Silver was right when he wrote that “you should just get in the habit of using LLMs and other AI tools. There’s likely to be some immediate productivity benefit, and you’ll develop your intuitions for the better and worse use cases.” Unfortunately, I’m not sure that we can trust students to develop that intuition without a very intentional educational system that pushes them to do so.
In fact, any student with or without access to Google AI Pro should probably be encouraged to think about mindful uses of AI. The free versions of LLMs like Gemini and ChatGPT are already plenty powerful, and Google AI Pro honestly sounds like overkill to me for most university students.
2. Although with how ChatGPT’s favorite words appear to be feeding back into human language, perhaps seeing words like “delve” too often in a paper is no longer a clear-cut sign of ChatGPT use. Also, as a big fan of the em dash, I really hate that I can no longer use them without wondering if someone might mistake my human-produced Substack articles for AI.

