NEW TECHNOLOGY, FAMILIAR THREATS
Guardrails against digital media addiction may take on new urgency with the rise of artificial intelligence chatbots. The top use of generative AI in 2025 was not writing or coding, but therapy and companionship, according to research reported in the Harvard Business Review.
Eyeing an IPO later this year, OpenAI announced it will display ads on the free and lower-tier subscription levels of ChatGPT.
Though OpenAI has stated it does not aim to maximise the time users spend on ChatGPT, the business incentive is clear. More time spent on a platform means more time to present ads. Chatbots appear poised to follow the social media business model of monetising user engagement.
The lesson from social media is that engagement-driven business models lead to predictable harm to some users. AI companies are already facing lawsuits over harm to users who developed emotional bonds with chatbots, including suits from family members of users who died by suicide. Platforms have had little incentive to fix features that are working as designed: keeping us engaged.
Kaley’s verdict and the emerging EU approach raise possibilities that Singapore might consider. Your late-night doomscrolling or chats with AI aren’t just failures of your willpower. They are the result of platform design choices.
The questions now are whether social media platforms and AI chatbots will help users make informed choices, and how effective any new safety features will prove to be.
Dr Mark Cenite is Associate Dean (Undergraduate Education) at Nanyang Technological University’s College of Humanities, Arts, and Social Sciences, and teaches media law and artificial intelligence law at the Wee Kim Wee School of Communication and Information.
