Malaysian Government Eyes Mandatory “AI-Generated” Labels for Online Content
The Malaysian government is advancing discussions to mandate the labelling of AI-generated content under the upcoming Online Safety Act 2024. The measure is intended to curb the rising misuse of AI, including scams, defamation, and identity impersonation, which is particularly prevalent on social media platforms.
Combating AI Misuse with New Legislation
Communications Minister Datuk Fahmi Fadzil confirmed that the government is considering the requirement, with the Act expected to come into force by the end of the year. Speaking at a press conference following the “YOU & AI: MEET@BANGSAR” programme organised by the Institute of Public Relations Malaysia (IPRM), Fahmi emphasised the necessity of such a measure. “The move is crucial to address the misuse of AI, such as scams, defamation and identity impersonation, especially on social media platforms,” he stated.
Call for Proactive Platform Responsibility
Beyond governmental mandates, Fahmi also called on digital platforms to take proactive steps in identifying and labelling AI-generated content. He noted that some social media platforms have already begun voluntary labelling initiatives, and suggested that these efforts could be expanded regionally through cooperation among ASEAN countries.
Global Discussions on AI Regulation
Addressing concerns about the spread of deceptive AI-generated videos and images, Fahmi acknowledged that no satisfactory regulatory guidelines yet exist at the global level. He noted, however, that active discussions are under way at international forums, including the United Nations (UN) and the International Telecommunication Union (ITU).
“I recently attended the AI for Good Summit in Geneva, Switzerland. Indeed, at both the UN and ITU levels, there is ongoing debate over who should be responsible for AI regulation,” Fahmi remarked. He further stressed that while national bodies such as Parliament and the Digital Ministry must lead in this domain, every ministry has a role in evaluating the use of artificial intelligence within its own scope.