One might argue that social media’s slide downhill began in 2022, when the world’s richest man, Elon Musk, bought Twitter and renamed it X.
The head of electric vehicle maker Tesla quickly began using it to amass political influence, eventually clinching a role in the newly installed administration of President Donald Trump in Washington DC.
From the very first days, his takeover raised concerns about the impact of having an influential communications tool under the control of an already-powerful man.
In an online commentary at the time, Dr Nolan Higdon, a lecturer in history and media studies at California State University, East Bay, likened the emergence of Mr Musk and others like him to a second US Gilded Age.
The original Gilded Age, in the late 1800s, was a period of economic boom characterised by wealthy industrialists, political corruption and a wide disparity in wealth across society.
Dr Higdon warned that this second Gilded Age is being marked by a new crop of rich and powerful business leaders, or oligarchs, using their vast wealth to purchase media and political influence.
It certainly did not help that shortly after the buyout, Mr Musk dissolved X’s trust and safety council, its advisory group of around 100 independent civil, human rights and other organisations. Twitter had formed the council in 2016 to address hate speech, child exploitation, suicide, self-harm and other problems on the platform.
Mr Musk also slashed the number of safety engineers at the renamed X by 80 per cent. Early last year, he made further cuts to the platform’s global trust and safety team.
In place of its content moderation system, X relied on initiatives such as “community notes”, which allow users to add context and corrections to others’ posts, including ones they believe to be misleading or false.
Dr Saifuddin Ahmed, an assistant professor at the Wee Kim Wee School of Communication and Information at Nanyang Technological University (NTU), said that while such a system has the potential to democratise fact-checking by allowing more users to participate, and hence to increase transparency, these initiatives are limited in their ability to stop misinformation from spreading.
Misinformed users could undermine the quality control of this approach to fact-checking, Dr Ahmed said.
The system could also be exploited by malicious actors who may flood the platform with incorrect information, label accurate content as false or misleading, and thereby aim to manipulate public opinion, he added.
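To see why such systems are hard to make tamper-proof, consider the deliberately simplified sketch below of community-style note rating. It is not X’s actual Community Notes algorithm – which is reported to use a more sophisticated, “bridging”-based approach built on rating histories – and every name and threshold in it is hypothetical, but it illustrates both the intended safeguard and the loophole Dr Ahmed describes.

```python
from collections import defaultdict

# A deliberately simplified, hypothetical model of community-style
# fact-checking. Each rating is (rater_viewpoint_cluster, is_helpful).
# A note is shown only when raters from *different* clusters agree it is
# helpful - the "bridging" idea such systems rely on. This is NOT X's
# actual Community Notes algorithm; names and thresholds are invented.

def note_is_shown(ratings, min_helpful=5, min_clusters=2):
    """Show a note only if enough helpful ratings come from at least
    `min_clusters` distinct viewpoint clusters."""
    helpful_by_cluster = defaultdict(int)
    for cluster, is_helpful in ratings:
        if is_helpful:
            helpful_by_cluster[cluster] += 1
    return (sum(helpful_by_cluster.values()) >= min_helpful
            and len(helpful_by_cluster) >= min_clusters)

# The safeguard: a coordinated bloc from a single cluster cannot force
# a note through, no matter how many ratings it floods in.
flood = [("cluster_a", True)] * 100
print(note_is_shown(flood))  # False - only one cluster agrees

# The loophole Dr Ahmed points to: attackers controlling accounts
# across clusters (or misclassified accounts) defeat the check.
attack = [("cluster_a", True)] * 3 + [("cluster_b", True)] * 3
print(note_is_shown(attack))  # True
```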
These fears materialised during the campaign ahead of last November’s US presidential election, as misinformation gained traction on the platform, sometimes led by posts or reposts by Mr Musk himself.
A 2024 report by non-profit group Center for Countering Digital Hate found that false or misleading claims by Mr Musk about the election had amassed two billion views on X.
Far from seeing this as a cautionary tale, it seems other big tech firms are taking a leaf out of Mr Musk’s playbook to get closer to the Trump administration.
Earlier this month, Meta’s CEO Mark Zuckerberg announced that the company would eliminate its third-party fact-checkers in the US, replacing them with a community-based fact-checking system much like X’s.
Echoing language often used by Mr Trump and his followers, Mr Zuckerberg even accused the outgoing Biden administration of “censoring” social media posts by requiring fact-checking and content moderation.
Most recently, TikTok named Mr Trump as its saviour in the US when he agreed to restore the app’s services last Sunday (Jan 19). The platform had gone dark for a brief period in America, as a law banning the app on national security grounds came into effect.
In fact, it was Mr Trump himself, during his first term as president, who had led the effort to ban TikTok over concerns China was harvesting data from US citizens. But Mr Trump had then used TikTok during last year’s presidential campaign to appeal to younger voters, attracting over 15 million followers.
Commentators told CNA TODAY that these developments among the social media platforms are concerning not only due to allegations of pro-Trump bias, but also because of the broader implications for misinformation.
They added that these shifts could have ripple effects on governments and policymaking institutions, communication professionals and individual users alike.
Dr Ahmed said: “What we are witnessing with social media platforms globally highlights several key dynamics. Most importantly, the situation with TikTok highlights the vulnerability of these platforms to geopolitical tensions.”
For one thing, the decision to ban TikTok in the US, followed by its subsequent reinstatement under President Trump’s influence, demonstrates how political pressures can shape tech policy, said Dr Ahmed.
There has been speculation that Mr Musk may acquire TikTok, which, if true, would have an effect on global political discourse, he added.
“These developments also confirm that social media companies are no longer merely tech firms, and their platforms are no longer solely about social connections,” he said.
“The real risk lies in these platforms becoming major arenas for political manoeuvring rather than spaces for political information and discussions.”
Dr Taberez Ahmed Neyazi, principal investigator at the National University of Singapore’s (NUS) Centre for Trusted Internet and Community, said that Meta’s decision to discontinue fact-checkers in the US was particularly worrying.
The move “reeks of dangerous opportunism, prioritising a ‘community-driven’ spin over accountability under the guise of freedom of speech”, he said.
Dr Taberez added: “By replacing professional fact-checkers with mechanisms like X’s community notes, Meta is gambling with truth in favour of populist appeal, a move that aligns ominously with the political zeitgeist.
“Community-driven moderation often falls short in rigour, impartiality and expertise to address complex and culturally nuanced falsehoods. Even more concerning, it creates opportunities for coordinated efforts by malicious groups to manipulate truth ratings.”
REGULATING SOCIAL MEDIA A LONG-TERM CHALLENGE
To be sure, the challenges of regulating private tech giants are not new.
Even before the most recent spate of developments, governments around the world had been grappling with the challenge of regulating and holding social media platforms accountable for the spread of misinformation.
This has, however, arguably accelerated since Mr Musk’s takeover of Twitter, with extreme right-wing content flourishing on X in the time since.
Social media users interviewed by CNA TODAY said this is something they have personally observed.
For example, Mr Edo Lio, 28, a content creator and social media strategist for brands, said he has observed X users getting more aggressive in their exchanges, and that political discourse on the platform can be “quite intense”, with “fake news” and “shady” videos easily gaining traction.
He noticed this most around the time of the US elections, he added.
This is unsurprising, said experts.
Dr Ahmed of NTU said that, after all, the fundamental reality is that social media platforms operate as profit-driven entities, not services for societal good.
This makes it particularly challenging to regulate and cooperate with them when their business models are at stake, he added.
When it comes to regulating social media platforms, policymakers here face challenges such as defining platform accountability, managing the rapid speed at which misinformation spreads and navigating cross-border jurisdictional conflicts, said Dr Ahmed.
In Singapore, for example, much misinformation seen online originates from abroad, which further complicates enforcement. The rapid advancement of technology, such as the rise of deepfakes, also poses significant regulatory challenges, he added.
According to a 2024 report by British broadcaster BBC, social media algorithms can also perpetuate the spread of misinformation.
The algorithms are designed to maximise user engagement – and they do so by promoting content that elicits strong reactions.
This could result in sensational or emotionally charged posts being prioritised on users’ feeds, regardless of their accuracy. That means misinformation spreads more rapidly, as the algorithms amplify false or misleading content.
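To make the report’s point concrete, the toy example below scores posts purely on predicted engagement. It is a sketch of the general design, not any platform’s real ranking code, and all field names and weights in it are invented.

```python
# A toy feed ranker illustrating the BBC report's point: when posts are
# scored purely on predicted engagement, emotionally charged content
# outranks accurate-but-dry content, because accuracy never enters the
# score. All field names and weights here are invented for illustration.

def engagement_score(post):
    # Shares and heated replies - the strongest "reaction" signals -
    # are weighted most heavily; nothing in this score rewards accuracy.
    return post["likes"] + 3 * post["shares"] + 2 * post["replies"]

posts = [
    {"title": "Sober, accurate explainer", "likes": 900, "shares": 40, "replies": 30},
    {"title": "Outrage-bait false claim", "likes": 500, "shares": 700, "replies": 600},
]

# Rank the feed by engagement alone, highest first.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(post["title"], engagement_score(post))
# The false but emotive post tops the feed with a score of 3,800
# versus 1,080 - the ranker is blind to truth.
```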
Faced with these challenges, and in the absence of a central global authority to oversee private tech and social media companies, some governments have tried to take matters into their own hands.
In November last year, Australia’s parliament passed a law banning children under 16 from accessing social media.
Responding to queries by Members of Parliament on Jan 7 on whether Singapore would consider similar measures, Minister of State for Digital Development and Information Rahayu Mahzam said the authorities will continue to study the effectiveness of mandating age limits for social media access.
The Government is also engaging its Australian counterparts and social media platforms in discussions, which would inform their next steps, she said.
Currently, the authorities also have measures in place to guard against online harms and falsehoods.
These include the Code of Practice for Online Safety, which requires designated social media services, including Facebook and Instagram, to put in place systems and processes to enhance online safety and mitigate the spread of harmful content on their services.
Responding to queries from CNA TODAY, a spokesperson for the Ministry of Digital Development and Information (MDDI) said on Friday (Jan 24) that the ministry is also aware of Meta’s recent policy changes in the US.
“We are monitoring the situation to understand the impact of these policy changes on online safety for Singapore users across Meta’s platforms.
“Social media platforms are our important partners in online safety. We will continue to engage and work with them to ensure the safety of Singapore users online and guard against misinformation.”
POLITICS AND POWER PLAY
The overt attempts by the owners of social media platforms to butter up the Trump administration have left observers worried that these major tech players are getting too close to the heart of US political power.
Tech billionaires – including Mr Musk, Mr Zuckerberg and Amazon founder Jeff Bezos – were given prime positions at Mr Trump’s inauguration last Monday (Jan 20). Mr Trump has also given Mr Musk a role in the new administration to cut waste in government spending.
In an unprecedented demonstration of their power and influence on US politics, TikTok CEO Chew Shou Zi, Apple CEO Tim Cook and Google CEO Sundar Pichai were also in attendance at the inauguration.
Commentators said that the implications of big tech firms and social media platforms lying outside regulatory reach are profoundly troubling.
Dr Taberez of NUS said: “Meta’s pivot (to replace its third-party fact-checkers with community notes) is a clarion call for governments, civil society and users to demand greater accountability from tech giants.
“This is not merely a shift in policy; it is a fundamental reshaping of the information ecosystem with consequences that will ripple across borders.”
The stakes are even higher for countries in Southeast Asia, like Singapore, because of the prevalence of local vernacular languages, through which many of these falsehoods thrive, Dr Taberez said.
“Without the support of platforms like Meta, fact-checkers may lose their battle against misinformation – a battle that underpins the very fabric of democracy and social cohesion in the region,” he added.
Public relations professionals told CNA TODAY that such policy shifts would likely lead to the proliferation of misinformation and disinformation corrupting the dissemination of government information. This would eventually erode trust in the authorities.
Misinformation generally refers to information that is simply untrue, while disinformation refers to false information that is spread knowingly with the goal of achieving nefarious outcomes.
Mr Ed Burleigh, head of the Public Relations and Communications Association (PRCA) Asia Pacific, said: “Just like the old adage ‘bad news travels fast’, misinformation often spreads far quicker than accurate information.”
This can be further compounded by user-driven narratives and amplified by the social media platforms’ algorithms, which can make it very difficult to correct false narratives in real time, he added.
To combat this, Ms Mayda Jutahkiti, managing director at public relations agency Elliot & Co, said that communications professionals and the industry would likely have to adapt by focusing on data-backed storytelling.
Crisis communication strategies will have to preempt and address misinformation more swiftly, while collaboration with ‘trusted’ voices on social media would take greater precedence for brands and policymakers, she added.
Policymakers already have some regulations in place for dealing with misinformation about government policies and communications, such as the Protection from Online Falsehoods and Manipulation Act, passed here in 2019.
But the effectiveness of these regulations is not without criticism.
If misinformation continues to spread unchecked across these platforms, social media users stand to lose, commentators said.
Mr Benjamin Ang, senior fellow and head of digital impact research at the S. Rajaratnam School of International Studies (RSIS), noted the volume and speed at which hate speech or harmful content is created and distributed.
He said this misinformation onslaught far outstrips the ability of any society to teach every individual about the dangers, or the ability of individuals to avoid exposure.
Mr Ang added that studies have shown how harmful this exposure is, both to individual mental health and to social cohesion.
Additionally, given the significant presence of TikTok among local youth, regulatory and governance changes to the platform could reshape the content they consume, which may influence their understanding of global sociopolitical issues, said Dr Ahmed of NTU.
Mr Wong Hin-Yan, executive vice-president, strategic planning and head of Asia Pacific intelligence at communications agency Weber Shandwick, said that as social media users choose platforms that align with their values and beliefs, they may “switch off” from a platform for a while if it is flooded with misinformation or disinformation.
A DISCERNING APPROACH TO SOCIAL MEDIA
Policy flip-flopping by big tech firms to align with the government of the day, such as their recent pandering to Mr Trump, ultimately leaves users bearing the brunt of the consequences, commentators said.
This inconsistency erodes trust, fosters misinformation and ultimately compromises the safety of the platforms for the millions of users who increasingly rely on them daily for news and other information, they added.
Data published in the 2023 Reuters Institute Digital News Report showed an acceleration in the structural shift towards more digital, mobile and platform-dominated media environments.
This is where news content is delivered and consumed via social media – including video-led platforms such as TikTok – rather than via traditional “legacy” media, such as newspapers and television.
Dr Ahmed of NTU said: “Many users may assume that these developments won’t affect them, but that’s far from the truth.
“Everyday users should be concerned about these changes, as the decentralisation of fact-checking by Meta and the increasing political influence over platforms like TikTok will have significant consequences.”
For one thing, these shifts would likely increase the influx of misinformation and worsen the already declining trust in social media platforms. Users may also struggle to discern credible information, leading to scepticism, said Dr Ahmed.
Additionally, the prevalence of algorithms that prioritise content aligned with users’ existing views will deepen echo chambers, where users are exposed only to information that reinforces their beliefs. As political influence over platform governance grows, public distrust in these platforms is likely to intensify, he added.
According to TikTok, however, its policies and algorithms did not change over the weekend when US service was restored.
Nevertheless, given these considerations, the experts stressed the importance of cultivating media literacy, and ensuring that users learn to consume information on social media with discernment.
This includes fact-checking their news and being aware of the underlying interests and agendas of different groups of people.
Dr Andrew Yee, an assistant professor at NTU’s Wee Kim Wee School of Communication and Information, said: “The old adage is relevant here: Never believe everything you read on the internet. Every time one sees something that evokes a reaction, be ultra sceptical.
“If not, we run the risk of perpetuating misinformation when we share something untrue, or risk being manipulated by different groups with different interests. We essentially become a ‘useful idiot’.”
Users can also take an active role in curating their information feeds, said Dr Ahmed of NTU.
Many users in Singapore are not proactive in blocking or unfollowing sources known to spread misinformation, he said.
Developing such habits of removing unreliable sources and intentionally following credible ones, however, can significantly improve the quality of content one consumes on social media, said Dr Ahmed.
RSIS’ Mr Ang said: “We should develop mindful habits in our social media use like we should develop mindful habits in eating. Just like we stop to wash our hands before eating, we should stop to think before scrolling and especially before sharing.
“Just as we avoid binge eating junk food, especially when bored, we should avoid binge scrolling low-quality content. We need mindful habits to prevent emotional content from triggering our ‘system one’ thinking – of fast and automatic responses – which bypasses our media literacy.”
A handful of social media users and content producers interviewed by CNA TODAY said that they are aware of the need to practise these hygiene tips – adding that they have become more alert about doing so lately.
Ms Krysta Joy D’Souza, 26, said that given the prevalence of misinformation on social media, she is now much more careful about the type of content she interacts with and the amount of time she spends on the different platforms.
The musician and content creator said she tries to cross-reference information she consumes on social media with other reliable sources online, and engages in conversations with people whom she knows are experts in that specific field.
“I also intentionally engage in conversation with people outside of social media to ensure I am not stuck in an echo chamber while scrolling through my phone,” she added.