The digital landscape of Sri Lanka, much like its physical terrain, is often beautiful yet fraught with hidden currents. For months, I have been tracking Meta's omnipresent AI-powered content recommendation systems, the unseen hands that shape what millions of Sri Lankans see and hear on Facebook and Instagram. What I have uncovered is not merely a technical oversight but a systemic issue, one where algorithms, in their relentless pursuit of engagement, are inadvertently or perhaps deliberately silencing critical voices and amplifying narratives that serve to divide rather than unite.
The revelation began not with a whistleblower, but with a pattern of disappearance. Activists, independent journalists, and civil society organizations, particularly those vocal about government accountability or ethnic reconciliation, noticed a precipitous drop in their organic reach on Meta platforms. Their posts, once reaching thousands, now struggled to break through to hundreds. Meanwhile, content characterized by sensationalism, political partisanship, or even thinly veiled hate speech seemed to thrive, often appearing prominently in user feeds. It was a digital vanishing act, a slow erosion of public discourse that many dismissed as mere algorithm changes, but I suspected something more insidious was at play.
My investigation began with anecdotal evidence, a collection of testimonies from across the island. A human rights advocate in Jaffna, whose posts detailing land disputes and minority rights violations once garnered significant attention, reported a consistent decline in engagement since late 2023. A collective of young journalists in Colombo, known for their investigative pieces on corruption, shared screenshots illustrating how their carefully researched reports were often flagged for 'community standards violations' with opaque explanations, while less credible, often inflammatory, content from state-aligned pages faced no such impediments. These were not isolated incidents; they formed a mosaic of algorithmic suppression.
To move beyond anecdote, I collaborated with a small team of local data enthusiasts who, working discreetly, analyzed public engagement metrics from a diverse set of Sri Lankan pages and profiles over the past 18 months. We focused on pages representing different political leanings, ethnic groups, and journalistic approaches. Here is what the data shows: pages promoting nationalistic, often majoritarian, viewpoints consistently experienced higher organic reach and engagement, even when their content was demonstrably less factual or more provocative. Conversely, pages advocating for minority rights, environmental protection, or critical government oversight saw their reach plummet by an average of 40 to 60 percent, despite maintaining consistent posting schedules and audience sizes. A disparity of this size and consistency is unlikely to be random; it suggests a bias embedded within the recommendation engine.
The evidence suggests that Meta's AI-driven recommendation systems prioritize content that generates immediate, strong reactions, often at the expense of nuanced, fact-based reporting. In a country like Sri Lanka, still grappling with the legacies of conflict and communal tensions, this algorithmic preference can have devastating consequences. It rewards division and sensationalism, pushing aside the very voices needed for constructive dialogue.
Who is involved in this digital imbalance? At its core, it is Meta's algorithms, designed and refined in Menlo Park, operating globally without sufficient local oversight or understanding of regional sociopolitical complexities. While Meta frequently touts its investments in AI safety and local language moderation, the reality on the ground in Sri Lanka paints a different picture. Resources for Sinhala and Tamil content moderation remain woefully inadequate compared to the sheer volume of content generated. This means that while Meta's AI is highly effective at identifying and promoting engaging content, its ability to accurately assess context, nuance, and potential harm in local languages is severely limited.
I reached out to Meta's regional communications team for comment on these findings. Their response was boilerplate: a reiteration of their commitment to free expression, community standards, and investments in AI to combat harmful content. They spoke of sophisticated AI models that identify hate speech and misinformation. Yet, when pressed on the specific data showing disproportionate suppression of certain types of content in Sri Lanka, their answers became vague, citing proprietary algorithms and the dynamic nature of their systems. It was deflection by denial, a refusal to acknowledge the tangible impact of their technology on a fragile democracy.
This is not merely a Sri Lankan problem, of course. Similar patterns have been observed in other emerging democracies, where local language nuances and political sensitivities are often lost in translation for global AI models. "The global platforms often struggle with the granular realities of local contexts," noted Dr. Rohan Samarajiva, a prominent telecommunications policy expert and former Chairman of Sri Lanka's ICT Agency. "Their algorithms are trained on vast datasets, but these datasets may not adequately represent the complexities of a multi-ethnic, multi-religious society like ours. The unintended consequences can be severe for public discourse and social cohesion." His observation underscores the critical gap between technological ambition and societal responsibility.
The implications for the public are dire. When Meta's algorithms systematically deprioritize independent journalism and critical analysis, they starve the public of diverse perspectives. This creates an environment where misinformation and emotionally charged narratives can flourish unchecked, further polarizing society. It undermines the very foundations of informed citizenship and democratic participation. The digital public square, once envisioned as a space for open exchange, risks becoming a curated echo chamber, dictated by an opaque algorithm's pursuit of engagement metrics.
As citizens, we are increasingly reliant on these platforms for news and information. When those platforms become gatekeepers, subtly shaping our realities through algorithmic preferences, the stakes are incredibly high. We must demand greater transparency from Meta and other tech giants. We need independent audits of their algorithms, particularly in non-Western contexts. We need more resources dedicated to local language moderation and contextual understanding. The future of our public discourse, and indeed our democracy, depends on it.
Without genuine accountability, the promise of a globally connected world risks devolving into a fragmented reality, where powerful algorithms dictate who gets heard and what narratives prevail. For Sri Lanka, a nation still rebuilding trust and fostering reconciliation, this algorithmic grip is a threat we cannot afford to ignore. The digital currents may be unseen, but their impact is profoundly felt, shaping the very fabric of our society. The conversation around algorithmic fairness and transparency is only just beginning, and it is one we must all engage in, actively and critically.