The State of Conflict and Violence in Asia 2021

How Does Social Media Affect Conflict in Asia? Expert Views

Maria Ressa, Sanjana Hattotuwa, Sarah Oh


The views expressed in this publication are the views of the authors and not necessarily those of The Asia Foundation.

Welcome and thank you for joining us. Can you first tell us a bit about yourselves and your interest in the broad field of online media and its impact on politics, society, and conflict?

Sanjana Hattotuwa: My entry into technology and peacebuilding in 2002 was very hands-on. I helped design platforms to help stakeholders in a Norwegian-led conflict mediation process based on a ceasefire agreement between the Sri Lankan government and the Liberation Tigers of Tamil Eelam (LTTE). The process was called One-Text, and I led the development of software tools used by those who were part of it.

Since 2006, I have worked with the ICT4Peace Foundation. Through this experience, I also foresaw the role of mobiles in conflict transformation before smartphones were ubiquitous. In July 2020, I reviewed what we did nearly 20 years ago,1 which was the kind of engagement Swiss-based think tanks like HD Centre are only now starting to think about.

Maria Ressa: By next year, I’ll have been a journalist for 35 years. I opened the Manila Bureau for CNN in 1987, and then I opened the Jakarta Bureau in 1995. And then I came home to the Philippines in 2005. I headed the largest news group here for six years, the one that was just shut down by the government. In 2012, we were a journalists’ organization that decided to experiment with tech to build a community.

I come with three different perspectives that normally don’t come together in one person: I run the business and the tech of our news group; I am also an investigative journalist and have used social-network analysis with CNN to track terrorist networks. It’s a hop, skip, and a jump to go from physical social networks to social media. And then there’s the third part: I became a target.

Sarah Oh: I’ve been working at the intersection of tech, civil society, and government for the last decade or more. I recently worked at Facebook, understanding the impact of social media on conflict, specifically in emerging markets and regions of the world that have just come online.

Before that, I worked with civil society and the tech community in Myanmar on hate-speech trends as the country was coming online. This work on understanding the abuse of tech in these contexts, particularly the impact on marginalized groups, is very different from the beginning of my career, which involved the potential of tech to support civic participation in the aftermath of the Arab Spring.

Our forthcoming report, The State of Conflict and Violence in Asia 2021, focuses on identity-based conflicts and extremism. Do you think that new media technologies have played a role in the politics and patterns of violence?

Sanjana Hattotuwa: First, the term “new media” is outdated. It is new only for those of a certain age. For our children, and especially those in their teens today, “new media” is a meaningless phrase. It’s just media, where offline information and news flows seamlessly merge with online platforms and apps.

Our governance, oversight, regulatory, and legal frameworks are no longer fit to address the way news and information are produced, spread, and engaged with. Political entrepreneurs produce propaganda at a pace that overwhelms the ability of existing governance mechanisms to address it. Social media a decade ago offered, in many ways, platforms for dissent. Today they are often cesspools of toxicity and polarization.

Biased, partial, partly true, or entirely false worldviews, promoted as the sole truth, influence the beliefs of billions. They rend society asunder by amplifying division and hate and normalizing the worst of who we are instead of the best we can be. The situation is dire and getting worse.

Maria Ressa: The biggest problem right now is that lies laced with anger and hate spread faster and further than facts, because facts are really boring, and that means the platforms are biased against facts. If you don’t have facts, then how can you have truth? How can you have trust? And how can you have democracy? Facts underpin democracy. Facts underpin markets, right? And what’s replacing them? Well, propaganda networks. Networks that are for sale, essentially.

What we’ve seen, the alternate realities, really began with Russian disinformation in 2014 on Crimea, and the first real target was Ukraine. Most recently, we’ve seen digital outfits pretending to be news organizations. And these are hiring journalists. It’s an extremely dangerous time, because if you don’t have the integrity of facts, how do you have the integrity of elections?

As the platforms have grown, journalists, news organizations, have lost our gatekeeping powers. Around 2014–2015 that gatekeeping power went to tech, and tech abdicated responsibility for protecting the public sphere. They like to say that they’re neutral; that’s not true.

Sarah Oh: Myanmar is a great example of a place where dangerous discourse was being amplified. There is also the example of misinformation about the Easter terrorist bombings in Sri Lanka wrongly identifying minority groups as the perpetrators. In Myanmar, you have a lack of accurate, high-quality reporting about armed conflict, so you have an environment where misinformation or half-truths can shape perceptions. In 2017, there were photos of people in Bangladesh fighting, with headlines suggesting they were Rohingya militants. That’s dangerous. The subtext is that this group is violent, so perhaps violence against the Rohingya would be justified.

The problem is not only the immediate outcomes, but also that it’s occurring in environments that exacerbate this type of abuse: weak protections for digital rights, lack of public education in media and information literacy, fragility in the information ecosystem. We’ve seen so many examples of how political leaders have skillfully used social media to mobilize their supporters and broadcast their messages. Religious leaders in Southeast Asia, like the monks leading Ma Ba Tha, have used Facebook, YouTube, and VK [the Russian social media site Vkontakte] to do exactly that. Many of these groups and actors are becoming very savvy. It really surpasses any single strategy, platform, or method, and it’s constantly evolving.

What has been the impact of social media on democratic systems of governance, or on governance in general?

Maria Ressa: In 2016, we exposed what we started calling the “propaganda machine,” the disinformation networks that are government, progovernment, or government affiliated. The most recent one we exposed, which Facebook just took down, was linked to the police and the military. When we did this in 2016, I was targeted with an average of 98 messages per hour. That’s when I realized this is a brand-new world, we are not prepared for it, and it can be weaponized.

Being a target also meant I watched our democracy in the Philippines cave in, and I watched how it seeded a narrative on Facebook. In 2016, I started warning Western journalists and Google that what is happening to us is going to come your way. Our dystopian present is your dystopian future. And here we are four years later.

It’s impossible to deal with this now. The decisions made in Silicon Valley cascade and destroy us faster than they do the West, because our institutions are just so weak. The Cambridge Analytica whistleblower, Chris Wylie, called the Philippines a petri dish. He said that the company, Cambridge Analytica, as well as its parent, experimented with tactics of mass manipulation here and in other countries in the global South. When these tactics worked, they ported—that’s his word—they ported them over to the West, America and Europe.

Sarah Oh: People talk a lot about the authoritarian playbook being replicated across countries. We’re constantly seeing, for example, what’s happened in the Philippines popping up in the United States. And then you see some other evolution of those tactics and strategies in other countries. It’s everywhere. And I think we need to pay attention to how those tactics are being reused and strengthened. One personal anecdote is from the U.S. elections, when we saw disinformation targeting Spanish-speaking voters in Florida in advance of the final stage of the primary elections. That immediately felt familiar, based on the work that I’d done in Southeast Asia on disinformation campaigns and strategies.

Do new technologies reinforce patterns of civil unrest, instability, and challenges to state authority? What patterns are evident, on the side of both protests and governments?

Sanjana Hattotuwa: Social media’s role is complicated and fluid. Simplistic projections of social media as a monolithic entity are wrong. But the toxicity, violence, hate, and racism on the platforms I study for doctoral research are clear evidence of a failed global experiment in believing that connecting everyone leads invariably to democratic, plural, liberal, and peaceful outcomes. Please read my “Hidden Campaigns”2 and “Murals as Masks”3 as key examples from Sri Lanka, over just the past year, that directly speak to this question.

There is also evidence of citizens countering violent extremism, but the odds are increasingly stacked against civil society. (A recent article by Andrew Marantz in the New Yorker is essential reading in this regard.4) Templates for sophisticated authoritarian control allow for more precise, sustained, and sinister propaganda—constant digital campaigns to undermine the foundations of democracy. A few of us have studied the genesis of these dynamics over the past decade, but it appears that only when Western societies face the threats we [in the Global South] have suffered far longer do global media coverage, concerted policymaking, and conversations about regulation emerge.

Maria Ressa: As early as November 2017, Freedom House came out with a study that said cheap armies on social media had rolled back democracy in more than two dozen countries around the world. A year later, Oxford University’s Computational Propaganda Research Project pushed that number to almost double, and by 2019 the number had reached 70 countries around the world.

The platforms have learned how to handle terrorist content. But the bigger problem is the gray areas—hate speech, conspiracy theories. They’re like black holes that you dive into: if you watch one video, the next recommendation will be just a little more extreme, because the end goal of these platforms is to keep you on their site and to learn your behavior so they can sell your behavior.

Social media platforms have become behavior modification systems. As we users dump our posts into Facebook, that’s all picked up by machine learning, and it builds a model of who we are. It knows us more intimately than we know ourselves. That model is then pulled together by artificial intelligence, which identifies the moment when we are most vulnerable to a message and sells that access to the highest bidder. And that bidder can be a company or a nation. So, the advertising model is a perpetual learning machine that learns from our own behavior.

So, you can say that extremist content or the shift towards more extremist beliefs is built into the designs. They push your behavior, and you’re insidiously manipulated.
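The dynamic Ressa describes can be made concrete with a toy model. The Python sketch below is purely illustrative, not any platform’s actual system: the feature names, the stylized user, and the learning rule are all assumptions. It shows how a ranker that learns online to maximize engagement will, when emotionally charged items reliably earn more engagement, drift toward surfacing them, without anyone explicitly choosing extreme content.

```python
import random

class EngagementRanker:
    """Toy feed ranker: predicts engagement and learns from feedback."""

    def __init__(self):
        # Learned weight per content feature; "outrage" is a stand-in
        # for emotionally charged material.
        self.weights = {"outrage": 0.0, "novelty": 0.0, "relevance": 0.0}
        self.lr = 0.1  # learning rate

    def score(self, item):
        # Predicted engagement: weighted sum of the item's features.
        return sum(self.weights[f] * v for f, v in item["features"].items())

    def rank(self, items):
        # Surface the items the model predicts will hold attention longest.
        return sorted(items, key=self.score, reverse=True)

    def update(self, item, engaged):
        # Online least-squares update: nudge weights toward features of
        # items the user engaged with, away from those they skipped.
        err = (1.0 if engaged else 0.0) - self.score(item)
        for f, v in item["features"].items():
            self.weights[f] += self.lr * err * v

def simulate(rounds=500):
    random.seed(0)
    ranker = EngagementRanker()
    catalog = [
        {"name": "calm report", "features": {"outrage": 0.1, "novelty": 0.5, "relevance": 0.6}},
        {"name": "hot take",    "features": {"outrage": 0.6, "novelty": 0.6, "relevance": 0.4}},
        {"name": "conspiracy",  "features": {"outrage": 0.9, "novelty": 0.8, "relevance": 0.2}},
    ]
    for _ in range(rounds):
        if random.random() < 0.1:
            shown = random.choice(catalog)   # occasional exploration
        else:
            shown = ranker.rank(catalog)[0]  # greedy: highest predicted engagement
        # Stylized user: probability of engaging rises with outrage.
        engaged = random.random() < 0.3 + 0.6 * shown["features"]["outrage"]
        ranker.update(shown, engaged)
    return ranker.weights

if __name__ == "__main__":
    # The learned weights tend to favor the "outrage" feature.
    print(simulate())
```

The point of the sketch is that amplification emerges from the objective being optimized, not from any line of code that says “promote extremism.”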

How have existing conflict actors—violent extremists, nonstate armed groups, militaries—adapted their strategies to new media technologies?

Sanjana Hattotuwa: The adoption and adaptation of social media by violent extremists is indicative of how far ahead they are of civil society in leveraging new vectors of communication, control, and persuasion. It is now the norm, not an aberration. It is what the platforms aid and abet, albeit to varying degrees. Some violent conflict is now purely digital in nature, ranging from cyber-warfare and offensive cyber-operations to the weaponization of trolls and bots against human-rights defenders, examples of which are legion from Sri Lanka and Asia. Other, more entrenched violent conflict is now shaped by digital frames, either to sustain division and hate or by actors who seek to reconcile differences. In sum, the impact of social media on conflict is multifaceted and evolving.

Maria Ressa: In 2016 [in the Philippines], a progovernment account seeded this narrative of journalist = criminal. This has been seeded in many countries around the world, but here in the Philippines, because Rappler stood up, it was targeting me. And in 2016 I laughed, because I’m really old, I’ve been around a long time, and you can see my track record. But there is no track record on social media. There’s no context on social media. So, while I laughed, over time people started going, well, maybe where there’s smoke, there’s fire.

In 2017, we got our first subpoena. And then in 2018, 11 cases were filed against me and Rappler. In 2019 I had eight arrest warrants. I was arrested twice in a five-week period. On June 15th, 2020, the first of the eight criminal cases came to a verdict in a lower court, and I was found guilty. And this is the funniest part: it was for a story we published eight years ago, before the law we allegedly violated had even been enacted. You can see that some of my obsession with this is because I’m living it and it’s shocking.

Compared with these high-profile political concerns, less has been written on the gendered effects of social media. Do you have any reflections on how these technologies are experienced differently by women and men?

Maria Ressa: Rappler is a fact-checking partner of Facebook; we’re one of two Filipino fact-checking partners. The database that we have shows that gendered disinformation targets women at least 10 times more than men in the Philippines.

The top three targets that we started looking at were our vice president, Leni Robredo; then, Senator Leila de Lima, who has been in jail for more than three years now with less time in court than I’ve ever had. She was our former commissioner of human rights and a former justice secretary. And I was the third one. The tactics at the beginning are always about the way you look. It is definitely sexist at best. With Leila de Lima, there were doctored videos of her supposedly having sex. For women, once the attacks are sexualized, it’s a hop, skip, and a jump to hate speech and then to violence.

Sarah Oh: Violence against women online—on social media and on the internet in general—is a serious, serious issue that needs a much closer look than it’s getting right now. Without question, this is the single most common issue that’s been raised with me in every country that I’ve done research in.

It’s important to have more conversations with civil society and groups that work on women’s safety and rights. Sometimes it’s making sure women can freely express themselves online. Other times it’s creating the opportunity for them to not have their identity linked to what they’re saying. There’s work to be done on legal recourse for women who have suffered abuse online.

I think it starts with really understanding the experience. There’s a lot of great research about some of these issues in the offline context, but we are only beginning to understand how it intersects with online platforms.

How can governments or intergovernmental bodies in Asia address these challenges? Do you support more regulation? Can national governments manage social media, or are they dependent on action elsewhere—in California, perhaps?

Sanjana Hattotuwa: Governments in Asia are interested in their own survival and will use social media regulation to clamp down on inconvenient truths and dissent. Ultimately, it comes down to a global, regional, and domestic conversation about responsibility (who can and should act), responsiveness (how quickly actions must be taken), transparency (making clear what was done and why), and accountability (including avenues for redress and appeal by actors involved in regulation). Independent of democratic underpinnings, it is unclear how these principles can find expression in Asian governments that, in their outlook and investments, undermine these principles daily.

Sarah Oh: The industry should bear the brunt of these challenges, but I think there’s a lot more that can be done at the government level. I’m really interested in opportunities for associations or regional networks like ASEAN [the Association of Southeast Asian Nations] to pressure their member governments to create environments that are safer for online engagement in their countries.

This could include making sure resources are put forward for media and digital literacy education, really trying to understand what works, and creating more robust legal frameworks that protect people online. A lot of critical events have provided almost an x-ray of the weaknesses in both of those areas—public education and literacy, and all the gaps and bad laws that are still on the books.

We have this challenge of creating a better regulatory environment for a service that both has no borders and is generally pretty popular with the consumers who use it, especially in emerging markets and in Southeast Asia.

Can tech companies police themselves? Can you share any positive examples of change that has reduced the abuse of online spaces, and explain what incentives or actions caused the positive outcome?

Sanjana Hattotuwa: No.

Maria Ressa: No. We tried. And I was one of the people early on. Even though I was under attack, I actually thought the tech platforms could act like journalists. But news organizations have standards and ethics manuals. And there is a line between our editorial team and our business, right? There’s nothing like that in the social media world, in tech. In fact, tech is built to make money, to continue to grow. And with those two imperatives, they’ve torn apart democracy in many countries around the world.

Sarah Oh: I come back to transparency and creating a mechanism to enforce accountability, because market forces themselves won’t result in the outcomes that we want. I’m glad to see public discussion about the international implications of American companies, but I don’t think it’s enough. There needs to be multi-stakeholder engagement to really drive home the accountability that’s required.

The Facebook Oversight Board is up and running. It’s still early, but so far it’s promising. What’s perhaps still missing is an effort to bring all sectors together on neutral ground. There have been great ad hoc efforts by civil society, but, again, I don’t think any of them have been convened with all sectors on a neutral footing.

What can nongovernmental groups and civil society do to reduce the negative impacts and build on the positive impacts of new media technologies?

Sanjana Hattotuwa: Focus on investments, data, evidence, and innovation. Civil society is hostage to an outdated practice of activism and to outmoded senior management and leadership, and its simplistic assumptions about social media aiding democratic processes and institutions are increasingly out of sync with stark realities.

Complex new technologies require new modes of engagement, research, and advocacy. The potential for inciting hate and violence is present on social media platforms alongside their socially beneficial role. Focusing on one or the other misses how the interplay, always in flux and linked to context, can be studied and adapted to strengthen democratic, positive outcomes. It is unclear to what degree civil society recognizes the need to engage intentionally with social media, basing advocacy and activism on data, evidence, and context. And therein lies the rub.

Maria Ressa: What is going to govern content moderation? This kind of whack-a-mole approach that the social media platforms are using doesn’t work. So, the first thing is, use the Universal Declaration of Human Rights to actually define the principles of content moderation, because the current list doesn’t work.

I always say there are three C’s—collaborate, collaborate, collaborate—because we don’t have a seat at the table. We never did. These decisions were made in Silicon Valley, and we bear the full brunt of a lot of them. We’re still partners with Facebook, so we demand accountability. We spend a long time talking behind the scenes; we flag problems. And then we need to look for policy solutions. I’m a cochair of a working group at the Forum on Information and Democracy that will be releasing policy recommendations. And at least 52 countries will look at those.

We’re trying to raise a billion dollars a year for independent media. It’s the International Fund for Public Interest Media. I said at the very beginning at Rappler: we build communities of action, and the food we feed our communities is journalism, right? In civic engagement, we continue MovePH,5 which The Asia Foundation worked on with our civic engagement team.

I think this is the battle for the next five years. We’re on the precipice, and we need to do exactly what happened after World War II: bring a lot of people to the table and say, “This is destroying all of us. How do we prevent our tools, what we create, from destroying humanity, destroying our structures? What new structures do we need to put in place?”

Sarah Oh: I really think this is an ecosystem problem that requires governments, civil society, and tech companies to have a shared framework for understanding these issues. A way to get there is, first, setting up some principles and expectations of transparency. [We should] dramatically rethink how we support people who know the most about conflict: the NGOs or advocates on the ground. To date, I’ve mostly seen direct training for a lot of these groups. Why not give them the resources to hire experts in those areas, whether it’s machine learning or data science?

Monitoring and measuring are critical. It’s been really encouraging to see what groups have been able to do with access to tools like [Facebook’s] CrowdTangle. It’s not everything, but it begins to give groups tangible things to look at, monitor, and understand. I’m hoping that in the next five years we can begin doing those really practical things, coming up with shared principles and then trying to measure work against them.
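Monitoring tools like CrowdTangle let groups export post-level engagement data. As a minimal sketch of the kind of practical measurement Oh describes, assuming a hypothetical CSV export (the file name and column names below are illustrative stand-ins, not CrowdTangle’s actual schema), a short script can flag pages whose posts on a topic suddenly spike in interactions:

```python
import csv
from collections import defaultdict

# Illustrative only: "posts.csv" and its columns ("page", "date",
# "message", "interactions") are assumed stand-ins for a post-level
# export of the kind a monitoring tool such as CrowdTangle provides.

def engagement_by_page(path, keyword):
    """Sum daily interactions per page for posts mentioning a keyword."""
    totals = defaultdict(lambda: defaultdict(int))  # page -> date -> interactions
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if keyword.lower() in row["message"].lower():
                totals[row["page"]][row["date"]] += int(row["interactions"])
    return totals

def flag_spikes(totals, factor=3.0):
    """Flag (page, date) pairs whose interactions exceed factor x the page's daily mean."""
    flagged = []
    for page, days in totals.items():
        mean = sum(days.values()) / len(days)
        for date, n in days.items():
            if n > factor * mean:
                flagged.append((page, date, n))
    return flagged

if __name__ == "__main__":
    totals = engagement_by_page("posts.csv", keyword="rohingya")
    for page, date, n in flag_spikes(totals):
        print(f"{page} on {date}: {n} interactions (spike)")
```

Even a crude signal like this gives civil society groups something tangible to watch and to measure their interventions against.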

Finally, Covid-19: the pandemic has been described as a disruption accelerator. Clearly, people are videoconferencing and shopping online more than before, but what about its impact on how tech relates to conflict?

Sanjana Hattotuwa: Please read “Post-Pandemic Peace Operations,”6 based on a presentation to the UN’s senior leadership on this very question a few months ago.

Maria Ressa: It’s destroyed the world as we know it. In the Philippines, Covid quarantine was characterized by lockdowns, curfews, barricades; there were more than a hundred thousand people arrested for breaking quarantine rules; some people were killed. In many countries, the Covid crisis gave leaders a chance to consolidate power. How do you hold great power to account when you’re stuck at home?

In the time of Covid you have to make sure journalism survives. So, we’ve used this time period to take what we know as journalists and evolve a new, sustainable business model. And that’s worked for us. But the other part is realizing that as we come under attack, as the law is weaponized against us, we actually need new laws to protect journalists.

Sarah Oh: I worry about the conflicts that we’ve seen in pandemics past feeding into the polarization we see online. We’ve seen scapegoating of minority groups at a level that’s extremely alarming and has resulted in offline violence against Muslims, who’ve been targeted by misinformation about super-spreader events in some countries.

I’m also really worried about the vulnerabilities that get exposed in a very insecure population. When the first lockdown occurred in India, there was a lot of food insecurity and people being trapped. There were some efforts to try online methods to distribute food and resources to people who were unable to move around. What would it look like to really scale up and give the handful of groups who are working on the most important causes the resources they need to really get their message out? Someone once told me that the only time they saw people from different groups coming together was after major floods or earthquakes. And the place where that person had seen that—people reaching out to offer services and relief—was on social media. Sometimes things like that can only happen in a digital space.


Notes

1 Sanjana Hattotuwa (2020), “Peace Processes after the Pandemic: What Role for Technology?” ICT4Peace Foundation, 7 July, https://ict4peace.org/activities/peace-processes-after-the-pandemic-what-role-for-technology/

2 Sanjana Hattotuwa (2019), “Hidden Campaigns,” Sanjana Hattotuwa WordPress, 22 December, https://sanjanah.wordpress.com/2019/12/22/hidden-campaigns/

3 Sanjana Hattotuwa (2019), “Murals as Masks,” Sanjana Hattotuwa WordPress, 8 December, https://sanjanah.wordpress.com/2019/12/08/murals-as-masks/

4 Andrew Marantz (2020), “Why Facebook Can’t Fix Itself,” The New Yorker, 12 October, https://www.newyorker.com/magazine/2020/10/19/why-facebook-cant-fix-itself

6 Sanjana Hattotuwa (2020), “Post-Pandemic Peace Operations,” ICT4Peace Foundation, 21 June, https://ict4peace.org/activities/post-pandemic-peace-operations/