
From Liberty to Liability: Shifting Sands in Singapore’s Internet Governance


The growing concern over online harms has become a pressing issue both internationally and within Singapore, leading to a heightened sense of urgency among governments to find effective regulatory solutions. In Singapore, there appears to be a shift towards holding internet intermediaries more accountable for the content on their platforms. This article contends that relying heavily on intermediary liability may not be the most viable or effective strategy in the long run. Instead, it suggests alternative strategies that could offer a more equitable and balanced framework for regulating online harms.

What Are Online Harms?

The term “online harms” only entered Singapore’s regulatory vernacular around late 2022, following a study by the non-profit SG Her Empowerment. The study brought to light various forms of objectionable internet content and behaviours and labelled them collectively as “online harms”.1 This coincided with the Ministry of Law announcing the need for further measures to protect victims of online harms – a term the government had not previously used.2

Despite how recently the term was coined, online harms have existed for as long as the internet has been in widespread use. The early days of the internet saw the emergence of harms like online stalking and cyberbullying. As social media became prevalent, it brought issues like citizen vigilantism, doxing, and hate speech. The rise of dating apps saw an increase in image-based harms such as revenge pornography, while the advent of AI technology has introduced the threat of deepfakes. More recently, the proliferation of misinformation, scams, and fraud has also been recognised as a significant online harm.

Evolution of the Regulatory Landscape

Singapore’s approach to online harms was initially characterised by a patchwork of laws. For instance, stalking and cyberbullying were addressed under the common-law tort of harassment, which was later codified in the Protection from Harassment Act.3 One could potentially act against hateful or abusive speech through defamation laws. And where such speech could cause strife between communities or ethnic groups, its maker could be prosecuted criminally under the (now defunct) Sedition Act and its successor,4 the Maintenance of Religious Harmony Act.5 Revenge pornography is also prosecutable under the Films Act.6 These laws share a common focus: they target the perpetrator of the online harm by imposing civil and/or criminal liability personally on the perpetrator.

This approach reflected an era of internet libertarianism – a time when the internet was seen as a burgeoning economic space that should remain free from heavy-handed regulation.7 The prevailing attitude was one of passive neutrality, under which intermediaries like internet service providers and platforms were considered mere conduits of information that should not be responsible for the content passing through their systems.8 In fact, some laws entrenched this stance by providing explicit protection for intermediaries. For example, platforms and search engines were shielded from liability for hosting defamatory content under the common-law defence of innocent dissemination. More broadly, the Electronic Transactions Act exempted network service providers from civil and criminal liability for transmitted content.9 The upshot of this belief was that intermediaries were under no obligation to monitor their systems or police the content they conveyed. This lax regulatory landscape likely also arose from the pacing problem – a phenomenon whereby regulation cannot keep up with the rapid pace of technological progress.10

However, this landscape began to shift dramatically in recent times. The last five years have seen a swift and significant change in regulatory attitudes towards intermediaries. A series of new laws has placed increasing responsibility on internet platforms. Notable among these is the Protection from Online Falsehoods and Manipulation Act,11 which allows the government to order intermediaries to remove, block, or disable content deemed to be fake news. The Protection from Harassment Act was also amended to enable courts to order intermediaries to block or remove content that is harassing towards, or false about, an individual.12 Furthermore, changes to the Broadcasting Act made through the Online Safety (Miscellaneous Amendments) Act empower the government to direct intermediaries to remove, block, or disable “egregious” content deemed prejudicial to national security, public health, or morality.13 In a similar vein, the upcoming Online Criminal Harms Act allows the government to order intermediaries to remove, block, or disable content related to criminal activities like scams and frauds.14

Problems With Regulating Online Harms Through Intermediary Liability

This evolution from passive neutrality to one of actively imposing intermediary liability marks a significant shift in Singapore’s regulatory approach. Instead of targeting individuals, we now target intermediaries. But why the sudden change? Two primary factors may have driven this paradigm shift. First, the vast amount of user-generated content on the internet, capable of rapid and widespread dissemination, makes it challenging to regulate online harms at their source. Second, the anonymity afforded by the internet often shields perpetrators, making it difficult to hold them personally accountable for their acts. Thus, shifting the liability to intermediaries is seen as a more feasible way to control harmful content.

But as with any step taken too far, too fast in a particular direction, we must ask some important questions. Is imposing more obligations on intermediaries the best way of regulating online harms? And are there long-term downsides to increased regulation directed at intermediaries?

Conceptual Difficulties With Intermediary Liability

Intermediary-focused regulation raises several conceptual difficulties. A key concern is the alignment of these laws with international standards like the Manila Principles on Intermediary Liability, which advocate for (among other things) content restriction only under judicial orders and after due process. The Manila Principles were formulated chiefly to prevent overburdening intermediaries with monitoring responsibilities.15

It is immediately apparent that many of Singapore’s new laws (discussed in the section above, and with the exception of the Protection from Harassment Act) allow the government to unilaterally order intermediaries to restrict content under their charge. The government does not need a court order to do this, let alone one procured through a process in which the intermediary can put forward its defence. When served with an order, intermediaries must comply first or risk criminal prosecution. Any appeal an intermediary can file, where the legislation permits one, comes later – after the content in question has already been removed or blocked.

Another conceptual argument against over-regulating intermediaries is that such regulation can undermine the economic and social benefits of the digital environment.16 The fear of litigation or regulatory restrictions might prompt intermediaries to excessively censor information on their platforms. Furthermore, enhanced compliance and content-monitoring efforts could increase intermediaries’ operating costs, which are likely to be passed on to users through higher prices. Limited access to information could also hinder innovation and economic growth. A tipping point might be reached where complying with regulations becomes too costly or laborious, leading intermediaries to limit or withdraw their services from certain markets. This was exemplified when Facebook threatened to pull news services in Australia in response to the News Media Bargaining Code.17 Such actions can degrade the extent and quality of services available to users, ultimately leaving consumers to bear the brunt of these regulatory challenges.

Practical Limitations of Protecting Against Online Harms Through Intermediary Liability

Beyond theoretical concerns, the practical efficacy of intermediary liability in fully addressing online harms is debatable. We regulate against online harms because we want to protect individuals and society from their ills. This is an objective that regulatory regimes cannot lose sight of or deviate from.

Current legislation may be effective in enabling the government to swiftly remove harmful online content. However, this approach can be seen as overly state-centric: the power to issue orders for content removal, restriction, or disabling is vested in the authorities and is typically tied to higher-level objectives like safeguarding public health, national security, and societal harmony.

This entirely overlooks the individual and personal impact of online harms. Presently, there is no mechanism for individuals or entities directly affected by online harms to report and initiate action against such content. A deep irony ensues: given that studies, like the one by SG Her Empowerment, highlight the individual as the primary victim of online harms,18 why do individuals not get a say in when online harms legislation can be invoked?

Moreover, the “remove, block, disable” mechanism common across all intermediary-focused legislation is not a one-size-fits-all solution to every form of online harm. Removing, blocking, or disabling content on one platform does not prevent it from being reposted by another person on another platform, or from being circulated in a private channel. More importantly, the psychological harm and trauma inflicted by online abuse linger long after the offending content is removed. These are aspects of online harm that must be dealt with, or even prevented in the first place.

Enhancing the Regulatory Framework

The current “remove, block, disable” policy may be a starting point in addressing online harm, but it is far from being a comprehensive solution. A more robust and inclusive regulatory framework is needed. In rethinking the approach towards online harms, Singapore must adopt a more holistic strategy that not only addresses the symptoms but also the root causes of these issues. So far, this has not been done.

The government’s recognition of “online harms” as a collective concept is a good starting point. However, the term remains somewhat vague. We must do more to enumerate the various types of online harms as exhaustively and as specifically as possible. Classifying online harms is not merely an administrative task, but a foundational step in raising public awareness. Here, we can take a leaf out of the Australian eSafety Commissioner’s playbook. The Commissioner maintains resources that set out a detailed taxonomy of the online harms to which different segments of the population may be susceptible.19 For example, seniors may fall prey more readily to scams, young people to cyberbullying, women to image-based abuse, and minorities to hate speech. Correctly classifying and labelling online harms matters: it signals to people that these harms are legitimate wrongdoings, which in turn gives them the confidence and awareness needed to respond to such threats. For example, our legislation should call “cyberbullying” and “doxing” what they are, rather than shying away from such terms and defining them amorphously through cryptic language such as “conduct that may cause harassment, alarm, or distress”.20

Once there is awareness, we must promote action. Encouraging individuals to protect themselves against online harms requires a shift from a predominantly top-down regulatory approach to a more participatory, bottom-up model. Again, Singapore can take inspiration from Australia’s Online Safety Act,21 which allows individuals to report instances of online harm through a system operated by the eSafety Commissioner. The Commissioner, a public body, is then empowered to investigate the report and, if need be, issue takedown orders against intermediaries. This approach marries the best of both worlds: the “remove, block, disable” aspects of existing legislation remain, but the government is no longer the only entity that can invoke these tools. Citizens who have been affected by online harms must have the ability to kick-start the process. Moreover, giving a commissioner takedown powers may bring speedier relief to victims of online harms, who currently must resort to lengthier court proceedings to seek reprieve. If the government can set up a POFMA Office through which the public can report online falsehoods for the Office to act on,22 surely a similar public body can be established to respond to online harms against the citizenry.

Our regulatory regime against online harms should move away from relying entirely on ex post measures (i.e. imposing orders against intermediaries only after a problem surfaces). We should include more ex ante measures in our regulatory toolkit (i.e. preventative measures that reduce the likelihood of problems surfacing in the first place). Some key preventative measures come to mind.

First, our regulatory regime should maximise responsible and authentic internet engagement. This involves ensuring that users’ identities are verifiable. A practical step in this direction is to require users to authenticate their online profiles with identifiable information, such as a mobile phone number. Platforms particularly susceptible to misuse, like Hardwarezone – an online forum known for user-generated gossip, allegations, and occasionally abusive speech – should prioritise user identification.23 Presently, all one needs to register for a Hardwarezone account is an email address, a username, and a password. These details do not allow the actual user to be conclusively identified. Platforms naturally prefer to collect as little personal data about their users as possible, to minimise their data-protection obligations, compliance costs, and potential liabilities. However, this needs to change. Mandating that users provide some degree of static personal data (such as a mobile phone number, for instance as part of two-factor authentication) has two advantages. First, users know they can be traced and identified if push comes to shove, which should diminish the odds of them engaging in harmful online conduct. Second, such details allow users to be traced and identified more conclusively. The platform need not make this identity information public (which allays privacy concerns), but it can at least be made available for legal or law-enforcement purposes when a court orders its disclosure.
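To make this concrete, the short sketch below shows one way a platform might tie an account to a verifiable mobile number at registration using a one-time password (OTP). It is a minimal illustration under stated assumptions: the function names, the 300-second validity window, and the in-memory stores are hypothetical, and a real deployment would deliver the OTP through an SMS gateway and persist records in a database.

```python
# Minimal sketch (not any platform's actual API) of tying an account to a
# verifiable mobile number via a one-time password (OTP) at registration.
# Names such as register_account and OTP_TTL_SECONDS are illustrative.

import hmac
import secrets
import time

OTP_TTL_SECONDS = 300                          # assumed OTP validity window
_pending: dict[str, tuple[str, float]] = {}    # phone -> (otp, issued_at)
_verified_accounts: dict[str, str] = {}        # username -> verified phone number

def send_otp(phone: str) -> None:
    """Generate a 6-digit OTP and (in a real system) SMS it to the user."""
    otp = f"{secrets.randbelow(1_000_000):06d}"
    _pending[phone] = (otp, time.time())
    print(f"[demo] OTP for {phone}: {otp}")    # stand-in for an SMS gateway

def register_account(username: str, phone: str, otp: str) -> bool:
    """Create the account only if the OTP matches and has not expired."""
    record = _pending.get(phone)
    if record is None:
        return False
    expected, issued_at = record
    if time.time() - issued_at > OTP_TTL_SECONDS:
        return False
    if not hmac.compare_digest(expected, otp):  # constant-time comparison
        return False
    # The phone number is stored, not published: it stays private, but can be
    # disclosed for law-enforcement purposes if a court orders it.
    _verified_accounts[username] = phone
    return True

if __name__ == "__main__":
    send_otp("+6591234567")
    # In practice the user types the OTP received by SMS; here we replay it.
    otp, _ = _pending["+6591234567"]
    print(register_account("forum_user_01", "+6591234567", otp))  # True
```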

Second, there should be standard codes of practice against which intermediaries can structure their operations.24 While some degree of compliance would ultimately be required, a code establishes clear and consistent guidelines, which is preferable to a system that imposes sanctions without any guidance. Singapore has made some strides in this area. In 2023, the Infocomm Media Development Authority (IMDA) introduced the Code of Practice for Online Safety.25 The Broadcasting Act makes it compulsory for designated social media platforms to comply with this code by implementing systems and processes to manage and moderate harmful content.26 Users must also be given tools to control their exposure to harmful content. Furthermore, platforms must conduct age verification so that accounts belonging to children are subject to their own set of age-appropriate restrictions and guidelines. However, the current form of the code, spanning just six pages, is somewhat brief and lacks precise language.

One key change to the code could be to mandate what is known as a “safety by design” approach for major platforms. This approach, which the UK’s Online Safety Act 2023 embraces to a large degree,27 focuses on integrating user safety into the development and operational strategy of platforms from the start. It begins with understanding how platform features, like user interactions or content creation and sharing, might expose users to risks. By identifying these vulnerabilities early, platforms can implement measures to shield users from potential harms. For instance, platforms can design interfaces that alert users when their actions might cause harm, facilitate easy reporting of harmful content or behaviour, and deter the sharing of illegal content or content that violates terms of service. Empowering users to make safer decisions is another critical element. Platforms can guide users towards safer online behaviour through design choices that encourage the review of privacy settings, highlight fact-checked content, and warn about excessive screen time. Children’s safety is a special consideration, given their heightened vulnerability to online harms. Platforms should ensure that services are accessible only by age-appropriate users and that the online experience for children is tailored to safeguard them from harm. This includes making safety tools and terms of service easily understandable for young users, promoting positive interactions, and enabling responsible adults to configure safety settings for a child’s online activities.
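As a deliberately simplified illustration of this friction-by-design idea, the sketch below holds back a post and warns its author when it matches patterns associated with doxing or abuse. The keyword patterns, function names, and two-step confirmation flow are hypothetical stand-ins for the classifiers and interface nudges a real platform would deploy.

```python
# Illustrative sketch only: a "safety by design" pre-publication check that
# nudges a user before publishing content flagged as potentially harmful.
# The patterns and the screen_post/publish functions are hypothetical.

import re

# Toy patterns standing in for a real moderation classifier.
RISK_PATTERNS = {
    "possible doxing": re.compile(r"\b(home address|ic number|lives at)\b", re.I),
    "possible abuse": re.compile(r"\b(kill yourself|worthless)\b", re.I),
}

def screen_post(text: str) -> list[str]:
    """Return the warnings to surface to the author before the post goes live."""
    return [label for label, pattern in RISK_PATTERNS.items() if pattern.search(text)]

def publish(text: str, author_confirmed: bool = False) -> str:
    """Friction by design: flagged posts require an explicit second confirmation."""
    warnings = screen_post(text)
    if warnings and not author_confirmed:
        return f"Held for author review: {', '.join(warnings)}"
    return "Published"

if __name__ == "__main__":
    print(publish("He lives at Blk 123 ..."))        # author is prompted to reconsider
    print(publish("He lives at Blk 123 ...", True))  # author confirms, post proceeds
```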

Third, it is noteworthy that the code falls short of establishing mechanisms for ensuring compliance and accountability. A potential improvement could involve subjecting social media platforms to periodic compliance audits by the IMDA, thereby ensuring that platforms remain vigilant and responsible in their content-management practices. The audit process might involve a thorough examination of the platforms’ operational procedures, the effectiveness of their content-moderation tools and algorithms, and their responsiveness to user reports and complaints. It would also assess how well these platforms comply with legal requirements and ethical standards in managing and mitigating online harms. By making these audits regular and transparent, the IMDA can foster a culture of continuous improvement among social media platforms. Platforms would be incentivised to proactively maintain high standards of safety and responsibility, knowing that their practices are subject to scrutiny. This approach not only holds platforms accountable for their content management but also builds public trust in their commitment to user safety. Furthermore, the findings from these audits could be published to highlight best practices and areas needing improvement. This transparency would empower users to make informed choices about the platforms they engage with, and allow platforms to learn from the strengths and weaknesses identified.
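To make the idea of a periodic audit more tangible, the sketch below computes two simple compliance ratios of the kind an auditor might publish: the share of user reports resolved within 24 hours, and the share of flagged items ultimately removed. The metric names and figures are illustrative assumptions, not requirements drawn from the IMDA code.

```python
# Hypothetical sketch of scoring a platform against a code of practice.
# Metric names and example figures are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class AuditMetrics:
    reports_received: int
    reports_resolved_within_24h: int
    flagged_items_removed: int
    flagged_items_total: int

def audit_score(m: AuditMetrics) -> dict[str, float]:
    """Compute simple compliance ratios an auditor could publish."""
    return {
        "report_resolution_rate": m.reports_resolved_within_24h / max(m.reports_received, 1),
        "removal_rate": m.flagged_items_removed / max(m.flagged_items_total, 1),
    }

if __name__ == "__main__":
    metrics = AuditMetrics(reports_received=1200, reports_resolved_within_24h=1020,
                           flagged_items_removed=340, flagged_items_total=400)
    print(audit_score(metrics))  # {'report_resolution_rate': 0.85, 'removal_rate': 0.85}
```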

Conclusion

Singapore’s increasing recognition of online harms as a pressing issue is commendable. However, the direction in which regulation is headed may not be the way to go. An intermediary-liability regime is a blunt tool at best, and only one facet of protecting society against online harms. A more balanced approach is necessary: one that also includes individual empowerment and preventative measures.

Endnotes

1 SG Her Empowerment, “Study on Online Harms in Singapore 2023 Topline Findings” (2023) <https://api2.she.org.sg/uploads/SHE_Report_on_Online_Harms_Study_Final.pdf> (accessed 20 February 2024) (“SHE”).
2 Osmond Chia, “Further laws needed to protect victims of online harms: Shanmugam” The Straits Times (25 September 2023).
3 Protection from Harassment Act 2014 (2020 Rev Ed) (“POHA”).
4 Sedition Act (1948).
5 Maintenance of Religious Harmony Act 1990 (2020 Rev Ed).
6 Films Act 1981 (2020 Rev Ed).
7 D. Daniel Sokol, “Framework for digital platform regulation” (2021) 17(2) Competition Law International 95.
8 Yeong Zee Kin, Technology Regulation in the Digital Economy (Singapore Academy of Law, 2023).
9 Electronic Transactions Act 2010 (2020 Rev Ed).
10 Gary E Marchant, Braden R. Allenby & Joseph R. Herkert, The Growing Gap Between Emerging Technologies and Legal-Ethical Oversight: The Pacing Problem (Springer Netherlands, 2011).
11 Protection from Online Falsehoods and Manipulation Act 2019 (2020 Rev Ed).
12 POHA, supra n 3.
13 Broadcasting Act 1994 (2020 Rev Ed) (“Broadcasting Act”); Online Safety (Miscellaneous Amendments) Act 2022.
14 Online Criminal Harms Act 2023.
15 Kylie Pappalardo & Nicolas Suzor, “The Liability of Australian Online Intermediaries” (2018) 40(4) Sydney Law Review 469.
16 Yassine Lefouili & Leonardo Madio, “The Economics of Platform Liability” (2022) 53 European Journal of Law and Economics 319.
17 “Facebook blocks Australian users from viewing or sharing news”, BBC (Australia) (18 February 2021) <https://www.bbc.com/news/world-australia-56099523> (accessed 20 February 2024).
18 SHE, supra n 1.
19 eSafety Commissioner website <https://www.esafety.gov.au/> (accessed 14 January 2024).
20 POHA, supra n 3.
21 Online Safety Act 2021 (Australia).
22 POFMA Office website <https://www.pofmaoffice.gov.sg/> (accessed 20 February 2024).
23 Hardwarezone website <https://www.hardwarezone.com.sg/home> (accessed 14 January 2024).
24 Karen Kornbluh & Ellen P. Goodman, “Safeguarding Digital Democracy: Digital Innovation and Democracy Initiative Roadmap” (2020) German Marshall Fund of the United States.
25 Infocomm Media Development Authority, “Code of Practice for Online Safety” (2023) <https://www.imda.gov.sg/-/media/imda/files/regulations-and-licensing/regulations/codes-of-practice/codes-of-practice-media/code-of-practice-for-online-safety.pdf> (accessed 20 February 2024).
26 Broadcasting Act, supra n 13, s 45L.
27 Online Safety Act 2023 (c 50) (UK).


