
Round Up of Significant Legal Developments in AI for 2023


And What’s Ahead for 2024

This article identifies the three key developments for Artificial Intelligence (AI) in 2023, and the trends we expect to see next year as a natural extension of the issues settled this year. With “interoperability” of AI frameworks across countries being the buzzword, this article unpacks what it means and what it takes before a framework can be said to be interoperable.

As 2023 draws to a close, it is timely to review the key artificial intelligence (AI) developments this year and what will be a work-in-progress for the next.

This article is divided into two parts. The first outlines significant developments both in Singapore and overseas, and highlights the three key developments in 2023 and their implications. The second identifies legal trends that we are likely to see in 2024 — many of these future plans can only come to fruition after other areas are first developed, like laying bricks on top of one another: e.g. AI governance principles[1] must first be agreed upon amongst countries, before details of their implementation can be worked out.

Part 1: Significant Developments in 2023

An Overview of the Latest Legal Developments in Singapore

Singapore continues to build on its guidelines for the use of AI.[2] Earlier this month, Singapore released its National AI Strategy 2.0,[3] updating it from the first version in 2019. AI is now seen as a “necessity”, where it is imperative for people to know and understand AI and not just see it as “good to have”. Singapore also intends to take its AI strategy “from local to global”, thus underscoring the need for its AI governance policies to be in line with the international community’s concerns in order to be “world-leading” in AI.

2023 also saw the mapping of Singapore’s AI Verify (an AI governance testing framework and toolkit)[4] to the USA’s AI Risk Management Framework. This was a first-of-its-kind mapping exercise between two countries on their AI governance frameworks, and it is significant because the frameworks are now “interoperable”[5] (i.e. meeting the requirements of Country A’s framework means you also meet the requirements of Country B’s framework). It is a sensible and strategic move, as Singapore’s approach is now in sync with one of the largest hubs for AI innovation.[6]

Keeping pace with the popularity and accessibility of generative AI, IMDA also released two discussion papers:[7]

  1. the first paper “Generative AI: Implications for Trust and Governance” (released 6 October 2023) sets out six risks of generative AI, thus spotlighting the areas where regulatory/industry solutions are necessary;
  2. the second paper “Cataloguing LLM Evaluations” (released 31 October 2023) sets out commonly-used tests to evaluate specific performance aspects of Large Language Models (LLMs), as well as a recommended baseline for model evaluation – this is timely given that AI Verify was not designed to apply to generative AI (a minimal sketch of what such an evaluation involves follows this list).
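
To give a flavour of what evaluating an LLM against a baseline can involve, the sketch below runs a stubbed model over (prompt, reference) pairs and scores exact matches. The prompts, references and scoring rule are illustrative assumptions by the author, not the tests catalogued in the IMDA paper:

```python
from typing import Callable

# Hypothetical stand-in for the LLM under evaluation; a real harness
# would call a deployed model's API at this point.
def model_under_test(prompt: str) -> str:
    canned = {"What is the capital of France?": "Paris", "What is 2 + 2?": "4"}
    return canned.get(prompt, "I don't know")

def exact_match_accuracy(model: Callable[[str], str],
                         cases: list[tuple[str, str]]) -> float:
    """Score the model on (prompt, reference) pairs by exact string match."""
    hits = sum(model(prompt).strip() == reference for prompt, reference in cases)
    return hits / len(cases)

test_cases = [
    ("What is the capital of France?", "Paris"),
    ("What is 2 + 2?", "4"),
    ("Name a sea bordering Singapore.", "South China Sea"),
]
print(f"accuracy: {exact_match_accuracy(model_under_test, test_cases):.2f}")
```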

Finally, Singapore reiterated its stance that its immediate priority is not to enact legislation, and it will instead focus on deepening its understanding of how AI works, what benchmarks to use and what testing is appropriate, so that any legislation (if found to be an appropriate way to manage the risks to and from AI) can be enforced.[8]

An Overview of the Latest Legal Developments Internationally

The consensus among countries is that the application of AI technology to everyday issues from finance to healthcare and even content-creation cannot go unchecked, because of the risks it brings.[9] However, they differ in their approaches on how to contain the risks, which fall into three broad categories:[10]

  1. Enacting omnibus legislation regulating various applications of AI across various sectors – e.g. the EU,[11] Canada,[12] and the USA;[13]
  2. Enacting specific legislation to target particular applications of AI or particular sectors – e.g. China (on algorithmic recommendation systems[14] and generative AI[15]) and the USA (in employment);[16]
  3. Taking a wait-and-see approach, so that products and services are regulated in a technology-neutral way without enacting AI-specific legislation[17] – e.g. Singapore, the UK,[18] Japan and Australia.

Outside of their legislative efforts, regulators are also issuing guidelines to steer organisations’ use of AI. There has been a very public push to bring the industry on board as a partner in co-developing AI frameworks, such as the White House securing voluntary commitments from seven leading AI companies to manage the risks posed by AI,[19] and Singapore launching the AI Verify Foundation.[20]

What should we focus on when looking at AI governance policies/regulations around the world?

In the author’s view, how each country wants to regulate AI (whether by hard laws or by non-binding guidelines) is secondary, as is whether it takes a more pro-consumer protection or pro-enterprise stance.

Instead, it is the contents of the legislation or guidelines that are key. The author proposes that the contents can be viewed through this four-step framework:

(1) What are the principles guiding the use of AI?

Illustration: These are broadly set out – e.g. “fairness”, “transparency”, “explainability”, “robustness”.[21]

(2) What does the principle mean?

Illustration: How broadly is the principle to be interpreted (e.g. does it include X but not Y)? In the case of “transparency”, where “appropriate information is provided to individuals impacted by [the] AI system”,[22] it has been defined to include:[23]

    • ensuring that AI systems, applications and algorithms operate in a transparent and fair manner;
    • making available externally visible and impartial avenues for redress for adverse decisions made by the AI system;
    • incorporating processes for users to verify how and when AI technology is being applied; and
    • keeping detailed records of design processes and decision-making.

Some organisations also take the view that “explainability” (ensuring that the decision-making process, including the data and considerations driving the decision, can be explained)[24] is a subset of “transparency” rather than a standalone principle, which widens the scope of “transparency”.[25]

Moreover, a study has observed that, when it comes to transparency/explainability,[26] many government-issued documents include “openness” (i.e. the open sharing of data, and open-source research and collaboration in designing and developing AI systems) as an aspect of transparency/explainability, but company-issued documents are less likely to do so.[27]

As can be seen from this survey of the literature/guides available on “transparency”, there is less of a consensus once we zoom in on what the principle means.

(3) How should the principle be implemented, in terms of concrete steps an organisation may[28] or must take?

Illustration: Singapore’s AI Verify sets out both the testable criteria (factors that contribute to achieving the principle) and processes (actionable steps that organisations must take to meet the testable criteria).

To give effect to the principle of “transparency”, for example,[29] an organisation should provide the necessary information to end users about the use of their personal data to ensure it is processed in a fair and transparent manner.[30] To that end, organisations should align with the Personal Data Protection Commission’s guidelines on the Personal Data Protection Act 2012,[31] and publish a privacy policy on their website to share information about their use of personal data in the AI system.[32] An organisation should also provide information to end users on the purpose, intended use and intended response of the AI system.[33]
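
As an illustration of the kind of end-user information contemplated by these processes, the sketch below assembles a minimal model-card-style notice in Python. The field names, system name and URL are hypothetical, and this is the author’s sketch rather than a format prescribed by AI Verify or the PDPC:

```python
# Illustrative only: the field names and wording below are the author's
# assumptions, not a template prescribed by AI Verify or the PDPC.
model_card = {
    "system_name": "Loan Eligibility Scorer",              # hypothetical system
    "purpose": "Rank loan applications for manual review",
    "intended_use": "Decision support only; a human officer makes the final call",
    "personal_data_used": ["declared income", "existing debt obligations"],
    "intended_response": "an eligibility score between 0 and 100",
    "privacy_policy_url": "https://example.com/privacy",   # placeholder URL
}

def end_user_notice(card: dict) -> str:
    """Render the card as a plain-language notice for end users."""
    return (
        f"{card['system_name']} uses {', '.join(card['personal_data_used'])} "
        f"to produce {card['intended_response']}. {card['intended_use']}. "
        f"See {card['privacy_policy_url']} for how your personal data is handled."
    )

print(end_user_notice(model_card))
```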

(4) How do we test whether the steps have been taken (including reporting obligations and post-deployment performance monitoring obligations), to verify that the principle has been implemented?

Illustration: Under AI Verify, assessing the implementation of the AI ethics/governance principles can be done by a combination of both technical tests and process checks. However, for the principle of “transparency”, only process checks apply, such as producing documentary evidence of:

    • communication with end users concerning the intended use and response of the AI system (e.g. producing a Model Card);
    • a privacy policy on the organisation’s website on how personal data is used in an AI system;
    • an internal policy on how data will be processed in compliance with existing data protection laws and regulations.

In contrast, the principles of explainability, robustness and fairness can be assessed through a combination of technical tests and process checks.
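
To give a flavour of what a “technical test” involves (as opposed to a process check), the sketch below computes a simple demographic parity gap over a model’s decisions. This is a generic illustration of a fairness metric of the kind such toolkits measure, not the specific test implemented in AI Verify:

```python
from collections import defaultdict

def demographic_parity_gap(decisions: list, groups: list) -> float:
    """Gap between the highest and lowest favourable-outcome rates across groups."""
    favourable, totals = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        favourable[group] += decision  # True counts as 1, False as 0
    rates = [favourable[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Toy data: model decisions alongside a protected attribute.
decisions = [True, True, False, True, False, False, True, False]
groups    = ["A",  "A",  "A",   "A",  "B",   "B",   "B",  "B"]
print(f"parity gap: {demographic_parity_gap(decisions, groups):.2f}")  # 0.50
```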

It is good to have a standard list of tests and a standard list of steps, so that organisations are all assessed on the same factors when reporting their compliance with AI governance principles.

Different countries have accomplished steps (1) to (4) above to different degrees, and having legislation in the works does not mean that a country is further along the scale: Singapore has advanced very far into step (4) despite not wanting to enact AI-specific or omnibus legislation yet. In fact, Singapore’s position is that it is essential to sort out testing benchmarks and methodologies before enacting legislation so that the legislation can be enforced![34]

From an international comparative perspective, high-level principles at step (1) have largely been agreed upon, with step (2) an ongoing work-in-progress to set their boundaries. The key now is to focus on steps (3) and (4) and come to a common agreement on what they mean and how to carry them out; otherwise, a company that rolls out its services to 10 different countries may have to comply with 10 different sets of laws/guidelines and create 10 different product offerings to conform to each![35]
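
Mechanically, interoperability through a mapping exercise can be pictured as a crosswalk: evidence of compliance with framework A’s controls is translated into compliance with the mapped controls of framework B, without re-testing. The sketch below uses invented control identifiers and does not reproduce the actual AI Verify/NIST crosswalk:

```python
# Hypothetical crosswalk between two frameworks' controls. The real
# AI Verify <-> NIST AI RMF mapping is the one published by IMDA and NIST;
# the identifiers below are invented for illustration.
CROSSWALK = {
    "A.transparency.1": ["B.govern.2", "B.map.1"],
    "A.fairness.3": ["B.measure.4"],
}

def controls_met_in_b(controls_met_in_a: set) -> set:
    """Controls in framework B treated as met via compliance with framework A."""
    met = set()
    for control in controls_met_in_a:
        met.update(CROSSWALK.get(control, []))
    return met

# One audit against framework A also evidences the mapped controls in B.
print(sorted(controls_met_in_b({"A.transparency.1", "A.fairness.3"})))
# ['B.govern.2', 'B.map.1', 'B.measure.4']
```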

Three Key Legal Developments in the AI Space in 2023

With the context set, we now zoom in on the key legal developments.

#1: The definition of “artificial intelligence” was updated on 8 November 2023 to describe the technology by reference to its unique features

While companies may label their products/services as AI-enabled for marketing purposes, we have to look behind the labels. While basic, the definition of AI (as adopted by regulators) is important because it sets the scope of the technology to which existing and upcoming legislation and guidelines will apply.

The OECD updated its definition of AI on 8 November 2023, and this is a significant development because the OECD AI Principles are “a global reference point for trustworthy AI”:[36]

Previous definition: An AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.

New definition: An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.

The key thing is that the definition of AI must distinguish it from other types of technology. Very colloquially, AI is where, instead of the human making the rules, the machine learns and then makes the rules (a toy sketch after the list below makes this concrete). The updated definition now captures more of our understanding of AI:[37]

  1. that it is adaptive (the UK[38] was one of the first to adopt this characteristic in defining AI). One critique of the previous definition, based on autonomy alone, was that many things can operate with autonomy (i.e. without the “express intent or ongoing control of a human”[39]) – even electric toothbrushes[40] – hence it is important to also capture that AI can learn from experience and adapt to new circumstances/environments in a way that is not “directly envisioned by their human programmers”;[41]
  2. that AI also has the potential to set its own objectives based on what it has learned (“implicit objectives”) without the need for a human to expressly set them;
  3. that its output can include content as well, so that generative AI models will also fall within the scope of the definition.
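
To make the colloquial contrast above concrete, here is a toy Python sketch: in conventional software a human authors the decision rule, while in a (deliberately simplistic) learned system the rule is inferred from labelled examples. The midpoint “learning” step and all the numbers are the author’s illustrative assumptions, not any regulator’s definition:

```python
# Conventional software: a human author writes the decision rule.
def human_rule(value: float) -> bool:
    return value > 50.0  # threshold chosen by the programmer

# Toy "machine learning": the system infers its own threshold from
# labelled examples instead of being given the rule.
def learn_threshold(examples: list) -> float:
    positives = [v for v, label in examples if label]
    negatives = [v for v, label in examples if not label]
    # Place the boundary midway between the two classes (illustrative only).
    return (min(positives) + max(negatives)) / 2

training = [(80.0, True), (90.0, True), (20.0, False), (40.0, False)]
threshold = learn_threshold(training)

def learned_rule(value: float) -> bool:
    return value > threshold  # a rule inferred from data, not authored

print(threshold, learned_rule(65.0))  # 60.0 True
```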

#2: Interoperability

This year saw a huge push towards creating a common framework for AI governance. As mentioned earlier, Singapore and the USA led the way with mapping the principles and processes in Singapore’s AI Verify to those in the USA’s NIST AI Risk Management Framework – the first-of-its-kind country-to-country mapping of AI governance frameworks, going into the fine details of concrete steps to take and testing methodologies instead of just a high-level agreement on general principles.

Countries will always be keen to release their own legislation/guidelines on AI to establish thought leadership and to stake their claim in the rapidly-evolving AI landscape – either by having their standards become the international standard, or by being able to shape the international standard (in their favour). However, they know not to do it in a vacuum.

Countries are sensitive to the fact that it would not be easy for a company to follow a patchwork of 10 fragmented rules/guidelines if it deploys its AI system across 10 different countries – companies would ideally like to look at just one! This was one of the reasons why the EU’s Artificial Intelligence Act was mooted. Hence, there have been significant high-profile international gatherings this year to reach a consensus on how AI principles and their practical application should be developed, such as:

  1. the G7 summit on 30 October 2023, which introduced 11 International Guiding Principles applicable to all organisations (whether developing or using AI systems),[42] as well as a voluntary International Code of Conduct for organisations developing advanced AI systems that may be adopted by both the public and private sector;[43]
  2. the inaugural AI Safety Summit hosted by the UK from 1-2 November 2023, culminating in the Bletchley Declaration affirmed by 29 signatories (including China, the EU and the USA) to co-operate to address the risks of AI;[44]
  3. the endorsement by 18 countries of UK-developed guidelines on cybersecurity principles to observe when designing, developing and deploying AI systems;[45]
  4. the Council of Europe’s proposed international Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law, mooted back in January 2023,[46] with negotiations on the text still ongoing;[47]
  5. closer to home, the ASEAN countries’ development of an ASEAN guide to AI ethics and governance. While a copy has not yet been made publicly available (save to technology companies for their feedback), it is expected to be released at the ASEAN Digital Ministers Meeting in January 2024.[48] Reuters has also reported that the guide will take into account the unique socio-political climate of ASEAN, be very business-friendly, and not prescribe any prohibited uses of AI (cf. the EU’s AI Act).[49]

Nevertheless, there are two things to bear in mind with the harmonisation of laws and guidelines:

  1. we must be clear on what is being aligned – recall the four-step framework set out above. While countries agree on the principles at step (1) – e.g. “fairness”, “transparency”, “explainability” – the details of these principles, how they are to be implemented, and what benchmarks must be met before something can be said to be “fair”, etc. are still to be ironed out in the months to come. The author submits that harmonisation/interoperability is only achieved when steps (3) and (4) are aligned across countries;
  2. countries should not all rush to adopt a common set of principles/standards – they should first thoroughly consider the impact of the proposals on their industries/economies/societies before deciding whether to follow them. A case in point is how France, Germany and Italy took a different view from the rest of the EU bloc on how foundation models should be regulated under the EU AI Act (they proposed mandatory self-regulation through codes of conduct, instead of hard laws), causing an impasse in negotiations.[50]

#3: More clarity on the IP front for generative AI

The legal boundaries on IP issues surrounding generative AI – which have been the subject of much debate since the introduction of ChatGPT to the public in November 2022 – are starting to take shape with judicial commentary as well as regulator guidelines, although questions still remain.

Regulators have issued guidelines clarifying the position on authorship of AI-generated works – see the US Copyright Office’s “Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence”.[51] However, it does not seem that they will issue guidance any time soon on whether copyright is infringed when copyrighted works are used to train generative AI models without the authors’ permission (although they have acknowledged that there are ambiguities to be resolved),[52] given that there are compelling arguments for either position[53] and there is no harm in waiting to see how courts rule on the many lawsuits brought against generative AI companies before taking a position.

The position on training is yet to be settled in the courts, but recent judicial rulings[54] have focussed on cutting down very expansive claims against generative AI providers: claimants are required to show that the AI-generated output is substantially similar to their copyrighted work used in training the AI model, instead of alleging that all generated outputs are infringing derivatives of their copyrighted work simply by virtue of the work being used to teach the model to recognise text patterns or images.

Part 2: Future Trends

In 2024, we are likely to see more of the following, given the trajectory this year – these developments would be a natural progression, as the necessary foundations have been laid:

  1. More alignment across countries on how to operationalise the principles and how to conduct testing, since legislation is only effective to the extent it can be enforced. This can take the form of:
    1. standard setting by international bodies such as the International Organization for Standardization, which will be more technical in nature (e.g. a standard on assessing the robustness of neural networks,[55] or standards on conducting autonomous vehicle testing[56]);
    2. countries mutually recognising each other’s governance and testing frameworks (akin to the Singapore and US mapping exercise, where if you comply with country A’s framework you also comply with country B’s);
    3. a guideline endorsed by multiple countries (like the UK Guidelines for secure AI system development[57]) – although these will be more high-level and do not drill down into technical standards, they still pave the way for the next steps.

    As the guidelines issued by countries become more uniform and more widely adopted, one thing to think about is their implications for establishing liability – will they become the industry standard of good practice, such that not adhering to them will be a basis to argue that an organisation fell below the standard of care in a negligence claim?

  2. More clarity about the responsibilities and obligations of an organisation developing an AI system versus an organisation using an AI system developed for it. While there are some guides released by international organisations targeted at specific actors in the AI chain,[58] it will be helpful for regulators to release guides spotlighting individual obligations.[59] This is not to say that existing guides do not contain relevant material – they do, but they often address parties in general, so various actors have to sift out what principles are relevant to them. Singapore has made clarifying the obligations of each actor in the AI supply chain a priority in the latest National AI Strategy 2.0.[60]
  3. More clarity on the consequences of AI systems not complying with existing laws or AI governance principles. With the passage of time, it is hoped that some of the copyright infringement lawsuits will have progressed in 2024 so that there are concrete rulings, or that regulators (such as the US Federal Trade Commission[61]) take enforcement action or issue guidance notes. Where the consequences of issues with the training data are not specified in legislation, it is important to know what they are – can a company merely pay damages and carry on? If so, what is the quantum of the damages? And what will be the threshold for courts or regulators to order a company to cease using the AI system?[62] While neither an inherently discriminatory AI system nor a system built on illegally collected personal data[63] should be allowed to carry on, should the same fate befall an AI system trained on copyrighted materials, if that is eventually held by the courts to be not fair use and therefore infringing, given the previous uncertainty over the legal position?
  4. New obligations on persons to counter the risks of AI – e.g. would you now have a responsibility to prevent your data on the Internet from being scraped if you know that this is a common practice by companies training their AI models? The Italian data protection authority recently (in November 2023) launched a public consultation to look into companies’ practices to prevent the personal data they put on the Internet from being scraped.[64]

Conclusion

At the end of the day, regulating AI is not about regulating the technology per se, but the use of it in specific products/services. It is like building a plane – regardless of the type of technology deployed in it, you want to ensure that the ultimate product/service is safe for people to use. Therefore, if AI is to be applied in a loan application service, you want to know how the AI system was designed to ensure it does not discriminate against a particular group of people. Where it applies to content creation, you want to understand how the AI system was designed so that guardrails can be put in to ensure that no (or minimal) harmful or IP-infringing content is generated.

Consequently, AI governance principles may be new, but many existing laws/legal principles still anchor the development of AI systems – they are not developed in a legal vacuum. Any AI-related regulation will have to be built on top of existing regulations for products/services already on the market, as well as regulations for data protection, consumer protection, etc. Beyond legislation, developers are also required to comply with open-source licence conditions if they use publicly-available datasets to train their AI models, or obtain code or pre-trained models from open-source repositories.

Therefore, the focus of AI regulation is actually on the steps to take in developing the AI system, and the documentation/proof/tests to show that those steps have in fact been taken (in order to mitigate the risks identified), so that, to borrow the words of the US President when announcing the Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence, companies must “prove that their most powerful systems are safe before allowing them to be used. […] That means companies must tell the government about the large-scale AI systems they’re developing and share rigorous independent test results to prove they pose no national security or safety risk to the […] people.”[65]

The views expressed in this article are the personal views of the author and do not represent the views of Drew & Napier LLC.

Endnotes
1 Singapore notes that countries are “generally coalescing around 11 key AI ethics principles”, namely (1) transparency; (2) explainability; (3) repeatability/reproducibility; (4) safety; (5) security; (6) robustness; (7) fairness (i.e. mitigation of unintended discrimination); (8) data governance; (9) accountability; (10) human agency & oversight and (11) inclusive growth, societal & environmental well-being – see para 10 of the AI Verify Invitation to Pilot, available at https://file.go.gov.sg/aiverify.pdf
2 Some examples of significant guides released earlier are: (1) the Model Artificial Intelligence Governance Framework (1st edition in 2019 and the 2nd edition in 2020); (2) the Companion to the Model AI Governance Framework – Implementation and Self-Assessment Guide for Organizations (2020); (3) various sectorial guides, such as the Monetary Authority of Singapore’s “Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore’s Financial Sector” (2018), and the Ministry of Health’s “Artificial Intelligence in Healthcare Guidelines” (2021).
3 Available at https://file.go.gov.sg/nais2023.pdf
4 Launched in 2022, AI Verify is a voluntary self-assessment framework which lets organisations verify the claimed performance of their AI systems against a set of standardized process checks and technical tests. More details are available at https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/press-releases/2022/sg-launches-worlds-first-ai-testing-framework-and-toolkit-to-promote-transparency and https://file.go.gov.sg/aiverify.pdf
5 https://www.mci.gov.sg/media-centre/press-releases/singapore-and-the-us-to-deepen-cooperation-in-ai/
6 The completion of the joint mapping exercise “will provide companies with greater clarity to meet the requirements within both frameworks, reduce compliance costs, and foster a more conducive environment for AI deployment and innovation” – see MCI’s 13 October 2023 press release at https://www.mci.gov.sg/media-centre/press-releases/singapore-and-the-us-to-deepen-cooperation-in-ai/
7 Excitingly, IMDA also announced on 4 December that it is partnering AI Singapore and A*Star to launch a Large Language Model that is representative of South-east Asian cultural contexts and linguistic nuances (e.g. the need to manage context-switching between languages in multilingual Singapore), as most LLMs are developed based on Western cultures, values and norms. By building this, Singapore will gain a deeper understanding of how LLMs work as regulators develop their in-house expertise, and can also develop more relevant AI governance principles. See the IMDA press release at https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/press-releases/2023/sg-to-develop-southeast-asias-first-llm-ecosystem
8 See the National AI Strategy 2.0 released on 4 December 2023, at pages 54 to 55. See also the earlier interview with IMDA on 19 June 2023, available at https://www.cnbc.com/2023/06/19/singapore-is-not-looking-to-regulate-ai-just-yet-says-the-city-state.html
9 Regulators have set out the risks they find crucial to intervene in. See, for example, the UK Parliament’s Science, Innovation and Technology Committee which has identified “twelve challenges of AI governance that must be addressed by policymakers”, at https://committees.parliament.uk/work/6986/governance-of-artificial-intelligence-ai/news/197236/ai-offers-significant-opportunities-but-twelve-governance-challenges-must-be-addressed-says-science-innovation-and-technology-committee/
10 Save where autonomous vehicles are concerned – all countries have enacted legislation given that previous road traffic legislation is premised on there being a human driver.
11 The Artificial Intelligence Act (introduced in April 2021, not yet in force)
12 The Artificial Intelligence and Data Act (introduced June 2022, not yet in force)
13 The Algorithmic Accountability Act was reintroduced in September 2023, but it has failed to pass twice before when introduced in 2019 and 2022. Given the pace and frequency at which individual Senators introduce legislative packages, but uncertainty over whether they will garner enough support to be passed, “Executive Branch policies have acquired major relevance in the United States” (see page 20 of the OECD Paper on “The State of Implementation of the OECD AI Principles Four Years On” (October 2023), available at https://www.oecd-ilibrary.org/docserver/835641c9-en.pdf?expires=1701914238&id=id&accname=guest&checksum=3F7E626C94AC5DBB4CDA22A83F5C231F)
14 Provisions on the Management of Algorithmic Recommendations in Internet Information Services (introduced in 2021; operational on 1 March 2022)
15 Provisions on the Administration of Deep Synthesis Internet Information Services (introduced in 2022, operational on 10 January 2023) and Interim Measures for the Management of Generative Artificial Intelligence Services (introduced in 2023, operational on 15 August 2023).
16 E.g. the Artificial Intelligence Video Interview Act (Illinois) and the New York Local Law 144 of 2021 on automated employment decision tools.
17 See for example UK’s Online Safety Act 2023, which does not mention AI directly but applies to the use of bots or other automated tools.
18 While Lord Chris Holmes recently introduced a private members’ bill on AI regulation on 22 November 2023, the sentiment is that it is unlikely to be passed and become law given the government’s stance (from March through November 2023) that it will take a light-touch approach to AI regulation and not introduce legislation in the near future. See https://globaldatareview.com/article/uk-data-bills-progress-through-parliament-no-ai-regime-in-sight
19 https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/
20 https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/press-releases/2023/singapore-launches-ai-verify-foundation-to-shape-the-future-of-international-ai-standards-through-collaboration
21 See the 11 principles listed in Singapore’s AI Verify framework – (1) transparency; (2) explainability; (3) repeatability/reproducibility; (4) safety; (5) security; (6) robustness; (7) fairness (i.e. mitigation of unintended discrimination); (8) data governance; (9) accountability; (10) human agency & oversight and (11) inclusive growth, societal & environmental well-being.
22 See page 4 of the AI Verify Invitation to Pilot, available at https://file.go.gov.sg/aiverify.pdf
23 See page 66 of the IMDA Model Artificial Intelligence Governance Framework
24 See page 64 of the IMDA Model Artificial Intelligence Governance Framework. It is also defined in AI Verify as the “ability to assess the factors that led to the AI system’s decision, its overall behaviour, outcomes and implications”
25 See page 38 of the OECD Paper on “The State of Implementation of the OECD AI Principles Four Years On” (October 2023): “Transparency and explainability also figure prominently in several non-binding guidelines for ethical AI implementation. However, despite broad agreement on the need for transparent and explainable AI, operationalising these concepts is complex due to their multifaceted nuances. AI transparency entails: i) clearly communicating to users that they are dealing with an AI system; ii) the interpretability of decision-making processes, and iii) the explainability of decision-making logic.”
26 Where explainability is closely related to transparency as it allows persons to understand how the AI system works to arrive at its output.
27 Chesterman et al, “The Evolution of AI Governance” (Working paper version November 2023), available at https://www.techrxiv.org/articles/preprint/The_Evolution_of_AI_Governance/24681063
28 Please note that at present, AI Verify does not set out compulsory criteria and processes (i.e. organisations may choose not to implement the measures in the process checks (or indicate that a particular measure is not applicable to them), and assume the risk that non-compliance may mean that they are non-compliant with other regulatory requirements).
29 For brevity, the processes listed in this article are not an exhaustive list of all the processes related to the principle in AI Verify.
30 Criteria 1.1
31 Process 1.1.1
32 Process 1.1.2
33 Criteria 1.3; Process 1.3.1
34 See IMDA’s interview on 19 June 2023 at https://www.cnbc.com/2023/06/19/singapore-is-not-looking-to-regulate-ai-just-yet-says-the-city-state.html. Other jurisdictions are also using voluntary guidelines as a precursor to hard laws, to fine-tune the policy first. Examples are the joint EU-US initiative to develop a voluntary AI Code of Conduct before any jurisdiction passes laws on AI, and Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems (effective 27 September 2023) while the Artificial Intelligence and Data Act is going through the legislative process. It was described as “a critical bridge between now and when that legislation would be coming into force” – see the press release at https://www.canada.ca/en/innovation-science-economic-development/news/2023/09/minister-champagne-launches-voluntary-code-of-conduct-relating-to-advanced-generative-ai-systems.html
35 This is why the AI Verify and NIST AI Risk Management Framework mapping was described as “an important step towards harmonisation of international AI governance frameworks to reduce industry’s cost to meet multiple requirements.” The joint press release is available at https://www.mci.gov.sg/files/Press%20Releases%202023/annex%20a.pdf
36 See page 8 of the OECD Paper on “The State of Implementation of the OECD AI Principles Four Years On” (October 2023). The OECD principles are widely adopted by countries (including Singapore), given that there are 46 countries that have signified their adherence to the OECD’s “Recommendation of the Council on Artificial Intelligence” (issued in 2019) that sets out AI governance principles.
37 See an explanatory note to the OECD’s revised definition of an AI system at https://oecd.ai/en/wonk/ai-system-definition-update
38 See the UK white paper on “A pro-innovation approach to AI regulation” (introduced 29 March 2023 and updated on 3 August 2023), available at https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper
39 See the UK white paper above at para 39.
40 See the commentary at https://techmonitor.ai/comment-2/eu-ai-act-improving
41 See the UK white paper above at para 39.
42 https://digital-strategy.ec.europa.eu/en/library/hiroshima-process-international-guiding-principles-advanced-ai-system
43 https://digital-strategy.ec.europa.eu/en/library/hiroshima-process-international-code-conduct-advanced-ai-systems
44 https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023
45 https://www.ncsc.gov.uk/news/uk-develops-new-global-guidelines-ai-security
46 https://www.coe.int/en/web/artificial-intelligence/cai
47 The text is expected to be finalised in early 2024, as reported in https://events.euractiv.com/event/info/charting-the-future-of-ai-from-the-eu-ai-act-to-global-ai-governance
48 https://www.reuters.com/technology/southeast-asia-eyes-hands-off-ai-rules-defying-eu-ambitions-2023-10-11/
49 https://www.reuters.com/technology/southeast-asia-eyes-hands-off-ai-rules-defying-eu-ambitions-2023-10-11/
50 https://www.euractiv.com/section/artificial-intelligence/news/france-germany-italy-push-for-mandatory-self-regulation-for-foundation-models-in-eus-ai-law/
51 Issued on 16 March 2023, and available at https://www.federalregister.gov/documents/2023/03/16/2023-05321/copyright-registration-guidance-works-containing-material-generated-by-artificial-intelligence
52 See, for example, pages 11, 12, 21 and 22 of the IMDA paper on “Generative AI: Implications for Trust and Governance”.
53 On one hand, some argue that training generative AI models on copyrighted materials is not fair use, since the generated output competes with the copyrighted works it is trained on, affecting the livelihoods of artists. See https://sifted.eu/articles/stability-ai-head-of-audio-resigns and the original tweet at https://twitter.com/ednewtonrex/status/1724902327151452486. On the other hand, AI companies have argued that it is fair use because the copyrighted works are used to train the model to recognise patterns in text and images to generate their own new text and images, and not reproduce existing text and images.
54 The first case is that of Sarah Andersen et al. v Stability AI Ltd, heard in the United States District Court, Northern District of California. District Judge William Orrick (on 30 October 2023) expressed doubt over arguments that all AI-generated outputs are infringing derivatives of the images they were trained on, saying “even if plaintiffs narrow their allegations to limit them to output images that draw upon training images based upon copyrighted images, I am not convinced that copyright claims based (on) a derivative theory can survive absent ‘substantial similarity’ type allegations.” As reported in https://www.theregister.com/2023/10/31/judge_copyright_stabilityai_deviantart_midjourney/. The second case is that of Richard Kadrey et al. v Meta Platforms, Inc., heard in the United States District Court, Northern District of California by District Judge Vince Chhabria. As reported in https://www.newsweek.com/sarah-silverman-lawsuit-meta-ai-1846340 and https://www.niemanlab.org/2023/11/the-legal-framework-for-ai-is-being-built-in-real-time-and-a-ruling-in-the-sarah-silverman-case-should-give-publishers-pause/
55 https://www.iso.org/standard/77609.html?browse=tc
56 https://www.iso.org/standard/78951.html
57 https://www.ncsc.gov.uk/files/Guidelines-for-secure-AI-system-development.pdf
58 For example, the World Economic Forum published a guide in June 2023 on “Adopting AI Responsibly: Guidelines for Procurement of AI Solutions by the Private Sector”, available at https://www.weforum.org/publications/adopting-ai-responsibly-guidelines-for-procurement-of-ai-solutions-by-the-private-sector/
59 There have been calls for this – see for example https://www.dataguidance.com/news/germany-dsk-publishes-opinion-eu-ai-act-demanding-clear and https://www.ukfinance.org.uk/system/files/2023-11/The%20impact%20of%20AI%20in%20financial%20services.pdf (which speaks of the uncertainty in procuring models from third-party providers).
60 See page 56 of the National AI Strategy 2.0: “We will regularly review and adjust frameworks like the Model AI Governance Framework and AI Verify to reflect emerging principles, concerns, and technological developments (e.g. Generative AI). As part of this, it will be important to establish clear responsibilities for actors across the AI supply chain. This baseline guidance will give clarity to AI developers and users on how to be responsible in the design and use of AI.”
61 The FTC has previously ordered companies to delete illegally-obtained data and algorithms built using such data.
62 See https://slate.com/technology/2023/10/artificial-intelligence-copyright-thomson-reuters-ross-intelligence-westlaw-lawsuit.html, where Bob Brauneis, an IP law professor at George Washington University Law School, is quoted as saying: “The worst outcome would be that you lose (i.e. be found liable for copyright infringement) and that the relief is you have to destroy your model and start all over again (…) The way these models are generated, there’s no way that, say, (OpenAI’s) GPT-4, there’s no way you can go back and filter out the plaintiff’s content from the model that you generated. I think every computer scientist agrees that is not currently possible with the way these models are being built. Then you have to destroy that and start all over again with content that you’ve licensed.”
63 https://www.ftc.gov/news-events/news/press-releases/2022/03/ftc-takes-action-against-company-formerly-known-weight-watchers-illegally-collecting-kids-sensitive
64 https://www.dataguidance.com/news/italy-garante-investigates-webscraping-algorithm (press release only available in Italian, so more details can be found at https://www.jdsupra.com/legalnews/the-italian-data-protection-authority-2331171/)
65 https://www.whitehouse.gov/briefing-room/speeches-remarks/2023/10/30/remarks-by-president-biden-and-vice-president-harris-on-the-administrations-commitment-to-advancing-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
