What can businesses in Singapore do right now to prepare for the AI regulations that are rapidly being enacted around the world, and before Singapore enacts its own? This article discusses the three key issues on everyone’s minds (extraterritoriality, applicability and retroactivity of the new laws), and three practical things organisations can do while the laws are still in flux – document (so that you have contemporaneous records to rely on for subsequent audits or regulatory submissions), disclose (to show compliance, and to build trust) and dialogue (with regulators, to have a role in shaping regulation).
The European Union’s Artificial Intelligence Act (heralded as “the world’s first comprehensive AI law”[1]) is expected to be passed at the end of 2023, China has enacted a suite of AI legislation[2] that is now in force, and as of mid-2023, the UK[3] and Singapore[4] maintain that there is no need for AI regulation, while Japan[5] and Australia[6] are reconsidering their positions. With all this going on, what can businesses do now to prepare for upcoming regulation of their use of AI?
Part 1: The Global State of Play of AI Regulation and Three Key Things Organisations Should Note
There are three main ways in which countries around the world are regulating the use of AI:
- Enact legislation regulating the use of AI generally (i.e. across sectors): EU, Canada (and the US, if the Algorithmic Accountability Act is reintroduced);[7]
- Enact legislation regulating the use of AI in a particular sector (e.g. employment, finance, transport[8]) or a particular application of AI (e.g. internet algorithmic recommendation systems, or generative AI): China[9] and the US;[10]
- Rely on existing legislation for now (a wait-and-see approach), but issue guidelines where necessary to guide the industry: Singapore, UK, Australia and Japan.
However, regardless of whether countries think new legislation is needed, they are moving from the question of “what” AI governance principles should be to the practical question of “how” these principles should be given effect. There is broad consensus on what AI governance principles should be – see, for example, the OECD principles, which many countries build on.[11] But when it comes to translating these ethical/governance principles into enforceable legislation,[12] countries differ in how the specifics are implemented (which we will discuss in Part 2 below).
(1) Will Overseas AI Regulations Impact Me?
So why does it matter to an organisation in Singapore what other jurisdictions are doing to regulate the use of AI? This is because their legislation may have extraterritorial effect, given that technology is not constrained by geographical borders – for example:
- the EU AI Act can apply to AI systems outside the EU if the output produced by those systems is used in the EU;[13]
- China’s Generative AI Measures apply to service providers outside China who provide generative AI services in China.[14]
Ideally, compliance with Country A’s AI laws would also mean compliance with Country B’s, so that technology can be easily exported, but global regulation has not yet reached that state of uniformity (although there is talk of global AI governance).[15] Hence, it is important to keep an eye on AI developments overseas.
(2) Is the Technology I am Using Even Caught by the AI Regulations?
Another key issue is defining what AI is, so that organisations know whether or not their technology is caught. There are two main approaches to defining AI, with the latter gaining more popularity:
- defining it by the technology involved – listing out the techniques and approaches (e.g. machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning[16]) – as the EU Commission did in the EU AI Act;
- defining it by the unique qualities of the technology – such as opacity, complexity, dependency on data and autonomous behaviour[17] – like the UK (focussing on autonomy and adaptivity),[18] to future-proof the definition.
It is also relevant whether the AI system is considered a “high-risk” or “high-impact” system, as there are enhanced obligations on providers of such systems (beyond transparency obligations to disclose to the user that they are interacting with AI). The EU’s list of high-risk AI systems is still subject to trilogue negotiations – but at least it sets out concrete instances, unlike Canada’s legislation, where what is high-impact will be left to regulations that will only be drafted later.[19]
There is also the issue of how general-purpose AI systems (such as ChatGPT) will be regulated, because there are so many different use cases for them – some applications will be high-risk, while others will not.[20] Therefore, until these issues are settled, organisations would do well to start preparing if there is reason to believe (e.g. from regulator statements, or from the regulation of similar AI systems in other countries) that they will be subject to more stringent obligations under the upcoming AI regulations.
(3) Will AI Legislation Apply to Existing AI Systems on the Market (Or Only Those Developed After the Legislation Comes into Force)?
It is for regulators to decide whether an AI system developed and deployed prior to the commencement of the legislation will be exempted from its operation, or whether its developers will instead be given a period of time to comply. The latter scenario is more likely, both in light of the spirit of the law (if safety and fairness are key considerations, they should apply to all AI systems and not only those developed after a certain date) and from what is being done on the ground. For example, in New York, the commencement of the bias audit law was delayed so that organisations using automated tools for employment decisions would have more time to have their tools audited before continuing to use them.[21] In the EU, while the initial draft of the AI Act generally provided that the Act would not apply to existing AI systems on the market unless they were subject to “significant changes” in their design or intended purpose after the Act came into force, the European Parliament’s amendments now require any high-risk AI system used by public authorities to comply with the requirements of the Act within two years after it enters into force.[22]
It is possible that an organisation must cease using its AI system if it cannot comply. There have been enforcement actions in the US resulting in orders to delete algorithms developed on data obtained in violation of data privacy laws or through deceptive conduct.[23]
While there is the option to appeal to the regulators for an exemption from the legislation, exemptions are a matter of discretion and are not a given (hence organisations should not count on this route). Therefore, organisations should act as if the upcoming legislation will apply to their existing AI systems, and take the steps in Part 2 below, so as not to be caught short if and when the newly-enacted legislation applies to their existing system.
Part 2: What Organisations Can Do Now to Prepare for Upcoming AI Regulation
In light of the issues discussed in Part 1, the question on an organisation’s mind would be: what can I do now to make compliance easier later?
There are three things organisations can do:
- Document the development process now;
- Disclose the development process, both to regulators and to the public, to demonstrate compliance and to build trust;
- Dialogue with regulators, to be able to shape the AI regulations and policies that the organisation will be subject to.
When discussing these steps, we will draw on the legislation of countries with AI legislation in place or in the works[24] (USA, Canada, EU, China), as well as Singapore’s requirements in our Model Artificial Intelligence Governance Framework[25] (Model Framework). However, one limitation is that much of this legislation leaves the specifics to be set out in yet-to-be-enacted subsidiary legislation (e.g. Canada, US), so the full details are not yet known.
Another thing to note is that an organisation will be subject to different obligations depending on its role – whether as a developer/provider of an AI system, or a user of the AI system. For example:
- Canada’s Artificial Intelligence and Data Act (AIDA) applies to a person responsible for an AI system, namely a person who designs, develops or makes available for use the AI system, or manages its operation;[26]
- China’s AI regulations predominantly impose obligations on the providers of AI services,[27] but in limited cases there are also obligations on users (e.g. users must not use deep synthesis services to produce or transmit fake news)[28] and on organisations that provide technical support to the AI service provider;[29]
- the EU AI Act applies to providers[30] as well as users/deployers[31] of AI systems, but the bulk of the obligations (relating to documentation and disclosure) fall on the provider;[32]
- for the use of automated employment decision tools in New York, the employer using the tool is subject to the laws requiring the tool to be audited, even if it did not develop the tool itself and procured it from a vendor;[33]
- Singapore’s Model Framework applies to organisations that develop or procure their own AI systems and is “not intended for organisations that are deploying updated commercial off-the-shelf software packages that happen to now incorporate AI in their feature set”.[34]
(1) Document
The idea of “documentation” is that an organisation keeps records so that it can trace the development process of the AI system from start to end, enabling it to account for the decisions made by the AI system.[35] Documentation helps to “mitigate the risks to fundamental rights and safety posed by AI”,[36] as well as to fulfil requirements for placing the AI system on the market. It also facilitates an audit of an AI system, where its algorithms, data and design processes can be assessed by internal or external auditors.[37]
At the end of the day, if documentation is performed for the various steps in a timely manner, then should the information ever be needed, it will be contemporaneous and more complete than if it were pieced together retroactively. This is especially relevant when defending against liability, and for troubleshooting if the model does not perform as expected.[38]
Across jurisdictions, all of the legislation sets out requirements for documentation. Although many laws have left the specifics of documentation to subsidiary legislation (such that until the subsidiary legislation is enacted we have only the broad picture),[39] a clear picture emerges across all jurisdictions that the following should be documented (a brief illustrative sketch of such a record follows the jurisdiction-specific notes below):
- how the AI system is designed/trained;
- what data is used in training; and
- the risks and steps taken to mitigate the risks.
- EU
- Providers of high-risk AI systems must maintain documentation containing the information in Annex IV[40] of the AI Act in order to carry out a conformity assessment.[41]
- US
- Maintain documentation of any “impact assessment”[42] performed on the deployed AI system.[43] What must be covered is set out in section 4 of the Algorithmic Accountability Act of 2022.[44]
- China
- Documentation is necessary in order to fulfil statutory requirements, such as filing algorithms with the registry (e.g. providing information on how the algorithm is trained), periodically reviewing the performance of the algorithms,[45] or confirming that training data was obtained from lawful sources.[46]
- Canada
- Responsible persons must keep records of the measures established for their use of anonymized data.[47] If the AI system is a high-impact system, they must also keep records of the reasons supporting their assessment that it is a high-impact system, the measures they have taken to identify, assess and mitigate the risks of harm or biased output, and their monitoring of compliance with those mitigation measures.[48] The records must be provided to the Minister upon request.[49]
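Pulling these common threads together, the following is a minimal sketch of what a structured, contemporaneous documentation record could look like. This is the author’s illustration only – none of the laws above prescribes a format, and every field name here is an assumption:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelDocumentationRecord:
    """Illustrative record covering the three items common to all
    jurisdictions: design/training, training data, and risks/mitigations."""
    model_name: str
    design_rationale: str             # how the system is designed/trained,
                                      # e.g. why this model was chosen over another
    training_data_sources: list[str]  # what data is used in training
    identified_risks: list[str]       # risks identified during development
    mitigation_steps: list[str]       # steps taken to mitigate those risks
    recorded_at: str = field(         # timestamped at creation, so the record
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )                                 # is contemporaneous, not reconstructed

# Hypothetical usage
record = ModelDocumentationRecord(
    model_name="loan-approval-v2",
    design_rationale="Gradient-boosted trees selected over a neural network "
                     "for explainability of individual decisions",
    training_data_sources=["internal loan book 2018-2022 (anonymised)"],
    identified_risks=["proxy discrimination via postal code"],
    mitigation_steps=["postal code removed from features", "disparity testing"],
)
```

Keeping such records in structured form (rather than scattered across emails and slide decks) makes it easier to produce them later for an audit or regulatory submission.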
While waiting for more details to emerge in the legislation, organisations can in the meantime rely on guidelines from regulators.[50] By way of illustration of what should be documented, the Singapore Model Framework recommends that organisations take the steps below (a short sketch of a data provenance record follows the list):
- Document the datasets and processes that give rise to the AI model’s decision (including data gathering, data labelling and the algorithms used)[51] – this would include keeping records of how the AI system is designed, such as why one model was selected over another;
- Prepare a risk impact assessment when deploying AI in decision-making, identifying the risks, looking at the probability and/or severity of harm to affected persons, and the measures taken to address the identified risks;[52]
- Keep a data provenance record to ascertain the quality of data used for model training – this would cover where the data came from (whether the source is reliable), whether the data is recent/up-to-date, how representative/complete the dataset is, and whether the data has been modified in any way (e.g. labels have been applied to the data, or certain attributes have been removed).[53]
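As an illustration of the data provenance point, a sketch under the author’s assumptions (the Model Framework does not prescribe any format or field names) might look like this:

```python
from dataclasses import dataclass

@dataclass
class DataProvenanceRecord:
    """One entry per dataset used in model training."""
    dataset_name: str
    source: str                    # where the data came from
    source_is_reliable: bool       # whether the source is considered reliable
    last_updated: str              # whether the data is recent/up-to-date
    representativeness_notes: str  # how representative/complete the dataset is
    modifications: list[str]       # e.g. labels applied, attributes removed

# Hypothetical entry
entry = DataProvenanceRecord(
    dataset_name="customer-transactions-2023Q1",
    source="internal data warehouse",
    source_is_reliable=True,
    last_updated="2023-04-01",
    representativeness_notes="Retail customers only; SME accounts excluded",
    modifications=["fraud labels applied by operations team",
                   "national ID column removed"],
)
```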
For further guidance on how to document the above, an organisation may create its own comprehensive documentation that broadly covers all of the above (and the common international requirements mentioned earlier) by completing the checklist in the “Implementation and Self-Assessment Guide for Organisations” (ISAGO),[54] a companion to the Model Framework. As a rule of thumb, documentation should be detailed enough that an independent verification team could reproduce the results of the model using the same method, based on the documentation made by the organisation.[55]
While having a robust system for documentation cannot ensure a 100% “safe” or “accurate” AI system, being made to think about each of the issues/risks raised in detail ensures more well-considered “eyes-wide-open” decisions. Until more concrete details emerge from regulators both locally and overseas on what to document, the ISAGO is a very good starting point for organisations in Singapore to follow.
(2) Disclose
Disclosure in this context means more than just disclosing that AI is being used, or watermarking AI-generated content. Rather, it is about making information about the AI system (such as how it works) available to persons outside of the organisation, which could be regulators, or the general public.
When translating AI ethics/governance principles (e.g. fairness, transparency) into legislation, the principles are often expressed as steps organisations must take in order to achieve them – otherwise the rule would be so broad as to be meaningless (e.g. “an AI system’s decision-making process must be fair”), especially where concepts like “fairness” are subjective. Organisations would then have reporting obligations (i.e. disclosures) on the steps they have taken, so that regulators can assess their compliance with these steps to maximise safety and minimise risks and biased outcomes.
“Documentation” therefore feeds into “disclosure”: keeping good records ensures you are not caught short when eventually called upon to provide details of the actions you have taken. Seeing what regulators expect to be disclosed also helps you be more targeted about what you document, and understand the purpose behind documenting each item or process.
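To make this relationship concrete: if internal documentation is kept in structured form, a regulator-facing or public disclosure can be derived from it rather than drafted from scratch. The sketch below is purely hypothetical (the function, field names and the choice of what is public are the author’s assumptions, not requirements of any of the laws discussed):

```python
def prepare_disclosure_summary(record: dict, public_fields: set[str]) -> dict:
    """Filter a full internal documentation record down to the fields
    the organisation has decided (or is required) to publish."""
    return {k: v for k, v in record.items() if k in public_fields}

# Hypothetical internal record: some fields are for regulators/auditors only
internal_record = {
    "model_name": "loan-approval-v2",
    "identified_risks": ["proxy discrimination via postal code"],
    "mitigation_steps": ["postal code removed", "disparity testing"],
    "training_data_sources": ["internal loan book 2018-2022 (anonymised)"],
    "model_hyperparameters": {"n_trees": 400},  # kept internal
}

public_summary = prepare_disclosure_summary(
    internal_record,
    public_fields={"model_name", "identified_risks", "mitigation_steps"},
)
print(public_summary)  # only the publishable fields remain
```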
How disclosures are to be carried out generally falls into three baskets:
- Conduct an assessment (known by many names – such as an “impact assessment” or “conformity assessment”) – whether internally or with the aid of a third party (depending on the nature of the AI system) – where the results must be submitted to a regulator, or can be viewed on request by a regulator. The assessment will cover matters such as the factors taken into account when making a decision, what data is used to train the model, etc.;
- File the algorithms in a Registry controlled by the regulator (China);
- Publish the results of an external audit or an internal assessment on a publicly-available website (e.g. New York, Canada).
- EU
- Perform a conformity assessment[56] before placing the high-risk AI system on the market.
- The technical documentation may also be examined by national competent authorities[57] to see if the AI system complies with the Act’s requirements.
- Register high-risk AI systems in an EU-wide database.[58]
- Notify authorities of serious incidents or malfunctioning of the AI system.[59]
- US
- Federal: Submit annually to the Federal Trade Commission a “summary report” containing information from the impact assessment[60] (discussed above under “Document”), and the FTC will then publish a summary of the report publicly on its website.[61]
- New York: Subject the AI tool to an audit before it is used, and publish the results on a publicly-available website.[62]
- China
- File the algorithms in a Registry[63] (disclosing details of how the AI model is trained, including the datasets used in training and whether biometric or personal information is used).[64]
- Carry out security assessment procedures.[65]
- Inform users how the algorithms function.[66]
- Canada
- Publish an explanation on a publicly-available website of how the high-impact system operates and the measures taken to identify, assess and mitigate the risks of harm or biased output that could result from the use of the system.[67]
- Notify the Minister if the use of a high-impact system results, or is likely to result, in material harm.[68]
If there is no in-force legislation requiring disclosure to the authorities or to the public as yet, voluntary disclosure to the public takes centre stage, as it helps to build user trust. Singapore’s Model Framework encourages organisations to disclose voluntarily, suggesting that they may publicise how AI is used in relation to consumers, what steps they have taken to mitigate the risks, and the role and extent that AI plays in their decision-making process.[69] The concept of voluntary disclosures is not new – most recently, the White House announced that seven leading companies involved in the development of AI technology have voluntarily committed to managing the risks posed by AI and publicly disclosing both the capabilities and the limitations of their AI systems.[70]
(3) Dialogue
No single entity has all the answers when it comes to developing regulations for AI; hence, Singapore’s regulators have said that it is important for them to work closely with the industry, research organisations and other governments.[71] Singapore has most recently held public consultations on the use of AI in human biomedical research[72] as well as the use of personal data to develop and deploy AI systems.[73] Organisations would do well to contribute their on-the-ground expertise.
To give an illustration of how responding to consultations can shape policy: in the consultation draft of the Generative AI Measures, providers had to “ensure” the truth, accuracy, objectivity and diversity of the training data, which would have posed challenges for compliance. However, in the version released after the consultation period, this was changed to merely “employ effective measures to increase the quality of training data, and increase the truth, accuracy, objectivity and diversity of training data”.[74]
Conclusion
Organisations should always take a sensible approach and remember that, in the absence of specific legislation for AI systems, whatever laws govern a human making a decision would also govern the same decision made by a machine. Therefore, if there are factors a human cannot take into account, or if the human’s decision-making process is subject to scrutiny, the same would (and should) apply to AI systems.
In the meantime, these three Ds – documentation, disclosure and dialogue – drive a responsible AI approach for organisations, even before clear and enforceable legislation crystallises.
The views expressed in this article are the personal views of the author and do not represent the views of Drew & Napier LLC.
Endnotes
1. Described by the European Parliament as such at https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
2. Notably, the following three pieces of legislation: (a) the Provisions on the Management of Algorithmic Recommendations in Internet Information Services (“Algorithmic Recommendation Provisions”) (introduced in 2021; operational on 1 March 2022) – translation available at https://digichina.stanford.edu/work/translation-internet-information-service-algorithmic-recommendation-management-provisions-effective-march-1-2022/; (b) the Provisions on the Administration of Deep Synthesis Internet Information Services (“Deep Synthesis Provisions”) (introduced in 2022; operational on 10 January 2023) – translation available at https://www.chinalawtranslate.com/en/deep-synthesis/; and (c) the Interim Measures for the Management of Generative Artificial Intelligence Services (“Generative AI Measures”) (introduced in 2023; operational on 15 August 2023) – translation available at https://www.chinalawtranslate.com/en/generative-ai-interim/
3. https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper – see the Ministerial foreword: “Our approach relies on collaboration between government, regulators and business. Initially, we do not intend to introduce new legislation. By rushing to legislate too early, we would risk placing undue burdens on businesses.”
4. https://www.cnbc.com/2023/06/19/singapore-is-not-looking-to-regulate-ai-just-yet-says-the-city-state.html
5. See pages 18-19 of the Liberal Democratic Party’s AI White Paper released in April 2023, on “New approaches to AI regulation”, available at https://www.taira-m.jp/ldp%E2%80%99s%20ai%20whitepaper_etrans_2304.pdf
6. The Department of Industry, Science and Resources launched a public consultation on 1 June 2023, where it sought the public’s views on how the Australian Government could support the safe and responsible use of AI, through either or both of voluntary approaches (like tools, frameworks and principles) and enforceable regulatory approaches (like laws and mandatory standards), available at https://www.industry.gov.au/news/responsible-ai-australia-have-your-say
7. The Algorithmic Accountability Act of 2022 failed to pass before the 117th Congress adjourned. It would have been the federal equivalent of the EU AI Act. However, the sponsor of the Bill (Senator Wyden) mentioned in a 3 March 2023 speech that he and his co-sponsors are “again looking at ways to improve the legislation before reintroducing it” (available at https://www.wyden.senate.gov/news/press-releases/wyden-calls-for-accountability-transparency-for-ai-in-remarks-at-georgetown-law). The Algorithmic Accountability Act has been introduced at each Congress starting in 2019.
8. Many countries have enacted legislation for autonomous vehicles, as existing road traffic laws are premised on there being a human driver.
9. See the Algorithmic Recommendation Provisions, Deep Synthesis Provisions and Generative AI Measures mentioned in endnote 2.
10. See, for example, the Artificial Intelligence Video Interview Act (Illinois) and New York Local Law 144 of 2021 regarding automated employment decision tools.
11. Available at https://oecd.ai/en/ai-principles
12. Legislation, as opposed to guidelines, can impose consequences for non-compliance, such as fines. Hence, it has more “bite” than guidelines.
13. See Article 2(1) of the EU AI Act.
14. See Article 20 of the Generative AI Measures.
15. https://unu.edu/article/secretary-general-antonio-guterres-convenes-critical-body-debate-global-governance
16. See Annex I of the EU AI Act.
17. As described by the European Commission in section 3.5 of the Explanatory Memorandum to the EU AI Act.
18. The EU Commission defined an AI system based on the techniques and approaches involved, but the EU Parliament is moving in the direction of defining AI by its unique qualities (“autonomy”), following the OECD definition. However, it does not mention “adaptivity”, unlike the UK. The author’s view is that the UK approach is the better one, as it covers both unique characteristics of AI, distinguishing it from other technologies – there are systems that are “autonomous” but that we do not consider AI.
19. See section 5(1) of the AIDA. However, the Canadian government has set out the key factors it will examine in determining whether an AI system is a high-impact one in the companion document to the Bill, available at https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document#s6. For example, it will consider evidence of risks of harm to health and safety, based on both the intended purpose and potential unintended consequences, as well as the degree to which the risks are adequately regulated under another law.
20. See the European Parliament’s note on general-purpose artificial intelligence technologies, available at https://www.europarl.europa.eu/RegData/etudes/ATAG/2023/745708/EPRS_ATA(2023)745708_EN.pdf
21. https://news.bloomberglaw.com/daily-labor-report/new-york-citys-ai-hiring-bias-law-creates-hurdles-for-companies
22. Article 83 of the EU AI Act addresses what is to happen to AI systems already placed on the market or put into service before the commencement of the regulations. The amendments by the European Parliament expand the European Commission’s initial scope of the existing AI systems the Act will apply to.
23. https://www.ftc.gov/news-events/news/press-releases/2022/03/ftc-takes-action-against-company-formerly-known-weight-watchers-illegally-collecting-kids-sensitive
24. Other countries, like Brazil, also intend to introduce AI legislation. However, for the purposes of this article, we will only look at the four most commonly-cited jurisdictions.
25. Available at https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/resource-for-organisation/ai/sgmodelaigovframework2.pdf
26. Section 5(2) AIDA.
27. The Algorithmic Recommendation Provisions impose obligations on “algorithmic recommendation service providers”; the Deep Synthesis Provisions impose obligations on “deep synthesis service providers”; and the Generative AI Measures impose obligations on providers of generative AI services who provide the services to the public.
28. Article 6 of the Deep Synthesis Provisions.
29. Articles 14 and 15 of the Deep Synthesis Provisions.
30. Persons who develop an AI system or have an AI system developed with a view to placing it on the market or putting it into service.
31. Persons who have control over the use of the AI system, as they are using the AI system under their authority.
32. See Articles 18 and 19 – the obligation to draw up technical documentation in Article 11 falls on the provider, and the provider of the high-risk AI system must also ensure that the system undergoes the relevant conformity assessment procedure in accordance with Article 43 prior to the system being placed on the market.
33. https://www.nyc.gov/assets/dca/downloads/pdf/about/DCWP-AEDT-FAQ.pdf
34. Model Framework at 2.2.
35. Model Framework at 3.27(a) states: “Documenting how the model training and selection processes are conducted, the reasons for which decisions are made, and measures taken to address identified risks will enable the organisation to provide an account of the decisions subsequently.”
36. See para 2.3 of the European Commission’s Explanatory Memorandum to the EU AI Act: “For high-risk AI systems, the requirements of high quality data, documentation and traceability, transparency, human oversight, accuracy and robustness, are strictly necessary to mitigate the risks to fundamental rights and safety posed by AI and that are not covered by other existing legal frameworks.”
37. Model Framework at 3.41 and 3.43.
38. Model Framework at 3.36.
39. For example: Canada – section 10 AIDA; USA – section 3(b)(1) of the Algorithmic Accountability Act of 2022; EU – Article 11(3) of the AI Act provides for the specifics of documentation to be amended by delegated acts, and Article 40 provides that high-risk AI systems that are in conformity with harmonised standards published in the Official Journal of the European Union will be presumed to be in conformity with the requirements in Chapter 2, Title III of the EU AI Act (including the requirements for documentation), to the extent those standards cover the requirements.
40. Some of the matters that must be documented include: (a) the methods and steps performed for the development of the AI system; (b) the design specifications of the system, namely the general logic of the AI system and of the algorithms; (c) the training data sets used, including information about the provenance of those data sets; (d) the validation and testing procedures used; and (e) a detailed description of the risk management system in accordance with Article 9.
41. A “conformity assessment” is the process of verifying whether the requirements in Title III, Chapter 2 of the AI Act relating to a high-risk AI system have been fulfilled.
42. An “impact assessment” is the ongoing study and evaluation of the system and its impact on consumers.
43. Section 3(b)(1)(B) of the Algorithmic Accountability Act of 2022.
44. Some of the matters that must be covered in an impact assessment are: documenting consultations with relevant stakeholders; performing ongoing testing and evaluation of the privacy risks of the AI system in accordance with any NIST or Federal Government best practices and standards; performing ongoing testing and evaluation of the current and historical performance of the AI system, including documenting the methods used to assess performance and evaluating any differential performance associated with consumers’ race, color, sex, gender, age, disability, religion, family status, socioeconomic status, or veteran status, and any other characteristics the Federal Trade Commission deems appropriate; maintaining and keeping updated documentation of any data or other input information used to develop, test, maintain or update the AI system; and documenting steps taken to eliminate or reasonably mitigate any likely material negative impact of the AI system on consumers, and, if the impact cannot be mitigated, the rationale for it and why the compelling interest which leads to such an impact cannot be satisfied by any other means.
45. See e.g. Article 15 of the Deep Synthesis Provisions, which states “Deep synthesis service providers and technical supporters shall strengthen technical management, periodically reviewing, assessing and verifying algorithmic mechanisms that produce synthesis”, available at https://www.chinalawtranslate.com/en/deep-synthesis/. See also Article 19 of the Generative AI Measures.
46. Article 7 of the Generative AI Measures.
47. Section 10 AIDA, on “Keeping general records”.
48. Section 10 AIDA, on “Keeping general records”.
49. Sections 13 and 14 AIDA.
50. See, for example, the UK Information Commissioner’s Office’s guide on what must be documented when developing and using an AI system: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/explaining-decisions-made-with-artificial-intelligence/part-3-what-explaining-ai-means-for-your-organisation/documentation/
51. Model Framework at 3.36.
52. The Model Framework at 3.13 encourages organisations to document the process of developing the AI system (including the level of human involvement in AI-augmented decision-making) through a periodically reviewed risk impact assessment. It adds that the risk impact assessment will “also help organisations respond to potential challenges from individuals, other organisations or businesses, and regulators.”
53. The Model Framework at 3.23 encourages organisations to keep a data provenance record to ascertain the quality of the data used to train their AI models based on its origin and subsequent transformation, trace potential sources of errors, update the data and attribute data to their sources. For example, it would be helpful to know if datasets obtained from a trusted third party have been commingled with data from multiple sources, so that the organisation can assess the risk of using such data and manage the risk accordingly.
54. Available at https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/SGIsago.pdf
55. Model Framework at 3.39. See also Florian Königstorfer & Stefan Thalmann, “AI Documentation: A path to accountability”, Journal of Responsible Technology, Vol. 11, October 2022, available at https://www.sciencedirect.com/science/article/pii/S2666659622000208, quoting an interviewee that “In principle, AI applications should be documented to the extent that their results are reproducible.”
56. The conformity assessment procedure is set out in Annex VI (if carried out by the provider itself) and Annex VII (if carried out by a third-party “notified body”) of the AI Act.
57. The information that the technical documentation must contain is set out in Annex IV of the EU AI Act. Article 11(1) of the AI Act provides that the technical documentation shall be drawn up in such a way as to demonstrate that the high-risk AI system complies with the requirements set out in Chapter 2, and to provide national competent authorities and notified bodies with all the necessary information to assess the compliance of the AI system with those requirements. See also Article 64 concerning the authorities’ access to data and documentation.
58. Article 60.
59. Article 62.
60. Section 3(b)(1)(D) of the Algorithmic Accountability Act of 2022. See section 5 for what the summary reports to the FTC must contain.
61. Section 6(a) and (b), Algorithmic Accountability Act of 2022.
62. Automated employment decision tools must be subject to a bias audit by an independent auditor: https://rules.cityofnewyork.us/wp-content/uploads/2023/04/DCWP-NOA-for-Use-of-Automated-Employment-Decisionmaking-Tools-2.pdf. Illinois has a related practice, where records of decisions made by an AI system are sent to a government body so they can be evaluated. Under the Artificial Intelligence Video Interview Act (available at https://www.ilga.gov/legislation/ilcs/ilcs3.asp?ActID=4015&ChapterID=68), reports of demographic data (race and ethnicity of applicants subject to AI video interviews) are to be submitted to a government body so that they can be analysed to see if the data discloses a racial bias in the use of AI.
63. Article 24 of the Algorithmic Recommendation Provisions – providers of algorithmic recommendation services with public opinion properties or social mobilization capabilities must file information about the algorithms, and a limited version of the filing is then made public. See also Article 19 of the Deep Synthesis Provisions and Article 17 of the Generative AI Measures.
64. https://carnegieendowment.org/2022/12/09/what-china-s-algorithm-registry-reveals-about-ai-governance-pub-88606
65. Per Article 27 of the Algorithmic Recommendation Provisions, Articles 15 and 20 of the Deep Synthesis Provisions, and Article 17 of the Generative AI Measures. A security assessment requires the service provider to assess the extent to which it has in place measures for verifying users’ real identities, logging information concerning users’ accounts/usage times, preserving records where harmful information is published or disseminated (to facilitate investigations), and having technical measures to protect personal data, etc. – see the Provisions on the Security Assessment of Internet Information Services that have Public Opinion Properties or the Capacity for Social Mobilization, available at https://www.chinalawtranslate.com/en/provisions-on-the-security-assessment-of-internet-information-services-that-have-public-opinion-properties-or-the-capacity-for-social-mobilization/
66. Article 16 of the Algorithmic Recommendation Provisions.
67. Section 11 AIDA – it imposes separate (but similar) obligations on a person who makes available for use a high-impact system and a person who manages the operation of a high-impact system.
68. Section 12 AIDA.
69. Model Framework at 3.46.
70. https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/
71. https://www.cnbc.com/2023/06/19/singapore-is-not-looking-to-regulate-ai-just-yet-says-the-city-state.html
72. The Bioethics Advisory Committee held a public consultation from 2 May to 14 July 2023, available at https://www.reach.gov.sg/Participate/Public-Consultation/Ministry-of-Health/Bioethics-Advisory-Committee/public-consultation-on-big-data-and-artificial-intelligence-in-human-biomedical-research
73. The Personal Data Protection Commission held a public consultation from 18 July to 31 August 2023 on the proposed advisory guidelines on the use of personal data in AI recommendation and decision systems, available at https://www.pdpc.gov.sg/news-and-events/announcements/2023/07/public-consultation-for-the-proposed-advisory-guidelines-on-use-of-personal-data-in-ai-recommendation-and-decision-systems
74. See a comparison of both versions of Article 7(4) at https://www.chinalawtranslate.com/en/comparison-chart-of-current-vs-draft-rules-for-generative-ai/