Regulating AI

Margaret A. Jackson
DOI: 10.4018/978-1-7998-3130-3.ch009

Abstract

Artificial intelligence (AI) is already being used in many different sectors and industries globally. These areas include government (help desks, sending demand letters), health (predictive diagnosis), law (predictive policing and sentencing), education (facial recognition), finance (for share trading), advertising (social media), retail (recommendations), transport (drones), smart services (like electricity meters), and so on. At this stage, the AI in use or being proposed is ‘narrow’ AI and not ‘general’ AI, which means that it has been designed for a specific purpose, say, to advise on sentencing levels or to select potential candidates for interview, rather than being designed to learn and do new things, like a human. The question we need to explore is not whether regulation of AI is needed but how such regulation can be achieved. This chapter examines which existing regulations can apply to AI, which will need to be amended, and which areas might need new regulation to be introduced. Both national and international regulation will be discussed; Australia is the main focus.

Introduction

Artificial Intelligence (AI) involves the creation of programs designed to perform tasks generally performed by humans. As the Office of the Victorian Information Commissioner (OVIC) explains in its 2018 issues paper Artificial Intelligence and Privacy (OVIC, 2018, p.1):

These tasks can be considered intelligent, and include visual and audio perception, learning and adapting, reasoning, pattern recognition and decision-making. ‘AI’ is often used as an umbrella term to describe a collection of related techniques and technologies including machine learning, predictive analytics, natural language processing and robotics.

Artificial Intelligence (AI) is already being used in many different sectors and industries globally. These areas include government (help desks, sending demand letters), health (predictive diagnosis), law (predictive policing and sentencing), education (facial recognition), finance (for share trading), advertising (social media), retail (recommendations), transport (drones), smart services (electricity meters) and so on. At this stage, the AI in use or being proposed is ‘narrow’ AI and not ‘general’ AI. This means that it has been designed for a specific purpose, say, to advise on sentencing levels or to select potential candidates for an interview, rather than being designed to learn and do new things, like a human. This does not mean, however, that ‘narrow’ AI, though generally non-conscious, cannot match human abilities such as recognising patterns (Harari, 2016); indeed, AI systems excel at identifying patterns in large amounts of data.

While some of the development and deployment of AI systems is happening at a state or national level, for instance, self-driving cars, there are concerns that AI development and ownership will be dominated by large global companies such as Google, Facebook, Apple, Microsoft and Amazon (Nemitz, 2018). Paul Nemitz identifies four bases of digital power to watch: large financial resources; control of the “infrastructure of public discourse”; the collection of personal data and profiling; and algorithms kept in a “black box; not open to public scrutiny” (Nemitz, 2018, pp. 3-4). All of these bases of power are possessed by the global companies, which are investing considerably in AI development. This means that, unless the international community is proactive in working together to create an acceptable and consistent framework of AI regulation which can be adapted by individual nations, there is a risk that commercial interests will set the AI agenda, and regulatory responses, at both an international and national level, will be largely reactive.

This chapter examines what an appropriate regulatory framework for dealing with AI would look like, one able to handle future developments in AI technology. Ethical principles, guidelines, standards and legislation all form part of a regulatory framework. The focus of the chapter is on Australian regulation. It discusses the role of ethical codes and standards in handling AI challenges. It then explores existing regulation to determine whether it will apply to AI, whether it will need to be amended, and whether there are areas which will require new regulation to be introduced.

Key Terms in this Chapter

Artificial Intelligence (AI): The creation of programs designed to perform tasks generally performed by humans. AI can be ‘narrow’ AI, which has been designed for a specific purpose, or ‘general’ AI, which is designed to replicate human consciousness.

Regulation: Mechanisms of social control, usually based in law, and resulting in the establishment of frameworks, policies, standards and laws.

Standards: Usually technical specifications which set out levels of quality, performance, safety or dimensions relating to a product or process.
