A Framework for Risk Management in AI System Development Projects

Kitti Photikitti, Kitikorn Dowpiset, Jirapun Daengdej
Copyright © 2019 | Pages: 20
DOI: 10.4018/978-1-5225-7903-8.ch001


It is well known that the chance of delivering a software project within its allocated time and budget is very low. Most research in this area has concluded that users' requirements are among the most difficult risks to deal with. Interestingly, despite the amount of effort invested in this area, the probability of project failure remains very high to this day. The requirements issue is significantly amplified when developing an artificial intelligence (AI) system that is expected to behave autonomously. This is because we must deal not only with users' requirements but also with the "system's behavior," which, in many cases, does not even exist during software development. This chapter discusses preliminary work on a risk management framework for AI system development projects. The goal of this framework is to help project managers minimize the risks that cause AI software projects to fail because they cannot be finished on time and within budget.
Chapter Preview


In the past few years, no concept has captured the attention of businesses, and even national leaders around the world, like the use and advancement of Artificial Intelligence (AI). In 2016, the White House released a report on future directions and considerations for AI called "Preparing for the Future of Artificial Intelligence" (Felten & Lyons, 2016). A "National Artificial Intelligence Research and Development Strategic Plan," which lays out a strategic plan for federally funded research and development in AI, was also released accordingly. In fact, in 2016, President Obama said during his interview with MIT's Joi Ito and WIRED's Scott Dadich that:

We’ve been seeing specialized AI in every aspect of our lives, from medicine and transportation to how electricity is distributed, and it promises to create a vastly more productive and efficient economy. If properly harnessed, it can generate enormous prosperity and opportunity.

In Russia, President Putin stated that the nation that leads in AI "will be the ruler of the world" (Vincent, 2017). In China, President Xi Jinping has set the target of becoming the global leader in AI by 2030 (Chandler, 2018).

In the commercial world, meanwhile, Pichai, CEO of Google, stated at the company's annual Google I/O 2017 event that Google's vision has shifted from a "Mobile First" world to an "AI First" world (Zerega, 2017). Microsoft has been investing heavily in AI for more than 25 years. Amazon has a dedicated AI unit that spreads and supports applied AI research across the company's projects (Levy, 2018).

Regardless of AI's popularity in both the public and private sectors, a large number of AI system development projects around the world are, unfortunately, considered failures. In addition to worldwide news reports on autonomous-car failures in various incidents, in 2016 TechRepublic published its "Top 10 AI Failures of 2016," which included incidents such as:

  • AI built to predict future crime was racist

  • Non-player characters in a video game crafted weapons beyond creator's plans

  • Insurance company uses Facebook data to issue rates, shows bias

  • Robot injured a child, and

  • AI-judged beauty contest is racist

Particularly because of incidents that threaten human life, an article by MIT Technology Review is dedicated to one specific kind of AI system failure: self-driving cars (Emerging Technology, 2018). In addition, according to an article published in Harvard Business Review in April 2017, most AI projects in organizations tend to fail unless an appropriate implementation approach is adopted (Hosanagar & Saxena, 2017). This was followed by another Harvard Business Review article in which Satell suggests a number of important points that organizations should consider in order to increase their chance of success in AI system development projects (Satell, 2018).

Regardless of how AI is defined, an AI system is simply software that has to be written. Even though some AI systems can develop themselves, every system must initially be properly designed and coded (Simonite, 2017). Then, depending on the concepts applied, some systems may be able to develop themselves further when certain conditions are met. This means that regardless of how an AI system is developed, knowledge and experience in software project management still play an important role in the success of these projects.

Key Terms in this Chapter

Risk: The uncertainty that an event or activity of interest will not meet its expected outcome.

Artificial Intelligence (AI): A field of study that focuses on mimicking human thought and learning processes by using computerized systems.

Risk Management: The process of identifying and assessing risks in order to develop strategies for managing them.

Autonomous System: A system with the ability to acquire input, analyze it, reason about the result, and act automatically without human intervention.
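The risk management process defined above, identifying risks, assessing them, and developing strategies, is often operationalized in project management as a risk register that scores each risk by probability times impact. The sketch below illustrates that common convention only; the class names, scoring scale, and example risks are illustrative assumptions, not taken from the chapter's framework.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One entry in a project risk register (illustrative structure)."""
    name: str
    probability: float  # likelihood of occurrence, 0.0 to 1.0
    impact: int         # severity if it occurs, e.g. 1 (low) to 5 (high)

    @property
    def score(self) -> float:
        # Classic probability-times-impact exposure score.
        return self.probability * self.impact

def prioritize(risks):
    """Rank risks so mitigation effort goes to the highest exposure first."""
    return sorted(risks, key=lambda r: r.score, reverse=True)

# Hypothetical AI-project risks, echoing the chapter's themes.
register = [
    Risk("Unstable user requirements", 0.7, 4),
    Risk("Unanticipated system behavior after deployment", 0.4, 5),
    Risk("Training data bias", 0.5, 3),
]

for r in prioritize(register):
    print(f"{r.name}: exposure {r.score:.1f}")
```

A register like this makes the assessment step explicit: a risk such as unstable requirements, which the chapter identifies as among the hardest to handle, surfaces at the top of the list and receives mitigation attention first.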
