Challenges of Developing AI Applications in the Evolving Digital World and Recommendations to Mitigate Such Challenges: A Conceptual View

Srinivasan Vaidyanathan, Madhumitha Sivakumar, Baskaran Kaliamourthy
Copyright: © 2021 |Pages: 22
DOI: 10.4018/978-1-7998-4900-1.ch011


The intelligence in these systems is not organic but programmed. Despite being extensively used, such systems suffer from setbacks that must be addressed to expand their usage and to build a sense of trust in humans. This chapter focuses on the different hurdles faced in the course of adopting the technology, namely data privacy, data scarcity, bias, and the unexplainable black-box nature of AI. Techniques like adversarial forgetting and the federated learning approach are producing promising results against issues such as bias and data privacy, and are being researched widely to test their competency to mitigate these problems. Hardware advancements and the need to enhance skill sets in the artificial intelligence domain are also elucidated. Recommendations to resolve each major challenge are likewise addressed in this chapter, to give an idea of the areas that need improvement.
Chapter Preview


Artificial Intelligence: A Brief Introduction

In today’s technology-dominated world, Artificial Intelligence is one of the fastest-developing and most dynamically changing sectors. The term AI is increasingly used everywhere, from autopilots and self-driving cars to trivial items like toothbrushes. Recent research suggests that AI can even be used to analyse the trends of pandemics like the infamous Covid-19. This sophisticated technology comes with the ability to sweep through piles of data, analyse them effectively, and draw conclusions, thus helping us solve even previously intractable problems. The phrase “data is the new oil” well describes the growth rate of this technology. With emerging technologies like the Internet of Things, the network is ever expanding; processing and analysing the collected data becomes a humongous task for humans. These systems have the great advantage of being self-taught, always capable of adapting to new challenges thrown at them: the neural network, which forms the building block of Artificial Intelligence, trains itself on every new data set and optimizes its predictions accordingly.
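The self-optimizing behaviour described above can be sketched in miniature. The following example is purely illustrative (a single linear "neuron" with an invented learning rate and toy data, not any specific system from this chapter): each incoming data point nudges the model's weight, so its predictions improve as new data arrives.

```python
# Minimal sketch (hypothetical example): a single linear "neuron" that
# keeps updating its weight as each new data point arrives, illustrating
# how a model refines its predictions over every new data set.

def train_online(samples, lr=0.05, epochs=200):
    """Fit y = w * x by stochastic gradient descent, one sample at a time."""
    w = 0.0
    for _ in range(epochs):
        for x, y in samples:
            error = w * x - y          # prediction error on this sample
            w -= lr * error * x        # gradient step on squared error
    return w

# Toy data generated from the rule y = 2x; the weight converges near 2.
data = [(1, 2), (2, 4), (3, 6)]
w = train_online(data)
print(round(w, 2))  # → 2.0
```

The key point is that no rule "y = 2x" is ever programmed in; the weight is recovered purely from the data, which is what the self-taught character of these systems refers to.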

Artificial Intelligence is like a double-edged sword and has to be handled properly. The increasing connectivity among our technological gadgets and devices makes us more fragile and vulnerable to cyberattacks from hackers on the watch for sensitive data. As AI technology spreads its influence over a majority of domains, hackers are in parallel hunting for sophisticated methods to extract data and information from these smart devices.

Thus, the adoption of AI should not itself introduce additional risk in sensitive applications. This chapter focuses on the various stumbling blocks encountered in adopting AI for critical applications: the setbacks that may occur while adapting to AI-powered smart devices, and the vulnerabilities of the technology that must be addressed in future to create tightly secured software that cannot be meddled with by a potential hacker, and in which tracking down malicious hackers becomes effectively easier.



Artificial Intelligence is indeed one of humankind’s finest inventions, easing daily tasks and effort in every domain. Technology built to mimic the human mind is performing well across domains and is being adopted widely. Its applications range from chatbots and digital assistants to critical applications in the fields of therapeutics and banking. Domains like cyber security and forensics are also becoming quintessential in nearly all organisations, owing to the fear of cyber-attacks from hackers seeking sensitive data and information that pose a great risk to the integrity of the organisation. The growing number of cyberattacks has resulted in the launch of the Future Series: Cybercrime 2025 by the World Economic Forum and Equifax, planned for the year 2025.

There is an increasing trend of hackers adopting AI techniques to infect systems and retrieve data. According to Rajat Mohanty, CEO of Paladion, “if attackers can use AI then we defenders can also leverage AI’s power, speed and precision to effectively handle today’s evolved threat landscape.” But there are many hurdles to overcome before any organisation can adopt AI (Jawed Akhtar, 2014). Fundamentally, the perception that AI-enabled software alone can defend an organisation from attackers has to vanish: AI-enabled systems are only one part of the defence, and they can only provide support in protecting information.

AI applications rely heavily upon data, so storing and analysing huge amounts of data is very important. Sensitive data must be stored with proper care, and safeguarding techniques must be deployed to prevent any potential breach. Data management thus becomes a critical task in adopting AI. In the case of a data breach, the results tend to be devastating; as Elon Musk rightly said, “AI is a fundamental risk to the existence of human civilisation.” There are many problems to be addressed before AI could take over the world.

Key Terms in this Chapter

Data Privacy: The relationship between the collection and dissemination of data. Companies hold sensitive data relating to their customers that they must not reveal, in order to uphold their customers’ trust.

Datasets: This refers to any collection of data that holds the critical information about the current application.

Blackbox: The unexplainability and opacity of AI systems that do not reveal the rationale behind a prediction or decision taken.

Data Transparency: Here data transparency refers to the control flow of the data in the machine learning algorithm.

Data Scarcity: It is the unavailability of data that could possibly satisfy the need of the system to increase the accuracy and prediction dynamics.

Adversarial AI: A machine learning technique employed to fool models by supplying them with malicious, deliberately perturbed inputs.
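As a hypothetical sketch of this idea (the model weights, inputs, and perturbation size below are all invented for illustration, in the spirit of fast-gradient-sign attacks), a small perturbation in the direction of the loss gradient can flip a simple classifier’s decision:

```python
import math

# Minimal sketch (hypothetical example): an FGSM-style attack on a tiny
# logistic-regression model.  Weights and inputs are invented; the point
# is that a small, targeted perturbation flips the model's decision.

W = [1.0, -1.5]   # fixed, "trained" weights of the victim model
B = 0.2

def predict(x):
    """Probability of class 1 under the logistic model."""
    z = sum(w * xi for w, xi in zip(W, x)) + B
    return 1 / (1 + math.exp(-z))

def adversarial(x, true_label, eps=0.5):
    """Perturb x against the true label along the sign of the loss gradient."""
    p = predict(x)
    # d(cross-entropy)/d(x_i) = (p - y) * w_i for a linear model
    grad = [(p - true_label) * w for w in W]
    return [xi + eps * (1 if g > 0 else -1) for xi, g in zip(x, grad)]

x = [1.0, 0.2]               # originally classified as class 1 (p > 0.5)
x_adv = adversarial(x, 1)    # small nudge crafted against class 1
print(predict(x) > 0.5, predict(x_adv) > 0.5)  # → True False
```

The perturbed input differs from the original by at most 0.5 per feature, yet the predicted class flips, which is exactly the kind of malicious input the definition above describes.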

Data Breach: This refers to any intentional or unintentional leak of secure or private or confidential data to any untrusted system. This is also referred to as information disclosure or data spill.

Bias: Here bias refers to prejudice or an inclination toward one side of a decision. In AI, bias refers to predictions being skewed in favour of the data the system has already been trained on.

Adversarial Forgetting: A methodology, applied to AI systems to remove bias, that mimics selective amnesia in the human brain.

Machine Learning: The study of algorithms and methods that help machines learn implicitly from patterns in the data fed to them, without explicit instructions. It is an emerging trend in automation and an integral part of artificial intelligence.

Federated Learning: A machine learning technique that trains an algorithm across decentralised data residing on different edge devices, in contrast to the traditional method of accumulating the data in one place. Developed by Google, it is an evolving technique that enhances data security.
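The definition above can be sketched as a toy federated-averaging round. Everything here is a hypothetical illustration (two invented clients fitting an invented rule y = 3x), not a real federated system: each client trains locally and only the resulting weights, never the raw data, reach the server.

```python
# Minimal sketch (hypothetical example) of federated averaging: clients
# train copies of a model on their own local data, and the server only
# ever sees and averages the resulting weights -- never the raw data.

def local_train(w, data, lr=0.1, steps=50):
    """One client: fit y = w * x on its private data by gradient descent."""
    for _ in range(steps):
        for x, y in data:
            w -= lr * (w * x - y) * x
    return w

def federated_round(global_w, client_datasets):
    """Server: broadcast the model, collect client updates, average them."""
    updates = [local_train(global_w, data) for data in client_datasets]
    return sum(updates) / len(updates)

# Two clients whose private data both follow the rule y = 3x.
clients = [[(1, 3), (2, 6)], [(3, 9), (4, 12)]]
w = 0.0
for _ in range(5):          # a few communication rounds
    w = federated_round(w, clients)
print(round(w, 2))  # → 3.0
```

The averaged model converges to the shared rule even though neither client’s data ever leaves its device, which is the data-security property the definition refers to.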
