Artificial Intelligence and the Myth of Objectivity: Need for Regulation of Artificial Intelligence in Healthcare

Artificial intelligence (AI) is being rapidly integrated into healthcare with a naïve belief in the objectivity of AI and a complacent trust in the omniscience of computational knowledge. While AI has the potential to transform healthcare, there are significant ethical and safety concerns. The pace of AI development and the race for AI supremacy is leading to a rapid, and largely unregulated, proliferation of AI applications. It is important to understand that AI technologies bring new and accelerated risks and need meaningful human control and oversight. However, standards and regulation in the field are at a very nascent stage and need urgent attention. This paper explores the issues related to reliability, transparency, bias, and ethics to illustrate the ground realities and makes a case for developing standards and regulatory frameworks for the safe, effective, and ethical use of AI in healthcare.


INTRODUCTION
"By far, the greatest danger of artificial intelligence is that people conclude too early that they understand it."-EliezerYudkowsky 1 Artificial Intelligence (AI) is changing the way we work and live with an exponential increase in the integration of AI components into products and processes around us.It is rapidly making its way in every possible sector, affecting us directly or indirectly.In fact, a recent Whitepaper by the UK government noted that AI could go on to have as much impact on human life as electricity or the internet (Gov.UK, 2023).
AI technologies have the potential to transform healthcare and, in recent years, the abilities of AI to augment those of clinicians have been repeatedly and emphatically demonstrated in many medical domains. However, the current state and nature of the technology has raised several concerns about the safety and ethics of AI. The exponential growth of AI-powered tools has even prompted the World Health Organization (WHO) to warn that evidence-based benefits must be demonstrated before services are offered to patients and consumers, as they may come with long-term risks (WHO, 2023). Since AI technology involves some level of autonomy in the software, there is apprehension that unless it is aligned with fundamental human values, it can have unintended and grave consequences. Oversight, domain-specific standards, and active regulation are critical for realizing the full potential of AI in healthcare. However, existing policies for the regulation of AI are general in nature; they do not consider the sensitivities, specificities, and risks of AI in the healthcare domain.
Technological progress always comes with new and significant challenges. Some of these challenges are tied to the technical properties of AI; others relate to legal, medical, and social perspectives, making it necessary to adopt an integrative approach.
This paper, using a multi-disciplinary perspective, explores the limitations of AI and makes a case for standards and regulation for its safe, effective, and ethical use in healthcare. The first section introduces AI and its applications in healthcare. The following section, with some illustrations, reflects upon the various issues emerging from the use of AI in healthcare. The need for standards and the challenges related to regulation of AI are discussed in the final section.

SECTION ONE
The adoption of artificial intelligence (AI) is rapidly taking hold across global business, according to a Global Survey (McKinsey & Co, 2018). This is thanks, in large part, to the availability of data on almost every aspect of life and an exponential increase in computational power, as well as advances in AI technology (Figure 1), which have made it possible to process large amounts of data from which actionable information can be produced (Turner Lee, 2019). The global AI market size was valued at USD 136.55 billion in 2022 and is projected to expand at a compound annual growth rate (CAGR) of 37.3% from 2023 to 2030. AI in healthcare is projected to grow from USD 14.6 billion (2023) to USD 102.7 billion by 2028, at a CAGR of 47.6%. The availability of big data and the demand to reduce healthcare costs are expected to be the major drivers of this growth (Markets and Markets, 2023).
What is Artificial Intelligence?

AI, in simple words, is the simulation of human intelligence by computers (Burns, 2023). The basis of AI is algorithms 2, which are translated into computer code that carries instructions for the rapid analysis and transformation of data into conclusions, information, or other outputs. Initially conceived as a technology that could mimic human intelligence, AI has evolved in ways that far exceed its original conception.
Early AI systems used algorithms with a database of deductive rules which, given some inputs, could infer certain outputs. In contrast, advanced approaches like machine learning (ML) aim at extracting patterns from data. These systems work by analyzing large amounts of data for correlations and patterns and using this information to make inferences about future states. Deep learning (DL), a subset of machine learning, is based on Artificial Neural Networks (ANN) to progressively extract features from unstructured data such as documents, images, and text (ITU, 2022).
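The contrast between deductive rules and pattern extraction can be sketched in a few lines of Python. All data, thresholds, and function names below are illustrative assumptions, not drawn from any real clinical system:

```python
# Early AI: hand-written deductive rules map inputs to outputs.
def rule_based_triage(temp_c: float, heart_rate: int) -> str:
    if temp_c >= 38.0 and heart_rate > 100:
        return "urgent"
    return "routine"

# ML sketch: extract a decision boundary from labeled examples instead of
# encoding it by hand - here, the midpoint between the highest "routine"
# temperature and the lowest "urgent" one.
def learn_threshold(samples: list) -> float:
    routine = max(t for t, label in samples if label == "routine")
    urgent = min(t for t, label in samples if label == "urgent")
    return (routine + urgent) / 2

training_data = [(36.5, "routine"), (37.0, "routine"),
                 (38.5, "urgent"), (39.2, "urgent")]
threshold = learn_threshold(training_data)  # 37.75, derived from the data
```

The rule-based system behaves exactly as its author intended; the learned threshold depends entirely on which examples happened to be in the training data, which is precisely what makes data quality so consequential for ML.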
Advanced AI tools are sometimes called 'black box' models since they work semi-autonomously or autonomously and the user cannot interpret how the model arrived at its final decision (Linardatos et al., 2020). In this paper, the term artificial intelligence (AI) is used as an overarching term that includes its more advanced techniques, including machine learning (ML), deep learning (DL), and Artificial Neural Networks (ANN).
Since its inception, scholars have debated the potential shortcomings, pitfalls, threats, and negative impacts of AI systems. With the exponential development and implementation of AI tools, awareness of their limitations and potential harms has increased. In recent years, concerns about the disruptive potential of AI tools have drawn the attention of policymakers worldwide, who have stepped up regulatory scrutiny of these tools. Leading technology experts, governments, and organizations have recommended caution. Most recently, the G7, during a meeting in Japan, committed to 'risk-based' regulation of AI technologies, and the EU has also called for urgent regulation (K Kantaro, 2023). The intensity of the debate, citing ethical, legal, and even existential issues, has reached levels comparable to only a few other technological innovations, such as genetic engineering or nuclear power.

AI in Healthcare
The integration of AI in healthcare holds promise as a solution to address significant challenges in various healthcare domains, including diagnosis, therapeutics, preventive treatments, clinical decision making, public health surveillance, patient engagement, and administrative activities. Some examples of the use of AI in healthcare are listed in Table 1.
It can enhance patient safety by improving error detection and the management of medications. Furthermore, the predictive abilities of AI algorithms have the potential to catalyze the shift in healthcare strategies from the treatment of diseases to their prevention through early interventions (Grote & Berens, 2020).
Table 1 (excerpt): Examples of AI applications in healthcare
Patient engagement and education: provide patients with information, reminders, and support for managing chronic conditions and promoting a healthier lifestyle.
Appointment scheduling: virtual health assistants provide patients with instant medical advice, appointment scheduling, and basic healthcare information.

Recent advances in AI have made substantial strides in "perception" (the interpretation of sensory information) and natural language processing that, until a few years ago, could be achieved only by humans (Hosny et al., 2018). Spectacular improvements in "machine vision" (used in the analysis of images in radiology, pathology, dermatology, retinopathy, etc.) are a testament to advances in these capabilities. AI is not only pushing the boundaries of human performance, it is also getting better at mimicking human behavior. In fact, AI has had a significant impact even in areas typically considered "non-machine" domains like emotional intelligence (Green, 2023) and empathy. A recent study published in the Journal of the American Medical Association (JAMA) demonstrated that responses by an AI chatbot to patient questions were rated nearly ten times more empathetic than those by physicians (Ayers et al., 2023).
Studies have found that it can even positively contribute to the well-being of physicians by reducing burnout and improving their work-life balance (Awan, 2023). Thus, the convergence of human and artificial intelligence has immense potential in healthcare, both at the individual and at the institutional level (Topol, 2019a).
The list of AI applications in healthcare is likely to grow continuously in the future, and many applications not yet envisaged will emerge. However, for their widespread adoption in the medical field, regulators need to put in place safeguards to ensure that AI applications are safe, reliable, and trustworthy.

SECTION TWO
"Getting diversity in the training of these algorithms is going to be incredibly important, otherwise we will be, in some sense, pouring concrete over whatever current distortions exist."-Isaac Kohane 3

Reliability of Medical AI: Potential and Reality
A study using EHR data from nearly 30,000 patients at the University of Michigan found that the Epic Sepsis Model, a proprietary sepsis prediction model implemented at hundreds of US hospitals, missed most instances (67%) of sepsis and produced multiple false alarms (Wong et al., 2021). Considering that sepsis is a life-threatening emergency and a leading cause of hospital death, these findings raise serious concerns about the reliability of such systems.
Unreliable predictions can cause more harm than benefit in guiding clinical decisions. During the COVID-19 pandemic, hundreds of clinical prediction models were developed and deployed. However, the Turing Institute found that none of the models had any valuable impact (The Alan Turing Institute, 2020). Similarly, a systematic review of AI-based skin-checking apps available in the public domain found that they were frequently inaccurate (Freeman et al., 2021).
ChatGPT, a large language model (LLM) which has rapidly gained public acclaim, has found a place in many healthcare applications too. However, LLMs are known to "hallucinate", or make up false information (Eysenbach, 2023). The problem is that the output of these tools is so coherent that, without a high degree of diligence, it is very difficult to separate fact from fabrication.
Data scientists also warn of "Clever Hans 4 predictors" found in ML models. In the previous section we saw that ML models work by analyzing large amounts of data for correlations and patterns, which are then used for making their predictions and recommendations. However, sometimes they base their inferences on spurious or irrelevant correlations learnt during their training phase. A famous non-medical example is that of an AI model trained to distinguish between photographs of huskies and wolves. The AI made the right judgment most of the time, but when the data scientists looked more deeply, they found that it wasn't analyzing the physical attributes of each animal; instead, its decisions were based on whether there was snow in the background (Lapuschkin et al., 2019).
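The husky-versus-wolf failure mode can be reproduced with a toy sketch: a "model" that has latched onto a background feature looks perfect on data where the spurious correlation holds, and collapses the moment it breaks. The data and feature names here are synthetic, purely for illustration:

```python
# A Clever Hans "classifier": it keys only on a spurious feature (snow in
# the background) that happens to correlate with the label in training data.
def predict(image: dict) -> str:
    return "wolf" if image["snow"] else "husky"  # no animal features used

# Training-like data: wolves photographed in snow, huskies on grass.
train = [{"animal": "wolf", "snow": True},
         {"animal": "wolf", "snow": True},
         {"animal": "husky", "snow": False},
         {"animal": "husky", "snow": False}]

# Deployment data: the snow/label correlation no longer holds.
deploy = [{"animal": "husky", "snow": True},
          {"animal": "wolf", "snow": False}]

def accuracy(data: list) -> float:
    return sum(predict(x) == x["animal"] for x in data) / len(data)
```

On the training distribution the accuracy is a perfect 1.0; on the deployment data it drops to 0.0. Aggregate test-set metrics alone cannot surface this kind of shortcut, which is why explainability tooling and external validation matter.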
Similar spurious correlations have been found in medical AI models too. An AI tool developed at Mount Sinai Hospital performed very well in identifying high-risk patients based on chest x-rays. However, when it was applied at other hospitals, there was a severe decline in its performance. It turned out that the predictions were not based on the x-ray features, but only on hardware-related information identifying the specific x-ray machine that was used in the ICU at Mount Sinai (Zech et al., 2018).
Wallis et al. demonstrated that a model trained on a commonly used brain MRI dataset achieved high tumor classification accuracy even when no information regarding the tumor itself was available. Here, the ML model used the position and orientation of the 2D MRI slices as a feature to classify tumors (Wallis & Buvat, 2022).
While these examples are not exhaustive, they indicate that such problems are serious practical realities. Building trust in AI solutions is extremely important when dealing with highly critical applications, such as healthcare.

Explainability: The Need for Transparency
Machine learning technologies, and most AI in healthcare is in this domain, are often called 'black box' models. This is because it is difficult, if not impossible, to understand how they arrived at a particular conclusion (Beres, 2017). Many AI algorithms in the medical domain, particularly those used for image analysis, are virtually impossible to interpret or explain. Even though the capabilities of AI tools have been demonstrated in certain tasks, when human lives are at stake, it may not be prudent to blindly trust the decision of an AI tool if one is to be held accountable for it.
For AI models to be effectively utilized in healthcare, they must be able to provide a clear rationale for their decisions. This not only develops trust among medical professionals but also ensures that patients receive the highest standard of care. It can also aid system developers in fact-checking their models and preventing 'Clever Hans' phenomena (Eversberg, 2023). Notably, the EU's General Data Protection Regulation (GDPR) has also envisioned such a requirement and provides a 'right to explanation' to the users of AI (Casey et al., 2019).

Bias: Perpetuating Prejudices
There are numerous examples of bias from the use of AI tools in education, hiring, finance, and even the justice system (Baker & Hawn, 2022; Lagioia et al., 2022). Applications of AI in healthcare are similarly susceptible to bias.
Biases come in the way of the equitable provision of, and access to, healthcare services. A seminal study by Obermeyer et al. found that an algorithm used widely in US hospitals to predict which patients would require additional medical care favored white patients over black patients by a considerable margin. The algorithm used to identify patients in need of "high-risk care management" was far less likely to nominate black patients, as they received lower risk scores even when their needs were greater (Obermeyer et al., 2019).
Studies have shown that AI can reflect the same prejudices that we are striving to overcome in society. And, as AI systems lack the contextual understanding of humans, they can neither discriminate nor mitigate biases effectively. In addition, being based on algorithms and feedback loops, these embedded biases can get further amplified and perpetuate themselves (A. Cohen, 2023).
AI is not inherently biased. However, biases arise in AI models because they are trained on data collected from healthcare organizations, and this data reflects disparities in access to healthcare, which can lead to imbalances in the representation of different populations within medical datasets. Societal inequities related to factors like race, gender, and socioeconomic status are ingrained in social structures, reflecting long-standing inequalities. As AI relies only on the available data, excluding realities outside the database, it inadvertently embeds the inequities present in the skewed data into its models.
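How a skewed dataset bakes inequity into a model can be shown with a deliberately crude sketch: a trivial "best overall accuracy" rule trained on imbalanced data learns the majority group's pattern and fails the underrepresented group entirely. The groups, labels, and counts are synthetic assumptions for illustration only:

```python
from collections import Counter

# 90 records from group A (condition present), only 10 from group B (absent):
# the dataset over-represents one population, as skewed medical data often does.
records = [("A", "positive")] * 90 + [("B", "negative")] * 10

# "Training": pick the single most common label overall - the choice that
# maximizes aggregate accuracy on this data.
majority_label = Counter(label for _, label in records).most_common(1)[0][0]

def predict(group: str) -> str:
    return majority_label  # the group is ignored entirely

def group_accuracy(group: str) -> float:
    subset = [(g, l) for g, l in records if g == group]
    return sum(predict(g) == l for g, l in subset) / len(subset)

overall = sum(predict(g) == l for g, l in records) / len(records)  # 0.9
```

The model reports a respectable 90% overall accuracy while being right 100% of the time for group A and 0% of the time for group B. Aggregate metrics hide the harm, which is why per-group evaluation is essential before deployment.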
Ethnicity data is often not recorded in patient records, and this is with the best of intentions. A review of publicly available dermatological datasets found that subject ethnicity data were available for only 1.3% of the images (Wen et al., 2022). This makes it very difficult to address the bias through any statistical means. On the other hand, a study published in the New England Journal of Medicine (NEJM) found that race corrections in algorithms could also further race-based inequities, causing delay or even denial of healthcare to traditionally marginalized communities. The study was conducted on various tools in use at US hospitals across multiple specialties (Vyas et al., 2020).
Some biases can creep in inadvertently. Images taken by sophisticated CT scan machines, being of better quality, may be preferentially used for training AI models. Although this would be a reasonable thing to do, it would end up ignoring a large segment of representative images from regions that cannot afford to install high-end CT machines (Monga, 2022).
Bias is neither new nor unique to AI, and the National Institute of Standards and Technology (NIST) has noted that it is impossible to achieve zero bias (NIST, 2022). Yet there are factors specific to AI that require new perspectives. Harms stemming from AI bias can affect society at large, undetected, and at unprecedented scale and speed. As AI tools become more complex, biases will become even more difficult to identify, let alone control (Racine et al., 2019; Stahl et al., 2021).
Besides the social context, biases can also be clinically dangerous. Optimal recognition and timely management of myocardial infarction (MI), especially reducing patient delay in seeking acute medical care, is critical. However, as cited in a study in the BMJ, a self-administered symptom-checker app was found to infer that a 60-year-old man experiencing chest pain was likely having a heart attack, whereas a 60-year-old woman with similar symptoms was likely experiencing a panic attack (Salisbury & Oxford, 2020).
It is critical to appreciate that AI systems are not objective; they reflect existing inequalities and can also augment them through data cascades which, in turn, cause negative downstream effects (London, 2022). While these aberrations may enter the models without any malicious intent by the developers, their unintended consequences would certainly challenge public trust in AI (Turner Lee, 2018).

Ethical Aspects of AI in Healthcare
Medical ethics describes the moral principles by which a doctor must conduct themselves. The four principles of biomedical ethics described by Beauchamp and Childress - autonomy, non-maleficence, beneficence, and justice - have been highly influential in the field of medical ethics. Confidentiality is sometimes considered the fifth pillar of medical ethics. However, with the increasing penetration of AI systems in healthcare, we need to go beyond conventional ethical frameworks to address the novel and complex challenges they pose (Hagendorff, 2020).
Beneficence and non-maleficence can be traced back to the times of Hippocrates and are fundamental to defining the responsibility of a practicing physician towards a patient. While beneficence indicates that medical actions will have a reasonable expectation of benefit to the person, non-maleficence implies that they will not harm the patient. Justice implies having systems to assure the fair distribution of benefits and risks across all impacted populations. Autonomy means that each person has the right to make decisions related to their own body and their own personal information.
Earlier in this section we explored concerns relating to reliability, bias, and lack of transparency in AI, and how these can prejudice the safe and equitable delivery of healthcare, contravening the basic ethical tenets. AI is supposed to be value-neutral, but it is not. AI-based decisions are susceptible to inaccuracies and bias.
Informed consent, which derives from autonomy, plays a key role in medical practice. Informing the patient about the role of AI in clinical decision-making will be a very important ethical and legal consideration for informed consent. This is, however, quite challenging for physicians, as it requires them to have full knowledge of the decision-making process of the AI system, allowing them to reflect on, and even query, the system's output.

SECTION THREE
"AI is only as good as the humans programming it and the system in which it operates.If we are not careful, AI could unintentionally exacerbate many of the worst aspects of our current healthcare system."-Bob Kocher 5   The relationship between humans and technology is not as straightforward as it might seem.The very meaning of technology itself changes through the way it is used.Sometimes the way new technology functions in the real world goes beyond all its intended purposes and the way we make sense of it.Social media is an excellent example of this.
AI is mistakenly considered to be more objective than our own cognitive abilities. This is due to 'cognitive complacency', whereby users tend to put more trust in the computational power of AI systems without mindfully analyzing the outputs (Jarrahi et al., 2023). AI outputs can be very persuasive, and misplaced confidence in them increases the uptake of incorrect results.
Developing a clinical prediction model is both a science and an art. It is important to find a balance between statistical performance and clinical application, because unreliable predictions can cause more harm than benefit. Despite all efforts, it is practically impossible to develop a model without any errors. Because models are built on assumptions, probabilities, and statistics, one hundred percent accuracy is not a realistic goal. It is therefore essential that healthcare institutions, as well as regulatory bodies, establish structures to monitor key issues and limit negative effects.
AI tools, being adaptive, change through interactions with their environment and are also liable to decay in performance over time. Worryingly, these changes may be difficult to detect due to the intrinsic opacity of the tools. Consequently, from a medical point of view, not only regulatory oversight but also constant human oversight is essential. It is important to ensure that clinicians have control over the decision-making process and that mechanisms are in place to verify the accuracy of AI decisions.

The adoption of AI tools in healthcare has lagged mainly because of the ethical and safety considerations discussed in the previous section. It would be natural for clinicians to be wary of AI systems, because it is difficult to detect or fix aberrant behavior, or even to trust them when it comes to safety and ethics. The absence of standard guidelines for the use of AI in healthcare has only served to worsen the situation. Robust clinical evaluation, using metrics that are intuitive to clinicians, and appropriately regulated systems are needed to build trust in the technology. However, scientific, peer-reviewed evidence of efficacy is lacking for most commercially available AI products (Daneshjou et al., 2021; Voter et al., 2021).
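At its simplest, the kind of ongoing oversight of model decay described above could take the form of a rolling comparison between a model's live accuracy and its validated baseline. The following sketch is a hypothetical illustration of the idea; the class name, thresholds, and data are assumptions, not any regulatory standard:

```python
from collections import deque

class DriftMonitor:
    """Track a model's rolling accuracy against its validated baseline
    and flag when performance has degraded beyond a tolerance."""

    def __init__(self, baseline_accuracy: float, window: int = 100,
                 tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = wrong

    def record(self, prediction, ground_truth) -> None:
        self.outcomes.append(1 if prediction == ground_truth else 0)

    def current_accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes)

    def degraded(self) -> bool:
        # Flag when rolling accuracy falls below baseline minus tolerance.
        return self.current_accuracy() < self.baseline - self.tolerance

# Illustrative run: a model validated at 90% accuracy slips to 70%.
monitor = DriftMonitor(baseline_accuracy=0.90, window=10)
for pred, truth in [("sepsis", "sepsis")] * 7 + [("sepsis", "no sepsis")] * 3:
    monitor.record(pred, truth)
```

In this run the rolling accuracy is 0.7, below the 0.85 alert threshold, so `degraded()` returns True. Real oversight would of course also track calibration, subgroup performance, and input distribution shift, but even a minimal accuracy tripwire makes silent decay visible.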

Pace and Race
Globally, there is an ongoing race for AI dominance between many countries. When such race dynamics take over, there is a real risk that ethics will be sidelined (I.G. Cohen, 2023). Furthermore, mutual mistrust makes any open, mature discussion on regulation of the technology very difficult.
A separate race is playing out in parallel in the industry. In the last decade there has been an exponential increase in annual global patent filings for AI technologies (Figure 1). The pace of AI growth has made the race to market one of survival, as innovators need to fully exploit their technological head start. As they sprint from the lab to the market, companies often cut corners, compromising testing, validation, and safety (Skelton, 2023).
Transformative technologies like AI require comprehensive adoption strategies, including national-level strategies to contribute to the larger public good (NITI, 2018). The prospects of economic progress, and the quest for AI dominance, will thus influence strategies for regulation too.

Building an Ecosystem of Trust
A 2021 study by the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems found that only 26% of the general public trust AI, while 73% of respondents believe that AI should be regulated. Two years later, a 2023 survey by KPMG of 17,000 people in 17 countries showed similar findings (KPMG Australia, 2023). Evidently, despite the widespread availability and awareness of AI tools, trust in the technology has not improved.
Such ethical dilemmas, and the implications on accountability and liability, unless resolved through appropriate regulations, could hinder the larger adoption of AI in medical practice.
It is difficult to regulate AI due to its complexity and pace of development. Robust safeguards in terms of validation, effective governance, and regulatory and policy frameworks are critical for the safe and ethical adoption of AI. Managing these concerns without stifling innovation, and the potential of AI, is, however, a challenge for regulators. While most countries have issued guidelines, the US and UK have both adopted a 'pro-innovation approach' by enabling current regulators to determine how best to adapt existing policies to the field (Skelton, 2023).
Whichever approach is adopted, a new regulatory mindset will be required to keep up with the pace of change.

Need for Standards and Regulation
AI poses a significant challenge to the approaches traditionally used to regulate software and other technologies. Although AI tools consist of software, existing standards for software are not compatible with typical AI methods because of their autonomous and complex nature and lack of transparency (Zielke, 2020).
A basic approach to standardization would involve the definition of common terms. When looking at terms relating to AI, the term artificial intelligence itself is a subject of discussion. The European Commission's Rolling Plan for ICT Standardisation notes that there is no generally accepted definition of artificial intelligence (EU, 2023).
It is important to appreciate that AI is not just one technology but a group of capabilities applied to diverse domains. For instance, early AI was knowledge-based, while today's AI is data-driven and uses a multitude of methods and models. Further, in most cases AI tools are not standalone devices; they involve the integration of AI components into other products and processes to augment their capabilities.
Standardization brings consensus, which can support scientific and commercial exchange. It can also establish confidence amongst users and facilitate regulation.
Most guidelines for the regulation of AI, which are also applied to AI in healthcare, are general in nature. They cannot address the sensitivities, specificities, and risks of AI in the healthcare domain. The US FDA considers AI applications in healthcare under the category of "Software as a Medical Device" (SaMD) (US FDA, 2019). However, regulations for medical devices, being based on legacy models and traditional approaches, lack mechanisms for addressing the specific issues related to AI technologies.
AI technologies bring new and accelerated risks. They are evolving at a pace that regulatory agencies cannot keep up with, because the existing models of healthcare regulation are designed for 'locked' solutions, whereas AI is 'adaptive', i.e. it is flexible and evolves and changes over time (Dettling et al., 2021). Accordingly, it is essential for the regulatory environment to adapt to this fast-evolving field in order to anticipate and prevent potential risks and unintended harm.

CONCLUSION

"In healthcare, artificial intelligence needs to pass the implementation game, not the imitation game." - John Powell 6
Alan Turing, widely considered the father of computer science and artificial intelligence, developed a test in 1950 called the "Imitation Game" to assess a machine's ability to exhibit intelligent behavior. The test requires that a human examiner be unable to distinguish the machine from another human being from replies to questions in natural language. We have come a long way from the times when the test was very challenging, and many AI applications available now can successfully pass it. The implementation of AI in the absence of clear standards and robust regulation is, however, a challenge.
As discussed in previous sections, AI technology holds immense potential, much of which is yet to be realized. Individual studies showing fantastic results sometimes fail when subjected to rigorous scientific scrutiny and meta-analysis, underlining the need for robust testing, validation, and regulation. AI works in the digital world, but it impacts us in the real world. So, in the glare of possibilities, we should not lose sight of its limitations. Perhaps the greatest source of risk is the myth of the objectivity of AI, the illusion of the neutrality of algorithms, and complacent trust in the omniscience of computational knowledge. AI, in its present state, is not worthy of any of these epithets.
Commercial interests and the race for AI supremacy can, and will, affect the design, use, and long-term impact of the technology which, unless regulated through appropriate oversight mechanisms, can have serious consequences. The ethical and legal implications of AI in healthcare are not only significant but also challenging. Managing these concerns, and building regulatory safeguards, without stifling the potential of AI is a key challenge for regulators.
AI is not infallible. Over time, marked improvements in accuracy, productivity, and workflow will likely be realized, and it will ultimately become a mature and effective tool for the healthcare sector (Topol, 2019b). For now, AI technologies need meaningful human control and regulation.