<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD 2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="letter">
  <front>
    <journal-meta>
      <journal-id journal-id-type="publisher-id">EXCLI J</journal-id>
      <journal-title>EXCLI Journal</journal-title>
      <issn pub-type="epub">1611-2156</issn>
      <publisher>
        <publisher-name>Leibniz Research Centre for Working Environment and Human Factors</publisher-name>
      </publisher>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="publisher-id">2025-8679</article-id>
      <article-id pub-id-type="doi">10.17179/excli2025-8679</article-id>
      <article-id pub-id-type="pii">Doc824</article-id>
      <article-categories>
        <subj-group subj-group-type="heading">
          <subject>Letter to the editor</subject>
        </subj-group>
      </article-categories>
      <title-group>
        <article-title>Artificial intelligence in hospitals: Legal uncertainties and emerging risks for patient safety</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <name>
            <surname>Gaddas</surname>
            <given-names>Meriem</given-names>
          </name>
          <xref ref-type="aff" rid="A1">1</xref>
          <xref ref-type="aff" rid="A2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <name>
            <surname>Ben Dhiab</surname>
            <given-names>Mohamed</given-names>
          </name>
          <xref ref-type="aff" rid="A3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <name>
            <surname>Ben Saida</surname>
            <given-names>Imen</given-names>
          </name>
          <xref ref-type="aff" rid="A4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <name>
            <surname>Ben Saad</surname>
            <given-names>Helmi</given-names>
          </name>
          <xref ref-type="corresp" rid="COR1">&#x0002a;</xref>
          <xref ref-type="aff" rid="A1">1</xref>
          <xref ref-type="aff" rid="A2">2</xref>
        </contrib>
      </contrib-group>
      <aff id="A1">
        <label>1</label>University of Sousse, Faculty of Medicine &#x27;Ibn el Jazzar&#x27; of Sousse, Farhat HACHED University Hospital, Research Laboratory LR12SP09 &#x27;Heart Failure&#x27; Sousse, Tunisia</aff>
      <aff id="A2">
        <label>2</label>Department of Physiology and Functional Explorations, Farhat HACHED University Hospital, Sousse, Tunisia</aff>
      <aff id="A3">
        <label>3</label>University of Sousse, Faculty of Medicine of Sousse, Department of Forensic Medicine, EPS Farhat HACHED of Sousse, Tunisia</aff>
      <aff id="A4">
        <label>4</label>University of Sousse, Faculty of Medicine of Sousse, Department of Intensive care, Farhat Hached University Hospital, Sousse, Tunisia</aff>
      <author-notes>
        <corresp id="COR1">*To whom correspondence should be addressed: Helmi Ben Saad, University of Sousse, Faculty of Medicine 'Ibn el Jazzar' of Sousse, Farhat HACHED University Hospital, Research Laboratory LR12SP09 'Heart Failure' Sousse, Tunisia, E-mail: <email>helmi.bensaad@rns.tn</email></corresp>
      </author-notes>
      <pub-date pub-type="epub">
        <day>17</day>
        <month>07</month>
        <year>2025</year>
      </pub-date>
      <pub-date pub-type="collection">
        <year>2025</year>
      </pub-date>
      <volume>24</volume>
      <fpage>824</fpage>
      <lpage>827</lpage>
      <history>
        <date date-type="received">
          <day>23</day>
          <month>06</month>
          <year>2025</year>
        </date>
        <date date-type="accepted">
          <day>28</day>
          <month>06</month>
          <year>2025</year>
        </date>
      </history>
      <permissions>
        <copyright-statement>Copyright &#xA9; 2025 Gaddas et al.</copyright-statement>
        <copyright-year>2025</copyright-year>
        <license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
          <p>This is an Open Access article distributed under the terms of the Creative Commons Attribution Licence (http://creativecommons.org/licenses/by/4.0/). You are free to copy, distribute and transmit the work, provided the original author and source are credited.</p>
        </license>
      </permissions>
      <self-uri xlink:href="https://www.excli.de/vol24/excli2025-8679.pdf">This article is available from https://www.excli.de/vol24/excli2025-8679.pdf</self-uri>
    </article-meta>
  </front>
  <body>
    <sec>
      <title/><p>Thanks to its promises of performance and safety, artificial intelligence (AI) has become a pervasive technology in modern life, influencing nearly every facet of human activity (Ganesh, 2020[<xref ref-type="bibr" rid="R9">9</xref>]). Its rapidly expanding impact affects hundreds of millions of individuals worldwide and is profoundly reshaping the structures of our societies (Ganesh, 2020[<xref ref-type="bibr" rid="R9">9</xref>]). However, a lesser-known and rarely discussed side of this technology involves serious failures, some of which have had fatal consequences, such as airplane crashes and automated vehicle accidents (Ganesh, 2020[<xref ref-type="bibr" rid="R9">9</xref>]). Despite its undeniable contribution to the optimization of healthcare delivery, the integration of AI into hospital settings is not without risks (Becker&#x27;s Health IT, 2024[<xref ref-type="bibr" rid="R1">1</xref>]; Boussina et al., 2024[<xref ref-type="bibr" rid="R4">4</xref>]; Dufour et al., 2020[<xref ref-type="bibr" rid="R8">8</xref>]; henricodolfing, 2024[<xref ref-type="bibr" rid="R10">10</xref>]; Luxton, 2019[<xref ref-type="bibr" rid="R11">11</xref>]; Obermeyer et al., 2019[<xref ref-type="bibr" rid="R13">13</xref>]; Powles and Hodson, 2017[<xref ref-type="bibr" rid="R14">14</xref>]; Schertz et al., 2023[<xref ref-type="bibr" rid="R15">15</xref>]; Wong et al., 2021[<xref ref-type="bibr" rid="R17">17</xref>]; Zhou et al., 2019[<xref ref-type="bibr" rid="R18">18</xref>]) (Supplementary information, Table S1). AI systems have already exhibited failures that have resulted in patient harm (Table S1). These incidents raise critical concerns regarding patient safety and the legal accountability associated with the use of such technologies (Table S1). 
Hospitals, like other healthcare providers, increasingly promote the integration of AI technologies and may therefore be held liable in the event of accidents or patient harm (Solaiman and Malik, 2025[<xref ref-type="bibr" rid="R16">16</xref>]). This liability may stem from their institutional duty to ensure that AI systems are properly validated, maintained, and used in accordance with applicable regulatory requirements. </p><p>With an emphasis on responsibility, regulatory loopholes, and the consequences for clinical practice and patient safety, this letter seeks to critically examine the medico-legal issues and safety concerns related to the integration of AI systems in healthcare. Hospitals are also responsible for providing adequate training to healthcare professionals on the appropriate use of AI systems and for implementing safeguards to prevent errors or algorithmic biases that could compromise clinical decisions (Duffourc and Gerke, 2023[<xref ref-type="bibr" rid="R7">7</xref>]). Physicians, in turn, may face allegations of medical malpractice if the use, or misuse, of AI is deemed to fall below the accepted standard of care (Cohen et al., 2024[<xref ref-type="bibr" rid="R5">5</xref>]). Furthermore, the integration of AI raises concerns regarding informed consent, particularly when patients are unaware of the role AI plays in their diagnostic or therapeutic management (Duffourc and Gerke, 2023[<xref ref-type="bibr" rid="R7">7</xref>]).</p><p>The concept of medical liability lies at the heart of current legal reforms addressing the use of AI in healthcare (Duffourc and Gerke, 2023[<xref ref-type="bibr" rid="R7">7</xref>]; Solaiman and Malik, 2025[<xref ref-type="bibr" rid="R16">16</xref>]). 
Notably, the European Union has introduced two major legislative instruments (Duffourc and Gerke, 2023[<xref ref-type="bibr" rid="R7">7</xref>]): <italic>(i)</italic> the AI Act, which classifies medical AI systems as high-risk, emphasizing transparency and the preservation of human oversight (Solaiman and Malik, 2025[<xref ref-type="bibr" rid="R16">16</xref>]), and <italic>(ii)</italic> the AI Liability Directive, which addresses accountability for harm caused by AI by establishing new rules that make it easier for individuals to seek compensation (Duffourc and Gerke, 2023[<xref ref-type="bibr" rid="R7">7</xref>]). However, it is worth noting that in 2020, the European Parliament rejected the proposal to grant AI systems a form of &#x27;electronic legal personality&#x27;, arguing that such a status would undermine fundamental principles of law and risk diluting human accountability, thereby weakening legal protections for victims (Duffourc and Gerke, 2023[<xref ref-type="bibr" rid="R7">7</xref>]). Consequently, AI remains legally defined as a &#x27;product&#x27;, meaning that developers may be held liable for system failures that result in patient harm (Duffourc and Gerke, 2023[<xref ref-type="bibr" rid="R7">7</xref>]). In the United States, the regulation of AI in healthcare is primarily governed by federal laws focused on safety, such as those enforced by the Food and Drug Administration (Cohen et al., 2024[<xref ref-type="bibr" rid="R5">5</xref>]). 
Liability in the event of an adverse outcome falls under civil law; however, assigning responsibility remains complex due to the involvement of multiple stakeholders and the inherently opaque nature of AI systems (Cohen et al., 2024[<xref ref-type="bibr" rid="R5">5</xref>]).</p><p>Despite recent (<italic>ie</italic>, after 2020) legal reforms, it is important to note that, to date (<italic>ie</italic>, early July 2025), no AI system has been prosecuted or held legally accountable for harm before any national or international court (Solaiman and Malik, 2025[<xref ref-type="bibr" rid="R16">16</xref>]). Although AI is increasingly used in high-stakes sectors such as healthcare, transportation, and justice, the law continues to treat it as a tool rather than as a legal subject (Solaiman and Malik, 2025[<xref ref-type="bibr" rid="R16">16</xref>]). Moreover, there is currently no specific regulatory framework governing the liability of the various actors involved in the AI supply chain, nor are there harmonized operational guidelines to ensure the safe and ethical integration of AI technologies into clinical practice, particularly with respect to external validation, algorithmic decision traceability, and alignment with existing medical device regulations (Maliha et al., 2021[<xref ref-type="bibr" rid="R12">12</xref>]). </p><p>Current (<italic>ie</italic>, early July 2025) legal frameworks are often ill-equipped to address the complexities introduced by AI systems, resulting in legal gaps that may be exploited to evade liability. These shortcomings stem primarily from at least the following seven factors:</p><p><list list-type="order"><list-item><p><italic>Lack of legal status</italic>: AI has no recognized legal personality, meaning it cannot be sued or insured, nor can it hold assets to compensate victims. 
In legal proceedings, fault must therefore be transferred to a human actor or a legal entity such as a corporation (Bertolini and Episcopo, 2022[<xref ref-type="bibr" rid="R2">2</xref>]).</p></list-item><list-item><p><italic>Absence of clear attribution of liability</italic>: Traditional legal doctrines are not designed to hold non-human agents accountable. AI systems lack intent, moral agency, or autonomous legal will, all of which are central to the attribution of fault under current liability models (Bertolini and Episcopo, 2022[<xref ref-type="bibr" rid="R2">2</xref>]).</p></list-item><list-item><p><italic>Multiplicity of actors and fragmented responsibility</italic>: AI development and deployment often involve a diffuse network of stakeholders, including developers, healthcare institutions, and clinicians, making it difficult to trace failures and assign responsibility (Maliha et al., 2021[<xref ref-type="bibr" rid="R12">12</xref>]).</p></list-item><list-item><p><italic>Unpredictability of AI behavior</italic>: Legal liability frameworks are generally grounded in the foreseeability of actions (<italic>eg</italic>, negligence or intent) (Bottomley and Thaldar, 2023[<xref ref-type="bibr" rid="R3">3</xref>]). However, the unpredictable nature of many AI systems challenges this premise and raises significant safety concerns in clinical care. This unpredictability is attributable to two elements. The first is the &#x27;black box&#x27; nature of many machine-learning models, whose decision-making process is opaque and often not interpretable by users. This lack of transparency complicates the establishment of causal links between algorithmic faults and patient harm and hinders technical or design flaw analysis (Duffourc and Gerke, 2023[<xref ref-type="bibr" rid="R7">7</xref>]). The second element concerns the system&#x27;s relative autonomy and capacity for evolution through self-learning. 
AI can adapt its decision-making criteria over time, sometimes without explicit human oversight, leading to unanticipated outputs that were not originally programmed (Duffourc and Gerke, 2023[<xref ref-type="bibr" rid="R7">7</xref>]). In many cases, harmful autonomous behavior becomes evident only after the system has been deployed, further complicating the traceability of the initial failure and legally diluting responsibility across physicians, manufacturers, and the algorithm itself (Duffourc and Gerke, 2023[<xref ref-type="bibr" rid="R7">7</xref>]).</p></list-item><list-item><p><italic>Data quality</italic>: The accuracy and completeness of training data significantly influence AI performance. For instance, datasets that underrepresent certain patient groups can lead to suboptimal clinical decisions and racially biased outcomes (Cross et al., 2024[<xref ref-type="bibr" rid="R6">6</xref>]).</p></list-item><list-item><p><italic>Inherent complexity and stochastic nature of medical data</italic>: Unlike structured datasets (<italic>eg</italic>, mathematical data), medical data are heterogeneous and probabilistic, making it difficult to ensure consistent and reliable AI performance (Table S1).</p></list-item><list-item><p><italic>Lack of human oversight due to automation bias</italic>: Excessive reliance on AI may lead to clinician deskilling and diminished critical engagement, undermining secondary human control and increasing the risk of error propagation (Table S1). AI systems should augment, rather than replace, clinical judgment. To navigate the technological complexity and mitigate automation bias, continuous human oversight and close collaboration among all stakeholders are essential. 
Above all, patient safety must remain the foremost priority.</p></list-item></list></p><p>In conclusion, although AI holds considerable potential to improve clinical judgment and healthcare delivery, its use raises ethical, legal, and technological issues that current regulatory frameworks cannot adequately address. Efforts to protect patient rights and maintain therapeutic accountability are complicated by the unclear attribution of liability, the lack of transparency, and the unpredictable nature of AI behavior. Establishing a robust legal framework that defines the roles of all parties involved, guarantees algorithmic transparency, and prioritizes patient safety is necessary for the safe and ethical integration of AI into medical practice.</p><p><inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="EXCLI-24-824-i-001" ></inline-graphic></p></sec>
    <sec>
      <title>Declaration</title><sec><title>Artificial Intelligence</title><p>The authors disclose that artificial intelligence tools, namely QuillBot and ChatGPT (ephemeral), were used to improve the consistency and clarity of the manuscript. The tools were used solely for language improvement, ensuring that the text was clear and cohesive, without altering the scientific substance or generating any new material.</p></sec><sec><title>Authors&#x27; contributions</title><p>All authors: literature search, manuscript preparation, and review of the manuscript. All authors read and approved the final manuscript.</p></sec><sec><title>Conflict of interest</title><p>The authors confirm that there is no conflict of interest.</p></sec><sec><title>Data availability statement</title><p>Data sharing is not applicable to this article, as no datasets were generated or analyzed during the current study.</p></sec></sec>
    <sec sec-type="supplementary-material">
      <title>Supplementary Material</title>
      <supplementary-material id="SD1" content-type="local-data">
        <caption>
          <title>Supplementary information</title>
        </caption>
        <media mimetype="application" mime-subtype="pdf" xlink:href="EXCLI-24-824-s-001.pdf" />
      </supplementary-material>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="R1">
        <label>1</label>
        <citation citation-type="web">
          <collab>Becker&#x2019;s Health IT</collab>
          <article-title>Becker&#x2019;s Hospital Review. Accuracy of Epic&#x27;s sepsis model faces scrutiny</article-title>
          <year>2024</year>
          <access-date>July 12, 2025</access-date>
          <comment>Available from: <ext-link ext-link-type="uri" xlink:href="https://www.beckershospitalreview.com/ehrs/accuracy-of-epics-sepsis-model-faces-scrutiny/">https://www.beckershospitalreview.com/ehrs/accuracy-of-epics-sepsis-model-faces-scrutiny/</ext-link></comment>
        </citation>
      </ref>
      <ref id="R2">
        <label>2</label>
        <citation citation-type="journal">
          <person-group>
            <name>
              <surname>Bertolini</surname>
              <given-names>A</given-names>
            </name>
            <name>
              <surname>Episcopo</surname>
              <given-names>F</given-names>
            </name>
          </person-group>
          <article-title>Robots and AI as legal subjects&#x3F; Disentangling the ontological and functional perspective</article-title>
          <source>Front Robot AI</source>
          <year>2022</year>
          <volume>9</volume>
          <fpage>842213</fpage>
        </citation>
      </ref>
      <ref id="R3">
        <label>3</label>
        <citation citation-type="journal">
          <person-group>
            <name>
              <surname>Bottomley</surname>
              <given-names>D</given-names>
            </name>
            <name>
              <surname>Thaldar</surname>
              <given-names>D</given-names>
            </name>
          </person-group>
          <article-title>Liability for harm caused by AI in healthcare: an overview of the core legal concepts</article-title>
          <source>Front Pharmacol</source>
          <year>2023</year>
          <volume>14</volume>
          <fpage>1297353</fpage>
        </citation>
      </ref>
      <ref id="R4">
        <label>4</label>
        <citation citation-type="journal">
          <person-group>
            <name>
              <surname>Boussina</surname>
              <given-names>A</given-names>
            </name>
            <name>
              <surname>Shashikumar</surname>
              <given-names>SP</given-names>
            </name>
            <name>
              <surname>Malhotra</surname>
              <given-names>A</given-names>
            </name>
            <name>
              <surname>Owens</surname>
              <given-names>RL</given-names>
            </name>
            <name>
              <surname>El-Kareh</surname>
              <given-names>R</given-names>
            </name>
            <name>
              <surname>Longhurst</surname>
              <given-names>CA</given-names>
            </name>
            <etal />
          </person-group>
          <article-title>Impact of a deep learning sepsis prediction model on quality of care and survival</article-title>
          <source>NPJ Digit Med</source>
          <year>2024</year>
          <volume>7</volume>
          <issue>1</issue>
          <fpage>14</fpage>
        </citation>
      </ref>
      <ref id="R5">
        <label>5</label>
        <citation citation-type="book">
          <person-group person-group-type="author">
            <name>
              <surname>Cohen</surname>
              <given-names>IG</given-names>
            </name>
            <name>
              <surname>Slottje</surname>
              <given-names>A</given-names>
            </name>
            <name>
              <surname>Gerke</surname>
              <given-names>S</given-names>
            </name>
          </person-group>
          <person-group person-group-type="editor">
            <name>
              <surname>Sung</surname>
              <given-names>JJY</given-names>
            </name>
            <name>
              <surname>Stewart</surname>
              <given-names>C</given-names>
            </name>
          </person-group>
          <article-title>Medical AI and tort liability</article-title>
          <source>Artificial intelligence in medicine</source>
          <year>2024</year>
          <publisher-loc>New York</publisher-loc>
          <publisher-name>Academic Press</publisher-name>
          <fpage>89</fpage>
          <lpage>104</lpage>
        </citation>
      </ref>
      <ref id="R6">
        <label>6</label>
        <citation citation-type="journal">
          <person-group>
            <name>
              <surname>Cross</surname>
              <given-names>JL</given-names>
            </name>
            <name>
              <surname>Choma</surname>
              <given-names>MA</given-names>
            </name>
            <name>
              <surname>Onofrey</surname>
              <given-names>JA</given-names>
            </name>
          </person-group>
          <article-title>Bias in medical AI: Implications for clinical decision-making</article-title>
          <source>PLOS Digit Health</source>
          <year>2024</year>
          <volume>3</volume>
          <issue>11</issue>
          <fpage>e0000651</fpage>
        </citation>
      </ref>
      <ref id="R7">
        <label>7</label>
        <citation citation-type="journal">
          <person-group>
            <name>
              <surname>Duffourc</surname>
              <given-names>MN</given-names>
            </name>
            <name>
              <surname>Gerke</surname>
              <given-names>S</given-names>
            </name>
          </person-group>
          <article-title>The proposed EU directives for AI liability leave worrying gaps likely to impact medical AI</article-title>
          <source>NPJ Digit Med</source>
          <year>2023</year>
          <volume>6</volume>
          <issue>1</issue>
          <fpage>77</fpage>
        </citation>
      </ref>
      <ref id="R8">
        <label>8</label>
        <citation citation-type="journal">
          <person-group>
            <name>
              <surname>Dufour</surname>
              <given-names>N</given-names>
            </name>
            <name>
              <surname>Fadel</surname>
              <given-names>F</given-names>
            </name>
            <name>
              <surname>Gelee</surname>
              <given-names>B</given-names>
            </name>
            <name>
              <surname>Dubost</surname>
              <given-names>JL</given-names>
            </name>
            <name>
              <surname>Ardiot</surname>
              <given-names>S</given-names>
            </name>
            <name>
              <surname>Di Donato</surname>
              <given-names>P</given-names>
            </name>
            <etal />
          </person-group>
          <article-title>When a ventilator takes autonomous decisions without seeking approbation nor warning clinicians: A case series</article-title>
          <source>Int Med Case Rep J</source>
          <year>2020</year>
          <volume>13</volume>
          <fpage>521</fpage>
          <lpage>529</lpage>
        </citation>
      </ref>
      <ref id="R9">
        <label>9</label>
        <citation citation-type="journal">
          <person-group>
            <name>
              <surname>Ganesh</surname>
              <given-names>MI</given-names>
            </name>
          </person-group>
          <article-title>The ironies of autonomy</article-title>
          <source>Humanit Soc Sci Commun</source>
          <year>2020</year>
          <volume>7</volume>
          <issue>1</issue>
          <fpage>157</fpage>
        </citation>
      </ref>
      <ref id="R10">
        <label>10</label>
        <citation citation-type="web">
          <collab>henricodolfing</collab>
          <article-title>Case study 20: The &#x24;4 billion AI failure of IBM Watson for oncology</article-title>
          <year>2024</year>
          <access-date>July 12, 2025</access-date>
          <comment>Available from: <ext-link ext-link-type="uri" xlink:href="https://www.henricodolfing.com/2024/12/case-study-ibm-watson-for-oncology-failure.html">https://www.henricodolfing.com/2024/12/case-study-ibm-watson-for-oncology-failure.html</ext-link></comment>
        </citation>
      </ref>
      <ref id="R11">
        <label>11</label>
        <citation citation-type="journal">
          <person-group>
            <name>
              <surname>Luxton</surname>
              <given-names>DD</given-names>
            </name>
          </person-group>
          <article-title>Should Watson be consulted for a second opinion&#x3F;</article-title>
          <source>AMA J Ethics</source>
          <year>2019</year>
          <volume>21</volume>
          <issue>2</issue>
          <fpage>E131</fpage>
          <lpage>E137</lpage>
        </citation>
      </ref>
      <ref id="R12">
        <label>12</label>
        <citation citation-type="journal">
          <person-group>
            <name>
              <surname>Maliha</surname>
              <given-names>G</given-names>
            </name>
            <name>
              <surname>Gerke</surname>
              <given-names>S</given-names>
            </name>
            <name>
              <surname>Cohen</surname>
              <given-names>IG</given-names>
            </name>
            <name>
              <surname>Parikh</surname>
              <given-names>RB</given-names>
            </name>
          </person-group>
          <article-title>Artificial intelligence and liability in medicine: balancing safety and innovation</article-title>
          <source>Milbank Q</source>
          <year>2021</year>
          <volume>99</volume>
          <fpage>629</fpage>
          <lpage>647</lpage>
        </citation>
      </ref>
      <ref id="R13">
        <label>13</label>
        <citation citation-type="journal">
          <person-group>
            <name>
              <surname>Obermeyer</surname>
              <given-names>Z</given-names>
            </name>
            <name>
              <surname>Powers</surname>
              <given-names>B</given-names>
            </name>
            <name>
              <surname>Vogeli</surname>
              <given-names>C</given-names>
            </name>
            <name>
              <surname>Mullainathan</surname>
              <given-names>S</given-names>
            </name>
          </person-group>
          <article-title>Dissecting racial bias in an algorithm used to manage the health of populations</article-title>
          <source>Science</source>
          <year>2019</year>
          <volume>366</volume>
          <issue>6464</issue>
          <fpage>447</fpage>
          <lpage>453</lpage>
        </citation>
      </ref>
      <ref id="R14">
        <label>14</label>
        <citation citation-type="journal">
          <person-group>
            <name>
              <surname>Powles</surname>
              <given-names>J</given-names>
            </name>
            <name>
              <surname>Hodson</surname>
              <given-names>H</given-names>
            </name>
          </person-group>
          <article-title>Google DeepMind and healthcare in an age of algorithms</article-title>
          <source>Health Technol (Berl)</source>
          <year>2017</year>
          <volume>7</volume>
          <fpage>351</fpage>
          <lpage>367</lpage>
        </citation>
      </ref>
      <ref id="R15">
        <label>15</label>
        <citation citation-type="journal">
          <person-group>
            <name>
              <surname>Schertz</surname>
              <given-names>AR</given-names>
            </name>
            <name>
              <surname>Lenoir</surname>
              <given-names>KM</given-names>
            </name>
            <name>
              <surname>Bertoni</surname>
              <given-names>AG</given-names>
            </name>
            <name>
              <surname>Levine</surname>
              <given-names>BJ</given-names>
            </name>
            <name>
              <surname>Mongraw-Chaffin</surname>
              <given-names>M</given-names>
            </name>
            <name>
              <surname>Thomas</surname>
              <given-names>KW</given-names>
            </name>
          </person-group>
          <article-title>Sepsis prediction model for determining sepsis vs SIRS, qSOFA, and SOFA</article-title>
          <source>JAMA Netw Open</source>
          <year>2023</year>
          <volume>6</volume>
          <issue>8</issue>
          <fpage>e2329729</fpage>
        </citation>
      </ref>
      <ref id="R16">
        <label>16</label>
        <citation citation-type="journal">
          <person-group>
            <name>
              <surname>Solaiman</surname>
              <given-names>B</given-names>
            </name>
            <name>
              <surname>Malik</surname>
              <given-names>A</given-names>
            </name>
          </person-group>
          <article-title>Regulating algorithmic care in the European Union: evolving doctor-patient models through the Artificial Intelligence Act (AI-Act) and the liability directives</article-title>
          <source>Med Law Rev</source>
          <year>2025</year>
          <volume>33</volume>
          <issue>1</issue>
          <fpage>fwae033</fpage>
        </citation>
      </ref>
      <ref id="R17">
        <label>17</label>
        <citation citation-type="journal">
          <person-group>
            <name>
              <surname>Wong</surname>
              <given-names>A</given-names>
            </name>
            <name>
              <surname>Otles</surname>
              <given-names>E</given-names>
            </name>
            <name>
              <surname>Donnelly</surname>
              <given-names>JP</given-names>
            </name>
            <name>
              <surname>Krumm</surname>
              <given-names>A</given-names>
            </name>
            <name>
              <surname>McCullough</surname>
              <given-names>J</given-names>
            </name>
            <name>
              <surname>DeTroyer-Cooley</surname>
              <given-names>O</given-names>
            </name>
            <etal />
          </person-group>
          <article-title>External validation of a widely implemented proprietary sepsis prediction model in hospitalized patients</article-title>
          <source>JAMA Intern Med</source>
          <year>2021</year>
          <volume>181</volume>
          <fpage>1065</fpage>
          <lpage>1070</lpage>
        </citation>
      </ref>
      <ref id="R18">
        <label>18</label>
        <citation citation-type="journal">
          <person-group>
            <name>
              <surname>Zhou</surname>
              <given-names>N</given-names>
            </name>
            <name>
              <surname>Zhang</surname>
              <given-names>CT</given-names>
            </name>
            <name>
              <surname>Lv</surname>
              <given-names>HY</given-names>
            </name>
            <name>
              <surname>Hao</surname>
              <given-names>CX</given-names>
            </name>
            <name>
              <surname>Li</surname>
              <given-names>TJ</given-names>
            </name>
            <name>
              <surname>Zhu</surname>
              <given-names>JJ</given-names>
            </name>
            <etal />
          </person-group>
          <article-title>Concordance study between IBM Watson for oncology and clinical practice for patients with cancer in China</article-title>
          <source>Oncologist</source>
          <year>2019</year>
          <volume>24</volume>
          <fpage>812</fpage>
          <lpage>819</lpage>
        </citation>
      </ref>
    </ref-list>
  </back>
</article>