<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xml:lang="en" dtd-version="1.1" article-type="review-article" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML">
  <front>
    <journal-meta>
      <journal-id journal-id-type="publisher-id">SH</journal-id>
      <journal-id journal-id-type="nlm-ta">SH</journal-id>
      <journal-title-group>
        <journal-title>SmartHealth</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">3107-3433</issn>
      <publisher>
        <publisher-name>LumoScience Publishing</publisher-name>
        <publisher-loc>18 Kaki Bukit Road 3, #05-17, Entrepreneur Business Centre, Singapore 415978</publisher-loc>
      </publisher>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.64187/sh.2026.v2.i1.001</article-id>
      <article-categories>
        <subj-group subj-group-type="heading">
          <subject>REVIEW</subject>
        </subj-group>
      </article-categories>
      <title-group>
        <article-title>Artificial Intelligence in Healthcare: Review</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <name name-style="western">
            <surname>Negut</surname>
            <given-names>Irina</given-names>
          </name>
          <xref ref-type="aff" rid="aff1" />
        </contrib>
        <contrib contrib-type="author">
          <name name-style="western">
            <surname>Visan</surname>
            <given-names>Anita Ioana</given-names>
          </name>
          <xref ref-type="aff" rid="aff1" />
          <xref ref-type="corresp" rid="cor1">
            <sup>*</sup>
          </xref>
          <email>anita.visan@inflpr.ro</email>
        </contrib>
        <aff id="aff1">
          <addr-line>National Institute for Lasers, Plasma and Radiation Physics, Magurele, Romania</addr-line>
        </aff>
      </contrib-group>
      <author-notes>
        <corresp id="cor1">
          <sup>*</sup>Email: <email>anita.visan@inflpr.ro</email></corresp>
      </author-notes>
      <pub-date pub-type="epub">
        <day>5</day>
        <month>2</month>
        <year>2026</year>
      </pub-date>
      <volume>2</volume>
      <issue>1</issue>
      <fpage>1</fpage>
      <lpage>16</lpage>
      <history>
        <date date-type="received">
          <day>3</day>
          <month>1</month>
          <year>2026</year>
        </date>
        <date date-type="accepted">
          <day>3</day>
          <month>2</month>
          <year>2026</year>
        </date>
      </history>
      <abstract>
        <p>Artificial intelligence (AI) is becoming a steady presence in healthcare, changing how clinicians diagnose illness, plan treatment, and run medical systems. This review looks at how AI is being used in real medical settings today and examines the complex challenges and ethical questions that follow. Our goal is to offer a clear view of what AI can currently achieve, where it falls short, and how it might shape the future of patient care. To build this picture, we analyzed a wide range of research from the past ten years, focusing on AI&apos;s role in areas like diagnostics, treatment support, and patient management. We prioritized studies that provided a clear and comprehensive view, drawing insights from diverse healthcare fields and settings worldwide. The findings show that AI, particularly through machine learning and deep learning, is already making a difference. In specialties like radiology, oncology, and cardiology, it helps to detect diseases more accurately and forecast patient outcomes, which in turn supports clinical decisions and improves workflow. However, this progress is accompanied by serious ethical concerns. Issues of data privacy, hidden biases in algorithms, and a need for greater transparency and accountability are prominent. There is clear evidence that algorithmic bias can worsen health disparities, especially for underrepresented groups. In conclusion, AI in healthcare is a double-edged sword. It holds tremendous promise for improving patient care through smarter tools and greater efficiency, but it also forces us to confront crucial ethical issues. Moving forward, it is essential to build frameworks that ensure ethical standards, promote fairness, and actively reduce bias. The continued evolution of AI will depend on strong collaboration between technologists and healthcare professionals, ensuring we harness its power responsibly to earn patient trust and improve health for everyone.</p>
      </abstract>
      <kwd-group kwd-group-type="author">
        <kwd>Artificial intelligence</kwd>
        <kwd>Healthcare</kwd>
        <kwd>Diagnostics</kwd>
        <kwd>Bias</kwd>
        <kwd>Clinical decision-making</kwd>
        <kwd>Ethics</kwd>
      </kwd-group>
      <custom-meta-group>
        <custom-meta>
          <meta-name>header-authors-short</meta-name>
          <meta-value>Negut, I., Visan, A.I.</meta-value>
        </custom-meta>
      </custom-meta-group>
    </article-meta>
  </front>
  <body>
    <sec id="s1">
      <title>1. Introduction: The Transformative Potential of AI</title>
      <p id="p00001">The integration of Artificial Intelligence (AI) into healthcare represents more than a technological advance; it marks a deep shift in how care is delivered, experienced, and imagined for the future. AI now supports clinicians in diagnosing complex conditions, personalizing treatment plans, and managing healthcare systems with unprecedented efficiency. From predictive algorithms that identify early signs of disease to machine learning (ML) tools that streamline hospital operations, its presence is reshaping both the science and the practice of medicine.</p>
      <p id="p00002">Yet, this transformation also brings new challenges. Questions of bias, accountability, data privacy, and trust remain central to the discussion, reminding us that technology alone cannot define the future of care. The true measure of AI’s success in medicine will not only be its accuracy or speed, but its ability to strengthen the human connection at the heart of healthcare.</p>
      <p id="p00003">This review explores how AI is currently being applied in medical contexts, the ethical and operational dilemmas it presents, and the directions it may take as healthcare continues to evolve. The arrival of AI in healthcare marks a pivotal moment in the evolution of medicine. No longer confined to automating routine tasks, AI is emerging as a true partner in clinical reasoning and decision-making, working alongside healthcare professionals to enhance patient care at every stage of the journey. This shift is already visible in the way AI improves diagnostic accuracy, tailors treatment plans to individual needs, streamlines administrative processes, and optimizes complex workflows.<sup><xref rid="b1" ref-type="bibr">1</xref>–<xref rid="b6" ref-type="bibr">6</xref></sup> Recent evidence syntheses have specifically highlighted the profound impact of Large Language Models like ChatGPT in clinical decision support<sup><xref rid="b7" ref-type="bibr">7</xref></sup>, while the implementation of AI scribes has begun to transform administrative efficiency by automating clinical documentation.<sup><xref rid="b8" ref-type="bibr">8</xref></sup> As AI technologies rapidly advance, their real-world applications are expanding; these range from supporting clinicians in early disease detection to predicting patient deterioration and assisting in emergency care.<sup><xref rid="b9" ref-type="bibr">9</xref>–<xref rid="b12" ref-type="bibr">12</xref></sup> Furthermore, specialized fields such as dentistry are seeing a surge in AI applications for diagnostics and future treatment planning<sup><xref rid="b13" ref-type="bibr">13</xref>,<xref rid="b14" ref-type="bibr">14</xref></sup>, while the role of nursing informatics is evolving to bridge the gap between AI tools and bedside care.<sup><xref rid="b15" ref-type="bibr">15</xref></sup> ML algorithms, for example, have demonstrated the ability to outperform traditional methods in certain diagnostic and prognostic tasks, offering new hope for improved outcomes and operational efficiency.<sup><xref rid="b1" ref-type="bibr">1</xref>,<xref rid="b2" ref-type="bibr">2</xref>,<xref rid="b5" ref-type="bibr">5</xref>,<xref rid="b12" ref-type="bibr">12</xref>,<xref rid="b16" ref-type="bibr">16</xref></sup> This includes targeted applications in chronic disease management, such as the use of AI algorithms to improve outcomes for patients with COPD.<sup><xref rid="b17" ref-type="bibr">17</xref></sup> Yet, the promise of AI is not just in its computational power, but in its potential to make healthcare more personal, responsive, and equitable.</p>
      <p id="p00003a">However, this transformation is not without its challenges. The integration of AI into clinical practice raises important questions about ethics, transparency, and trust. Concerns about algorithmic bias, data privacy, and the “black box” nature of some AI systems highlight the need for careful oversight and robust ethical frameworks.<sup><xref rid="b18" ref-type="bibr">18</xref>–<xref rid="b27" ref-type="bibr">27</xref></sup> Regulatory bodies are also responding to these shifts, with the FDA providing updated perspectives on the necessary oversight for AI in biomedicine to ensure safety and efficacy.<sup><xref rid="b28" ref-type="bibr">28</xref></sup> Building trust among clinicians and patients is essential, as is ensuring that AI systems are explainable, fair, and aligned with the values of human care.<sup><xref rid="b29" ref-type="bibr">29</xref>–<xref rid="b37" ref-type="bibr">37</xref></sup> Understanding patient perspectives is vital here, as global scoping reviews indicate that while patients see the benefits, they remain deeply concerned about the ethical implications of AI implementation.<sup><xref rid="b38" ref-type="bibr">38</xref></sup> Similarly, the perceptions of frontline staff are critical; qualitative reviews of nurses’ experiences show that successful adoption depends heavily on how these tools impact nursing practice and patient outcomes.<sup><xref rid="b29" ref-type="bibr">29</xref>,<xref rid="b40" ref-type="bibr">40</xref></sup></p>
      <p id="p00003b">To navigate this complex landscape, collaboration is key. Healthcare professionals, technologists, and policymakers must work together to develop guidelines that prioritize patient safety, data security, and equitable access to AI-driven innovations.<sup><xref rid="b22" ref-type="bibr">22</xref>,<xref rid="b23" ref-type="bibr">23</xref>,<xref rid="b41" ref-type="bibr">41</xref>–<xref rid="b44" ref-type="bibr">44</xref></sup> Education and training for clinicians are also crucial, empowering them to understand, evaluate, and effectively use AI tools in their daily practice.<sup><xref rid="b4" ref-type="bibr">4</xref>,<xref rid="b22" ref-type="bibr">22</xref>,<xref rid="b33" ref-type="bibr">33</xref>,<xref rid="b45" ref-type="bibr">45</xref></sup> Ultimately, the goal is not to replace human expertise, but to augment it, creating a healthcare system where AI acts as a supportive partner, enhancing decision-making while respecting the dignity and individuality of every patient.<sup><xref rid="b2" ref-type="bibr">2</xref>,<xref rid="b4" ref-type="bibr">4</xref>,<xref rid="b6" ref-type="bibr">6</xref>,<xref rid="b23" ref-type="bibr">23</xref>,<xref rid="b27" ref-type="bibr">27</xref></sup> By embracing interdisciplinary engagement, ethical vigilance, and a commitment to transparency, the healthcare community can harness the transformative power of AI to deliver care that is not only more efficient but also more compassionate and just. In summary, the thoughtful integration of AI into healthcare holds the potential to revolutionize patient care if approached with responsibility, inclusivity, and a steadfast focus on the human experience at the heart of medicine.<sup><xref rid="b2" ref-type="bibr">2</xref>–<xref rid="b4" ref-type="bibr">4</xref>,<xref rid="b22" ref-type="bibr">22</xref>,<xref rid="b23" ref-type="bibr">23</xref>,<xref rid="b27" ref-type="bibr">27</xref>,<xref rid="b41" ref-type="bibr">41</xref>,<xref rid="b42" ref-type="bibr">42</xref></sup></p>
    </sec>
    <sec id="s2">
      <title>2. Methods: Building a Comprehensive View</title>
      <p id="p00004">This review was guided by a structured approach intended to systematically capture and synthesize the current landscape of AI in healthcare. A literature review was conducted, focusing on peer-reviewed articles, systematic reviews, and relevant conference proceedings published within the last decade. To ensure broad and systematic coverage, the methodology was organized around three main phases: a defined search strategy, clear inclusion and exclusion criteria, and a thematic analysis framework, as outlined in <bold>Table 1</bold>.</p>
      <table-wrap id="T1">
        <label>Table 1.</label>
        <caption>
          <p>Literature review methodology overview.</p>
        </caption>
        <table>
          <colgroup>
            <col width="1700" />
            <col width="3749" />
            <col width="2847" />
          </colgroup>
          <thead>
            <tr>
              <th align="center"><bold>Phase</bold></th>
              <th align="center"><bold>Description</bold></th>
              <th align="center"><bold>Key Sources / Filters</bold></th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td align="left"><bold>1. Search Strategy</bold></td>
              <td align="left">A systematic search was conducted across major academic databases (PubMed, IEEE Xplore, Scopus) using controlled vocabulary and keyword combinations related to AI and healthcare domains.</td>
              <td align="left"><bold>Databases:</bold> PubMed, IEEE Xplore, Scopus<break/><bold>Keywords:</bold> artificial intelligence, machine learning, clinical decision support, healthcare diagnostics, implementation</td>
            </tr>
            <tr>
              <td align="left"><bold>2. Inclusion/Exclusion</bold></td>
              <td align="left">Identified records were screened by title, abstract, and full text against predefined criteria. The focus was on studies directly addressing AI application, implementation, or evaluation in clinical or operational settings.</td>
              <td align="left"><bold>Included:</bold> English-language studies (2014–2025); direct AI healthcare applications; peer-reviewed articles or significant conference proceedings.<break/><bold>Excluded:</bold> Non-healthcare AI studies; opinion pieces without empirical data; studies before 2014.</td>
            </tr>
            <tr>
              <td align="left"><bold>3. Thematic Analysis</bold></td>
              <td align="left">Eligible studies were analyzed thematically. Key findings were extracted, categorized into predefined domains, and synthesized to identify patterns of application, reported outcomes, and emergent challenges.</td>
              <td align="left"><bold>Domains:</bold> Medical Diagnostics, Therapeutic Algorithms, Clinical Decision Support (CDS), Patient Management.<break/><bold>Focus:</bold> Applications, benefits, limitations, and ethical concerns.</td>
            </tr>
          </tbody>
        </table>
      </table-wrap>
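      <p id="p00004a">To make the screening phase concrete, the sketch below expresses the Table 1 inclusion and exclusion rules as a simple programmatic filter. This is an illustration only, with invented record fields and example titles; it is not the tool actually used for this review.</p>

```python
# Illustrative only: the Table 1 (Phase 2) screening criteria as a filter.
# Record fields and titles are hypothetical.
RECORDS = [
    {"title": "Deep learning for chest X-ray triage", "year": 2021,
     "language": "English", "type": "peer-reviewed", "healthcare_ai": True},
    {"title": "AI for retail demand forecasting", "year": 2020,
     "language": "English", "type": "peer-reviewed", "healthcare_ai": False},
    {"title": "Opinion: the future of medicine", "year": 2023,
     "language": "English", "type": "opinion", "healthcare_ai": True},
    {"title": "Early ML diagnosis study", "year": 2012,
     "language": "English", "type": "peer-reviewed", "healthcare_ai": True},
]

def eligible(record):
    """Inclusion: English-language, 2014-2025, direct AI healthcare
    application, peer-reviewed article or conference proceeding.
    Everything else (non-healthcare AI, opinion pieces, pre-2014) is excluded."""
    return (
        record["language"] == "English"
        and record["year"] in range(2014, 2026)
        and record["healthcare_ai"]
        and record["type"] in {"peer-reviewed", "conference"}
    )

included = [r["title"] for r in RECORDS if eligible(r)]
print(included)  # only the chest X-ray triage study passes all criteria
```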
      <p id="p00005">The review set out to explore how AI is shaping four essential areas of healthcare: medical diagnostics, therapeutic algorithms, Clinical Decision Support (CDS) systems, and patient management strategies. To ensure a well-rounded perspective, studies were carefully selected from a wide range of medical specialties, including radiology, oncology, cardiology, and primary care, so that the review would capture the full spectrum of AI’s influence and the unique challenges faced in different clinical settings. This broad approach allows for a more nuanced understanding of how AI is being used to detect diseases, guide treatments, support clinical decisions, and manage patient care across diverse healthcare environments.<sup><xref rid="b1" ref-type="bibr">1</xref>,<xref rid="b2" ref-type="bibr">2</xref></sup></p>
      <p id="p00006">To maintain transparency and ensure that the findings are reliable and reproducible, the review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. This rigorous methodology strengthens the credibility of the review and helps readers trust that the conclusions are based on a thorough and unbiased assessment of the available evidence.<sup><xref rid="b3" ref-type="bibr">3</xref></sup></p>
      <p id="p00007">By taking a broad and focused approach, this review not only examines the tangible benefits of AI, such as greater diagnostic precision, more tailored treatments, and stronger patient involvement, but also engages with the ethical concerns and practical challenges that accompany the integration of these technologies into everyday healthcare practice. Issues like data privacy, algorithmic bias, and the need for clear regulatory frameworks are discussed as critical factors that must be addressed to ensure that AI-driven healthcare remains both effective and trustworthy.<sup><xref rid="b4" ref-type="bibr">4</xref></sup></p>
      <p id="p00007a">While this review aimed to provide a comprehensive overview, certain limitations should be acknowledged. First, the reliance on published, peer-reviewed literature may overlook emerging or unpublished real-world implementations, a concern also raised in methodological discussions of AI reviews in healthcare, which note that excluding non-peer-reviewed preprints and conference work can introduce publication bias and omit cutting-edge implementations.<sup><xref rid="b25" ref-type="bibr">25</xref>,<xref rid="b46" ref-type="bibr">46</xref></sup> Second, the geographic concentration of included studies may reflect a bias toward high-income settings, potentially underrepresenting low-resource contexts, a limitation similarly highlighted in scoping and systematic reviews that found AI in health research to be heavily skewed toward high-income countries, with comparatively sparse evidence from low- and middle-income countries.<sup><xref rid="b47" ref-type="bibr">47</xref>,<xref rid="b48" ref-type="bibr">48</xref></sup> Third, the rapid evolution of AI technologies means that some recent advancements may not be fully captured, a challenge repeatedly acknowledged in AI-in-healthcare reviews that caution their findings may become quickly outdated as new tools and evidence emerge.<sup><xref rid="b24" ref-type="bibr">24</xref>,<xref rid="b42" ref-type="bibr">42</xref></sup> Future reviews could therefore benefit from systematically including grey literature and ongoing trial data to mitigate publication bias and better capture emerging implementations<sup><xref rid="b46" ref-type="bibr">46</xref>,<xref rid="b47" ref-type="bibr">47</xref></sup>, as well as deliberately incorporating broader global perspectives, especially from low- and middle-income countries, to enhance the representativeness and equity of the evidence base.<sup><xref rid="b48" ref-type="bibr">48</xref>,<xref rid="b49" ref-type="bibr">49</xref></sup></p>
      <p id="p00008">Ultimately, this review seeks to offer healthcare professionals, technology developers, and policymakers a clear and balanced perspective on how to harness the potential of AI responsibly, ensuring that innovation moves forward without compromising the ethical principles and human values at the core of patient care.</p>
    </sec>
    <sec id="s3">
      <title>3. Results: Current Applications and Breakthroughs</title>
      <p id="p00009">Our review confirms that AI has transitioned decisively from a futuristic concept to an active and core component of modern healthcare. This integration is primarily driven by the advanced pattern-recognition capabilities of ML and deep learning (DL), enabling tangible breakthroughs across clinical and operational domains.</p>
      <sec id="s3_1">
        <title>3.1 Enhanced Diagnostics and Precision Detection</title>
        <p id="p00010">AI is making some of its most profound impacts in medical specialties that rely heavily on complex data and imaging. In these fields, AI acts as a tireless, highly precise analytical partner, working alongside radiologists, pathologists, and cardiologists to enhance their diagnostic capabilities and ultimately improve patient outcomes.</p>
        <p id="p00011">Advanced computer vision algorithms, a branch of AI, excel at analyzing medical images with remarkable speed and accuracy. These systems can rapidly review chest X-rays to spot early signs of pneumonia or tuberculosis, detect subtle cancerous lesions in mammograms and skin images, and identify minute anomalies in cardiac MRI scans or retinal photographs, sometimes catching details that even experienced clinicians might miss.<sup><xref rid="b1" ref-type="bibr">1</xref></sup></p>
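        <p id="p00011a">The core operation behind these computer vision systems can be illustrated in a few lines. The sketch below, using entirely synthetic data and a hand-written filter rather than any clinical model, shows valid-mode 2-D convolution, the building block that convolutional networks apply at scale to highlight local anomalies in an image.</p>

```python
# Illustrative sketch only: a 2-D convolution, the feature-extraction
# primitive of CNN-based medical imaging AI, run over a tiny synthetic
# "scan". All data here are invented; this is not a diagnostic tool.

def convolve2d(image, kernel):
    """Valid-mode 2-D convolution over nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    h = len(image) - kh + 1
    w = len(image[0]) - kw + 1
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for a in range(kh):
                for b in range(kw):
                    acc += image[i + a][j + b] * kernel[a][b]
            out[i][j] = acc
    return out

# Synthetic 8x8 "scan": background zeros with a bright 2x2 lesion-like patch.
scan = [[0.0] * 8 for _ in range(8)]
for r in (3, 4):
    for c in (3, 4):
        scan[r][c] = 1.0

# Laplacian-style kernel: responds strongly at the borders of local anomalies.
kernel = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]

response = convolve2d(scan, kernel)
peak = max(abs(v) for row in response for v in row)
print("peak filter response:", peak)
```

In a real system, many such filters are learned from labeled images rather than fixed by hand, and their stacked responses feed a classifier; the sliding-window arithmetic above is the same.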
        <p id="p00012">In <bold>Table 2</bold>, we summarized how AI is transforming data- and image-intensive medical specialties, highlighting specific applications and benefits for both clinicians and patients</p>
        <table-wrap id="T2">
          <label>Table 2.</label>
          <caption>
            <p>AI’s impact on data-intensive medical specialties.</p>
          </caption>
          <table>
            <colgroup>
              <col width="1651" />
              <col width="2928" />
              <col width="2661" />
              <col width="1056" />
            </colgroup>
            <thead>
              <tr>
                <th><bold>Specialty</bold></th>
                <th><bold>AI Application Example</bold></th>
                <th><bold>Key Benefits for Clinicians &amp; Patients</bold></th>
                <th><bold>References</bold></th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td>Radiology</td>
                <td>Detecting early cancer signs in X-rays, MRIs, CT scans</td>
                <td>Faster, more accurate diagnoses; reduced errors</td>
                <td>1</td>
              </tr>
              <tr>
                <td>Pathology</td>
                <td>Analyzing tissue samples for disease classification</td>
                <td>Consistent, high-precision analysis; less variability</td>
                <td>2</td>
              </tr>
              <tr>
                <td>Cardiology</td>
                <td>Identifying anomalies in cardiac MRIs, predicting risk</td>
                <td>Early detection, improved patient outcomes</td>
                <td>1</td>
              </tr>
              <tr>
                <td>General Diagnostics</td>
                <td>Predicting disease progression, personalizing treatment</td>
                <td>Timely, tailored care; supports clinical decisions</td>
                <td>1</td>
              </tr>
              <tr>
                <td>Clinical Decision Support</td>
                <td>AI as a “second set of eyes” for clinicians</td>
                <td>Reduces diagnostic burden, enhances accuracy</td>
                <td>2</td>
              </tr>
            </tbody>
          </table>
        </table-wrap>
        <p id="p00013">To provide a comprehensive overview of how these technologies are being utilized across various clinical domains, <bold>Table 3</bold> summarizes the primary AI applications, their associated benefits, and current limitations across diverse medical specialties, including radiology, oncology, cardiology, and primary care.</p>
        <table-wrap id="T3">
          <label>Table 3.</label>
          <caption>
            <p>Overview of AI Applications Across Medical Specialties.</p>
          </caption>
          <table>
            <colgroup>
              <col width="1330" />
              <col width="1907" />
              <col width="1793" />
              <col width="1855" />
              <col width="1411" />
            </colgroup>
            <thead>
              <tr>
                <th valign="middle"><bold>Specialty</bold></th>
                <th valign="middle"><bold>AI Application Examples</bold></th>
                <th valign="middle"><bold>Key Benefits</bold></th>
                <th valign="middle"><bold>Limitations &amp; Risks</bold></th>
                <th valign="middle"><bold>Example References</bold></th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td valign="middle">Radiology</td>
                <td valign="middle">Detection of tumors, fractures, hemorrhages in imaging</td>
                <td valign="middle">Improved accuracy, reduced reading time</td>
                <td valign="middle">Data bias, over-reliance, explainability</td>
                <td valign="middle">50–52</td>
              </tr>
              <tr>
                <td valign="middle">Oncology</td>
                <td valign="middle">Cancer subtype classification, treatment response prediction</td>
                <td valign="middle">Personalized therapy, early intervention</td>
                <td valign="middle">Limited diverse datasets, validation gaps</td>
                <td valign="middle">53,54</td>
              </tr>
              <tr>
                <td valign="middle">Cardiology</td>
                <td valign="middle">Arrhythmia detection, risk stratification from ECG</td>
                <td valign="middle">Early warning, preventive care</td>
                <td valign="middle">Algorithmic bias, interoperability issues</td>
                <td valign="middle">55</td>
              </tr>
              <tr>
                <td valign="middle">Primary Care</td>
                <td valign="middle">Triage support, chronic disease management</td>
                <td valign="middle">Improved access, continuity of care</td>
                <td valign="middle">Privacy concerns, clinician acceptance</td>
                <td valign="middle">56</td>
              </tr>
              <tr>
                <td valign="middle">Dentistry</td>
                <td valign="middle">Caries detection, treatment planning from radiographs</td>
                <td valign="middle">Enhanced diagnostic consistency</td>
                <td valign="middle">Limited clinical integration, cost barriers</td>
                <td valign="middle">57,58</td>
              </tr>
              <tr>
                <td valign="middle">Nursing</td>
                <td valign="middle">Workflow optimization, patient monitoring</td>
                <td valign="middle">Reduced burnout, improved patient safety</td>
                <td valign="middle">Training requirements, ethical concerns</td>
                <td valign="middle">59,60</td>
              </tr>
              <tr>
                <td valign="middle">Administration</td>
                <td valign="middle">AI scribes, scheduling, billing automation</td>
                <td valign="middle">Reduced administrative burden, cost savings</td>
                <td valign="middle">Data security, implementation complexity</td>
                <td valign="middle">61</td>
              </tr>
            </tbody>
          </table>
        </table-wrap>
        <p id="p00014">These AI-powered tools are not just theoretical. In controlled studies, they have consistently matched or even outperformed human experts in specific diagnostic tasks. For example, DL models have demonstrated improved accuracy in predicting the spread of breast cancer to lymph nodes from MRI scans, compared to traditional diagnostic methods. This means that patients can receive more accurate diagnoses, with less variability between different clinicians, and often in a fraction of the time it would take for manual review.<sup><xref rid="b2" ref-type="bibr">2</xref></sup></p>
        <p id="p00015">For patients, the integration of AI into healthcare translates into earlier detection of disease, quicker access to treatment, and an improved likelihood of positive outcomes. For clinicians, AI acts as a reliable partner, an ever-attentive assistant that helps reduce diagnostic errors and manages the demanding task of analyzing vast amounts of data. This collaboration allows healthcare professionals to dedicate more time to direct patient care and complex clinical decisions, while AI takes on the heavy analytical work “behind the scenes”.<sup><xref rid="b3" ref-type="bibr">3</xref></sup></p>
        <p id="p00016">While AI’s technical capabilities are impressive, it is essential to remember that these tools are most effective when used in partnership with skilled clinicians. As AI continues to evolve, ongoing research and collaboration are needed to ensure these technologies are transparent, trustworthy, and seamlessly integrated into clinical workflows, always keeping patient well-being at the center.<sup><xref rid="b1" ref-type="bibr">1</xref></sup></p>
        <p id="p00017">In summary, AI is not just a tool for faster or more accurate diagnostics; it is a partner that, when thoughtfully integrated, can help clinicians deliver more timely, precise, and compassionate care.</p>
      </sec>
      <sec id="s3_2">
        <title>3.2 Intelligent Clinical Decision Support</title>
        <p id="p00018">AI is evolving far beyond the role of a diagnostic aid; it is emerging as a genuine cognitive partner for clinicians through the rise of next-generation Clinical Decision Support (CDS) systems. These sophisticated platforms integrate and interpret vast streams of patient information, ranging from structured data in electronic health records (EHRs) and laboratory results to unstructured sources such as clinical notes and genomic profiles. By processing this complex web of information, AI can reveal subtle patterns and relationships that often remain hidden to the human eye, offering insights that enhance both diagnosis and treatment planning. This holistic approach means that AI-driven CDS systems are no longer limited to sending out generic alerts or reminders. Instead, they can offer clinicians highly personalized, evidence-based recommendations tailored to each patient’s unique situation. For example, AI can help guide treatment choices and medication plans, supporting the shift toward precision medicine, where care is customized to the individual rather than based on broad guidelines.<sup><xref rid="b1" ref-type="bibr">1</xref>,<xref rid="b2" ref-type="bibr">2</xref></sup></p>
        <p id="p00019">One of the most transformative uses of these systems is in predictive analytics. By sifting through vast amounts of patient data, AI can forecast individual risks, such as the chance of developing a hospital-acquired infection or being readmitted after discharge. This allows healthcare teams to move from a reactive approach, where problems are addressed only after they arise, to a proactive model that anticipates issues and intervenes early. For patients, this can mean fewer complications, shorter hospital stays, and better overall outcomes. For healthcare systems, it can lead to more efficient use of resources and reduced costs.<sup><xref rid="b2" ref-type="bibr">2</xref></sup></p>
        <p id="p00020">Importantly, while AI brings powerful new capabilities, it is not meant to replace clinicians. Instead, it acts as a supportive partner, helping healthcare professionals make more informed, timely, and personalized decisions, while still relying on their expertise, empathy, and judgment to provide the best possible care.<sup><xref rid="b1" ref-type="bibr">1</xref></sup></p>
      </sec>
      <sec id="s3_3">
        <title>3.3 Operational and Administrative Transformation</title>
        <p id="p00021">AI is quietly revolutionizing the “behind the scenes” operations that keep healthcare running, tackling the everyday inefficiencies that often lead to clinician burnout and rising costs. By automating and optimizing administrative and logistical tasks, AI is helping to create a more sustainable, patient-centered healthcare system.</p>
        <p id="p00022">One of the most immediate impacts of AI is seen in clinical documentation. Natural Language Processing (NLP) tools can now listen in on patient-clinician conversations, automatically transcribe them, extract key medical information, and enter it directly into EHRs. This technology dramatically reduces the time clinicians spend on paperwork, freeing them to focus on what matters most: caring for patients. As a result, clinicians experience less administrative burden and more meaningful patient interactions.<sup><xref rid="b1" ref-type="bibr">1</xref>,<xref rid="b2" ref-type="bibr">2</xref></sup></p>
        <p id="p00022a">AI’s influence also extends to the very core of hospital logistics. Predictive and optimization algorithms are being used to forecast patient discharge times, manage bed availability, and streamline surgical suite scheduling. By anticipating patient flow and resource needs, these tools help hospitals reduce wait times, avoid bottlenecks, and make the best use of limited resources. This not only improves the patient experience but also eases the workload on staff and helps control operational costs.<sup><xref rid="b1" ref-type="bibr">1</xref>,<xref rid="b2" ref-type="bibr">2</xref></sup></p>
        <p id="p00023">The integration of AI into healthcare’s administrative foundation goes beyond improving efficiency; it’s about building systems that allow clinicians to spend more time with their patients and ensure that resources are used thoughtfully to deliver better care. As these technologies evolve, they hold the potential to ease administrative burdens, reduce burnout, lower costs, and ultimately create a more sustainable and patient-centered healthcare environment.<sup><xref rid="b1" ref-type="bibr">1</xref></sup></p>
        <p id="p00024">In short, AI is helping to lift the administrative weight off clinicians’ shoulders, allowing them to reconnect with the human side of medicine, while also making healthcare delivery more efficient and sustainable for the future.</p>
      </sec>
    </sec>
    <sec id="s4">
      <title>4. Challenges and Ethical Considerations</title>
      <p id="p00025">The remarkable potential of AI to transform healthcare comes with a set of deeply interconnected challenges that must be addressed to ensure its ethical, safe, and sustainable use. The critical balance between these innovative opportunities and the necessary ethical guardrails, such as mitigating bias and ensuring data privacy, is summarized in <bold>Figure 1</bold>.</p>
      <fig id="fig-1">
        <label>Figure 1</label>
        <caption>
          <p>Dual aspects of AI in healthcare: opportunities vs. challenges.</p>
        </caption>
        <graphic xlink:href="/xmlfiles/images/648d914fe6194b8aa8cc055e4545782c/fig-1.png" specific-use="word-width-pt:415.60;word-height-pt:167.55" />
      </fig>
      <p id="p00026">At the heart of AI’s power is its ability to learn from enormous amounts of sensitive patient data, ranging from electronic health records and genetic profiles to medical images. This reliance on personal information raises serious concerns about privacy, data breaches, and the risk of unauthorized use. Patients need to trust that their most intimate health details are protected. To build and maintain this trust, healthcare systems must implement strong governance frameworks, including strict data anonymization, secure data-sharing models like federated learning, and clear, adaptable consent processes that keep patients in control of their information.<sup><xref rid="b1" ref-type="bibr">1</xref>,<xref rid="b2" ref-type="bibr">2</xref></sup></p>
      <p id="p00027">Another major challenge is the risk of bias. AI systems learn from historical data, which often reflects existing inequalities in healthcare. If these systems are trained on data that underrepresents certain groups, they can unintentionally perpetuate or even worsen disparities, leading to less accurate diagnoses or poorer treatment recommendations for marginalized populations. To counteract this, it is essential to audit algorithms for bias, use diverse and representative datasets, and develop fairness-aware AI models. These steps are crucial for ensuring that AI supports equitable care for all patients, not just those who are already well-served by the system.<sup><xref rid="b1" ref-type="bibr">1</xref>,<xref rid="b2" ref-type="bibr">2</xref></sup></p>
      <p id="p00028">Transparency is another pressing issue. Many advanced AI models, especially deep learning systems, operate as “black boxes”; they can make highly accurate predictions, but their reasoning is often opaque, even to experts. This lack of explainability makes it difficult for clinicians to trust AI recommendations, complicates informed consent, and raises questions about accountability when errors occur. While the field of Explainable AI (XAI) is working to make these systems more interpretable, current methods often fall short of providing the clarity needed for high-stakes clinical decisions. Until more robust solutions are developed, clinicians and regulators must approach AI recommendations with caution and prioritize rigorous validation of these tools.<sup><xref rid="b2" ref-type="bibr">2</xref>,<xref rid="b3" ref-type="bibr">3</xref></sup></p>
      <p id="p00029">The relationship between AI and healthcare professionals is inherently complex. While AI offers powerful support, there are growing concerns that over-reliance could weaken core clinical skills or that poorly designed systems might disrupt established workflows and add new pressures. For AI to truly enhance care, it must be developed as a supportive partner rather than a replacement for human judgment. Equally important, clinicians need the right training and education to engage with these technologies confidently, critically, and ethically.<sup><xref rid="b1" ref-type="bibr">1</xref>,<xref rid="b2" ref-type="bibr">2</xref></sup></p>
      <p id="p00030">In summary, while AI holds extraordinary promise for healthcare, realizing its benefits requires a thoughtful, collaborative approach, one that prioritizes patient privacy, fairness, transparency, and the empowerment of clinicians at every step.</p>
    </sec>
    <sec id="s5">
      <title>5. Conclusions and Future Directions</title>
      <p id="p00031">AI in healthcare stands at a crossroads: it offers the promise of transformative improvements in diagnosis, treatment, and system resilience, but this promise is inseparable from a host of ethical and practical challenges. The future of AI in medicine will be shaped not just by technological innovation, but by the choices society makes about how to govern, design, and deploy these tools.</p>
      <p id="p00032">To truly harness AI’s potential, the healthcare community must put people (patients, clinicians, and the broader public) at the center of every decision. This means developing robust governance structures that enforce clear ethical guidelines and regulatory standards, always prioritizing patient safety, privacy, and autonomy throughout the AI lifecycle.<sup><xref rid="b1" ref-type="bibr">1</xref></sup></p>
      <p id="p00033">AI’s benefits must be shared by all, not just a privileged few. This requires ongoing audits for bias, building partnerships to create diverse and representative datasets, and embedding equity checks as a core requirement in every stage of AI development and deployment. Only by actively addressing disparities can AI become a force for health equity rather than a driver of deeper divides.<sup><xref rid="b1" ref-type="bibr">1</xref></sup></p>
      <p id="p00034">Trust is the foundation of effective healthcare. As AI systems become more complex, it is essential to make their decisions as transparent and understandable as possible. While current explainability methods have limitations—especially for complex, “black box” models—ongoing research in Explainable AI (XAI) and rigorous validation are crucial for building systems that clinicians and patients can trust.<sup><xref rid="b2" ref-type="bibr">2</xref></sup></p>
      <p id="p00035">For clinicians, it is essential to engage in continuous education regarding AI tools, developing new competencies to understand the range of reasonable model outputs, monitor performance over time, and avoid skill decline, as highlighted in discussions of forward‑looking responsibilities for medical AI use by Sand et al.<sup><xref rid="b62" ref-type="bibr">62</xref></sup> Clinicians should also advocate for transparent and explainable systems, since lack of explainability threatens informed consent, accountability, and safe decision‑making in clinical AI, as emphasized in the trustworthiness framework for medical AI described by Goisauf et al.<sup><xref rid="b63" ref-type="bibr">63</xref></sup> and in broader ethical analyses discussed by Goktas et al.<sup><xref rid="b64" ref-type="bibr">64</xref></sup> Maintaining meaningful human oversight in all AI‑assisted decisions is a core requirement of human‑centric AI approaches, which stress that AI should supplement rather than supplant clinical expertise, as argued by Howard et al.<sup><xref rid="b65" ref-type="bibr">65</xref></sup> Taken together, these steps help ensure that AI augments, rather than replaces, professional judgment, thereby strengthening the quality and safety of patient care, consistent with the multidimensional roadmap for trustworthy healthcare AI.<sup><xref rid="b65" ref-type="bibr">65</xref></sup></p>
      <p id="p00036">For policymakers, the priority lies in developing adaptive regulatory frameworks that can keep pace with rapidly evolving AI technologies, address liability and oversight, and operationalize principles such as human agency, transparency, and fairness, as discussed in the policy‑oriented analysis of ethical AI integration.<sup><xref rid="b66" ref-type="bibr">66</xref></sup> Policy initiatives should also fund audits and mitigation of algorithmic bias, reflecting recommendations to perform systematic fairness assessments and bias mitigation across the AI lifecycle, as outlined by Ueda et al.<sup><xref rid="b67" ref-type="bibr">67</xref></sup> and Chen et al.<sup><xref rid="b68" ref-type="bibr">68</xref></sup> At the same time, regulators are urged to promote equitable access to AI innovations and ensure they do not exacerbate health disparities, aligning with calls for socially responsible and globally inclusive AI governance in healthcare. Such actions are crucial to ensure that the benefits of AI are distributed fairly and responsibly across society, a central theme in the systematic review of privacy, oversight, and patient perceptions in AI‑driven healthcare by Williamson et al.<sup><xref rid="b69" ref-type="bibr">69</xref></sup></p>
      <p id="p00037">For patients, becoming informed participants in their own AI‑enabled care is key. Analyses of trust and patient perspectives emphasize that patients must understand how AI contributes to decision-making, its limitations, and how uncertainty is managed to sustain trust. This includes advocating for robust data privacy protections and clear rules on data use, ownership, and confidentiality, consistent with the examination of GDPR‑aligned privacy, consent, and data‑sharing challenges in AI‑driven health systems.<sup><xref rid="b67" ref-type="bibr">67</xref></sup> Furthermore, engaging patients and communities in the co‑design of AI‑enabled care pathways, through participatory design and stakeholder engagement across the AI lifecycle, is identified as critical for equitable and trustworthy AI, as underscored in recommendations for multi‑stakeholder engagement and human‑centered AI governance in healthcare.<sup><xref rid="b68" ref-type="bibr">68</xref></sup> Empowering patients in these ways helps align AI development with patient‑centered values and fosters greater trust in emerging technologies, echoing calls to enhance patient agency and trust in AI‑mediated care.</p>
      <p id="p00037a">For developers, a principled approach to design is imperative. Frameworks for trustworthy and fair medical AI emphasize prioritizing explainability so that end users can interpret model behavior, embedding fairness‑by‑design via representative datasets, debiasing strategies, and equity‑oriented evaluation, and ensuring robust privacy and security measures from the outset.<sup><xref rid="b68" ref-type="bibr">68</xref></sup> Beyond technical features, developers are encouraged to adopt human‑centered, multi‑stakeholder processes, involving clinicians, patients, ethicists, and regulators throughout problem formulation, development, validation, deployment, and post‑market monitoring. By embedding these principles, developers can create AI tools that are not only technically effective and innovative but also ethically sound, socially trustworthy, and more readily integrated into routine clinical workflows, as advocated across these interdisciplinary frameworks for responsible healthcare AI.<sup><xref rid="b69" ref-type="bibr">69</xref></sup></p>
      <p id="p00038">No single group can solve these challenges alone. The future of AI in healthcare depends on sustained collaboration between technologists, clinicians, ethicists, policymakers, and, most importantly, the patients. By working together, these stakeholders can ensure that AI solutions are not only innovative but also clinically relevant, ethically sound, and socially responsible.<sup><xref rid="b1" ref-type="bibr">1</xref></sup></p>
      <p id="p00039">The journey ahead is not just about perfecting algorithms, but about making principled choices that put people first. The way society navigates the balance between innovation, ethics, and equity will determine whether AI becomes a tool for universal betterment or a source of new disparities. With thoughtful, collaborative action, AI can fulfill its promise to reshape medicine for the good of all.</p>
    </sec>
    <sec id="s6">
      <title>AI Use Declaration</title>
      <p id="p00040">During the preparation and revision of this manuscript, the authors used the following artificial intelligence–assisted tool: Grammarly, solely for the purpose of improving the clarity, readability, and linguistic quality of the text. All scientific content, including the study design, data acquisition, data analysis, interpretation of results, discussion, and conclusions, was conceived, generated, and critically verified by the authors. The authors take full responsibility for the accuracy, originality, and integrity of the published work.</p>
    </sec>
    <sec id="s7">
      <title>Author Contributions</title>
      <p id="p00041">Irina Negut and Anita Ioana Visan contributed equally to the following: Conceptualization, Funding acquisition, Formal analysis, Investigation, Methodology, Project administration, Software, Supervision, Visualization, Writing – original draft, Writing – review &amp; editing. All authors have read and agreed to the published version of the manuscript.</p>
    </sec>
    <sec id="s8">
      <title>Conflicts of Interest</title>
      <p id="p00042">The authors declare they have no competing interests.</p>
    </sec>
    <sec id="s9">
      <title>Consent for Publication</title>
      <p id="p00043">Not applicable.</p>
    </sec>
    <sec id="s10">
      <title>Data Availability Statement</title>
      <p id="p00044">Not applicable.</p>
    </sec>
    <sec id="s11">
      <title>Ethics Approval and Consent to Participate</title>
      <p id="p00045">Not applicable.</p>
    </sec>
    <sec id="s12">
      <title>Funding</title>
      <p id="p00046">This work was supported by a grant from the Ministry of Education and Research, CCCDI-UEFISCDI, project number PN-IV-PCB-RO-MD-2024-0254, within PNCDI IV. We also acknowledge the Romanian Ministry of Research, Innovation and Digitalization under the Romanian National Nucleu Program LAPLAS VII—contract No. 30N/2023.</p>
    </sec>
    <sec id="pre1" sec-type="how-to-cite">
      <title>Cite this Article</title>
      <p id="p00047">Negut, I., Visan, A.I. Artificial Intelligence in Healthcare: Review. <italic>SmartHealth</italic>. 2026;2(1):1–13. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.64187/sh.2026.v2.i1.001">https://doi.org/10.64187/sh.2026.v2.i1.001</ext-link></p>
    </sec>
    <sec id="pre2">
      <title>Copyright</title>
      <p id="p00048">© 2026 by the author(s). Published by LUMOSCIENCE PUBLISHING LIMITED. This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 International License (<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">CC BY 4.0</ext-link>), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author(s) and the source are properly credited, and any changes made are indicated.</p>
    </sec>
    <sec id="pre3">
      <title>Disclaimer</title>
      <p id="p00049">All statements, views, and opinions expressed in this article are solely those of the author(s) and do not necessarily reflect those of their affiliated institutions, the editors, reviewers, or LumoScience Publishing. Any products, methods, or claims mentioned are not guaranteed or endorsed by LumoScience Publishing. The publisher and editors disclaim any responsibility for harm to people or property resulting from the use of any information, procedures, or materials discussed in the publication. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <title>References</title>
      <ref id="b1">
        <label>[1]</label>
        <mixed-citation>Jiang, F., Jiang, Y., Zhi, H., et al. Artificial intelligence in healthcare: past, present and future. <italic>Stroke Vasc Neurol</italic>. 2017;2(4):230–243. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1136/svn-2017-000101">https://doi.org/10.1136/svn-2017-000101</ext-link></mixed-citation>
      </ref>
      <ref id="b2">
        <label>[2]</label>
        <mixed-citation>Rajpurkar, P., Chen, E., Banerjee, O., et al. AI in health and medicine. <italic>Nat Med</italic>. 2022;28(1):31–38.  <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1038/s41591-021-01614-0">https://doi.org/10.1038/s41591-021-01614-0</ext-link></mixed-citation>
      </ref>
      <ref id="b3">
        <label>[3]</label>
        <mixed-citation>Yu, K.H., Beam, A.L., Kohane, I.S. Artificial intelligence in healthcare. <italic>Nat Biomed Eng</italic>. 2018;2(10):719–731.  <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1038/s41551-018-0305-z">https://doi.org/10.1038/s41551-018-0305-z</ext-link></mixed-citation>
      </ref>
      <ref id="b4">
        <label>[4]</label>
        <mixed-citation>Alowais, S.A., Alghamdi, S.S., Alsuhebany, N., et al. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. <italic>BMC Med Educ</italic>. 2023;23(1):689. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1186/s12909-023-04698-z">https://doi.org/10.1186/s12909-023-04698-z</ext-link></mixed-citation>
      </ref>
      <ref id="b5">
        <label>[5]</label>
        <mixed-citation>Busnatu, Ș., Niculescu, A.G., Bolocan, A., et al. Clinical applications of artificial intelligence—an updated overview. <italic>J Clin Med</italic>. 2022;11(8):2265. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3390/jcm11082265">https://doi.org/10.3390/jcm11082265</ext-link></mixed-citation>
      </ref>
      <ref id="b6">
        <label>[6]</label>
        <mixed-citation>Reddy, S., Fox, J., Purohit, M.P. Artificial intelligence-enabled healthcare delivery. <italic>J R Soc Med</italic>. 2019;112(1):22–28. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1177/0141076818815510">https://doi.org/10.1177/0141076818815510</ext-link></mixed-citation>
      </ref>
      <ref id="b7">
        <label>[7]</label>
        <mixed-citation>Iqbal, U., Tanweer, A., Rahmanti, A.R., et al. Impact of large language model (ChatGPT) in healthcare: an umbrella review and evidence synthesis. <italic>J Biomed Sci</italic>. 2025;32(1):45. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1186/s12929-025-01131-z">https://doi.org/10.1186/s12929-025-01131-z</ext-link></mixed-citation>
      </ref>
      <ref id="b8">
        <label>[8]</label>
        <mixed-citation>Hassan, H., Zipursky, A.R., Rabbani, N., et al. Special topic on burnout: clinical implementation of artificial intelligence scribes in healthcare: a systematic review. <italic>Appl Clin Inform</italic>. 2025;16(4):1121–1135.  <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1055/a-2597-2017">https://doi.org/10.1055/a-2597-2017</ext-link></mixed-citation>
      </ref>
      <ref id="b9">
        <label>[9]</label>
        <mixed-citation>Rong, G., Mendez, A., Assi, E.B., et al. Artificial intelligence in healthcare: review and prediction case studies. <italic>Engineering</italic>. 2020;6(3):291–301. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.eng.2019.08.015">https://doi.org/10.1016/j.eng.2019.08.015</ext-link></mixed-citation>
      </ref>
      <ref id="b10">
        <label>[10]</label>
        <mixed-citation>Yin, J., Ngiam, K.Y., Teo, H.H. Role of artificial intelligence applications in real-life clinical practice: systematic review. <italic>J Med Internet Res</italic>. 2021;23(4):e25759. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.2196/25759">https://doi.org/10.2196/25759</ext-link></mixed-citation>
      </ref>
      <ref id="b11">
        <label>[11]</label>
        <mixed-citation>Shaik, T., Tao, X., Higgins, N., et al. Remote patient monitoring using artificial intelligence: Current state, applications, and challenges. <italic>WIREs Data Mining Knowl Discov</italic>. 2023;13(2):e1485. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1002/widm.1485">https://doi.org/10.1002/widm.1485</ext-link></mixed-citation>
      </ref>
      <ref id="b12">
        <label>[12]</label>
        <mixed-citation>Johnson, K.B., Wei, W.Q., Weeraratne, D., et al. Precision medicine, AI, and the future of personalized health care. <italic>Clin Transl Sci</italic>. 2021;14(1):86–93. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1111/cts.12884">https://doi.org/10.1111/cts.12884</ext-link></mixed-citation>
      </ref>
      <ref id="b13">
        <label>[13]</label>
        <mixed-citation>Lee, S.J., Poon, J., Jindarojanakul, A., et al. Artificial intelligence in dentistry: Exploring emerging applications and future prospects. <italic>J Dent</italic>. 2025;155:105648. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.jdent.2025.105648">https://doi.org/10.1016/j.jdent.2025.105648</ext-link></mixed-citation>
      </ref>
      <ref id="b14">
        <label>[14]</label>
        <mixed-citation>Weerakoon, A.T., Girdis, T., Peters, O. Artificial Intelligence in Australian Dental and General Healthcare: A Scoping Review. <italic>Aust Dent J</italic>. 2025;70(4):209–256. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1111/adj.70000">https://doi.org/10.1111/adj.70000</ext-link></mixed-citation>
      </ref>
      <ref id="b15">
        <label>[15]</label>
        <mixed-citation>Nashwan, A.J., Cabrega, J.A., Othman, M.I., et al. The evolving role of nursing informatics in the era of artificial intelligence. <italic>Int Nurs Rev</italic>. 2025;72(1):e13084. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1111/inr.13084">https://doi.org/10.1111/inr.13084</ext-link></mixed-citation>
      </ref>
      <ref id="b16">
        <label>[16]</label>
        <mixed-citation>Han, R., Acosta, J.N., Shakeri, Z., et al. Randomised controlled trials evaluating artificial intelligence in clinical practice: a scoping review. <italic>Lancet Digit Health</italic>. 2024;6(5):E367–E373. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/S2589-7500(24)00047-5">https://doi.org/10.1016/S2589-7500(24)00047-5</ext-link></mixed-citation>
      </ref>
      <ref id="b17">
        <label>[17]</label>
        <mixed-citation>Chen, Z., Hao, J., Sun, H., et al. Applications of digital health technologies and artificial intelligence algorithms in COPD: systematic review. <italic>BMC Med Inform Decis Mak</italic>. 2025;25(1):77. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1186/s12911-025-02870-7">https://doi.org/10.1186/s12911-025-02870-7</ext-link></mixed-citation>
      </ref>
      <ref id="b18">
        <label>[18]</label>
        <mixed-citation>Morley, J., Machado, C.C., Burr, C., et al. The ethics of AI in health care: a mapping review. <italic>Soc Sci Med</italic>. 2020;260:113172. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.socscimed.2020.113172">https://doi.org/10.1016/j.socscimed.2020.113172</ext-link></mixed-citation>
      </ref>
      <ref id="b19">
        <label>[19]</label>
        <mixed-citation>Norori, N., Hu, Q., Aellen, F.M., et al. Addressing bias in big data and AI for health care: A call for open science. <italic>Patterns</italic>. 2021;2(10):100347. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.patter.2021.100347">https://doi.org/10.1016/j.patter.2021.100347</ext-link></mixed-citation>
      </ref>
      <ref id="b20">
        <label>[20]</label>
        <mixed-citation>Lysaght, T., Lim, H.Y., Xafis, V., et al. AI-assisted decision-making in healthcare: the application of an ethics framework for big data in health and research. <italic>Asian Bioeth Rev</italic>. 2019;11(3):299–314.  <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/s41649-019-00096-0">https://doi.org/10.1007/s41649-019-00096-0</ext-link></mixed-citation>
      </ref>
      <ref id="b21">
        <label>[21]</label>
        <mixed-citation>Zhang, J., Zhang, Z.M. Ethics and governance of trustworthy medical artificial intelligence. <italic>BMC Med Inform Decis Mak</italic>. 2023;23(1):7. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1186/s12911-023-02103-9">https://doi.org/10.1186/s12911-023-02103-9</ext-link></mixed-citation>
      </ref>
      <ref id="b22">
        <label>[22]</label>
        <mixed-citation>Esmaeilzadeh, P. Challenges and strategies for wide-scale artificial intelligence (AI) deployment in healthcare practices: A perspective for healthcare organizations. <italic>Artif Intell Med</italic>. 2024;151:102861.  <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.artmed.2024.102861">https://doi.org/10.1016/j.artmed.2024.102861</ext-link></mixed-citation>
      </ref>
      <ref id="b23">
        <label>[23]</label>
        <mixed-citation>Mennella, C., Maniscalco, U., De Pietro, G., et al. Ethical and regulatory challenges of AI technologies in healthcare: A narrative review. <italic>Heliyon</italic>. 2024;10(4):e26297. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.heliyon.2024.e26297">https://doi.org/10.1016/j.heliyon.2024.e26297</ext-link></mixed-citation>
      </ref>
      <ref id="b24">
        <label>[24]</label>
        <mixed-citation>Albahri, A.S., Duhaim, A.M., Fadhel, M.A., et al. A systematic review of trustworthy and explainable artificial intelligence in healthcare: Assessment of quality, bias risk, and data fusion. <italic>Inf Fusion</italic>. 2023;96:156–191.  <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.inffus.2023.03.008">https://doi.org/10.1016/j.inffus.2023.03.008</ext-link></mixed-citation>
      </ref>
      <ref id="b25">
        <label>[25]</label>
        <mixed-citation>Kelly, C.J., Karthikesalingam, A., Suleyman, M., et al. Key challenges for delivering clinical impact with artificial intelligence. <italic>BMC Med</italic>. 2019;17(1):195. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1186/s12916-019-1426-2">https://doi.org/10.1186/s12916-019-1426-2</ext-link></mixed-citation>
      </ref>
      <ref id="b26">
        <label>[26]</label>
        <mixed-citation>Mittermaier, M., Raza, M.M., Kvedar, J.C. Bias in AI-based models for medical applications: challenges and mitigation strategies. <italic>NPJ Digit Med</italic>. 2023;6(1):113. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1038/s41746-023-00858-z">https://doi.org/10.1038/s41746-023-00858-z</ext-link></mixed-citation>
      </ref>
      <ref id="b27">
        <label>[27]</label>
        <mixed-citation>Li, Y.H., Li, Y.L., Wei, M.Y., et al. Innovation and challenges of artificial intelligence technology in personalized healthcare. <italic>Sci Rep</italic>. 2024;14(1):18994. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1038/s41598-024-70073-7">https://doi.org/10.1038/s41598-024-70073-7</ext-link></mixed-citation>
      </ref>
      <ref id="b28">
        <label>[28]</label>
        <mixed-citation>Warraich, H.J., Tazbaz, T., Califf, R.M. FDA perspective on the regulation of artificial intelligence in health care and biomedicine. <italic>JAMA</italic>. 2025;333(3):241–247. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1001/jama.2024.21451">https://doi.org/10.1001/jama.2024.21451</ext-link></mixed-citation>
      </ref>
      <ref id="b29">
        <label>[29]</label>
        <mixed-citation>Cadario, R., Longoni, C., Morewedge, C.K. Understanding, explaining, and utilizing medical artificial intelligence. <italic>Nat Hum Behav</italic>. 2021;5(12):1636–1642. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1038/s41562-021-01146-0">https://doi.org/10.1038/s41562-021-01146-0</ext-link></mixed-citation>
      </ref>
      <ref id="b30">
        <label>[30]</label>
        <mixed-citation>Yang, C.C. Explainable artificial intelligence for predictive modeling in healthcare. <italic>J Healthc Inform Res</italic>. 2022;6(2):228–239. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/s41666-022-00114-1">https://doi.org/10.1007/s41666-022-00114-1</ext-link></mixed-citation>
      </ref>
      <ref id="b31">
        <label>[31]</label>
        <mixed-citation>Ali, S., Akhlaq, F., Imran, A.S., et al. The enlightening role of explainable artificial intelligence in medical &amp; healthcare domains: A systematic literature review. <italic>Comput Biol Med</italic>. 2023;166:107555.  <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.compbiomed.2023.107555">https://doi.org/10.1016/j.compbiomed.2023.107555</ext-link></mixed-citation>
      </ref>
      <ref id="b32">
        <label>[32]</label>
        <mixed-citation>Chaddad, A., Peng, J., Xu, J., et al. Survey of explainable AI techniques in healthcare. <italic>Sensors (Basel)</italic>. 2023;23(2):634. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3390/s23020634">https://doi.org/10.3390/s23020634</ext-link></mixed-citation>
      </ref>
      <ref id="b33">
        <label>[33]</label>
        <mixed-citation>Asan, O., Bayrak, A.E., Choudhury, A. Artificial intelligence and human trust in healthcare: focus on clinicians. <italic>J Med Internet Res</italic>. 2020;22(6):e15154. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.2196/15154">https://doi.org/10.2196/15154</ext-link></mixed-citation>
      </ref>
      <ref id="b34">
        <label>[34]</label>
        <mixed-citation>Saraswat, D., Bhattacharya, P., Verma, A., et al. Explainable AI for healthcare 5.0: opportunities and challenges. <italic>IEEE Access</italic>. 2022;10:84486–84517. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1109/ACCESS.2022.3197671">https://doi.org/10.1109/ACCESS.2022.3197671</ext-link></mixed-citation>
      </ref>
      <ref id="b35">
        <label>[35]</label>
        <mixed-citation>Wysocki, O., Davies, J.K., Vigo, M., et al. Assessing the communication gap between AI models and healthcare professionals: Explainability, utility and trust in AI-driven clinical decision-making. <italic>Artif Intell</italic>. 2023;316:103839. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.artint.2022.103839">https://doi.org/10.1016/j.artint.2022.103839</ext-link></mixed-citation>
      </ref>
      <ref id="b36">
        <label>[36]</label>
        <mixed-citation>Markus, A.F., Kors, J.A., Rijnbeek, P.R. The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies. <italic>J Biomed Inform</italic>. 2021;113:103655. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.jbi.2020.103655">https://doi.org/10.1016/j.jbi.2020.103655</ext-link></mixed-citation>
      </ref>
      <ref id="b37">
        <label>[37]</label>
        <mixed-citation>Ghassemi, M., Oakden-Rayner, L., Beam, A.L. The false hope of current approaches to explainable artificial intelligence in health care. <italic>Lancet Digit Health</italic>. 2021;3(11):e745–e750. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/S2589-7500(21)00208-9">https://doi.org/10.1016/S2589-7500(21)00208-9</ext-link></mixed-citation>
      </ref>
      <ref id="b38">
        <label>[38]</label>
        <mixed-citation>Osnat, B. Patient perspectives on artificial intelligence in healthcare: A global scoping review of benefits, ethical concerns, and implementation strategies. <italic>Int J Med Inform</italic>. 2025;203:106007. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.ijmedinf.2025.106007">https://doi.org/10.1016/j.ijmedinf.2025.106007</ext-link></mixed-citation>
      </ref>
      <ref id="b39">
        <label>[39]</label>
        <mixed-citation>Abdelmohsen, S.A., Al-jabri, M.M. Artificial intelligence applications in healthcare: a systematic review of their impact on nursing practice and patient outcomes. <italic>J Nurs Scholarsh</italic>. 2025;57(6):957–966. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1111/jnu.70040">https://doi.org/10.1111/jnu.70040</ext-link></mixed-citation>
      </ref>
      <ref id="b40">
        <label>[40]</label>
        <mixed-citation>Joo, J.Y., Liu, M.F., Mu-Hsing, H. Nurses’ perceptions of artificial intelligence adoption in healthcare: A qualitative systematic review. <italic>Nurse Educ Pract</italic>. 2025;88:104542. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.nepr.2025.104542">https://doi.org/10.1016/j.nepr.2025.104542</ext-link></mixed-citation>
      </ref>
      <ref id="b41">
        <label>[41]</label>
        <mixed-citation>Sharma, M., Savage, C., Nair, M., et al. Artificial intelligence applications in health care practice: scoping review. <italic>J Med Internet Res</italic>. 2022;24(10):e40238. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.2196/40238">https://doi.org/10.2196/40238</ext-link></mixed-citation>
      </ref>
      <ref id="b42">
        <label>[42]</label>
        <mixed-citation>Secinaro, S., Calandra, D., Secinaro, A., et al. The role of artificial intelligence in healthcare: a structured literature review. <italic>BMC Med Inform Decis Mak</italic>. 2021;21(1):125. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1186/s12911-021-01488-9">https://doi.org/10.1186/s12911-021-01488-9</ext-link></mixed-citation>
      </ref>
      <ref id="b43">
        <label>[43]</label>
        <mixed-citation>Zahlan, A., Ranjan, R.P., Hayes, D. Artificial intelligence innovation in healthcare: Literature review, exploratory analysis, and future research. <italic>Technol Soc</italic>. 2023;74(4):102321. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.techsoc.2023.102321">https://doi.org/10.1016/j.techsoc.2023.102321</ext-link></mixed-citation>
      </ref>
      <ref id="b44">
        <label>[44]</label>
        <mixed-citation>Siala, H., Wang, Y. SHIFTing artificial intelligence to be responsible in healthcare: A systematic review. <italic>Soc Sci Med</italic>. 2022;296:114782. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.socscimed.2022.114782">https://doi.org/10.1016/j.socscimed.2022.114782</ext-link></mixed-citation>
      </ref>
      <ref id="b45">
        <label>[45]</label>
        <mixed-citation>Dicuonzo, G., Donofrio, F., Fusco, A., et al. Healthcare system: Moving forward with artificial intelligence. <italic>Technovation</italic>. 2023;120(3):102510. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.technovation.2022.102510">https://doi.org/10.1016/j.technovation.2022.102510</ext-link></mixed-citation>
      </ref>
      <ref id="b46">
        <label>[46]</label>
        <mixed-citation>Chustecki, M. Benefits and risks of AI in health care: Narrative review. <italic>Interact J Med Res</italic>. 2024;13(1):e53616. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.2196/53616">https://doi.org/10.2196/53616</ext-link></mixed-citation>
      </ref>
      <ref id="b47">
        <label>[47]</label>
        <mixed-citation>Ciecierski-Holmes, T., Singh, R., Axt, M., et al. Artificial intelligence for strengthening healthcare systems in low-and middle-income countries: a systematic scoping review. <italic>NPJ Digit Med</italic>. 2022;5(1):162. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1038/s41746-022-00700-y">https://doi.org/10.1038/s41746-022-00700-y</ext-link></mixed-citation>
      </ref>
      <ref id="b48">
        <label>[48]</label>
        <mixed-citation>López, D.M., Rico-Olarte, C., Blobel, B., et al. Challenges and solutions for transforming health ecosystems in low-and middle-income countries through artificial intelligence. <italic>Front Med (Lausanne)</italic>. 2022;9:958097. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fmed.2022.958097">https://doi.org/10.3389/fmed.2022.958097</ext-link></mixed-citation>
      </ref>
      <ref id="b49">
        <label>[49]</label>
        <mixed-citation>Botha, N.N., Segbedzi, C.E., Dumahasi, V.K., et al. Artificial intelligence in healthcare: a scoping review of perceived threats to patient rights and safety. <italic>Arch Public Health</italic>. 2024;82(1):188. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1186/s13690-024-01414-1">https://doi.org/10.1186/s13690-024-01414-1</ext-link></mixed-citation>
      </ref>
      <ref id="b50">
        <label>[50]</label>
        <mixed-citation>Rajpurkar, P., Irvin, J., Zhu, K., et al. CheXNet: Radiologist-level pneumonia detection on chest x-rays with deep learning. arXiv preprint. arXiv:1711.05225. 2017. <ext-link ext-link-type="uri" xlink:href="https://arxiv.org/abs/1711.05225">https://arxiv.org/abs/1711.05225</ext-link></mixed-citation>
      </ref>
      <ref id="b51">
        <label>[51]</label>
        <mixed-citation>Gulshan, V., Peng, L., Coram, M., et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. <italic>JAMA</italic>. 2016;316(22):2402–2410. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1001/jama.2016.17216">https://doi.org/10.1001/jama.2016.17216</ext-link></mixed-citation>
      </ref>
      <ref id="b52">
        <label>[52]</label>
        <mixed-citation>Chilamkurthy, S., Ghosh, R., Tanamala, S., et al. Deep learning algorithms for detection of critical findings in head CT scans: a retrospective study. <italic>Lancet</italic>. 2018;392(10162):2388–2396. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/S0140-6736(18)31645-3">https://doi.org/10.1016/S0140-6736(18)31645-3</ext-link></mixed-citation>
      </ref>
      <ref id="b53">
        <label>[53]</label>
        <mixed-citation>Esteva, A., Kuprel, B., Novoa, R.A., et al. Dermatologist-level classification of skin cancer with deep neural networks. <italic>Nature</italic>. 2017;542(7639):115–118. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1038/nature21056">https://doi.org/10.1038/nature21056</ext-link></mixed-citation>
      </ref>
      <ref id="b54">
        <label>[54]</label>
        <mixed-citation>Kourou, K., Exarchos, T.P., Exarchos, K.P., et al. Machine learning applications in cancer prognosis and prediction. <italic>Comput Struct Biotechnol J</italic>. 2014;13:8–17. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.csbj.2014.11.005">https://doi.org/10.1016/j.csbj.2014.11.005</ext-link></mixed-citation>
      </ref>
      <ref id="b55">
        <label>[55]</label>
        <mixed-citation>Attia, Z.I., Noseworthy, P.A., Lopez-Jimenez, F., et al. An artificial intelligence-enabled ECG algorithm for the identification of patients with atrial fibrillation during sinus rhythm: a retrospective analysis of outcome prediction. <italic>Lancet</italic>. 2019;394(10201):861–867. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/S0140-6736(19)31721-0">https://doi.org/10.1016/S0140-6736(19)31721-0</ext-link></mixed-citation>
      </ref>
      <ref id="b56">
        <label>[56]</label>
        <mixed-citation>Fernandes, M., Vieira, S.M., Leite, F., et al. Clinical decision support systems for triage in the emergency department using intelligent systems: a review. <italic>Artif Intell Med</italic>. 2020;102:101762. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.artmed.2019.101762">https://doi.org/10.1016/j.artmed.2019.101762</ext-link></mixed-citation>
      </ref>
      <ref id="b57">
        <label>[57]</label>
        <mixed-citation>Lee, J.H., Kim, D.H., Jeong, S.N., et al. Detection and diagnosis of dental caries using a deep learning-based convolutional neural network algorithm. <italic>J Dent</italic>. 2018;77:106–111. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.jdent.2018.07.015">https://doi.org/10.1016/j.jdent.2018.07.015</ext-link></mixed-citation>
      </ref>
      <ref id="b58">
        <label>[58]</label>
        <mixed-citation>Schwendicke, F., Golla, T., Dreher, M., et al. Convolutional neural networks for dental image diagnostics: A scoping review. <italic>J Dent</italic>. 2019;91:103226. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.jdent.2019.103226">https://doi.org/10.1016/j.jdent.2019.103226</ext-link></mixed-citation>
      </ref>
      <ref id="b59">
        <label>[59]</label>
        <mixed-citation>Hak, F., Guimarães, T., Santos, M. Towards effective clinical decision support systems: A systematic review. <italic>PLoS One</italic>. 2022;17(8):e0272846. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1371/journal.pone.0272846">https://doi.org/10.1371/journal.pone.0272846</ext-link></mixed-citation>
      </ref>
      <ref id="b60">
        <label>[60]</label>
        <mixed-citation>Sendak, M.P., Ratliff, W., Sarro, D., et al. Real-world integration of a sepsis deep learning technology into routine clinical care: implementation study. <italic>JMIR Med Inform</italic>. 2020;8(7):e15182. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.2196/15182">https://doi.org/10.2196/15182</ext-link></mixed-citation>
      </ref>
      <ref id="b61">
        <label>[61]</label>
        <mixed-citation>Davenport, T., Kalakota, R. The potential for artificial intelligence in healthcare. <italic>Future Healthc J</italic>. 2019;6(2):94–98. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.7861/futurehosp.6-2-94">https://doi.org/10.7861/futurehosp.6-2-94</ext-link></mixed-citation>
      </ref>
      <ref id="b62">
        <label>[62]</label>
        <mixed-citation>Sand, M., Durán, J.M., Jongsma, K.R. Responsibility beyond design: Physicians’ requirements for ethical medical AI. <italic>Bioethics</italic>. 2022;36(2):162–169. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1111/bioe.12887">https://doi.org/10.1111/bioe.12887</ext-link></mixed-citation>
      </ref>
      <ref id="b63">
        <label>[63]</label>
        <mixed-citation>Goisauf, M., Cano Abadía, M., Akyüz, K., et al. Trust, Trustworthiness, and the Future of Medical AI: Outcomes of an Interdisciplinary Expert Workshop. <italic>J Med Internet Res</italic>. 2025;27:e71236. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.2196/71236">https://doi.org/10.2196/71236</ext-link></mixed-citation>
      </ref>
      <ref id="b64">
        <label>[64]</label>
        <mixed-citation>Goktas, P., Grzybowski, A. Shaping the future of healthcare: ethical clinical challenges and pathways to trustworthy AI. <italic>J Clin Med</italic>. 2025;14(5):1605. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3390/jcm14051605">https://doi.org/10.3390/jcm14051605</ext-link></mixed-citation>
      </ref>
      <ref id="b65">
        <label>[65]</label>
        <mixed-citation>Howard, J.P., Zhang, Q., Salih, A.M., et al. Artificial intelligence in cardiovascular imaging: risks, mitigations and the path to safe implementation. <italic>Heart</italic>. Published online first: June 27, 2025. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1136/heartjnl-2024-324612">https://doi.org/10.1136/heartjnl-2024-324612</ext-link></mixed-citation>
      </ref>
      <ref id="b66">
        <label>[66]</label>
        <mixed-citation>Weiner, E.B., Dankwa-Mullan, I., Nelson, W.A., et al. Ethical challenges and evolving strategies in the integration of artificial intelligence into clinical practice. <italic>PLOS Digit Health</italic>. 2025;4(4):e0000810. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1371/journal.pdig.0000810">https://doi.org/10.1371/journal.pdig.0000810</ext-link></mixed-citation>
      </ref>
      <ref id="b67">
        <label>[67]</label>
        <mixed-citation>Ueda, D., Kakinuma, T., Fujita, S., et al. Fairness of artificial intelligence in healthcare: review and recommendations. <italic>Jpn J Radiol</italic>. 2024;42(1):3–15. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/s11604-023-01474-3">https://doi.org/10.1007/s11604-023-01474-3</ext-link></mixed-citation>
      </ref>
      <ref id="b68">
        <label>[68]</label>
        <mixed-citation>Chen, R.J., Wang, J.J., Williamson, D.F., et al. Algorithmic fairness in artificial intelligence for medicine and healthcare. <italic>Nat Biomed Eng</italic>. 2023;7(6):719–742. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1038/s41551-023-01056-8">https://doi.org/10.1038/s41551-023-01056-8</ext-link></mixed-citation>
      </ref>
      <ref id="b69">
        <label>[69]</label>
        <mixed-citation>Williamson, S.M., Prybutok, V. Balancing privacy and progress: a review of privacy challenges, systemic oversight, and patient perceptions in AI-driven healthcare. <italic>Appl Sci</italic>. 2024;14(2):675. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3390/app14020675">https://doi.org/10.3390/app14020675</ext-link></mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>