Barriers and challenges to the use of artificial intelligence (AI) in oncology decision-making: a focused review

    Artificial intelligence (AI) is rapidly advancing across multiple industries, including healthcare, where algorithms and large language models (LLMs) have the potential to transform practice (Stenzl et al., 2024). The use of AI in oncology has been increasingly recognised, with a potential impact on radiology, pathology, and treatment selection that could change how cancers are diagnosed and how treatment is personalised (Elemento et al., 2025). However, key barriers to implementation remain, and AI is not yet well integrated into clinical workflows (Ajmal et al., 2025). These challenges potentially reflect a lack of validation of AI models and tools, poor accuracy of available models, and practical issues in the clinic (Hassan et al., 2024; Stenzl et al., 2024; Elemento et al., 2025). To realise the full potential of AI in oncology decision-making, these challenges and barriers need to be identified and overcome. This review focuses on identifying these barriers and challenges and evaluating their implications for AI use in oncology decision-making.

    One of the key requirements for effective clinical decision-making is the use of accurate and valid information in support of evidence-based practice (Santos and Amorim-Lopes, 2025). Multiple tools and models, including LLMs, have been developed to support AI-based decision-making in practice, with the potential to streamline workflows and evaluate large amounts of data rapidly (Alsharif, 2024). However, the accuracy of these AI-based decision tools needs to be confirmed to ensure appropriate use in practice (Koco et al., 2024). It has been acknowledged that the validity of the technology, the data on which models are trained, and the scarcity of published studies supporting AI use in clinical decision-making in oncology may all serve as barriers to adoption (Giebel et al., 2025). Importantly, different AI models and proprietary technologies may differ in the data used to train them, making it complex to validate different models across different patient populations (Kolla and Parikh, 2024).

    For instance, the evaluation of AI-based decision-making in prostate cancer highlights the challenges of validating models. One model using multiparametric magnetic resonance imaging (MRI) scans to identify suspicious lesions and guide treatment decisions was developed on a small sample (<250 men) and validated only on a single public dataset containing a limited range of prostate lesion types (Mehta et al., 2021). Similarly, another prostate cancer model performed comparably to radiologist image assessment in predicting clinically significant lesions (informing treatment), but it was trained on only 68 men, evaluated at a single institution, and used a retrospective design (Varghese et al., 2019). Together, these studies highlight how, even within a single cancer type, variability in AI models and a lack of robust external validation may limit the potential for clinical application.

    The use of AI models in colorectal cancer is supported by more robust evidence derived from multi-institutional studies (Kudo et al., 2020; Yang et al., 2020). For instance, a deep learning approach to classifying colorectal polyps by level of malignancy demonstrated high diagnostic performance compared with endoscopist evaluation (Kudo et al., 2020). This model was trained on over 56,000 colonoscopy images and validated on 255 images from a range of institutions. However, the model could not accurately discriminate between high-grade dysplasia and invasive cancer types, limiting its application to recognised malignancy grading systems.

    Furthermore, the comparison against endoscopist evaluation included endoscopists with a range of experience, including novices, which may not provide a valid benchmark. Conversely, another model for evaluating malignancy in colorectal polyp imaging found that, despite being trained on over 3,400 images with a multi-institutional approach, performance was accurate but fell short of expert endoscopist performance (Yang et al., 2020). Hence, even where models are developed across institutions and on more diverse and expansive datasets, they may lack clear validation against standard practice. Similar challenges have been noted across different cancer types and for numerous AI-based models (Tang et al., 2020; Fernandez et al., 2022; Santos and Amorim-Lopes, 2025). Therefore, while available studies may demonstrate a model's potential to match, or exceed, normal clinical performance, the lack of prospective studies with large sample sizes limits the potential to apply findings in practice.

    Even where AI models are validated in prospective trials with large sample sizes, the nature of AI technology and how its decisions are generated remain important potential barriers (Santos and Amorim-Lopes, 2025). For instance, for many LLMs, the data on which the models have been trained may not be disclosed (Maini et al., 2024). This includes the specific datasets, the origins of the data, and the methods of curation used in the LLMs (Koco et al., 2024). This poses a challenge to physicians applying the models in practice, as validation and optimisation of the models may not be easily performed in the clinic (Elemento et al., 2025). Even where datasets are transparent, technical and operational challenges may persist. For instance, large, inclusive datasets would be preferred for oncology decision-making but are often not available for model training, or may have important limitations (Maini et al., 2024). Supplementing such datasets with real-world evidence is a possibility, but may not be achievable for many physicians in practice and raises ethical issues (Koco et al., 2024). Furthermore, these datasets may not be representative of diverse patient populations, which can introduce bias into the models (Hanna et al., 2025). Therefore, the data on which AI models are trained should be transparent and as representative as possible to maximise use in practice; this remains an issue of contention and a barrier to practical use at present.

    In addition to barriers linked to the accuracy or validity of AI technologies and models, oncology professionals may have concerns about the use of AI in practice. It was recently shown that barriers to the introduction of an AI clinical support system for breast cancer treatments included mistrust in AI and concerns that its use would reduce the input of humans in decision-making (Koco et al., 2024). This reflects concerns over the human-focused nature of oncology care and the importance of maintaining it in the face of evolving technology. However, it has been argued that AI-based decision support should not preclude the human-focused nature of patient assessment and therapeutic decision-making, but rather act as a tool for providers, who continue to make decisions in a multidisciplinary capacity (El Naqa et al., 2023). It will be important to allay physicians' fears and uncertainties through clear guidance and integration of AI into decision-making workflows to overcome this barrier (Elemento et al., 2025).

    One strategy to enhance the acceptability of AI models in decision-making is to improve the integration of AI into clinical workflows (El Naqa et al., 2023). Practical barriers to integration include the cost, efficiency, and application of models in practice (Elemento et al., 2025). It has been shown, at least theoretically, that a clear training strategy for clinical staff in oncology, including oncologists, radiologists, and pathologists, can support the use of AI in practice (Alsharif, 2024). Furthermore, a suitable organisational culture and infrastructure are essential when integrating AI into a clinical workflow to support physicians (Deist et al., 2017). Therefore, a comprehensive organisational strategy would be needed to support AI use, including the development of clear guidelines, frameworks, and training support for AI implementation.

    One final challenge with the use of AI in oncology reflects the need to ensure ethical care of patients (Elemento et al., 2025). Indeed, the use of AI in oncology may raise ethical and legal challenges, including issues regarding informed consent, bias within algorithms, and accountability for decision-making (Froicu et al., 2025). Where gaps are evident in the origins of datasets or the mechanisms of algorithms, it may be difficult to ensure informed decision-making in practice (Zakout, 2024). Therefore, it has been suggested that ethical and regulatory frameworks need to be developed to ensure patient safety and autonomy, evolving with advances in AI technology (Zakout, 2024). Experts in the field of oncology, including physicians, have highlighted the need for more robust approaches to ethical care when AI is used, reflecting the lack of transparency (‘black box’) in decision-making with AI-based tools, as well as issues of patient autonomy (Finkelstein et al., 2024; Giebel et al., 2025). Overcoming ethical issues and challenges with the use of AI will be a priority in future research and guideline development.

    In conclusion, the use of AI in oncology decision-making is advancing rapidly and is likely to change practice in the future. Important gaps remain in the transparency and representativeness of the datasets used in AI tools and models, as well as in the validation of their use in clinically relevant populations. Trials with large samples across diverse patient populations will be needed to support the evidence-based introduction of AI models into oncology decision-making. Furthermore, although AI may inform decision-making for precision therapy, it will be important to consider the ethical, legal, and professional issues associated with AI use in practice.

    References

    Ajmal, C. S., Yerram, S., Abishek, V., Nizam, V. M., Aglave, G., Patnam, J. D., & Srivastava, S. (2025). Innovative approaches in regulatory affairs: leveraging artificial intelligence and machine learning for efficient compliance and decision-making. The AAPS Journal, 27(1), 22-32.

    Alsharif, F. (2024). Artificial intelligence in oncology: applications, challenges and future frontiers. International Journal of Pharmaceutical Investigation, 14(3), 1-10.

    Deist, T. M., Jochems, A., van Soest, J., Nalbantov, G., Oberije, C., Walsh, S., & Lambin, P. (2017). Infrastructure and distributed learning methodology for privacy-preserving multi-centric rapid learning health care: euroCAT. Clinical and Translational Radiation Oncology, 4, 24-31.

    Elemento, O., Khozin, S., & Sternberg, C. N. (2025). The use of artificial intelligence for cancer therapeutic decision-making. NEJM AI, 2(5). https://doi.org/10.1056/aira2401164

    Fernandez, G., Prastawa, M., Madduri, A. S., Scott, R., Marami, B., Shpalensky, N., & Donovan, M. J. (2022). Development and validation of an AI-enabled digital breast cancer assay to predict early-stage breast cancer recurrence within 6 years. Breast Cancer Research, 24(1), 93-103.

    Finkelstein, J., Gabriel, A., Schmer, S., Truong, T. T., & Dunn, A. (2024). Identifying facilitators and barriers to implementation of AI-assisted clinical decision support in an electronic health record system. Journal of Medical Systems, 48(1), 89-109.

    Froicu, E. M., Creangă-Murariu, I., Afrăsânie, V. A., Gafton, B., Alexa-Stratulat, T., Miron, L., & Marinca, M. V. (2025). Artificial intelligence and decision-making in oncology: a review of ethical, legal, and informed consent challenges. Current Oncology Reports, 27, 1-11.

    Giebel, G. D., Raszke, P., Nowak, H., Palmowski, L., Adamzik, M., Heinz, P., & Blase, N. (2025). Problems and barriers related to the use of AI-based clinical decision support systems: interview study. Journal of Medical Internet Research, 27, e63377.

    Hanna, J. J., Wakene, A. D., Johnson, A. O., Lehmann, C. U., & Medford, R. J. (2025). Assessing racial and ethnic bias in text generation by large language models for health care–related tasks: cross-sectional study. Journal of Medical Internet Research, 27, e57257.

    Hassan, M., Kushniruk, A., & Borycki, E. (2024). Barriers to and facilitators of artificial intelligence adoption in health care: scoping review. JMIR Human Factors, 11, e48633.

    Koco, L., Siebers, C. C., Schlooz, M., Meeuwis, C., Oldenburg, H. S., Prokop, M., & Mann, R. M. (2024). The facilitators and barriers of the implementation of a clinical decision support system for breast cancer multidisciplinary team meetings—an interview study. Cancers, 16(2), 401-411.

    Kolla, L., & Parikh, R. B. (2024). Uses and limitations of artificial intelligence for oncology. Cancer, 130(12), 2101-2107.

    Kudo, S. E., Misawa, M., Mori, Y., Hotta, K., Ohtsuka, K., Ikematsu, H., & Mori, K. (2020). Artificial intelligence-assisted system improves endoscopic identification of colorectal neoplasms. Clinical Gastroenterology and Hepatology, 18(8), 1874-1881.

    Maini, P., Jia, H., Papernot, N., & Dziedzic, A. (2024). LLM dataset inference: did you train on my dataset? Advances in Neural Information Processing Systems, 37, 124069-124092.

    Mehta, P., Antonelli, M., Singh, S., Grondecka, N., Johnston, E. W., Ahmed, H. U., & Ourselin, S. (2021). AutoProstate: towards automated reporting of prostate MRI for prostate cancer assessment using deep learning. Cancers, 13(23), 6138-6148.

    El Naqa, I., Karolak, A., Luo, Y., Folio, L., Tarhini, A. A., Rollison, D., & Parodi, K. (2023). Translation of AI into oncology clinical practice. Oncogene, 42(42), 3089-3097.

    Santos, C. S., & Amorim-Lopes, M. (2025). Externally validated and clinically useful machine learning algorithms to support patient-related decision-making in oncology: a scoping review. BMC Medical Research Methodology, 25(1), 45-55.

    Stenzl, A., Armstrong, A. J., Sboner, A., Ghith, J., Serfass, L., Bland, C. S., … & Sternberg, C. N. (2024). Artificial INtelligence to Support Informed DEcision-making (INSIDE) for improved literature analysis in oncology. European Urology Focus, 10(6), 1011-1018.

    Tang, D., Wang, L., Ling, T., Lv, Y., Ni, M., Zhan, Q., & Zou, X. (2020). Development and validation of a real-time artificial intelligence-assisted system for detecting early gastric cancer: a multicentre retrospective diagnostic study. EBioMedicine, 62, 1-10.

    Varghese, B., Chen, F., Hwang, D., Palmer, S. L., De Castro Abreu, A. L., Ukimura, O., & Pandey, G. (2019). Objective risk stratification of prostate cancer using machine learning and radiomics applied to multiparametric magnetic resonance images. Scientific Reports, 9(1), 1570-1577.

    Yang, Y. J., Cho, B. J., Lee, M. J., Kim, J. H., Lim, H., Bang, C. S., & Baik, G. H. (2020). Automated classification of colorectal neoplasms in white-light colonoscopy images via deep learning. Journal of Clinical Medicine, 9(5), 1593-1603.

    Zakout, G. (2024). Contextualising legal and ethical conundrums of artificial intelligence in oncology. The Lancet Oncology, 25(9), 1113-1116.
