REVIEW ARTICLE
Year : 2022  |  Volume : 10  |  Issue : 3  |  Page : 63-68

Artificial intelligence in oral radiology: A checklist proposal


1 University of Pernambuco, Recife, Brazil
2 Department of Dentistry, Federal University of Sergipe, Aracaju, Sergipe, Brazil
3 Department of Oral Diagnosis, Piracicaba Dental School, University of Campinas, Piracicaba, São Paulo, Brazil
4 Department of Dentistry, Oral Radiology and Postdoctoral in Integrated Dentistry, Professor of Oral Radiology, Oral Diagnosis and Biostatistics, Paulista State University, São Paulo, Brazil

Date of Submission: 02-Jul-2022
Date of Decision: 26-Sep-2022
Date of Acceptance: 03-Oct-2022
Date of Web Publication: 29-Dec-2022

Correspondence Address:
Laura Luiza Trindade De Souza
Av. Claudio Batista. Aracaju, Sergipe
Brazil

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jomr.jomr_21_22

Abstract


To develop and present a checklist proposed to assist in planning, conducting, and reporting artificial intelligence (AI) studies in oral radiology (CAIOR: Checklist for AI in Oral Radiology). To prepare the CAIOR, a review was performed with searches in the PubMed, Embase, Scopus, and Web of Science databases using the descriptors "Artificial Intelligence," "Deep learning," "Machine learning," "Checklist," "Dental," and "Radiology" and the PICOS strategy. In addition, pre-existing guidance documents and the manual of AI management and ethical principles provided by the WHO were evaluated. The search retrieved 81 manuscripts: 27 from PubMed, 34 from Embase, 10 from Scopus, and 10 from Web of Science. After duplicates were removed, the studies were screened by title and abstract and then by full text, resulting in six manuscripts included for full reading. The checklist was developed with a topic on planning and conducting research and 27 structured items for verification, divided into title, abstract, introduction, method, results, discussion, and other information. The CAIOR is a guideline with a checklist for reports and studies on the application of AI in oral radiology.

Keywords: Artificial intelligence, checklist, deep learning, dental, machine learning, radiology


How to cite this article:
Silva Filho WJ, Santana Lima BN, De Souza LL, Silva TP, Takeshita WM. Artificial intelligence in oral radiology: A checklist proposal. J Oral Maxillofac Radiol 2022;10:63-8

How to cite this URL:
Silva Filho WJ, Santana Lima BN, De Souza LL, Silva TP, Takeshita WM. Artificial intelligence in oral radiology: A checklist proposal. J Oral Maxillofac Radiol [serial online] 2022 [cited 2023 Jan 28];10:63-8. Available from: https://www.joomr.org/text.asp?2022/10/3/63/366161




Introduction


Artificial intelligence (AI) has become increasingly prevalent and imperative for solving complex problems as well as more trivial ones.[1] It is defined as a branch of computer science formed by a constellation of components (algorithms, robotics, and neural networks) dedicated to the development of computer algorithms that perform tasks conventionally associated with human intelligence.[1],[2],[3],[4]

Currently, AI is a reality, with software presenting intelligence properties comparable to those of human beings.[2] It involves competencies such as recognizing patterns and images, understanding open written and spoken language, perceiving relationships and nexuses, following decision algorithms, understanding concepts, acquiring the ability to reason by integrating new experiences, self-improving (“self-learning”), solving problems, or accomplishing tasks.[2]

In the context of dentistry, AI has been applied to different specialties, mainly to track and diagnose diseases, serve as decision support systems, establish prognoses, and predict therapeutic responses, potentially improving care and making it more efficient and affordable.[5],[6] The number of studies and publications on the subject in the field is also increasing,[6] but despite improving clinical decision-making, the research reports are very heterogeneous, with diverse assessment procedures.[5]

One of the purposes of using AI in oral radiology is to facilitate clinical practice, for example, by marking cephalometric tracings with software that can reproduce a marking pattern and execute it automatically, effectively, and quickly, optimizing professionals' time.[7],[8] The diagnostic process in radiology can also be optimized with AI, as in the early detection of caries lesions and the detection and classification of dental implants from radiographic images, improving diagnostic accuracy and thereby prognosis.[9],[10] However, despite the substantial number of publications on oral radiology, the methodologies are heterogeneous, the information is imprecise, and several validation methods coexist. Therefore, it is imperative to develop a checklist based on the recommendations of the literature and the World Health Organization (WHO).

Although new technologies are always attractive in science, using them requires not only studies testing and comparing them with conventional approaches but also an understanding that the application of AI rests on the fundamental principles of science and scientific publication,[11] including norms that guarantee the methodological reproducibility and rigor required in science.[12] AI engages the interests of patients, the community, and professionals, so it must be based on ethically designed laws and policies in line with human rights, prioritized through guidelines and regulations for the use of AI in the health field, as in the guide prepared by the WHO in 2021.[4] Therefore, what are the prerequisites for AI studies in oral radiology?

Ensuring the scientific efficacy of AI requires addressing issues related to its validity, utility, feasibility, safety, and ethical use.[5] In this scenario, the present study is the first that aims to provide a guideline, with a checklist based on the WHO criteria and manuscripts already published in other fields, to guide the planning, conducting, and reporting of AI studies in oral radiology for authors, reviewers, and readers.


Materials and Methods


The proposal is to provide a checklist to plan, conduct, and report or evaluate studies involving AI in oral radiology: the Checklist for AI in Oral Radiology (CAIOR). To prepare the items that compose the checklist, searches were initially conducted in the PubMed, Embase, Scopus, and Web of Science databases for checklists on AI in dentistry, using the PICOS strategy and the descriptors "Artificial Intelligence," "Checklist," "Deep learning," "Dental," "Machine learning," and "Radiology" [Table 1].
Table 1: Electronic database and search strategy

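As a purely illustrative sketch (the variable names and grouping below are assumptions, not the authors' exact database syntax, which differs across PubMed, Embase, Scopus, and Web of Science), the descriptors can be combined into a boolean search string as follows:

```python
# Hypothetical sketch of assembling a boolean search string from the
# review's descriptors. Real searches would add database-specific
# field tags (e.g., [MeSH], [tiab]) not shown here.
ai_terms = ['"Artificial Intelligence"', '"Deep learning"', '"Machine learning"']
domain_terms = ['"Dental"', '"Radiology"']
topic_terms = ['"Checklist"']

def build_query(*term_groups):
    """AND together groups of OR-joined terms."""
    return " AND ".join("(" + " OR ".join(g) + ")" for g in term_groups)

print(build_query(ai_terms, domain_terms, topic_terms))
```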


After the initial literature search, two authors (WJSF and WMT) evaluated existing guidance documents based on the ethical and management principles provided by the WHO.[4] A checklist recently published in Radiology: Artificial Intelligence for the use of AI in medical imaging (CLAIM)[12] was used as the basis for producing the CAIOR checklist for dental research.[6] Other checklists used were published by the EQUATOR network, such as CONSORT[13] and the CONSORT-AI extension,[14] the SPIRIT-AI,[15] and the STARD-AI.[16]

A clinical checklist for assessing the suitability of machine learning applications in health care[5] and a study of current clinical applications of AI in radiology[17] were subsequently used to identify potential items to be incorporated into CAIOR.


Results


After searching the PubMed, Embase, Scopus, and Web of Science databases using the PICOS strategy, with the keywords "Artificial Intelligence," "Checklist," "Deep learning," "Dental," "Machine learning," and "Radiology," 81 articles were retrieved: 27 from PubMed, 34 from Embase, 10 from Scopus, and 10 from Web of Science [Figure 1]. Duplicates were removed. The remaining studies were screened by title and abstract and then by full text, resulting in six manuscripts included in the review.[6],[12],[14],[16] The CAIOR was developed with a topic on planning and conducting research and 27 structured items for verification, divided into title, abstract, introduction, method, results, discussion, and other information [Table 2].
Figure 1: Flowchart of manuscript selection

Table 2: Checklist for artificial intelligence in oral radiology (CAIOR)




Discussion


This study presents a guideline with a checklist, based on the WHO criteria and manuscripts already published in other health fields. It seeks to provide authors, reviewers, and readers with guidance on planning, conducting, and reporting AI studies in oral radiology. The development of a guideline for this specialty is important because it provides a high-quality research tool that validates techniques, guides evidence-based care, and supports scientific production in a reproducible and transparent manner, as well as describing details that serve as a basis for further studies.[18],[19]

The rapid development of computational technologies and the growth in the volume of digital data for analysis have resulted in an unprecedented increase in AI-related research activity, particularly in health care. Studies involving algorithms and clinical applications of AI have brought new challenges, as they need to be better reported, evaluated, and compared. Neglecting to identify limitations may cause premature adoption of new technologies, while well-detailed and well-reported studies ensure a low risk of bias.[19]

A systematic review assessed the quality of reporting in studies validating machine learning-based models for clinical diagnosis and noted that the included studies failed to use reporting guidelines, and many lacked adequate details about the participants, making it difficult to replicate, evaluate, and interpret the results.[20] To address this issue, adapted guidelines for research reporting based on the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) network have been proposed.

Designed to standardize the reporting of clinical trials and clinical trial protocols, the Consolidated Standards of Reporting Trials (CONSORT) guidelines and the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) were developed.[21] The advent of AI use in health care showed a need for developing extensions to these guidelines – the CONSORT-AI and the SPIRIT-AI – which were published in the British Medical Journal, the Lancet Digital Health, and Nature Medicine, the former with 14 and the latter with 15 additional AI-specific recommendations.[14],[15]

The STARD-AI is an AI-specific version of the STARD checklist, which is intended to serve as the first global consensus guidance for reporting AI-centric diagnostic accuracy studies by helping readers to assess the completeness, applicability, and potential bias of study results. Developed through a clear multi-sectoral dissemination policy, it seeks to contribute to the reduction of research waste, as well as to serve as an auxiliary tool in the simplified translation of new technologies.[16]

Moreover, with the aim of assisting authors and reviewers of manuscripts on AI in medical imaging, the CLAIM was proposed. It was produced according to the STARD guideline and extended to address AI applications in medical imaging, including classification, image reconstruction, text analysis, and workflow optimization.[12]

The use of AI has become an imperative reality in oral radiology. A recent study evaluated the current clinical applications and diagnostic performance of AI in this specialty and observed that most studies addressed the automated localization of cephalometric landmarks, osteoporosis diagnosis, classification of maxillofacial cysts and tumors, and identification of periodontitis and periapical disease. However, the performance of AI models varies with the different algorithms used, which requires using sufficient and representative images to verify their generalizability and reliability before they are incorporated into clinical practice.[22]

In this scenario, the CAIOR proposed in this paper provides a checklist that has the topic of guidance for planning and conducting research and 27 structured items for verification. The instrument was based on the bioethical principles established in the Guide for AI Ethics (WHO), which addresses the main risks and ethical challenges in the field. The issue of the quantity and quality of information managed by databases that will be transformed into guidelines and practices to instruct the actions of health professionals is a major concern of the WHO guide.[4]

The CAIOR items are divided into the topics of title, abstract, introduction, method, result, discussion, and other information. For the title, it is important to present the focus of the problem and the type of AI technology category used. The introduction should present state-of-the-art AI for the problem proposed, justify the study considering the literature gap, and present the objective of the study.

The methodology should be detailed, presenting the local ethics committee's approval for using the images and/or data in oral radiology. The sources of data for training and testing; the eligibility criteria (inclusion and exclusion); how, where, and when eligible participants or studies were identified; the sampling frame; and the suitability for the target population should all be described.
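For instance, documenting the training/testing data partition can be as simple as reporting a seeded split. The sketch below is hypothetical (the checklist does not prescribe a splitting scheme or parameters); it only illustrates the kind of reproducible detail a report should contain:

```python
import random

def split_dataset(case_ids, test_fraction=0.2, seed=42):
    """Hypothetical sketch: reproducibly split case IDs into
    training and testing sets. Reporting the seed and fraction
    lets other researchers recreate the exact partition."""
    rng = random.Random(seed)
    ids = list(case_ids)
    rng.shuffle(ids)
    n_test = int(len(ids) * test_fraction)
    return ids[n_test:], ids[:n_test]  # (train, test)

train_ids, test_ids = split_dataset(range(100))
print(len(train_ids), len(test_ids))
```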

The subtopic image quality was incorporated into the instrument in the methodology, considering the importance of image standardization in oral radiology studies. Providing details of how the images were acquired, classified, and manipulated in oral radiology is important to ensure the transparency and reproducibility of the studies, which is one of the precepts most emphasized by the WHO. This detailing of the methodology used allows the essential reproducibility for developing new studies focused on safety, accuracy, and efficacy.[4] In this phase, the process of creating the application with the approach of the algorithm developed should be presented. Many applications are ruled by intellectual property laws, but it is vital to provide a minimum of information to researchers so they can understand the step-by-step creation of the application, provide transparency, and describe details that serve as a basis for the production of further studies.[19] Finally, the statistical analyses should be detailed to allow replicability by other researchers.
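As a minimal illustration of the kind of image-standardization detail worth reporting (a sketch under assumptions; real pipelines would also document acquisition parameters, resizing, and bit depth), min-max normalization rescales pixel intensities to a common [0, 1] range:

```python
def min_max_normalize(pixels):
    """Rescale a flat list of pixel intensities to [0, 1].

    Hypothetical preprocessing sketch for radiographic images;
    not a method prescribed by the CAIOR itself.
    """
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return [0.0 for _ in pixels]  # constant image: avoid division by zero
    return [(p - lo) / (hi - lo) for p in pixels]

# Example: 8-bit grayscale intensities
print(min_max_normalize([0, 64, 128, 255]))
```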

The results should present the performance of the proposed AI model, including diagnostic accuracy, accuracy estimates (such as 95% confidence intervals), and failure analyses of misclassified cases. For comparison, a reference model considered optimal in the selected literature is recommended. The discussion should present the outcome of the study, agreeing or disagreeing with the literature selected for the proposed application. It should also present the limitations of the study, including potential bias, statistical uncertainty, generalizability, and implications for practice, including intended use and/or clinical function. For applications in beta form or commercially available, it is recommended to make the application link available so that other researchers can learn about and test the application. Moreover, authors should indicate if and how the AI intervention and/or its code can be accessed, including any access or reuse restrictions.
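As one concrete way to report such an estimate (a sketch, since the checklist does not mandate a specific estimator), a 95% confidence interval for a model's accuracy can be computed with the Wilson score method:

```python
import math

def wilson_ci(correct, total, z=1.96):
    """Wilson score confidence interval for a proportion (e.g., accuracy).

    z = 1.96 corresponds to a 95% interval. Illustrative only; other
    estimators (e.g., Clopper-Pearson) are equally acceptable in reports.
    """
    p = correct / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return center - half, center + half

# e.g., 90 of 100 test images classified correctly
lo, hi = wilson_ci(90, 100)
print(f"accuracy = 0.90, 95% CI ({lo:.3f}, {hi:.3f})")
```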

A major concern of the WHO is to ensure ethical commitment in the development of new technologies, based on attributes such as inclusion and equality and free of any bias. Using the checklist helps guide study design and establish criteria from the data collection phase onward, yielding results with greater scientific validity and lower bias.[4] The CAIOR has limitations, as it may not cover new AI approaches, but it can be adjusted and improved to keep up with technological development. A next step would be to use the CAIOR tool to assess whether oral radiology studies comply with its guidelines.

Finally, adequate training must be guaranteed for the professionals using AI in health care, thus contributing to their clinical decisions. The performance and functionality of the software must be constantly evaluated to correct occasional errors. Despite assisting management, AI must remain controlled by professionals responsible for monitoring the systems and the final management of cases.[4] Therefore, AI is intended as a tool to optimize the activities of oral radiology professionals.


Conclusions


This study presents the CAIOR, a checklist that provides a roadmap for authors, readers, and reviewers on the application of AI in oral radiology. Aiming to promote standardization and consequently quality scientific communication, this instrument seeks to follow the fundamental principles of clarity, transparency, and reproducibility in the study of AI in oral radiology. Although not all studies will meet every CAIOR criterion, the roadmap, even if followed partially, aims to serve as a basis for the development of further studies of AI in oral radiology.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.



 
References

1. Forsting M. Machine learning will change medicine. J Nucl Med 2017;58:357-8.
2. Silva TP, Carvalho MN, Takeshita WM. The state of the art of artificial intelligence (AI) in dental radiology: A systematic review. Arch Health Invest 2021;10:1084-9.
3. Silva TP, Hughes MM, Menezes LD, de Melo MF, Freitas PH, Takeshita WM. Artificial intelligence-based cephalometric landmark annotation and measurements according to Arnett's analysis: Can we trust a Bot to do that? Dentomaxillofac Radiol 2021;21:20200548. doi: 10.1259/dmfr.20200548.
4. World Health Organization. Ethics and Governance of Artificial Intelligence for Health: WHO Guidance. Geneva: World Health Organization; 2021.
5. Scott I, Carter S, Coiera E. Clinician checklist for assessing suitability of machine learning applications in healthcare. BMJ Health Care Inform 2021;28:e100251.
6. Schwendicke F, Singh T, Lee JH, Gaudin R, Chaurasia A, Wiegand T, et al. Artificial intelligence in dental research: Checklist for authors, reviewers, readers. J Dent 2021;107:103610.
7. Chen SK, Chen YJ, Yao CC, Chang HF. Enhanced speed and precision of measurement in a computer-assisted digital cephalometric analysis system. Angle Orthod 2004;74:501-7.
8. Park JH, Hwang HW, Moon JH, Yu Y, Kim H, Her SB, et al. Automated identification of cephalometric landmarks: Part 1 - Comparisons between the latest deep-learning methods YOLOV3 and SSD. Angle Orthod 2019;89:903-9.
9. Lee DW, Kim SY, Jeong SN, Lee JH. Artificial intelligence in fractured dental implant detection and classification: Evaluation using dataset from two dental hospitals. Diagnostics (Basel) 2021;11:233.
10. Schwendicke F, Rossi JG, Göstemeyer G, Elhennawy K, Cantu AG, Gaudin R, et al. Cost-effectiveness of artificial intelligence for proximal caries detection. J Dent Res 2021;100:369-76.
11. Kahn CE Jr. Artificial intelligence, real radiology. Radiol Artif Intell 2019;1:e184001.
12. Mongan J, Moy L, Kahn CE Jr. Checklist for artificial intelligence in medical imaging (CLAIM): A guide for authors and reviewers. Radiol Artif Intell 2020;2:e200029.
13. Moher D, Hopewell S, Schulz KF, Montori V, Gøtzsche PC, Devereaux PJ, et al. CONSORT 2010 explanation and elaboration: Updated guidelines for reporting parallel group randomised trials. J Clin Epidemiol 2010;63:e1-37.
14. Cruz Rivera S, Liu X, Chan AW, Denniston AK, Calvert MJ. Guidelines for clinical trial protocols for interventions involving artificial intelligence: The SPIRIT-AI extension. Nat Med 2020;26:1351-63.
15. Sounderajah V, Ashrafian H, Golub RM, Shetty S, De Fauw J, Hooft L, et al. Developing a reporting guideline for artificial intelligence-centred diagnostic test accuracy studies: The STARD-AI protocol. BMJ Open 2021;11:e047709.
16. Tariq A, Purkayastha S, Padmanaban GP, Krupinski E, Trivedi H, Banerjee I, et al. Current clinical applications of artificial intelligence in radiology and their best supporting evidence. J Am Coll Radiol 2020;17:1371-81.
17. Liu X, Cruz Rivera S, Moher D, Calvert MJ, Denniston AK, SPIRIT-AI and CONSORT-AI Working Group. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: The CONSORT-AI extension. Nat Med 2020;26:1364-74.
18. Ibrahim H, Liu X, Rivera SC, Moher D, Chan AW, Sydes MR, et al. Reporting guidelines for clinical trials of artificial intelligence interventions: The SPIRIT-AI and CONSORT-AI guidelines. Trials 2021;22:11.
19. Shelmerdine SC, Arthurs OJ, Denniston A, Sebire NJ. Review of study reporting guidelines for clinical studies using artificial intelligence in healthcare. BMJ Health Care Inform 2021;28:e100385.
20. Yusuf M, Atal I, Li J, Smith P, Ravaud P, Fergie M, et al. Reporting quality of studies using machine learning models for medical diagnosis: A systematic review. BMJ Open 2020;10:e034568.
21. Campbell JP, Lee AY, Abràmoff M, Keane PA, Ting DS, Lum F, et al. Reporting guidelines for artificial intelligence in medical research. Ophthalmology 2020;127:1596-9.
22. Hung K, Montalvao C, Tanaka R, Kawai T, Bornstein MM. The use and performance of artificial intelligence applications in dental and maxillofacial radiology: A systematic review. Dentomaxillofac Radiol 2020;49:1-22.





 
