Abstract
Artificial intelligence (AI) is advancing airway management, especially in airway assessment, clinical decision support, and training. Traditional assessment methods struggle with time and precision as complex airway disorders become more common. With its powerful data-processing and pattern-recognition capabilities, AI can evaluate patient imaging and clinical characteristics using deep learning algorithms to predict airway complications. In dynamic clinical environments, AI-assisted management solutions can improve the safety and efficiency of airway control by providing targeted decision support. Additionally, AI systems that use virtual reality and simulation training technologies can tailor training programs for healthcare professionals to airway difficulty, improving learning curves and clinical competence in complex airway scenarios. AI thus shows potential in airway assessment, clinical decision-making, and medical education, although its advantages must be weighed against its disadvantages in clinical application. This review examines AI technology’s current uses, future potential, and limitations in clinical practice and medical education.
Highlights
- AI uses deep learning to analyze imaging and clinical data to predict airway issues.
- Clinical experience and AI technology optimize machine learning for real-world research.
- The advantages and disadvantages of AI in clinical practice are weighed.
- An optimal balance between clinical expertise and technology is key to advancing medical prediction.
1 Introduction
Airway management is crucial for survival in the operating room (OR), post-anesthesia care unit (PACU), intensive care unit (ICU), and emergency medicine. With the growing prevalence of specific airway-related diseases, difficult airway management has become more varied [ ]. Pirlich et al. [ ] surveyed all registered members of the German Society of Anesthesiology and Intensive Care Medicine (DGAI) and revealed that half of the participants had encountered a “Can’t Intubate, Can’t Oxygenate” (CICO) scenario. Traditional airway management is therefore insufficient to satisfy therapeutic requirements.
AI is characterized as the study of algorithms that enable machines to make inferences and decisions and to execute tasks [ ]. AI focuses on the design and training of these algorithms [ ] so that machines can replicate human cognitive functions such as learning, reasoning, and perception [ ]. Its integration into the medical domain is likewise progressing towards enhanced precision and personalization [ ].
The clinician’s subjective judgment makes assessment and intervention uncertain in conventional airway treatment [ ]. With the rapid growth of big data, machine learning has surpassed traditional predictive tools and algorithms [ ] at reducing overfitting and integrating higher-order nonlinear interactions among predictors. Applied to airway management, this technology could improve clinical safety and the patient experience, because it can use robust data processing and deep learning algorithms to evaluate data, images, and physiological indicators.
2 AI and image analysis
AI acts as a doctor’s autopilot for routine scan reporting [ ]. AI uses reinforcement learning and deep learning [ , ] for image recognition. Deep learning analyzes high-dimensional input such as images, audio, and text with multi-layer neural networks, using convolutional networks, recurrent networks, and autoencoders [ ] to interpret visual and sequential data.
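As a toy illustration of the convolution and pooling layers mentioned above, the sketch below (plain Python, with an invented 5×5 “image” and a vertical-edge kernel) shows how a feature map is produced and then downsampled:

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (no padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    rows = len(image) - kh + 1
    cols = len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(cols)]
            for i in range(rows)]

def max_pool(fmap, size=2):
    """Non-overlapping max pooling (downsamples the feature map)."""
    return [[max(fmap[i + a][j + b]
                 for a in range(size) for b in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]

# Invented 5x5 "image": dark on the left, bright on the right.
image = [[0, 0, 1, 1, 1]] * 5
# Vertical-edge kernel: responds where intensity changes left to right.
kernel = [[1, 0, -1],
          [1, 0, -1],
          [1, 0, -1]]
fmap = conv2d(image, kernel)   # 3x3 feature map marking the edge
pooled = max_pool(fmap)        # downsampled summary after 2x2 pooling
print(fmap, pooled)
```

A real network stacks many such layers with learned kernels and activation functions; here the single hand-written kernel only shows the mechanics.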
2.1 Image training
Manual analysis based on clinical experience is time-consuming and subjective in image-based airway assessment [ ]. Some scholars [ ] have used deep metric learning to extract significant features from chest radiographs, applying convolutional neural network algorithms that process input images through multiple layers of convolution, pooling, and activation functions to produce embedding vectors. This deep learning strategy for retrieving tracheal chest radiographs during a worldwide pandemic benefits airway image retrieval in similar conditions. In X-ray images, Chu et al. [ ] used a convolutional neural network to predict three-dimensional information from two-dimensional data and identified the upper airway’s smallest cross-sectional area. This development helps physicians accurately detect airway stenosis before surgery using a deep-learning model, although the study’s high radiation dose and cost limit its use for predicting airway complications. Zhang et al. [ ] performed upper airway image segmentation on cone-beam CT (CBCT) using a fully convolutional neural network model, while Jeong et al. [ ] trained a neural network model on medical imaging and acoustic data to identify the location and severity of obstruction autonomously. These studies highlight novel image-processing methods for clinical airway evaluation and diagnosis.
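The image retrieval described above reduces to nearest-neighbour search in the embedding space: each radiograph becomes a vector, and similar cases are the vectors closest to the query. A minimal sketch, with invented 4-dimensional embeddings standing in for CNN outputs:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query, database, top_k=2):
    """Return the IDs of the top_k most similar stored embeddings."""
    ranked = sorted(database.items(),
                    key=lambda item: cosine_similarity(query, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# Hypothetical embeddings a trained CNN might emit (values invented).
database = {
    "xray_001": [0.90, 0.10, 0.00, 0.20],
    "xray_002": [0.10, 0.80, 0.30, 0.00],
    "xray_003": [0.85, 0.15, 0.05, 0.25],
}
query = [0.88, 0.12, 0.02, 0.22]
print(retrieve(query, database))  # most similar radiographs first
```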
2.2 Facial characteristics
Computers learn from labeled datasets in supervised learning [ ], which uses data features to identify input-output relationships and to test predictions on new data. Models that evaluate the risk of intubation are typically constructed with algorithms such as logistic regression, decision trees, random forests, support vector machines [ ], and linear regression. Principal component analysis, self-organizing maps, and association rule learning are unsupervised learning [ ] algorithms that train on unlabeled data and reveal its structure.
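As a minimal sketch of this supervised-learning workflow, the toy example below fits a logistic regression by gradient descent on invented airway features (Mallampati class and mouth opening); the data and thresholds are illustrative only, not clinical values:

```python
import math

def _sigmoid(z):
    z = max(min(z, 30.0), -30.0)  # clamp for numerical stability
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit logistic regression by stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = _sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """1 = predicted difficult intubation, 0 = easy."""
    return 1 if _sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5 else 0

# Hypothetical features: [Mallampati class, mouth opening in cm]
# Labels: 1 = difficult intubation, 0 = easy (toy data, invented).
X = [[1, 5.0], [2, 4.5], [1, 4.8], [4, 2.0], [3, 2.5], [4, 1.8]]
y = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(X, y)
print(predict(w, b, [4, 2.2]))  # a new high-risk-looking patient
```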
Connor and Segal [ ] enhanced the acquisition of frontal and lateral facial image data, training a digital model to extract facial features from 40 patients with dual-angle facial photographs, with a validation set comprising another 40 patients. Using supervised learning, they developed facial image recognition models and demonstrated their feasibility without time-consuming and potentially radiation-exposing X-rays. Transmitting patient facial photos over a network outperformed bedside testing in the study population. Despite the clinical relevance of the results, the investigators’ imprecise definition of difficult intubation, the retrospective design’s inability to confirm that the included images retained the original intubation features, and the small training dataset warrant further prospective validation with a larger patient population.
Researchers such as Tavolara [ ] addressed this issue by employing an extensive facial database to customize and train a convolutional neural network at the data magnitude required for algorithmic learning. The multi-stage integrated deep learning model they designed showed that deep learning can predict difficult intubation from frontal facial images, exceeding bedside assessments and thereby enhancing airway management and patient safety. By comparison, Cuendet et al. [ ] utilized the largest database of images, videos, and ground-truth data pertinent to tracheal intubation. By capturing the patient’s face and head movements and recording depth images with a Kinect device, the researchers extracted key face and neck features (such as width and height in different postures) and assessed the risk of intubation. After feature selection and Bayesian classifier training on the training set, the classifier was evaluated on the test set; the resulting model automatically analyzes facial photographs, extracts features, and produces a prediction. Future versions may integrate patients’ voices to improve the computational model. Yamanaka et al. [ ] used a prospective multicenter emergency medicine study to build 7 machine learning models (e.g., penalized logistic regression, random forest) to predict airway management problems. After 5-fold cross-validation, the generated models likewise outperformed traditional reference models in predicting airway problems and first-pass intubation success.
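The 5-fold cross-validation used in such studies simply partitions the records so that each one serves exactly once as test data while the rest train the model; a minimal sketch of the splitting logic:

```python
def k_fold_indices(n, k=5):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield sorted(train), sorted(test)

# 10 hypothetical patient records split into 5 folds of 2.
splits = list(k_fold_indices(10, k=5))
for train, test in splits:
    print("test fold:", test)
```

In practice the model is refitted on each training split and scored on the held-out fold, and the k scores are averaged.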
2.3 Position for intubation
In the ICU and OR, the precise placement of the tracheal tube following intubation is critical for patient respiratory support [ ]. Researchers [ ] have employed deep convolutional network algorithms to autonomously classify and detect the positioning of tracheal tubes in chest radiographs. Additionally, Oliver et al. [ ] integrated deep learning with image enhancement techniques to create the CarinaNet model, which automatically identifies and measures the distance from the tip of the tracheal tube to the carina. Furthermore, Brown et al. [ ] utilized semantic embedding techniques and deep neural networks to analyze clinical data, including medical images, to automatically evaluate the correct positioning of the tracheal tube, illustrating the scalability of deep learning and semantic embedding techniques in medical image processing and diagnosis.
These studies demonstrate that machine learning algorithm assessment reduces human error and speeds up diagnosis. Certain web-based applications or emergency information systems reduce physicians’ workload and partially mitigate judgmental errors in high-pressure or emergency situations involving human experts.
3 AI and case data
3.1 Prehospital emergency airway management
Bayesian learning systems integrate prior knowledge with observed data to calculate posterior probabilities. They can perpetually assimilate new data and refresh the model, so the patient’s condition can be evaluated dynamically. Principal component analysis (PCA) [ ] is a dimensionality reduction method for simplifying high-dimensional data and extracting its features. It streamlines data by projecting the original multidimensional dataset onto a small number of new orthogonal dimensions, known as principal components, eliminating redundancy while retaining the most significant information.
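The Bayesian updating just described can be sketched with the conjugate beta-binomial case: a Beta prior over the probability of a difficult airway is refreshed as each batch of field observations arrives (the prior and the counts below are invented for illustration):

```python
def beta_update(alpha, beta, difficult, easy):
    """Conjugate Bayesian update: Beta prior + binomial observations
    yields a Beta posterior with simply incremented counts."""
    return alpha + difficult, beta + easy

def posterior_mean(alpha, beta):
    """Point estimate of the difficult-airway probability."""
    return alpha / (alpha + beta)

# Hypothetical prior belief: Beta(2, 8), i.e. mean risk 0.2.
alpha, beta = 2, 8
# Each new batch of observations refreshes the model.
alpha, beta = beta_update(alpha, beta, difficult=6, easy=4)   # batch 1
alpha, beta = beta_update(alpha, beta, difficult=5, easy=5)   # batch 2
print(round(posterior_mean(alpha, beta), 3))
```

The posterior after each batch becomes the prior for the next, which is what lets such a system track a patient population continuously.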
Luckscheiter et al. [ ] employed PCA to discern the critical attributes that necessitate preclinical airway management, condensing numerous variables into significant factors. They then developed a predictive model with random forest and naive Bayes algorithms, using real-world data to train and assess a machine learning model aimed at enhancing prehospital detection of airway-related issues. Rahimian et al. [ ] also demonstrated that machine learning improves the identification and calibration of airway risk related to emergency admissions. This technology may optimize the distribution of Emergency Medical Service (EMS) resources and assist EMS personnel in making swift treatment decisions in the field, particularly when resources are constrained or resuscitation time is crucial, thereby enhancing patient survival and prognosis. Researchers [ ] such as Langeron and Lenfant support the incorporation of this novel technology into the forecasting of perilous intubations, particularly via algorithmic models that can analyze extensive clinical data and discern potential patterns in high-risk intubation cases.
Transfer learning [ , ] is the reuse of a model developed for one task in a different but related task. It is especially appropriate for data-deficient domains.
Nguyen et al. [ ] developed a multimodal machine learning classifier. The study employed medical images and electronic medical record data from COVID-19 inpatients. The pre-trained model was retrained using transfer learning with 10-fold cross-validation, yielding a final fused model that effectively predicted the probability of intubation. Similarly, Arvind et al. [ ] gathered electronic health record data from hospitalized COVID-19 patients and developed various machine learning models, including random forests and support vector machines, to predict the necessity for intubation in COVID-19 patients, also with significant accuracy.
3.2 Extubation failure
Extubation failure can cause respiratory problems or life-threatening complications in individuals with challenging airways. Precise assessment of extubation failure risk is essential for enhancing patient safety during the perioperative or airway care phase, so employing machine learning to improve predictive accuracy is clinically significant. Huang et al. [ ] gathered postoperative clinical case data from patients undergoing head, neck, and maxillofacial surgeries across various clinical centers, specifically targeting individuals with a history of challenging airway management. The researchers found that various machine learning algorithms, including random forests and support vector machines, substantially exceed the predictive accuracy of traditional clinical scoring systems for extubation failure. The machine learning model exhibited a notable capacity to identify high-risk patients, reducing extubation complications. In the ICU, prompt identification of patients susceptible to extubation failure can diminish the probability of reintubation. Zhao et al. [ ] utilized clinical data (such as vital signs and ventilation parameters) from ICU patients to construct and assess models employing various machine learning algorithms. The researchers illustrated the efficacy of machine learning in clinical extubation decision-making, establishing that data-driven decision support systems can enhance the safety and effectiveness of clinical procedures.
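A random forest reaches its prediction by majority vote over many decision trees. The sketch below substitutes hand-written decision stumps for learned trees to show only the voting mechanism; the features and thresholds are invented, not taken from the cited studies:

```python
from collections import Counter

# Hand-written decision stumps standing in for learned trees
# (features and thresholds are invented for illustration).
def stump_ventilation(p):
    return 1 if p["minute_ventilation"] > 10 else 0

def stump_rsbi(p):
    return 1 if p["rsbi"] > 105 else 0

def stump_duration(p):
    return 1 if p["ventilation_days"] > 7 else 0

def forest_predict(patient, stumps):
    """Majority vote over the ensemble: 1 = high extubation-failure risk."""
    votes = [s(patient) for s in stumps]
    return Counter(votes).most_common(1)[0][0]

stumps = [stump_ventilation, stump_rsbi, stump_duration]
patient = {"minute_ventilation": 12.0, "rsbi": 120, "ventilation_days": 3}
print(forest_predict(patient, stumps))  # two of three stumps vote 1
```

In a real random forest, each tree is grown on a bootstrap sample with random feature subsets, which is what makes the vote robust; the aggregation step is exactly this.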
4 AI and robotics
The technology sector continues to increase investment in AI research, which is advancing swiftly across multiple domains. Hospitals must likewise remain informed about technological advancements and proactively manage the relationship between physicians and AI for efficient integration and optimization in medical practice.
4.1 3D printing technique
Conventional airway evaluation techniques may not adequately visualize airway issues in individual patients. 3D printing [ ] is a digital model-based manufacturing technology that uses computers to create layered models, enabling layer-by-layer fabrication from digital model files. For medical airways, a simulation model of the patient’s airway can be fabricated to evaluate airway outcomes [ , ]. A study [ ] was conducted with 40 patients undergoing general anesthesia, randomly assigned either to a group managed with a 3D-printed airway model (observation group) or to a group managed with traditional difficult-airway methods (control group). In the observation group, 3D images were reconstructed from CT and processed with computerized 3D editing software, and a 1:1 solid model was fabricated on a 3D printer in acrylic material to replicate the patient’s airway structure. On this basis, anesthesia protocols, surgical positions, and intubation instruments were selected. The study demonstrated that the intubation duration and success rate in the observation group surpassed those in the control group, thereby enhancing preoperative planning. Xu et al. [ ], on the other hand, used personalized 3D-printed airway models to guide the implantation of airway stents, allowing for more accurate airway customization. Dinsmore et al. [ ] have even produced a 3D-printed thermo-laryngoscope with good device performance and sustainability through 3D printing technology and human-driven mechanisms. Clinically, 3D printing facilitates the precise visualization of airway anomalies, thereby enhancing physician support in managing complex airway scenarios.
3D printing technology also contributes to clinical training. Pedersen et al. [ ] created a cost-effective bronchial tree simulator derived from human chest computed tomography, demonstrating superior anatomical fidelity compared to commercially available bronchoscopy simulators. This innovation enables the simulation of patient-specific anatomical structures in vitro and facilitates the planning of in vivo bronchoscopies to improve medical safety. Xu and other researchers have affirmed that instructional techniques based on 3D printing technology can offer economical and pragmatic training solutions, particularly in resource-constrained healthcare settings [ ].
However, some scholars [ ] contend that although it is cost-effective in the long term, 3D printing technology necessitates substantial investments in manpower and material resources upfront, resulting in elevated treatment costs for patients [ ]. Consequently, further exploration is essential for the future advancement of this technology.
4.2 Autonomous robots
As an innovative technology, robotic airway management is advantageous during human fatigue or in other high-risk settings: it can execute standardized intubation procedures through manual intervention or fully automated intelligent control, diminishing the danger of contact between healthcare professionals and the patient’s airway while offering stabilization support [ , ]. Researchers have combined intelligent automation technology with healthcare professionals’ expertise to create multiple types of robots.
Ma et al. [ ] developed a robotic system for assisted noninvasive mechanical ventilation, with dual snake arms to support the terminal and a mask-fixation structure to elevate the patient’s lower jaw. This research represents a proficient first attempt to implement a robotic system for noninvasive positive pressure ventilation via face mask; nonetheless, patient comfort and clinical safety in this domain still require enhancement. Robotic tracheal intubation improves medical safety and accuracy by combining automated intelligence with medical expertise. In 2010, scholars such as Tighe [ ], influenced by robot-assisted telesurgery and the Da Vinci Surgical System Model S (DVS), used an airway simulation model to simulate robot-assisted oral and nasal fiberoptic intubation. Their research indicates that the DVS can execute assisted oral and nasal fiberoptic intubation in 75 s and 67 s, respectively, implying that robotic-assisted airway management is viable but remains difficult in intricate clinical decision-making contexts. Hemmerling et al. [ ] created the robotic orotracheal intubation system Kepler Intubation System (KIS) in 2012 to address the cost limitations of the DVS. This system combines robotics with fiberoptic bronchoscopy and uses a standard gaming joystick with 12 buttons and 5 axes to simulate wrist and arm movements. Testing on a cohort of volunteers showed a high success rate with no issues within an acceptable time frame, proving that tracheal intubation can be performed without direct operator contact with the patient.
Telemedicine is advancing swiftly, while it remains challenging for inexperienced emergency medical personnel to perform urgent intubations in prehospital environments [ ]. Wang et al. [ ] developed a remote robot-assisted intubation system (RRAIS) in 2018 to address this issue. It allows inexperienced prehospital staff to position the robot in the patient’s mouth while backend specialists perform the necessary intubation procedures. Experts can thus remotely manage airways using AI robotics and computerized control systems for telemedicine. In 2020, the Biro research team [ ] created a fully automated robotic system for tracheal endoscopic intubation. It uses video cameras to capture real-time laryngeal and airway images and image-recognition algorithms to analyze the vocal folds and airway structure for endoscopic maneuvers. Beginner healthcare practitioners can learn tracheal intubation with this system, which guides the tube through the glottis and intubates it in a virtual model. In 2024, Liu et al. [ ] introduced an automated robot-assisted tracheal intubation safety system utilizing a soft actuator with a flexible anterior segment. The soft actuator comprises variable-hardness silicone, reinforced with fibers and a helical structure for flexibility. The hydraulic drive system operates concurrently with the catheter delivery mechanism to facilitate bending and accurate advancement of the catheter. Upon detecting specific anatomical features, the system can direct the robotic arm in coordination with the flexible actuator. The report also suggests that subsequent research should consider vocal cord injuries and the presence of blood and mucus to enhance target recognition rates. In the same year, Qi et al. [ ] studied the design and motion control of a master-slave tracheal intubation robot, in which the doctor controls the master device and the robot performs the procedure. A computer algorithm controls the slave-end robot’s movements using force and visual feedback from the master-end operation.
Researchers have offered technical guidance for the future trajectory of automation and intelligence in endotracheal intubation. However, robot-assisted airway management requires more high-quality studies across patient demographics, which could change healthcare delivery and educational resources. Despite the limitations of this domain, these advancements can modernize and improve airway management, fostering clinically relevant medical innovations.
5 AI and pedagogical training
Generative adversarial networks (GANs) comprise a generator and a discriminator: the generator produces realistic data and the discriminator assesses its authenticity. This methodology can be employed to synthesize airway image data [ ] for simulating challenging airway scenarios in surgical training and simulation. Airway control is an essential skill for healthcare professionals in medical emergencies and surgical procedures [ , ].
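The generator-discriminator interplay can be sketched with scalar toy models: the generator maps noise to a synthetic sample, the discriminator scores samples as real or fake, and the two losses pull in opposite directions (parameters invented; no actual training loop):

```python
import math
import random

random.seed(0)

def generator(z, w=1.5, b=0.2):
    """Maps noise z to a synthetic sample (toy linear generator)."""
    return w * z + b

def discriminator(x, v=0.8, c=-0.5):
    """Returns the probability that x is a real sample (toy logistic)."""
    return 1.0 / (1.0 + math.exp(-(v * x + c)))

real_sample = 2.0
fake_sample = generator(random.gauss(0, 1))

# Discriminator loss: be right about both the real and the fake sample.
d_loss = (-math.log(discriminator(real_sample))
          - math.log(1.0 - discriminator(fake_sample)))
# Generator loss: fool the discriminator into scoring the fake as real.
g_loss = -math.log(discriminator(fake_sample))
print(d_loss > 0 and g_loss > 0)
```

Training alternates gradient steps that lower `d_loss` and `g_loss` in turn; at equilibrium the generator’s samples become indistinguishable from real data, which is what makes GANs usable for synthesizing airway images.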
Traditional airway management training includes simulation and hands-on practice. Simulation training lacks variation and cannot match clinical complexity and dynamism. Hands-on instruction follows a “see one, do one, teach one” strategy, which hinders repeated skill development. Limited fidelity and immersion in practice also impair the learner’s ability to anticipate and respond to real-world procedures. These factors make traditional airway management training less effective [ ].
Virtual reality (VR) technology [ ] employs artificial intelligence to generate an interactive three-dimensional environment, serving as an innovative instructional tool that offers immersive, repeatable practice scenarios. Augmented reality (AR) technology [ ] integrates computer-generated imagery with real-world environments, acting as a perceptual enhancement of real-world immersion rather than a substitute for it. Boet et al. [ ] enlisted learners with varying degrees of competence and began with fundamental hands-on training. Some trainees then received instruction in fiberoptic bronchoscopic intubation through a virtual reality application, while the control group did not undergo virtual training, and the manipulative skills of both groups were assessed. The study found that virtual training improved practical performance, particularly maneuvering time, hand-eye coordination, and success rate.
A computer-generated environment with realistic, interactive patients and anatomical components provides true-to-life scenarios with sensory input. Ryason et al. [ ] demonstrated the capacity of a virtual tracheal intubation training simulator to enhance the trainee’s skill level in a risk-free setting, while participants generally perceived the simulator as a more intuitive learning experience that effectively increased confidence. Steffensen et al. [ ] used acoustic sensors to record airflow and device-operation sounds during intubation, and machine learning algorithms to train a model to identify operational steps and outcomes. Acoustic sensing combined with machine learning can autonomously analyze the sound signals produced during intubation, accurately distinguish between successful and unsuccessful trainee attempts, and provide objective feedback, facilitating learning.
6 The limits and ethics of AI
In clinical application, we should maintain a multi-dimensional perspective and explore the possible negative impacts and ethical challenges of AI in healthcare. In particular, biases in model training data or input data may influence decision-making, weakening model extrapolation in some complex or rare aberrant airway circumstances. Data leakage and exploitation [ ] of users’ sensitive health information in the training dataset compound the challenges of AI application. For example, will the “black box” of AI [ ] cause policymakers to lose faith in these systems? Who should be held accountable when AI misjudgments and system faults hurt patients? Additionally, AI robot-assisted medical technology is poorly popularized, requires significant equipment investment and maintenance expenses, and can cause serious clinical mishaps owing to machine and equipment failure. Moreover, emotionless robots do not understand doctor-patient communication, which can impair diagnosis and therapy. In emergencies, robots without ethical reasoning simply follow their programming, lacking the flexibility to respond to a patient’s sudden deterioration, and humanitarian decision-making about medical errors poses serious legal problems. Alongside technological progress, ethical and regulatory frameworks are crucial to AI-assisted healthcare.
7 Conclusion
Each paradigm of AI machine learning algorithms offers unique advantages for applications in the airway domain (Figure 1). These techniques turn high-dimensional data into feature vectors to uncover complicated structure. This paper examines machine learning, deep learning, computer vision, natural language processing, and machine automation to outline a data-driven, tailored airway evaluation and management paradigm. However, modeling algorithms have trouble explaining their decision-making processes and the autonomy of AI systems, so thorough clinical evaluation and regulation are needed to ensure their safety and efficacy [ ]. AI participation and responsibility in healthcare remain complex, and we need to find a balance between technological innovation and ethical and legal safeguards [ , ]. The future of healthcare may be a “human-machine collaborative” model, in which AI complements physicians rather than replacing them. Long-term sustainability requires addressing AI’s possible limitations and ethical issues and promoting global legislative reforms and technology standards. In addition, fostering a new generation of professionals with expertise in both healthcare and technology is crucial to bridging the gap between medical practice and AI development, ensuring better integration of the two fields.
