Informatics in Perioperative Medicine





Key Points





  • Individual computers are connected via networks to share information across many users.



  • Information security is about ensuring that the correct information is available only to the correct users at the correct time.



  • Healthcare information storage and exchange is regulated to protect patient privacy.



  • Information regarding the provision of anesthesia care is highly structured and organized compared to most healthcare specialties.



  • Anesthesia care documentation systems have evolved in complexity and are now widely adopted in the perioperative care of patients in the United States.



  • Benefits of electronic documentation of anesthesia care typically emerge from integration with monitoring, scheduling, billing, and enterprise electronic health record (EHR) systems.



  • Active and passive decision-support tools may suggest typical courses of action or call to attention patterns that are not apparent to the clinician.



  • Secondary use of EHR data is valuable in understanding the impact of clinical decisions on patient outcomes and the measurement of quality of care.



  • Electronic devices may act as distractions within the operating room (OR) care environment.





Acknowledgment


The editors and publisher would like to thank Dr. C. William Hanson for contributing a chapter on this topic in the prior edition of this work.




Introduction


Computers have become ubiquitous in modern life. Their use has penetrated every medical field and the practice of perioperative care is no different. Computers have given rise to the academic discipline of informatics, the study of information creation, storage, handling, manipulation, and presentation. Within health care this is referred to as medical, biomedical, or clinical informatics.




Computer Systems


At their most basic, computer systems are complex electronic circuits that perform mathematical operations (add, subtract, multiply, divide, and compare) on information available to them. Even the most complicated computer systems consist of these operations repeated millions of times per second, which collectively generate the activity specified by the user. Every operation performed within the computer begins with the retrieval of information from memory, continues with a mathematical operation within the processor, and ends with the storage of the output of that operation back to memory. This cycle of retrieval, processing, and storage repeats millions of times per second.


Software provides the instructions that a computer executes to process information. The operating system is the fundamental software that controls the communication among the components of the computer. The operating system controls the order in which a processor completes tasks, allocates memory among different applications, provides a structure for organizing files in long-term storage, controls access to files, determines which applications may run, and manages the interaction between the user and the computer. Modern operating systems provide graphical interfaces that give the user a visual paradigm for how information is organized and for specifying the actions the computer should take.


A software application is a set of instructions for a computer designed to perform a specific set of tasks. Electronic health record (EHR) software is an example of a software application. Software may (via the operating system) interact with external hardware devices, data held in long-term storage, and the user by way of input devices and display devices.


Because of the proliferation of mobile devices, traditional laptop or desktop computer systems have been supplanted in many environments by tablets or smartphone computers. These devices are structurally similar to traditional computing devices; however, the operating systems and software applications feature user interfaces that have been re-engineered to support use by touch screen or voice control. These devices trade off computational power against portability (size and weight) and duration of operation (battery life).




Computer Networks


Networks are the means for the exchange of information among computers, enabling the sharing of resources. These networks may be established using wireless (e.g., microwave radio spectrum) or wired connections ( Fig. 4.1 ). Dedicated hardware (equipment) controls the sending and receiving of information across these links, with specialized devices required to ensure that information is sent correctly to the intended computers on the network. Software is used to ensure communication is performed according to predefined standards. In order for a computer to be accessible in the network, each computer must be given a unique address on the network so that information can be identified as destined for that computer. The process of obtaining and maintaining network addresses is performed within the local operating system and network hardware. This allows software applications to specify the information to be sent and the operating system and network hardware to manage how it is exchanged between computers.




Fig. 4.1


Relationship between a local intranet (within an institution) and the wider Internet.

Institutions may choose to use an external vendor to provide certain services hosted on external servers; this is referred to as "cloud" computing or services. Prevention of unauthorized access to the intranet from external parties, while allowing users to access the Internet and other remote resources, is of paramount concern. "Firewall" devices aid in separating the institutional network from the wider Internet and in controlling access.


Wired networks require the computer system and the receiving hardware to be physically connected by electrical or optical cable. This limits the flexibility in the connection points, which must be placed in preplanned areas, with any subsequent adjustments requiring re-routing of cables. However, information travelling on the network cannot be intercepted or accessed without physical access to the network cables or connection points.


Wireless network systems offer advantages of convenience and the ability to move around a work environment without maintaining a physical connection among the computer systems. However, this usually occurs at the expense of speed of information exchange: information exchange via wireless links is an order of magnitude slower than the fastest wired connections. Because wireless systems require strong radio links between the computer and the network equipment, they are subject to poor reception (possibly because of physical barriers) and interference, which manifest as inaccessible or degraded network performance. It is difficult to control the precise limits of where a wireless network is available (e.g., only within a building and not immediately outside of it); therefore, processes are required to limit wireless network access to authorized users and to encrypt data transmitted across wireless links.


In practice, healthcare facilities use a blend of both wired and wireless networks to ensure that the advantages of each system are available to support the users.


In most settings, the network is organized as a “client-server” model. The computer that hosts the shared resources is referred to as the “server” and the computer accessing the resources is the “client.” The server is responsible for ensuring the client is an authorized user of the shared resource (access control) and ensuring the resource remains available to multiple users, potentially by preventing one client from monopolizing the use of the resource.
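
To make the client-server relationship concrete, the following is a minimal sketch in Python using only the standard library; the hostnames, port, and resource path are illustrative assumptions rather than any real institutional system.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

class SharedResourceHandler(BaseHTTPRequestHandler):
    """Server side: decides which clients may see which shared resources."""
    def do_GET(self):
        if self.path == "/policies/anesthesia":
            body = b"Departmental anesthesia policies (a shared resource)"
            self.send_response(200)
        else:
            body = b"Resource not found or not authorized"
            self.send_response(404)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def run_server() -> None:
    # Each machine on the network needs a unique address; clients connect to it.
    HTTPServer(("0.0.0.0", 8080), SharedResourceHandler).serve_forever()

def run_client() -> None:
    # The client states *what* it wants; the operating system and network
    # hardware handle *how* the request and response travel.
    url = "http://aims-server.example.local:8080/policies/anesthesia"
    with urllib.request.urlopen(url) as response:
        print(response.read().decode())

# In practice, run_server() runs on the server machine and run_client() runs
# on each client workstation that needs the shared resource.
```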


The client-server concept stands in contrast to peer-to-peer architecture, whereby resources are distributed across systems, with each computer on the network contributing its resources (e.g., files or specialized hardware). All computers are both clients and servers in this arrangement. There is limited ability to control access in a planned and coordinated manner.


Use of a client-server infrastructure may allow a significant amount of the computational work to be shifted to the central server. When the client has very limited computational resources, it is referred to as a "thin client." Computationally intensive tasks can be performed by the server while the client receives the results of the computation. Fundamentally, the thin client is viewing and interacting with a software application that is running on the server; the client is little more than a means of sending user input to the server and a dynamic display of application results. For this arrangement to work, there must be a limited, predictable set of software applications that the client accesses on the server, together with a reliable network connection. Without the network connection, the thin client has no functionality. This model may be easier to maintain because changes are made once, centrally, and then become available to every client that connects.


An alternative model is the “thick client,” where the client is capable of significant computational activities, retains a fully functional state when not connected to the network, accesses only the information required across the network, and processes it independently. However, these clients require individual maintenance.


A hybrid solution is "application virtualization," whereby a single software application is hosted centrally, uses central computational resources, and is accessed by client systems regardless of their configurations. This blends the advantages of a thin client (control of the application's availability, ease of maintenance, and assured compatibility, because no computational resources are required beyond running the connection to the server) with users having a fully functional computer or device for the remainder of their tasks. Additionally, this hybrid enforces a separation between the information stored on the server and any applications running on the client; information can thus be secured within the server housed on the institutional network.




The Internet


The Internet is a global network of networks. Best known by two of the ways in which it can be used, websites and email, the Internet is at its simplest a method for transferring electronic information across the world. Internet service providers (ISPs) provide access to optical and electrical cables, which transfer information across the world. As these cables are all interconnected, multiple paths are available to transfer data at any one time. Routers control the flow of Internet traffic and ensure that it takes the most direct and fastest routes across the multiple paths available to it. Although the delay that a user may experience in accessing information varies widely and is dependent on many factors, the flow of information around the world can be measured on the order of hundreds of milliseconds or less.


Use of the Internet has led to the development of a series of technologies where computing resources are offered to multiple clients using an Internet connection as a means of distribution and interaction with the clients (see Fig. 4.1 ). These “cloud” platforms allow on-demand and scalable use of computing resources. Computing resources can be bought and sold based on the variable amount of time they are used or the amount of information stored; additional capacity can be flexibly added. These resources are accessible from anywhere with an Internet connection. Furthermore, cloud platforms give organizations the ability to transfer the management of the specialized computer hardware needed to provide these services to another party.


The integration of mobile phone data networks and the proliferation of increasingly powerful handheld devices (such as smartphones or tablets) have further increased the number of potential clients. For healthcare organizations, there is significant user pressure to make healthcare information systems accessible remotely and from these mobile devices.


The most ubiquitous use of the Internet is in the delivery of "web pages." Information is stored on a "web server," and upon request from an application running on a remote client computer (a web browser), the information and display formatting instructions (e.g., size, shape, and position of text or graphics) are sent to the client. The web browser then interprets these instructions and displays the information accordingly. This process is highly dependent on well-defined and accepted standards of information exchange between client and server and of rendering by the client.
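
As an illustration of this request-and-render cycle, the short Python sketch below fetches a page and performs a crude stand-in for the browser's rendering step; it assumes only the standard library and uses the generic example.com address.

```python
import urllib.request
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Crude stand-in for a browser's rendering step: collect the visible text."""
    def __init__(self):
        super().__init__()
        self.fragments = []

    def handle_data(self, data):
        if data.strip():
            self.fragments.append(data.strip())

# Step 1: the "browser" requests the page from the web server.
with urllib.request.urlopen("http://www.example.com/") as response:
    html = response.read().decode("utf-8", errors="replace")

# Step 2: the returned markup (content plus formatting instructions) is
# interpreted; here we simply pull out the readable text.
parser = TextExtractor()
parser.feed(html)
print("\n".join(parser.fragments))
```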


These web pages have become increasingly sophisticated, incorporating text, video, audio, complex animations, stylesheets, and hypertext links. The technologies have evolved to support interactive processes that can dispense information specific to a single user (e.g., a record of the user's bank transactions) in a manner that is generalizable to many different users (so all customers can access their bank transactions this way). When these instructions are assembled to support specific business processes, they function as web-based software applications, referred to as "web applications" or "web apps." Interaction with web pages may lead to complex business processes being undertaken in the physical world. For example, the ability to buy a book over the Internet starts with a web page displaying the information and ends with someone delivering the book to the door, with many physical steps in between. Healthcare organizations have embraced these technologies to support the delivery and administration of patient care, including scheduling systems, laboratory result reporting, patient communications, and equipment management systems, all of which are delivered in this manner.


Of note, without additional measures, information travelling across the Internet is not necessarily private. A salient metaphor is the difference between information conveyed in an envelope (where the contents are not visible) and information conveyed on a postcard (where the message is clear to anyone who holds it).




Information Security


Although computing technology has significantly influenced the delivery of medical care, it has also brought a series of challenges that must be addressed. A major consideration is information security. Core to these considerations is ensuring that the correct information is available to the correct users at the correct time.


Threats to information security may come from within or outside an organization. Within an organization, an employee may access information that they are not authorized to view, or may transfer and store it in an insecure manner. Employees may also introduce security threats by using applications that transfer information outside of the organization or by modifying an existing network with a personal device. External threats may seek to improperly access information ("hacking") by obtaining passwords or identities from legitimate users (via "phishing" attacks) or by introducing applications that degrade computer function to extort payment ("ransomware" attacks).


The paradigm used for controlling access to computing resources is that of users and accounts. Each person who uses the computer is considered a user. Users can be identified and mapped to real-world persons. Users may belong to groups that share common attributes. It should be known in advance which resources should be available to which users or groups of users. A group of users (e.g., anesthesia providers) may have access to particular resources (e.g., a document of anesthesia policies), but each user may also have access based on individual parameters (e.g., an individual anesthesiologist may have sole access to his or her own private files). Granting a defined set of resource privileges to a group of users with a similar functional role is known as "role-based security"; changes in privileges then affect all users in that functional group.
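
A minimal sketch of role-based security follows; the roles, users, and resources are invented for illustration, and real systems integrate these checks with institutional directory services rather than hard-coded tables.

```python
# Privileges are attached to roles; a user's access follows from role membership.
ROLE_PRIVILEGES = {
    "anesthesia_provider": {"anesthesia_policies", "anesthesia_record"},
    "pharmacist": {"medication_orders"},
}

USER_ROLES = {
    "dr_smith": {"anesthesia_provider"},
    "rx_jones": {"pharmacist"},
}

def can_access(user: str, resource: str) -> bool:
    """A user may access a resource if any of the user's roles grants it."""
    return any(
        resource in ROLE_PRIVILEGES.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

print(can_access("dr_smith", "anesthesia_policies"))  # True
print(can_access("rx_jones", "anesthesia_record"))    # False

# Changing ROLE_PRIVILEGES["anesthesia_provider"] changes access for every
# user holding that role, which is the point of role-based security.
```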


Users should be able to positively identify themselves; commonly this involves the combination of a username and password, with the password known only to the user and the computer system. However, other methods of authentication, such as biometric information (fingerprint, iris scan, or face scan) or physical access tokens (e.g., identification badges), are now commonplace. Password policies that enforce a mandatory level of complexity (minimum length; a mix of letters, numbers, and special characters), set expiry dates, and prevent password reuse are designed to make passwords harder for an unknown party to guess and to mitigate the risk of passwords being accessed or used externally. However, requirements for increasing complexity or frequency of changes may pose additional burdens on users that they consider unacceptable and may not decrease risk.
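
The sketch below shows what a simple password-policy check of this kind might look like; the specific thresholds are illustrative assumptions rather than a recommended standard.

```python
import string

def meets_policy(password: str, min_length: int = 12) -> bool:
    """True if the password satisfies the (illustrative) complexity rules."""
    return (
        len(password) >= min_length
        and any(c.islower() for c in password)
        and any(c.isupper() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in string.punctuation for c in password)
    )

print(meets_policy("Tr0ub4dor&3xyz"))  # True: long, mixed case, digits, symbol
print(meets_policy("password"))        # False: too short and too simple
```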


Organizations may also choose to adopt "two-factor authentication" methods, which can be summarized as requiring "something you know and something you have" to gain access to the computer system. The password fulfills the first part of this concept, as it is meant to be known only to the user. Devices such as physical token code generators (which provide a predictable response to be entered alongside the password) or an interactive system (authentication via a smartphone application or phone call) may satisfy the second. Thus, in order to impersonate the user, someone must have both the password (which may have been taken without the user's knowledge) and a physical device (whose absence the user is more likely to detect). This makes unauthorized remote access much less likely, because an external party on the other side of the world may be able to obtain or guess a password but is very unlikely to also obtain the token or smartphone required for access.
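
The code-generating tokens described above commonly implement the time-based one-time password (TOTP) algorithm; a compact standard-library sketch is shown below, with an illustrative shared secret.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time code from a shared secret."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval             # same time window on both sides
    message = struct.pack(">Q", counter)
    digest = hmac.new(key, message, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# The server and the user's token or smartphone hold the same secret, so both
# can compute (and therefore verify) the same short-lived code.
print(totp("JBSWY3DPEHPK3PXP"))
```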


Physical security is an integral part of information security. Ensuring that an unauthorized person does not have physical access to computer hardware, or to the means of connecting to that hardware, is an important consideration. This can be accomplished by physical measures (such as locked rooms, doors, and devices that prevent movement of computer hardware) and by attention to where computers containing controlled information are placed (to prevent an unauthorized person from having access to a computer available in a public area).


However, as alluded to earlier, these restrictions are balanced against users' desires for increased usability and portability of computing devices and the need to make information available to the provider at the point of clinical interaction.


Therefore, it is necessary to ensure secure access to information across wireless links and across the Internet. One method for doing this is to ensure that the information transferred is not readily visible along its means of transmission. This is performed by a group of processes known as encryption. Encryption is the process of transforming a piece of information from its original and accessible state to one that is not accessible and lacks meaning without an additional piece of information (an encryption key).


The transformation to and from encrypted text takes place in a manner that is relatively easy to perform with the encryption key but infeasible without it. Encryption processes are based on mathematical operations, such as the multiplication of very large numbers, whose reversal admits so many possible combinations of factors that could have led to the same outcome that trying all possible solutions is computationally infeasible with current technology.
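
As a concrete illustration of symmetric encryption, the following sketch assumes the third-party Python cryptography package and its Fernet interface; without the key, the ciphertext conveys no meaning to an eavesdropper.

```python
# Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # the encryption key, known only to the endpoints
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"HR 62, BP 118/72 for patient MRN0012345")
print(ciphertext)                # meaningless to anyone intercepting it in transit

plaintext = cipher.decrypt(ciphertext)
print(plaintext.decode())        # the original message, recovered with the key
```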


External threats to an organization involve outside entities attempting to access services or applications that are meant for internal use only. Because healthcare organizations must be connected to the Internet to enable many information exchange functions, their internal systems are potentially reachable by every Internet-connected device in the world. "Firewalls" are used to ensure that only legitimate transactions and interactions with the external world are exposed to the internal hospital network. These hardware or software tools, collectively known as a firewall, prevent the creation of unauthorized connections from outside the organization to the internal computing systems. Firewalls can also limit the types of network traffic allowed to exit the internal network; for example, a firewall may restrict network traffic typically used for the sharing of files.


In order to allow legitimate external access, organizations may allow the creation of virtual private networks (VPNs). After appropriate authentication and verification, a VPN sets up an encrypted path for information between an external Internet-connected computer and the organization's internal network. This allows the external computer to act as if it were physically connected to the organization's internal network and to access resources such as specialized software or shared files, while adding a layer of access security and ensuring the communication is encrypted. A healthcare organization may require use of a VPN to access an EHR from outside the organization's network.




Standards for Healthcare Data Exchange


Although not always obvious, the EHR is typically an amalgamation of multiple computer systems and devices of various complexity. These systems exchange data according to common standards, languages, and processes.


Common connections include monitoring devices that allow automatic transfer of measured parameters into the electronic chart, infusion pumps (recording programmed settings), laboratory instruments (blood gas machines, cell counters, biochemistry analyzers, point-of-care testing devices), and systems that manage patient admission, identification, and bed occupancy (admission, discharge, and transfer [ADT] systems). All of these devices and systems need methods of communicating with the EHR ( Fig. 4.2 ). Although in some situations it may be possible to use a proprietary standard for communication between systems, this approach quickly becomes difficult to manage across an entire institution. As a consequence, a series of commonly used standards has been established to allow the communication of healthcare information.




Fig. 4.2


Information flows from connected devices across the institution into the electronic health record ( EHR ). Some departments maintain specialized software to manage needs specific to their workflow (for example, radiology departments use Picture Archiving and Communication Systems) that is interfaced into the EHR (e.g., to allow a report to be connected to the original CT scan). Similarly, networked monitor data are made available through a gateway interface device. PACU , Postanesthesia care unit.


The Health Level-7 (HL7) standard, originally developed in the late 1980s, is still used widely in the exchange of health information. HL7 allows the transmission of data in a standardized manner among devices and clinical systems. The information can be identified to a specific patient and organized into different data types, such as laboratory results, monitor data, and billing information. It can also cause the receiving system to perform an action, such as updating previously obtained data. The HL7 standard, and subsequent derivatives that address the exchange of clinical documents in a structured and identified manner, support communication among different clinical systems. However, this standard was designed for data exchange among software applications within an institution and did not envisage the proliferation of Internet-connected devices remotely accessing shared resources across many healthcare organizations.
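
The fragment below illustrates the general shape of an HL7 v2-style message and how a receiving system might split it into segments and fields; the patient, device, and observation values are invented, and real interfaces follow institution-specific message profiles.

```python
# An invented ORU (observation result) style message: segments separated by
# carriage returns, fields separated by "|", components by "^".
hl7_message = "\r".join([
    "MSH|^~\\&|MONITOR|OR3|AIMS|HOSP|202003071230||ORU^R01|00001|P|2.3",
    "PID|1||MRN0012345||DOE^JANE",
    "OBX|1|NM|8867-4^Heart rate^LN||62|/min|||||F",
    "OBX|2|NM|8480-6^Systolic blood pressure^LN||118|mm[Hg]|||||F",
])

# A receiving system splits the message into segments and fields, then files
# each observation against the identified patient.
for segment in hl7_message.split("\r"):
    fields = segment.split("|")
    if fields[0] == "OBX":
        observation, value, units = fields[3], fields[5], fields[6]
        print(observation.split("^")[1], value, units)
```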


This new paradigm led to the development of Fast Healthcare Interoperability Resources (FHIR). This communication standard is analogous to how modern Internet applications exchange data via simple standardized requests to a central resource. FHIR enables easier integration across different types of software and incorporates security features made necessary by the proliferation of mobile devices. The standard is designed to facilitate the exchange of data regardless of whether it is a single vital sign or a scanned document from a physical chart.
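
A sketch of the FHIR style of exchange is shown below: a simple, standardized HTTP request for an Observation resource that returns structured JSON. The server URL is hypothetical, and real deployments additionally require authentication (e.g., OAuth 2.0).

```python
import json
import urllib.request

# A client would issue a request like this to a FHIR server (URL is hypothetical).
request = urllib.request.Request(
    "https://fhir.example-hospital.org/Observation/hr-123",
    headers={"Accept": "application/fhir+json"},
)
# urllib.request.urlopen(request) would return the resource from a real server.

# The response is a structured JSON resource such as this minimal example.
observation_json = """
{
  "resourceType": "Observation",
  "status": "final",
  "code": {"coding": [{"system": "http://loinc.org", "code": "8867-4", "display": "Heart rate"}]},
  "subject": {"reference": "Patient/12345"},
  "valueQuantity": {"value": 62, "unit": "beats/minute"}
}
"""
observation = json.loads(observation_json)
print(observation["code"]["coding"][0]["display"],
      observation["valueQuantity"]["value"],
      observation["valueQuantity"]["unit"])
```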




Regulation of Electronic Data Exchange


In the United States, the 1996 passage of the Health Insurance Portability and Accountability Act (HIPAA) established a common regulatory framework that defined health information and the processes by which it should be stored and transferred, and established powers to investigate concerns regarding noncompliance with these rules.


There are four major regulatory rules: the HIPAA Privacy Rule, Security Rule, Enforcement Rule, and Breach Notification Rule. Each rule is a complex regulatory document, and professional advice should be sought on the applicability and relevance of each to a particular situation.


The HIPAA Privacy Rule details the allowable uses and disclosures of individually identifiable health information, which is referred to as "protected health information" (PHI). Identifiers that are considered PHI are listed in Table 4.1 . The Privacy Rule additionally defines the healthcare agencies covered by the rule. It defines processes that must be followed when working with business partners outside the healthcare agency, through the creation of business associate agreements. Further, it establishes the concept of a limited data set, a set of healthcare information from which direct identifiers have been removed, which can be shared with certain entities for research purposes, healthcare operations, and public health reasons; use of such data sets is governed by "data use agreements."



Table 4.1

Data Elements that Allow Patients to Be Identified

HIPAA Identifiers
Names
All geographic subdivisions smaller than a state, including street address, city, county, precinct, ZIP code
All elements of dates (except year) for dates that are directly related to an individual. Ages over 89 and all elements of dates (including year) indicative of such age
Telephone numbers
Vehicle identifiers and serial numbers, including license plate numbers
Fax numbers
Device identifiers and serial numbers
Email addresses
Web Universal Resource Locators (URLs)
Social security numbers
Internet Protocol (IP) addresses
Medical record numbers
Biometric identifiers, including finger and voice prints
Health plan beneficiary numbers
Full-face photographs and any comparable images
Account numbers
Any other unique identifying number, characteristic, or code
Certificate/license numbers

HIPAA, Health Insurance Portability and Accountability Act.
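
As a simple illustration of removing direct identifiers such as those listed in Table 4.1 before data are shared, the sketch below filters an invented record; the field names are hypothetical, and actual de-identification or limited data set creation must follow institutional and regulatory guidance.

```python
# Field names and the record itself are invented for illustration.
DIRECT_IDENTIFIER_FIELDS = {
    "name", "street_address", "telephone", "email",
    "ssn", "medical_record_number", "account_number",
}

record = {
    "name": "Jane Doe",
    "medical_record_number": "MRN0012345",
    "telephone": "555-0100",
    "asa_status": 2,
    "procedure": "laparoscopic cholecystectomy",
    "anesthesia_duration_min": 142,
}

limited_record = {
    field: value
    for field, value in record.items()
    if field not in DIRECT_IDENTIFIER_FIELDS
}
print(limited_record)   # only the non-identifying clinical fields remain
```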


The HIPAA Security Rule applies specifically to electronic PHI (e-PHI). The rule requires that e-PHI created, received, maintained, or transmitted by an organization be handled confidentially and in a manner that ensures data integrity and availability. Additionally, the rule requires that threats to information security be monitored and that measures be taken to mitigate these threats; this includes audits of computer systems to ensure unauthorized access has not occurred. The specification includes physical, technical, procedural, and administrative measures, all of which must be undertaken for compliance. In general, the rule does not specify a particular set of computing resources that should be used, but instead specifies the standards against which they should be verified.


The HIPAA Enforcement Rule established the processes whereby a breach of the Privacy Rule can be investigated and sanctions enforced. The Office for Civil Rights (OCR) within the Department of Health and Human Services (HHS) is responsible for receiving and investigating these complaints. Complaints may also be referred to the Department of Justice if it is believed that a criminal breach has occurred. Penalties for noncompliance can involve significant monetary fines or, in the context of criminal acts, imprisonment.


Finally, the HIPAA Breach Notification Rule defines what constitutes a breach of PHI data security and obligates covered organizations to report discovered breaches of PHI to the OCR. Differing timelines for reporting apply, depending on whether the breach involved more or fewer than 500 individuals. Notification must also be provided to affected individuals and potentially to the media, depending on the number of individuals involved.




The Nature of Healthcare Information in the Anesthesia Encounter


In the conduct of anesthesia care, much of the information gathered can be considered frequently occurring, structured data. That is, much of the information contained within the encounter can be categorized into one of a relatively small number of groups. This information is commonly present across anesthesia encounters, and the information itself can often be restricted to a small number of possible options (consider the example of an airway assessment).


This applies to information gathered in the preoperative phase of care (e.g., Mallampati classification from an airway examination) and the intraoperative phase of care (e.g., heart rate or systolic blood pressure). Furthermore, the intraoperative phase of care is marked by repetition of information at predefined intervals, with measurements that may be taken in an automated manner (e.g., noninvasive blood pressure recordings every 3 minutes).


A majority of the data gathered during an anesthesia case are structured, limited, and predictably repeated. However, the data are also voluminous, generated and captured continuously by monitors, anesthesia machines, and medication pumps. More than 50 different parameters may describe a single minute of anesthesia care.
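
The structured, repetitive character of these data can be illustrated with a simple data structure representing one sampling instant; the parameters shown are a small, arbitrary subset of those captured in practice.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class VitalsSample:
    """One sampling instant of intraoperative data (illustrative subset)."""
    timestamp: datetime
    heart_rate: int        # beats/min
    spo2: int              # %
    nibp_systolic: int     # mm Hg, cycled automatically (e.g., every 3 minutes)
    nibp_diastolic: int    # mm Hg
    etco2: int             # mm Hg
    fio2: float            # fraction of inspired oxygen

sample = VitalsSample(datetime(2020, 3, 7, 12, 30), 62, 99, 118, 72, 35, 0.5)
print(sample)
```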


This is in contrast with the nature of the information captured in many medical specialties, which is not easily constrained by content or structure. The documentation of a primary care visit may follow a standard format; however, the number of variables captured may not be easily defined in advance or constrained to a standard structure, and the range of possible issues to be documented may be too broad.


Anesthesia-derived data are thus well suited for capture into electronic charting systems, and a number of mature commercial systems are available for this task. These systems are often not standalone; how they are integrated is discussed in the next section.




Development and Deployment of Anesthesia Information Management Systems


Given the suitability for automated capture of recurring high-volume data, the concept of using computerized capture and storage for parts of the anesthesia record is not new. McKesson in 1934 described an early form of monitor that integrated with a vital signs data recorder ( Fig. 4.3 ). Early pioneering systems included the Duke Automatic Monitoring Equipment (DAME) system and its more compact successor, microDAME, which combined an internal monitoring platform with an integrated network architecture for central data recording. The Anesthesia Record Keeper Integrating Voice Recognition (ARKIVE), developed commercially in 1982 by Diatek, included both a voice and a touch screen interface. Over time, other systems became available, and these progressively morphed from being described as "anesthesia record keeping" (ARK) systems to "anesthesia information management systems" (AIMS) as the range of features and integration with other systems progressed.




Fig. 4.3


McKesson’s 1934 apparatus for the automated recording of physiologic parameters and gas mixtures.

From McKesson EI. The technique of recording the effects of gas-oxygen mixtures, pressures, rebreathing and carbon-dioxide, with a summary of the effects. Anesth Analg. 1934;13[1]:1–7 [“Apparatus” Page 2]


Despite extensive development of a number of commercial systems, the use of AIMS was relatively limited in the early 2000s. Survey estimates suggest that market penetration in academic medical centers increased from approximately 10% in 2007 to approximately 75% by the end of 2014, and it was estimated that by 2020 market penetration would reach 84% of all medical centers. In the United States, the implementation of EHRs has been encouraged by federal government financial incentives, including the American Recovery and Reinvestment Act of 2009, which authorized up to $11 million per hospital to finance the adoption of health information technology.


The adoption of health information technology has resulted in the increasing integration of the anesthesia record with other clinical systems. The American Society of Anesthesiologists (ASA) has produced a statement on the documentation of anesthesia care. Such systems can be used to fulfill clinical documentation needs; however, much of the promise of these systems lies in their potential for integration with the broader hospital environment and in the secondary uses of the data that they facilitate.


Anatomy of an Anesthesia Information Management System


A mature AIMS must be capable of (1) recording all aspects of the anesthesia encounter (preoperative, intraoperative, and postanesthesia care unit [PACU]); (2) automatically gathering the high-fidelity physiologic data generated by monitoring platforms and anesthesia machines; and (3) allowing the anesthesia provider to record observations regarding the conduct of the anesthetic. These three simple requirements allow us to closely specify the anatomy of an AIMS.


The first requirement, access to the same patient record during multiple phases of a case, suggests the use of a system organized on a computer network, where the record is maintained on a central server and accessed by multiple clients. This capability requires that computer workstations be accessible at each patient-care location to facilitate documentation. The computer must be accessible during the clinical interaction but in a way that does not interfere with this interaction, which is both an issue of ergonomics and of provider behavior. In the operating room (OR), the system should be directly accessible at the time of clinical care to allow contemporaneous documentation without the anesthesia provider physically moving away from the patient or care area. In many deployments, this is achieved with a computer mounted to the anesthesia workstation alongside the monitoring equipment. Because the computer hardware is located in clinical environments, it may become contaminated with pathogens, and it is important that the hardware can be cleaned in a manner compatible with infection control policies.


The second requirement, automated capture of data from OR monitors and anesthesia machines, implies some form of interface between the computer hardware and the hemodynamic monitors, anesthesia machines, and other patient-connected equipment such as infusion pumps or ventilators ( Table 4.2 ). In most AIMS implementations, this interface occurs at a central level, where a network of the physiologic monitors and the central server hosting the AIMS communicate via a gateway device. Typically, interfaces use standardized data formats, such as those described earlier, that enable communication among devices and software solutions from different manufacturers and developers. Interfacing these devices with a computer network may require specialized hardware and additional cost. However, the interface enables the automated capture of monitor and anesthesia machine data, freeing clinical providers from recording these data elements. In light of the cost and practical challenges, some AIMS situated in low-resource settings (e.g., an office-based anesthesia location) may omit the data interface feature.
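
Conceptually, this automated capture behaves like the polling loop sketched below, in which readings from the monitor gateway are filed into the central record at a fixed interval; read_from_gateway and store_in_aims are placeholders for vendor-specific interfaces, not real APIs.

```python
import time

def read_from_gateway() -> dict:
    """Placeholder for the vendor interface to the networked monitors."""
    return {"heart_rate": 62, "spo2": 99, "etco2": 35}

def store_in_aims(case_id: str, reading: dict) -> None:
    """Placeholder for writing a timestamped reading to the central AIMS server."""
    print(case_id, reading)

def capture_loop(case_id: str, interval_s: float = 60, cycles: int = 3) -> None:
    # A real system runs continuously for the duration of the case.
    for _ in range(cycles):
        store_in_aims(case_id, read_from_gateway())
        time.sleep(interval_s)

capture_loop("OR3-2020-03-07-001", interval_s=1, cycles=3)
```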



Table 4.2

Examples of Parameters Commonly Included in the Anesthesia Record Gathered Automatically from Different Sources

From Core Physiologic Monitor
Arterial blood pressure (systolic, diastolic, mean)
Cardiac index
Cardiac output
Central venous pressure
End-tidal CO2
Heart rate (ECG monitoring and SpO2)
Intracranial pressure (ICP)
Noninvasive blood pressure (systolic, diastolic, mean)
Pulmonary artery pressure (systolic, diastolic, mean)
Pulse pressure variation (PPV) and systolic pressure variation (SPV)
Saturation of peripheral oxygen (SpO2)
ST segment analysis
Systemic vascular resistance
Temperature (all sources)
From Stand-Alone Devices (May be Available within Some Core Physiologic Monitors)
Acceleromyography value
Cerebral oximeter (NIRS)
Continuous cardiac output measurement devices
Level of consciousness monitors
Mixed venous oxygen saturation (SvO2)
From Anesthesia Workstation
Fraction of inspired oxygen (FiO2)
Fresh gas flows: oxygen, air, nitrous oxide
Volatile anesthetic agents (inspired and expired concentrations)
Minute volume
Nitrous oxide (inspired and expired concentrations)
Oxygen (inspired and expired concentrations)
Peak inspiratory pressure (PIP)
Positive end-expiratory pressure (PEEP)
Respiratory rate (ventilator and ETCO2)
Tidal volume
Ventilator mode
