Artificial intelligence, or “AI,” is fast becoming indispensable to modern medical practice. Whether running complex prognostic algorithms to model possible disease outcomes or upgrading medical bookkeeping applications, physicians employ various forms of AI in their daily tasks. While these tools can produce unprecedented benefits, the growing incorporation of AI into medical practice also brings a number of key cybersecurity challenges. Medical information is highly sensitive and confidential, and with the implementation of AI, the chances of such information falling into the wrong hands are ever increasing. Understanding cyber threats, and how they can be mitigated, is now an integral part of medical practice.
The authors come at this problem from very different perspectives. One of us is a fourth-year medical student, digitally proficient and well-acquainted with the hidden dangers of ubiquitous technology. The other is a physician in his mid-60s, excited about potential uses of AI to improve medical practice but far less familiar with how to safeguard patient data in this new age. Both of us, however, agree that all physicians, no matter their age or comfort with new technologies, must be prepared to consider and address the challenges of using AI in health care.
AI models often rely on training with vast datasets that include electronic health records, numerical data, imaging results, and demographic information. If this data is not properly protected and de-identified, it can become a target for cyberattacks. Several institutions have already experienced ransomware attacks that disrupted hospital operations and exposed patient data, often resulting in legal consequences and extensive litigation. When AI systems are connected to external networks, they may create additional access points for attackers. Physicians who use AI tools should therefore be aware that accessing sensitive systems through unsecured devices or public networks creates opportunities for cyberattacks.
Another vulnerability is the integration of third-party AI platforms into clinical systems. Hospitals and clinics often depend on outside vendors to deliver AI services. Although these platforms can be convenient, they can also introduce weaknesses, especially if the vendors do not comply with robust cybersecurity standards. The physician may not always know where confidential information is stored or how the AI platform processes it. Patient data may also traverse a vendor’s unsecured systems before reaching the actual AI model, creating a weak link in the cybersecurity chain, and it is sometimes transmitted to cloud servers for processing; if those servers are breached, confidential patient information is at risk. Physicians should therefore thoroughly scrutinize AI platforms before adopting them.
Data privacy is also compromised when clinicians input easily identifiable patient data into AI models without adequate de-identification. Some AI programs store user input or use it to improve the model, so when patient data is entered into an unsecured platform, it may be stored or examined without proper oversight. A physician must avoid inputting protected health information into an AI program unless the program is specifically authorized and tested for clinical use. Physicians should also avoid entering large amounts of identifiable data whenever possible, because multiple demographic data points make it easier for malicious actors to re-identify a patient. Only the necessary data should be entered, and where possible, calibrated statistical noise can be added to numerical variables such as age, weight, and height to make unique identification more difficult, an approach related to “differential privacy.” Secure training methods such as “federated learning” also allow models to be trained locally at the point of use, so that raw patient data never needs to be transmitted externally.
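The noise-adding idea behind differential privacy can be sketched in a few lines of Python. This is an illustrative toy, not a clinical-grade implementation: the function name, the example weight value, and the epsilon setting are all hypothetical, and real deployments should rely on a vetted privacy library rather than hand-rolled code.

```python
import math
import random

def add_laplace_noise(value, sensitivity=1.0, epsilon=0.5):
    """Return `value` plus Laplace noise scaled to sensitivity/epsilon.

    Smaller epsilon means stronger privacy but a noisier reported value.
    The noise is drawn via inverse-CDF sampling of the Laplace distribution.
    """
    scale = sensitivity / epsilon
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    return value - scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

# Hypothetical example: a patient's true weight in kilograms is "fuzzed"
# before it leaves the local system, so the exact figure is never shared.
reported_weight = add_laplace_noise(82.0, sensitivity=1.0, epsilon=0.5)
```

The key design point is that the noise is random and calibrated: individual values become approximate, yet averages over many patients remain accurate, which is what lets researchers use the data without exposing any one person.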
Threats to cybersecurity also extend to the processing and interpretation of data, because AI models themselves can be manipulated. Attackers might add tainted data to training datasets, skewing the model’s outputs, a technique known as data poisoning. Malicious actors can also modify input data in subtle ways, such as introducing minor perturbations to the individual pixels of an X-ray or CT scan, that lead the AI model to a misdiagnosis and incorrect clinical recommendations. Doctors should therefore remember that AI results are not infallible and should always be cross-checked against clinical judgment.
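The pixel-perturbation attack described above can be demonstrated with a toy example. The "classifier" below is a hypothetical stand-in (a simple weighted sum of pixel intensities, not a real imaging model), but the trick it illustrates, nudging each pixel slightly in the direction that shifts the score, is the same sign-based idea used in the well-known fast gradient sign method (FGSM).

```python
# Toy "classifier": a weighted sum of pixel intensities.
# Positive score -> "abnormal", negative score -> "normal".
def score(pixels, weights):
    return sum(p * w for p, w in zip(pixels, weights))

def fgsm_perturb(pixels, weights, epsilon):
    """Nudge every pixel by at most epsilon in the direction that raises
    the score (for a linear model, that direction is the sign of each
    weight)."""
    return [p + epsilon * (1.0 if w > 0 else -1.0)
            for p, w in zip(pixels, weights)]

# Hypothetical 100-pixel "scan" that the model correctly scores as normal.
weights = [0.5 if i % 2 == 0 else -0.5 for i in range(100)]
pixels = [0.4] * 100
pixels[1] = 0.6  # original image scores about -0.1 -> "normal"

adversarial = fgsm_perturb(pixels, weights, epsilon=0.01)
# Each pixel moved by only 0.01 (visually imperceptible), yet the score
# flips from about -0.1 ("normal") to about +0.4 ("abnormal").
```

Because every pixel moves in the direction the model is most sensitive to, the tiny changes accumulate into a large shift in the output, which is exactly why such attacks are invisible to the human eye but decisive for the model.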
In summary, physicians can take a number of practical steps to lower the risk of cyberattacks involving AI. Clinicians should first be trained in responsible technology use and digital security; many common cyber incidents can be avoided with a basic understanding of phishing emails, dubious links, and unreliable networks. In addition, doctors should adhere to institutional policies on device security and enable two-factor authentication wherever it is available to reduce the chance that unauthorized actors gain access to health care systems. Before implementing new AI platforms, hospitals and clinics should also conduct rigorous cybersecurity evaluations, verifying data storage practices, encryption standards, and HIPAA compliance.
Artificial intelligence has tremendous potential, but it also presents a complex cybersecurity landscape that must be navigated prudently. Physicians have a key part to play in protecting patient privacy by using AI technology responsibly. Doing so will help avert privacy breaches and cyberattacks, allowing the full benefits of technology and innovation in the delivery of medical care to be realized with minimal risk.
Purab Patel is a medical student.
Francisco M. Torres is an interventional physiatrist specializing in diagnosing and treating patients with spine-related pain syndromes. He is certified by the American Board of Physical Medicine and Rehabilitation and the American Board of Pain Medicine and can be reached at Florida Spine Institute and Wellness.
Dr. Torres was born in Spain and grew up in Puerto Rico. He graduated from the University of Puerto Rico School of Medicine. Dr. Torres performed his physical medicine and rehabilitation residency at the Veterans Administration Hospital in San Juan before completing a musculoskeletal fellowship at Louisiana State University Medical Center in New Orleans. He served three years as a clinical instructor of medicine and assistant professor at LSU before joining Florida Spine Institute in Clearwater, Florida, where he is the medical director of the Wellness Program.
Dr. Torres is a prolific writer whose primary interest is preventive medicine, and he works with all of his patients to promote overall wellness.