As AI services like ChatGPT develop, potential benefits and problems are also being identified. At least $11 billion in AI technology is currently being deployed or developed for use in healthcare, and some estimates predict that figure could grow to more than $188 billion within the next eight years. As adoption accelerates, what can we expect from the intersection of ChatGPT, healthcare, and HIPAA compliance?
ChatGPT, Healthcare, and HIPAA Compliance: The Current State
The first thing to recognize is that, in its current state, ChatGPT cannot be used with any patient's protected health information (PHI) in a manner that would be considered HIPAA compliant.
The HIPAA Privacy Rule clearly requires limiting access to PHI. ChatGPT's terms of use, by contrast, allow OpenAI to use personal information gathered from the use of its services, including log data, device information, and, most importantly:
Usage data: We may automatically collect information about your use of the Services, such as the types of content that you view or engage with, the features you use, and the actions you take, as well as your time zone, country, the dates and times of access, user agent and version, type of computer or mobile device, computer connection, IP address, and the like.
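To make the exposure concrete, consider what a covered entity would have to do before any note ever reached a third-party AI service. The sketch below is purely illustrative and is not a compliance control: it uses naive regex redaction in Python, and the patterns, labels, and sample note are all hypothetical. HIPAA's Safe Harbor de-identification method covers 18 categories of identifiers, and simple pattern matching does not come close to satisfying it.

```python
import re

# Illustrative only: naive redaction of a few obvious identifiers.
# Real de-identification under HIPAA's Safe Harbor method covers 18
# identifier categories and cannot be reduced to pattern matching.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "MRN": re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE),
}

def redact(text: str) -> str:
    """Replace matched identifiers with bracketed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

note = "Pt. John Doe, MRN: 482913, DOB 04/12/1957, SSN 123-45-6789."
print(redact(note))
# Pt. John Doe, [MRN REDACTED], DOB [DATE REDACTED], SSN [SSN REDACTED].
```

Note that the patient's name sails straight through the filter. That gap is the point: stripping a few identifiers does not make a service that logs usage data, IP addresses, and viewed content safe for PHI.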
ChatGPT, Healthcare, and HIPAA Compliance: Meet Your AI Doctor
One touted benefit of AI in healthcare is the ability to streamline and summarize medical data for healthcare providers, saving time and possibly even assisting with diagnosis. The question is, can AI be trusted to make the right decisions?
IBM Watson Health launched a highly publicized effort to leverage AI technology to improve diagnosis and treatment for cancer patients. In the end, poor-quality results and a lack of sufficient data doomed the effort.
A recent article on ChatGPT posted on the Abstractive Health website noted that ChatGPT could help translate detailed treatment notes into language patients can more easily understand. Using AI to generate on-demand responses to patient requests for information could improve communication with patients while reducing the workload for healthcare professionals.
But the article pointed out a potentially troubling aspect of using ChatGPT, or any other AI, today: its answers are delivered in a confident, highly readable manner, even when they are incorrect.
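For illustration only, here is roughly what such a patient-friendly rewrite might look like in code. This is a hypothetical sketch, not a compliant workflow: it assumes the `openai` Python package's chat-completions interface, an illustrative model name, and a note that has already been fully de-identified. As discussed above, ChatGPT currently offers no HIPAA-compliant way to handle actual PHI, and any output would still need review by a clinician before reaching a patient.

```python
# Hypothetical sketch: assumes the `openai` package (v1+) and an
# already de-identified note. Not a HIPAA-compliant workflow.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

deidentified_note = (
    "Echocardiogram shows LVEF 35%. Started low-dose beta blocker, "
    "titrate as tolerated. Follow up in 4 weeks."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "Rewrite clinical notes in plain language a patient can "
                "understand. Do not add medical advice or new facts."
            ),
        },
        {"role": "user", "content": deidentified_note},
    ],
    temperature=0,  # reduces variability, but not the risk of error
)

print(response.choices[0].message.content)
```

The `temperature=0` setting makes the output more repeatable, but it does nothing to guarantee accuracy, which is exactly the concern the article raises.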
ChatGPT, Healthcare, and HIPAA Compliance: Can ChatGPT Make You HIPAA Compliant?
While ChatGPT is not suitable for any circumstance involving patient PHI, the Compliancy Group team decided to test ChatGPT as a tool to aid in HIPAA compliance. Two of the most critical aspects of having an effective HIPAA compliance program are conducting the annual HIPAA Security Risk Assessment and having effective HIPAA Policies and Procedures.
We asked ChatGPT to write a set of HIPAA-compliant policies and procedures for a medical office with doctors, nurses, physician assistants, and office personnel. We also asked the AI to generate a HIPAA-compliant IT Risk Assessment questionnaire and Asset and Device Audit for the same office.
The final policies were based on the responses to 22 requests to ChatGPT. We then asked our Lead Compliance Attorney, Dan Lebovic, to review the results.
While Lebovic was impressed with some of the results, he found many flaws in ChatGPT's responses.
“When you consider that the results are coming from an AI program, it’s a surprisingly good first step,” said Lebovic. “But as you analyze the results, you see some pretty severe shortcomings.”
“In many cases, the results are disorganized, legal citations are incorrect, and the policies are generalized regurgitations of what the HIPAA law says instead of being effective policies that an organization could implement. There are also concepts that appear in different rules for different reasons that are not addressed adequately.”
Lebovic added that the results we obtained would not meet the policy and procedure standards required by HIPAA's Rules and Regulations, but that they sound professional enough to easily mislead someone not well versed in HIPAA's requirements.
“The technology is fascinating, but HIPAA compliance requires that policies and procedures address the specifics of each organization,” said Lebovic. “You can have two dental offices or pediatrician practices that appear to be identical in every way, but what needs to be addressed by their HIPAA policies and procedures could be very different.”
ChatGPT, Healthcare, and HIPAA Compliance: AI’s Built-in Bias and Other Concerns
It is important to remember that every AI product has its roots in the work of human beings, each of whom possesses their own unique life experiences, perspectives, and biases. How much of that bias seeps into the programming that powers the AI?
Demographic data from Zippia.com reports that only 20.4 percent of data scientists are women. From a race and culture perspective, just over five percent are Hispanic, and only one percent are African-American. There is a real risk that AI results will fail to address the needs of specific populations or will further marginalize underrepresented groups.
Anne Marie Anderson, Compliancy Group's Director of Product Content and Compliance, brings her experience as a compliance attorney to the role. She notes that any AI solution must address the potential for bias and marginalization.
“Results that are produced by any AI project cannot be used in a way that will discriminate or exclude any group, especially in a matter that is as important as healthcare,” said Anderson.
Finally, Elon Musk, one of the founders of OpenAI, the organization behind ChatGPT, left the company following disagreements about its direction and has raised concerns about safety and transparency within OpenAI.
Musk has also been famously quoted as saying that AI represents an “existential threat” to humanity and has observed that the technology could be used to develop weapons and influence elections.
ChatGPT does an exceptional job of assisting with writing code for software and applications, but there are already reports of bad actors and cybercriminals using it to write malware.
Compliancy Group will continue to monitor and report on the development of AI and ChatGPT and their impact on healthcare compliance and the healthcare industry in general.