HIPAA and ChatGPT
ChatGPT is not HIPAA compliant. It’s as simple as that.
ChatGPT's privacy policy, for example, states the following about usage data: "We may automatically collect information about your use of the Services, such as the types of content that you view or engage with, the features you use, and the actions you take, as well as your time zone, country, the dates and times of access, user agent and version, type of computer or mobile device, computer connection, IP address, and the like."
How ChatGPT uses and discloses consumer data directly contradicts how HIPAA permits patient information to be used.
Entering protected health information (PHI) into ChatGPT is strictly prohibited, but ChatGPT can be used for other purposes. You can use it, in a limited capacity, to create the framework for code, emails, and articles.
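One practical way to enforce "no PHI in prompts" is to screen text before it ever leaves your environment. The sketch below is purely illustrative (the function name and regex patterns are our own, not any official tool, and real PHI detection requires far more than a few regexes), but it shows the basic idea of a pre-submission check:

```python
import re

# Illustrative patterns only: a naive screen for a few obvious PHI formats.
# Real PHI covers 18 identifier categories; regexes alone are not sufficient.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),              # e.g. 123-45-6789
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,}\b", re.IGNORECASE),  # medical record number
    "dob": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),          # e.g. 04/12/1961
}

def contains_possible_phi(prompt: str) -> bool:
    """Return True if the prompt matches any of the naive PHI patterns."""
    return any(p.search(prompt) for p in PHI_PATTERNS.values())

safe = "Draft an outline for a patient-intake email template."
unsafe = "Summarize the chart for MRN: 00123456, DOB 04/12/1961."

print(contains_possible_phi(safe))    # False: generic drafting request
print(contains_possible_phi(unsafe))  # True: blocked before submission
```

A check like this is a guardrail, not compliance: it catches careless mistakes, but the underlying rule remains that PHI never goes into ChatGPT at all.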
Our Experts on ChatGPT
While ChatGPT can be used as a framework for several purposes, you ultimately need to know what you're doing. Several experts in the field have pointed out that ChatGPT will confidently produce content on various subject matters, but this doesn't mean the content it produces is entirely accurate. ChatGPT has been known to produce broken code and to cite legal publications inaccurately.
So you need to know enough about the subject to evaluate what ChatGPT gives you, and you need to fact-check its output before you do anything with it.
Compliancy Group Lead Compliance Attorney Dan Lebovic commented on a set of policies and procedures we asked ChatGPT to produce.
“When you consider that the results are coming from an AI program, it’s a surprisingly good first step,” said Lebovic. “But as you analyze the results, you see some pretty severe shortcomings.”
“In many cases, the results are disorganized, legal citations are incorrect, and the policies are generalized regurgitations of what the HIPAA law says instead of being effective policies that an organization could implement. There are also concepts that appear in different rules for different reasons that are not addressed adequately.”
While there are currently shortcomings in technology such as ChatGPT, some are hopeful that it can eventually become a useful tool for healthcare security. “I believe there is the potential for AI models to be trained so that they could be used to help a company manage and perform an effective gap analysis or analyze a company’s risk with various Security or Privacy controls,” stated Craig Baldassare, VP Product, Compliancy Group.
Others were concerned with how AI mistakes could impact the future. Paul Redding, VP Partner Engagement, Compliancy Group, warns, "Without question, artificial intelligence will change nearly every industry on the planet at some level. The potential is literally endless. One thing we as a society have to remember is that AI, like the people who created it, is inherently biased. To be truly aware, we as individuals have to learn from our experiences, but it's this act of learning that makes us biased. We have to remember that the same bias that drives people to make bad decisions can cause AI to do the same. This is one of those things that keeps me awake at night: as we put AI in charge of more and more critical systems, what happens when AI makes the wrong choice?"