ChatGPT and Healthcare Software

The implications of using ChatGPT when you have healthcare clients are real. Anything typed into ChatGPT is sent to OpenAI's servers and may be used under its terms, so when your systems have the potential to access healthcare data, you have to think about how HIPAA factors in.

HIPAA and ChatGPT

ChatGPT is not HIPAA compliant. It’s as simple as that. HIPAA requires a signed business associate agreement (BAA) before a vendor can receive protected health information on a covered entity’s behalf, and OpenAI does not offer a BAA for ChatGPT.

ChatGPT’s terms and privacy policy allow OpenAI to use personal information gained from the use of its services, including log data, device information, and, most importantly:

Usage data: We may automatically collect information about your use of the Services, such as the types of content that you view or engage with, the features you use, and the actions you take, as well as your time zone, country, the dates and times of access, user agent and version, type of computer or mobile device, computer connection, IP address, and the like.

How ChatGPT uses and discloses consumer data directly contradicts how HIPAA permits patient information to be used and disclosed.

Entering protected health information (PHI) into ChatGPT is strictly prohibited, but ChatGPT can be used for other purposes. You can use ChatGPT, in a limited capacity, to create the framework for code, emails, and articles.
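
If you do use it this way, the guardrail that matters is keeping PHI out of every prompt. Below is a minimal sketch of one way to enforce that: redact a few common identifier patterns locally before a prompt ever leaves your environment. The patterns and the scrub_prompt/assert_no_phi helpers are hypothetical illustrations, not a vetted de-identification tool, and regexes alone come nowhere near covering HIPAA’s full list of identifiers.

```python
import re

# Hypothetical patterns for a few common PHI identifiers. A real deployment
# would need a vetted de-identification tool; regexes are only a sketch.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scrub_prompt(prompt: str) -> str:
    """Replace anything matching a known PHI pattern with a placeholder."""
    for label, pattern in PHI_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

def assert_no_phi(prompt: str) -> None:
    """Refuse to proceed if any PHI pattern is still present."""
    for label, pattern in PHI_PATTERNS.items():
        if pattern.search(prompt):
            raise ValueError(f"prompt appears to contain PHI ({label}); not sending")

if __name__ == "__main__":
    raw = "Summarize the visit for MRN: 00123456, callback 555-867-5309."
    safe = scrub_prompt(raw)
    assert_no_phi(safe)  # raises if anything slipped through
    print(safe)
```

The design point is the order of operations: scrub and verify locally first, so nothing sensitive is ever in the text you hand to a third-party service.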

Our Experts on ChatGPT

While ChatGPT can be used as a framework for several purposes, you ultimately need to know what you’re doing. Several experts in the field have pointed out that ChatGPT will confidently produce content on various subject matters, but this doesn’t mean the content it produces is entirely accurate. ChatGPT has been known to produce broken code and inaccurately cite legal publications.

So, you need to know enough about the subject you’re asking ChatGPT to produce content for and remember to fact-check before you do anything with what ChatGPT produces. 
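
For code in particular, fact-checking can be as concrete as running the output against cases whose answers you already know before you adopt it. Below is a minimal sketch of that habit; generated_median stands in for hypothetical pasted model output, seeded with the kind of silent bug ChatGPT is known to produce.

```python
# A minimal sketch of "trust but verify" for generated code. The function
# below stands in for hypothetical model output; it silently mishandles
# even-length lists, which the known-answer checks catch.

def generated_median(values):
    ordered = sorted(values)
    return ordered[len(ordered) // 2]  # wrong for even-length inputs

def check(fn):
    cases = [([1, 3, 2], 2), ([1, 2, 3, 4], 2.5), ([5], 5)]
    for args, expected in cases:
        got = fn(list(args))
        status = "ok" if got == expected else f"FAIL (got {got})"
        print(f"median({args}) == {expected}: {status}")

if __name__ == "__main__":
    check(generated_median)  # the even-length case fails, exposing the bug
```

A failing known-answer test is the cheapest fact-check available: it takes seconds and catches exactly the confident-but-wrong output the experts below describe.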

Compliancy Group Lead Compliance Attorney Dan Lebovic commented on a set of policies and procedures we asked ChatGPT to produce. 

“When you consider that the results are coming from an AI program, it’s a surprisingly good first step,” said Lebovic. “But as you analyze the results, you see some pretty severe shortcomings.”

“In many cases, the results are disorganized, legal citations are incorrect, and the policies are generalized regurgitations of what the HIPAA law says instead of being effective policies that an organization could implement. There are also concepts that appear in different rules for different reasons that are not addressed adequately.”

While there are currently shortcomings in technology such as ChatGPT, some are hopeful that it can eventually become a useful tool for healthcare security. “I believe there is the potential for AI models to be trained so that they could be used to help a company manage and perform an effective gap analysis or analyze a company’s risk with various Security or Privacy controls,” stated Craig Baldassare, VP Product, Compliancy Group.

Others were concerned with how AI mistakes could impact the future. Paul Redding, VP Partner Engagement, Compliancy Group, warns, “Without question, artificial intelligence will change nearly every industry on the planet at some level. The potential is literally endless. One thing we as a society have to remember is that AI, like the people who created it, is inherently biased. To be truly aware, we as individuals have to learn from our experiences, but it’s this act of learning that makes us biased. We have to remember that the same bias that drives people to make bad decisions can cause AI to do the same. This is one of those things that keeps me awake at night – as we put AI in charge of more and more critical systems, what happens when AI makes the wrong choice?”


The Future of AI in Healthcare and What NIST Has to Say

While ChatGPT may not have a place anywhere near patient data, other forms of AI have huge potential in healthcare.

The machine learning capabilities of AI can be used to detect disease earlier and manage patient care. AI is also useful from a cybersecurity perspective, powering malware, fraud, and intrusion detection; network risk assessment; and user behavior analysis.
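
As a hedged illustration of what user behavior analysis can look like in practice, the sketch below fits an isolation forest to made-up session logs and flags new sessions that don’t match the baseline. The features, data, and thresholds are invented for the example; a real deployment would draw on much richer telemetry.

```python
# A minimal sketch of user-behavior anomaly detection (requires scikit-learn).
from sklearn.ensemble import IsolationForest

# Each row: [login hour (0-23), MB transferred in session] -- hypothetical logs.
normal_sessions = [[9, 40], [10, 55], [11, 35], [14, 60], [15, 45], [16, 50]]
new_sessions = [[10, 50],   # looks like business as usual
                [3, 900]]   # 3 a.m. bulk transfer -- worth a look

model = IsolationForest(contamination=0.1, random_state=0)
model.fit(normal_sessions)

# predict() returns 1 for inliers and -1 for outliers.
for session, label in zip(new_sessions, model.predict(new_sessions)):
    verdict = "anomalous" if label == -1 else "normal"
    print(f"session {session}: {verdict}")
```

The point is not the particular model; it is that baseline behavior is learned from history so deviations surface automatically, which is the same idea behind the fraud and intrusion detection applications mentioned above.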

With each application of AI in healthcare, there’s a compliance discussion to be had regarding HIPAA and potentially NIST.

On January 26, 2023, NIST released its AI Risk Management Framework (AI RMF). Organizations can voluntarily adopt the AI RMF to improve trustworthiness in the design, development, use, and evaluation of AI products, services, and systems.

According to NIST, Framework users are expected to benefit from: 

  • enhanced processes for governing, mapping, measuring, and managing AI risk, and clearly documenting outcomes; 
  • improved awareness of the relationships and tradeoffs among trustworthiness characteristics, socio-technical approaches, and AI risks; 
  • explicit processes for making go/no-go system commissioning and deployment decisions; 
  • established policies, processes, practices, and procedures for improving organizational accountability efforts related to AI system risks; 
  • enhanced organizational culture which prioritizes the identification and management of AI system risks and potential impacts to individuals, communities, organizations, and society; 
  • better information sharing within and across organizations about risks, decision-making processes, responsibilities, common pitfalls, TEVV (test, evaluation, verification, and validation) practices, and approaches for continuous improvement; 
  • greater contextual knowledge for increased awareness of downstream risks; 
  • strengthened engagement with interested parties and relevant AI actors; and 
  • augmented capacity for TEVV of AI systems and associated risks.

This can help organizations prevent risks proactively and develop more trustworthy AI systems by: 

  • improving their capacity for understanding contexts; 
  • checking their assumptions about context of use; 
  • enabling recognition of when systems are not functional within or out of their intended context; 
  • identifying positive and beneficial uses of their existing AI systems; 
  • improving understanding of limitations in AI and ML processes; 
  • identifying constraints in real-world applications that may lead to negative impacts; 
  • identifying known and foreseeable negative impacts related to intended use of AI systems; and 
  • anticipating risks of the use of AI systems beyond intended use.
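
The framework organizes that work under four core functions: Govern, Map, Measure, and Manage. As one hypothetical way to make those functions concrete, the sketch below keeps a small risk register keyed to them. The AI RMF itself prescribes no particular data model, so the structure and entries here are purely illustrative.

```python
# A minimal sketch of tracking AI risks against the AI RMF's four core
# functions (Govern, Map, Measure, Manage). Entries are hypothetical.
from dataclasses import dataclass

FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

@dataclass
class RiskEntry:
    system: str          # which AI system the risk belongs to
    description: str     # what could go wrong
    function: str        # which AI RMF function the action falls under
    action: str          # what the organization will actually do
    status: str = "open"

    def __post_init__(self):
        if self.function not in FUNCTIONS:
            raise ValueError(f"function must be one of {FUNCTIONS}")

register: list[RiskEntry] = [
    RiskEntry("triage-model", "training data may under-represent rural patients",
              "Map", "document intended context of use and known gaps"),
    RiskEntry("triage-model", "no owner for monitoring drift after deployment",
              "Govern", "assign accountability for post-deployment review"),
    RiskEntry("triage-model", "accuracy unmeasured on current patient mix",
              "Measure", "schedule a TEVV run against a held-out local sample"),
]

for func in FUNCTIONS:
    entries = [r for r in register if r.function == func]
    print(f"{func}: {len(entries)} open item(s)")
    for r in entries:
        print(f"  - [{r.system}] {r.description} -> {r.action}")
```

Even a register this simple delivers one of the benefits NIST lists above: go/no-go decisions and accountability are written down per function instead of living in someone’s head.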

Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270) offers guidance connected to the NIST AI RMF. One of its authors and NIST’s Principal Investigator for AI Bias, Reva Schwartz, commented, “Context is everything. AI systems do not operate in isolation. They help people make decisions that directly affect other people’s lives. If we are to develop trustworthy AI systems, we need to consider all the factors that can chip away at the public’s trust in AI.” 

“Many of these factors go beyond the technology itself to the impacts of the technology, and the comments we received from a wide range of people and organizations emphasized this point. Organizations often default to overly technical solutions for AI bias issues. But these approaches do not adequately capture the societal impact of AI systems. The expansion of AI into many aspects of public life requires extending our view to consider AI within the larger social system in which it operates. It’s important to bring in experts from various fields — not just engineering — and to listen to other organizations and communities about the impact of AI.”
