
AI Models, Client Data, & HIPAA Compliance: What SLPs Need to Know
Welcome back! Are you ready to dig deep into AI and client privacy? As we covered in our recent ‘HIPAA Compliance At Home’ article, safeguarding protected health information (PHI) is key. It’s not just a professional guideline; it’s a fundamental ethical responsibility and a legal imperative under laws like HIPAA. Now we narrow our focus to HIPAA and the use of AI tools.
The Critical Importance of Client Data Privacy (HIPAA)
Now, we’ll examine one of the most critical concerns for us as clinicians when dealing with AI: HIPAA and PHI. Specifically, our clients’ sensitive health information must be kept private and secure when interacting with AI tools. As Speech-Language Pathologists, we have a fundamental ethical and legal responsibility under laws like HIPAA to safeguard PHI.
The rapid rise of AI brings with it questions about how these models learn from vast datasets. What are the implications for the confidentiality and security of the clinical data we manage daily? Is any interaction with any AI tool a HIPAA violation? How can we ensure our clients’ trust isn’t compromised?
In this post, we’ll unpack these crucial issues of AI and HIPAA for SLPs. We’ll examine how AI actually learns from massive amounts of text and data. Then we’ll tackle how to keep PHI private and secure when using AI.
Don’t forget to take our poll at the end! Results will be shared in Part 8.
The Human Brain and Clinical Data Processing: A Parallel Perspective
As Speech-Language Pathologists, we constantly process and learn from clinical information. From graduate school to continuing education, supervision, and countless client interactions, our brains process vast clinical data. We are immersed in a sea of de-identified case studies, research articles, diagnostic reports, and therapy session data. We absorb this input, identify patterns, recognize clinical presentations, and synthesize effective intervention strategies.
This exposure allows us to develop our clinical judgment, personalize therapy plans, and communicate professionally about our clients’ needs. We don’t memorize every detail of every case. Instead, we internalize the underlying principles and relationships, allowing us to generate our own unique, ethical, and individualized clinical insights. This human learning process involves drawing inferences from numerous, often sensitive, pieces of information.
How AI Learns: Pattern Recognition on a Massive Scale

While humans and AI language models learn differently, they both rely on an immense quantity of input. AI language models are trained on enormous datasets of text and code – encompassing millions of books, articles, websites, and more.
AI Training: Leveraging Vast Datasets
Healthcare AI models are also trained on anonymized data: research, journals, clinical guidelines, and de-identified clinical records. It’s as if they’ve “read” the entire internet plus a specialized library of medical and clinical literature.
Beyond Memorization: The Art of Pattern Synthesis
However, during this training process, the AI doesn’t typically “download” and store individual client records or copyrighted works. Instead, AI analyzes data for patterns. It identifies word probabilities, grammar, and style.
It’s not directly copying and regurgitating. Instead, it is learning the underlying rules of language and the common ways information is conveyed, though without real meaning. When text is generated (e.g., a SOAP note), it’s not pulling verbatim from a specific client’s file. Instead, it’s using patterns to construct new sequences of words that are statistically probable and relevant to your prompt. It’s synthesizing information into something novel.
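To make this concrete, here is a minimal, hypothetical Python sketch of pattern-based next-word prediction. The tiny “corpus” and the simple bigram approach are invented for illustration; real language models use far more sophisticated neural networks, but the principle is the same: what gets stored are statistical patterns, not the source documents.

```python
from collections import Counter, defaultdict

# Toy training text (invented for illustration; not real client data).
corpus = (
    "client produced target sounds with cues . "
    "client produced target words with cues . "
    "client answered questions with minimal cues ."
).split()

# Learn a pattern: how often each word follows each preceding word.
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word after `word`."""
    return follow_counts[word].most_common(1)[0][0]

print(predict_next("client"))  # "produced" (seen twice) beats "answered" (once)
print(predict_next("with"))    # "cues"
```

Notice that once training is done, the counts table is all this toy model keeps; the original sentences could be deleted and its predictions would be unchanged. That is the sense in which models learn patterns rather than storing records.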
We will dive more into how AI works in Part 3 of the series. However, the key takeaway for SLPs is this: even though the model learns patterns, the data you input for a specific task can still violate privacy and security requirements.
The Imperative of Data Privacy and Security for SLPs (HIPAA and Beyond)

For SLPs, the core legal and ethical imperative is HIPAA (the Health Insurance Portability and Accountability Act). This legislation dictates how protected health information (PHI) must be handled, stored, and transmitted. The myth we’ll debunk in our next post – that all AI use inherently violates HIPAA – stems from a valid concern about PHI.
This is why it’s crucial to differentiate the tools you use for PHI and non-PHI purposes. Even seemingly “de-identified” notes can still be problematic with AI and HIPAA for SLPs.
Public/General-Purpose AI Tools (e.g., standard ChatGPT, Google Gemini)

These tools are NOT HIPAA compliant by default. They are designed for general use, not for processing sensitive health information, so using them with client information puts data security directly at risk.
The “De-identification” Trap
You might believe you’ve removed all identifying information by leaving out a name, specific dates, or a location. However, HIPAA’s definition of de-identification (the “Safe Harbor” method) is incredibly stringent. It requires removing 18 specific identifiers, including, crucially, “any other unique identifying number, characteristic, or code.”
Your session notes, even without a name, contain highly specific clinical details and narrative elements.
For example:
- “Data on /s/ blend in the initial position was 82% accurate.”
- “Discussed pictures from her trip to Chicago.”
- “/k/ sounds 50% accurate at word level.”
- “Cued /k/ by saying ‘in your throat’.”
- “Continues to answer ‘I don’t know’ and look to her parent.”
These specific clinical observations, unique behaviors, and personal anecdotes (like the Chicago trip) ARE PHI. Taken together, they can make a client potentially re-identifiable, especially to someone familiar with your caseload or with access to other publicly available information.
Even if your account is anonymous, or you remove all “obvious” identifiers, you may still violate HIPAA.
Transmitting such detailed, potentially re-identifiable information to a non-HIPAA compliant service constitutes an impermissible disclosure. An anonymous user account doesn’t change the fact that the AI service itself is not operating under HIPAA’s legal framework.
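As a hypothetical illustration of this trap (the session note and the scrubbing patterns below are invented for this example, and real de-identification tools are more elaborate), here is a Python sketch of naive regex-based scrubbing:

```python
import re

# An invented session note (not real client data).
note = ("Jane, seen 03/14/2025: /k/ 50% accurate at word level. "
        "Cued /k/ by saying 'in your throat'. "
        "Discussed pictures from her trip to Chicago.")

# Naive scrubbing: remove only the "obvious" identifiers.
scrubbed = re.sub(r"\b\d{2}/\d{2}/\d{4}\b", "[DATE]", note)  # service date
scrubbed = re.sub(r"^\w+,", "[NAME],", scrubbed)             # leading name

print(scrubbed)
# [NAME], seen [DATE]: /k/ 50% accurate at word level.
# Cued /k/ by saying 'in your throat'. Discussed pictures
# from her trip to Chicago.
```

The accuracy data, the unique cueing phrase, and the Chicago trip all survive the scrub. Together those details may still identify the client to someone familiar with the caseload, which is exactly the gap Safe Harbor’s catch-all 18th identifier is meant to close.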
No Business Associate Agreement (BAA)
This is the ultimate barrier. Under HIPAA, covered entities that share PHI with service providers need a BAA, and that requirement extends to re-identifiable data. Public (free) AI models like ChatGPT and Gemini do not offer or sign BAAs. Without this legal contract, you are essentially exposing PHI to the AI, an unsecured third party, which is a clear HIPAA violation.
Learn more about BAAs in Navigating Business Associate Agreements as an SLP: Your HIPAA BAA Guide.
Transparency and Due Diligence: Your Role

As AI evolves, so do the guidelines and expectations for its use in healthcare. There are ongoing debates and legal discussions about the broader implications of AI training data. Legislators and leading technology experts are investigating issues of intellectual property and potential re-identification even from de-identified datasets. For SLPs, the immediate action points are:
- NEVER input any PHI into non-HIPAA compliant AI tools like public (free) ChatGPT or Google Gemini. Even your best efforts at de-identification are unlikely to meet HIPAA’s stringent standards, and the lack of a BAA creates an immediate compliance risk.
- Exercise extreme due diligence when considering specialized AI tools for your practice. Ask critical questions about security protocols, data handling, BAAs, and data storage and use policies (see the sketch after this list for one way to organize those questions). In upcoming parts of this series (see below), we’ll delve deeper into finding compliant tools and explore what secure AI solutions might look like for SLP practice.
- Stay informed about professional guidelines and emerging legal interpretations regarding AI & HIPAA for SLPs from your governing professional bodies.
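Here is one way to keep those due-diligence questions organized: a minimal Python sketch of a vetting checklist. The tool name, the question set, and the answers are all hypothetical; the downloadable checklist below is far more complete.

```python
# Hypothetical vetting questions; adapt these to your own due-diligence list.
REQUIRED_SAFEGUARDS = {
    "signs_baa": "Will the vendor sign a Business Associate Agreement?",
    "no_training_on_phi": "Is client data excluded from model training?",
    "encryption": "Is data encrypted in transit and at rest?",
    "deletion": "Can stored data be deleted on request?",
}

def vet_tool(name: str, answers: dict) -> bool:
    """A tool passes only if every safeguard is explicitly confirmed."""
    unresolved = [q for key, q in REQUIRED_SAFEGUARDS.items()
                  if not answers.get(key)]
    for question in unresolved:
        print(f"{name}: unresolved -> {question}")
    return not unresolved

# A public, free chatbot with no BAA fails immediately (hypothetical answers).
print(vet_tool("PublicChatbot", {"encryption": True}))  # False
```

The design choice worth copying is the default: a tool fails unless every safeguard is explicitly confirmed, which mirrors how HIPAA treats unverified disclosures.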
We use predictive technology daily: autofill, grammar checkers, and search engines. While they are not typically thought of as AI, these tools still demonstrate pattern recognition. Full-fledged generative AI is an even more advanced version of the same idea. We must remain vigilant about data privacy and security, especially in a clinical context.
Ready to Navigate AI with Confidence?
The potential of AI is exciting, but vetting tools for HIPAA compliance can feel like deciphering a secret code. To make it easier, I’ve created a free, in-depth checklist! It can help guide you through finding and evaluating AI solutions that truly safeguard your clients’ Protected Health Information (PHI).

Sign up to Download Your Free HIPAA-Compliant AI Tool Vetting Checklist for SLPs!
Conclusion: Responsible Innovation in Clinical Practice
The conversation around AI and clinical data is less about AI “stealing” in a direct sense and more about responsible data governance, robust privacy protocols, and unwavering security measures. AI models learn from patterns rather than by directly appropriating individual client files, but the data you submit still leaves your control. Therefore, the ethical and legal burden falls squarely on the SLP to ensure their tools adhere to stringent privacy regulations.
We have to understand how AI learns and, more importantly, prioritize HIPAA compliance and PHI security when using AI. Only then can we harness the potential of this technology while upholding our professional obligations and preserving our clients’ trust.

I want to know!
What are your primary strategies for ensuring client data privacy when using AI in your current practice? Do you have questions about vetting AI tools for HIPAA compliance? What is your take on AI & client privacy for SLPs?
Share your thoughts in the comments below, and then share your perspective in our quick poll! Results will be shared at the end of the series!
AI & Client Privacy Part 1 Poll
To keep the poll fair and ensure unique responses, a Google account sign-in is required. Be assured, however, your email address is neither collected nor visible to me.
The AI & SLPs Series: Your Comprehensive Guide
Welcome to the AI & SLPs Series! Over the next eight weeks, we’ll delve deep into how Artificial Intelligence is shaping the world of speech-language pathology. Here’s what you can expect:
- Part 1: AI & Clinical Data Privacy
- This foundational post explores AI training data, client privacy, and HIPAA compliance for SLPs, including the non-negotiable role of BAAs.
- Part 2: Separating AI Truth vs Myth
- We debunk common AI myths in SLP practice. Get a realistic understanding of AI’s true role and capabilities.
- Part 3: How AI Tools Work
- Get a clear, jargon-free explanation of how large language models function. Understand their capabilities and limitations.
- Part 4: AI for Clinical Spark & Efficiency
- Discover ethical ways to use AI. Brainstorm, overcome planning hurdles, and refine non-clinical communications.
- Part 5: Mastering AI Prompts
- Learn prompt engineering. Communicate effectively with AI models to get tailored, useful results for SLP needs.
- Part 6: Compliant AI Platforms & Tools
- This post guides you through AI tools. Learn key factors for ethically and compliantly selecting platforms for your SLP practice.
- Part 7: Ethical & Responsible AI Use
- This crucial post delves into broader ethical responsibilities for SLPs using AI. It covers principles beyond data privacy.
- Part 8: The Future of AI
- This concluding post explores emerging AI trends and future possibilities in Speech-Language Pathology. Prepare to adapt, innovate, and lead responsible AI integration.
Stick around as we keep figuring out this whole AI thing together. By the end of the series, I hope to give SLPs the knowledge they need to help us all find a balance. There is a lot of gray area and there are strong opinions on this topic, and I hope to provide facts that help you make informed choices that align with your own values.
Keep on clickin’!
Social Media Icons: designed by rawpixel.com – Freepik.com