this post was submitted on 30 Mar 2026
34 points (79.3% liked)

Asklemmy

AI can't be all that bad. The problem I keep seeing is that AI is a double-edged sword. You have corporations shoving AI into just about everything and treating it like it's a cure for cancer, which really rubs people the wrong way. Then, on a more societal level, you've got people who use AI for all sorts of things, from making art with AI and still crediting themselves as artists, to treating AI like a therapist when that's not advised.

However, I've found some benefits to AI. For example, I'm chatting with ChatGPT about credit cards, because that's something I may be leaning toward getting into. It's helping me understand better than most people who have tried explaining it to me, simply because it gives me a streamlined response instead of beating around the bush.

[–] MerrySkeptic@sh.itjust.works 11 points 23 hours ago (1 children)

I'm a therapist. I use HIPAA-compliant AI to generate my (editable) case notes for my sessions now. Not only is it a huge time saver to simply edit a generated note as opposed to writing one from scratch, but in many cases it takes more detailed notes, including quotes from clients.

I have heard of other therapists and medical doctors also using AI to help with diagnosing.

The danger is when therapists don't review the content to check for accuracy. Occasionally it will generate something that doesn't really reflect what the therapist was doing, or it might lack detail the therapist would otherwise have included. But more often, the stuff it comes up with is surprisingly accurate. And editing is even easier when you can just tell the AI something like, "include more details about how the client noticed their pattern of putting their own feelings last," and it does what you asked. You don't necessarily have to edit manually, though you can.

[–] The_Picard_Maneuver@lemmy.world 2 points 23 hours ago (1 children)

So how does that work? Do you just have an AI listening throughout the session like a note-taker?

[–] MerrySkeptic@sh.itjust.works 8 points 22 hours ago (1 children)

Yes, basically, but since it is HIPAA compliant, the recording is automatically destroyed when the note is saved. Also, no protected recordings are used to train the AI. The therapist can also choose from a number of different case note formats that focus on different things.

[–] helix@feddit.org 3 points 22 hours ago (3 children)

no protected recordings are used to teach the AI

How do you know for certain?

[–] lepinkainen@lemmy.world 1 point 2 hours ago

A HIPAA violation is a death sentence to a company, along with massive fines.

There’s no incentive for them to fuck around

[–] MerrySkeptic@sh.itjust.works 2 points 7 hours ago

I can't know for certain, as I'm not on the product side of things. But I do know that HIPAA standards are very rigorous, and if it were discovered that they were intentionally misleading therapists and clients, it would invite an insanely large class action lawsuit.

I do ask for and document my clients' consent, though, so if anyone is not comfortable with it, that's fine; I just write the note the old-fashioned way. Most are fine with it, but a few have said they don't want to, and it's not a big deal.

[–] SuperUserDO@piefed.ca 8 points 19 hours ago (1 children)

People conflate security with risk mitigation. It isn't secure in the sense that you can confirm the data has been deleted. The risk, however, is mitigated by vendor attestations reinforced by contracts.

[–] helix@feddit.org 3 points 9 hours ago

Yep, so you can't actually know whether the recording is destroyed; it's just contractually required to be destroyed. Big difference in my book.

I wish this sensitive audio were processed locally and never left the therapist's network instead.