this post was submitted on 12 Apr 2025
192 points (91.0% liked)

Technology


Even if you disable the feature, I have zero trust in OpenAI to respect that decision, given their history of using copyrighted content to enhance their LLMs.

top 49 comments
[–] vane@lemmy.world 9 points 7 hours ago* (last edited 7 hours ago) (1 children)

It's interesting to watch from the perspective of a person who used to be able to find knowledge only in books. I'm slowly starting to feel like a Neanderthal. This global (d)arpanet experiment on humans looks more and more intriguing.

[–] cecilkorik@lemmy.ca 5 points 2 hours ago* (last edited 2 hours ago)

AI is just a search engine you can talk to that summarizes everything it finds into a small nugget for you to consume, and in the process sometimes lies to you and makes up answers. I have no idea how people think it is an effective research tool. None of the "knowledge" it is sharing is actually created by it, it's just automated plagiarism. We still need humans writing books (and websites) or the AI won't know what to talk about.

Books are going to keep doing just fine.

[–] zenpocalypse@lemm.ee 26 points 12 hours ago (1 children)

I'm not going to defend OpenAI in general, but that difference is meaningless outside of how the LLM interacts with you.

If data privacy is your focus, it doesn't matter that the LLM has access to it during your session to modify how it reacts to you. They don't need the LLM at all to use that history.

This isn't an "I'm out" type of change for privacy. If it is, you missed your stop when they started keeping a history.

[–] Evotech@lemmy.world 5 points 10 hours ago

Yeah, like they have the history already…

[–] cupcakezealot@lemmy.blahaj.zone 13 points 11 hours ago
[–] huppakee@lemm.ee 87 points 18 hours ago (1 children)

The headline: ChatGPT Will Soon Remember Everything You've Ever Told It

[–] Australis13@fedia.io 33 points 17 hours ago (2 children)

The irony is that, according to the article, it already does. What is changing is that the LLM will be able to use more of that data:

OpenAI is rolling out a new update to ChatGPT's memory that allows the bot to access the contents of all of your previous chats. The idea is that by pulling from your past conversations, ChatGPT will be able to offer more relevant results to your questions, queries, and overall discussions.

ChatGPT's memory feature is a little over a year old at this point, but its function has been much more limited than the update OpenAI is rolling out today... Previously, the bot stored those data points in a bank of "saved memories." You could access this memory bank at any time and see what the bot had stored based on your conversations.... However, it wasn't perfect, and couldn't naturally pull from past conversations, as a feature like "memory" might imply.

[–] Tim_Bisley@piefed.social 5 points 10 hours ago

Hmm, this is interesting, because I had a lengthy chat with GPT about this. It randomly recalled a previous convo, acting like it was part of the current convo. I asked about it and it was like, oh, I can't recall previous conversations. I was like, yet you did, and after going back and forth it was pretty much like, oops, my bad, I'm not supposed to do that but I accidentally did.

[–] huppakee@lemm.ee 18 points 17 hours ago* (last edited 17 hours ago)

For context: it already saved your data, since you had access to your previous chats. Then came the memory feature, which saved something like a summary into a separate dataset (e.g. 'the user lives in country x' and 'the user doesn't like birthdays'), so you are right that it saves data already. The news is that the bot will now access more of your chat history. I think when they write ChatGPT they mean 'your personal chatbot' rather than 'the company that offers the chatbot'.
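The memory bank described above can be pictured as a small store of distilled facts that gets injected into each new session, rather than raw transcripts. A minimal sketch of that idea (all names and entries here are hypothetical illustrations, not OpenAI's actual implementation):

```python
# Hypothetical "saved memories" bank: short distilled facts about the user,
# not full chat logs, rendered into the context of a fresh session.
saved_memories = [
    "The user lives in country X.",
    "The user doesn't like birthdays.",
]

def build_system_prompt(memories):
    """Render stored memory summaries into context for a new session."""
    if not memories:
        return "You are a helpful assistant."
    bullet_list = "\n".join(f"- {m}" for m in memories)
    return (
        "You are a helpful assistant.\n"
        "Known about the user:\n" + bullet_list
    )

prompt = build_system_prompt(saved_memories)
```

The update in the article effectively widens what feeds this store: instead of a hand-curated list of summaries, the model can draw on the full chat history.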

[–] cronenthal@discuss.tchncs.de 89 points 19 hours ago* (last edited 19 hours ago) (1 children)

ai systems that get to know you over your life

That's not as attractive as Sam Altman thinks it is.

[–] Buffalox@lemmy.world 40 points 19 hours ago (2 children)

If we knew it was altruistic, and only working for our benefit, it might be.
But as it is, it is not working for you, you are not its master.
Big corps and governments are.

[–] Flemmy@lemm.ee 12 points 18 hours ago (1 children)

Snitched on by chat because of a fun drug moment.

[–] Buffalox@lemmy.world 11 points 18 hours ago

First it makes you trust it, then it turns on you.

[–] spankmonkey@lemmy.world 1 points 17 hours ago

I would be even more worried if it was altruistic and for our benefit because we fuck that shit up all the time even before malicious actors are able to weasel their way into power and turn it into something horrible.

[–] mattc@lemmy.world 18 points 16 hours ago

What worries me, is all the info from those conversations actually becoming public. I haven't fed it personal info, but I bet a lot of people do. Not only stuff you might tell it, but information fed from people you know. Friends, family, acquaintances, even enemies could say some really personal or downright false things about you to it and it could one day add that to public ChatGPT. Sounds like some sort of Black Mirror episode, but I think it could happen. Wouldn't be surprised if intelligence agencies already have access to this data. Maybe one day cyber criminals or even potential employers will have all this data too.

[–] gravitas_deficiency@sh.itjust.works 42 points 19 hours ago (2 children)

This will never ever be used in a surveillance capacity by an administration that’s turning the country into a fascist hyper capitalist oligarchical hellscape. Definitely not. No way. It can’t happen here.

[–] Frjttr@lemm.ee 14 points 18 hours ago (1 children)

This will be useful to the user, but it won’t change privacy. Humans at OpenAI still have full access to your history, and this will only expand AI capabilities to tap into previous conversations. However, rogue and unlawful administrations will still seek to access that data regardless.

[–] just_an_average_joe@lemmy.dbzer0.com 1 points 12 hours ago (1 children)

Imagine if someone writes a malicious extension and now with this, they will also have access to entire chat history.

[–] Frjttr@lemm.ee 1 points 10 hours ago

If this were true, the attacker would need to send prompts to retrieve information, making it an easy attack for the user to spot. However, if the malicious actor has the power to delete prompts and chats, I would suspect they already have access to every other chat.

[–] PattyMcB@lemmy.world 4 points 16 hours ago

It reminds me of the kids in 1984 who turn their father in for being an enemy of the state

[–] knighthawk0811@lemmy.ml 29 points 18 hours ago (1 children)

wait, weren't they always doing this?

[–] balder1991@lemmy.world 12 points 17 hours ago (2 children)

There’s a difference between OpenAI storing conversations and the LLM being able to search all your previous conversations in every clean session you start.

[–] knighthawk0811@lemmy.ml 1 points 7 hours ago

I think the only difference was ever going to be when they felt like spending the extra processing power to do the work.

[–] zenpocalypse@lemm.ee 3 points 12 hours ago* (last edited 12 hours ago) (1 children)

That is the difference, but it's a pretty minimal one. OpenAI hardly needs to give the LLM access to your conversations during your session in order to access your conversations itself.

In fact, I don't see any direct benefit to OpenAI with this change. All it does is (probably) improve its answers to the user during a session.

[–] huppakee@lemm.ee 2 points 12 hours ago (1 children)

The benefit is them offering a better service which might help them sell more subscriptions. They don't need this change for the more malicious benefits like more data for training or more insight in their customers etc.

[–] zenpocalypse@lemm.ee 1 points 2 hours ago

True, I should have said a benefit that is a negative for the consumer.

[–] bjoern_tantau@swg-empire.de 31 points 19 hours ago (1 children)

They literally tell you when you sign up that they can and will look at what you tell ChatGPT. This changes absolutely nothing about that.

[–] balder1991@lemmy.world 9 points 19 hours ago* (last edited 17 hours ago) (1 children)

Maybe for training new models, which is a totally different thing. With this update, everything you type will be stored and used as context.

I already never share any personal thing on these cloud-based LLMs, but it’s getting more and more important to have a local private LLM on your computer.

[–] WhatAmLemmy@lemmy.world 14 points 18 hours ago* (last edited 18 hours ago) (1 children)

Always has been. Nothing has changed. Every conversation you've ever had with ChatGPT is stored and owned by OpenAI. This is why I've largely rejected their use.

If it's not local or E2EE, you are the product (even when you pay for the service).

[–] balder1991@lemmy.world 1 points 17 hours ago (1 children)

But the fact that OpenAI stores everything you type doesn't mean ChatGPT will use it as context when you make a prompt, unless you had the memory feature turned on (which let you explicitly "forget" selected items from the context).

You're confusing OpenAI having a conversation stored with ChatGPT using that text as searchable context for every prompt you make.

[–] zenpocalypse@lemm.ee 2 points 12 hours ago

I think you might be confused about the difference between giving the LLM access to your stored conversations during your session, and OpenAI using AI to search your stored conversations.

What the LLM has access to during your session changes nothing but your session.

It's not some "I, Robot" central AI that either has access or doesn't as a whole.

[–] ReverendIrreverence@lemmy.world 5 points 13 hours ago (1 children)

This only works if you have an account and sign in. Don't do that, have your browser clear cookies and site data on quit, and the problem is solved.

[–] huppakee@lemm.ee 5 points 12 hours ago* (last edited 12 hours ago) (1 children)

If you're not on a VPN they might still log your IP and connect your chats in the back end though.

[–] Wildly_Utilize@infosec.pub 5 points 12 hours ago

Duck.AI on tor browser

[–] Rubisco@slrpnk.net 4 points 15 hours ago

Where is this being stored? What is the capacity? How many accounts would be needed to overflow storage?

[–] Yerbouti@sh.itjust.works 11 points 18 hours ago

I run DeepSeek locally on an M1; it's good enough for 80% of what I need. OpenAI is just another GAFAM wannabe, can't trust it.

[–] pHr34kY@lemmy.world 10 points 18 hours ago

I assumed they would log everything and create a profile on you from day one. I signed up with a fresh email account.

[–] FartsWithAnAccent@fedia.io 7 points 18 hours ago (1 children)

Bold of you to assume this wasn't already happening in some form.

[–] sunzu2@thebrainbin.org 1 points 2 hours ago

Some people are still in the dark or outright denial stages about how all of these companies operate and their dual purpose operations.

[–] huppakee@lemm.ee 6 points 18 hours ago (1 children)
[–] Iamnotafish@lemmy.ml 7 points 17 hours ago (1 children)

I worked in cybersecurity and my global org was handing over details about who used ChatGPT during certain timeframes at the request of the feds (United States) two years ago on at least one occasion.

[–] huppakee@lemm.ee 2 points 16 hours ago

Yeah, I don't think they encrypt it anyway, so even if they denied a government's request, I guess the government might still find a way to get at data like this.

[–] Imgonnatrythis@sh.itjust.works 5 points 18 hours ago

Seems like, if they weren't completely evil, the obvious way to execute something like this would be to give people the option to keep all the personal data locally. This probably amounts to a few hundred KB of data that the complex server-side LLM could just temporarily pull as needed. In my mind this seems most useful for an LLM home assistant, but the idea of OpenAI keeping a database of learned trends, preferences, and behaviors is pretty repulsive.
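The local-first design proposed above is straightforward to picture: the memory file stays on the user's disk and is attached to each request only transiently, so the server never has to persist it. A rough sketch under those assumptions (the file name, request shape, and field names are all hypothetical, not any real OpenAI API):

```python
import json
from pathlib import Path

MEMORY_FILE = Path("local_memory.json")  # stays on the user's machine

def load_local_memory(path=MEMORY_FILE):
    """Read the locally stored preferences/trends (a few hundred KB at most)."""
    if path.exists():
        return json.loads(path.read_text())
    return {}

def build_request(user_prompt, memory):
    """Attach local memory as transient per-request context.

    The server-side LLM sees this context for one request and
    (in this design) is never supposed to store it.
    """
    context = "\n".join(f"{k}: {v}" for k, v in memory.items())
    return {
        "system": "Use this user context for this request only:\n" + context,
        "prompt": user_prompt,
    }

req = build_request("Plan my week", {"timezone": "UTC+2", "diet": "vegetarian"})
```

Whether the provider actually discards the context after each request is, of course, exactly the trust problem the rest of this thread is about.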

[–] Viri4thus@feddit.org 3 points 16 hours ago

We're long overdue for some guillotine action.

[–] primemagnus@lemmy.ca 2 points 17 hours ago

So you’re just now finding out the rules are totally different for those with money and power.

Neat 😀

[–] Opinionhaver@feddit.uk -5 points 17 hours ago (1 children)

I think this is great. One of the main reasons I’ve been paying for the subscription is the limited memory of the free version. Now, the more I use it, the more it remembers about me and references things I’ve mentioned in past conversations. Sure, there are potential privacy concerns, but the same goes for commenting on Lemmy - I don’t tell ChatGPT anything I wouldn’t be comfortable sharing here.

[–] zenpocalypse@lemm.ee 4 points 12 hours ago (1 children)

You're getting down-voted, but, yes, this change only really affects user experience.

I don't know why anyone would think that what the LLM can access for context during your session is a limiting factor for what OpenAI has access to.

If this change freaks you out, the time for you to be freaked out about history was the moment they started storing it.

[–] sunzu2@thebrainbin.org 1 points 2 hours ago

Same thing with having all your shit in "the cloud".

[–] geography082@lemm.ee 0 points 19 hours ago

It's not a bad feature; I find it interesting. The thing is, I doubt normal users would take care not to put sensitive information into it that can profile them. Clearly there should be more awareness campaigns in schools, and even for citizens generally, about being conscious of the information they share. That would absolutely change a lot of the shit we are living through nowadays.