1
8
submitted 28 minutes ago by FlyingSquid@lemmy.world to c/fuck_ai@lemmy.world

As if beauty pageants with humans weren't awful enough. Let's celebrate simulated women with beauty standards too unrealistic for any real woman to live up to!

2
167
submitted 6 hours ago by lemmee_in@lemm.ee to c/fuck_ai@lemmy.world

As part of the tech industry's wider push for AI, whether we want it or not, it seems that Google's Gemini AI service is now reading private Drive documents without express user permission, per a report from Kevin Bankston on Twitter embedded below. While Bankston goes on to discuss reasons why this may be glitched for users like him in particular, the utter lack of control given over his sensitive, private information is unacceptable for a company of Google's stature — and does not bode well for future privacy concerns amid AI's often-forced rollout.

3
15
4
23

OpenAI is partnering with Los Alamos National Laboratory to study how artificial intelligence can be used to fight against biological threats that could be created by non-experts using AI tools, according to announcements Wednesday by both organizations. The Los Alamos lab, first established in New Mexico during World War II to develop the atomic bomb, called the effort a “first of its kind” study on AI biosecurity and the ways that AI can be used in a lab setting.

The difference between the two statements released Wednesday by OpenAI and the Los Alamos lab is pretty striking. OpenAI’s statement tries to paint the partnership as simply a study on how AI “can be used safely by scientists in laboratory settings to advance bioscientific research.” And yet the Los Alamos lab puts much more emphasis on the fact that previous research “found that ChatGPT-4 provided a mild uplift in providing information that could lead to the creation of biological threats.”

Much of the public discussion around threats posed by AI has centered around the creation of a self-aware entity that could conceivably develop a mind of its own and harm humanity in some way. Some worry that achieving AGI—artificial general intelligence, where the AI can perform advanced reasoning and logic rather than acting as a fancy auto-complete word generator—may lead to a Skynet-style situation. And while many AI boosters like Elon Musk and OpenAI CEO Sam Altman have leaned into this characterization, it appears the more urgent threat to address is making sure people don’t use tools like ChatGPT to create bioweapons.

“AI-enabled biological threats could pose a significant risk, but existing work has not assessed how multimodal, frontier models could lower the barrier of entry for non-experts to create a biological threat,” Los Alamos lab said in a statement published on its website.

The different positioning of messages from the two organizations likely comes down to the fact that OpenAI could be uncomfortable with acknowledging the national security implications of highlighting that its product could be used by terrorists. To put an even finer point on it, the Los Alamos statement uses the terms “threat” or “threats” five times, while the OpenAI statement uses them just once.

5
48
6
13

I do not recommend reading this article on a full stomach.

7
28

Generative AI is the nuclear bomb of the information age

8
45
9
77
submitted 3 days ago by uint@lemmy.world to c/fuck_ai@lemmy.world

Written by a so-called "Julie Howell" who "loves scouring the internet for delicious, simple, heartwarming recipes that make her look like a MasterChef winner" on a website called "Chef's Resource."

I get the "scouring the internet" part, but the "MasterChef winner" part less so.

10
32
submitted 4 days ago* (last edited 4 days ago) by theacharnian@lemmy.ca to c/fuck_ai@lemmy.world
11
18
12
26

cross-posted from: https://discuss.tchncs.de/post/18541227

cross-posted from: https://discuss.tchncs.de/post/18541226

Google’s research focuses on real harm that generative AI is currently causing and could get worse in the future. Namely, that generative AI makes it very easy for anyone to flood the internet with generated text, audio, images, and videos.

13
1232
It isn't worth it (lemmy.world)
14
35
submitted 1 week ago by ZDL@ttrpg.network to c/fuck_ai@lemmy.world
15
120
submitted 1 week ago by lemmee_in@lemm.ee to c/fuck_ai@lemmy.world
16
138
17
61
18
25
Honest Government Ad | AI (www.youtube.com)
submitted 2 weeks ago by cerement@slrpnk.net to c/fuck_ai@lemmy.world

cross-posted from: https://lemmy.world/post/17078489

The Government™ has made an ad about the existential threat that AI poses to humanity, and it’s surprisingly honest and informative

19
1112
20
226
One of us (lemmy.world)
21
100
22
46
submitted 2 weeks ago* (last edited 2 weeks ago) by deikoepfiges_dreirad@lemmy.zip to c/fuck_ai@lemmy.world
23
276
24
46
submitted 2 weeks ago* (last edited 2 weeks ago) by octopus_ink@lemmy.ml to c/fuck_ai@lemmy.world

I'm now starting to wonder if it's a bug, but kind of astounded that I am seemingly the only person impacted. I only see myself added once. However, I have not been able to get any response from @VerbFlow@lemmy.world

25
30
submitted 2 weeks ago by lemmee_in@lemm.ee to c/fuck_ai@lemmy.world

If it's free, then you're the product

Last July, Google made an eight-word change to its privacy policy that represented a significant step in its race to build the next generation of artificial intelligence.

Buried thousands of words into its document, Google tweaked the phrasing for how it used data for its products, adding that public information could be used to train its A.I. chatbot and other services.

We use publicly available information to help train Google’s ~~language~~ AI models and build products and features like Google Translate, Bard, and Cloud AI capabilities.

The subtle change was not unique to Google. As companies look to train their A.I. models on data that is protected by privacy laws, they’re carefully rewriting their terms and conditions to include words like “artificial intelligence,” “machine learning” and “generative A.I.”

Those terms and conditions — which many people have long ignored — are now being contested by some users who are writers, illustrators and visual artists and worry that their work is being used to train the products that threaten to replace them.

Archive : https://archive.is/SOe5w


Fuck AI

903 readers
208 users here now

A place for all those who loathe machine-learning to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 4 months ago