submitted 1 year ago by Cloudless@kbin.social to c/tech@kbin.social
[-] Kill_joy@kbin.social 0 points 1 year ago

Bard is by far one of the worst AI language models, especially if you're trying to use it as a Google search. It will just make shit up over and over and over again.

[-] admiralteal@kbin.social 0 points 1 year ago

I went to Bard hoping it might help me do some actual link-finding.

It pretty much always hallucinates articles. Ask it to find local news stories about some particular kind of thing or research papers -- it will find 4-8 of them and they will all be made the fuck up.

Bard's the worst of the lot.

[-] Kill_joy@kbin.social 1 point 1 year ago* (last edited 1 year ago)

I recently remembered a song I enjoyed in high school but couldn't remember some lyrics. It was an old punk song - not super popular but also had a video on MTV and was a big hit amongst fans of the genre.

I asked it for the lyrics of the song and it said "sure, here are the lyrics for the song by the band" and it literally made the whole thing up. I asked 5 more times for accurate lyrics and it just kept apologizing for making them up and promising the next one would be right.

My wife was also watching old episodes of Shark Tank the other night and asked me to find out if a product was successful after no deals were made. I asked Bard and it told me about how 2 investors fought over the product, a successful deal was made, and the company did 2 mil in profits in 2022. Knowing that they did not make a deal, I just did a regular Bing search and learned the company actually went bankrupt in 2012, before that episode even aired.

Bard is literal horse shit, and if people do not check their facts after engaging with it they will be fucked. It is 100% confident in the information it fabricates.

[-] bionicjoey@lemmy.ca 0 points 1 year ago

I'm getting tired of repeating this, but language models are incapable of doing math. They generate text that has the appearance of a mathematical explanation, but they have no incentive or reason for it to be accurate.

[-] HarkMahlberg@kbin.social 0 points 1 year ago

Hikaru Nakamura tried to play ChatGPT in a game of chess, and it started making illegal moves after 10 moves. When he tried to correct it, it apologized, gave the wrong reason for why the move was illegal, and then followed up with another illegal move. That's when I knew that LLMs were just fragile toys.

[-] exscape@kbin.social 0 points 1 year ago

It is after all a Large LANGUAGE Model. There's no real reason to expect it to play chess.

[-] Pons_Aelius@kbin.social 1 point 1 year ago* (last edited 1 year ago)

There's no real reason to expect it to play chess.

There is. All the general media is calling these LLMs AI, and AIs have been playing chess and winning for decades.

[-] fartsinger@kbin.social 1 point 1 year ago* (last edited 1 year ago)

Yeah for that we'd need a Gigantic LANGUAGE Model.

this post was submitted on 21 Jul 2023
