HedyL

joined 2 years ago
[–] HedyL@awful.systems 22 points 1 month ago (3 children)

This somehow reminds me of a bunch of senior managers in corporate communications on LinkedIn who got all excited over the fact that with GenAI, you can replace the background of an image with something else! That's never been seen before, of course! I assume that in the past, these people could never be bothered to look into tools as widespread as Canva, where a similar feature has been available for many years (since before the current GenAI hype, I believe, even if it may use some kind of AI technology - I honestly don't know). Such tools are only for the lowly peasants, I guess - and quite soon, AI is going to replace all the people who know where to click to access a feature like "background remover" anyway!

[–] HedyL@awful.systems 2 points 1 month ago (1 children)

By the way, is there a DuckDuckGo bang yet for Google's "udm=14" ("Web" tab)? I have been looking for something like this for a while, but with no success so far. It's very frustrating to receive these AI-generated answers even when using "!g".
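In the meantime, a browser keyword search (or a tiny script) can do what the missing bang would: append Google's `udm=14` parameter to a plain search URL. A minimal sketch, assuming the parameter keeps working as it does today (the helper name is my own):

```python
from urllib.parse import quote_plus

def google_web_url(query: str) -> str:
    """Build a Google search URL that lands on the "Web" tab.

    udm=14 selects the plain web-results view, which currently
    omits the AI answer box at the top of the page.
    """
    return f"https://www.google.com/search?q={quote_plus(query)}&udm=14"

print(google_web_url("duckduckgo bangs"))
```

Most browsers also accept the same pattern as a custom search engine, e.g. `https://www.google.com/search?q=%s&udm=14` with a short keyword.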

[–] HedyL@awful.systems 7 points 1 month ago* (last edited 1 month ago) (1 children)

Of course, it has long been known that some private investors will buy shares in any company just because its name contains something like “.com” or “blockchain”. However, if a company invests half a billion in an “.ai” company, shouldn't it make sure that the business model is actually AI-based?

Maybe, if we really wanted to replace something with AI, we should start with the VC investors themselves. In this case, we might not actually see any changes for the worse.

Edit: Of course, investors only bear part of the blame if fraud was involved. But the company apparently received a large part of its funding in 2023, following reports of similar lies as early as 2019. I find it hard to imagine that tech-savvy investors really wouldn't have had a chance to spot the problems earlier.

Edit No. 2: Of course, it is also conceivable that the investors didn't care at all because they were only interested in the baseless hype, which they themselves fueled. But with such large sums of money at stake, I still find it hard to imagine that there was apparently so little due diligence.

[–] HedyL@awful.systems 11 points 1 month ago

As all the book authors on the list were apparently real, I guess the "author" of this supplemental insert remembered to google their names and to remove all references to fake books from fake authors made up by AI, but couldn't be bothered to do the same with the book titles (too much work for too little money, I suppose?). And for an author to actually read these books before putting them on a list is probably too much to ask for...

It's also funny how some people seem to justify this by saying that the article is just “filler material” around ads. I don't know, but I believe most people don't buy printed newspapers in order to read nonsensical “filler material” garnished with advertising. The use of AI is a big problem in this case, but not the only one.

[–] HedyL@awful.systems 3 points 1 month ago

Please help me understand this: It was supposedly fine, because "only one minor was molested", and this confession made everyone more trustworthy? Am I missing something?

[–] HedyL@awful.systems 17 points 2 months ago

Reportedly, some corporate PR departments "successfully" use GenAI to increase the frequency of meaningless LinkedIn posts they push out. Does this count?

[–] HedyL@awful.systems 4 points 2 months ago

In my experience, if some "innovation" makes no sense and yet is continuously hyped up by people who should absolutely know better, it is usually because it allows them to circumvent some law or regulation they don't like. That was certainly true for cryptocurrencies and for a lot of complex financial products during the subprime crisis, and it appears to be true in this case again (this time, it's copyright law). If AI "rewords" existing content and adds fresh errors, the result is supposedly no longer copyrighted and can be used to sell more ads - mission accomplished.

[–] HedyL@awful.systems 9 points 2 months ago (1 children)

For me, everything increasingly points to the fact that the main “innovation” here is the circumvention of copyright regulations. With possibly very erroneous results, but who cares?

[–] HedyL@awful.systems 17 points 2 months ago (2 children)

It's also worth noting that your new variation of this “puzzle” may be the first one that describes a real-world use case. This kind of problem is probably being solved all over the world all the time (with boats, cars and many other means of transportation). Many people who don't know any logic puzzles at all would come up with the right answer straight away. Of course, AI also fails at this because it generates its answers from training data, where physical reality doesn't exist.

[–] HedyL@awful.systems 17 points 2 months ago (1 children)

This is particularly remarkable because - as David pointed out - being a pilot is not even one of those jobs that nobody would want to do. There is probably still an oversupply of suitable people who would pass all the screening tests and really want to become pilots. Some of them would probably even work for a relatively average salary (as many did in the past outside the big airlines). The only problem for the airlines is probably that they can no longer count on enough people being willing (and able!) to take on the high training costs themselves. Therefore, airlines would have to hire somewhat less affluent candidates and pay for all of their training. However, AI probably looks a lot more appealing to them...

[–] HedyL@awful.systems 5 points 2 months ago (1 children)

To me, those forced Google AI answers are even more disconcerting than all the rest. Sure, publishers have always hated content creators, because paying them ate into the profit margins from advertising. However, Google has always gotten most of its content (the indexed webpages) for free anyway, so what exactly was its problem?

Also, how much more energy do these forced AI answers consume, compared with regular search queries? Has anyone done the math?

Furthermore, if many people really loved that feature so much, why not make it opt-in?

At the same time, as many people already pointed out, prioritizing AI-generated answers will probably further disincentivize creators of good original content, which means there will be even less usable material to feed to AI in the future.

Is it really all about pleasing Wall Street? Or about getting people to spend more time on Google itself rather than leave for other websites? Are they really confident that they will all stay and not disappear completely at some point?

[–] HedyL@awful.systems 7 points 2 months ago

> The only reason the tool supposedly has value is because the websites are made to be bad on purpose so that they make more money.

Yes, and because, as it appears, AI occasionally ingests content from some of the better websites out there. However, without sources, you'll be unable to check whether that was the case for your specific query or not. At the same time, it is getting more and more difficult for us to access these better websites ourselves (see above), and sadly, incentives for creators to post this type of high-quality content appear to be decreasing as well.
