At the very least, actual humans have an incentive not to BS you too much, because otherwise they might be held accountable. This might also be why call center support workers sometimes sound less than helpful - they are unable to help you (for various technical or corporate reasons) and feel uneasy about it. A bot, on the other hand, will probably tell you whatever you want to hear while sounding super polite the whole time. If all of it turns out to be wrong... well, then that is your problem to deal with.
Almost sounds as if, in order to steal intellectual property, they had to go down the "traditional" route of talking to someone, making promises, etc. If it turns out that a chatbot isn't the best tool for plagiarizing something, what is it even good for?
And there might be new "vulture funds" that deliberately buy failing software companies simply because those companies hold copyrights that might be exploitable. If there are convincing legal reasons why this likely won't fly, fine. Otherwise I wouldn't rely on the argument that "this is a theoretical possibility, but who would ever do such a thing?"
And, after the end of the AI boom, do we really know what wealthy investors are going to do with the money they cannot throw at startups anymore? Can we be sure they won't be using it to fund lawsuits over alleged copyright infringements instead?
At the very least, many of them were probably unable to differentiate between "coding problems that have been solved a million times and are therefore in the training data" and "coding problems that are specific to a particular situation". I'm not a software developer myself, but that's my best guess.
Even the idea of having to use credits to (maybe?) fix some of these errors seems insulting to me. If something like this had been produced by a human, the customer would be entitled to a refund.
Yet, under Aron Peterson's LinkedIn posts about these video clips, you can find the usual comments about him being "a Luddite", being "in denial", etc.
It is funny how, when generating the code, it suddenly appears to have "understood" what the instruction "The dog can not be left unattended" means, while that was clearly not the case for the natural language output.
FWIW, due to recent developments, I've found myself increasingly turning to non-search-engine sources for reliable web links, such as Wikipedia source lists, blog posts, podcast notes, or even Reddit. This almost feels like a return to the early days of the internet, just in reverse and - sadly - with little hope for improvement in the future.
I think all of this would be true of almost any other company. However, if OpenAI employees had a reasonably strong belief in the hype surrounding their company and their technology, wouldn't they be holding on to more of their shares? After all, the rest of the world is constantly being told that this is the future and that pretty much all of our jobs are at risk because of it.
If I'm not mistaken, though, in past tech booms many employees became rich by keeping at least some of their stock. I think it is somewhat telling if most employees (who could be expected to be familiar with the company, its technology, its products and its markets) don't seem to expect this to happen here, but rather treat this as a job in a more "mature" industry with little growth potential, such as manufacturing or banking.
Also, as far as I know, capital market investors tend to consider so-called insider trades (disclosed transactions by company employees and executives) somewhat predictive of stock prices.
Google has a market cap of about 2.1 trillion dollars. Therefore the stock price only has to go up by about 0.00007 percent following the iNaturalist announcement for this "investment" to pay off - 0.00007 percent of 2.1 trillion dollars is roughly 1.5 million dollars. Of course, this is just a back-of-the-envelope calculation, but maybe popular charities should keep this in mind before accepting money in a context like this.
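To make that back-of-the-envelope math explicit, here is a minimal sketch in Python; the 1.5-million-dollar donation figure is my assumption for illustration, inferred from the percentage above rather than from any official number:

    # Rough sketch: how small a stock move would offset a donation of this size.
    market_cap = 2.1e12  # Google's market cap in dollars, as stated above
    donation = 1.5e6     # assumed donation size in dollars (illustrative only)
    required_move_pct = donation / market_cap * 100
    print(f"required stock move: {required_move_pct:.5f}%")  # ~0.00007%

Running this prints "required stock move: 0.00007%", i.e. a move so small it would be lost in ordinary daily noise.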
Just guessing, but the reported "90% accuracy" is probably related to questions that could be easily answered from an FAQ list. The rest is probably, at least in part, about issues where the company itself f*cked up in some way... Nothing wrong with answering from an FAQ in theory, but if everyone else gets nicely worded BS answers (for which the company couldn't be held accountable), that is a nightmare from every customer's point of view.