this post was submitted on 22 Jul 2023
BestOfLemmy
A few thoughts:
There are different kinds of AI. One type that has already proven useful is machine learning (ML): feed lots of tagged data to an ML algorithm and it learns to detect similar things. For instance, we can do that with pathology slides where different kinds of diseases have been identified, and then have the model flag them. That's important because humans have a higher miss rate (false negatives). You can also have it generate similar things: feed it the chemical makeup of lots of beneficial drugs and have it design a new one. We still have to test the result, but it can be useful.
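To make the "tagged data in, detector out" idea concrete, here's a toy stdlib-only sketch using a nearest-centroid classifier on made-up measurements. Real pathology models are deep networks trained on images, and the feature names and numbers here are entirely hypothetical; this just shows the shape of the workflow.

```python
# Toy sketch: learn from labeled examples, then classify new ones.
# The data below is invented for illustration, not real pathology data.
from statistics import mean

# Hypothetical labeled measurements: (feature1, feature2) -> diagnosis
tagged = [
    ((1.0, 1.2), "healthy"), ((0.9, 1.0), "healthy"), ((1.1, 0.8), "healthy"),
    ((3.0, 3.2), "diseased"), ((2.8, 3.1), "diseased"), ((3.2, 2.9), "diseased"),
]

def train(examples):
    """Average each label's examples into a 'centroid' (its typical case)."""
    by_label = {}
    for features, label in examples:
        by_label.setdefault(label, []).append(features)
    return {label: tuple(mean(dim) for dim in zip(*pts))
            for label, pts in by_label.items()}

def predict(centroids, features):
    """Label a new case by whichever centroid it sits closest to."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl], features))

model = train(tagged)
print(predict(model, (2.9, 3.0)))  # lands near the "diseased" cluster
```

The point is the workflow, not the algorithm: the model only "knows" whatever patterns the tagged examples contained, which is why the tagging step matters so much.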
The kind of AI you're talking about is a Large Language Model (LLM). Those are designed to take giant amounts of text of different kinds and build a model of what a conversation looks like. Give it a prompt and it will return what the model thinks a good response is. Since source code is a kind of language, it can do that with code too. But it doesn't know or understand anything. Ask it for a new pastry recipe and it will give you one based on what pastry recipes generally look like, but it could be very off, and the result, if you followed it, could be terrible. It might look pretty close, though.
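The "model of what text looks like" idea can be sketched at toy scale with a bigram model: count which word tends to follow which, then extend a prompt one most-likely word at a time. Real LLMs use neural networks over tokens, not word counts, and the tiny corpus below is invented, but the "predict the next piece" objective is the same.

```python
# Toy next-word model: count word-to-word transitions in a tiny
# made-up corpus, then greedily continue a prompt.
from collections import Counter, defaultdict

corpus = ("mix the flour and sugar then add the butter "
          "and mix the dough then bake the dough").split()

# For each word, count what followed it (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_prompt(word, n=4):
    """Extend a prompt with the most likely next word at each step."""
    out = [word]
    for _ in range(n):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_prompt("mix"))
```

The output reads like recipe-shaped text without the model understanding baking at all, which is exactly the "looks pretty close but could be very off" failure mode.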
But we're at an early stage for that kind of AI. If we ultimately want to make machines that can understand their environment and interact with it and with us, we'll need both types of AI together. A machine would need to recognize a door and the different kinds of knobs to be able to go in and out of a room/building. It would need to determine what we're asking it, and then figure out how to do that in the current environment.
As a software engineer/manager in aerospace, I don't want my folks relying on any code that comes from an AI (today). I don't mind if they use one to understand a construct or help review something, but these tools can get things wildly wrong (or, maybe worse, subtly wrong). We do things that are human-rated, and the risk is just too high.
I think we're close to having LLMs that can implement common solutions, but still a ways from ones that can design novel ones. I haven't played with that aspect, but I'm biased by the knowledge that current LLMs were trained on existing open source software. They're going to be good at the kinds of things that come up a lot, not great at things that come up rarely, and bad at things that haven't come up at all.
Some day that will be different.
Thank you for this, but as this is the BestOfLemmy community, I was mostly sharing the link to another post.
Ah, guess I didn't understand the community. What's the difference between this community and just sorting by top all time?
Top does not automatically reflect the best content (I'm pretty sure, for instance, this one won't make it that far). This community is manually selected content.
Pretty sure if you sort a Lemmy main page by top, you get the most upvoted.