There are humans who can do this too. It’s pretty wild.
OpenStreetMap lets you write some insanely precise queries. There's a company out there whose plan was to team up with governments to pinpoint mass shooters while they were streaming (as one use case).
So say it's clear from the video that they're in X city, and you can see things like a McDonald's, a Starbucks, fenced-in playgrounds, churches, what have you. You can give the query a bounding box with all that info and very quickly narrow down where the video could have been taken.
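The bounding-box approach described above can be sketched as an Overpass QL query against OpenStreetMap data. This is a minimal sketch, assuming the standard Overpass tagging conventions (`amenity`/`brand` tags); the coordinates and the exact tag values are illustrative assumptions, not details from the thread:

```python
# Sketch: build an Overpass QL query that looks for a McDonald's, a
# Starbucks, and a church all inside the same bounding box. Intersecting
# the results for several landmark types shrinks the candidate area fast.
# The coordinates below are an arbitrary example box, not a real case.

def build_overpass_query(south: float, west: float, north: float, east: float) -> str:
    bbox = f"{south},{west},{north},{east}"
    return f"""
[out:json][timeout:25];
(
  node["amenity"="fast_food"]["brand"="McDonald's"]({bbox});
  node["amenity"="cafe"]["brand"="Starbucks"]({bbox});
  node["amenity"="place_of_worship"]({bbox});
);
out body;
""".strip()

query = build_overpass_query(41.85, -87.70, 41.90, -87.60)
print(query)
```

Posting a query like this to a public Overpass endpoint (e.g. `overpass-api.de/api/interpreter`) returns the matching points as JSON; each extra landmark visible in the video adds another constraint and cuts the search space further.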
I think there were also some people who would pinpoint images from mountain outlines as a game. Kind of like GeoGuessr on steroids.
This isn’t unique to AI; like most LLM applications, it just accomplishes the task faster and at a larger scale. Personally, I think if you want privacy you should limit the personal things you post to what you’re okay with being out there, and form habits such as waiting until you’re home from vacation to post pictures.
Yes, and people like me have continued to point out that this problem stems from a bad view of the expectation of privacy.
A non-famous person has a reasonable expectation of privacy even on public property. If you take a photo and a non-famous person's face is in it, you should have written consent for that specific photo, or blur it out. If Disney can own an image of a mouse for 95 fucking years, I can own my own image.
Don't take pictures of people or their property without consent. Just because technology allows you to be a disgusting creep doesn't mean you should. If you want jerk off material just use the internet like the rest of us.
If you want jerk off material just use the internet like the rest of us.
The kind of thing this can be used for is about ten stages past jerking off, and in to stalker territory. So a person already using the internet for jerking off can now pinpoint exactly where the person they're jerking off to lives, and potentially turn up at their house, and escalate from there. This is beyond just creepy (and exploitative, in the case of corporations using the info), it's potentially putting lives at risk.
Ok I don't know what I am supposed to do about that. Let's just work on the problem we can solve for now.
I never asked you to do anything? I'm just pointing out that things are much more serious than your comment makes out. I also don't see how what you said is a problem we can solve now and is okay to focus on, but what I added somehow isn't.
New geoguessr cheat just dropped
To get that kind of accuracy from a student project with such a small sample set is pretty remarkable and pretty frightening. Yes, there are people who are good at this, but (1) this AI just beat one of the most skilled humans and (2) having it in an AI brings the capability to anyone, regardless of their motives.
Plus, with an AI you can incorporate more heuristics than any human could reasonably master. The article mentions types of foliage, which is a good example. An AI could incorporate thousands of things like that easily. Seems like a tool that's ripe for abuse, but I don't know what you could do about it.
so they turned rainbolttwo into an AI
Machine learning gets creepier and creepier
This is the best summary I could come up with:
The project, known as Predicting Image Geolocations (or PIGEON, for short) was designed by three Stanford graduate students in order to identify locations on Google Street View.
But it also could be used to expose information about individuals that they never intended to share, says Jay Stanley, a senior policy analyst at the American Civil Liberties Union who studies technology.
It's a neural network program that can learn about visual images just by reading text about them, and it's built by OpenAI, the same company that makes ChatGPT.
Rainbolt is a legend in geoguessing circles — he recently geolocated a photo of a random tree in Illinois, just for kicks — but he met his match with PIGEON.
And it guessed that a picture of the Snake River Canyon in Idaho was of the Kawarau Gorge in New Zealand (in fairness, the two landscapes look remarkably similar).
They've written a paper on their technique, which they co-authored along with their professor, Chelsea Finn — but they've held back from making their full model publicly available, precisely because of these concerns, they say.
The original article contains 1,049 words, the summary contains 181 words. Saved 83%. I'm a bot and I'm open source!
Saved 83%
And 100% of the quality/context.
4chan did it first