Uplifting News
Welcome to /c/UpliftingNews (rules), a dedicated space where optimism and positivity converge to bring you the most heartening and inspiring stories from around the world. We strive to curate and share content that lights up your day, invigorates your spirit, and inspires you to spread positivity in your own way. This is a sanctuary for those seeking a break from the incessant negativity and rage (e.g. schadenfreude) often found in today's news cycle. From acts of everyday kindness to large-scale philanthropic efforts, from individual achievements to community triumphs, we bring you news—in text form or otherwise—that gives hope, fosters empathy, and strengthens the belief in humanity's capacity for good, from a quality outlet that does not publish bad copies of copies of copies.
Here in /c/UpliftingNews, we uphold the values of respect, empathy, and inclusivity, fostering a supportive and vibrant community. We encourage you to share your positive news, comment, engage in uplifting conversations, and find solace in the goodness that exists around us. We are more than a news-sharing platform; we are a community built on the power of positivity and the collective desire for a more hopeful world. Remember, your small acts of kindness can be someone else's big ray of hope. Be part of the positivity revolution; share, uplift, inspire!
This is one of the good uses of AI. It is called object detection with neural networks, and it is a classic use of convolutional neural networks (CNNs) in computer vision.
There was no LLM, no transformer, no huge data center necessary for training this model.
Please distinguish generative from predictive AI; it means a lot to all the data scientists out there inventing cool stuff!
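For a sense of what that looks like in practice, here is a minimal sketch of CNN-based person detection using torchvision's pretrained Faster R-CNN. The model choice, file name, and confidence threshold are illustrative assumptions; the article doesn't say which model the bridge system actually uses.

```python
# Minimal person-detection sketch with a pretrained CNN detector.
# Assumptions: torchvision's Faster R-CNN stands in for whatever model
# the real system uses; "camera_frame.jpg" is a hypothetical input.
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = convert_image_dtype(read_image("camera_frame.jpg"), torch.float)
with torch.no_grad():
    detections = model([frame])[0]  # dict with boxes, labels, scores

# In the COCO label set used by this model, class 1 is "person".
for label, score, box in zip(
    detections["labels"], detections["scores"], detections["boxes"]
):
    if label.item() == 1 and score.item() > 0.8:
        print(f"person at {box.tolist()} (confidence {score.item():.2f})")
```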
No, this is not a good use for AI. First, the concept here is to treat symptoms instead of the actual issue; second, stopping suicide attempts isn't the only thing the AI is used for here; and third, in the context of that second point, a 15% hallucination rate means a lot of innocent people are harassed for doing nothing wrong.
Edit, because according to the votes and both answers many people seem to have misunderstood this: the 15% false-positive rate means too many innocent people get harassed when this surveillance system is used against crime.
On the suicide issue, the AI just doesn't solve the underlying problem sustainably.
When it comes to suicide prevention, I'd prefer having a few false positives than any false negatives.
Stopping people from dying is good.
That's not the issue where I criticised the false positive rate though, if you read my comment carefully. I hope I made it clearer with my edit.
People only said suicide detection was a good use of AI, not crime surveillance. And nobody's pretending stopping a suicide attempt treats the underlying issues either; that's still better than not stopping it.
But you don't get one without the other. This does not exist in a vacuum.
Oh no, someone might ask the victim of a false positive if they’re okay. The horror.
I guess you have never been targeted by unfounded police action. I hope you never will be. It is fucking scary and traumatising.
American police are in a category of their own when it comes to cruelty in “police action”.
Don’t generalise that to other countries where becoming a police officer isn’t a three-week online course.
I am not talking about American police. I'm talking about experience with German police. You know, where you have to go to university for three years.
The training doesn't matter when several armed officers are applying forceful measures against you, you have no idea why, you are panicking, and your panic reaction is read as resisting police officers. Because that is standard procedure.
That's a good point. Good thing the area is very highly surveilled and recorded.
Unfortunately, being surveilled and recorded is not a reliable deterrent against unwarranted police action, especially when the executing officers believed they were acting justifiably.
I don't know how the AI can hallucinate in such a scenario, but it's better to harass some people to prevent other people from committing suicide on those bridges.
That is how it hallucinates.
Besides that, you unfortunately also did not read my comment completely. I specifically pointed out the other instances this AI surveillance system is used for; the article brings up crime as an example. That is where a 15 percent hallucination rate means a damn lot of innocent people get harassed by police.
Oops, my bad for not reading the whole article.
The AI only flags the people (or the objects it misidentifies as people), but a human still decides whether those people are worth checking on. I think it's still the humans' fault if a lot of innocent people get harassed by the police.
Why is AI necessary for that? I'm not a programmer, but that seems like a simple if/then statement.
Sure, but you need a way to identify whether an object is a person. That's a lot less simple.
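To make that concrete: the if/then part really is trivial, but only after something has turned raw pixels into "there is a person at these coordinates". A hypothetical sketch, with all names and numbers invented for illustration:

```python
# Hypothetical sketch: the "simple if/then" sits on top of a detector
# that has to be learned from data; it can't be hand-written as rules.

def detect_people(frame):
    """The hard part: map millions of raw pixel values to person boxes.
    In reality this is a trained CNN; stubbed here with a fake result."""
    return [(400, 120, 60, 150)]  # pretend the CNN found one person (x, y, w, h)

def in_zone(box, zone):
    """The easy part: plain if/then geometry on the detector's output."""
    x, y, _, _ = box
    zx, zy, zw, zh = zone
    return zx <= x <= zx + zw and zy <= y <= zy + zh

RAIL_ZONE = (380, 100, 200, 100)  # hypothetical restricted area, in pixels

for box in detect_people(frame=None):  # frame would be a camera image
    if in_zone(box, RAIL_ZONE):        # this is the simple if/then part
        print("alert: person near the railing at", box)
```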
Still don't need AI for that
There are very few things we as humans “need”.
But a ton of things that make stuff easier. Like using “AI” to detect humans in this case
Wow, you've convinced me
Cool, what made you change your mind?
Yes, you do, because an object on camera can look a lot of different ways. This is very different from LLMs like ChatGPT: the model trains on many labelled images (much like a Captcha task) and gets positive reinforcement if it identifies something correctly and negative reinforcement if it doesn't.
Machine learning like this has been in use since the early days of digital computing. There isn't a more efficient way to achieve something like this.
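As a rough sketch of that training idea, here is a minimal supervised-learning loop in PyTorch. The "positive/negative reinforcement" described above is implemented as a loss function that is low for correct labels and high for wrong ones; the tiny model, random stand-in data, and hyperparameters are all illustrative assumptions.

```python
# Minimal sketch of supervised training for "person / not a person".
# Assumptions: a toy CNN and random stand-in data; a real system would
# train on a large labelled dataset of camera image crops.
import torch
import torch.nn as nn

model = nn.Sequential(                      # tiny CNN for 64x64 RGB crops
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),             # two classes: person / not
)
loss_fn = nn.CrossEntropyLoss()             # high when the label is wrong
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.randn(8, 3, 64, 64)          # stand-in for labelled crops
labels = torch.randint(0, 2, (8,))          # 1 = person, 0 = not a person

for epoch in range(10):
    loss = loss_fn(model(images), labels)   # the "negative reinforcement"
    optimizer.zero_grad()
    loss.backward()                         # compute weight adjustments
    optimizer.step()                        # nudge weights to reduce loss
```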
Damn, I can't solve societal problems, might as well just let them jump!
You're a fucking moron.
Well, if you choose to misread my comment like that, AI can't stop you.