AI


Artificial intelligence (AI) is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals, which involves consciousness and emotionality. The distinction between the former and the latter categories is often revealed by the acronym chosen.

Abstract: Resistance to artificial intelligence (AI) is widespread and persists even when known psychological barriers are removed. What explains this persistent aversion? Across four studies, we investigate whether moral reactions to AI—rooted in deeply held beliefs about right and wrong—help explain resistance beyond pragmatic concerns. In Study 1, we analyzed all news headlines in a major US media corpus (COCA, 2018–2024) and found that AI is moralized at levels comparable to GMOs and vaccines—technologies whose moral opposition has received considerable attention—and that surges in moralization followed the launch of major AI applications such as ChatGPT and DALL-E. In Studies 2a, 2b, and 3, representative samples of Americans reported their attitudes toward several AI applications and other technologies. Although few participants opposed AI outright, most opponents indicated their views would remain unchanged even if AI proved beneficial—suggesting moral rather than pragmatic roots. Structural equation models revealed that moralization of AI was best captured by a single latent factor, indicating a generalized moral sentiment rather than domain-specific risk–benefit appraisals. Qualitative analyses further uncovered the most common justifications people invoke and how opponents and supporters differ in their reasoning. In Study 4, participants from Studies 2b and 3 completed a subsequent behavioral grading task; moralization scores measured in the earlier surveys predicted greater reluctance to use AI even when doing so would benefit participants (a one standard deviation increase in moralization corresponded to 42% decrease in AI usage). Together, these findings demonstrate that resistance to AI is partly moral in nature, suggesting that reaping the potential benefits of AI tools may require addressing moral concerns rather than relying solely on pragmatic arguments.

Who could have seen it coming

Hi! I am new to local AI. I got an HP SFF server and added a low-profile RTX 3050 6GB to play around with AI. My expectations are in line with my hardware. My use case is actually Paperless-AI and Inbox Zero, which should be reasonably achievable. I am also playing around with Open WebUI and Perplexica, both working as expected. Weird issue: I got Qwen3.5:4B looping when I asked it how to plant peas :D It started questioning itself, asking whether it got the right peas, over and over:

Wait, wait... Actually,  "Zuckereerbsen" is often used as a synonym for  "Mintpea". 
Okay, proceed. 

Wait, hold on. I need to check if there is any specific variety called "Zuckereeherbe" which is actually a  Garden Pea. 
Yes, they are essentially the same family. 
Okay, proceed. 

Wait, wait... Wait, I found a reference:  "Zuckereerbsen" (Sugar Peas) are sometimes grown for their sweet flesh, similar to "Snow Peas". 
Okay, proceed. 

Wait, one last thing: Is there any chance they meant  "Zuckererbsen" (which is actually  Zuckerrübe)? No. 
Okay, write it. 

Wait, wait... Wait, I found a specific product:  "Zuckereeherbe" (Sugar Bean) from some seed companies. They are indeed peas. 
Okay, proceed. 

Wait, final check: Is there any chance they meant  "Zuckererbsen" (which is actually  Zuckerrübe)? No. 
Okay, write it. 


Does anyone have experience with this? Does it happen often? What's the best way to prevent it? It basically leads to a complete block and high energy usage on my graphics card.
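Not from the post, but a common mitigation sketch: if Open WebUI is backed by Ollama (an assumption here), looping can usually be damped by raising the repetition penalty and hard-capping the number of generated tokens, so a loop can never run forever. These are real Ollama generation options; the model tag below is the poster's own.

```python
import json

# Build an Ollama /api/generate request with anti-looping options.
payload = {
    "model": "qwen3.5:4b",            # the poster's model tag
    "prompt": "How do I plant peas?",
    "stream": False,
    "options": {
        "repeat_penalty": 1.3,        # >1.0 penalizes recently repeated tokens
        "repeat_last_n": 256,         # window the penalty looks back over
        "num_predict": 512,           # hard cap on output tokens -> no endless loop
        "temperature": 0.7,
    },
}

body = json.dumps(payload)
# To actually send it (requires a running Ollama server on the default port):
#   import urllib.request
#   req = urllib.request.Request("http://localhost:11434/api/generate",
#                                data=body.encode(),
#                                headers={"Content-Type": "application/json"})
#   print(urllib.request.urlopen(req).read().decode())
print(body)
```

The `num_predict` cap is the important one for the "complete block" symptom: even if the model still loops, generation stops after a fixed budget instead of pinning the GPU indefinitely.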

So, my friend was trying out AI and told it to generate a story where Character A was snoring that night at a sleepover with Character B. Character B said "You were snoring last night". Character A told him he doesn't snore, and Character B was like: "Yeah you do, it was like nose commentary but audio." Is this its attempt at a roast? A joke? What does that even mean?? Do people actually say this? Wouldn't it just be better to leave it at "yeah you do"?? LOL

Manus Browser Operator alternative

cross-posted from: https://lemmy.ml/post/43427523

Brett Wilkins
Feb 19, 2026

The city council of New Brunswick, New Jersey, voted Wednesday to cancel plans to construct an artificial intelligence data center and instead build a new public park where the 27,000-square-foot facility would have gone.

Artificial intelligence data centers—which house the servers and other infrastructure needed to train and power AI models—have major environmental and climate impacts, as they consume massive amounts of electricity and water, as well as rare earth metals and other resources.

According to New Brunswick Patch, hundreds of people packed into Wednesday evening’s city hall meeting to voice concerns that the proposed data center would send their electricity and water bills skyrocketing, and that the facility would harm the environment.


The Paris-based AI lab released two new speech-to-text models: Voxtral Mini Transcribe V2 and Voxtral Realtime. The former is built to transcribe audio files in large batches, the latter for near-real-time transcription with latency under 200 milliseconds; both can translate between 13 languages. Voxtral Realtime is freely available under an open source license.

At 4 billion parameters, the models are small enough to run locally on a phone or laptop—a first in the speech-to-text field, Mistral claims—meaning that private conversations needn’t be dispatched to the cloud. According to Mistral, the new models are both cheaper to run and less error-prone than competing alternatives.

Hey! I posted about this project last year, and I recently went back to it. Since then, I've added an overlay with a green timer + graphs, and finished implementing the Rainbow paper.

There is no sound, and I'm skipping the cinematics (level intros such as 1-2, or flagpole animations) to save some CPU cycles (it adds up over the whole training).

This is definitely not a "foreground stream"; it's more of something fun to have in a corner of your second monitor, or as some kind of screensaver. Anyway, I hope you find this interesting.

PS: I'm looking for some nice royalty-free music to play in the background without getting a copyright strike. Feel free to drop some links and I'll try making a playlist.
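For anyone curious what "implementing the Rainbow paper" involves: Rainbow combines several DQN extensions, one of which is the multi-step (n-step) bootstrapped return. A minimal sketch of that one ingredient, with illustrative numbers not taken from this project:

```python
# One Rainbow ingredient: the n-step bootstrapped return.
# Instead of a 1-step target r + gamma * max Q(s'), Rainbow accumulates
# n rewards before bootstrapping from the value at step t+n.

def n_step_return(rewards, bootstrap_value, gamma=0.99):
    """G = sum_k gamma^k * r_k  +  gamma^n * V(s_{t+n})."""
    g = 0.0
    for k, r in enumerate(rewards):
        g += (gamma ** k) * r
    g += (gamma ** len(rewards)) * bootstrap_value
    return g

# 3-step return with rewards 1, 0, 2 and a bootstrapped value of 10:
g = n_step_return([1.0, 0.0, 2.0], bootstrap_value=10.0, gamma=0.9)
print(round(g, 3))  # 1 + 0.9*0 + 0.81*2 + 0.729*10 = 9.91
```

The full agent also layers on double Q-learning, prioritized replay, dueling heads, distributional values, and noisy nets, but the n-step target is the piece that most changes how transitions are stored.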

I am using Handy with Parakeet and Whisper Large, and it's inaccurate.

Is there a way to read a page aloud so it tailors the recognition to my voice?

What are some of your favorite things to include in system prompts? I like my assistants to have a "sense of humor and not overly professional" tone, and have included that before.

For specific cases I have included "you are a silent AI assistant" instead of "you are a helpful AI assistant" to have them skip all the pleasantries at the beginning and summaries at the end of outputs.
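A quick sketch of where a system prompt like that lives in practice: most local and hosted backends accept the OpenAI-style `messages` list, with the system prompt as the first entry. The prompt wording below follows the post; the user message is just an illustrative placeholder.

```python
# OpenAI-style chat payload: the system prompt is the first message.
system_prompt = (
    "You are a silent AI assistant. Skip pleasantries at the beginning "
    "and summaries at the end; output only what was asked for."
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Rename these files to kebab-case."},  # placeholder task
]

for m in messages:
    print(m["role"], "->", m["content"][:40])
```

Keeping the behavioral instruction in the system role (rather than prepending it to every user message) means it persists across the whole conversation without eating into each turn's prompt.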

cross-posted from: https://lemdro.id/post/36196733

Just finished reading the report on Qwen-Image-2.0 that dropped the other day. This looks like the efficiency breakthrough we've been waiting for.

The "Headline" Stats:

  • Model Size: 7B parameters.
  • Previous Gen: The old Qwen-Image-2512 was a heavy 20B model.
  • Architecture: Unified "Omni" model (handles both generation and editing in the same weights).
  • Resolution: Native 2K (2048x2048).

The 20B to 7B Optimization: This is the most important part for us. The previous 20B model was a pain to run locally without 24GB VRAM. Crushing that performance down to a 7B model means this should theoretically run on:

  • 12GB Cards (3060/4070): Comfortably at FP16 or Q8.
  • 8GB Cards: Likely possible with aggressive quantization (Q4/Q5) once the community gets hold of it.
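A back-of-envelope check of those bullets: weight memory alone is parameter count times bits per weight. This ignores activations, KV cache, and framework overhead, so real usage runs higher, which is why FP16 at 7B is actually tight on a 12 GB card and Q8 is the comfortable option.

```python
# Weights-only VRAM estimate: params * bits / 8 bytes.
# Ignores activations and overhead, so treat these as lower bounds.
def weight_gb(params_billion, bits_per_weight):
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal GB

for label, bits in [("FP16", 16), ("Q8", 8), ("Q5", 5), ("Q4", 4)]:
    print(f"7B @ {label}: {weight_gb(7, bits):.1f} GB")
# FP16 -> 14.0 GB (needs offloading on 12 GB), Q8 -> 7.0 GB,
# Q5 -> ~4.4 GB, Q4 -> 3.5 GB (plausible on 8 GB cards).
```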

Beating "Nano Banana" (Gemini 2.5 Flash Image): The technical report explicitly calls out their performance on blind leaderboards (ELO score). They are claiming Qwen-Image-2.0 achieves a higher ELO rating than Gemini 2.5 Flash Image (aka. Nano Banana) in blind human preference testing.

  • Why this matters: Nano Banana is currently regarded as the SOTA for instruction following and complex prompt adherence. If a 7B local model is actually beating it in ELO, that is insane efficiency.
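For context on what an ELO lead on a blind-preference leaderboard actually means, the standard Elo expected-score formula converts a rating gap into a head-to-head preference rate. The gap values below are illustrative, not numbers from the report.

```python
# Standard Elo expected-score formula: a rating gap maps to the
# probability that the higher-rated model wins a blind comparison.
def expected_win_rate(elo_gap):
    """P(model A preferred) when A leads B by elo_gap points."""
    return 1.0 / (1.0 + 10 ** (-elo_gap / 400))

print(f"{expected_win_rate(0):.3f}")    # 0.500 -- equal ratings
print(f"{expected_win_rate(50):.3f}")   # ~0.571
print(f"{expected_win_rate(100):.3f}")  # ~0.640
```

So even a modest lead of a few dozen Elo points implies the 7B model is preferred in a clear majority of blind matchups.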

The "Catch": Weights are not open yet. It is currently available via their API and Demo (Qwen Chat). However, Qwen has an excellent track record (Apache 2.0 releases for almost everything eventually). Given that they released the 20B weights previously, it is highly likely we see the 7B weights in a matter of weeks.

TL;DR: They optimized the 20B heavy-hitter down to a consumer-viable 7B, it claims to beat Google's best efficiency model in ELO, and now we wait for the HF upload to see if the quantization holds up.

Speech-to-text model inference in pure C.

This is a C implementation of the inference pipeline for Mistral AI's Voxtral Realtime 4B model. It has zero external dependencies beyond the C standard library. The MPS inference is decently fast, while the BLAS path is usable but slow (it continuously converts the bf16 weights to fp32).

Audio processing uses a chunked encoder with overlapping windows, bounding memory usage regardless of input length. Audio can also be piped from stdin (--stdin), or captured live from the microphone (--from-mic, macOS), making it easy to transcode and transcribe any format via ffmpeg. A streaming C API (vox_stream_t) lets you feed audio incrementally and receive token strings as they become available.
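The chunked, overlapping-window scheme described above can be sketched in a few lines: the encoder only ever sees fixed-size windows that share an overlap region, so memory stays bounded however long the input is. Window and overlap sizes below are illustrative, not Voxtral's actual parameters.

```python
# Sketch of bounded-memory chunking with overlapping windows.
# Each window shares `overlap` samples with its predecessor, so
# context is preserved across chunk boundaries.
def overlapping_windows(samples, window, overlap):
    step = window - overlap
    out = []
    for start in range(0, len(samples), step):
        out.append(samples[start:start + window])
        if start + window >= len(samples):
            break
    return out

audio = list(range(10))  # stand-in for audio samples
wins = overlapping_windows(audio, window=4, overlap=2)
print(wins)  # four windows: [0,1,2,3], [2,3,4,5], [4,5,6,7], [6,7,8,9]
```

Peak memory depends only on `window`, not on the total input length, which is exactly the property the README claims for arbitrarily long audio.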

Similar projects: Whisper.cpp
