  • Rabbit R1, an AI gadget, runs as an Android app, not requiring the "very bespoke AOSP" firmware claimed by Rabbit.
  • The Rabbit R1 launcher app can run on existing Android phones, without needing system-level permissions for its core functionality.
  • Rabbit R1 firmware analysis shows minimal modifications to standard AOSP, contradicting Rabbit's claim that custom hardware is necessary.
[-] warm@kbin.earth 111 points 3 months ago

This is one of those shitty products that you can see being shitty from a mile away, yet all the coverage and discussion around it gives it a life it otherwise wouldn't have had.

[-] magic_lobster_party@kbin.run 26 points 3 months ago* (last edited 3 months ago)

It’s probably only because it’s co-designed by Teenage Engineering. Usually their devices get quite the buzz.

[-] SnotFlickerman@lemmy.blahaj.zone 21 points 3 months ago* (last edited 3 months ago)

They have so much quality audio equipment, it's understandable why people would stump for this because of the Teenage Engineering (TE) involvement alone.

Funnily enough, despite being a TE enthusiast, this is my first time hearing that they had anything to do with this joke of a product.

...which kind of makes me look at TE like... wtf were you thinking? ...and definitely makes me question future endeavors from TE. Because this thing is a fucking joke.

[-] MeaanBeaan@lemmy.world 12 points 3 months ago

Imo TE has always been a shady company in terms of business decisions. I still, for the life of me, cannot understand why the OP-1 is over 2 grand. It's a music-making machine with a cruddy keybed that's not even velocity sensitive, and it's intentionally limited in terms of how you can use it.

Now, don't get me wrong. I definitely think it's a cool little device capable of doing cool things. But there's no way in hell this tiny thing is worth 2k. You can spend a quarter of the price on something like an Elektron Digitakt or a Polyend Play and get very similar functionality in an arguably better, more robust package.

TE are a boutique company that intentionally releases overpriced products so they can have this reputation of being a "premium" company. Just like Apple. If it weren't for their pocket operators (which are arguably closer to being toys than actual audio equipment) I wouldn't think they'd have anything remotely worth buying.

Side note: the playdate looks adorable. But, similarly to the OP-1, is very overpriced for what it does.

[-] SnotFlickerman@lemmy.blahaj.zone 2 points 3 months ago

I got into TE because of the Pocket Operators, which are built incredibly solidly and are reasonably priced for all they can do, in my opinion.

...or they were anyway. Prior to COVID they were about $60 a pop.

Now they're pushing $100, which is a lot less in line with the price point I originally bought some at.

[-] MeaanBeaan@lemmy.world 2 points 3 months ago

Yeah I do genuinely think the pocket operators are cool. At least I did when I thought they were cheap. Had no idea they got that expensive. Though that's obviously not at all expensive when you're talking about audio gear. So I'm willing to give them a modicum of leeway there.

[-] realharo@lemm.ee 1 points 3 months ago

Which is exactly like the "camera collabs" that phone makers sometimes do that end up being nothing more than marketing gimmicks.

Like the OnePlus camera "by Hasselblad" that is, quality-wise, the same as any other smartphone camera in that price category.

[-] abhibeckert@lemmy.world 14 points 3 months ago* (last edited 3 months ago)

Hopes were set unreasonably high because the hardware designer has a great reputation. And the hardware seems well made (for the price) and certainly tries out some interesting new ideas. I love how the camera is physically blocked while not in use for example.

The software team has let this product down. Not surprising, but disappointing.

[-] ahornsirup@sopuli.xyz 5 points 3 months ago

The hardware team made a device that just couldn't be turned into a good product no matter what the software team did. None of those AI-in-a-box devices are good products because they simply don't have a reason to exist. Everything they can do, phones can do. If you have a phone, you don't need one of those AI boxes; but if you buy one of those AI assistant things, you'll still need a phone (which, again, can completely replace the AI box with no loss of functionality).

[-] db2@lemmy.world 4 points 3 months ago

Camera shutters aren't new...

[-] Joelk111@lemmy.world 4 points 3 months ago

Automatic physical camera shutters? Only ones I can think of on phones are pop-up selfie cameras like the LG Wing and OnePlus 7. LG doesn't make phones any more and OnePlus dropped the pop-up camera in their next phone, and haven't brought it back.

[-] bandwidthcrisis@lemmy.world 2 points 3 months ago

Some phones and PDAs from decades ago had a "jog dial" on the side, like a mouse scroll wheel.

It was so easy to roll through menus and just push it to click.

The separate roller and button arrangement this has seems such a poor choice in comparison.

If you'd give me €17 minimum to use it, I'd for sure be using it.

[-] MargotRobbie@lemmy.world 33 points 3 months ago

Technically, every Android phone uses a "very bespoke AOSP", because the Android kernel is customized for the hardware of every single phone model, which can include things like hardware drivers and carrier services.

This is the reason that there is no universal Android ROM that works across every Android phone, unlike Windows or normal GNU/Linux distributions.

[-] Kolanaki@yiffit.net 3 points 3 months ago* (last edited 3 months ago)

What about custom ROMs like Lineage? The only thing holding it back from working on every phone is that many phones have blocks to prevent installing a custom ROM in the first place. Could be just like windows in that it has every driver for every piece of hardware in the package, just bloating it unnecessarily.

[-] MargotRobbie@lemmy.world 13 points 3 months ago

For that reason, the manufacturer usually has to release the phone's kernel source code before any custom ROM development can happen for that phone.

There is a reason that GrapheneOS only works on a couple of Pixel phones.

Could be just like windows in that it has every driver for every piece of hardware in the package, just bloating it unnecessarily.

Google specifically designed the Android kernel so that drivers are excluded from it, unlike the normal Linux or Windows kernel, because, long story short, Qualcomm did not want it to happen.

[-] madscience@lemmy.world 6 points 3 months ago

Every device has its own device tree and kernel with custom drivers, binary blobs, and system software. All of it runs beside the closed-source modem OS.

[-] balder1991@lemmy.world 2 points 3 months ago* (last edited 3 months ago)

I’m not entirely sure because I’m not very knowledgeable about CPUs, but it seems this is largely a problem with ARM architectures and their lack of standardization, isn’t it?

[-] madscience@lemmy.world 7 points 3 months ago

There's nothing like UEFI between the OS and the hardware.

[-] Assman@sh.itjust.works 31 points 3 months ago

Very bespoke? Are there varying degrees of.. bespokeness?

[-] AnActOfCreation@programming.dev 8 points 3 months ago
[-] sugar_in_your_tea@sh.itjust.works 4 points 3 months ago

But bespoke doesn't really mean unique, it just means custom. So either something is bespoke, or it's off-the-shelf. I guess you could have semi-bespoke where you just add stuff to the off-the-shelf platform.

[-] AnActOfCreation@programming.dev 6 points 3 months ago

Like how unique means "one-of-a-kind", so something is either unique or not. 😉

[-] GlassHalfHopeful@lemmy.ca 14 points 3 months ago

I've no skin in the game related to this device or software. I simply don't care. However, in terms of an AI assistant, I am curious if there is anything on the market, including this ~~APK~~ device, that is worth using. For someone who is privacy centric, advert avoidant, and security focused... does (or even can) such an app/tool exist?

[-] db2@lemmy.world 28 points 3 months ago

You won't like anything on offer currently except for that which is entirely self hosted.

[-] GlassHalfHopeful@lemmy.ca 2 points 3 months ago

What are a couple of the best self hosted options? I wouldn't mind giving it a go on my server. I may not have enough juice with what I currently run, but perhaps as a proof of concept first and then new hardware later.

[-] hedgehog@ttrpg.network 7 points 3 months ago

I haven’t used it and only heard about it while writing this post, but Open WebUI looks really promising. I’m going to check it out the next time I mess with my home server’s AI apps. If you want more options, read on.

Disclaimer: I’ve looked into most of the options below enough to feel comfortable recommending them, but I’ve only personally self hosted the Automatic 1111 webui, the Oobabooga webui, and Kobold.cpp.

If you want just an LLM and an image generator, then:

For the image generator, something that leverages Stable Diffusion models:

And then find models that you like at Civitai.

For the LLM, the best option depends on your hardware. Not knowing anything about your hardware, I recommend a llama.cpp based solution. Check out one of these:

Alternatively, VLLM is allegedly the fastest for multi-user CPU-based inference, though as far as I can tell it doesn’t have its own webui (but it does expose OpenAI compatible API endpoints).
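As a sketch of what those OpenAI-compatible endpoints look like (assuming a vLLM- or Open WebUI-style server; the base URL and model name below are placeholders, not values from this thread):

```python
import json
import urllib.request

def build_chat_request(model, user_message):
    """Build an OpenAI-style /v1/chat/completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

def chat(base_url, model, user_message):
    """POST the payload to an OpenAI-compatible server and return the reply text."""
    req = urllib.request.Request(
        base_url + "/v1/chat/completions",
        data=json.dumps(build_chat_request(model, user_message)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Usage (requires a running local server):
#   chat("http://localhost:8000", "my-local-model", "Hello!")
```

Because the request shape matches OpenAI's, any client that lets you configure the base URL can be pointed at a local server instead of OpenAI.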

And then find a model you like at Huggingface. I recommend finding a model quantized by TheBloke.

There are a couple communities not on Lemmy that discuss local LLMs - r/LocalLLaMA and r/LocalLLM for example - so if you’re trying to figure out which model to try, that’s a good place to check.

If you want a multimodal AI, you can use llama.cpp with a model like LLAVA. The options below also have multimodal support.

If you want an AI assistant with expanded capabilities - like searching your documents or the web (RAG), etc. - then I don’t have a ton of experience there, but these seem to do that job:

If you want to use your local model as more than just a chat bot - integrating it into your IDE or a browser extension - then there are options there, and as far as I know every LLM above can be configured to expose an API allowing it to be used by your other tools. Some, like Open WebUI, expose OpenAI compatible APIs and so can be used with tools built to be used with OpenAI. I don't know of many tools like this, though - I was surprisingly not able to find a browser extension that could use your own API, for example. Here are a couple examples:

Also, I found this Medium article listed some of the things I described above as well as several others that I’d never heard of.

[-] GlassHalfHopeful@lemmy.ca 3 points 3 months ago

Incredible resource! Thanks much. I'm looking forward to playing a bit and seeing how far I can push the hardware I currently have.

[-] Andromxda@lemmy.dbzer0.com 1 points 3 months ago

Thanks for this detailed and informative comment!

[-] bassomitron@lemmy.world 3 points 3 months ago* (last edited 3 months ago)

That's a deep rabbit hole (no pun intended). I know it's blasphemy to mention the other site around here, but check out the r/locallama subreddit. It covers more models than just LLaMA. There are literally thousands of variations at this point, so preferences are quite subjective based on your use case and your best bet is just to begin researching on your own for your intended purposes and available resources. Huggingface is the main model repository, as well.

[-] balder1991@lemmy.world 1 points 3 months ago

Yeah, the best way out of it is to get a few of the most recommended ones and test by yourself.

[-] andrew0@lemmy.dbzer0.com 4 points 3 months ago

What db2 already said. Microsoft just released Phi-3 mini, which could, allegedly, run locally on newer smartphones.

If I understood correctly, the Rabbit thingy just captures your information locally and then forwards it to their server. So, if you want more power, you could probably do the same by submitting the same info to a bigger open source model than Phi-3, like Llama 3, hosted on your homelab. I believe you can set it up with huggingface/gradio, which sort of provides an API that you could use.

That way, you don't need a shitty orange box, and can always get the latest open source models with a few lines of code. There are plenty of open source frameworks in the works at the moment, and I believe that we're not far off from having multi-modal LLMs running on homelab-level hardware (if you don't mind a bit of lag).

[-] GlassHalfHopeful@lemmy.ca 2 points 3 months ago

Huggingface! I found it once and never could remember again who hosted all those models. Thank you!

I use a different device than a homelab, but now I am curious what I may be able to achieve with my Syno system. It's weak hardware, so probably not a lot, but I would like to give it a go. If it's decent, I may consider another device for the purpose.

[-] andrew0@lemmy.dbzer0.com 3 points 3 months ago

Good luck! You can try the huggingface-chat repo, or ollama with this web-ui. Both should be decent, as they have instructions to set up a docker container.

I believe the Llama 3 models are out there in a torrent somewhere, but I didn't dig to find it. For the 70B model, you'll probably need around 64GB of RAM available, but the 7B one should run fine with just 8GB. It will be somewhat slow though, compared to the ChatGPT experience. The self-attention mechanism can be parallelized, which is why you will see much better results on a GPU. According to some others that tested it, if you offload some stuff to RAM, you could see ~10-12 tokens per second on an RTX 3090 for certain 70B models. But more capable ones will be at less than 1 token per second, all depending on the context window you use.
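Those RAM figures can be sanity-checked with back-of-envelope arithmetic: the weights dominate at roughly parameters × bits-per-weight / 8 bytes, plus some headroom for the KV cache and runtime (the 20% overhead below is an assumption, not a measured value):

```python
def model_ram_gb(params_billions, bits_per_weight, overhead=1.2):
    """Rough RAM needed to hold a model's weights, with ~20% assumed
    overhead for KV cache, activations, and runtime buffers."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9  # decimal GB

# A 70B model quantized to 4 bits: ~42 GB, consistent with the ~64 GB advice above.
print(round(model_ram_gb(70, 4)))  # 42
# A small 7B model at 4 bits: ~4 GB, comfortable on an 8 GB machine.
print(round(model_ram_gb(7, 4)))   # 4
```

The same arithmetic shows why quantization matters: halving the bits per weight halves the memory footprint.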

If you don't have a GPU available, just give the Phi-3 model a try :D If you quantize it to 4 bits, it can apparently get 12 tokens per second on an iPhone haha. It should play nice with pooling information from a search engine, or a vector database like milvus, qdrant or chroma.
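For the vector-database step, the core idea is nearest-neighbour search over embeddings: embed the documents once, embed the query, and return the closest matches. A toy sketch with bag-of-words counts standing in for a real embedding model (Milvus/Qdrant/Chroma do this at scale with learned embeddings):

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy stand-in for a real embedding model: a bag-of-words count vector."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "the rabbit r1 runs an android launcher",
    "phi-3 mini can run on a phone",
    "stable diffusion generates images",
]
print(retrieve("which model runs on a phone?", docs))
```

In a real RAG setup the retrieved documents are then pasted into the LLM's context window along with the question.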

[-] GlassHalfHopeful@lemmy.ca 3 points 3 months ago

Thank you for all this! Much appreciated!

[-] XEAL@lemm.ee 9 points 3 months ago* (last edited 3 months ago)

No, you don't need a 'very bespoke AOSP' to turn your phone into a Rabbit R1

I just fucking need to get the APK from a goddamn reliable source and I only know APKMirror.

[-] viking@infosec.pub 2 points 3 months ago

The article says that you need a custom launcher to establish a connection to their cloud service, so the APK alone won't do it either way. But give it a few days and someone will have rigged up a full package, no doubt. Then you've probably got 6-8 weeks to use the service before they kick the bucket and shut down forever...

[-] XEAL@lemm.ee 4 points 3 months ago

The APK is called "R1 Launcher"...

They've already proved it works from an Android phone.

[-] matto@lemm.ee 8 points 3 months ago

I don't really care for this product. It's another unnecessary AI assistant. What I'm struggling to understand is why it matters which platform it's been built on. What difference would it make if they wrote an entire OS from scratch only for this device instead of using Android, if the end product would be the same?

[-] themoonisacheese@sh.itjust.works 15 points 3 months ago

Because the main criticism of this class of products is "why in the fuck would I need a device for this? my phone already has a data plan, a microphone, and a camera. Make it an app" and the response is some vague "oh well it's so advanced (it's not.) it couldn't possibly run on a phone".

The vision is that once TPUs become affordable enough to run these models on-device, you would need a device that has such a TPU, and you would go to them. But this completely overlooks the fact that all Snapdragons and the like would also have the same TPUs integrated, and also we're not there yet. So for as long as you need to send the query to OpenAI's API, why is this not an app?

[-] moog@lemm.ee 12 points 3 months ago

It just drives the point home that it should have, and could have, been an app on your phone. If they hadn't sold this as some sort of revolutionary new product and instead were honest with themselves and us about what it was, this would be a different story.

Just like all tech bs it's just a bunch of lies to make initial sales that they never follow through with.

[-] TwoCubed@feddit.de 1 points 3 months ago* (last edited 3 months ago)

But maybe people would like a standalone device, without the distraction a regular smartphone brings.

I'm not defending this thing as AI in its current state is near useless, bar some niche applications. But the decision to make it a standalone device isn't really controversial in my opinion.

[-] moog@lemm.ee 2 points 3 months ago

I think that you're in the minority with that opinion then. I don't want another thing to carry around. And I don't see how this would be less of a distraction seeing as it doesn't do anything useful. To me that makes it more of a distraction. If you want to not look at your screen to perform tasks that's already an option with your phone using Siri or Google's assistant.

[-] Hawk@lemmy.dbzer0.com 4 points 3 months ago

If the product was as they marketed and sold it, it shouldn't be able to run on Android.

Clearly they lied to their customers, I'd be pissed.

[-] dmalteseknight@programming.dev 6 points 3 months ago* (last edited 3 months ago)

Not sure what the surprise is. It is a device that needs you to sign into an account and have an internet connection, i.e. it is just a dumb terminal. Kind of like how Alexa speakers are useless without an internet connection: all the processing is done on servers.

Even if it was not an android app, you probably could have made a clone of it since you are basically interacting with a web api.

~~In their defence the device has the advantage of giving you instant access to their ai through the touch of a button whilst on phones you would need to physically open the app as the "assistant" functionality is already reserved to Siri, Bixby, etc.~~ EDIT: I wasn't aware you can change assistants.

Personally, I'd like the concept if the processing were done on the device, but considering you need monster machines to run LLMs, I guess we are at least decades away from that reality.

[-] Zo0@feddit.de 4 points 3 months ago

Well you can change your 'assistant' in Android so not even that.

[-] dmalteseknight@programming.dev 1 points 3 months ago

Ah wasn't aware of the ability to change assistants. I stand corrected!

[-] rimjob_rainer@discuss.tchncs.de 1 points 3 months ago

Another day, another scam, nothing new.

this post was submitted on 04 May 2024
289 points (95.6% liked)
