451
submitted 11 months ago by L4s@lemmy.world to c/technology@lemmy.world

New acoustic attack steals data from keystrokes with 95% accuracy::A team of researchers from British universities has trained a deep learning model that can steal data from keyboard keystrokes recorded using a microphone with an accuracy of 95%.

[-] Coreidan@lemmy.world 159 points 11 months ago* (last edited 11 months ago)

I’ll believe it when it actually happens. Until then you can’t convince me that an algorithm can tell what letter was typed from hearing the action through a microphone.

This sounds like absolute bullshit to me.

The part that gets me is that the ONLY reason this works is because they first have to use a keylogger to capture the keystrokes of the target, then use that as an input to train the algorithm. If you switch out the target with someone else it no longer works.

This process starts with using a keylogger. The fuck you need “ai” for if you have a keylogger?!? Lol.

[-] Obsession@lemmy.world 51 points 11 months ago

That's pretty much what the article says. The model needs to be trained on the target keyboard first, so you won't just have people hacking you through a random zoom call

[-] bdonvr@thelemmy.club 19 points 11 months ago

And if you have the access to train such a model, slipping a keylogger onto the machine would be so much easier

[-] LouNeko@lemmy.world 18 points 11 months ago

I think you might have misunderstood the article. In one case they used the sound input from a Zoom meeting and, as a reference, the chat messages from those same Zoom meetings. No keyloggers required.

I haven't read the paper yet, but the article doesn't go into detail about possible flaws. Like, how would the software differentiate between double assigned symbols on the numpad and the main rows? Does it use spell check to predict words that are not 100% conclusive? What about external keyboards? What if the distance to the microphone changes? What about backspace? People make a lot of mistakes while typing. How would the program determine if something was deleted if it doesn't show up in the text? Etc.

I have no doubt that under lab conditions a recognition rate of 93% is realistic, but I doubt that this is applicable in the real world. Nobody sits in a video conference quietly typing away at their keyboard. A single uttered word can throw off your whole training data. Most importantly, all video or audio call apps have an activation threshold for the microphone enabled by default to save on bandwidth. Typing is mostly below that threshold. Any other means of collecting the data will require you to have access to the device to a point where installing a keylogger is easier.
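The activation-threshold point can be sketched as a toy energy gate (all numbers here are invented for illustration): frames quieter than the threshold, like soft typing, simply never get transmitted.

```python
import math

def rms(frame):
    """Root-mean-square energy of one audio frame (samples in [-1, 1])."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def gated_frames(frames, threshold=0.05):
    """Toy voice-activation gate: drop frames quieter than the threshold,
    the way a conferencing app avoids transmitting near-silence."""
    return [f for f in frames if rms(f) >= threshold]

speech_frame = [0.3, -0.4, 0.5, -0.2]        # loud: someone talking
typing_frame = [0.01, -0.02, 0.015, -0.01]   # quiet: key clicks

kept = gated_frames([speech_frame, typing_frame])
# only the speech frame clears the gate; the typing frame is never sent
```

If the attacker only ever receives the gated stream, the keystroke audio is gone before any model sees it.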

[-] imaradio@lemmy.ca 8 points 11 months ago

It sounds like it would have to be a very targeted attack. Like if the CIA is after you this might be a concern.

[-] LouNeko@lemmy.world 4 points 11 months ago
[-] imaradio@lemmy.ca 7 points 11 months ago

Actually I just saw this: Zoom terms of use updated to allow AI training on user-generated data, no opt-out

Maybe if zoom is systematically collecting data on all users they would be able to build a reasonable model. Then it could be leaked or shared.

What do you think?

[-] LouNeko@lemmy.world 4 points 11 months ago

Good question. Since Zoom is mainly a business tool and a lot of high-profile companies rely on it, if there's even the suspicion that Zoom uses collected data to steal passwords or company secrets, they will bring the hammer down in the most gruesome class action lawsuit. Companies pay good money for the business license, and Zoom will certainly not bite the hand that feeds it.
However, this might not apply to private Zoom users. And I'm certain that Zoom does some shady stuff behind the scenes with the data it collects on private individuals, beyond simply "improving our services".

[-] Ironfist@sh.itjust.works 7 points 11 months ago

I'm skeptical too; it sounds very hard to do with the sound alone, but let's assume that part works.

The keylogger part could be done with a malicious website that activates the microphone and asks the user to type whatever. The site would know both what you typed and how it sounded. Then that information could be used against you even when you are not on the malicious website.
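Pairing known text with microphone audio, as described, is just building a labeled dataset. A minimal sketch of that pairing step (all names and values hypothetical):

```python
def label_audio_windows(key_events, audio, sample_rate, window=0.1):
    """Pair each known keypress (char, time) with the audio snippet around it,
    yielding (char, samples) training examples. Values are illustrative."""
    half = int(window * sample_rate / 2)
    examples = []
    for char, t in key_events:
        center = int(t * sample_rate)
        examples.append((char, audio[max(0, center - half):center + half]))
    return examples

# the page knows what was typed and when, and it has the mic audio
audio = [0.0] * 1000                  # fake 1 s of audio at 1 kHz
events = [("h", 0.2), ("i", 0.6)]     # known characters with timestamps
dataset = label_audio_windows(events, audio, sample_rate=1000)
```

Each `(char, snippet)` pair is one training example; collect enough of them and you have trained on the victim's keyboard without ever installing software.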

[-] Imgonnatrythis@lemmy.world 8 points 11 months ago

Hard to do, but with a very standard keyboard like a Mac keyboard, the resonance signatures should differ slightly based on location on the board. Take into account pattern recognition, relative pause length between keystrokes, and perhaps some forced training (i.e., get them to type known words like a name and address to feed the algorithm), and I think it's potentially possible.
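The "relative pause length between keystrokes" feature is trivial to extract once you have press timestamps; a toy sketch with invented timings:

```python
def inter_key_intervals(timestamps):
    """Pauses between consecutive keystrokes -- one of the timing
    side-channel features mentioned above. Timings are invented."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

# hypothetical press times in seconds for a 5-key burst
presses = [0.00, 0.12, 0.31, 0.40, 0.95]
intervals = inter_key_intervals(presses)
# a long gap (~0.55 s) could hint at a hand repositioning or a word boundary
```

Timing alone won't identify keys, but combined with per-key acoustic signatures it narrows the candidate sequences considerably.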

[-] barryamelton@lemmy.ml 6 points 11 months ago

It doesn't need a keylogger. It just needs a video call meeting, a Discord call while you type into a public channel, a recording of you on YouTube streaming and demoing something, etc.

[-] joel_feila@lemmy.world 6 points 11 months ago

Well, to train AI you need to know what the correct answer is.

[-] HankMardukas@lemmy.world 4 points 11 months ago

It's bad now, but where we're at with AI... it's like complaining that MS Paint in 1992 couldn't make photorealistic fake images. This will only get better, never worse. Improvements will come quickly.

[-] abraham_linksys@sh.itjust.works 39 points 11 months ago

It looks like they only tested one keyboard, from a MacBook. I'd be curious whether other keyboard styles are as susceptible to the attack. It also doesn't say how many people's typing they listened to. I know mine changes depending on my mood or excitement about something; I'm sure that would affect it.

[-] supercheesecake@aussie.zone 14 points 11 months ago

My wife types with her fists when I’m trying to have Zoom meetings.

[-] abraham_linksys@sh.itjust.works 7 points 11 months ago

Coworkers amirite 🙄

[-] the_beber@lemm.ee 38 points 11 months ago

Tangentially related: did you know that it's technically also possible to reconstruct sound via smartphone accelerometers, and there are no restrictions on which apps can use them? Have fun with this info (:
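A toy illustration of the idea, assuming a phone accelerometer sampling at roughly 400 Hz (rate and data are invented, and real attacks need far more signal processing): treat the acceleration samples as a low-fidelity waveform and write them out as audio with Python's standard library.

```python
import math
import struct
import wave

ACCEL_RATE_HZ = 400  # typical accelerometer rate -- far below audio rates

# fake vibration data: a 50 Hz hum as the accelerometer might pick it up
samples = [0.5 * math.sin(2 * math.pi * 50 * t / ACCEL_RATE_HZ)
           for t in range(ACCEL_RATE_HZ)]  # one second of samples

with wave.open("accel_audio.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)               # 16-bit PCM
    w.setframerate(ACCEL_RATE_HZ)
    for s in samples:
        w.writeframes(struct.pack("<h", int(s * 32767)))
```

The resulting "audio" tops out around 200 Hz of usable bandwidth, which is why published work in this area recovers muffled speech rather than hi-fi recordings.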

[-] Slotos@feddit.nl 10 points 11 months ago

Thanks, I hate it.

[-] Tangent5280@lemmy.world 8 points 11 months ago

Reconstruct sound using smartphone accelerometers? What do you mean? That accelerometers can act as speakers and produce sound? Or that they can act as microphones and record sound as numerical vibration data? Can you point me to any articles or sources?

[-] Aopen@discuss.tchncs.de 4 points 11 months ago

SpyApp is spying in background

User thinks "why is battery draining so fast?"

Opens battery setting

Oh, this app shouldn't work right now

Restricts SpyApp's battery permissions

[-] chaorace 29 points 11 months ago

laughs in custom multi-layer orthogonal layout with one-of-a-kind enclosure & artisan keycaps

[-] malloc@lemmy.world 9 points 11 months ago

Only plebs type. I write all of my content in machine code with a custom compiler to translate it to QWERTY.

NSA/CIA/DEA/Interpol/FBI still trying to decode my shitposts to this day

[-] quadropiss@lemmy.world 23 points 11 months ago

You have to train it on a per-device and per-room basis, and you don't give everything access to your microphone

[-] Sloogs@lemmy.dbzer0.com 12 points 11 months ago* (last edited 11 months ago)

I was just thinking, streamers might have to be careful actually — you can often both see and hear when they're typing, so if you correlated the two you could train a key audio → key press mapping model. And then if they type a password for something, even if it's off-screen from their stream, the audio might clue you in on what they're typing.

[-] quadropiss@lemmy.world 7 points 11 months ago

That could hypothetically be avoided by distorting the streamed audio just a tiny bit

[-] Sloogs@lemmy.dbzer0.com 5 points 11 months ago

Yeah. Or just use a password wallet.

[-] Mockrenocks@lemmy.world 4 points 11 months ago

Could be a fun honey pot.

[-] CriticalMiss@lemmy.world 23 points 11 months ago

I’m sweating. I use blue switches. Help.

[-] Botree@lemmy.world 3 points 11 months ago

Never knew my mutant blue-switch keeb would come in handy one day. I've lubed the blue switches and added foam and tape, so now it sounds like a clicky-thocky blue-brown switch keeb.

[-] randint@lemm.ee 12 points 11 months ago

Assuming that this does not only work on English words, this is actually really terrifying.

[-] lagomorphlecture@lemm.ee 3 points 11 months ago

I have to assume it could be modified to work on any language. You just have to know the keyboard layout for the language in question so you know what to listen for. Languages with a lot of accents, like French, could be slightly more complicated, but I seriously doubt that it couldn't be done. I'm honestly not sure how the keyboard is set up for something like Chinese with so very many characters, but again, if this can be done, that can be done with some dedication and know-how.

[-] randint@lemm.ee 3 points 11 months ago

There are several different ways of inputting Chinese, but generally they all map 2~6 keystrokes to one or multiple Chinese characters, and then the user chooses one. I'd imagine it wouldn't be much harder.

[-] MossBear@lemmy.world 12 points 11 months ago

Good luck hearing my cherry reds as I type slowly and softly!

[-] 3arn0wl@lemmy.world 9 points 11 months ago

Does the research presume the use of a qwerty keyboard?

[-] FlyingSquid@lemmy.world 12 points 11 months ago

I would think that would be a safe assumption most of the time. Less than 1% of typists use Dvorak, for example.

[-] sci@feddit.nl 10 points 11 months ago

In French-speaking countries, AZERTY is the standard, I think.

[-] CareHare@sh.itjust.works 3 points 11 months ago

Belgium as well and I hate my country for it.

[-] Necromnomicon@lemmy.world 6 points 11 months ago

It uses the sounds it records and compares them against the messages you send. So in theory it's layout-agnostic.

[-] Teal@lemm.ee 9 points 11 months ago* (last edited 11 months ago)

Phreaking for the modern era.

[-] RizzRustbolt@lemmy.world 8 points 11 months ago

Van Eck Phreaking?

It was a pipe dream then, and it's a pipe dream now.

[-] SaintWacko@midwest.social 8 points 11 months ago

It says it's acoustic, not electromagnetic...

[-] Exusia@lemmy.world 7 points 11 months ago

Sweet! More man-made horrors beyond my comprehension! I sure am glad we're investing our time into things that will never be stolen or misused!

[-] mudcrip@lemm.ee 6 points 11 months ago

I find this article kinda mid bc:

  • No link to og paper
  • Article doesn't specify what kinds of keystrokes were being detected (so title seems kind of clickbait)
  • Probably not all kinds of keyboards if they only trained the model on MacBooks? Also no mention of the kind of data used to demonstrate 95% accuracy
[-] Buddahriffic@lemmy.world 6 points 11 months ago

When your ADHD fidgeting and a mic attached to your head become a super power. No one can read my keystrokes!

[-] prenatal_confusion@lemmy.one 4 points 11 months ago

Wasn't that a thing already? Thought it was part of the Snowden releases.

[-] HiddenLayer5@lemmy.ml 4 points 11 months ago* (last edited 11 months ago)

A very widespread implication of this is if you are on a call with a bad actor and are on speaker phone, and you enter your password while talking to them, they could potentially get that password or other sensitive information that you typed.

Assuming it really is that accurate, a real-world attack could go something like this. Call someone and social engineer them in a way that causes them to type their login credentials, payment information, whatever, into the proper place for them. They will likely do this without a second thought because "well, I'm signing into the actual place that uses those credentials and not a link someone sent me, so it's all good! I even typed in the address myself, so I'm sure there's no URL trickery!" Then attempt to extract what they typed. Lots of people, especially when taking calls or voice conference meetings from their desk, prefer not to hold their phone to their ear or use a headset mic, and instead just use their laptop mic or an external desktop one. And most people stop talking when they're focused on typing, which makes it even easier. Hell, if you manage to reach, say, the IT department of a major company and play your cards right, you might even be able to catch them entering a root password for a system that's remotely accessible.

load more comments (1 replies)
[-] thefloweracidic@lemmy.world 4 points 11 months ago

From the article:

The researchers gathered training data by pressing 36 keys on a modern MacBook Pro 25 times each and recording the sound produced by each press.

In their experiments, the researchers used the same laptop, whose keyboard has been used in all Apple laptops for the past two years, an iPhone 13 mini placed 17cm away from the target, and Zoom.

Now they should do this under real usage and see if they get anywhere close to 95% accuracy. Phones are usually in pockets, people listen to music, and not everyone has a MacBook.

I think it will be difficult for the average person to use this attack effectively, but I think this will become some sort of government spy thing for sure.
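The 36-keys-times-25-presses setup quoted above amounts to a small supervised-learning dataset. As a much simpler stand-in for the paper's deep model (features and values here are made up), a nearest-centroid classifier over per-key feature vectors shows the shape of the approach:

```python
def centroid(vectors):
    """Element-wise mean of equal-length feature vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def train(samples):
    """samples: {key: [feature_vector, ...]} -- e.g. 25 recordings per key."""
    return {key: centroid(vecs) for key, vecs in samples.items()}

def classify(model, vec):
    """Return the key whose centroid is nearest to the observed features."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda k: dist(model[k], vec))

# toy 2-D "spectral" features for two keys, a few recordings each
model = train({"a": [[1.0, 0.1], [1.1, 0.2]],
               "b": [[5.0, 2.0], [5.2, 2.1]]})
guess = classify(model, [1.05, 0.15])  # a recording resembling key "a"
```

Real keystroke audio features overlap far more than these toy points, which is exactly why the lab-vs-real-world gap the comment raises matters.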

[-] hal_5700X@lemmy.world 4 points 11 months ago

Will a Faraday bag help with a phone, seeing how it blocks connections? You can unplug desktop mics.

[-] art@lemmy.world 3 points 11 months ago

I'm just going to play a keyboard ASMR video while I type. Problem solved.

this post was submitted on 06 Aug 2023
451 points (94.5% liked)
