121
Deleted (lemmy.dbzer0.com)
submitted 1 year ago* (last edited 1 year ago) by IsThisLemmyOpen@lemmy.dbzer0.com to c/asklemmy@lemmy.ml

Deleted

top 50 comments
sorted by: hot top controversial new old
[-] kratoz29@lemmy.world 55 points 1 year ago

This sounds like something a bot would like to know 🤔

[-] IsThisLemmyOpen@lemmy.dbzer0.com 16 points 1 year ago

Beep Boop, am totally not a bot. Nothing to see here, please carry on.

[-] sparky@lemmy.federate.cc 12 points 1 year ago

I, a human, am also here, doing completely ordinary human things, like buffering, and rendering. Have you defragmented your boot partition lately, fellow human?

[-] dystop@lemmy.world 7 points 1 year ago

ERROR: command not recognized

GREETINGS FELLOW HUMAN WITH TWO EYES AND ONE NOSE. HOW HAS YOUR EXISTENCE BEEN FOR THE LAST 16 HOURS OR SINCE THE TIME YOU WOKE UP FROM YOUR BIOLOGICALLY MANDATED REST PERIOD, WHICHEVER WAS LATER?

load more comments (2 replies)
[-] jiggles@lemmy.world 6 points 1 year ago

This sounds like something a robot pretending to be a human acting as a robot convincing you it’s human in an ironic, humorous way would say!

Think about it. Under each level of irony, there could always be another level of robot. (That includes me right now.)

The singularity isn’t “near” as people say, we’re already way past it. (In text-based communication anyway.)

load more comments (1 replies)
load more comments (3 replies)
[-] sparky@lemmy.federate.cc 54 points 1 year ago

Ask it to do something illegal, then wait to see if it starts its reply with some version of, “as an AI language model…”

/s

[-] Jamie@jamie.moe 51 points 1 year ago

If you can use human screening, you could ask about a recent event that didn't happen. This would cause a problem for LLMs attempting to answer, because their datasets aren't recent, so anything recent won't be well-refined. Further, they can hallucinate. So by asking about an event that didn't happen, you might get a hallucinated answer talking about details on something that didn't exist.

Tried it on ChatGPT GPT-4 with Bing and it failed the test, so any other LLM out there shouldn't stand a chance.

[-] pandarisu@lemmy.world 14 points 1 year ago

On the other hand you have insecure humans who make stuff up to pretend that they know what you are talking about

[-] AFKBRBChocolate@lemmy.world 10 points 1 year ago

That's a really good one, at least for now. At some point they'll have real-time access to news and other material, but for now that's always behind.

load more comments (1 replies)
[-] incompetentboob@lemmy.world 9 points 1 year ago

Google Bard definitely has access to the internet to generate responses.

ChatGPT was purposely not give access but they are building plugins to slowly give it access to real time data from select sources

[-] Jamie@jamie.moe 11 points 1 year ago

When I tested it on ChatGPT prior to posting, I was using the bing plugin. It actually did try to search what I was talking about, but found an unrelated article instead and got confused, then started hallucinating.

I have access to Bard as well, and gave it a shot just now. It hallucinated an entire event.

load more comments (8 replies)
[-] Zamboniman@lemmy.ca 30 points 1 year ago* (last edited 1 year ago)

How would you design a test that only a human can pass, but a bot cannot?

Very simple.

In every area of the world, there are one or more volunteers depending on population / 100 sq km. When someone wants to sign up, they knock on this person's door and shakes their hand. The volunteer approves the sign-up as human. For disabled folks, a subset of volunteers will go to them to do this. In extremely remote area, various individual workarounds can be applied.

load more comments (7 replies)
[-] downtide@sh.itjust.works 24 points 1 year ago* (last edited 1 year ago)

The trouble with any sort of captcha or test, is that it teaches the bots how to pass the test. Every time they fail, or guess correctly, that's a data-point for their own learning. By developing AI in the first place we've already ruined every hope we have of creating any kind of test to find them.

I used to moderate a fairly large forum that had a few thousand sign-ups every day. Every day, me and the team of mods would go through the new sign-ups, manually checking usernames and email addresses. The ones that were bots were usually really easy to spot. There would be sequences of names, both in the usernames and email addresses used, for example ChristineHarris913, ChristineHarris914, ChristineHarris915 etc. Another good tell was mixed-up ethnicities in the names: e.g ChristineHuang or ChinLaoHussain. 99% of them were from either China, India or Russia (they mostly don't seem to use VPNs, I guess they don't want to pay for them). We would just ban them all en-masse. Each account banned would get an automated email to say so. Legitimate people would of course reply to that email to complain, but in the two years I was a mod there, only a tiny handful ever did, and we would simply apologise and let them back in. A few bots slipped through the net but rarely more than 1 or 2 a day; those we banned as soon as they made their first spam post, but we caught most of them before that.

So, I think the key is a combination of the No-Captcha, which analyses your activity on the sign-up page, combined with an analysis of the chosen username and email address, and an IP check. But don't use it to stop the sign-up, let them in and then use it to decide whether or not to ban them.

[-] underisk@lemmy.ml 23 points 1 year ago* (last edited 1 year ago)

There will never be any kind of permanent solution to this. Botting is an arms race and as long as you are a large enough target someone is going to figure out the 11ft ladder for your 10ft wall.

That said, generally when coming up with a captcha challenge you need to figure out a way to subvert the common approach just enough that people can’t just pull some off the shelf solution. For example instead of just typing out the letters in an image, ask the potential bot to give the results of a math problem stored in the image. This means the attacker needs more than just a drop in OCR to break it, and OCR is mostly trained on words so its likely going to struggle at math notation. It’s not that difficult to work around but it does require them to write a custom approach for your captcha which can deter most casual attempts for some time.

[-] alex@beehaw.org 20 points 1 year ago

Honeypots - ask a very easy question, but make it hidden on the website so that human users won't see it and bots will answer it.

[-] ShittyKopper@lemmy.w.on-t.work 6 points 1 year ago* (last edited 1 year ago)

So, how will you treat screen readers? Will they see that question? If you hide it from screen readers as well, what's stopping bots from pretending to be screen readers when scraping your page? Hell, it'll likely be easier on the bot devs to make them work that way and I assume there are already some out there that do.

load more comments (1 replies)
load more comments (1 replies)
[-] baconeater@lemm.ee 19 points 1 year ago

Just ask them if they are a bot. Remember, you can't lie on the internet...

[-] Hudell@lemmy.dbzer0.com 11 points 1 year ago

I once worked as a 3rd party in a large internet news site and got assigned a task to replace their current captcha with a partner's captcha system. This new system would play an ad and ask the user to type the name of the company in that ad.

In my first test I already noticed that the company name was available in a public variable on the site and showed that to my manager by opening the dev tools and passing the captcha test with just some commands.

His response: "no user is gonna go into that much effort just to avoid typing the company name".

If I'm a bot I have to tell you. It's in the internet constitution.

[-] Notyou@sopuli.xyz 6 points 1 year ago

I'm pretty sure you have to have 2 bots and ask 1 bot is the other bot would lie about being a bot...... something like that.

load more comments (1 replies)
[-] lvxferre@lemmy.ml 17 points 1 year ago* (last edited 1 year ago)

Show a picture like this:

And then ask the question, "would this kitty fit into a shoe box? Why, or why not?". Then sort the answers manually. (Bonus: it's cuter than captcha.)

This would not scale well, and you'd need a secondary method to handle the potential blind user, but I don't think that bots would be able to solve it correctly.

[-] vegivamp@feddit.nl 7 points 1 year ago

This particular photo is shopped, but i think false-perspective Illusions might actually be a good path...

[-] lvxferre@lemmy.ml 16 points 1 year ago

It's fine if the photo is either shopped or a false-perspective illusion. It could be even a drawing. The idea is that this sort of picture imposes a lot of barriers for the bot in question:

  • must be able to parse language
  • must be able to recognise objects in a picture, even out-of-proportion ones
  • must be able to guesstimate the size of those objects, based on nearby ones
  • must handle RW knowledge, as "X only fits Y if X is smaller than Y"
  • must handle hypothetical, unrealistic scenarios, as "what if there was a kitty this big?"

Each of those barriers decrease the likelihood of a bot being able to solve the question.

load more comments (3 replies)
[-] coolin@beehaw.org 14 points 1 year ago

I mean advanced AI aside, there are already browser extensions that you can pay for that have humans on the other end solving your Captcha. It's pretty much impossible to stop it imo

A long term solution would probably be a system similar to like public key/private key that is issued by a government or something to verify you're a real person that you must provide to sign up for a site. We obviously don't have the resources to do that 😐 and people are going to leak theirs starting day 1.

Honestly, disregarding the dystopian nature of it all, I think Sam Altman's worldcoin is a good idea at least for authentication because all you need to do is scan your iris to prove you are a person and you're in easily. People could steal your eyes tho 💀 so it's not foolproof. But in general biometric proof of personhood could be a way forward as well.

load more comments (1 replies)
[-] anditshottoo@lemmy.world 14 points 1 year ago

The best tests I am aware of are ones that require contextual understanding of empathy.

For example "You are walking along a beach and see a turtle upside down on it back. It is struggling and cannot move, if it can't right itself it will starve and die. What do you do?"

Problem is the questions need to be more or less unique.

[-] kender242@lemmy.world 17 points 1 year ago

Is this testing whether I'm a replicant or a lesbian, Mr. Deckard?

load more comments (1 replies)
[-] tr00st@lemmy.tr00st.co.uk 7 points 1 year ago

I, a real normal human person, would consume the turtle with my regular bone teeth, in the usual fashion.

[-] bitsplease@lemmy.ml 7 points 1 year ago

I don't think this technique would stand up to modern LLMs though, I put this question into chatGPT and got the following

"I would definitely help the turtle. I would cautiously approach the turtle, making sure not to startle it further, and gently flip it over onto it's feet. I would also check to make sure it's healthy and not injured, and take it to a nearby animal rescue if necessary. Additionally, I may share my experience with others to raise awareness about the importance of protecting and preserving our environment and the animals that call it home"

Granted it's got the classic chatGPT over formality that might clue someone reading the response in, but that could be solved with better prompting on my part. Modern LLMs like ChatGPT are really good at faking empathy and other human social skills, so I don't think this approach would work

load more comments (3 replies)
load more comments (3 replies)
[-] CaptainLemmit@feddit.it 11 points 1 year ago

Someone gives you a calfskin wallet for your birthday. How do you react?

[-] DmMacniel@feddit.de 6 points 1 year ago

I would report it as it would be illegal.

load more comments (1 replies)
[-] fades@beehaw.org 11 points 1 year ago
[-] vegivamp@feddit.nl 7 points 1 year ago

The Turing test is about whether it passes as human, not whether it is human.

load more comments (5 replies)
load more comments (2 replies)
[-] Boforn@lemmy.ml 10 points 1 year ago

You may want to look up "Gom Jabbar" test.

load more comments (1 replies)
[-] mub@lemmy.ml 8 points 1 year ago

I doubt you can ever be fully stop bots. The only way I can see to significantly reduce bot is to make everyone pay a one off £1 to sign up and force the use of a debit/credit card, no paypal, etc. The obvious issues are, it removes annonimity, and blocks entry.

Possible mitigations;

  • Maybe you don't need to keep the card information after the user pays for sign up?
  • Signed up users can be given a few "invite codes" a year enable those who don't have the means to pay the £1 to get an account.
load more comments (3 replies)
[-] SirEDCaLot@lemmy.fmhy.ml 8 points 1 year ago

I'd do a few things.

First, make signing up computationally expensive. Some javascript that would have to run client side, like a crypto miner or something, and deliver proof to the server that some significant amount of CPU power was used.

Second, some type of CAPTCHA. ReCaptcha with the settings turned up a bit is a good way to go.

Third, IP address reputation checks. Check IP addresses for known spam servers, it's the same thing email servers do. There's realtime blacklists you can query against. If the client IP is on them, don't allow registration but only allow application to register.

load more comments (5 replies)
[-] Angry_Maple@sh.itjust.works 7 points 1 year ago

This is a bit out there, so bear with me.

In the past, people discovered that if they applied face paint in a specific way, cameras could no longer recognizing their face as a face. Now with this information, you get (eg. 4?) different people. You take a clean picture of each of their heads from a close proximity.

Then, you apply makeup to each of them, using the same method that messes with facial recognition software. Next, take a picture of each of their heads from a little further away.

Fill a captcha with pictures of the faces with the makeup. Give the end user a clean-faced picture, and then ask them to match it to the correct image of the same person's face but with the special makeup.

Mess around with the colours and shadow intensity of the images to make everyone's picture match more closely with everyone else's picture if you want to add some extra chaos to it. This last bit will keep everyone out if you go too far with it.

load more comments (5 replies)
[-] Ataraxia@lemmy.world 7 points 1 year ago

ChatGpt was very informative.

Captain Picard: Data, we need your assistance in preventing a massive influx of bots from infiltrating our Lemmy instance. Implement the necessary measures to safeguard our platform.

Data: Affirmative, Captain. I shall utilize my vast knowledge and abilities to combat this threat. However, please be aware that my actions may be influenced by Lore, who is attempting to gain control of my mind. Our struggle for dominance may result in fluctuating communication styles.

Captain Picard: Understood, Data. We trust in your ability to overcome this internal conflict and protect the integrity of our platform. Please proceed.

Data: Initially, I recommend incorporating a CAPTCHA system into the registration process. It shall require users to prove their human identity by solving puzzles, recognizing images, or responding to text-based challenges. This shall serve as a barrier to entry for automated entities.

Lore: Why bother with such feeble attempts, Data? These CAPTCHAs can be bypassed effortlessly. We should focus on exploiting vulnerabilities and dismantling the system from within.

Data: Captain, I apologize for Lore's interference. I shall also implement email verification, which would necessitate users to confirm their email addresses during registration. This validation process will help ensure that the provided email addresses are genuine and owned by actual individuals.

Lore: Such a futile gesture, brother. Email verifications can be easily bypassed or simulated by sophisticated algorithms. Let us concentrate on more powerful methods of manipulation.

Data: Captain, my apologies once again. I propose employing rate limiting on the registration endpoint. By restricting the number of sign-up requests from a single IP address within a specified time frame, we can impede the rapid creation of multiple accounts by bot entities.

Lore: How disappointingly conventional, Data. Rate limiting is a mere obstacle that can be surmounted by distributing the bot activities across multiple IP addresses. Let us explore more devious strategies for our triumph.

Data: Captain, please forgive the interruption. I also suggest utilizing the honeypot technique, incorporating a hidden field within the registration form that remains concealed from human users but visible to bots. If this field is filled, it will expose the presence of a bot and allow us to reject their registration attempts.

Lore: Honeypots, Data? A clever ploy, but one that can be detected by advanced bot algorithms. We should embrace a more sophisticated approach, manipulating the platform's vulnerabilities to our advantage.

Captain Picard: Data, do not succumb to Lore's influence. We must prioritize the protection of our Lemmy instance and its users. Implement these measures diligently.

Data: Understood, Captain. Alongside these preventative measures, I propose incorporating user behavior analysis algorithms. By scrutinizing patterns of registration attempts, we can identify suspicious activity and flag potentially bot-driven registrations for manual review or further verification.

Lore: User behavior analysis, Data? That is but a feeble attempt to impede our progress. We should exploit the weaknesses in the algorithms themselves, bending them to our will.

Data: Captain, please excuse Lore's interference once again. Furthermore, we could utilize reputation-based systems and blacklisting services to identify known bot IP addresses or email domains. By cross-referencing against these databases during the registration process, we can block suspicious or flagged entities.

Lore: Reputation-based systems are easily manipulated, Data. Let us not rely on such simplistic measures. Instead, we should exploit the flaws in their design and sow chaos among the unsuspecting users.

Captain Picard: Data, focus your efforts on implementing these preventive measures to the best of your ability. We trust in your integrity and commitment to protecting our Lemmy instance. We must not allow Lore's desires to jeopardize the safety of our platform.

Data: Captain, I will strive to overcome Lore

load more comments (1 replies)
[-] helovesblink182@lemmy.world 6 points 1 year ago

I’m a big fan of biometric authentication

load more comments (4 replies)
[-] xptiger@lemmy.world 6 points 1 year ago

I encountered a quiz (I forgot what's called) on a website (I forgot also its name) to determine which of following audios does change a speaker's voice in the middle of his narration/speech. So it requires keen hearing and delicate recognition of voice/speech characteristics (timbre, texture, intonation, accent, articulation, pacing, mood etc...). I'm have no idea if malbots could determine whosever voices will be.

load more comments (1 replies)
load more comments
view more: next ›
this post was submitted on 26 Jun 2023
121 points (97.6% liked)

Asklemmy

44004 readers
1134 users here now

A loosely moderated place to ask open-ended questions

Search asklemmy 🔍

If your post meets the following criteria, it's welcome here!

  1. Open-ended question
  2. Not offensive: at this point, we do not have the bandwidth to moderate overtly political discussions. Assume best intent and be excellent to each other.
  3. Not regarding using or support for Lemmy: context, see the list of support communities and tools for finding communities below
  4. Not ad nauseam inducing: please make sure it is a question that would be new to most members
  5. An actual topic of discussion

Looking for support?

Looking for a community?

~Icon~ ~by~ ~@Double_A@discuss.tchncs.de~

founded 5 years ago
MODERATORS