ptz

joined 2 years ago
[–] ptz@dubvee.org 35 points 1 day ago (7 children)

The thing about these deprecated tools is that the replacements either suck, are too convoluted, don't give you the same info, or are overly verbose/obtuse.

ifconfig gave you the most relevant information for the network interfaces almost like a dashboard: IP, MAC address, link status, TX/RX packet counts and errors, etc. You can get that with ip but you've got to add a bunch of arguments, make multiple calls with different arguments, and it's still not quite what ifconfig was.
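To illustrate the point, here's roughly what it takes to rebuild ifconfig's one-screen view with ip (the interface name eth0 is a placeholder; substitute your own):

```shell
# Addresses, MAC, and link state for one interface
ip addr show dev eth0

# TX/RX packet/byte counters and errors live under a different subcommand
ip -s link show dev eth0

# The brief views are the closest thing to a dashboard, but stats still need a separate call
ip -br addr
ip -br link
```

Two or three invocations to see what ifconfig showed in one.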

Similarly, iwconfig gave you that same "dashboard" like information for your wireless adapters. I use iw to configure but iwconfig was my go-to for viewing useful information about it. Don't get me started on how much I hate iw's syntax and verbosity.
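For comparison, the closest iw equivalents to iwconfig's summary output look something like this (wlan0 is a placeholder):

```shell
# SSID, frequency, signal, and bitrate for the current connection
iw dev wlan0 link

# Interface type, channel, and MAC
iw dev wlan0 info

# Per-peer counters
iw dev wlan0 station dump
```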

They can pry scp out of my cold dead hands.

At least nftables ships an iptables compatibility shim (iptables-nft), so the old syntax keeps working.
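The iptables-translate tool (part of the iptables-nft package) even shows you what an old rule becomes in native nft syntax, which eases the transition:

```shell
# Translate a classic iptables rule into nft syntax without applying it
iptables-translate -A INPUT -p tcp --dport 22 -j ACCEPT
# Prints roughly: nft add rule ip filter INPUT tcp dport 22 counter accept
```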

[–] ptz@dubvee.org 3 points 1 day ago (1 children)

When I put the IMEI into OP's tool, it said I had to go through TMobile to [bootloader] unlock it since it was a retail model.

TMobile said the phone "wasn't in our system" and couldn't provide either SIM or bootloader unlock codes.

[–] ptz@dubvee.org 2 points 1 day ago* (last edited 1 day ago) (4 children)

but I can only find it t-mobile locked. I can carrier unlock it myself

Careful with that assumption. I bought a TMO-locked OP Nord N200 thinking I could do that but even after using it on my account for a year, they say "it's not in our system" and it remains carrier-locked. Basically when it hit the secondhand market, it was removed from TMO's system and they have no record of it and cannot carrier-unlock it (or that's the story that was told to me by 3-4 different people within TMO).

but will I be able to access the boot loader

Depends if you can carrier-unlock it.

https://service.oneplus.com/us/search/search-detail?id=op588

 
[–] ptz@dubvee.org 42 points 1 day ago

Ban ~~USA~~ politics from this sub please

[–] ptz@dubvee.org 16 points 3 days ago* (last edited 3 days ago)

1080p buffered generously but it worked :) The sweet spot was having it transcode to 720p (yay hardware acceleration). I wasn't sharing it with anyone at the time, so it was just me watching at work on one phone while using my second phone at home for internet.

[–] ptz@dubvee.org 31 points 3 days ago* (last edited 3 days ago) (2 children)

Just about anything as long as you don't need to serve it to hundreds of people simultaneously. Hell, I once hosted Jellyfin over a 3G hotspot and it managed.

Pretty much any web-based app will work fine. Streaming servers (Emby, Plex, Jellyfin, etc) work fine for a few simultaneous people as long as you're not trying to push 4K or something. 1080p can work fine at 4 Mbps or less (transcoding is your friend here). Chat servers (Matrix, XMPP, etc) are also a good candidate.
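The back-of-envelope math works out nicely even on a modest uplink (the numbers here are illustrative assumptions, not measurements):

```shell
# 30 Mbps uplink, keep ~20% headroom for other services, ~4 Mbps per 1080p transcode
uplink=30
per_stream=4
echo $(( uplink * 8 / 10 / per_stream ))   # → 6 simultaneous streams, roughly
```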

I hosted everything I wanted with 30 Mbps upload before I got symmetric fiber.

 
[–] ptz@dubvee.org 4 points 5 days ago* (last edited 5 days ago)

Not healthcare (thank gods!) but equally esoteric as far as acquiring line-of-business software goes.

[–] ptz@dubvee.org 5 points 5 days ago (2 children)

Doing so would effectively doxx the org I work for and, by association, me, so no name and shaming this time. They're not a big player you'd recognize, anyway, and mostly deal with specialty/niche software.

[–] ptz@dubvee.org 16 points 5 days ago* (last edited 5 days ago) (2 children)

Maybe I should flesh it out into an actual guide. The Nepenthes docs are "meh" at best and completely gloss over integrating it into your stack.

You'll also need to give it corpus text to generate slop from. I used transcripts from 4 or 5 weird episodes of Voyager (let's be honest: shit got weird on Voyager lol), mixed with some Jack Handy quotes and a few transcripts of Married...with Children episodes.

https://content.dubvee.org/ is where that bot traffic ends up if you want to see what I'm feeding them.

[–] ptz@dubvee.org 20 points 5 days ago

I'm 100% for bringing back Absolute Candor Janeway.

 

Cross-posted from "Dear OpenAI: Go fuck yourselves. [Copypasta]" by @ptz@dubvee.org in !oldmanyellsatcloud@dubvee.org


There's an article posted on Slashdot about OpenAI complaining that DeepSeek distilled their models to gain an edge.

Against my better judgement, I clicked into the comments there and actually found something insightful with the level of snark I would have done myself.

Without further ado:

Dear OpenAI,

Go fuck yourselves.

For years, individuals, other companies, publishing industries across the board, and user groups have been fed the ever-loving fuck up with you stealing everything that isn't bolted down to train your fucked up vision of a future devoid of work, in a world that doesn't value humans when they don't work. And you and your ilk have proclaimed repeatedly that this is just the way things have to be, that the information belongs to you if it exists because you can access it. That there is no unfairness in you taking any information you can find because all information should go into training your new computer god.

And now that someone has done the exact fucking same god damned thing that you've been doing all along, you whine about it like a spoiled fucking child that had one tiny piece of its candy taken from a pile of ever-expanding, continually self-replenishing candy that will never end and never could end.

Fuck you, you entitled pieces of god damned shit. Combine this with your future visions of completely disrupting society for your financial benefit, while potentially causing the entire economy to collapse whether your visions come true, or you crash out in your quest and take the entire dream-o-sphere of Wall Street with you, and you are beyond disgusting. We're sick of your shit, and I hope to crap that this begins to drive home the fact that the term "corporation" does not deserve the respect it gets in society today. You are a parasite, through and through. And just because you found a tapeworm within your shit, it doesn't mean you aren't a tapeworm in society's shit yourself.

Good grief, some of us can't wait until you AI based companies stop being the 100% focal point of all world governments and all economic concerns. It's like we've handed the reins of society over to the most narcissistic, self-obsessed idiots in all of existence, and somehow they always find a way to double-down on the ugliest parts of humanity in their quest.

[–] ptz@dubvee.org 34 points 5 days ago* (last edited 5 days ago) (4 children)

Thanks!

There are three main steps involved:

  1. Set up Nepenthes to receive the traffic
  2. Perform bot detection on inbound requests (I use a regex list, and one is provided below)
  3. Configure traffic rules in your load balancer / reverse proxy to send the detected bot traffic to Nepenthes instead of the actual backend for the service(s) you run.
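Once it's wired up, a quick smoke test is to fake a bot user agent and confirm you get bounced to the tarpit (the domain here is a placeholder; adjust to your own setup):

```shell
# A known-bad UA should get the 301 redirect to the tarpit
curl -sI -A "GPTBot" https://app.mydomain.xyz/ | head -n 1

# A normal browser UA should reach the real backend
curl -sI -A "Mozilla/5.0 (X11; Linux x86_64)" https://app.mydomain.xyz/ | head -n 1
```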

Here's a rough guide I commented a while back: https://dubvee.org/comment/5198738

Here's the post link at lemmy.world which should have that comment visible: https://lemmy.world/post/40374746

You'll have to resolve my comment link on your instance since my instance is set to private now, but in case that doesn't work, here's the text of it:

So, I set this up recently and agree with all of your points about the actual integration being glossed over.

I already had bot detection set up in my Nginx config, so adding Nepenthes was just a matter of changing the behavior of that. Previously, I had just returned either 404 or 444 to those requests, but now it redirects them to Nepenthes.

Rather than trying to do rewrites and pretend the Nepenthes content is under my app's URL namespace, I just do a redirect which the bot crawlers tend to follow just fine.

There are several parts to this, to keep my config sane. Each of them lives in its own include file.

  • An include file that looks at the user agent, compares it to a list of bot UA regexes, and sets a variable to either 0 or 1. By itself, that include file doesn't do anything more than set that variable. This allows me to have it as a global config without having it apply to every virtual host.

  • An include file that performs the action when that variable is set to 1. This has to be included in the server portion of each virtual host where I want the bot traffic to go to Nepenthes. If this isn't included in a virtual host's server block, then bot traffic is allowed.

  • A virtual host where the Nepenthes content is presented. I run a subdomain (content.mydomain.xyz). You could also do this as a path off of your protected domain, but this works for me and keeps my already complex config from getting any worse. Plus, it was easier to integrate into my existing bot config. Had I not already had that, I would have run it off of a path (and may go back and do that when I have time to mess with it again).

The map-bot-user-agents.conf is included in the http section of Nginx and applies to all virtual hosts. You can either include this in the main nginx.conf or at the top (above the server section) in your individual virtual host config file(s).

The deny-disallowed.conf is included individually in each virtual host's server section. Even though the bot detection is global, if the virtual host's server section does not include the action file, then nothing is done.
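Putting the pieces together, the wiring looks roughly like this (file paths, domain names, and ports are placeholders, not my actual config):

```nginx
# nginx.conf -- the map applies to every virtual host
http {
    # Sets $ua_disallowed to 0 or 1 based on the user agent
    include /etc/nginx/includes/map-bot-user-agents.conf;

    # A protected app: detected bots get bounced to the tarpit
    server {
        server_name app.mydomain.xyz;
        include /etc/nginx/includes/deny-disallowed.conf;
        location / {
            proxy_pass http://127.0.0.1:8080;  # the real backend
        }
    }

    # The tarpit itself on its own subdomain
    server {
        server_name content.mydomain.xyz;
        location / {
            proxy_pass http://127.0.0.1:8893;  # Nepenthes (port is an assumption)
        }
    }
}
```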

Files

map-bot-user-agents.conf

Note that I'm treating Google's crawler the same as an AI bot because... well, it is. They're abusing their search position by double-dipping on the crawler, so you can't opt out of being crawled for AI training without also preventing it from crawling you for search engine indexing. Depending on your needs, you may need to comment that out. I've also commented out the Python requests user agent. And forgive the mess at the bottom of the file; I inherited the seed list of user agents and haven't cleaned up that massive regex one-liner.

# Map bot user agents
## Sets the $ua_disallowed variable to 0 or 1 depending on the user agent. Non-bot UAs are 0, bots are 1

map $http_user_agent $ua_disallowed {
    default 		0;
    "~PerplexityBot"	1;
    "~PetalBot"		1;
    "~applebot"		1;
    "~compatible; zot"	1;
    "~Meta"		1;
    "~SurdotlyBot"	1;
    "~zgrab"		1;
    "~OAI-SearchBot"	1;
    "~Protopage"	1;
    "~Google-Test"	1;
    "~BacklinksExtendedBot" 1;
    "~microsoft-for-startups" 1;
    "~CCBot"		1;
    "~ClaudeBot"	1;
    "~VelenPublicWebCrawler"	1;
    "~WellKnownBot"	1;
    #"~python-requests"	1;
    "~bitdiscovery"	1;
    "~bingbot"		1;
    "~SemrushBot" 	1;
    "~Bytespider" 	1;
    "~AhrefsBot" 	1;
    "~AwarioBot"	1;
#    "~Poduptime" 	1;
    "~GPTBot" 		1;
    "~DotBot"	 	1;
    "~ImagesiftBot"	1;
    "~Amazonbot"	1;
    "~GuzzleHttp" 	1;
    "~DataForSeoBot" 	1;
    "~StractBot"	1;
    "~Googlebot"	1;
    "~Barkrowler"	1;
    "~SeznamBot"	1;
    "~FriendlyCrawler"	1;
    "~facebookexternalhit" 1;
    "~*(?i)(80legs|360Spider|Aboundex|Abonti|Acunetix|^AIBOT|^Alexibot|Alligator|AllSubmitter|Apexoo|^asterias|^attach|^BackDoorBot|^BackStreet|^BackWeb|Badass|Bandit|Baid|Baiduspider|^BatchFTP|^Bigfoot|^Black.Hole|^BlackWidow|BlackWidow|^BlowFish|Blow|^BotALot|Buddy|^BuiltBotTough|
^Bullseye|^BunnySlippers|BBBike|^Cegbfeieh|^CheeseBot|^CherryPicker|^ChinaClaw|^Cogentbot|CPython|Collector|cognitiveseo|Copier|^CopyRightCheck|^cosmos|^Crescent|CSHttp|^Custo|^Demon|^Devil|^DISCo|^DIIbot|discobot|^DittoSpyder|Download.Demon|Download.Devil|Download.Wonder|^dragonfl
y|^Drip|^eCatch|^EasyDL|^ebingbong|^EirGrabber|^EmailCollector|^EmailSiphon|^EmailWolf|^EroCrawler|^Exabot|^Express|Extractor|^EyeNetIE|FHscan|^FHscan|^flunky|^Foobot|^FrontPage|GalaxyBot|^gotit|Grabber|^GrabNet|^Grafula|^Harvest|^HEADMasterSEO|^hloader|^HMView|^HTTrack|httrack|HTT
rack|htmlparser|^humanlinks|^IlseBot|Image.Stripper|Image.Sucker|imagefetch|^InfoNaviRobot|^InfoTekies|^Intelliseek|^InterGET|^Iria|^Jakarta|^JennyBot|^JetCar|JikeSpider|^JOC|^JustView|^Jyxobot|^Kenjin.Spider|^Keyword.Density|libwww|^larbin|LeechFTP|LeechGet|^LexiBot|^lftp|^libWeb|
^likse|^LinkextractorPro|^LinkScan|^LNSpiderguy|^LinkWalker|msnbot|MSIECrawler|MJ12bot|MegaIndex|^Magnet|^Mag-Net|^MarkWatch|Mass.Downloader|masscan|^Mata.Hari|^Memo|^MIIxpc|^NAMEPROTECT|^Navroad|^NearSite|^NetAnts|^Netcraft|^NetMechanic|^NetSpider|^NetZIP|^NextGenSearchBot|^NICErs
PRO|^niki-bot|^NimbleCrawler|^Nimbostratus-Bot|^Ninja|^Nmap|nmap|^NPbot|Offline.Explorer|Offline.Navigator|OpenLinkProfiler|^Octopus|^Openfind|^OutfoxBot|Pixray|probethenet|proximic|^PageGrabber|^pavuk|^pcBrowser|^Pockey|^ProPowerBot|^ProWebWalker|^psbot|^Pump|python-requests\/|^Qu
eryN.Metasearch|^RealDownload|Reaper|^Reaper|^Ripper|Ripper|Recorder|^ReGet|^RepoMonkey|^RMA|scanbot|SEOkicks-Robot|seoscanners|^Stripper|^Sucker|Siphon|Siteimprove|^SiteSnagger|SiteSucker|^SlySearch|^SmartDownload|^Snake|^Snapbot|^Snoopy|Sosospider|^sogou|spbot|^SpaceBison|^spanne
r|^SpankBot|Spinn4r|^Sqworm|Sqworm|Stripper|Sucker|^SuperBot|SuperHTTP|^SuperHTTP|^Surfbot|^suzuran|^Szukacz|^tAkeOut|^Teleport|^Telesoft|^TurnitinBot|^The.Intraformant|^TheNomad|^TightTwatBot|^Titan|^True_Robot|^turingos|^TurnitinBot|^URLy.Warning|^Vacuum|^VCI|VidibleScraper|^Void
EYE|^WebAuto|^WebBandit|^WebCopier|^WebEnhancer|^WebFetch|^Web.Image.Collector|^WebLeacher|^WebmasterWorldForumBot|WebPix|^WebReaper|^WebSauger|Website.eXtractor|^Webster|WebShag|^WebStripper|WebSucker|^WebWhacker|^WebZIP|Whack|Whacker|^Widow|Widow|WinHTTrack|^WISENutbot|WWWOFFLE|^
WWWOFFLE|^WWW-Collector-E|^Xaldon|^Xenu|^Zade|^Zeus|ZmEu|^Zyborg|SemrushBot|^WebFuck|^MJ12bot|^majestic12|^WallpapersHD)" 1;

}

deny-disallowed.conf

# Deny disallowed user agents
if ($ua_disallowed) {
    # Redirect to the Nepenthes domain. So far, pretty much all the bot crawlers have been happy to accept the redirect and crawl the tarpit continuously
    return 301 https://content.mydomain.xyz/;
}

 


[–] ptz@dubvee.org 166 points 5 days ago (17 children)

I was blocking them but decided to shunt their traffic to Nepenthes instead. There's usually 3-4 different bots thrashing around in there at any given time.

If you have the resources, I highly recommend it.

206
We all have "that vendor" right? (tesseract.dubvee.org)
submitted 5 days ago* (last edited 5 days ago) by ptz@dubvee.org to c/iiiiiiitttttttttttt@programming.dev
 

In my 25 year career, I have never had to deal with such a garbage software vendor as this one. I don't want to go on a rant since I have other things to do today and could go on for hours, but here are the highlights:

  • Application has like 30 modules. Each module has its own config file.
  • Config files are not centralized and reside in the directory with each module.
  • Vendor ships a zip file of slop code that requires manual assembly.
  • Vendor provides no substantial documentation. Every request for technical documentation is met with "Let's setup a meeting to solve your immediate problem".
  • Vendor ships dummy config files in every release necessitating manual backup/restore of the config for each of the 30 modules
  • Vendor changes config file format every third release. This requires re-configuring the entire application stack and all 30 modules.
  • Vendor puts the version info in the goddamned config file instead of building it into the compiled .NET application as a variable like a sane person would do.
  • Vendor sends an update every week and gets pissy when we don't deploy it within 3 hours of their "we shat out an update" email.
  • Vendor has been asked repeatedly to address this. The only response we've gotten to these complaints is the sound of crickets chirping. 🦗

To answer any questions:

  1. Yes, I voiced my concerns long ago. They were ignored.
  2. Yes, they are the "lowest bidder" and it goddamned shows.
  3. Yes, they know I hate them.
  4. Yes, I tried writing scripts to manage the config files. They work once or twice until the vendor changes the config file format every 3rd-4th release.
  5. Yes, it is sunk cost fallacy all the way down, but I've been given my orders.
 

Fuck it. I'm just posting the Orphan Black soundtrack now.

5
Arya - Warriors (tube.dubvee.org)
submitted 6 days ago* (last edited 6 days ago) by ptz@dubvee.org to c/eternalplaylist@crazypeople.online
 

Posting this because I'm doing an "Orphan Black" rewatch and Krystal's fucking ringtone (this song) is stuck in my head and will not budge 😆

 

An American superhero fan short film based on the Power Rangers franchise; unlike the kid-friendly franchise, the short depicts an adult-oriented take on the source material.

It was directed by Joseph Kahn, who co-wrote with James Van Der Beek and Dutch Southern, and produced by Adi Shankar and Jil Hardin. The short film featured an ensemble cast starring Katee Sackhoff, Van Der Beek, Russ Bain, Will Yun Lee, and Gichi Gamba. It was released on YouTube and Vimeo on February 23, 2015.

 

Also, damn, there's some big names in this video. Dennis Franz, Lauren Holly, and Jane.

 
 

Warning: Site does not work well with DarkReader, so you'll need to deal with the flashbang.

Increasingly, we’re pushed to trash tech that should still work, such as Chromebooks, phones, and smart home devices, just because the software has expired or lost support. This database lists more than 100 tech products that have stopped working after manufacturers dropped support. It calculates the total weight of all these dead devices which have joined the 68 million tons of electronic waste disposed of each year.

Everyone here can think of a cloud-connected product that was killed because the company that made it stopped supporting it. While these corporations have forgotten their products, the US PIRG Education Fund has immortalized them in their Electronic Waste Graveyard.

With an estimated “130,000,000 pounds of electronic waste” produced since 2014, the amount of wasted resources is staggering. The advent of the cloud promised us reduced waste as lightweight devices could rely on remote brains to keep the upgrades going long after a traditional device would have been unable to keep up. The opposite seems to have occurred, wreaking havoc on the environment and pocketbooks.

Of course, we can count on hackers to circumvent the end of companies or services, but while that gives us plenty of fodder for projects, it isn’t so great for the normal folks who make up the rest of the population. We appreciate PIRG giving such a visceral reminder of the cost of business-as-usual for those who aren’t always thinking about material usage and waste.

If PIRG sounds familiar, they’re one of the many groups keeping an eye on Right-to-Repair legislation. We’ve been keeping an eye on it too with places like the EU, Texas, and Washington moving the ball forward on reducing e-waste and keeping devices running longer.


Summary from Hack-a-Day
