evenwicht

joined 9 months ago
[–] evenwicht 1 points 3 days ago

Well it’s still the same problem. I mean, it’s likely piracy to copy the public lib’s disc to begin with, even if just for a moment. From there, if I want to share it w/others I still need to be able to exit the library with the data before they close. So it’d still be a matter of transcoding as a distinctly separate step.

[–] evenwicht 1 points 3 days ago* (last edited 3 days ago)

Not sure how that makes sense. Why would a captive portal block the 1st 39 attempts but not the 40th, for example?

My workaround is to establish a VPN (which happens quickly w/out issue) then run tor over that, which also connects instantly over the VPN.

 

There is a particular public hotspot where tor takes like an hour to establish a connection. It sits stuck at 10% and shows a running count of connection attempts, upwards of 40.

What does this mean? Is it that the wi-fi operator is blocking guard nodes, but perhaps only a snapshot of guard nodes? When I finally connect, is it a case where I managed to get a more recent guard node than the wi-fi operator knows about?
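
If it really is guard blocking, one standard mitigation -- just a hedged sketch, I have not tried it on this particular hotspot -- is to point tor at obfs4 bridges instead of public guards, since bridge addresses are not in the public consensus an operator could be filtering on. The Bridge line below is a placeholder; real ones come from bridges.torproject.org:

  # lines to append to /etc/tor/torrc (Debian paths assumed; needs the obfs4proxy package)
  UseBridges 1
  ClientTransportPlugin obfs4 exec /usr/bin/obfs4proxy
  Bridge obfs4 X.X.X.X:443 FINGERPRINT cert=... iat-mode=0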

[–] evenwicht 1 points 5 days ago* (last edited 5 days ago) (2 children)

What’s the point of spending a day compressing something that I only need to watch once?

If I pop into the public library and start a ripping process using Handbrake, the library will close for the day before the job is complete for a single title. I could check out the media, but there are trade-offs:

  • no one else can access the disc while you have it out
  • some libraries charge a fee for media check-outs
  • privacy (I avoid netflix & the like to prevent making a record in a DB of everything I do; checking out a movie still gets into a DB)
  • libraries tend to have limits on the number of media discs you can have out at a given moment
  • checking out a dozen DVDs will take a dozen days to transcode, which becomes a race condition with the due date
  • probably a notable cost in electricity, at least on my old hardware
[–] evenwicht 2 points 5 days ago

Wow, thanks for the research and effort! I will be taking your approach for sure.

[–] evenwicht 10 points 1 week ago (6 children)

I’ll have a brief look but I doubt ffmpeg would know about DVD CSS encryption.

 

Translating the Debian install instructions to tor network use, we have:

  # as root (the keyring and sources.list.d paths are not user-writable):
  torsocks wget https://apt.benthetechguy.net/benthetechguy-archive-keyring.gpg -O /usr/share/keyrings/benthetechguy-archive-keyring.gpg
  echo "deb [signed-by=/usr/share/keyrings/benthetechguy-archive-keyring.gpg] tor+https://apt.benthetechguy.net/debian bookworm non-free" > /etc/apt/sources.list.d/benthetechguy.list
  apt update
  apt install makemkv
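
One prerequisite worth noting, on the assumption it is not already installed: apt only speaks the tor:// and tor+https:// schemes once apt-transport-tor is present.

  apt install apt-transport-tor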

apt update yields:

  Ign:9 tor+https://apt.benthetechguy.net/debian bookworm InRelease
  Ign:9 tor+https://apt.benthetechguy.net/debian bookworm InRelease
  Ign:9 tor+https://apt.benthetechguy.net/debian bookworm InRelease
  Err:9 tor+https://apt.benthetechguy.net/debian bookworm InRelease
  Connection failed [IP: 127.0.0.1 9050]

Turns out apt.benthetechguy.net is jailed in Cloudflare. And apparently the code is not developed out in the open -- there is no public code repo or even a bug tracker. Even the forums are a bit exclusive (registration on a particular host is required and disposable email addresses are refused). There is no makemkv IRC channel (according to netsplit.de).

There is a blurb somewhere that the author is looking to get MakeMKV into the official Debian repos and is looking for a sponsor (someone with a Debian account). But I wonder if this project would even qualify for the non-free category. Debian does not just take any non-free s/w.. it's more for drivers and the like.

Alternatives?


The reason I looked into #makemkv was that Handbrake essentially forces users into a long CPU-intensive transcoding process. It cannot simply rip the bits as they are. MakeMKV relieves us of having to transcode at the same time as ripping. But getting it is a shit show.
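
For what it’s worth, the split-the-job alternative I may fall back on looks roughly like this (a hedged sketch, untested; the device path, disc directory name, and preset are assumptions):

  # at the library: fast, I/O-bound raw mirror of the disc -- no transcoding (dvdbackup leans on libdvdcss for CSS)
  dvdbackup -M -i /dev/sr0 -o ~/rips
  # later, at home: the slow CPU-bound transcode, with no closing time to race against
  # (dvdbackup creates a directory named after the disc title)
  HandBrakeCLI -i ~/rips/MOVIE_TITLE -o movie.mkv --preset "Fast 1080p30"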

[–] evenwicht 3 points 2 weeks ago

The design approach would also help serve people in impoverished areas, where you might imagine they have no internet access at home but can still participate through some community access point.

[–] evenwicht 2 points 2 weeks ago* (last edited 2 weeks ago)

Indeed, I really meant tools that have some cloud interaction but give us asynchronous autonomy from the cloud.

Of course there are also scenarios that normally use the cloud but can be made fully offline. E.g. Argos Translate. If you use a web-based translator like Google Translate or Yandex Translate, you are not only exposed to the dependency of having a WAN when you need to translate, but you give up privacy. Argos Translate empowers you to translate text without cloud dependency while also getting a sensible level of privacy. Or in the case of Google Maps vs. OSMand, you have the privacy of not sharing your location and also the robustness of not being dependent on a functioning uplink.

Both scenarios (fully offline apps and periodic syncing of msgs) are about power and control. If all your content is sitting on someone else’s server, you are disempowered because they can boot you at any moment, alter your content, or they can pull the plug on their server spontaneously without warning (this has happened to me many times). They can limit your searching capability too. A natural artifact of offline consumption is that you have your own copy of the data.

if it aint broke dont fix it

It’s broke from where I’m sitting. Many times Mastodon and Lemmy servers went offline out of the pure blue and my msgs were mostly gone, apart from what got cached on other hosts, which is tedious and non-trivial to track down. It’s technically broken security in the form of data loss/loss of availability.

[–] evenwicht 3 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

I have nothing for these use cases, off the top of my head:

  • Lemmy
  • kbin
  • Mastodon (well, I have Mastodon Archive by kensanata but it’s only useful for backups and searching, not posting)
  • airline, train, and bus routes and fares -- this is not just a problem of apps not existing, since the websites are often bot-hostile too. But the idea is that it fucking sucks to have to do the manual labor of using their shitty web GUI to search for schedules one parameter set at a time. E.g. I want to go from city A to B, possibly via city C, anytime in the next 6 or 8 weeks, and I want the cheapest fare. That likely requires 100+ separate searches. It should just be open data... we fetch a CSV or XML file, study the data offline, and do our own queries. For flights, ITA Matrix was a great thing (though purely online).. until Google bought it to ruin it.
  • Youtube videos -- yt-dl and Invidious is a shitshow (Google’s fault). YT is designed so you have to be online because of Google’s protectionism. I used to be able to pop into a library and grab ~100 YT videos via Invidious in the time that I could only view a few, and have days of content to absorb offline (and while the library is closed). Google sabotaged that option. But they got away with it because of a lousy culture of novice users willing to be enslaved to someone else’s shitty UIs. There should have been widespread outrage when Google pulled that shit.. a backlash that would twist their arm to be less protectionist. But it’s easy to oppress a minority of people. (The batch workflow is sketched just after this list.)
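
For the record, the batch workflow was roughly this (a hedged sketch; the file names are placeholders, and whether it survives Google’s countermeasures on any given day is exactly the problem):

  # watchlist.txt holds one video URL per line, collected while browsing
  # --download-archive skips anything already grabbed on a previous library visit
  yt-dlp --download-archive done.txt -f "bv*[height<=720]+ba/b" -a watchlist.txt -o "%(title)s.%(ext)s"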
[–] evenwicht 4 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

You just wrote your response using an app that’s dysfunctional offline. You had to be online.

Perhaps before your time, Usenet was the way to do forums. Gnus (an emacs mode) was good for this. Gnus would fetch everything to my specification and store a local copy. It served as an offline newsreader. I could search my local archive of messages and the search was not constrained to a specific tool (e.g. grep would work, but gnus was better). I could configure it to grab all headers for new msgs in a particular newsgroup, or full payloads. Then when disconnected it was possible to read posts. I never tested replies because I had other complexities in play (mixmaster), but it was likely possible to compose a reply and sync/upload it later when online. The UX was similar to how mailing lists work.

None of that is possible with Lemmy. It’s theoretically possible given the API, but the tools don’t exist for that.
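
To put “theoretically possible” in concrete terms, a crude sync pass could just pull JSON from the HTTP API and stash it locally for offline reading (a hedged sketch; the instance, community, post id, and file names are placeholders, and pagination plus posting replies are left out):

  # grab the 50 newest posts of a community as JSON for offline reading/grepping
  curl -s "https://lemmy.sdf.org/api/v3/post/list?community_name=text_ui&sort=New&limit=50" > posts.json
  # pull the comments of one post (post_id taken from posts.json)
  curl -s "https://lemmy.sdf.org/api/v3/comment/list?post_id=123456&limit=50" > comments-123456.json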

Offline workflows were designed to accommodate WAN access interruptions, but an unforeseen benefit was control. Having your own copy naturally gives you a bit of control and censorship resilience.

(update) Makes no sense that I have to be online to read something I previously wrote. I sometimes post some useful bit of information, but there are only so many notes I can keep organised. Then I later need to recall it (e.g. what was that legal statute I cited for situation X?). If I wrote it into a Lemmy post, I have to be online to find it again. The search tool might be too limited to search the way I need to.. and that assumes the host I wrote it on is even still online.

20
submitted 2 weeks ago* (last edited 2 weeks ago) by evenwicht to c/text_ui
 

Back in the days of dial-up BBSs and Internet via a real modem, speed and availability constraints led to apps that work well offline.

Now that most people have unlimited broadband, offline tools have become rare. We are trapped in an infrastructure that constrains us to having internet at all times, which is then reinforced by the Tyranny of Convenience.

So when someone makes the point “boycott Time Warner/Spectrum because they support right-wing politics and assault privacy”, ppl are helpless.. unable to stomach the idea of being offline. It’s like no one has the constitution to say “fuck this shit”.

The web has become such garbage that I am happy to be offline. Shitty ISPs don’t get a dime from me. No more paying for something that is infested with surveillance advertising, CAPTCHA, and garbage. I’m content to periodically login from public hotspots.

But not a single lemmy client for offline use.. to sync when plugged in and then read and compose replies later. This would give a better workflow even if always online because you would have a local copy (useful when servers bail out out of the pure fucking blue).

The hecklers will say “what are you waiting for.. write it yourself!” As if 1 person can recreate a whole infrastructure (lemmy, kbin, mastodon, xmpp, scraper bots, etc). The heart of the issue is that a whole paradigm is being overlooked. If you are going to create an app for whatever reason, why not design it from the ground up to work offline and headless? Of course it would also work online, and a GUI can be a separate module. But the reverse is not true.. design an app to expect always-available internet and you have something that cannot easily adapt to an offline workflow.

[–] evenwicht 1 points 1 month ago

That page is dead for me.

But I just wanted to add that Chevron is an ALEC member in the US. Thus they feed the GOP and many extreme-right policies. We should be boycotting them already anyway. They also got caught financing the Cloakroom project.

[–] evenwicht 1 points 1 month ago

Did you ever find an answer on this?

Lately I’ve seen lots of external PCI-E slots that attach to a USB 3 cable, and they’re quite cheap. Theoretically you could add an external graphics card that way, but I have not tried it. Someone told me the heat that would bring to my USB 3 expresscard would risk damage. I think these external PCI-E slots are intended to be attached to a desktop so a cryptocurrency miner can use the GPU.

 

If you sit in front of a PC with a big screen all day, smartphones are not a good way to do SMS. Rationale:

  • you have to reach for a small screen & possibly tap around (enter PINs) to see the txt that just arrived
  • smartphones have a huge attack surface; street-wise people do not put GSM chips in them
  • to send a msg, you have to tap on a tiny keyboard (or fiddle with a dicey speech-to-text tool)
  • if your phone breaks, you lose access to all your SMS msgs. (Gammu can copy all your SMS msgs even if your screen is shattered, but only from dumb phones)
  • (countless software freedom issues here… gammu does not work with smartphones because smartphones do not support a standard AT command protocol)

Theoretically, isn’t gammu or gnokii a smarter way of working? If you have a text terminal and GNU screen/tmux running, possibly with irssi, it would be a much more efficient workflow if SMS msgs arrived in an irssi window just like an IRC channel, so you could use your full-size keyboard to compose an SMS.

Anyone doing this?

I got gammu working on an old dumb phone. Haven’t checked yet whether it can be integrated into irssi or bitlbee.
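
The basics that worked for me look roughly like this (a hedged sketch; the device path and phone number are placeholders and depend on the phone and cable):

  # ~/.gammurc -- point gammu at the phone over the AT protocol:
  #   [gammu]
  #   device = /dev/ttyACM0
  #   connection = at
  gammu identify                                  # sanity check: prints model/IMEI if the link works
  gammu getallsms > sms-dump.txt                  # pull every stored SMS off the phone
  gammu sendsms TEXT +32470000000 -text "hello"   # compose from the full-size keyboard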

Possible snag: serial connections can be unreliable with Gammu. My USB→serial DCU-65 cable attached to a Sony Ericsson dumb phone chronically disconnects and reconnects to the PC. I wonder if using bluetooth instead would solve that.

The gammu and gnokii projects seem to be somewhat idling.. having been pushed aside due to smartphones. But it’s unjust and an artifact of tech wisdom fading in the population.

[–] evenwicht 1 points 1 month ago* (last edited 1 month ago)

Worth noting that some countries adjust to the reduced demand for postal service by reducing the number of delivery days. Belgium did this: standard-class letters are now delivered only a couple of times per week. Priority class gets better treatment.

In the US, Trump is trying to fuck with USPS. He wants to privatize it. Funny thing is, it’s the (Trump supporting) rural areas that would be fucked over the most by privatization. Since he prioritizes his voters above ethical people, he’s struggling with cognitive dissonance.

 

Woah, this is sickening.

unplugged off-gridders fucked
If you live off-grid outside of Denmark, wtf.. what happens to your letter when it is sent to a Danish address in 2026? Will every national postal service worldwide have to negotiate a contract with FedEx? Extra sick: FedEx is a hard-right GOP-supporting ALEC org that ships slave dolphins, hunting trophies, and shark fins. UPS is also an ALEC member. So if you boycott both, then what? Maybe you get lucky and live in a country that does a deal with DHL (assuming they operate in DK).

e-mail is still broken
I sent a critically important time-sensitive e-mail to a Danish landlord. The recipient’s e-mail service accepted my email for delivery, then silently sent it to a spam folder. The asshole dip-shit landlord argued it was my fault they did not receive my email message in their inbox. WTF? How can a sender possibly control what the recipient’s mail server does with a message after the SMTP transaction is over? I was legally screwed because I was expected to be accountable for the action of a server I had no control over. Denmark is not ready for forced-email with this kind of ignorance in play.

I can’t even get a msg accepted by Google and Microsoft mail servers. And I’m happy about that because I boycott those companies anyway.. would not send email to gmail or outlook recipients even if it worked. Hence why I send lots of snail mail and almost no email.

In Denmark, I would be fucked. Glad I left. I think DHL would make my cost of living shoot up even higher amid the housing shit show there.

workaround
Send faxes, for the few recipients who still have fax numbers. The fax typically arrives in the recipient’s email inbox anyway, but there are some advantages:

  • delivery is more reliable than pure email because the email leg of the transmission always runs between the same two endpoints (the recipient’s fax gateway and their own inbox), so there is not much risk of dicey treatment.
  • the payload is raster dots, thus more effort for surveillance advertiser-operated email services to snoop (doubt they bother with OCR -- but if they do, fine.. let them work for it).
  • you can send msgs without being forced to reveal an email address to the recipient.

action
Would a few million people outside DK please send a snail mail to a DK address in the 1st week of 2026?

 

The linked thread shows a couple bash scripts for using Gimp to export to another file format. Both scripts are broken for me. Perhaps they worked 14 years ago but not today.

Anyone got something that works?
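
For reference, the shape of what I’m after -- a hedged sketch of GIMP’s Script-Fu batch mode driven from a shell script, untested here, with placeholder file names:

  # export input.xcf to output.png via GIMP batch mode (no GUI)
  gimp -i -b '(let* ((image (car (gimp-file-load RUN-NONINTERACTIVE "input.xcf" "input.xcf")))
                     (drawable (car (gimp-image-flatten image))))
                (gimp-file-save RUN-NONINTERACTIVE image drawable "output.png" "output.png")
                (gimp-image-delete image))' \
       -b '(gimp-quit 0)'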

 

I heard someone was forced to solve a Google reCAPTCHA in the course of applying for unemployment in Ohio.

I’m not sure of the circumstances but the user would not have been using Tor, so it is likely imposed on everyone. They said they were unsure if there was an analog alternative (during COVID).

 

From the article:

“In terms of cost, we estimate that – during over 13 years of its deployment – 819 million hours of human time has been spent on reCAPTCHA, which corresponds to at least $6.1 billion USD in wages. Traffic resulting from reCAPTCHA consumed 134 Petabytes of bandwidth, which translates into about 7.5 million kWhs of energy, corresponding to 7.5 million pounds of CO₂. In addition, Google has potentially profited $888 billion USD from cookies and $8.75-32.3 billion USD per each sale of their total labeled data set.”

This means when a CAPTCHA serves as a barrier between people and an essential public transaction, people are being forced into involuntary uncompensated servitude. I believe this is a human rights issue.

 

Since this community discusses CAPTCHA (see sidebar), I thought I should plug a community I just started. !captcha_required@lemmy.sdf.org is not about CAPTCHA in general, but it has the sole purpose of collecting situations where people are forced to solve a CAPTCHA in the public sector.

 

The Secretary of State (SoS) office in most (if not all) states maintains a database of registered companies. This basic dataset is needed to look up how a company is registered, their contact info, status, etc. Most queries have come to impose a CAPTCHA.

If you fax or mail a request for records, the SoS offices simply ignore it without even the courtesy of a response. So if you boycott Google, you’re fucked. The state makes you choose between access to “public” records and withholding your labor and data from Google. Can’t have it both ways.

Unless you make a FOIA request, in which case you have to pay the state for the info.

This thread could be used to document the states that push this shitty practice on people.

 

Elon’s DOGE regime stormed into NOAA and demanded direct access to their IT systems to snoop on the data. This is in the name of cutting fat.

climate

Climate scientists worldwide rely on weather data from NOAA. Obviously the party of climate denial is no friend to climate science. They want to stamp out that particular segment of science.

abolition of environmental regs

The GOP also hates environmental regs because they prioritize big business over the environment. From the linked article:

“The organization [NOAA] cited impacts of cuts could include overfishing, increased imports of illegal or unethically sourced seafood, threats to endangered wildlife, and threats to life and property without its weather forecasting and data resources.”

DEI

Team GOP is also looking to stamp out diversity, equity, and inclusion. This article covers that angle of DOGE’s likely assault on NOAA.

privatization

Of course Musk is also looking for his personal business advantage and any maneuver using government power to increase Tesla and Space-X revenue. Any opportunity to kill off public spending on public resources and create openings for his private corporate empire will not be overlooked.


I tagged it as “US/world” because even though the data comes from the US, and is threatened within the US, the whole world uses the data.

(edit) It was noticed on !science@mander.xyz (where I was about to cross-post):
https://mander.xyz/post/24567559
