I use FileZilla, but AFAIK it’s just a client, not a server.
Indeed, I noticed openssh-sftp-server was automatically installed with Debian 12. Guess I’ll look into that first. Might be interesting if people could choose between FTP or mounting with SSHFS.
(edit) found this guide
Thanks for mentioning it. It encouraged me to look closer at it and I believe it’s well suited for my needs.
Well, it’s still the same problem. I mean, it’s likely piracy to copy the public library’s disc to begin with, even if just for a moment. From there, if I want to share it with others I still need to be able to exit the library with the data before they close. So it’d still be a matter of transcoding as a distinctly separate step.
Not sure how that makes sense. Why would a captive portal block the 1st 39 attempts but not the 40th, for example?
My workaround is to establish a VPN (which happens quickly w/out issue), then run Tor over that, which also works instantly over the VPN.
What’s the point of spending a day compressing something that I only need to watch once?
If I pop into the public library and start a ripping process using Handbrake, the library will close for the day before the job is complete for a single title. I could check out the media instead, but there are trade-offs:
- no one else can access the disc while you have it out
- some libraries charge a fee for media check-outs
- privacy (I avoid Netflix & the like to prevent making a record in a DB of everything I do; checking out a movie still lands in a DB)
- libraries tend to have limits on the number of media discs you can have out at a given moment
- checking out a dozen DVDs will take a dozen days to transcode, which becomes a race condition with the due date
- probably a notable cost in electricity, at least on my old hardware
Wow, thanks for the research and effort! I will be taking your approach for sure.
I’ll have a brief look but I doubt ffmpeg would know about DVD CSS encryption.
The design approach would also help serve people in impoverished areas, where you might imagine they have no internet access at home but can still participate through some community access point.
Indeed, I really meant tools that have some cloud interaction but give us asynchronous autonomy from the cloud.
Of course there are also scenarios that normally use the cloud but can be made fully offline. E.g. Argos Translate. If you use a web-based translator like Google Translate or Yandex Translate, you are not only exposed to the dependency of having a WAN when you need to translate, but you give up privacy. Argos Translate empowers you to translate text without cloud dependency while also getting a sensible level of privacy. Or in the case of Google Maps vs. OsmAnd, you have the privacy of not sharing your location and also the robustness of not being dependent on a functioning uplink.
Both scenarios (fully offline apps and periodic syncing of msgs) are about power and control. If all your content is sitting on someone else’s server, you are disempowered because they can boot you at any moment, alter your content, or they can pull the plug on their server spontaneously without warning (this has happened to me many times). They can limit your searching capability too. A natural artifact of offline consumption is that you have your own copy of the data.
if it aint broke dont fix it
It’s broke from where I’m sitting. Many times, Mastodon and Lemmy servers have gone offline out of the blue and all my msgs were mostly gone, apart from what got cached on other hosts, which is tedious and non-trivial to track down. Technically that’s broken security in the form of data loss / loss of availability.
I have nothing for these use cases, off the top of my head:
- Lemmy
- kbin
- Mastodon (well, I have Mastodon Archive by kensanata, but it’s only useful for backups and searching, not posting)
- airline, train, and bus routes and fares -- this is not just a missing-app problem, since the websites are often bot-hostile. But the idea is that it fucking sucks to have to do the manual labor of using their shitty web GUI to search for schedules one parameter set at a time. E.g. I want to go from city A to B, possibly via city C, anytime in the next 6 or 8 weeks, and I want the cheapest fare. That likely requires me to run 100+ separate searches, when it should just be open data: we fetch a CSV or XML file, study the data offline, and do our own queries. For flights, ITA Matrix was a great thing (though purely online).. until Google bought it and ruined it.
- Youtube videos -- yt-dl and Invidious are a shitshow (Google’s fault). YT is designed so you have to be online because of Google’s protectionism. I used to be able to pop into a library and grab ~100 YT videos over Invidious in the time that I could only view a few, and have days of content to absorb offline (and while the library is closed). Google sabotaged that option. But they got away with it because of a lousy culture of novice users willing to be enslaved to someone else’s shitty UIs. There should have been widespread outrage when Google pulled that shit.. a backlash that would twist their arm into being less protectionist. But it’s easy to oppress a minority of people.
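On the transit point above, the open data partly exists already: many agencies publish GTFS feeds, which are just zipped CSV files, so a do-your-own-query tool is plausible. A minimal sketch of the idea (the fare table here is a hypothetical, simplified merge of what a real feed splits across fare_attributes.txt and fare_rules.txt):

```python
import csv
import io

# Hypothetical excerpt of a GTFS-style fare table, embedded for illustration.
FARES_CSV = """origin,destination,route,price
CityA,CityB,direct,42.00
CityA,CityC,leg1,15.00
CityC,CityB,leg2,20.00
"""

def load_fares(text):
    """Parse the CSV into a list of dicts with float prices."""
    rows = list(csv.DictReader(io.StringIO(text)))
    for r in rows:
        r["price"] = float(r["price"])
    return rows

def cheapest(rows, origin, dest):
    """Find the cheapest direct or one-stop trip from origin to dest."""
    options = []
    for r in rows:
        if r["origin"] == origin and r["destination"] == dest:
            options.append((r["price"], [r["route"]]))
        # try one intermediate stop: r covers the first leg, s the second
        for s in rows:
            if (r["origin"] == origin and s["destination"] == dest
                    and r["destination"] == s["origin"]):
                options.append((r["price"] + s["price"],
                                [r["route"], s["route"]]))
    return min(options) if options else None

rows = load_fares(FARES_CSV)
price, legs = cheapest(rows, "CityA", "CityB")
print(price, legs)  # the via-CityC itinerary wins: 35.0 ['leg1', 'leg2']
```

With real data you’d sweep the whole 6-to-8-week window in one loop instead of 100+ manual web searches, all offline once the feed is fetched.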
You just wrote your response using an app that’s dysfunctional offline. You had to be online.
Perhaps before your time, Usenet was the way to do forums. Gnus (an Emacs mode) was good for this. Gnus would fetch everything to my specification and store a local copy, serving as an offline newsreader. I could search my local archive of messages, and the search was not constrained to a specific tool (e.g. grep would work, but Gnus was better). I could configure it to grab all headers for new msgs in a particular newsgroup, or full payloads. Then, when disconnected, it was possible to read posts. I never tested replies because I had other complexities in play (Mixmaster), but it was likely possible to compose a reply and sync/upload it later when online. The UX was similar to how mailing lists work.
None of that is possible with Lemmy. It’s theoretically possible given the API, but the tools don’t exist for that.
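To back up the "theoretically possible given the API" claim: here is a rough sketch of what the first piece of such a tool could look like, assuming the public Lemmy v3 HTTP API. The instance URL and community name are placeholders; only the fetch needs the network, and reading/searching then works entirely offline:

```python
import json
import urllib.request
from pathlib import Path

def fetch_posts(instance, community, limit=50):
    """Online step: pull recent posts from a Lemmy instance's v3 API."""
    url = (f"{instance}/api/v3/post/list"
           f"?community_name={community}&limit={limit}")
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["posts"]

def archive(posts, path):
    """Store posts locally so reading no longer depends on the host."""
    Path(path).write_text(json.dumps(posts, indent=2))

def search(path, term):
    """Offline search over the local archive -- works even if the server dies."""
    posts = json.loads(Path(path).read_text())
    term = term.lower()
    return [p for p in posts
            if term in (p.get("post", {}).get("name", "") + " "
                        + (p.get("post", {}).get("body") or "")).lower()]

# Usage (instance and community are hypothetical):
# archive(fetch_posts("https://lemmy.example", "selfhosted"), "archive.json")
# hits = search("archive.json", "statute")
```

A real client would add paging, comments, and queued offline replies (the Gnus model), but even this much would cover the "find that statute I cited" problem.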
Offline workflows were designed to accommodate WAN access interruptions, but an unforeseen benefit was control. Having your own copy naturally gives you a bit of control and censorship resilience.
(update) It makes no sense that I have to be online to read something I previously wrote. I sometimes post a useful bit of information, but there are only so many notes I can keep organised. Then I later need to recall it (e.g. what was that legal statute I cited for situation X?). If I wrote it in a Lemmy post, I have to be online to find it again. The search tool might be too limited to search the way I need to.. and that assumes the host I wrote it on is even still online.
oh, sorry. Indeed. I answered from the notifications page w/out context. Glad to know FileZilla will work for that!