submitted 6 months ago by Samsy@lemmy.ml to c/linuxmemes@lemmy.world

Next evolution: just a one-line bash script.

[-] linearchaos@lemmy.world 127 points 6 months ago

Me: install it, doesn't work, read the docs, screw with all the missing things, doesn't work, read the forums, install something else I missed, doesn't work, find more forums, find the right answer, patch it up, get it working, figure out that the application is slow, missing critical features, and really just doesn't do what I needed it to do.

[-] TeaEarlGrayHot@lemmy.ca 72 points 6 months ago

All in all, a weekend well spent

[-] deweydecibel@lemmy.world 14 points 6 months ago

Or it does work, and then I never actually end up using it again.

And then months later I'll have to do something similar and I've forgotten I even installed something that can do that, so I install another related thing.

[-] punkwalrus@lemmy.world 6 points 6 months ago

> really just doesn't do what I needed it to do.

This has been my experience, or it sort of does what I want, but I have to rethink what I need it to do instead of it just doing something really simple. Like a "new type of shared file system" that replaces NFS/Windows sharing. Instead of files in a standard file system one can manage with a file browser, it has "indexed" your files in such a way that the actual files are renamed into data chunks, and one "finds" files via its non-intuitive search engine, which can't do even basic search-engine tricks like "AND/OR" searches or wildcards, and whose results are hit and miss. "But it's faster and more elegant!" So how do you restore from backup when the system fails? "When the system does whatnow?"

Yeah, no thanks. I can recover files from a file system much easier than some proprietary encoded bullshit fronted with a bad search engine over a proprietary and buggy index.

[-] linearchaos@lemmy.world 5 points 6 months ago

I asked the other week if anyone made a system that left files alone and just indexed them and gave you a place to store metadata without moving them. Options do seem to exist, but they need LOTS of extra work.

[-] Dasnap@lemmy.world 44 points 6 months ago

When the project installation steps start with a 'git clone'.
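For the uninitiated, that usually unfolds something like this (hypothetical repo; the actual build step varies wildly per project):

```bash
# Hypothetical project -- substitute the real repo and build steps.
git clone https://github.com/example/project.git
cd project
./configure && make    # or: pip install -e ., npm install, cargo build...
sudo make install      # and hope the README listed all the dependencies
```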

[-] TheInsane42@lemmy.world 4 points 6 months ago

Nah, too much work, use curl to download a script and blindly run it...
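You know the pattern (example.com standing in for the project of the week):

```bash
# The classic blind pipe-to-shell:
curl -fsSL https://example.com/install.sh | sh

# The slightly-less-reckless version: download it, actually read it, then run it.
curl -fsSLO https://example.com/install.sh
less install.sh
sh install.sh
```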

[-] Tja@programming.dev 38 points 6 months ago

I was blown away how a relatively unknown project like immich provides one Docker compose file to bring up a whole self-hosted ~~Google photos~~ photo management suite, complete with tagging, mapping, transcoding and semantic search. Local, offline semantic search.
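For anyone who hasn't seen it, the whole pitch is a single file along these lines (a generic sketch of the pattern, not immich's actual compose file -- image names, ports, and paths here are all made up):

```yaml
# Illustrative only -- a generic self-hosted app + database pairing.
services:
  app:
    image: example/photo-app:1.4.2    # hypothetical image, pinned tag
    ports:
      - "8080:8080"
    volumes:
      - ./library:/data/library       # your photos stay on your own disk
    environment:
      - DB_HOSTNAME=db
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      - POSTGRES_PASSWORD=changeme
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

One `docker compose up -d` and the whole stack is up.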

[-] dutchkimble@lemy.lol 6 points 6 months ago

I got into the idea of self-hosting my photos and wanted facial recognition to search them. I also wanted a self-hosted chat server. Nextcloud came up as the obvious choice that did both. After days of tinkering and fixing errors in the log one by one, everything worked except the facial recognition, which said everything was fine but the faces just didn't get recognised. The mobile experience for Memories wasn't the best either. Then luckily I came across immich, and it was up and running in about 5-6 mins of configuration max, and it has better facial recognition than the main big commercial option. Insane. For chat I got a Synapse server with coturn, which also took about 15 mins of docker composing and setting configurations/accounts to my liking.

(I still think Nextcloud is cool, but it's overkill and loaded with too many features I don't need. Plus installation is a task, and documentation/online support communities are scattered across the various installation methods.)

[-] jukibom@lemmy.world 38 points 6 months ago

I'm the opposite because I've had nothing but bad luck with docker. I should really spend more time with it but ugh

[-] CosmicTurtle@lemmy.world 14 points 6 months ago

It's definitely worth learning. I had the damnedest time with docker until I went to a meetup and had someone ELI5 it to me. And it wasn't that I wasn't technical. I just couldn't wrap my head around so many layers of extraction.

The guy was very patient with me and helped me get started with docker compose and the rest is history.

[-] Archer@lemmy.world 6 points 6 months ago

Abstraction?

[-] pHr34kY@lemmy.world 4 points 6 months ago

I'm like that. It feels like a total waste of resources, and introduces unneeded complexity for backup, updates, file access, networking and general maintenance.

I would take a deb repo over docker any day of the week.

[-] rushaction@programming.dev 35 points 6 months ago

For me it's more like: new interesting self-hosted project, and then I find out it's only distributed as a Docker container without any proper packaging. As someone who runs FreeBSD, this is a frustration I've run into with quite a number of projects.

[-] zaphod@lemmy.ca 18 points 6 months ago* (last edited 6 months ago)

Eh, even as a Linux admin, I prefer hand installs I understand over mysterious docker black boxes that ship god knows what.

Sure, if I'm trialing something to see if it's worth my time, I'll spin up a container. But once it's time to actually deploy it, I do it by hand.

[-] caseyweederman@lemmy.ca 7 points 6 months ago

Same. Frustrating when you have to go digging for non-Docker instructions.

[-] AllHailTheSheep@sh.itjust.works 6 points 6 months ago

yes very much agreed on this. docker is awesome but imo the reliance on it will absolutely cause issues down the line

[-] JasonDJ@lemmy.zip 4 points 6 months ago* (last edited 6 months ago)

Sorry but IMO that’s FUD.

The reliance on it legitimately prevents the issues that it’s likely to cause. It’s made to be both idempotent and ephemeral.

Take a Python project as an example. You make a venv and do all your work in there. You then generate a requirements file with all the versions pinned. You build the container on a pinned version of Alpine, or Ubuntu, or w/e. Wherever possible, you pin versions.

With best practices applied, the result is that the image will be functionally the same regardless of which system builds it, though normally it gets built once and stored on a registry like Docker Hub.

The only chance a user has to screw things up is in setting environment variables. That’s no different than ever before. At least now, they don’t have to worry about system or language-level dependencies introducing a breaking change.
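A minimal sketch of that workflow (hypothetical app and entrypoint, nothing project-specific):

```dockerfile
# Pin the base image so the build environment never drifts.
FROM python:3.12-slim

WORKDIR /app

# requirements.txt comes from `pip freeze` inside the venv,
# so every dependency version is pinned too.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# main.py is a stand-in for whatever the project's entrypoint is.
CMD ["python", "main.py"]
```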

[-] Harvey656@lemmy.world 33 points 6 months ago

This, oh my God. Just the other day I tried to install a project off git; it had a nice little .bat file to install all the requirements, except half of them just didn't exist or were so niche I couldn't find anything on them after searching. Would love more dockers please.

[-] Railcar8095@lemm.ee 51 points 6 months ago* (last edited 6 months ago)

.bat?

*starts loading shotgun*

Surely you mean .sh, right?

[-] Samsy@lemmy.ml 22 points 6 months ago

Ah well, maybe he used too many aliases? *starts sweating in penguin costume*

[-] Harvey656@lemmy.world 7 points 6 months ago

Naw, they only had Windows projects. I run all my stuff through VMware. Gotta have Windows for stupid Easy Anti-Cheat. Trust me, I only use it when I have to. Please put the gun down, Mr. Railcar!

[-] Evilschnuff@feddit.de 28 points 6 months ago

I believe this to be true for nearly all products. It has to be super simple to test, because you need to assess whether it fits your needs. The mental model usually isn't strong enough for an a priori assessment.

[-] Crow@lemmy.world 27 points 6 months ago

It's because I've seen what people can do with a simple Docker container that I completely agree. It's too nice to go back.

[-] fireflash38@lemmy.world 7 points 6 months ago

I'd agree more if most docker stuff didn't depend on running as root.

[-] possiblylinux127@lemmy.zip 17 points 6 months ago

I think you're looking for Podman.

[-] platypus_plumba@lemmy.world 7 points 6 months ago* (last edited 6 months ago)

Yeah, but the big projects like linuxserver.io love creating Docker images with root access, even though people have warned them it's an awful security practice. I rewrote all of their images in a personal repo, screw that. I won't run shit as root on my machine, even in containers.

[-] Tja@programming.dev 10 points 6 months ago

Ah to be young and have that kind of energy... Enjoy it!

[-] brophy@lemmy.world 7 points 6 months ago

There's rootless Docker, or Podman, or numerous other container runtimes. The beauty of containers is separating concerns. How you choose to run it, root or rootless, is up to you in all but the nichest of scenarios.
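At the runtime level it can be as simple as (hypothetical image name):

```bash
# Drop privileges at run time with a plain UID/GID:
docker run --user 1000:1000 example/photo-app:1.4.2

# Or use podman, which runs rootless by default when invoked as a normal user:
podman run example/photo-app:1.4.2
```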

[-] brenno@lemmy.brennoflavio.com.br 14 points 6 months ago

As someone who uses FreeBSD as their main server, it's kinda the other way around haha

[-] Sibbo@sopuli.xyz 4 points 6 months ago

As a NixOS user, I pick whatever is supported well as a NixOS package.

[-] possiblylinux127@lemmy.zip 4 points 6 months ago

Building a Docker container isn't normally too hard. I usually will create a PR with a Dockerfile and docker-compose file.
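My usual smoke test before opening the PR (hypothetical image tag):

```bash
# Build locally and make sure the stack actually comes up:
docker build -t myproject:dev .
docker compose up -d
docker compose logs -f
```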
