submitted 3 months ago by z00s@lemmy.world to c/linux@lemmy.ml

Last night I was nearly tearing my hair out trying to get a crappy inkjet printer installed when I turned to ChatGPT for help; 5 minutes later it was done (driver issues).

I realised that I use it a lot for all sorts of things but haven’t really seen any posts on here about it, and was wondering if other people do the same.

I’ve used it to create bash scripts, write small, niche Python programs (e.g. sort photos into separate folders based on their orientation), and set up cron jobs, among other things.
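For example, a shell take on that photo-sorting idea looks roughly like this (a sketch rather than the exact script; it assumes ImageMagick's identify is installed and ignores EXIF rotation):

    #!/usr/bin/env bash
    # Sketch: split images into landscape/ and portrait/ subfolders based on
    # the pixel dimensions reported by ImageMagick's identify.
    set -euo pipefail
    src="${1:-.}"
    mkdir -p "$src/landscape" "$src/portrait"
    for f in "$src"/*.{jpg,jpeg,png}; do
        [ -e "$f" ] || continue                       # skip unmatched glob patterns
        read -r w h < <(identify -format '%w %h\n' "$f")
        if [ "$w" -ge "$h" ]; then
            mv -- "$f" "$src/landscape/"
        else
            mv -- "$f" "$src/portrait/"
        fi
    done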

In particular, I find it useful when I want to use a command line program that has a complicated set of options / syntax, but I might only ever want to use it once, or very infrequently. So I tell it what I want to accomplish, and ask it to give me the correct command to run with the options I specify in plain language.

Example:

User

Give me the correct rsync command to recursively copy all files in /home/user/Documents that are image files of any type but were created more recently than 1/1/2024. The destination is /home/user/foo

ChatGPT

rsync -avm --include='*/' --include='*.jpg' --include='*.jpeg' --include='*.png' --include='*.gif' --include='*.bmp' --include='*.tiff' --include='*.svg' --exclude='*' --prune-empty-dirs --min-age=$(date -d '2024-01-01' +%s) /home/user/Documents/ /home/user/foo/

User

Break down the command and explain each part of it in dot point format.

ChatGPT

- rsync: Invokes the rsync program, which is used for efficiently transferring and synchronizing files across systems or within a system.

  • -avm:
    • -a: Archive mode, which preserves permissions, symlinks, modification times, group, and ownership. Implicitly includes recursion.
    • -v: Verbose mode, which provides detailed output of what rsync is doing.
    • -m: Prune empty directory chains from the file-list.
  • --include='*/': Include directories themselves, which is necessary for recursion.
  • --include='*.jpg' --include='*.jpeg' --include='*.png' --include='*.gif' --include='*.bmp' --include='*.tiff' --include='*.svg': Include files that match these image file extensions.
  • --exclude='*': Exclude all other files that do not match the previously stated include patterns.
  • --prune-empty-dirs: Exclude empty directories that result from the inclusion/exclusion pattern.
  • --min-age=$(date -d '2024-01-01' +%s): Exclude files that are older than the timestamp for January 1, 2024. This uses date -d to convert a date string into a timestamp, which rsync uses to filter files.
  • /home/user/Documents/: Source directory from which files are being synchronized.
  • /home/user/foo/: Destination directory to which files are being synchronized.

It’s also really good for explaining concepts / commands in plain language.

It’s like having a 24-hour on-call Linux expert :)

# Things to note:

- Get the subscription. ChatGPT 3.5 is pretty useless; ChatGPT 4 is fine, but I'm pretty sure you need the subscription to access it.

- Give it pre-instructions. I have told mine which distro and shell I'm using, and the make and model of my laptop. If you have a subscription, you can add these as permanent pre-instructions, so to speak. That way it will be much more likely to give you correct answers.

- It's not magic. In order to get what you want, you have to be able to ask the right questions. It will boost your ability but won't turn you into a 1337 haxx0r.

- Ask questions. As always, don't run any commands that you don't understand. Ask it to break down any command it tells you to run if you're not sure what it does.

- Sometimes it goofs. For various reasons, it will occasionally tell you to install a package that no longer exists, or give you a command that doesn't work. When that happens, I just feed the error message back into ChatGPT and it's usually able to correct itself.

- Ask “Is there a better or easier way to do this?” This is probably the most valuable question I've found to ask ChatGPT. Sometimes it gets so far into the weeds looking for a solution that you need to pull back and start fresh.

[-] barbara@lemmy.ml 47 points 3 months ago* (last edited 3 months ago)

ChatGPT does not know truth. It does not know whether the info it provides is true, or whether the code actually works. It just concatenates strings based on probability. You may get lucky or you may not. The easier the task, the more likely it'll succeed, but low difficulty is no guarantee of success.

It is great for layouts, structure and basic concepts ("for loop in fish"), but it may struggle to convert a video from x264 to AV1 with ffmpeg. It depends on the info that's available online; if that includes misinformation, then that's in there as well.
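For comparison, a working conversion is usually a one-liner along these lines (a sketch using ffmpeg's libaom-av1 encoder; the quality setting and output container are arbitrary choices):

    # re-encode the video track to AV1 in constant-quality mode, copy the audio as-is
    ffmpeg -i input.mp4 -c:v libaom-av1 -crf 30 -b:v 0 -c:a copy output.mkv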

The command you got is just wrong. What about avif, jxl or most other image formats? Use it, but think.
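One more robust pattern (a hedged sketch, assuming GNU find) is to let find do the date filtering and hand rsync an explicit file list, since a --min-age flag isn't available in every rsync build; the extension list is still up to you, though:

    # select files under Documents modified after 2024-01-01, then copy them
    # to foo/ preserving their relative paths (the extension list is still arbitrary)
    cd /home/user/Documents
    find . -type f -newermt '2024-01-01' \
        \( -iname '*.jpg' -o -iname '*.jpeg' -o -iname '*.png' -o -iname '*.gif' \
           -o -iname '*.webp' -o -iname '*.avif' -o -iname '*.jxl' -o -iname '*.tiff' \) \
        -print0 |
        rsync -av --from0 --files-from=- . /home/user/foo/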

[-] lorkano@lemmy.world 1 points 3 months ago

Note that sometimes AI models check whether code works by executing it. For example, Gemini can write a Python function and execute it to write down the results.

[-] kolorafa@lemmy.world 33 points 3 months ago

don’t run any commands that you don’t understand. Ask it to break down any commands it tells you to run if you don’t understand them.

You need to pay extra attention to this, as ML models will spit out commands and parameters that don't exist if there weren't enough examples in the training dataset for that action. This goes especially for explanations: it can give a totally wrong but good-sounding explanation for a parameter, and it won't always include hedging words like "typically" that would signal it isn't confident and is just extrapolating from other, similar commands.

In your example it spit out:

 -m: Prune empty directory chains from the file-list.
 --prune-empty-dirs: Exclude empty directories that result from the inclusion/exclusion pattern.

which is actually the same parameter given two different explanations; you can confirm this with man rsync:

 --prune-empty-dirs, -m   prune empty directory chains from file-list

So the more of an edge case you have, the bigger the chance it will spit out bad results, but these new models are shockingly good, especially for very common use cases.
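A quick way to sanity-check a flag before trusting the explanation (just a sketch; the -- stops grep from reading the pattern as an option):

    # the option should show up verbatim in the manual or the built-in help
    man rsync | grep -- '--prune-empty-dirs'
    rsync --help | grep -- '--prune-empty-dirs'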

[-] lemmyreader@lemmy.ml 26 points 3 months ago

Interesting post, but made me also think of this one : https://xkcd.com/1168/

[-] nyan@sh.itjust.works 6 points 3 months ago

What I've always wondered about that one is: why bother forbidding Google but not 'man tar'? 🤨

[-] lemmyreader@lemmy.ml 9 points 3 months ago

😀 After seeing the comic for the first time, I thought that the "UNIX" (TM) person could simply have gone for tar --help or tar --version as a valid command to show off their "UNIX" skills and save everyone.

[-] noli@programming.dev 7 points 3 months ago

Also surely a lot of people would know tar -Create Ze Vucking File and/or tar -Xtract Ze Vucking File
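For anyone who hasn't internalised the mnemonic, the two commands it encodes are simply (GNU tar, gzip compression assumed):

    tar -czvf archive.tar.gz some_directory/   # Create Ze Vucking File
    tar -xzvf archive.tar.gz                   # Xtract Ze Vucking File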

[-] TheGrandNagus@lemmy.world 3 points 3 months ago

I interpret "use a valid tar command on your first try" as not allowing to run other commands before the tar command.

[-] nyan@sh.itjust.works 1 points 2 months ago

Surely the bomb isn't the only computer in the immediate area.

[-] HopFlop@discuss.tchncs.de 1 points 2 months ago

It says you only have ten seconds. I doubt you could log onto another (Unix) computer in that time, open the terminal, read the man page, and then run over and enter a valid command...

[-] ricdeh@lemmy.world 22 points 3 months ago

I'm not opposed at all to using LLMs for such purposes; however, please consider a solution that aligns with the values of GNU/Linux and the Free Software Movement. If you have sufficient RAM and a somewhat modern CPU, you can do inference on your very own machine, locally, with no connection to any external servers, and at very respectable speed.
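To give a sense of how low the barrier is: with a tool like Ollama installed, a local model is one command away (a sketch; llama.cpp and others work along similar lines, and the model name here is just an example):

    # pulls the model on first run, then chats entirely on the local machine
    ollama run mistral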

[-] PanoramicAddict@lemmy.ml 5 points 3 months ago

Serious question: Can running locally be as good as ChatGPT-4?

[-] wuphysics87@lemmy.ml 11 points 3 months ago

It's worth doing anyway to get a sense of how computationally intensive it is. Then consider how many people ask for the daily fart joke and you get a sense of the environmental impact.

[-] zolax@programming.dev 4 points 3 months ago* (last edited 3 months ago)

in terms of the quality of writing, you can get ~20GB models at a similar level to GPT-4 (good for creative writing, but much worse if knowledge of something is required)

the model I use (~20GB) would know what rclone is but would most likely not know how to use it

EDIT: now that I think about it, that was based off of some benchmark. Personally I wouldn't say it performs at GPT-4 level, maybe more like GPT-3.5.

[-] oldfart@lemm.ee 5 points 3 months ago

Which model is that? I tried several that were complete trash, then Mixtral appeared and started giving answers that are very basic but mostly factually correct. But none of these are even close to ChatGPT, which I can rely on for writing scripts.

Don't get me wrong, I'd rather not give them my data and money if there was an alternative. But for tech stuff we're not there yet.

[-] zolax@programming.dev 1 points 3 months ago

yeah, my bad. edited the comment with more accurate info

and this does apply to creative writing, not knowledge-heavy stuff like coding

[-] DaveX64@lemmy.ca 21 points 3 months ago* (last edited 3 months ago)

User: "ChatGPT, write me a script to clean up my hard disk on Linux"

ChatGPT: sudo rm -rf / 😁

[-] z00s@lemmy.world 6 points 3 months ago

Squeaky clean 😅

[-] fluckx@lemmy.world 20 points 3 months ago

I'm all for it as long as you keep using your brain. A coworker of mine set something up on AWS that wasn't working. Going through it, I found the error. He said he had tried it using ChatGPT. He knows how to do it himself, and he knows the actual mistake was a mistake, but he trusted Amazon Q when it said the mistake was correct, even when double-checking.

Trust, but verify.

I found it to be a helpful tool in your toolkit. Just like being able to write effective search queries is. Copying scripts off the internet and running them blindly is a bad idea. The same thing holds up for LLMs.

It may seem like it knows what it's talking about, but it can often talk out of its arse too...

I've personally had good results with 3.5 on the free tier, unless you're really looking for the latest data.

[-] russjr08@bitforged.space 20 points 3 months ago

For myself, I'm fine with using ChatGPT and other LLMs (I've been experimenting with running them locally, to gain a bit more insight into them) to "fill in the gaps", or as a sort of interactive Wikipedia, but I try to avoid asking LLMs about something I have zero knowledge of, because that makes it a bit more difficult to verify the results they produce.

[-] dataprolet@lemmy.dbzer0.com 8 points 3 months ago

Is this an ad?

You could also use free LLMs, check out FMHY.

[-] tsonfeir@lemm.ee 8 points 3 months ago

I love ChatGPT. It’s an invaluable tool. It has helped me solve my problems by pointing me in the right direction significantly faster than any search engine.

[-] miss_brainfarts@lemmy.blahaj.zone 5 points 3 months ago

It's a pretty helpful tool, but I still prefer running an LLM locally, even though it then takes a while to answer.

[-] gravitywell@sh.itjust.works 4 points 3 months ago

If you haven't already tried it I would also highly recommend phind.com for troubleshooting or coding questions.

Also, for quick access to GPT from your terminal, grab "tgpt" and you can ask questions directly from the command line.
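Usage is roughly a one-liner (a sketch; defaults depend on which backend the tool is configured to use):

    # ask a one-off question straight from the shell
    tgpt "give me an rsync command that copies only files newer than 2024-01-01"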

[-] filister@lemmy.world 4 points 3 months ago

You can use Copilot or Mistral Chat for pretty much the same. Copilot offers GPT-4 (or 4.5) for free, Mistral Chat is using their own models which sometimes produce better results.

[-] lorkano@lemmy.world 1 points 3 months ago* (last edited 3 months ago)

To be honest, Microsoft's restrictions have made Copilot extremely ineffective. I asked it to help me disable SSL verification in one of Java's HTTP clients for testing purposes during development. It said it's something I should never do and would not give me an answer. ChatGPT's restrictions are way more rational than that. Microsoft gutted the tool a lot.

[-] aleph@lemm.ee 2 points 3 months ago

Yup, same here. I've also used it to write shell scripts and figure out unique ways to configure my system that I would have been hard pushed to achieve via a web search. It's incredibly useful.

[-] ouch@lemmy.world 2 points 2 months ago

Someone excitedly demonstrated to me how easy it is to code with Copilot. They generated a bunch of code easily, and then proceeded to debug subtle bugs for longer than it would have taken to write the code themselves in the first place.

And in the end they were still left with badly structured, barely maintainable code.

LLMs will do exactly what Stack Overflow has done, but more efficiently: allow proliferation of bad or outdated solutions to problems, and the application of those solutions with no real understanding.

More garbage code and more work for the few people who continue to actually read manuals and understand what they are doing.

[-] z00s@lemmy.world 1 points 2 months ago* (last edited 2 months ago)

Perhaps, but I'm not really suggesting its use for professional programming in this post.

What it is good for is helping with simple stuff like terminal commands, learning Python, etc.: stuff with a low risk profile that you're not relying on for anything too important.

[-] human6439@programming.dev 2 points 3 months ago* (last edited 3 months ago)

It's funny you should mention rsync; I set up a shell script the other day that backs up my stuff using Borg and rsync, and I was basically chatting with an AI all day to learn about Borg and get everything set up correctly. I was reading man pages to get the details of the arguments, though; I didn't think to ask the AI to explain them to me. That's a great idea.
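For anyone curious, the core of a Borg setup really is only a few commands (a rough sketch; the repository path and retention policy are placeholders, and passphrase handling is left out):

    # one-time: create an encrypted repository
    borg init --encryption=repokey /mnt/backup/borg-repo

    # per run: archive the documents folder, then thin out old archives
    borg create --stats /mnt/backup/borg-repo::'{hostname}-{now}' ~/Documents
    borg prune --keep-daily=7 --keep-weekly=4 /mnt/backup/borg-repo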

In any case, the experience was something of an eye opener. It was fun and easy.

[-] lobster_teapot@lemmy.blahaj.zone 2 points 3 months ago* (last edited 3 months ago)

I'll confess that I only tried GPT-3.5 (and the Mistral one, but it was actually consistently worse), given that there's no way in the world I'm actually giving OpenAI any money.

Having said that, I don't think it fundamentally changes the way it works. Basically I think it's fine as some sort of interactive man/Stack Overflow parser. It can reduce the friction of having to read the man pages yourself, but I do think it could do a lot better for new-user onboarding, which you seem to suggest in the comments is one of its useful aspects.

Basically it should drop the whole "intelligent expert" thing and just tell you straight away where it got the info from (and actually link the bloody man pages; at the end of the day, the goal is still for you to be able to maintain your own effing system). It should also learn to tell you when it actually doesn't know, instead of inventing some plausible answer out of nowhere (but I guess that's a consequence of how these models work, being optimized for plausibility rather than correctness).

As for the quality of the answers, it's usually good enough to save you from googling how to do simple one-liners. For scripts, it actually shat the bed every single time I tried it. In some instances it gave me three ways to do slightly different things all in the same loop; in others, straight-up conflicting code blocks. Maybe that part is better in GPT-4, I don't know.

It also gives you outdated answers without specifying the versions of the packages it targets, which can be really problematic.

Basically where I'm going with this is that if you're coding, or maintaining any server at all, you really should learn how to track the state of your infra (including package versions) and read man pages anyway. If you're just a user, nowadays you don't really have to get your hands in the terminal.

At the end of the day, it can be useful as some sort of interactive meta search engine that you have to double check.

I'm really not getting into the whole "automated garbage that's filling up the internet, including bug reports and pull requests" debate. I do think that, all things considered, these models are a net negative for the web.

[-] flashgnash@lemm.ee 2 points 3 months ago

I've found it's best for things you kinda already know the answer to or at least know what it should look like, it fills in the blanks

Also, for GPT-4, you can get it without the subscription if you go through the API and use something like gpt-cli (you're still paying for it, but unless you're talking to it for hours on end it'll end up cheaper that way).

[-] solberg@lemmy.blahaj.zone 1 points 3 months ago

I've found the Data Analysis feature nice for converting files and cropping images.

[-] Secret300@sh.itjust.works 1 points 3 months ago

I constantly use it for running Linux commands that have complicated options and arguments or that I just don't remember.
