
Playing around with the FOSS game Cataclysm DDA, I felt compelled to parse and connect the CPP and JSON to see relationships and complexity. It's the first time I've really felt motivated to do so. I'm just trying to wrap my head around how some features are implemented like z-levels, mining tools and various actions; simple stuff really. I find it challenging to parse something quite this large, so I started scripting a way to track down objects across the code base to see what is defined in JSON and what is hard coded. Normal? Obvious? FOSS alternatives to do this? I'm basically chaining a bunch of grep commands to print pretty trees with bat.
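That grep-chaining approach can be sketched roughly like this. Everything here is a hypothetical stand-in, not the OP's actual script: the `data/json`/`src` paths match the general layout of the Cataclysm DDA repo, but the `pickaxe` id and the `trace_id` name are made up for illustration.

```shell
# Hypothetical sketch: given a game id, list where it appears in JSON data
# versus hard-coded C++. Paths and the example id are assumptions.
trace_id() {
  # JSON side: data files that mention the id
  grep -rl "\"$1\"" data/json --include='*.json' 2>/dev/null
  # C++ side: source files that hard-code the same string
  grep -rl "\"$1\"" src --include='*.cpp' --include='*.h' 2>/dev/null
}

# Pretty-print each hit with bat, falling back to cat if bat is missing
trace_id pickaxe | while read -r f; do
  command -v bat >/dev/null 2>&1 && bat --style=header "$f" || cat "$f"
done
```

Comparing the two lists is the quick way to see which behavior is data-driven JSON and which is hard coded.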

top 13 comments
[-] MoogleMaestro@lemmy.zip 13 points 3 weeks ago* (last edited 3 weeks ago)

Even better: do a git history of certain files to get a broad sense of history and understand its evolution.

I highly advise this practice for familiarizing yourself with parts of a codebase you may otherwise not know anything about. Interesting commits you should git show.

Though combining this with scripting would also be interesting. 🤔
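A rough sketch of that history-first pass, scripted (the file path is a made-up example, and `file_history` is a name invented here, not anything from the commenter's workflow):

```shell
# Rough sketch: summarize a file's history, then inspect its newest commit.
# "src/mining.cpp" is a hypothetical path; substitute any file of interest.
file_history() {
  git log --follow --oneline -- "$1"                # one line per commit touching the file
  local last
  last=$(git log --follow -n1 --format=%H -- "$1")  # hash of the newest such commit
  [ -n "$last" ] && git show --stat "$last"         # message + changed-files summary
}

file_history src/mining.cpp 2>/dev/null || true
```

`--follow` keeps the trail alive across renames, which matters in a codebase this old.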

[-] 31337@sh.itjust.works 11 points 3 weeks ago

I usually just use VS Code to do full-text searches, and write down notes in a note taking app. That, and browse the documentation.

[-] 0x0@programming.dev 9 points 3 weeks ago

The code is my bible, the grep is my friend.

That and breakpoints.

[-] grrgyle@slrpnk.net 1 point 3 weeks ago

this but ack

[-] LainTrain@lemmy.dbzer0.com 9 points 3 weeks ago

This is a really neat idea. I'm frequently put off by large highly distributed (among files and dependencies) codebases with no obvious entry point. I wanted to make some changes to GNU's mailutils and the code felt genuinely incomprehensible (BSD's implementation of mail was a bit easier).

Perhaps another approach is to parse ptrace output.

[-] degen@midwest.social 8 points 3 weeks ago

To grep is to grok.

I have a grepconf alias for a find-grep loop on my nixos config that comes in handy. Treesitter can be a godsend too.
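One plausible shape for that kind of helper, written as a function rather than an alias so it can take an argument. The default path and the `NIXOS_CONFIG_DIR` override are guesses at the commenter's setup, not their actual config:

```shell
# Hypothetical reconstruction of a "grepconf" helper: find every .nix file
# under the config tree and grep it for a pattern. The default path is a guess.
grepconf() {
  find "${NIXOS_CONFIG_DIR:-/etc/nixos}" -name '*.nix' -print0 |
    xargs -0 -r grep -nH --color=auto -- "$1"
}
```

The `-print0`/`-0` pairing keeps it safe for paths with spaces, and `-r` stops grep from reading stdin when find matches nothing.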

[-] FizzyOrange@programming.dev 2 points 3 weeks ago

No but I think this is probably a great use case for AI. Haven't tried it though.

[-] 31337@sh.itjust.works 7 points 3 weeks ago

Nah, LLMs have severe context window limitations. It starts to get wackier after ~1000 LOC.

[-] j4k3@lemmy.world 4 points 3 weeks ago* (last edited 3 weeks ago)

Yeah this has been my experience too. LLMs don't handle project specific code styles too well either. Or when there are several ways of doing things.

Actually, earlier today I was asking a mixtral 8x7b about some bash ideas. I kept getting suggestions to use find and sed commands which I find unreadable and inflexible for my evolving scripts. They are fine for some specific task need, but I'll move to Python before I want to fuss with either.

Anyways, I changed the starting prompt to something like 'Common sense questions and answers with Richard Stallman's AI assistant.' The results were remarkable and interesting on many levels. From the way the answers always terminated without continuing with another question/answer, to a short footnote about the static nature of LLM learning and capabilities, along with much better quality responses in general, the LLM knew how to respond on a much higher level than normal in this specific context. I think it is the combination of Stallman's AI background and bash scripting that are powerful momentum builders here. I tried it on a whim, but it paid dividends and is a keeper of a prompting strategy.

Overall, the way my scripts are collecting relationships in the source code would probably result in a productive chunking strategy for a RAG agent. I don't think an AI would be good at what I'm doing at this stage, but it could use that info. It might even be possible to integrate the scripts as a pseudo database in the LLM model loader code for further prompting.

[-] FizzyOrange@programming.dev 3 points 3 weeks ago

Gemini has a 1 million token limit. Also instead of just giving it the entire source you can give it a list of files and the ability to query them (e.g. to read an entire file, or search for usages/definitions of terms etc.).

[-] astrsk@fedia.io 4 points 3 weeks ago

In my experience, token limits mean nothing on larger context windows. 1 million tokens can easily be taken up by a very small amount of complex files. It also doesn’t do great traversing a tree to selectively find context which seems to be the most limiting factor I’ve run against trying to incorporate LLMs into complex and unknown (to me) projects. By the time I’ve sufficiently hunted down and provided the context, I’ve read enough of the codebase to answer most questions I was going to ask.

[-] FizzyOrange@programming.dev 1 point 3 weeks ago

Right but presumably you can let the AI do that hunting.

[-] 31337@sh.itjust.works 4 points 3 weeks ago

Haven't tried Gemini; may work. But, in my experience with other LLMs, even if text doesn't exceed the token limit, LLMs start making more mistakes and sometimes behave strangely more often as the size of context grows.

this post was submitted on 09 Sep 2024
37 points (97.4% liked)

Programming
