Is there some formal way(s) of quantifying potential flaws, or risk, and ensuring there's sufficient spread of tests to cover them? Perhaps using some kind of complexity measure? Or a risk assessment of some kind?

Experience tells me I need to be extra careful around certain things - user input, code generation, anything with a publicly exposed surface, third-party libraries/services, financial data, personal information (especially of minors), batch data manipulation/migration, and so on.

But is there any accepted means of formally measuring a system and ensuring that some level of test quality exists?

[-] xthexder@l.sw0.com 4 points 1 year ago

I'd never heard of mutation testing before either, and it seems really interesting. It reminds me of fuzzing, except applied to the code instead of the input. It's maybe a little impractical for codebases with long build times, but I'll still have to give it a try on a future project. It looks like there are several tools for mutation testing C/C++.
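For anyone else new to the idea: a mutation tester makes small edits ("mutants") to the code under test, reruns the suite, and flags any mutant no test catches. A minimal hand-rolled sketch in Python (the `clamp` function and the mutant are invented for illustration; real tools generate mutants automatically):

```python
def clamp(x, lo, hi):
    """Original implementation under test."""
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

def mutant(x, lo, hi):
    """A mutant a tool might generate: 'return lo' flipped to 'return hi'."""
    if x < lo:
        return hi  # mutated line
    if x > hi:
        return hi
    return x

def weak_suite(impl):
    # Never exercises the lower-bound branch, so the mutant goes unnoticed.
    return impl(5, 0, 10) == 5 and impl(99, 0, 10) == 10

def strong_suite(impl):
    # Adds a below-range case, which the mutant answers wrongly.
    return weak_suite(impl) and impl(-1, 0, 10) == 0

for name, suite in [("weak", weak_suite), ("strong", strong_suite)]:
    print(name, "killed the mutant" if not suite(mutant) else "mutant survived")
```

The weak suite passes against the mutant (a survivor, i.e. a coverage gap), while the strong suite kills it; a mutation score is just the fraction of mutants killed.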

The most useful tests I write are generally regression tests. Every time I find a bug, I replicate it in a test case, then fix the bug. I think this is just basic test-driven development practice, but it's very useful to verify that your tests actually fail when they should. Mutation/PIT testing seems to address that nicely.
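That bug-first workflow can be sketched like this (the `parse_port` functions and the range bug are hypothetical, just to show the loop of reproduce, watch it fail, fix, keep the test):

```python
def parse_port_buggy(s):
    # Reported bug: accepts out-of-range values like "70000".
    return int(s)

def parse_port_fixed(s):
    port = int(s)
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

def regression_test(parse_port):
    """Replicates the bug report; written *before* the fix."""
    try:
        parse_port("70000")
    except ValueError:
        return True   # out-of-range input correctly rejected
    return False      # bug still present

print("buggy:", regression_test(parse_port_buggy))   # False: the new test fails, as it should
print("fixed:", regression_test(parse_port_fixed))   # True: fix confirmed, test stays forever
```

Seeing the test fail against the buggy version is the cheap, manual version of what mutation testing automates: proof the test can actually detect the fault.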

[-] Sleepkever@lemm.ee 2 points 1 year ago

We are running the PIT tests mentioned above with an extra (Gradle-based) build plugin so that mutations are only run for the lines changed in that pull request. That drastically reduces runtime while still ensuring new code is covered to the mutation-test level we want. Maybe something similar can be done for C or C++ projects.
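For any mutation tool that accepts a list of target classes or files, the same trick is easy to approximate by hand: diff against the base branch and feed only the touched units to the tool. A hedged sketch (the `src/main/java` layout, the base branch name, and the helper names are assumptions about a typical Gradle project):

```python
import subprocess

def path_to_class(path, src_root="src/main/java/"):
    """Map a Java source path to a fully qualified class name,
    e.g. 'src/main/java/com/foo/Bar.java' -> 'com.foo.Bar'."""
    rel = path.split(src_root, 1)[-1]
    return rel[: -len(".java")].replace("/", ".")

def changed_java_classes(base="origin/main"):
    """Ask git which files changed since `base` and return the class
    names to hand to the mutation tool as its target list."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return [path_to_class(p) for p in out if p.endswith(".java")]
```

Restricting mutation to changed code keeps runtime proportional to the size of the pull request rather than the size of the codebase, which is what makes it viable in CI.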

[-] xthexder@l.sw0.com 2 points 1 year ago

I'm currently working on a C++ project that takes about 10 minutes to do a clean build (plus another 5 minutes in CI to actually run the tests). Incremental builds are set up and work quite well, but any header change can easily result in a 5-minute incremental build.

As much as I'd like to try, I don't see mutation testing being worthwhile for this project outside of maybe a few isolated modules that could be tested independently. It's a highly interconnected codebase, and I've personally reviewed (or written) every test, so I already know they're of fairly high quality, but it would still be nice to be able to measure that.

this post was submitted on 09 Jul 2023 to the Programming community on programming.dev (84 points, 97.7% liked)