Are there formal ways of quantifying potential flaws, or risk, and ensuring there's a sufficient spread of tests to cover them? Perhaps using some kind of complexity measure, or a formal risk assessment?
Experience tells me I need to be extra careful around certain things - user input, code generation, anything with a publicly exposed surface, third-party libraries/services, financial data, personal information (especially of minors), batch data manipulation/migration, and so on.
But is there any accepted means of formally measuring a system and ensuring that some level of test quality exists?
Mutation testing is useful. It basically measures how effective your tests are and points out the conditions that aren't actually being tested. For Java: https://pitest.org
Edit: corrected to the more general name instead of a specific implementation.
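To make that concrete, here's a minimal, invented JUnit 5 sketch (class and test names are hypothetical). PIT's conditionals-boundary mutator rewrites '>' as '>=', and because the test never exercises the boundary value, that mutant survives and PIT reports it:

    import static org.junit.jupiter.api.Assertions.assertTrue;
    import org.junit.jupiter.api.Test;

    // Hypothetical production class (would normally live in its own file).
    class OrderPolicy {
        boolean requiresApproval(int amountCents) {
            // PIT's conditionals-boundary mutator turns '>' into '>='.
            return amountCents > 10_000;
        }
    }

    class OrderPolicyTest {
        // Weak test: 50_000 is far from the boundary, so the '>=' mutant behaves
        // identically and the test still passes. PIT marks the mutant SURVIVED,
        // pointing at the untested boundary case of exactly 10_000.
        @Test
        void flagsLargeOrders() {
            assertTrue(new OrderPolicy().requiresApproval(50_000));
        }
    }

Adding a test asserting that requiresApproval(10_000) is false kills that mutant, which is exactly the missing condition that line coverage alone won't show you.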
Every enterprise I’ve consulted for that had code coverage requirements was full of elaborate mock-heavy tests with a single Assert.NotNull at the end. Basically just testing that you wrote the right mocks!
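For anyone who hasn't seen the pattern, here's a self-contained sketch of what that looks like (JUnit 5 + Mockito, all names invented). Line coverage of buildSummary() is 100%, but the single assertNotNull never inspects the result, so nearly every mutant in the method survives:

    import static org.junit.jupiter.api.Assertions.assertNotNull;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;
    import org.junit.jupiter.api.Test;

    // Hypothetical collaborators, kept minimal so the example stands alone.
    interface InvoiceRepository {
        long findTotalCents(String customerId);
    }

    class InvoiceService {
        private final InvoiceRepository repo;
        InvoiceService(InvoiceRepository repo) { this.repo = repo; }

        String buildSummary(String customerId) {
            long cents = repo.findTotalCents(customerId);
            // The actual logic (rounding, formatting, thresholds) is never checked below.
            return "Total: " + (cents / 100.0);
        }
    }

    class InvoiceServiceTest {
        @Test
        void coverageTheatre() {
            // Every collaborator is mocked, so the test mostly restates its own mocks.
            InvoiceRepository repo = mock(InvoiceRepository.class);
            when(repo.findTotalCents("cust-42")).thenReturn(12_345L);

            // The only assertion: mutants in the arithmetic and formatting all survive.
            assertNotNull(new InvoiceService(repo).buildSummary("cust-42"));
        }
    }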
That’s exactly the sort of shit test mutation testing is designed to address. Believe me, it sucks when Sonar requires a 90% PIT mutation score. Sometimes the tests can get extremely elaborate, which should be a red flag for the design (not necessarily bad code).
Anyway, I love what PIT does. I hate being required to use it, but it’s a good thing.
Yeah. Same everywhere. Create a lazy metric, get lazy and useless results.
This is really interesting, I’ve never heard of such an approach before; clearly I need to spend more time reading up on testing methodologies. Thank you!
Does something like this exist for Python?
https://mutatest.readthedocs.io/en/latest/
Oh sweet! This opened up a whole new world for me. I’m also seeing mutmut; is one better than the other?