At the end of 2016 I was interviewed for the OmegaTau podcast. Here is the link. It was a lot of fun answering their questions.
Feynman translated to software testing
I’m amazed by the work of Feynman. Some of his interviews on YouTube are filled with his way of seeing the world. In one he explains to students how science works. I’ve tried to transfer it to software testing.
Bug: Guess it; then compute the consequences, to see what the implied risk would be; then compare the result to nature, with experiment.
Original quote:
In general, we look for a new law by the following process: First we guess it; then we compute the consequences of the guess to see what would be implied if this law that we guessed is right; then we compare the result of the computation to nature, with experiment or experience, compare it directly with observation, to see if it works. If it disagrees with experiment, it is wrong. In that simple statement is the key to science. It does not make any difference how beautiful your guess is, it does not make any difference how smart you are, who made the guess, or what his name is — if it disagrees with experiment, it is wrong.
How to display test results for the management? A question…
Something I learnt is that testers provide a service to the management. The management should gain confidence in the status of the product. So that the tester himself can learn about the product, he carries out some test cases. As is normal, the tester finds some bugs.
What is visible to the management? Often it is a percentage of passed/failed test cases. And now it gets interesting. The first question is: when is a test case failed? If the goal of the test case can be reached and no bugs are detected, it’s a pass – that is mostly clear to everybody. But what status do you set if you found a minor bug? Passed or failed? Some strict people say that if there is a bug, it’s a fail. This can lead to the situation that a few bugs impact a lot of the test cases, which is bad for the visible passed/failed percentage. Quickly you get unwanted (?) management attention and are asked whether the software quality is bad. It certainly is not, but it looks that way.
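Just to make that effect concrete, here is a minimal sketch in Python. The bug IDs, test case IDs and the mapping between them are invented; it only illustrates how a handful of bugs can drag the percentage down under the strict “any bug = fail” rule.

```python
# Minimal sketch (hypothetical data): how a few bugs drag down the
# passed/failed percentage when the strict rule "any bug = fail" is applied.

# Invented mapping of bug IDs to the test cases they affect.
bug_to_tests = {
    "BUG-101": ["TC-01", "TC-02", "TC-03", "TC-04"],
    "BUG-102": ["TC-05", "TC-06", "TC-07"],
    "BUG-103": ["TC-08"],
}

total_test_cases = 20  # assumed size of the test run

# Under the strict policy every affected test case counts as failed.
failed = {tc for tests in bug_to_tests.values() for tc in tests}
passed = total_test_cases - len(failed)

print(f"Distinct bugs:     {len(bug_to_tests)}")
print(f"Failed test cases: {len(failed)} ({len(failed) / total_test_cases:.0%})")
print(f"Passed test cases: {passed} ({passed / total_test_cases:.0%})")
# Three bugs make 8 of 20 test cases (40 %) look failed.
```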
Asking @alanpage what to do, he suggested making something visible “…something to do with product quality…”. As this sounded good, I went looking for some KPIs that can be generated out of our databases (test cases and bug list). Something we can produce quickly is a report about the severity of the bugs. As expected, there are no bugs with severity 1 and only a few with severity 2. Most of the bugs are severity 3 and a few are severity 4. A pie chart made it visible to the management that there is no urgent need to take measures, as there are no severe bugs.
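As a rough illustration of that severity report, here is a small Python sketch. The bugs and the field names are made up (our real databases look different); it just counts the bugs per severity and draws the pie chart.

```python
# Minimal sketch (hypothetical data and field names): counting bugs per
# severity and drawing the pie chart shown to the management.
from collections import Counter
import matplotlib.pyplot as plt

# Assumed export from the bug list: one dict per bug with a 'severity' field.
bugs = [
    {"id": "BUG-101", "severity": 3},
    {"id": "BUG-102", "severity": 2},
    {"id": "BUG-103", "severity": 3},
    {"id": "BUG-104", "severity": 4},
    {"id": "BUG-105", "severity": 3},
]

severity_counts = Counter(bug["severity"] for bug in bugs)

labels = [f"Severity {sev}" for sev in sorted(severity_counts)]
sizes = [severity_counts[sev] for sev in sorted(severity_counts)]

plt.pie(sizes, labels=labels, autopct="%1.0f%%")
plt.title("Open bugs by severity")
plt.show()
```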
The management was satisfied for a second. But the “real problem” of a bad passed/failed percentage is not solved. Out of our databases (test cases and bug list) we would be able to say that one bug affects ‘a number of’ test cases. This way an impact analysis could be done of which bugs affect which test cases. As a result, a hit list (bugs ranked by the number of affected test cases) can be produced. Anyway, the management, I guess, needs a chart to understand it.
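A small sketch of how such a hit list could be computed, again with invented IDs and assuming the bug-to-test-case mapping can be exported from the two databases:

```python
# Minimal sketch (hypothetical data): building the "hit list" of bugs
# ranked by how many test cases they affect.

# Assumed join of the two databases: bug ID -> affected test cases.
bug_to_tests = {
    "BUG-101": ["TC-01", "TC-02", "TC-03", "TC-04"],
    "BUG-102": ["TC-05", "TC-06", "TC-07"],
    "BUG-103": ["TC-08"],
}

# Sort bugs by the number of test cases they affect, most impactful first.
hit_list = sorted(bug_to_tests.items(), key=lambda item: len(item[1]), reverse=True)

for bug_id, tests in hit_list:
    print(f"{bug_id}: {len(tests)} affected test cases -> {', '.join(tests)}")
```

A horizontal bar chart of these counts would be one candidate for the chart the management asks for.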
What would such a chart look like? Suggestions appreciated. 🙂
Or do I just need words to explain the impact of a few bugs on a lot of tests? As always, the management is looking for a quantitative answer. I would also definitely feel better drawing a real, defensible conclusion than just guessing.