Would Heu-risk it? Part 21: The Hive

[Image: the inside of a beehive]

This weapon is very important for risk assessments. The rhyme is pretty straightforward this time:

”First you find one, then two, three maybe four
Hold on, there will probably also be more
Bugs tend to show up like tightly knit knots
So that part you better make sure you test lots”


So, what does it mean?

Oh yeah, we are talking about defect clustering baby!
Just like some types of errors are more common (I’m looking at you, off-by-one!), some areas tend to be more prone to problems.
This means you probably won’t see an even relationship between ”number of areas tested” and ”number of bugs found”; instead, some areas will be littered with bugs while others have very few.

I have a very special place in my heart for the Pareto principle, which I tend to apply to almost anything, so I like to say things like ”80% of your bugs will be in 20% of your code” or ”20% of your tests will find 80% of your bugs”. Unfortunately, you might not know from the start which 20%, but if we think about the reasons why those areas are more problematic, we might be able to guess.
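If your bug tracker lets you export reports, you can do a rough Pareto check yourself. Here is a minimal sketch in Python, assuming a made-up export where each bug has been reduced to the component it was filed against (all names and numbers below are invented for illustration):

```python
from collections import Counter

# Hypothetical list of bug reports, reduced to the component each one was filed against.
bugs = [
    "checkout", "checkout", "search", "checkout", "profile",
    "checkout", "search", "checkout", "reports", "checkout",
]

counts = Counter(bugs)          # bugs per component
total = sum(counts.values())

# Walk the components from most to least buggy and print the cumulative
# share of all bugs they account for; a steep start means clustering.
cumulative = 0
for component, count in counts.most_common():
    cumulative += count
    print(f"{component:<10} {count:>2} bugs  ({cumulative / total:.0%} of all bugs so far)")
```

With real data, the interesting question is how quickly that cumulative share climbs: if a handful of components already cover most of the bugs, those are the areas that deserve extra testing time.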

So, why do some areas attract more bugs?
A few reasons could be: time, money, complexity, experience.
Time: A story/feature/area was done last and was more rushed than the others. Or it had a set deadline. GDPR could be an example of that.

Money: We didn’t have enough budget to do a story/feature/area in the way we wanted so we had to make do with a cheaper solution.

Complexity: A certain part might have very complex business rules and/or very complex programming logic.

Experience: Or maybe the developer building a certain thing lacked the experience needed to build it and was not given proper support.

When thinking of how to prioritize your testing, these are all things to take into consideration. I expect to find more problems, and therefore plan to spend more time testing, in areas that I know fall into any of those categories.

One thing in particular that gets my spidey senses tingling is when I notice, or hear, developers avoiding certain parts or estimating stories there very high. That usually means the area has been problematic before and probably will be again.

And of course, if a certain part/feature starts clustering bugs, I will keep digging there longer than planned.

And sadly, not to be taken lightly: how people are feeling will affect the quality.
So, knowing the people you work with is very important.
A developer who has been delivering top quality code but is going through some kind of trauma (sickness/deaths in the family, divorce, stressing/working too much) or just has a lot of other things on her/his mind – chances are the quality will start slipping.
Someone who has been very opposed to something, say they wanted to prioritize another story instead or choose another solution, might not deliver 100% top quality.

Story time

This is a bit of a sad story actually.
It was my first real project as a tester. In the team were a number of developers, all new to me. One stood out a lot. Let us call them X.

X was pretty fresh out of school, young and with that attitude of ”I can do anything” that can be refreshing and amazing in some people. In this case – no. There was no end to the lengths X would go to in order to not accept that they had done something wrong. There was also almost nothing that could force X to ask questions, meaning they would build something rather than check that they were on the right track.

Needless to say, I think I spent 50-75% of my testing time just testing X’s code.
And I spent at least 50% of my non-actively-testing time arguing with X.
And yes, most bugs were in X’s areas.
And yes, they took at least twice as long to get fixed.

I learned a lot about bug advocacy during that project. By doing absolutely everything the wrong way. I can only imagine the great things we could have created together had I only not fallen into the alpha caveman competition that X’s attitude brought out in me.

X – I am sorry. You made a lot of mess but I made our collaboration needlessly hard.


Quote of the day

”Whenever I create or change code, there’s a probability that I will introduce a new defect (or three). When I fix these new defects, I change more code, which creates a higher probability of more new defects.”

Jason Gorman

Reading suggestions

Don’t count on it – QA Hiccups
How to defeat defect clustering – Ashley Dotterweich
What is defect Clustering – Discussion on MoT The Club
Why Do Defects Cluster? – Codemanship, Jason Gorman
Clustering of defects reports using graph partitioning algorithms – Vasile Rus, Xiaofei Nan, Sajjan Shiva, Yixin Chen

Previous posts in the series

Title and link (Category)
Part 1: Introduction (None)
Part 2: Mischievous Misconceptions (Trap)
Part 3: The Rift (Weapon)
Part 4: The Fascade (Tool)
Part 5: The Temptress’ Trails (Trap)
Part 6: Allies (Weapon)
Part 7: Don’t turn back (Tool)
Part 8: The Glutton (Trap)
Part 9: Beyond the border (Weapon)
Part 10: Forever and never (Tool)
Part 11: The Shallows (Trap)
Part 12: The Twins (Weapon)
Part 13: The Observer (Tool)
Part 14: Alethephobia (Trap)
Part 15: Opus interruptus (Weapon)
Part 16: The Illusionist (Tool)
Part 17: Fools’ assumptions (Trap)
Part 18: The Unexpected (Weapon)
Part 19: Constantly Consistent (Tool)
Part 20: Drowning in the deep (Trap)