How do you measure and appraise testers?
- Do you use a tool to measure testers?
- Do you measure them by the number of bugs they find?
- Do you measure them by the number of valid bugs (acknowledged by the development/product team) they find?
- Do you measure them by the percentage variance from schedule for a testing effort?
- Do you measure them by the percentage adherence to configuration management practices?
- Do you measure them by the ratio of defects found during testing vs. defects found post-release?
- Do you measure them by the percentage of system test coverage of the assigned functionality, as measured through reviews and a Requirement Traceability Matrix?
- Do you measure them by the test environment utilization rate?
- Do you measure them by how they improve development teams? (http://blogs.msdn.com/james_whittaker/archive/2008/07/22/measuring-testers.aspx)
- etc.
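Several of the metrics above reduce to simple arithmetic, which is part of what makes them so easy to game. As a sketch of the defect-ratio metric (the function name and the numbers are mine, not taken from any tool):

```python
def defect_detection_percentage(found_in_testing: int, found_post_release: int) -> float:
    """Share of all known defects that were caught before release, as a percentage."""
    total = found_in_testing + found_post_release
    if total == 0:
        raise ValueError("no defects recorded; the ratio is undefined")
    return 100.0 * found_in_testing / total

# A team that found 90 defects in testing and 10 post-release scores 90%.
print(defect_detection_percentage(90, 10))  # -> 90.0
```

Notice how easily this number is influenced: a tester can raise the score by filing many shallow bugs before release, without improving the product at all.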
For all those who swear by metrics and incentive plans based on metrics, read this fantastic article: “Employees will always game incentive plans -- because the geniuses who design them don't anticipate how employees will respond” by Joel Spolsky.
I thank Michael Bolton for sharing the above link by Joel in a discussion at Test Republic.
Right now I do not have “the/a” answer to the question in this post.
But isn’t it also our responsibility not to persist in such a blind system?
Further reading:
- Software Engineering Metrics: What Do They Measure and How Do We Know? by Cem Kaner
- Side Effects of Metrics/Statistics by Shrini Kulkarni
- Remuneration and Punished by Rewards by Jonathan Kohl
- Don’t Use Bug Counts to Measure Testers by Cem Kaner
- The Darker Side of Metrics by Douglas Hoffman
Disclaimer: All the blogs shared by me are my ideas, my thoughts, and my understanding of the subject, and do not represent any of my employer’s ideas, thoughts, plans, or strategies.
Comments
In my opinion we should measure testers by:
1) The number of valid bugs (acknowledged by the development/product team) that they have found.
2) Check the product that they have tested and count the number of high- and low-priority defects. No high-priority defects should be present in the product released to the customer.
If you think deeply about these two aspects, you will understand the challenges of any measurement system for a thinking and learning activity carried out by humans.
I am not sure we can find any "reasonable" way to measure testers. One way is to make human beings behave as if brain-dead, so that they act like robots following a script; then we can apply any of the methods you have listed.
I would say "sapient" testing is beyond measurement.
Shrini
Any measurement system that leaves out intangible but crucial aspects of the work being measured is going to distort that work in ways that are not of benefit, and which may be quite hair-raising. Higher managers and HR (i.e. those too far from the coalface to know what's going on) love numbers, because numbers seem impartial and objective, and because numbers don't remind them that they don't really know the detail. But introduce any kind of metric, and any qualitative part of the assessment fades swiftly into a very distant second place.
The whole point of having a sapient human in the management chain is that they should be skilled enough at the whole management thing to use their judgement.
abhilash,
1) The number of valid (acknowledged by development/product/ team) bugs that they have found.
- How do you know which bug is valid?
- What if you are given a program developed by a skilled developer to test, while your colleague is asked to test a program developed by a less-skilled developer?
- What if it's release week, most bugs you report are marked invalid by the development team, and the manager is more interested in the release than in your arguments?
- What if it’s the acceptance phase, and your manager asks you to share bugs with development directly rather than entering them in the bug tracker, because he/she feels the tracker might take longer and does not want upper management to know the test team is still discovering bugs?
2) Check the product that they have tested and see the number of high and low priority defects. No High priority defects should be present in the product released to the customer.
- Are you confident that you and your customer talk the same language (assigning priority)?
- What if the customer is busy during acceptance, overlooks the bug in the current release, but then reports them in the next release?
-Sharath.B
What's wrong with hiring a decent test manager and asking them to use their brain to appraise their testers?
I feel this is a decent way to appraise testers, since a test manager will have observed his testers test and improve their skills over that period.
But then this comes with its own set of issues, like
- favoritism
- managers who take arguments personally
- manager might act like GOD
- etc
But, I still feel a human appraising a human is far better than humans appraised by fake numbers.
-Sharath.B
1) How do you know which bug is valid?
We can find out whether a bug is valid by entering the bug ID in a defect-tracking tool like ClearQuest and checking the bug's history of states, as well as its current state. If the current state is Rejected, we can assume the bug is not valid.
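As a sketch of that counting rule (the bug records and state names below are invented for illustration; a real tracker such as ClearQuest has its own state model and query interface):

```python
# Count "valid" bugs by excluding records whose state marks them as invalid.
# These records and state names are hypothetical, not a real tracker export.
bugs = [
    {"id": "BUG-101", "state": "Closed"},
    {"id": "BUG-102", "state": "Rejected"},   # marked invalid by development
    {"id": "BUG-103", "state": "Assigned"},
    {"id": "BUG-104", "state": "Duplicate"},  # also often treated as invalid
]

INVALID_STATES = {"Rejected", "Duplicate"}
valid_bugs = [b for b in bugs if b["state"] not in INVALID_STATES]
print(len(valid_bugs))  # -> 2
```

Even this trivial rule embeds judgment calls, such as whether a Duplicate counts against the tester, which is exactly where such metrics start to distort behavior.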
2) What if you are given a program which was developed by a skilled developer for testing v/s your colleague who is asked to test a program developed by a less-skilled developer?
This is a good and a valid case. I am not sure how to handle this situation
3) What if it's release week, most bugs you report are assigned the status invalid by development team, the manager is more interested in the release than your arguments?
I think each tester must maintain a record of the issues discussed with the developers, so that we can check the record and cross-check with the developers to verify the integrity of the data.
4) It’s in acceptance phase; your manager asks you to share the bugs with development directly, rather than entering in bug tracker, because he/she (manager) feels it might take longer and does not want his/her upper management know test team is still discovering bugs.
Same answer as above: maintain a complete record of all the issues discussed with the development team.
5) Are you confident that you and your customer talk the same language (assigning priority)?
I think this depends on the amount of customer interaction and the clarity of communication. It depends mainly on the customer.
6) What if the customer is busy during acceptance, overlooks the bug in the current release, but then reports them in the next release?
This is a good and a valid case. I am not sure how to handle this situation
As Shrini said, sapient testing, and the skills a tester needs for it, are valued beyond measurement, and those words remain true. They cannot be given a value or a rating; they can only be felt and seen in the testing environment. That is my small grain of experience.
I believe and practice this; in my contexts it works, though I do not know how well it suits my other testing friends. A tester should keep consistently practicing the skills a tester needs, by QUESTIONING, and should keep assessing her/his own progress and quest for learning, thinking, questioning, and testing.
There is also a need for an appraisal of the test environment, made by the test leads and test managers; the testers should necessarily be involved in this appraisal too. A good testing environment, with nourishing motivation and inspiration, can deliver sapient skills and sapient testers to the testing community.
Being, or making oneself, a sapient tester with sapient skills should, no doubt, motivate and inspire other testing peers to make themselves into testers whose value is beyond measurement.
Ravisuriya
Do your best developers want this tester working on their code?
If they're really good, they'll want the code that they put into production to be the best. And to do that, they'll know they need the best working on testing it.
Now your only problem is working out who the best developers are...
In my opinion, a tester's priority should be to deliver a good-quality application/product to the customer.
- How do you think a tester will be able to achieve that? What if a test team logs valid bugs, but due to some pressure management decides to ship the product without fixing them?
- Also, how do you plan to achieve this: by finding critical bugs during sanity testing rather than finding them later?
I feel your solution for measuring a tester is very narrow. What if a developer was not good enough, had personal problems, or did not anticipate a scenario, and the module he developed is assigned to tester A? I am sure tester A will find more “critical” bugs than a tester B who is assigned a module developed by a better developer. Does that mean tester A is better than tester B?
-Sharath.B