Do Testers Suffer from Decision Fatigue?

I recently came across a great article on Decision Fatigue.

On projects, testers spend their days making decisions. These decisions range from the simple, such as boundary conditions, through to the complex, such as the interpretation of a requirement. They also range from the unimportant to those that can impact time, delivery and project budgets.

In our roles as ‘decision machines’ we need to understand the factors that contribute to making great decisions. More importantly, we need to know what contributes to us making poor or inconsistent decisions, and what we can do to turn that around.

One of the biggest understandings I took from the article was the need to break complex or multi-part decisions down into a series of simpler, serialised decisions.

For more detail, have a read of these great articles on Decision Fatigue, and on whether we are in control of our own decisions.

Change Requests and Scope

When you are working on a project that is having issues meeting milestone dates, one of the primary things that you need to do is ensure that scope is managed well.

Below is a list of questions that I ask to help myself and those around me understand if an item is needed, and to make an informed decision. One of the important things to remember is that there are no hard and fast rules. If an organisation understands the impacts, it can have anything it likes.

(For each question, the answer given is the one that advances the CR.)

  1. Will the safety of an individual or class of individuals be endangered if the CR is not done?
     Yes: it must advance; if not, a risk assessment must be done (I would suggest legal advice too).
  2. Is there a contractual or legal obligation that we need to cover off that drives this CR?
     Yes: it must advance.
  3. Is this change necessary for the overall success of the project?
     Yes: it must advance.
  4. Can the organisation function without this?
     No: it must advance.
  5. Will the profitability of the organisation be impacted if this is not done?
     Yes: it must advance.
  6. Will the organisation suffer reputational or brand damage if this is not done?
     Yes: it must advance.
  7. Even though this change may have a negative impact on this project, does it result in significant business upsides that make it worthwhile?
     Yes.
  8. Is this change financially worthwhile?
     Yes.
  9. Does enacting this change now make more sense than delaying it?
     Yes.
  10. Is there a business workaround?
     No.
  11. Is the impact of this change seen outside the core organisation (i.e. wholesalers, partners, the public, etc.)?
     There is no right or wrong answer; it just needs to be understood.
  12. Will the delay end up costing the organisation more money in the end?
     Yes.
  13. Given a choice between slipping the end date and having this, what would you choose?
     Slipping the end date: do it.
  14. Can it be leveraged into existing system process(es)?
     No.
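As a toy illustration of treating the first six (mandatory) questions as a series of simpler, serialised decisions, here is a hypothetical sketch in Python. The function name and structure are my own, not part of any real tool:

```python
# Hypothetical sketch: the mandatory CR questions as serialised yes/no
# decisions. Each rule pairs a question with the answer that advances the CR.
MANDATORY_RULES = [
    ("Will safety be endangered if the CR is not done?", True),
    ("Is there a contractual or legal obligation driving this CR?", True),
    ("Is this change necessary for the overall success of the project?", True),
    ("Can the organisation function without this?", False),
    ("Will profitability be impacted if this is not done?", True),
    ("Will the organisation suffer reputational or brand damage?", True),
]

def triage_cr(answers):
    """answers: list of booleans, one per mandatory rule, in order.

    Takes one simple decision at a time; stops as soon as any answer
    matches the advancing answer for its rule.
    """
    for (question, advancing_answer), answer in zip(MANDATORY_RULES, answers):
        if answer == advancing_answer:
            return "must advance"
    return "needs further assessment"
```

The point is less the code than the shape: each question is answered on its own, in sequence, rather than weighing all fourteen at once.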



Defining Success Without Metrics

I make the assertion that it is possible for a solution to go into Production, have no defects, and for the product/project still to be a failure. From a testing perspective, I would argue that the genesis of this generally lies in the mix of Validation versus Verification testing that is done.

When we look at quality, I believe that the solution’s quality exists on a line from inferior, through acceptable, up to perfect. The area that is acceptable (between acceptable and perfect) I will call the Acceptance Paradigm (AP). Once we have a release that is in the AP, we have satisfied Quality.

The above paragraph repeats over and over, with only the context changing. Let me change it for requirements, so you can see what I mean:

When we look at compliance to requirements, I believe that the solution’s compliance exists on a line from inferior, through acceptable, up to perfect. The area that is acceptable (between acceptable and perfect) I will call the Acceptance Paradigm (AP). Once we have a release that is in the AP, we have satisfied Requirement Compliance.
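Since the paragraph is the same with only the context changing, the idea can be sketched as one parametrised check. This is an illustration only; the 0-to-1 scale, the threshold value and the names are hypothetical:

```python
ACCEPTABLE = 0.7  # hypothetical point on the line where 'acceptable' begins
PERFECT = 1.0     # upper end of the line

def in_acceptance_paradigm(score, context):
    """score: where the solution sits on the line from inferior (0.0),
    through acceptable, up to perfect (1.0), for the given context
    (e.g. 'Quality' or 'Requirement Compliance')."""
    if ACCEPTABLE <= score <= PERFECT:
        return f"{context} satisfied (within the AP)"
    return f"{context} not yet satisfied"
```

The same function covers quality, requirement compliance, or any other context; only the argument changes, just as only the context changes in the repeated paragraph.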

I think of the number of defects as an aspect of Quality, and Quality as an aspect of Success.

I would argue that when we as a test team (or as a TM) focus on defects only, we are looking backwards, not forwards, and focusing on only one aspect of the outcome.

I have regularly defined being under a target number of defects found in Production as an aspect of success. The more I move forward, the more I think that is naïve, from both a testing and a project perspective.

It will take some time for me to formulate my definition of success. I don’t have one, yet. This post is an attempt to help me distil my thoughts.

At the moment I think that success is a variable to be defined for each individual release or project. It will cover:
• The deliverable (quality, durability and compliance)
• The process (how we did it)
• The stakeholder (are they satisfied)
• Timeliness (did we hit milestones)
• Cost (did we deliver value)
• Relationships (being built or still intact)
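One way to make those dimensions concrete per release is a simple structure like the following hypothetical sketch. The field names follow the list above; the all-dimensions-must-hold rule is illustrative only, not a definition of success:

```python
from dataclasses import dataclass

@dataclass
class ReleaseSuccess:
    """One set of success criteria, defined per release or project."""
    deliverable: bool    # quality, durability and compliance
    process: bool        # how we did it
    stakeholder: bool    # are they satisfied
    timeliness: bool     # did we hit milestones
    cost: bool           # did we deliver value
    relationships: bool  # being built or still intact

    def succeeded(self):
        # Illustrative rule only: every dimension must hold.
        return all(vars(self).values())
```

Because the criteria are defined per release, two projects could fill in these fields very differently and both legitimately call themselves successful.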

As always, comments are welcome.

Effectiveness over Efficiency

Why should we care more for Effectiveness over Efficiency?

I think that the most important thing to do before I go on is to define what these terms mean to me:


If an artefact is produced effectively then its objectives are achieved and the problems it should resolve are answered.

If an artefact is produced ineffectively, then it is not worth the paper that it is written on.



If an artefact is produced efficiently, then tasks are completed in the least amount of time possible with the least amount of resources possible.

If an artefact is produced inefficiently, then the creators are taking the long road and it costs more than it should.


I believe that in its purest form effectiveness is evaluated without measuring costs. This can be problematic when we overlay commercial reality onto a report, artefact or process. It becomes doubly complicated as I consider effectiveness and efficiency to be independent of each other.

One of the common issues that I come across is the confusion between prescriptiveness and effectiveness. It is my experience that trying to force people (and teams) into an overly prescriptive approach is seen as dictatorial and authoritarian, and is counterproductive to both efficiency and effectiveness.


As examples:

I can produce a test plan in a day. It will largely be a template: solution agnostic, and while it will look and feel like a test plan, and tell you what we are testing and where the risks are, it will not evolve off the paper into a framework for actually doing the job. In effect it becomes shelfware and adds no value.

I can produce a test plan in a month. It will be a work of art: prescriptive, articulating all flows, processes, risks and mitigations. It will tell you not only what we are testing but how, where, who and why. It will not survive the first month of execution, and will become a rock to which testing is shackled at every turn. In effect it becomes shelfware and adds no value.

I think of an effective document as a compass as opposed to a roadmap.

A roadmap captures the layout at a point in time. If the area it describes changes, for example a new motorway is built, you cannot use it, as you do not know where to get on or off. The worst part is that you can look at the roadmap, know that roads must have been removed to make the motorway, and so know that what you have is irrelevant; but it is all you have.

A compass always points north and from there you can always relatively figure out where you need to go. If you need to go west, you can always derive that from the compass regardless of the underlying landscape or changes. Even if they remove the roads and plant a forest, you still know which way is west.

So what are the attributes of an effective report, artefact or process?

  • It produces the desired result
  • Its outcome is relevant to its audience or consumer
  • It provides a framework within which to solve problems
  • Its value is greater than the cost to produce it
  • It maintains relevance over the course of its life

Dwight D. Eisenhower is credited with saying: “Plans are worthless, but planning is everything.” My experience in ICT tends to align with this. The planning process is invaluable, but most of the artefacts that I have created or read in my ICT career have not remained relevant over their intended life. I think this is one of the fundamental changes that we need to make for Waterfall 2.0.

So while I consider effectiveness and efficiency to be independent, we should still look to produce an effective document in the most efficient manner (never losing sight that the governing rule is effectiveness).


IEEE 29119 and Why I Am Not Signing the Petition

There is a lot of noise around IEEE 29119, a replacement for many current standards that has been in creation for the last six years. In the last year there has even been a petition to have it revoked, or for work on it to cease.

To be honest, standards have not had much of an impact anywhere that I have worked. The closest I have come is an organisation that used IEEE 829 documents (what a nightmare that was).

Methodologies and techniques have had a far bigger influence. There have been RUP (I still love Use Cases), TDD, the V Model, Pairwise, Waterfall, Lean (my current fascination) and of course Agile.

My observation is that no organisation I have worked for has managed to completely adopt a purist methodology, and none will! I would love a dollar for every time someone has talked about going Agile when what they actually meant was that they weren’t going to do any documentation, or that they were going to commence without requirements.

Over the last three years I believe I have seen a dumbing down of the industry, as organisations embrace qualifications such as ISTQB at the expense of what makes great testers (intuition, an enquiring mind, self-motivation, persistence, adaptability). I understand why: assessing a personal attribute is much harder than seeing that someone has passed a qualification. The biggest concern I have out of this is the number of process-jockey testers and test managers who believe that following a process and filling in templates will create great solutions, and the number of managers who believe them.

We still do not really have a common vocabulary (though I think ISTQB has helped improve this). If in doubt, have a search for definitions of Quality (this blog and its comments are worth a look) or Success. There are a scary number; I think they fall into around 12 themes (but that is another blog).

I do not believe that IEEE 29119 is going to change the landscape in terms of vocabulary, practice or outcomes.

I consider myself a waterfall TM, but I believe in scrums, session-based testing (we called it Guerrilla Testing when I started), the V model, Risk Management and V&V. I bring elements of everything I have done to each project, to tailor an approach that suits the current project/programme and problem. I will read and digest IEEE 29119 and hopefully place some tools in my toolbox to use when needed.

I believe that IEEE 29119 will create some noise and then become another failed attempt to standardise our profession. That is why I am not signing the petition.

I also realise that in NZ we work in a relatively unregulated and non-litigious country, which probably allows me to have this view more than if I were in the US or Europe.

Here are a few standards that I have come across in various roles:

 Standards and Certifications

  • ISO 9126 Software Engineering (has the quality model presented in the first part of the standard). This is superseded by:
    • ISO/IEC 25010:2011 Systems and software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE)
  • ISO/IEC/IEEE 29119 Software Testing (an internationally agreed set of standards for software testing that can be used within any software development life cycle or organisation). This supersedes:
    • IEEE 829 Test Documentation
    • IEEE 1008 Unit Testing
    • BS 7925-1 Vocabulary of Terms in Software Testing
    • BS 7925-2 Software Component Testing Standard
  • IEEE 1028 Standard for Software Reviews (this still appears to be valid, but I assume it will be superseded by ISO/IEC/IEEE 29119)
  • ISTQB – International Software Testing Qualifications Board
  • CMM – Capability Maturity Model, and CMMI
  • STBOK – Software Testing Body of Knowledge
  • BBST – Black Box Software Testing

The Test Strategy

Here is some information on creating a test strategy.


This is the mind map that I use to gather information. I may not use all of the information; if not, it is a considered decision to remove it.


My Thoughts On Content

This document should contain static information on the testing process. It should be reviewed:

  • Yearly
  • If the delivery framework changes
  • As part of project reviews

Generally, every organisation should have one, and only one, Test Strategy. It should not be a living document.


Cross-party projects should have their own test strategy. It should be a static document and provide a common framework. Generally the integrator should own its production, and each party involved in the development or testing should be a signatory.


A test strategy’s content can change depending on whether it is for a single organisation or a cross-party engagement.

Both documents should contain:

  • Document control block
  • Defect definitions
    • Severity
    • Priority
  • Test Environments
    • Locations
    • Access
    • Support hours
  • Glossary
  • Types of Testing
    • What each type entails
    • Supporting collateral
    • Acceptance into testing criteria
  • Document Hierarchy
  • Roles and Responsibilities
  • Tools
  • Static Testing Risks and Mitigations
  • Reporting heuristics
  • Artefact archiving durations and process
  • Data management and anonymisation rules
  • Project close out process
  • Test Objectives
    • Success Criteria

Additional items for a cross-party strategy:

  • Delivery milestones
  • Suspension and Resumption criteria
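As a hypothetical aid, the lists above could drive a quick completeness check on a draft strategy. The section names mirror the lists, and the helper is my own sketch, not a standard tool:

```python
# Hypothetical checklist derived from the content lists above.
COMMON_SECTIONS = {
    "Document control block", "Defect definitions", "Test Environments",
    "Glossary", "Types of Testing", "Document Hierarchy",
    "Roles and Responsibilities", "Tools",
    "Static Testing Risks and Mitigations", "Reporting heuristics",
    "Artefact archiving durations and process",
    "Data management and anonymisation rules",
    "Project close out process", "Test Objectives",
}
CROSS_PARTY_SECTIONS = {"Delivery milestones", "Suspension and Resumption criteria"}

def missing_sections(draft_sections, cross_party=False):
    """Return the expected sections a draft strategy does not yet cover."""
    required = COMMON_SECTIONS | (CROSS_PARTY_SECTIONS if cross_party else set())
    return sorted(required - set(draft_sections))
```

For example, a draft containing only a Tools section would come back with everything else flagged as missing, and a cross-party draft would additionally be asked for delivery milestones and suspension/resumption criteria.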


More Content

If you are writing your first Test Strategy, this is a good template to use. I personally think it delves too far into the test plan area, but if you need a starter for 10, it is a good place for that.

There is a good blog entry here on Test Strategies and Plans.



Verification and Validation

Some thoughts on Verification and Validation

I suppose I should preface my reply by saying that I am an Enterprise Waterfall TM, and while I have an understanding of Agile, it is not my forte.

I was told by an Agilista that Agile was better than waterfall because Agile did Validation while waterfall did Verification. I think that Agile’s just-in-time nature and business engagement model can create the illusion that it is doing Validation as opposed to Verification.

I would argue that the terms are methodology agnostic. Test teams (and projects) that only engage in Verification are taking the easy path, and this is often why the business in an organisation can have a low opinion of ICT, and at times a distrust of it: you got what you asked for (it meets the requirements), not what you needed.

I am a great believer that the test team should contain 10% business users (from inception, not just UAT), so that they become the owners of business knowledge and rules to a greater extent than a BA. While Requirements tell us what to test, and Design tells us how to test, it is business knowledge and relationships that ensure we are testing the correct thing.

For a test team (and the project) to be successful, it needs to ensure both that it is testing the right thing (Validation) and that the thing is built correctly (Verification).