Meet the Speaker UKSTAR 2018 | Richard Paterson

Next up in our ‘Meet the Speaker’ Series for UKSTAR 2018 is Richard Paterson.


Richard has over nineteen years' experience in testing, test management and design. He has been involved in many aspects of functional and non-functional testing of Client-Server and Multi-tier Windows, Linux and Unix systems, as well as of embedded mission-critical defence systems. In this time, he has been actively involved in process and quality improvements and has striven to improve quality and efficiency where possible.

Richard has been involved in IIP, ISO and CMM(I) Level 4 audits and certification proceedings. He is an internal auditor for ISO9001 (Quality) and ISO27001 (Information Security) and currently oversees a test function of more than 20 Test Analysts who are embedded within Agile teams.

Richard is also currently developing capabilities in automated functional testing, automated security testing, and accessibility testing. He built this team over ten years, growing it from 3 to 20+. As Head of Testing, he leads from the back, working to support, educate and mentor testers to achieve their potential, and empowering them to take responsibility for pushing forward their ideas. Richard has worked in traditional SDLCs such as Waterfall and V-model but is currently working within an Agile environment. This has involved creating an agile test process which has been certified as ISO9001 compliant.

He has been involved in most roles during the software lifecycle and enjoys bringing his skills to whatever position he finds himself in. While his main discipline is testing, he has done a lot of design and requirements capture work and has also done some development. Recently, he started an Application Security programme to ensure that application security is considered at each stage of the application lifecycle. The focus is on designing and building security in from the start, rather than testing it in later in the life of the product.


You can find more from Richard on his blog:

Richard will present his session ‘Talking About Talking About Testing’ at UKSTAR 2018 in London.


1. What is your favourite testing book/blog? Why is this your favourite?

This is probably a popular / obvious one, but Michael Bolton’s blog would be my favourite, because it opened my eyes to the gargantuan size and varied nature of the testing field.

About halfway through my testing career, I’d reached a bit of a plateau in terms of my testing knowledge and ability. As a tester and therefore knowledge junkie, I was bored and grumpy about the whole thing to the extent that I looked at other roles.

Stumbling across Michael’s blog opened my eyes to thoughts, ideas, concepts and approaches that I wasn’t even aware of. It made it clear how much I didn’t know, how broad and diverse and interesting the field was. That wealth of stuff to learn made knowledge junkie me very happy. I was pretty quickly re-hooked on testing, and I still am.


2. How do you keep up to date with the software testing industry?

I love quotes, because they can distill a concept into a pithy, impactful statement which I can remember (and repeat to appear smart). Naturally, that means I tend to use Twitter. It’s a good way to push a lot of stuff past your nose very quickly, as long as you follow lots of people.

If the author can craft an interesting 140-character nugget, I rely on my “Quote-dar”© to pick it up. The danger, of course, is that I miss transformative ideas because the Twitter summary wasn’t clickbait-y enough: “You’ll never believe what this tester does! Developers hate him!”


3. What is the biggest misconception about testing that you’ve heard?

That testers are solely responsible for the quality of the product that makes it to the customer, and therefore solely to blame. Customers – even those who profess to understand software development – react to poor-quality software with comments like “Did you even test this?” and “What do your testers do all day?”.

These comments misunderstand the nature of how defects are created and handled. Testers are clearly a key part of the development process, but we don’t create the bugs, and we typically don’t decide which get fixed and which do not.

While development teams logically understand this, and will state that quality is everyone’s responsibility, post-mortems tend to fixate on what the testers could have done better, and rarely on developers writing poor code or managers overfilling releases.

This is why the term “quality assurance” is dangerous: it implies we have powers that we do not and cannot have – the ability to create and guarantee quality.

