What are you working on at the moment?

A fellow tester recently asked “What are you working on at the moment?” and, after I told them, they exclaimed “How come you’ve not blogged about this stuff?”

My usual response to this is “Not sure anyone would be interested in this” – I once used that excuse on Twitter and Keith Klain (@KeithKlain) responded with “Please let us be the judge of that” – so here goes: the answer to “What is Bill working on at the moment?”

Security Testing

Application security is one of those topics I rarely tire of; as fast as new ideas for developing awesome applications arrive, new ways to exploit them arrive faster. I’ve been involved with application security testing for about 8 years now and consider it just part of my job as a tester to investigate possible security problems while testing. Some testers I meet ask “Why?” and my answer is usually the same: “Part of my role as a tester is to find the problems that matter to my stakeholders, and security issues are problems that matter to my stakeholders.”

Hardly a week goes by at the moment without a new vulnerability being announced, another company being hacked, or yet more data being stolen. Certainly for the types of projects I get involved with, security is a problem my stakeholders want to know about, and they usually don’t want to be surprised at the end of the project when the penetration testers come in and find some big hole in the design or implementation.

Having been doing this for a while, I’ve come to the conclusion that there are very few types of security vulnerability that my fellow testers would struggle to understand and explore while they are testing. Doing this type of testing throughout the project gives us more opportunity to identify problems, consider options and implement fixes. It also has the benefit of raising the security bar by making it more difficult for attackers to exploit the lower-hanging fruit, as well as allowing the specialist penetration testers to focus on emerging threats and the really gnarly security problems – that, to me, is where their value lies, not in finding the common problems.
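To give a flavour of what exploring that lower-hanging fruit can look like day to day, here is a minimal, hypothetical sketch (in Python, using the requests library) of a check a tester could fold into their existing automated checks. The URL, paths and header list are placeholders I’ve invented for illustration, not a recommendation for any particular project.

```python
# Hypothetical sketch: a lightweight check a tester might add to catch some
# "lower-hanging fruit" early, rather than leaving it for a penetration test.
# BASE_URL and the header list are placeholders for the system under test.
import requests

BASE_URL = "https://app.example.test"

# Response headers whose absence is often an easy win for an attacker
EXPECTED_HEADERS = [
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "Strict-Transport-Security",
    "X-Frame-Options",
]

def check_security_headers(path="/"):
    """Report common security response headers that are missing."""
    response = requests.get(BASE_URL + path, timeout=10)
    missing = [h for h in EXPECTED_HEADERS if h not in response.headers]
    for header in missing:
        print(f"Possible issue: {header} not set on {path}")
    return missing

if __name__ == "__main__":
    check_security_headers()
```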

I’m not alone in this view, and I’m increasingly aware of more and more companies that expect their testers to be able to conduct security testing as part of their day-to-day role.

So what specifically am I focusing on at the moment? Well, I’m focusing my efforts on gaining more in-depth knowledge of the security issues and vulnerabilities that affect mobile devices (including phones), APIs and, increasingly, client-side (browser-based) frameworks.

For me this means learning the technology stacks involved, how programmers use them and how vulnerabilities might creep into them. This also means actually building and testing such systems especially where my current projects don’t offer opportunities in these areas.

Much of this information finds its way into my daily work as well as my Application Security Training courses that I run periodically. A new (and expanded) version of this course is being prepared and likely to be available as an online course around April time.

Those that are interested in learning more about Security Testing can catch me (and Dan Billing aka @TheTestDoctor) at the following conferences this year where we are presenting hands-on Security Testing tutorials.

As always, I’m happy to discuss security testing with, and offer help to, the testing community – as well as to run public and private training courses on the topic.

Testing Adaptive/Learning Algorithms

A few years ago my interest in Artificial Intelligence was re-awakened when I stumbled on a Coursera course on Machine Learning and an interesting talk by Chris Blain (@chris_blain) at Let’s Test. When I studied Artificial Intelligence at university, it seemed to promise lots but the results seemed fairly meagre. However, with the advent of Big Data and significant advances in computing power, storage, memory and algorithms, the field caught my attention again. If you are interested in finding out more then I highly recommend the Machine Learning course on Coursera as a starting point.

So why am I interested in this field? Well, the testing of these algorithms, and of the systems that use them, seems to be an interesting testing problem. Much of our approach, and much of the testing literature, relates to testing systems where we can express the rules that we expect the implemented computer system to follow. We have an explicit understanding of how we expect the system to behave and so have reasonable methods for detecting problems.

I’ve always enjoyed testing complex systems, and early in my career I worked on an interesting project developing tools to profile DNA samples; the profiling tools were being built to answer questions such as “If we have a sample of DNA that is a mixture from two or more unknown people, what are the likely (probabilistic) combinations of individual samples that created the mixture?”

The statistical models used were complex and time-consuming to calculate by hand, and there were no comparable systems we could use as Oracles. We did have a set of previously manually calculated mixtures we could use, but this was a small proportion of all possible combinations and we had no choice in which combinations these were – they were samples from real crime scenes.
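To make that partial oracle concrete, here is a small, entirely hypothetical sketch (in Python) of the kind of comparison this allows: checking the tool’s calculated probabilities against the handful of manually calculated mixtures, within a tolerance. The function name, data shapes, figures and tolerance are all invented for illustration.

```python
# Hypothetical illustration: comparing a profiling tool's output against a
# small set of previously hand-calculated mixtures used as a partial oracle.
# The names, data shapes and tolerance are invented for this sketch.
def compare_with_manual_oracle(tool_results, manual_results, tolerance=0.01):
    """Flag combinations whose tool-calculated probability differs from the
    manually calculated value by more than the tolerance."""
    discrepancies = []
    for combination, expected in manual_results.items():
        actual = tool_results.get(combination)
        if actual is None:
            discrepancies.append((combination, "missing from tool output"))
        elif abs(actual - expected) > tolerance:
            discrepancies.append((combination, expected, actual))
    return discrepancies

# Example usage with made-up figures
manual = {("profile_A", "profile_B"): 0.62, ("profile_A", "profile_C"): 0.25}
tool = {("profile_A", "profile_B"): 0.61, ("profile_A", "profile_C"): 0.10}
print(compare_with_manual_oracle(tool, manual))
```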

This was only one of the profiling tools we built and tested; finding elegant ways to test these tools was difficult but really enjoyable (to me at least).

While there are some parallels between that project and the new breed of machine learning algorithms, there are some key differences.

Testing the DNA profiling tools was deterministic: we could create a mixed DNA profile and calculate the expected probabilities of different combinations. The processing that took place and the decisions made by the profiling tools were explicit.

When testing a Machine Learning algorithm, we might understand the algorithm used but not the features that the algorithm has chosen to select and base its outputs on. This can give the appearance that the processing involves more tacit knowledge than explicit knowledge (of the kind the DNA profilers relied on).

What is more, these algorithms often learn over time to (hopefully) improve the quality of their outputs; so a set of inputs entered today may yield one response but the same set of inputs entered tomorrow (or yesterday) may yield different responses. This gives the appearance of these algorithms being somewhat non-deterministic.

Now, there are traditional statistical tests and cross-checks we can perform to check the outputs against a set of historical data (for example), but is that enough? How do we test such systems against data they might encounter when we don’t have suitable historical data? How do we select our tests from the wide range of possible inputs? What Oracles can we use when we don’t have comparable products and may not fully understand the tacit knowledge that the machine has learnt?
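As a concrete (and deliberately simplistic) example of the kind of cross-check I mean, the sketch below scores the current model against a held-out set of historically labelled cases and flags a drop below an agreed threshold. The model interface, the data and the threshold are assumptions made for illustration, and, as the questions above suggest, a check like this is only a starting point.

```python
# Hypothetical sketch of a cross-check against historical data: score the
# current model on a held-out set of historically labelled inputs and fail
# if agreement drops below an agreed threshold. The model interface, data
# and threshold are assumptions, not a complete testing strategy.
def agreement_with_history(model, historical_cases, threshold=0.9):
    """historical_cases is a non-empty list of (inputs, expected_output) pairs."""
    matches = sum(1 for inputs, expected in historical_cases
                  if model.predict(inputs) == expected)
    agreement = matches / len(historical_cases)
    print(f"Agreement with historical data: {agreement:.2%}")
    return agreement >= threshold
```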

It’s an interesting problem (and may have some simple solutions), and with the rise in the use of these types of algorithms it is one that we as testers need to consider so we can begin to answer the “How do we test that?” question.

I’ve submitted workshops on this to a couple of conferences this year so I’m hoping we’ll have some interesting sessions to explore this challenge further.

In the meantime, I’m more than happy to explore this topic with anyone who is interested.

The nature of testing

For the last year, I’ve had to spend a lot of time driving to various client sites, and this has meant lots of time to reflect and think deeply on topics. Normally I struggle to sit still and reflect, but being confined to a car for 2–3 hours a day solves that problem. So the last area that I’m actively investigating came from a line of thought during a long drive and was around how we reason when we test, but lately this has morphed into something else (more on that in a bit).

Initially I could see that we are often engaged in deductive reasoning about the system we are testing; by this I mean we have a model (or notion or theory or hypothesis) of what we expect and we design a set of tests to explore the physical application we are testing. The tests work towards building a consensus in our mind as to whether the physical system meets our model of the system or not (this is the simplified version and I might write a longer post on this in the future).

The other type of reasoning I’ve encountered is inductive reasoning (not 100% sure of the terminology yet) where we examine the physical system to build up a model of how it behaves and then reason about whether we think there is a problem.

My thoughts are that we are generally in one of these modes of reasoning (and there may be more) and often jump between the two.

This is the 5 minute overview so I might write some other posts on this as it seemed to lead to some interesting insights.

While I found this idea of two models of reasoning interesting and insightful, it led me to another question (probably linked to my renewed interest in Artificial Intelligence) – that question is “Can we teach a machine to test?”

This might sound fanciful, and while we may never be able to fully replace a human tester, we may be able to build a better class of tool to augment testers. The current range of tester tools is fairly rudimentary and hasn’t changed all that much since I started my professional testing career in the 90s.

Traditionally, machines have been very good (and much better than us humans) at calculations and following rules but have been notoriously poor at tasks that require pattern matching, learning abstract concepts and applying general ideas to specific instances (all key skills in testing). While this was generally true when I studied Artificial Intelligence, this has changed so much in recent years that I started to wonder what it would take for a machine to learn how to test.

As an example, this week I read about an online service (Clarifai) that can process videos in real time to identify key objects and events, produce a narrative of what is going on and formulate some idea of the sentiment of the video clip. You can read about this on MIT Technology Review.

When you think about the cognitive processes involved in this feat (pattern recognition, understanding concepts and meaning), it is not a huge leap to a machine learning to explore an application and to detect and report certain types of observations.

At the moment this is just an academic topic, but I’m already planning experiments to conduct in this area and will no doubt be ready to talk at conferences about this topic and my findings later this year or into 2016 – though I’ll be more than happy to talk about how the research is going before then.

Going back to my second topic of interest (testing adaptive/learning algorithms), the system from Clarifai is the type of system that we as testers may be asked to test in the future. If you think about the challenges of testing such a system then you’ll understand why I think it will be a future challenge for many testers and is already an existing challenge for some.

Wrapping up

So that’s pretty much what I’m working on at the moment on the work front (aside from the regular, demanding project work, thinking about launching a new business venture, developing additional training material and getting ready to talk at conferences). All this probably explains why I don’t tweet often.

If any fellow testers are interested in any of these topics, I’m open to talking about them either online or arranging to meet up at conferences and the like. If you don’t already have my email address you can contact me at @Bill_Matthews.