Wednesday, December 05, 2007

A Systematic Look at Testing, Perhaps

Seeing the release of Firefox, followed quickly by another release, made me wonder how regression testing is going at Mozilla. Of course, there are no simple answers to be found for any of the issues. We all want fewer bugs, and for bugs to get fixed more quickly. Nor do I feel that some deep, dark, hidden flaw is coming to the fore here. Things could be better. No shock there. Things could be worse. Lots of people care enough to make sure they are not, and MoCo moves from one day to the next. We all love the products that come out of Mozilla, and as with anything one cares about, we are frustrated about some things. Such is life.

But with every new test harness being talked about, and with every new set of tests being brought in, the same questions and issues seem to come up for me. Every new test harness I see being developed for MoCo technologies brings me hope, but it is rarely clear why the last one is no longer being talked about. The same issues seem to be stumbled over. What happened to the lessons learned in the past? One problem with wiki-based communities is that people talk a lot about what they are doing; they tend not to talk about what they are not doing. Search for any kind of solution in Mozilla's information space, in any problem area, and one finds at least four or five different efforts. Only one or two may be current, but nothing gets written about why the others are not moving forward, or why they failed. It may just be against our natures to talk about things that did not work. One just has to puzzle out the things that are real from the things that merely look real.

In testing, it often seems to me that one needs to be clear about what kinds of tests one wants, who can create, develop, or maintain which kinds of tests, and why different kinds of tests should or should not be run. If one is not clear about these things, tests get developed that never get run; tests get run but do not exercise the right things, or their results get ignored; and lots of people end up working hard to no good effect. The result is that there is no joy.


(How hard is it to/Who can) develop a new test?
(How hard is it to/Who can) modify an existing test?
(How hard is it to/Who can) identify a feature or fault for which a test should be created?
(How hard is it to/Who can) evaluate the benefit of creating a test, or the effectiveness of running a particular test over time?
(How hard is it to/Who can) run a test?
(How hard is it to/Who can) see that a test has failed?
(How hard is it to/Who can) identify the cause of a test failure, or interpret the failure?

I would suggest that each of these questions defines an axis in the "problem space", the space of possibilities in which to locate problems and suggest solutions for Mozilla products. There are risks to be managed in the Firefox source base, and possibilities to be enabled. The cost of the risks to be averted, and the possible improvements to the product that testing enables, have to be weighed against the costs of testing. I think the answers to these questions can be used to categorize various quality efforts, and give us a way to judge whether testing is doing what it should do and how much should or should not be spent on it.
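To make the idea concrete, here is a small sketch of how one might score a quality effort along axes like the ones above. Everything here is invented for illustration: the axis names paraphrase the questions, and the scores, the example efforts, and the "treat an unscored axis as middling" rule are assumptions, not anything Mozilla actually does.

```python
# Illustrative sketch only: axis names paraphrase the questions in the
# post; the scores and aggregation rule are invented for the example.
from dataclasses import dataclass, field

AXES = [
    "develop_new_test",
    "modify_existing_test",
    "identify_testworthy_feature",
    "evaluate_test_benefit",
    "run_test",
    "see_test_failure",
    "interpret_failure",
]

@dataclass
class TestEffort:
    name: str
    # Difficulty along each axis, 1 (easy) .. 5 (hard).
    difficulty: dict = field(default_factory=dict)

    def total_cost(self):
        # Crude aggregate: sum difficulty over all axes, treating an
        # unscored axis as middling (3).
        return sum(self.difficulty.get(axis, 3) for axis in AXES)

# Hypothetical scorings of two hypothetical efforts.
reftest = TestEffort("reftest", {"develop_new_test": 2, "run_test": 1, "interpret_failure": 4})
perf_suite = TestEffort("perf suite", {"develop_new_test": 4, "run_test": 3, "interpret_failure": 5})

for effort in sorted([reftest, perf_suite], key=TestEffort.total_cost):
    print(effort.name, effort.total_cost())
```

Even a toy scoring like this forces the comparison the paragraph asks for: a cheap-to-run but hard-to-interpret suite and an expensive-to-run suite end up as comparable points in the same space, instead of being argued about in incommensurable terms.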

I was going to take a stab at answering one or more of the questions above, but I will post this as a start. Answers can come later, especially since I do not know all of the answers. Nor am I sure I know all the questions. And if anyone wants to suggest either questions or answers, please do. And if anyone can think of a way to draw a representation of a, say, 12-dimensional space, so I can draw pictures based on something like the questions above, let me know that too.
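On drawing a many-dimensional space: one standard answer is to give each dimension its own axis and render each point as a profile across all of them (parallel coordinates, or star/radar plots). As a text-only sketch of that idea, with made-up axis names and scores:

```python
# Sketch: render a point in a many-dimensional space as one bar per
# dimension, one row per axis. Axis names and scores are invented.
def ascii_profile(scores, width=5):
    """Render a dict of axis -> score (1..width) as one text row per axis."""
    lines = []
    for axis, score in scores.items():
        bar = "#" * score + "." * (width - score)
        lines.append(f"{axis:<20} [{bar}]")
    return "\n".join(lines)

print(ascii_profile({"develop": 2, "run": 1, "interpret": 4}))
```

Laying several such profiles side by side gives a crude but readable picture of a dozen dimensions at once, which is about as well as 12 dimensions can be drawn on a flat page.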

Thanks. More to come.