Seeing the release of Firefox 2.0.0.10, followed by the rapid release of 2.0.0.11, made me wonder how regression testing is going at Mozilla. Of course, there are no simple answers to be found for any of the issues. We all want fewer bugs, and for bugs to get fixed more quickly. Nor do I feel that some deep, dark hidden flaw is coming to the fore here. Things could be better. No shock there. Things could be worse. Lots of people care enough to make sure they are not. And MoCo moves from one day to the next. We all love the products that come out of Mozilla, and as with anything one cares about, we are frustrated about some things. Such is life.
But with every new test harness being talked about and with every new set of tests being brought in, the same questions and issues seem to come up for me. Every new test harness I see being developed for MoCo technologies brings me hope, but it is rarely clear why the last one is no longer being talked about. The same issues seem to be stumbled over. What happened to the lessons learned in the past? One problem with wiki-based communities is that people talk a lot about what they are doing; they tend not to talk about what they are not doing. Search for any kind of solution in Mozilla's information space, in any problem area, and one finds at least four or five different efforts. Only one or two may be current, but nothing gets written about why the others are not moving forward, or why they failed. It may just be against our natures to talk about things that did not work. One just has to puzzle out the things that are real from the things that just look real.
In testing, it often seems to me that one needs to be clear about what kinds of tests one wants, who can create, develop, or maintain which kinds of tests, and why different kinds of tests should or should not be run. If one is not clear about these things, tests get developed that never get run, tests get run but do not exercise the right things or have their results ignored, and lots of people may be working hard to no good effect. The result is that there is no joy.
Questions:
(How hard is it to/Who can) develop a new test?
(How hard is it to/Who can) modify an existing test?
(How hard is it to/Who can) identify a feature or fault for which a test should be created?
(How hard is it to/Who can) evaluate the benefit of creating a test, or the effectiveness of running a particular test over time?
(How hard is it to/Who can) run a test?
(How hard is it to/Who can) see that a test has failed?
(How hard is it to/Who can) identify the cause of a test failure, or interpret the failure?
I would suggest that each of these questions defines an axis in the "problem space", the space of possibilities in which to locate problems and suggest solutions for Mozilla products. There are risks to be managed in the Firefox source base, and possibilities to be enabled. The cost of the risks to be averted, and the improvements to the product that testing enables, have to be weighed against the costs of testing. I think that answers to these questions can be used to categorize the various quality efforts and give a way to judge whether testing is doing what it should do, and how much should or should not be spent on it.
I was going to take a stab at answering one or more of the questions above, but I will post this as a start. Answers can come later, especially since I do not know all of the answers. Nor am I sure I know all the questions. If anyone wants to suggest either questions or answers, please do. And if anyone can think of a way to draw a representation of a, say, 12-dimensional space, so I can draw pictures based on something like the questions above, let me know that too.
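In the meantime, here is one way to make the idea concrete: record an answer for each question as a rough score. This is a purely hypothetical sketch; the field names, the 1-to-5 scale (1 = anyone can do it easily, 5 = only a few specialists can, and with difficulty), and the example numbers are mine, not a measurement of any real Mozilla test suite.

    // A hypothetical profile of one kind of test along the axes above.
    // Scores: 1 = easy / anyone can do it, 5 = hard / specialists only.
    interface TestEffortProfile {
      develop: number;          // develop a new test
      modify: number;           // modify an existing test
      identifyTarget: number;   // identify a feature or fault worth testing
      evaluateBenefit: number;  // evaluate the benefit of creating or running the test
      run: number;              // run the test
      seeFailure: number;       // see that the test has failed
      diagnoseFailure: number;  // identify the cause of, or interpret, a failure
    }

    // Made-up example: how a canvas reference test might score on these axes.
    const canvasReferenceTests: TestEffortProfile = {
      develop: 2,
      modify: 2,
      identifyTarget: 3,
      evaluateBenefit: 4,
      run: 2,
      seeFailure: 1,
      diagnoseFailure: 3,
    };

Even a handful of records like this, one per kind of test, would let someone chart where a given effort is cheap and where it is expensive, which is about as close as I can get to drawing pictures in that many dimensions.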
Thanks. More to come.
Wednesday, December 05, 2007
2 comments:
Well, specifically in this instance, there are automated tests for the failure that caused 2.0.0.11 to be created. The problem is that these reference tests, which were created by developers working on Canvas (I believe), only run on the trunk, which is Firefox 3. The failure was in a single method that Canvas uses.
Most of the ongoing work to improve unit tests and automated coverage has been future-focused, so it has happened in the Firefox 3 space. That's where the vast majority of development work is occurring. That's not a justification but simply a statement of fact.
We're looking at ways to address this issue but don't have a short-term solution.
On the trunk: except for the test added after this regression, I can only see a single test case which uses drawImage, and it doesn't even use the drawImage(HTMLImageElement, ...) form which was broken. And that test wasn't written by Mozilla developers.
(I've posted some more comprehensive tests here in the hope that they will help prevent future canvas bugs, though that doesn't solve the general problem for the rest of the browser.)
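A stripped-down sketch of the kind of check involved might look like the following. This is only an illustration, not one of the tests mentioned above; it assumes a browser page and a small opaque test image, and the file name is made up.

    // Minimal sketch of a drawImage regression check (illustrative only).
    // Assumes a browser environment and a small opaque test image.
    const img = new Image();
    img.onload = () => {
      const canvas = document.createElement("canvas");
      canvas.width = 16;
      canvas.height = 16;
      const ctx = canvas.getContext("2d");
      if (!ctx) {
        throw new Error("no 2d context");
      }
      // The overload in question: drawing an HTMLImageElement onto the canvas.
      ctx.drawImage(img, 0, 0);
      // If drawImage silently did nothing, the canvas stays transparent.
      const alpha = ctx.getImageData(0, 0, 1, 1).data[3];
      console.log(alpha !== 0 ? "PASS: image was drawn" : "FAIL: canvas is still blank");
    };
    img.src = "red-16x16.png"; // hypothetical 16x16 opaque test image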