Tuesday, July 31, 2007

Reading some (but not all) of the Firefox/Thunderbird discussions, I find myself sanguine. I am not finding anything to be upset about. I think the current discussion may be more about the social structure of Mozilla than about any technical issue.
Looking at the technologies, it might be helpful if MoCo became, or came to see itself as, a service-provider for projects like Thunderbird. It could be "sourceforge with benefits", or perhaps a "sourceforge with a conscience". This seems obvious.
What makes it hard for MoCo to deal with Thunderbird as it is right now? Open-source avoids some problems one sees with commercial development. Commercial development seeks local maxima of utility with minimal investment of money. But MoCo has a similar issue. It seeks local maxima with minimal investment of social capital. What does this mean? If one looks at how Mozilla works, one sees this played out in many ways. I see some of them, but I do not think I am the best one to call them out.
One can identify, though, the "Mozilla way" of doing things. I noticed this when I was at Apple, and I still see it today. The "Mozilla way" is to use quirky tool sets, often old tool sets, with lots of hand-crafted modifications for particular issues. Enabling flexibility in these systems is not often a priority. Often it seems to be easier to hand-craft a solution to deal with the hand-crafted solution from two years ago. Finding a general solution that does not require tweaking does not seem to be a priority. A general solution rarely has social benefit. Individual tweaks, each to satisfy a different social entity, do have social benefits.
Why are more things not better documented? There is no social benefit to documenting that which is known by the core group. MoCo has, in general, a resistance to making things obvious. But social networks are built on shared knowledge, and if that knowledge requires a "maturation ritual", a "rite of passage", to find, all the better.
So, why can't Mozilla keep Thunderbird up-front, alongside Firefox? Perhaps it is not a technical issue. Anthropologists tell us that primates form social networks whose size varies with the ratio of brain size to body size. Given MoCo's reliance on social cohesion, Mozilla may not be able to concentrate on Thunderbird because the social network required is too big. Is Mozilla's pervasive reliance on social cohesion a good thing?
Tuesday, July 24, 2007
automated leak testing - followup
I have filed a bug (389361 - reproducible leak accessing page www.1ting.com) as a result of my leak testing. There are actually 6 URLs in that bug, but I cannot differentiate the leaks, so I have reported them together. I will have to figure out the leak-gauge.pl output so that I can make sense of this and report better bugs.
The leaks tool available on the Mac allows one to set an environment variable, MallocStackLogging, that makes the leaks output much more verbose. It will potentially tell you a lot about the objects being leaked. Unfortunately, this mostly works for Objective-C objects. C++ seems to be pretty good at obfuscating itself, or Apple has not done the work to help here, or both. It's a shame.
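For anyone who has not used it, a session goes roughly like this (paths, the profile name, and the process name are illustrative and may differ by build):

% # MallocStackLogging makes malloc record a backtrace for every
% # allocation, which leaks and malloc_history can then report.
% MallocStackLogging=1 /Applications/Mozilla/Minefield_3.0a6pre_20070623.app/Contents/MacOS/firefox -P testprofile &
% leaks firefox-bin | head
% # and, given an address that leaks reported:
% malloc_history firefox-bin 0x01234567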
Well, the long and short of it is that the leak-gauge.pl output does not make very much sense to me yet. :-(
I was not being very creative in how I got URLs. Someone pointed me to the Alexa 500 and I hit pay dirt. Here are the sites I filed the bug against:
* http://www.1ting.com
* http://www.chinaren.com/
* http://www.cmfu.com/
* http://www.hurriyet.com.tr/
* http://www.yahoo.com/
* http://www.zaycev.net/
By the way, I am trying to get my add-on up onto AMO. This extension does nothing but quit Firefox after any page load. This is useful when launching Firefox from a script. The extension is available at my site and in the AMO sandbox (login required, or it may not be reachable). If anyone wants to review, after reading the docs, please do. I am not sure what the trigger is for getting it accepted on AMO. We will see.
I find it interesting that these leaks are sometimes not reproducible. There were a couple of dozen URLs in the list which generated leaks, but only once. As I re-tested URLs, sometimes they would leak every other time (like www.yahoo.com) and sometimes every third access (like www.1ting.com) and sometimes it was just more random than that. I did not list a URL if I could not get it to leak again. I am re-launching the browser each time, though, so I thought the testing would be more reproducible. It is more reproducible, but it is still surprising to me how many one-time leaks there were. There are some questions I am going to investigate.
* Does it matter if I randomize the list of URLs I am checking?
* If I automatically retry leaking URLs a certain number of times, how many times should I check? (A sketch of such a retry loop follows this list.)
* What files in the app bundle or in the profile directory change with an instance launch and shutdown? Or,
* What state is being held between app launches?
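For the retry question, the driver could be as dumb as this (the -app/-profile/-url argument names reflect my local modifications to leak-gauge.pl and are illustrative; the stock script does not take them):

% cat retry.sh
APP=/Applications/Mozilla/Minefield_3.0a6pre_20070623.app/Contents/MacOS/firefox
for try in 1 2 3 4 5; do
    perl leak-gauge.pl -app "$APP" -profile /tmp/testprofile -url http://www.1ting.com
done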
We'll see.
Thursday, July 19, 2007
automating leaks testing
I have managed to get some ideas of mine working in code. It required a couple of things.
First, why. The leak-gauge.pl script by dbaron is obviously very useful for tracking leaks in Firefox. I was just reminded of this while reading this blog post by Steve England.
But reading Steve's post, it looks as though he did the following:
1) invoke leak-gauge.pl on some URL
2) do a bunch of stuff on the resulting page
3) quit the browser
4) look at the results
5) perhaps file a bug
6) lather, rinse and report
If one wants to automate this testing, some of these steps raise issues.
"invoke leaks-gauge.pl on some URL"
Right now, one uses a web page to tell leak-gauge.pl what to do. It is a manual process, hence not automatable. I modified the leak-gauge.pl script so that you give it an application executable, a profile directory, and a URL, and it launches the browser for you. Another script goes through a list of apps and a list of URLs and calls the modified leak-gauge.pl with the Cartesian product of these. This could all be done smarter than I am doing it.
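The driver amounts to a nested loop; a sketch (again, the -app/-profile/-url argument names for the modified leak-gauge.pl are illustrative, not the stock interface):

% cat driver.sh
# Run every app against every URL: the Cartesian product.
while read app; do
    while read url; do
        perl leak-gauge.pl -app "$app" -profile /tmp/testprofile -url "$url" < /dev/null
    done < urls.txt
done < apps.txt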
What URLs can I run it on? Not wanting to answer that right now, I just looked at my copy of the trunk and used all the html files inside the layout/reftests directory. Right now this is 1081 URLs. I used 5 different applications, those being nightlies from 2007/01/30, 2007/03/24, 2007/05/23, 2007/06/15, and 2007/07/13.
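Collecting those files as URLs is a one-liner from the top of the tree (assuming file: URLs are acceptable here):

% cd ~/mo/trowser/mozilla
% find layout/reftests -name '*.html' | while read f; do echo "file://$PWD/$f"; done > urls.txt
% wc -l urls.txt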
"do a bunch of stuff"
Steve was testing extensions. Indeed, if one wants to test the memory footprint of extensions, one probably has to interact with the UI to cause the extension to do its stuff. I have no solutions to this quandary. I can say that it would be useful to use leak-gauge.pl just for single-page loads and this is automatable now.
"quit the browser"
This is not hard, but it is a bother. There are lots of ways to do it, but many of them make testing hard for various reasons. My solution was to create a very small extension, which will be up on AMO very soon. All it does is wait for any page to load and then quit the browser. So, if one invokes Firefox from the command line and passes in a URL, Firefox launches, loads the page, and quits.
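With the extension installed in the test profile, a single page load becomes a synchronous command (paths illustrative):

% # This returns once the page has loaded and the browser has quit itself.
% /Applications/Mozilla/Minefield_3.0a6pre_20070623.app/Contents/MacOS/firefox -profile /tmp/testprofile http://www.mozilla.org/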
"look at the results"
So, I have a script that takes the output from the test run and outputs a couple of pages. Of course, anybody who knows me would know I will say this should be in a database, since I think everything should be in a database, but that is for later.
For full results, see http://www.wykiwyk.com/mozilla/leakTesting/testlog_20070718_byApp.html and http://www.wykiwyk.com/mozilla/leakTesting/testlog_20070718_byURL.html.
The summary results are that 2007/01/30 leaked some on a lot of pages, 2007/03/24 was slightly worse, and 2007/05/23, 2007/06/15, and 2007/07/13 did not leak at all on these files. Yay for Cycle Collection! :-)
And now, there is a Firefox extension for which someone is actually paying me, so I should get back to it.
Saturday, July 07, 2007
Starting a Firefox Instance With a Fresh Profile
Maybe it is just me, but in doing anything with development builds of Firefox, I was constantly bothered by having my profile mucked around with, or having to create fresh profiles, or having to switch the default profile for some set of tests I wanted to run.
I had slapped together little scripts to deal with some of this, but I finally got them all together in one script and it has been working for me for a while. This may be useful to others. Maybe not.
One thing different about this is that I am doing things on a Mac. I have not tried to make the script cross-platform at all. I also find the executables in the places I keep them. There is no way to find out about the different copies of Firefox that are installed on a particular machine. There is no global installation log. And even though all of the apps read from resources in the 'Application Support' folder, none of them store information there that would tell one which instances of Firefox have been run. And it is not the most elegant or clever script I have written. But it works. If anybody has suggestions for something to replace this find command, or anything else, please suggest. Until then, here it is:
http://www.wykiwyk.com/mozilla/newbrowser.pl
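(The executable-finding step presumably boils down to something like the following; this is a guess at what the script does, and the script itself is the reference:)

% find /Applications/Mozilla ~/mo -type f -name firefox -path '*/Contents/MacOS/*' 2>/dev/null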
So, the script will create a fresh profile, put some preferences into it, ask you to identify the copy of Firefox that you want to run and then launch with the new profile. Here is an example of it being run:
% newbrowser
1: /Users/ray/mo/trowser/mozilla/dist/MinefieldDebug.app/Contents/MacOS/firefox
2: /Users/ray/mo/trowser2/mozilla/dist/Minefield.app/Contents/MacOS/firefox
3: /Applications/Mozilla/Firefox_1.5.0.11.app/Contents/MacOS/firefox
4: /Applications/Mozilla/Firefox_1.5.app/Contents/MacOS/firefox
5: /Applications/Mozilla/Firefox_2.0.0.2.app/Contents/MacOS/firefox
6: /Applications/Mozilla/Firefox_2.0.0.4.app/Contents/MacOS/firefox
7: /Applications/Mozilla/GranParadiso_20070426.app/Contents/MacOS/firefox
8: /Applications/Mozilla/Minefield_3.0a2pre_20070130.app/Contents/MacOS/firefox
9: /Applications/Mozilla/Minefield_3.0a2pre_20070206.app/Contents/MacOS/firefox
10: /Applications/Mozilla/Minefield_3.0a3pre_20070222.app/Contents/MacOS/firefox
11: /Applications/Mozilla/Minefield_3.0a3pre_20070308.app/Contents/MacOS/firefox
12: /Applications/Mozilla/Minefield_3.0a3pre_20070317.app/Contents/MacOS/firefox
13: /Applications/Mozilla/Minefield_3.0a3pre_20070324.app/Contents/MacOS/firefox
14: /Applications/Mozilla/Minefield_3.0a4pre_20070413.app/Contents/MacOS/firefox
15: /Applications/Mozilla/Minefield_3.0a5pre_20070503.app/Contents/MacOS/firefox
16: /Applications/Mozilla/Minefield_3.0a5pre_20070519.app/Contents/MacOS/firefox
17: /Applications/Mozilla/Minefield_3.0a6pre_20070623.app/Contents/MacOS/firefox
which executable? 1
cmd = "NO_EM_RESTART=1 /Users/ray/mo/trowser_newbad/mozilla/dist/Minefield.app/Contents/MacOS/firefox -profile /tmp/3lwhsxqz.mozilla_20070707_215451_PDT 2>&1"
WARNING: NS_ENSURE_TRUE(compMgr) failed: file nsComponentManagerUtils.cpp, line 90
Type Manifest File: /tmp/3lwhsxqz.mozilla_20070707_215451_PDT/xpti.dat
*** Registering xpconnect components (all right -- a generic module!)
etc, etc, etc, etc, ....
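As for the preferences the script puts into the fresh profile, they are the usual automation-friendly ones; something along these lines, though the actual list lives in the script:

% # Illustrative prefs only -- see newbrowser.pl for the real set.
% cat > /tmp/testprofile/user.js <<'EOF'
user_pref("browser.shell.checkDefaultBrowser", false);
user_pref("browser.sessionstore.resume_from_crash", false);
user_pref("browser.dom.window.dump.enabled", true);
EOF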
I have also made it trivial to run a set of reftests, as so:
% newbrowser -reftest layout/reftests/reftest.list
1: /Users/ray/mo/trowser/mozilla/dist/MinefieldDebug.app/Contents/MacOS/firefox
2: /Users/ray/mo/trowser_newbad/mozilla/dist/Minefield.app/Contents/MacOS/firefox
3: /Applications/Mozilla/Firefox_1.5.0.11.app/Contents/MacOS/firefox
4: /Applications/Mozilla/Firefox_1.5.app/Contents/MacOS/firefox
5: /Applications/Mozilla/Firefox_2.0.0.2.app/Contents/MacOS/firefox
6: /Applications/Mozilla/Firefox_2.0.0.4.app/Contents/MacOS/firefox
7: /Applications/Mozilla/GranParadiso_20070426.app/Contents/MacOS/firefox
8: /Applications/Mozilla/Minefield_3.0a2pre_20070130.app/Contents/MacOS/firefox
9: /Applications/Mozilla/Minefield_3.0a2pre_20070206.app/Contents/MacOS/firefox
10: /Applications/Mozilla/Minefield_3.0a3pre_20070222.app/Contents/MacOS/firefox
11: /Applications/Mozilla/Minefield_3.0a3pre_20070308.app/Contents/MacOS/firefox
12: /Applications/Mozilla/Minefield_3.0a3pre_20070317.app/Contents/MacOS/firefox
13: /Applications/Mozilla/Minefield_3.0a3pre_20070324.app/Contents/MacOS/firefox
14: /Applications/Mozilla/Minefield_3.0a4pre_20070413.app/Contents/MacOS/firefox
15: /Applications/Mozilla/Minefield_3.0a5pre_20070503.app/Contents/MacOS/firefox
16: /Applications/Mozilla/Minefield_3.0a5pre_20070519.app/Contents/MacOS/firefox
17: /Applications/Mozilla/Minefield_3.0a6pre_20070623.app/Contents/MacOS/firefox
which executable? 1
cmd = "NO_EM_RESTART=1 /Users/ray/mo/trowser/mozilla/dist/MinefieldDebug.app/Contents/MacOS/firefox -profile /tmp/60yqqqnd.mozilla_20070707_221500_PDT --reftest layout/reftests/reftest.list 2>&1 | /usr/bin/grep '^REFTEST '"
etc, etc, etc, etc, ....
For more:
% newbrowser -usage
newbrowser [ -noUserJS ] [ -reftest <manifest file> ] [ -exec <executable> ]
%
Sunday, July 01, 2007
A Way Around my Gordian Knot with Reftests
I have been trying to figure out a way for reftests to be able to do image comparisons. I have been hampered by both my lack of experience with writing "Mozilla-ish" JavaScript and by the fact that reftest was not designed to do this. Or perhaps, reftest was designed not to do this. Indeed, the thing that makes reftest different is that it does not do exact comparisons with a hard-coded "golden" page and so it is more flexible and does not cause unnecessary failures when unimportant changes are made to, for example, layout.
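For context, a stock reftest manifest pairs a test page with a reference page that must render identically; a line like this (filenames illustrative) is the whole test:

== test41-1a.html test41-1a-ref.html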
But there is a call for exact rendering comparisons.
For example, if a certain Unicode value in an html page is supposed to render as an Urdu character, the only way to test for this is to have a human look at it and see whether it is right or not. Similarly, whether a given piece of MathML leads to correctly rendered expressions is, at this point, only checkable by a human loading a page and looking at it. I call this the "browsers and eyeballs" test harness.
But the people blocking my changes to reftest are better at blocking than I am at figuring out what would be acceptable and would also do the job, so I have figured out a way to use the current, checked-in version of reftest to do exact visual comparisons. Rather than go into the code, I will demonstrate.
First, I created a file, u1.txt, that contained a list of URLs that I pulled from Ian Hickson's "HTML 4.0 Test Suite" at http://hixie.ch/tests/html40/test41-1a.html . Following the chain that starts at that page leads one to 16 URLs.
Then I run:
% perl runRef -generate -src u1.txt -app /Users/ray/mozilla/dist/MinefieldDebug.app/Contents/MacOS/firefox -profile /tmp/12qp5rt8.mozilla_20070630_153321_PDT > u2.txt
The profile is nothing special. It is just dynamically generated so that the browser can run in an automation-friendly manner. Then u2.txt will contain something like:
URL: http://hixie.ch/tests/html40/test41-1a.html
DGT: 9e747d8a7b957f5126f07135b75f9564405da513
URI: data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAyAAAAPoCAYAAAAmy5qxAAAgAElEQVR4nOzdf1zV9 (and so on ...)
URL: http://hixie.ch/tests/html40/test41-1b.html
DGT: 1bb9c5fba9fd37d6afdced89817f0a61104b3ced
URI: data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAyAAAAPoCAYAAAAmy5qxAAAgAElEQVR4nOzdf1zV9d3/8QfXgA4 (and so on ...)
URL: http://hixie.ch/tests/html40/test41-2.shtml
DGT: 0f3e0c63e174afff4c61924384cbe2dc78722447
URI: data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAyAAAAPoCAYAAAAmy5qxAAAgAElEQVR4nOzdf1zV9d3/8QfXAA8aB (and so on ...)
URL: http://hixie.ch/tests/html40/test41-3.html
DGT: ad782efed8ab827701fb6740f2089d897f8499bd
URI: data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAyAAAAPoCAYAAAAmy5qxAAAgAElEQVR4nOzdf1zV9f (and so on ...)
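The URI lines are ordinary data: URIs, so a snapshot can be decoded back into a viewable PNG; for example:

% # Decode the first snapshot in u2.txt into a PNG and look at it.
% perl -MMIME::Base64 -ne 'if (s|^URI: data:image/png;base64,||) { print decode_base64($_); exit }' u2.txt > snap1.png
% open snap1.png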
Then I can do:
% <re-fetch and build Firefox here>
% perl runRef -test -src u2.txt -app /Users/ray/mozilla/dist/MinefieldDebug.app/Contents/MacOS/firefox -profile /tmp/12qp5rt8.mozilla_20070630_153321_PDT
URL: http://hixie.ch/tests/html40/test41-1a.html -> Ok
URL: http://hixie.ch/tests/html40/test41-1b.html -> Ok
URL: http://hixie.ch/tests/html40/test41-2.shtml -> Ok
URL: http://hixie.ch/tests/html40/test41-3.html -> Ok
...
This verifies that the html received from the URL is exactly as expected (the DGT marker above is an SHA-1 digest of the source) and that the new copy of Firefox displays exactly the same images as the old copy.
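A DGT value can be sanity-checked from the command line (assuming the digest is over the raw bytes as served):

% curl -s http://hixie.ch/tests/html40/test41-1a.html | openssl sha1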
Are there obvious issues raised by doing this kind of testing? Yes. And there are ways around them. There are some tools needed to support a workflow that uses this test tool. And I could try to guess what the objections will be now but, then again, "vita brevis, programmi longa".
This should definitely not be seen as an attempt to replace testing tools like mochitest or reftest. On the other hand, I could spend a few days and find 50,000 static URLs and be able to say whether any given check-in causes any difference in the display of any of those pages. How long would it take to find the 5000 or so important differences in those pages and write reftests for them? I do not know, but I am not going to spend a few years trying to find out.