Tidying up .mozconfig

Long long ago, when I was first learning how to compile Mozilla from source, through some magical process I ended up with a .mozconfig file which worked. Maybe I copied it from DevMo, or maybe someone gave me their copy. I really don’t remember!

Anyway, now that the build process is ever-so-slightly more familiar to me, I’ve cleaned up some of the .mozconfig files I have around. One line I just looked at was:

mk_add_options MOZ_CVS_FLAGS=@CVS_FLAGS@' -q -z4'

The -z4 specifies the compression level used when transferring the source. Using 0 disables compression, and 9 uses maximal compression. In theory, it’s a tradeoff between CPU time and network speed. So what’s the best value to use?

A quick benchmark seems to indicate that it doesn’t matter. -z4, -z8, and -z9 all took 135 seconds (+/- a couple seconds) to check out the Firefox 3 trunk code. A test with -z0 was about 10 seconds slower, but I didn’t repeat it to see if that was a quirk.
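The CPU-versus-bandwidth tradeoff is easy to see with zlib itself, the same compression CVS uses. Here’s a rough sketch (the sample data is just a stand-in for source code, not an actual checkout):

```python
import time
import zlib

# Moderately repetitive sample data, standing in for source files.
data = b"function test() { return 42; }\n" * 5000

for level in (0, 4, 9):
    start = time.time()
    compressed = zlib.compress(data, level)
    elapsed = time.time() - start
    print("level %d: %5.1f%% of original size, %.4fs"
          % (level, 100.0 * len(compressed) / len(data), elapsed))
```

Level 0 actually comes out slightly *larger* than the input (stored blocks have a little framing overhead), while the difference between the middle and high levels is small for already-repetitive text — consistent with the checkout timings above.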

So, I’m just going to remove this line from my .mozconfig, and rely on the default (which is “-q -z3”). Yay simplicity. Anyone else find similar results? Perhaps -z9 helps at dialup speeds? [My system is a 2.16 GHz MacBook Pro, connected over 802.11g WiFi.]

Images In Motion

A few months ago I noticed that the video presentations on DevMo were rather unwieldy to watch… A 280 MB QuickTime download doesn’t exactly offer immediate gratification! So, I uploaded all the videos on that page to video.google.com, and today *finally* got around to editing the DevMo page to link to them. Share and Enjoy.

I considered putting them on YouTube, but the video quality there is lower and they had a size/time limit to uploads.


One more thing…

I’ve been working on a little side project for a while — an editor for creating Animated PNGs — and I see Dave let the cat out of the bag. Just as well, as I kept wanting to fix one more thing and have failed at the whole “release early, release often” thing. Boom. Andrew has a page where you can download it, and describes a bit how to use it.

I’ve also created a hacky little demo page showing some samples of APNGs. I’ve more ideas to add, as there are some clever things that APNGs should make possible.

This APNG editor is a decent start, but still needs a lot of work and polish to be a solid tool. Some things I’d like to add at some point…

  • Frame reordering
  • Ability to stop the animation and step forwards/backwards frame by frame
  • Colormap support (for 1 to 8-bit APNGs), which should allow for smaller file sizes. I think the spec allows this, but encoder changes would be needed.
  • Interframe diffing, so that video-like animations can be more efficiently encoded. I’m rather curious if the ability of APNG to do alpha blending will help out here… For example, to have an Animated GIF fade to black, each frame would have to be encoded as a full image (darkening as needed). But with an APNG, you could just overlay successive frames of semi-transparent black (which would compress very well).
  • Automatic cropping and repositioning of frames with all-transparent pixels along one or more borders.
  • Sandboxing of script execution, so that sharing rendering scripts would be safe. Declarative canvas and SVG integration would also be exceedingly awesome.
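The fade-to-black idea above is easy to sketch numerically. A standard “over” blend of a semi-transparent black source onto a pixel just scales that pixel by (1 − alpha), so repeating one tiny, highly compressible black frame produces a smooth exponential fade. (This is a minimal sketch of the compositing math, not the editor’s actual encoder.)

```python
def composite_black_over(pixel, alpha):
    """Standard 'over' blend of a black source with the given alpha:
    result = 0 * alpha + pixel * (1 - alpha)."""
    return pixel * (1.0 - alpha)

pixel = 200.0   # some starting channel value, 0-255
alpha = 0.2     # a 20%-opaque black overlay frame

for frame in range(10):
    pixel = composite_black_over(pixel, alpha)

print(round(pixel, 1))  # after 10 overlays, well on its way to black
```

Ten copies of the same semi-transparent frame take the pixel down to about 10% of its original value — versus an animated GIF, which would need ten distinct full frames to do the same fade.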

Patches welcome!

Confluence of thoughts…

I read Gerv’s post from earlier today (“Choice considered harmful”), as well as the predictable replies to it. It’s a rich topic to debate, but one thought that particularly strikes me is that with computers running billions of instructions per second (and increasing), software (un?)naturally grows in size and complexity to keep those CPUs warm and toasty… So we, as software engineers, need to continually increase the instructions-per-user-decision ratio, or else things spiral out of control. And just breaking even isn’t good enough if you’re interested in improving usability.

Unfortunately that’s often perceived as “removing features” and “limiting what users can do.” Done improperly, that can be the case. But I think more often it’s… Well, let me avoid that rathole and instead run off on a tangent. 🙂

In my last blog post, I had mentioned having problems last year getting Solaris working right in a Parallels VM. Alfred Peng (from Sun) commented that pre-installed VM images are now available from Sun, which would have certainly saved me some time. 🙂 But that’s a great idea for other reasons — it makes it MUCH easier to try out the software, by avoiding the whole hassle of having to install it. Linux also ran with this idea by making “Live CD” images available, so you could try Linux by booting a CD and not having to commit to installing it over your current system. I think some distros are making VM images available now, and there’s a VMware appliance available with the Nokia N800 development platform pre-installed, which is an interesting idea in lowering the threshold to starting development.

Now, let’s swerve this post back towards Firefox…

Somewhere, recently, I caught part of a discussion with Mike Beltzner talking about improving the first-run experience with our browser. It’s been a while since I installed Firefox on a fresh new system, but as I remember it you’ve got to run the installer, click through a bunch of installer wizard screens, confirm importing your IE bookmarks, decide if you want to make FF your default browser, wade through security dialogs the first time you enter and leave an SSL site, etc. That’s not a terribly pleasant experience (especially for someone just curious about what this Firefox thing is all about), and doesn’t give a good impression of what using Firefox is really like.

We can fix a lot of the first-run issues with tweaking how things are done. Shipping a VM image with Firefox pre-installed isn’t really needed. 🙂 But I do wonder if there’s a way to eliminate, or at least minimize, the install process. OS X is nice in that you can just drag Firefox.app to the Desktop and run it, so there’s a minimum of hassle in “installing” an application. I’ve run across people hesitant to try Firefox because they don’t want to install it over IE, not really realizing you can just try it. I wonder how many users bail out of the process before Firefox loads a single web page.

Solaris redux

Allow me to hoist my suspenders and stroke my scruffy gray beard for a few moments…

I’ve been a Solaris user for a long time now. I started with SunOS 4.1.3 in college, hacked on a Solaris 2.5.1-based proxy firewall (ANS InterLock, w00t!) for a few years, helped get that product working on Solaris 7, and then ended up at Sun Microsystems during the development of Solaris 9 and 10. On my own time I began working on Linux, as it matured and Solaris’ future became dim. And now, at Mozilla, I’m happy with OS X (aka unix with a sensible interface).

But more recently, I’ve had a Solaris itch growing. Solaris x86 — once the unloved bastard step-child — has clung to life through some rough times, and today is an entirely usable desktop OS. Kudos to the folks who have made it compatible with lots of hardware and Linux apps. My return to Solaris has had a few false starts, though… I struggled to get it working under Parallels (on OS X) last year before losing interest, then got it working on a spare PC until the dying video card made it unbearable. Then when Fred finished his internship at Mozilla, I swiped his PC and scrounged some spare parts to get a respectable system built. [Dual Xeon @ 2.2 GHz, 1 GB RAM].

One reason I’ve been interested in Solaris again is that they’ve got some really spiffy technologies, some of which should be appearing in the next OS X release as well. Most prominently: ZFS and DTrace.

I’ve already got ZFS working on my “new” box… What a joy! If only the rest of Unix was this slick to use. Here’s what I did:


1. Dug through a box of old hard drives, and found 3 old-but-serviceable 9.1GB drives. Tossed them into the case, hooked up the cables, and booted.


2. Created a ZFS storage pool named “build”:

# zpool create build raidz c0t2d0 c0t4d0 c0t8d0

# zpool status -v build
  pool: build
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        build       ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
            c0t8d0  ONLINE       0     0     0

errors: No known data errors


3. Not strictly needed; but created a ZFS filesystem in the pool for Firefox builds, and mounted it in a convenient place…

# zfs create build/firefox

# zfs set mountpoint=/export/home/dolske/ff build/firefox

# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
blob                  88.7M  60.9G  24.5K  /blob
blob/home             88.6M  60.9G  88.6M  /export/home
build                 1.92G  14.6G  32.6K  /build
build/firefox         1.92G  14.6G  1.92G  /export/home/dolske/ff

And that’s it! As far as command-line filesystem administration goes, that’s dead sexy. No formatting or partitioning needed. Just a few simple commands, and I’ve got a fast filesystem that’s striping across 3 devices, fault-tolerant, with error detection and correction. And that’s just the beginning of what ZFS can do.
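The capacity numbers work out roughly as you’d expect for raidz: one drive’s worth of space goes to parity, leaving about (n − 1) drives usable. A back-of-the-envelope sketch (actual ZFS figures come out a bit lower, thanks to metadata, reserved space, and GB-vs-GiB accounting):

```python
def raidz_usable_gb(num_drives, drive_gb):
    """Rough usable capacity of a single-parity raidz pool:
    one drive's worth of space is consumed by parity."""
    return (num_drives - 1) * drive_gb

# The three 9.1 GB drives above:
print(raidz_usable_gb(3, 9.1))  # roughly 18 GB raw, before overhead
```

That lines up reasonably with the ~16.5G total (used + avail) that `zfs list` reports for the pool.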

You can also get some nice stats while the pool is in use… Here I’m starting a build, with stats dumped every 15 seconds:

# zpool iostat build 15
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
build        821M  24.4G      0      0      0      0
build        821M  24.4G      5      0  70.0K  8.53K
build        821M  24.4G      1     28  35.4K   128K
build        824M  24.4G     57    149   148K   181K
build        827M  24.4G     90    147   182K   227K
build        828M  24.4G     31     43   188K   211K
build        839M  24.4G     16     38   115K   663K


I’m not really maxing out the drives during a build, but it’s fun to watch. The surprisingly-readable ZFS Administration Guide has more info on what goodies ZFS provides.
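If you want to graph or script against that iostat output, the K/M/G suffixes need converting back into plain bytes first. A tiny helper for that (my own throwaway, not part of any ZFS tool — and it assumes the usual binary 1024-based units):

```python
def parse_size(s):
    """Convert an iostat-style figure like '70.0K' or '821M' to bytes,
    assuming binary (1024-based) suffixes."""
    suffixes = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    if s and s[-1] in suffixes:
        return float(s[:-1]) * suffixes[s[-1]]
    return float(s)

print(parse_size("70.0K"))  # 71680.0
print(parse_size("821M"))   # the pool's "used" column, in bytes
```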