Demo: Image enhancement with getUserMedia

Enhance 224 to 176. Enhance, stop. Move in, stop. Pull out, track right, stop. Center in, pull back. Stop. Track 45 right. Stop. Center and stop. Enhance 34 to 36. Pan right and pull back. Stop. Enhance 34 to 46. Pull back. Wait a minute, go right, stop. Enhance 57 to 19. Track 45 left. Stop. Enhance 15 to 23. Give me a hard copy right there.

It seems demos of the new WebRTC getUserMedia() are all the rage these days. The bug’s bitten me too, so I took Tim’s green-screen demo and hacked it up to my own ends…

A common technique in photography — especially astrophotography — is “image stacking.” The stuff you DON’T want in your image is transient and random noise, whereas the scene you DO want in your image can be reliably and repeatedly captured. So, the basic idea is to take a bunch of photos, changing as little as possible, and then use image processing to combine/average (“stack”) them together. Once you start thinking about capturing photons this way, it’s possible to produce images that far exceed what one would normally expect.
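The per-pixel math is the whole trick: each frame is the true scene plus random noise, and averaging N frames keeps the scene while shrinking the noise by roughly 1/√N. A minimal sketch (this `stackPixels` helper is illustrative, not code from the demo):

```javascript
// Average the same pixel's value across several frames.
// The scene contributes the same value each time; the random
// noise contributes jitter that cancels out as frames pile up.
function stackPixels(samples) {
  const sum = samples.reduce((a, b) => a + b, 0);
  return Math.round(sum / samples.length);
}

// One pixel captured five times, jittered by sensor noise:
console.log(stackPixels([100, 104, 96, 102, 98])); // → 100
```

The jitter averages away and the underlying value (100 here) survives.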

I’ve implemented a simple-and-dumb version of image stacking using HTML5’s getUserMedia() and canvas. Let me illustrate with some pictures.
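The core of such an approach can be sketched as follows — this is a hypothetical `stackFrames` helper, not the demo’s actual code. It assumes each frame arrives as an ImageData-like object; in the browser you’d get one per frame by drawing the video element to a canvas with `ctx.drawImage(video, 0, 0)` and then calling `ctx.getImageData(0, 0, width, height)`:

```javascript
// Average a list of same-sized frames, channel by channel.
function stackFrames(frames) {
  const { width, height } = frames[0];
  const acc = new Float64Array(frames[0].data.length);

  // Sum every RGBA channel of every pixel across all frames...
  for (const frame of frames) {
    for (let i = 0; i < frame.data.length; i++) {
      acc[i] += frame.data[i];
    }
  }

  // ...then divide by the frame count to get the average.
  const out = new Uint8ClampedArray(acc.length);
  for (let i = 0; i < acc.length; i++) {
    out[i] = Math.round(acc[i] / frames.length);
  }
  return { data: out, width, height };
}
```

Back in the browser, the result can be painted with `ctx.putImageData(new ImageData(result.data, result.width, result.height), 0, 0)`. Accumulating in a `Float64Array` matters: summing 50 frames directly into 8-bit pixel data would clamp at 255.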

First I pointed my super-cheap USB webcam at a thing on my desk — which was dimly lit and quite stationary. Here’s a typical frame of captured video:

Doesn’t look very good; it’s a typical poor-quality webcam image. Next, using my little hack, I captured 50 frames at 640×480 and averaged them together:

Yum. Much better. The image is overall much cleaner; the random-color “static” is suppressed in favor of flat colors and smooth gradients. (A little too much so, making it look cartoonish. I’m not sure if this is because of my cheap camera, a dumb algorithm, or something else.) But this image isn’t just smoother — it’s also sharper. Lines and edges are now crisp instead of blurry and mottled. Text labels that were barely readable before are now easily readable. This is particularly evident in the tiny “◃SCALE▹” and “◃POSITION▹” labels.

The differences are even more obvious if you boost the brightness of the above images in Photoshop. The already-bright areas are now washed out, but detail in the darker areas is easier to see. In particular, the area from the column of dark buttons to the lower-left corner of the pic is much more detailed than in the single-frame capture:


Now, just point your space telescope at a dark patch of the sky, use similar techniques to stack up 23 days’ worth of exposure, and you get this. Neat.

If you’d like to play around with this demo yourself, try it out in your browser here. I’m curious if people can find improvements to the averaging (in either speed or quality).