Last Thursday (23 Oct 2014), North America was treated to a partial solar eclipse. This occurs when the moon passes between the Earth and Sun, casting its shadow onto part of our planet. For observers in the California Bay Area, the moon blocked about 40% of the sun. Partial eclipses are fairly common (2-5 times a year, somewhere on the Earth), but they can still be quite interesting to observe.
The first two eclipses I recall observing were on 11 July 1991 and 10 May 1994. The exact dates are not memorable; they’re just easy to look up as the last eclipses to pass through places I lived! But I do remember trying to observe them with some of the lackluster-but-easily-available methods of the time. Pinhole projection seems to be the most commonly suggested method, but I never got good results from it. Using a commercial audio CD (which has a thin aluminum coating) worked a bit better for me, but the results are highly variable and it can be unsafe.
I got more serious about observing in 2012. For the annular solar eclipse and transit of Venus which occurred that May/June, I made an effort to switch to higher-quality methods. My previous blog post goes into detail, but I first tried a pinhead mirror projection, which gave this better-but-not-awesome result:
(In fairness, the equipment fits into a pocket, and it was a last-minute plan to drive 6 hours, round trip, for better viewing.)
For the transit of Venus a few days later — a very rare event that won’t happen again until 2117 — I switched to using my telescope for even better quality. You don’t look through it, but instead use it to project a bright image of the sun onto another surface for viewing.
I was excited to catch last week’s eclipse because there was an unusually large sunspot (“AR2192”) that would be visible. It’s one of the larger sunspots of the last century, so it seemed like a bit of a historic opportunity to catch it.
This time I took the unusual step of observing from indoors, looking out a window. This basically allows for projecting into a darker area (compared to full sunlight), resulting in better image contrast. Here’s a shot of my basic setup — a Celestron C8 telescope, with a right angle adapter and 30mm eyepiece, projecting the full image of the sun (including eclipse and sunspots) onto the wall of my home:
The image was obviously quite large, and made it easy to examine details of the large sunspot AR2192, as well as a number of smaller sunspots that were present.
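For a rough sense of scale, here’s a back-of-the-envelope sketch (in Python) of the eyepiece-projection geometry. The focal lengths are from the setup above; the eyepiece-to-wall distance is just an assumed value for illustration.

```python
import math

# Back-of-the-envelope eyepiece-projection geometry for the setup above.
# The 1.5 m eyepiece-to-wall "throw" distance is an assumed value for
# illustration -- the actual distance isn't given here.
F_OBJECTIVE = 2032.0   # mm, Celestron C8 focal length
F_EYEPIECE = 30.0      # mm, projection eyepiece
THROW = 1500.0         # mm, assumed eyepiece-to-wall distance
SUN_DIAMETER_RAD = math.radians(0.53)   # the sun's apparent diameter

prime_focus_image = F_OBJECTIVE * SUN_DIAMETER_RAD     # ~19 mm at prime focus
magnification = (THROW - F_EYEPIECE) / F_EYEPIECE      # simple projection formula
wall_image = prime_focus_image * magnification

print(f"Solar image at prime focus: {prime_focus_image:.0f} mm")
print(f"Projection magnification:   {magnification:.0f}x")
print(f"Image on the wall:          {wall_image / 1000:.1f} m across")
```

With those assumed numbers the projected disc comes out somewhere around a meter across, so there’s plenty of room to pick out individual sunspots.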
I also switched to a 12.5mm eyepiece, allowing for a higher magnification, which made the two-tone details of the main sunspot even more obvious. The image is a little soft, but not too bad — it’s hard to get sharp contrast at high zoom, and the image was noticeably wavering as a result of thermal convection within the telescope and atmosphere. (Not to mention that a telescope mounted on carpet in a multistory building isn’t the epitome of stability — I had to stand very still or else the image would shake! Not ideal, but workable.)
As with the transit of Venus, it’s fun to compare my picture with that from NASA’s $850-million Solar Dynamics Observatory.
Observing this sunspot wasn’t nearly as exciting as the Carrington Event of 1859, but it was still a beautiful sight to behold. I’m definitely looking forward to the 21 August 2017 eclipse, which should be a fantastic total eclipse visible from a wide swath of the US!
I upgraded to a new MacBook about a week ago, and thought I’d use the opportunity to try living without Flash for a while. I had previously done this two years ago (for my last laptop upgrade), and I lasted about a week before breaking down and installing it, partly because I ran into too many sites that needed Flash, but mainly because the adoption and experience of HTML5 video wasn’t great. In particular, the HTML5 mode on YouTube was awful — videos often stalled or froze. (I suspect that was an issue on YouTube’s end, but the exact cause didn’t really matter.) So now that the Web has had a few additional years to shift away from Flash, I wanted to see if the experience was any better.
The short answer is that I’m pleased (with a few caveats). The most common Flash usage for me had been the major video sites (YouTube and Vimeo), and they now have solid HTML5 video support. YouTube previously still required Flash for some popular videos (for ads?), but either that has stopped or AdBlock avoids the problem.
I was previously using Flash in click-to-play mode, which I found tedious. On the whole, the experience is better now — instead of clicking a permission prompt, I find myself just happy to not be bothered at all. Most of the random Flash-only videos I encountered (generally news sites) were not worth the time anyway, and on the rare occasion I do want to see one it’s easy to find an equivalent on YouTube. I’m also pleased to have run across very few Flash-only sites this time around. I suspect we can thank the rise of mobile (thanks iPad!) for helping push that shift.
There are a few problem sites, though, which so far I’m just living with.
Ironically, the first site I needed Flash for was our own Air Mozilla. We originally tried HTML5, but streaming at scale is (was?) a hard problem, so we needed a solution that worked. Which meant Flash. It’s unfortunate, but that’s Mozilla pragmatism. In the meantime, I just cheat and use Chrome (!), which comes with a private copy of Flash. Facebook (specifically the videos people post/share) was the next big thing I noticed, but… I can honestly live without that too. Sorry if I didn’t watch your recent funny video.
I will readily admit that my Web usage is probably atypical. I rarely play online Flash games, which are probably the next most common Flash use after video. And I’m willing to put up with at least a little pain to avoid Flash, which isn’t fair to expect of most users.
But so far, so good!
[Coincidental aside: Last month I removed the Plugin Finder Service from Firefox. So now Firefox won’t even offer to install a plugin (like Flash) that you don’t have installed.]
A few years ago I blogged about the importance of using a good paper shredder. A crappy shredder is just (bad) security through obscurity, and easily leaks all kinds of info. So I was delighted and horrified when this package showed up in the mail, using shreds as packing material for the item I had ordered.
30 seconds of searching turned up a number of interesting bits (plus a few more with full names which I have kindly omitted):
Let’s count a few of the obvious mistakes here:
And of course it would have been better security to use a microcut shredder.
Now, to be fair, even this poor shredding has technically done its job. Other than a few alluring snippets, it’s not worth my time to assemble the rest to see the full details of these banking, business, and health care records. But then, I’m a nice guy with no interest in committing fraud or identity theft, and that isn’t a safe assumption to make about everyone who ends up with your shreds.
In my last blog post, I wrote about how I helped fund the launch of a satellite via KickStarter, and that after launch I would have more to say about receiving radio signals from it. This is that update.
The good news is that SpaceX’s Falcon 9 finally launched last Friday. It’s had two delays since the originally scheduled date of March 30th. No big deal, it happens. Here’s the launch video, showing liftoff, onboard rocketcam, staging, and separation of the Dragon cargo capsule: (YouTube)
This is the CRS-3 flight — the 3rd Commercial Resupply Service flight hauling cargo to the International Space Station. With the retirement of the Space Shuttle, CRS flights are the only US vehicles going to ISS, although other countries have their own capabilities. This is also the first flight where SpaceX has successfully flown the first stage back to Earth for a landing (albeit in the ocean, for safety, as it’s still experimental). Usually rockets like this are expendable – they just reenter the atmosphere and burn up. But SpaceX plans to fly them back to reuse the hardware and enable lower costs. It’s a pretty stunning concept. There’s no video of CRS-3’s Falcon 9 returning (yet?), just two tweets confirming it was successful. But it just so happens they did a test flight last week that demonstrates the idea: (YouTube)
So back to the satellite I funded: KickSat. The “carrier” was successfully deployed, and the first telemetry packets have been received and decoded. In a couple of weeks (May 4th) it will deploy its 104 “Sprite” nanosatellites, which will each start broadcasting their own unique signals. Things are off to a great start!
The slightly bad news is that I haven’t been completely successful in capturing KickSat broadcasts with my own ground equipment. Yet.
It’s challenging for a few reasons…
The biggest factor is that the signals are inherently just not very strong, due to power limitations. The KickSat carrier is a 3U CubeSat, with limited area for solar cells. It periodically broadcasts a brief telemetry packet with a 1-watt radio, which is what I’m trying to capture.
And of course the Sprites it carries are even smaller, and will only be transmitting with a mere 10mW of power. The silver area in the Sprite I’m holding below is where the solar cell goes. (It turns out that satellite-grade solar cells are export-restricted under US law, so the sample units shipped without ‘em!)
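To get a feel for how faint these signals are by the time they reach the ground, here’s a rough free-space link budget sketch. Only the transmit powers come from the numbers above; the 500 km slant range and 12 dBi yagi gain are assumptions I picked for illustration.

```python
import math

def fspl_db(range_km: float, freq_mhz: float) -> float:
    """Standard free-space path loss formula (range in km, frequency in MHz)."""
    return 20 * math.log10(range_km) + 20 * math.log10(freq_mhz) + 32.44

FREQ_MHZ = 437.505     # KickSat downlink frequency
RANGE_KM = 500.0       # assumed slant range for a reasonably high pass
RX_GAIN_DBI = 12.0     # assumed gain for a small 70 cm yagi
TX_GAIN_DBI = 0.0      # treat the spacecraft antenna as roughly omnidirectional

path_loss = fspl_db(RANGE_KM, FREQ_MHZ)
for name, tx_power_w in [("KickSat carrier (1 W)", 1.0), ("Sprite (10 mW)", 0.010)]:
    tx_dbm = 10 * math.log10(tx_power_w * 1000)   # convert watts to dBm
    rx_dbm = tx_dbm + TX_GAIN_DBI + RX_GAIN_DBI - path_loss
    print(f"{name}: ~{rx_dbm:.0f} dBm at the antenna")
```

Something in the neighborhood of -97 dBm is workable for a narrowband signal, but every dB of coax loss, pointing error, and polarization mismatch eats into the margin, and the Sprites start another 20 dB down.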
Such faint signals need some modestly specialized (but still off-the-shelf) equipment to receive. I’ve got a yagi antenna and low-noise amplifier, as recommended on the KickSat wiki. Successfully making use of them is a little tricky, though. You need orbit tracking tools (e.g. gpredict, or the Heavens Above website) to know when a satellite pass will occur and where in the local sky its path will be. Yagis are directional, and thus need to be pointed (at least roughly) at the right place. Not every pass is directly overhead, and if it’s too low on the horizon it may be too hard to receive (due to weak signal or interference).
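I use gpredict for this, but as a sketch of the same idea in Python, here’s roughly what pass prediction looks like with the skyfield library. The observer coordinates are an assumed Bay Area location, and the Celestrak group URL is just one example source of current TLEs; look up whichever satellite you’re actually chasing by name.

```python
from datetime import timedelta
from skyfield.api import load, wgs84

# Pass-prediction sketch using the skyfield library (an alternative to
# gpredict / Heavens Above). The observer location is an assumed Bay Area
# spot, and the Celestrak group URL is just one example source of TLEs.
observer = wgs84.latlon(37.4, -122.1)
ts = load.timescale()

sats = load.tle_file("https://celestrak.org/NORAD/elements/gp.php?GROUP=amateur&FORMAT=tle")
sat = sats[0]   # or look one up by name: {s.name: s for s in sats}["..."]

t0 = ts.now()
t1 = ts.from_datetime(t0.utc_datetime() + timedelta(days=1))

# Rise / culmination / set events above 10 degrees over the next 24 hours,
# with the azimuth and elevation to point the yagi at.
times, events = sat.find_events(observer, t0, t1, altitude_degrees=10.0)
for t, event in zip(times, events):
    label = ("rise above 10 deg", "culminate", "set below 10 deg")[event]
    alt, az, _ = (sat - observer).at(t).altaz()
    print(t.utc_strftime("%Y-%m-%d %H:%M"),
          f"{label:18s} az {az.degrees:5.1f}  el {alt.degrees:4.1f}")
```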
Additionally, KickSat only broadcasts a short packet once every 30 or 250 seconds (depending on what mode it’s in), and during a pass it’s only above the horizon for a few minutes. That makes for a rather ephemeral indication that it’s even there. Blink and it’s gone! My location has about 4 pass opportunities a day, but not all are useful.
Oh, and did I mention I’m doing this all with a cheap $20 RTL2832U software defined radio?! Heck, the coax cables connecting my dongle to the antenna cost more than the radio itself!
I decided to start by first trying to catch signals from some other satellites. I went through AMSAT’s status page and DK3WN’s fantastic satellite blog and gathered a list of a couple dozen satellites known to be active on the same 70 cm (~435 MHz) band KickSat is using.
My first success was HO-68 (aka Hope Oscar 68 aka XW-1). This is a Chinese satellite launched in 2009, broadcasting a fairly constant 200 mW telemetry “beacon” in Morse code. Picking out the dotted line in the GQRX waterfall display was pretty easy, even though it’s not exactly on the expected 435.790 MHz frequency due to inaccuracies in my radio and Doppler shift(!).
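As a sanity check on how far Doppler alone can push the frequency, here’s a quick estimate (the 7.5 km/s figure is just the usual ballpark orbital speed for low Earth orbit, not a measured value):

```python
# Worst-case Doppler shift for a low-Earth-orbit satellite on the 70 cm band.
# 7.5 km/s is a typical LEO orbital speed (an assumption, not a measured value);
# the actual line-of-sight component is smaller except near the horizon.
C = 299_792_458.0      # speed of light, m/s
F0 = 435.790e6         # HO-68 beacon frequency, Hz
V_LOS = 7_500.0        # assumed worst-case line-of-sight velocity, m/s

shift_hz = F0 * V_LOS / C
print(f"Maximum Doppler shift: about +/- {shift_hz / 1e3:.0f} kHz")   # ~ +/- 11 kHz
```

So a tone that drifts ten-plus kHz over the course of a pass is entirely expected, before even accounting for the dongle’s own frequency error.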
This is what it sounds like: WAV | OGG. Why is the tone shifting? The gradual lowering is Doppler shift. The abrupt jumps upward are just from me adjusting the radio tuning to keep it audible. My Morse code skills are terrible, but I replayed it enough times to convince myself that it starts out with the expected “BJ1SA XW XW” (the call sign followed by the initials of its name), per the page describing the signal format. For the lazy, that’s -... .--- .---- ... .- / -..- .-- / -..- .-- in Morse code.
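Since my own Morse is so rusty, a tiny lookup table makes it easy to double-check what that preamble should sound like:

```python
# Encode the expected HO-68 beacon preamble, to compare against what was heard
# in the recording. Only the characters needed for "BJ1SA XW XW" are included.
MORSE = {
    "A": ".-", "B": "-...", "J": ".---", "S": "...",
    "W": ".--", "X": "-..-", "1": ".----",
}

def to_morse(text: str) -> str:
    return " / ".join(" ".join(MORSE[c] for c in word) for word in text.split())

print(to_morse("BJ1SA XW XW"))
# -... .--- .---- ... .- / -..- .-- / -..- .--
```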
Next up was FO-29.
Here’s a screengrab of gpredict, showing the predicted path low in the sky, from the southeast to north.
It’s got a stronger 1-watt beacon, which wasn’t too hard to spot. Here I’ve switched from GQRX on OS X to SDRSharp on Windows. It has more features and plugins, and makes it easy to zoom in on a signal. The Doppler-shifted signal is readily apparent as a diagonal line.
Audio of FO-29’s beacon: WAV | OGG. Despite the stronger transmitter, the received signal is weaker (probably due to being low on the horizon), and the Morse code is sufficiently fast that I’m not able to decode it by ear. (There’s an app for automatic decoding, but I haven’t tried it yet.)
And lastly… *drumroll*… After a number of unsuccessful attempts to receive KickSat’s signal, I finally caught it today! There was a nearly-overhead pass in the afternoon, so the as-received signal strength was optimal.
I pointed my antenna upwards, tilted it to my clear northeast view, tuned SDRSharp to the 437.505 MHz KickSat frequency, and waited. This is one of the use cases where software radio is really useful… While waiting, I was recording the entire raw ~2 MHz slice of bandwidth around that frequency. That’s 4.5 GB for the 7 minutes I was recording. I actually missed the transmission the first time, as it showed up about 23 kHz lower than expected (again, due to hardware inaccuracies and Doppler shift). But no big deal: I just replayed the recorded data and fine-tuned things to the right spot.
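The file size adds up quickly because a raw IQ recording stores two channels (I and Q) at the full sample rate. Assuming 16-bit samples at around 2.8 Msps (a guess at the SDRSharp settings, not something I noted down), the arithmetic lands in the right ballpark:

```python
# Rough size of a raw IQ baseband recording: samples/sec x bytes/sample x
# two channels x duration. The 2.8 Msps rate and 16-bit format are assumed
# SDRSharp settings -- they're simply in the right ballpark for ~4.5 GB.
SAMPLE_RATE = 2.8e6       # complex samples per second
BYTES_PER_SAMPLE = 2      # 16-bit PCM
CHANNELS = 2              # I and Q
SECONDS = 7 * 60

size_gb = SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS * SECONDS / 1e9
print(f"About {size_gb:.1f} GB for a {SECONDS // 60}-minute capture")   # ~4.7 GB
```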
Here’s what it sounds like: WAV | OGG. Unlike the steady stream of analog dit-dahs from HO-68 and FO-29, this is just 2 seconds of digital “noise”, like you would hear from a dialup modem. In fact, it’s exactly what you’d hear from an old modem: KickSat uses the 1200 bps AFSK format, which is apparently still widely used in amateur packet radio. There are decoders available to extract the digital data (GQRX has one built in, and SDRSharp output can be piped to the qtmm AFSK1200 decoder).
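If you’re curious what those decoders are doing under the hood, here’s a toy non-coherent AFSK1200 discriminator in Python. It only makes a raw mark/space decision per bit period; a real AX.25 decoder also needs bit-clock recovery, NRZI decoding, HDLC framing, and bit de-stuffing, so treat it strictly as a sketch of the idea.

```python
import numpy as np

# Toy AFSK1200 (Bell 202) tone discriminator: for each bit period, compare
# the audio's energy at the 1200 Hz "mark" tone vs. the 2200 Hz "space" tone.
# Real AX.25 decoding needs several more steps (bit sync, NRZI, HDLC framing).
FS = 48_000                       # audio sample rate (assumed)
BAUD = 1200
MARK_HZ, SPACE_HZ = 1200, 2200    # Bell 202 tone pair
SAMPLES_PER_BIT = FS // BAUD      # 40 samples per bit at 48 kHz

def tone_energy(chunk: np.ndarray, freq: float) -> float:
    """Energy of `chunk` correlated against a quadrature pair at `freq`."""
    t = np.arange(len(chunk)) / FS
    i = np.dot(chunk, np.cos(2 * np.pi * freq * t))
    q = np.dot(chunk, np.sin(2 * np.pi * freq * t))
    return i * i + q * q

def discriminate(audio: np.ndarray) -> list:
    """One mark(1)/space(0) decision per bit period."""
    decisions = []
    for start in range(0, len(audio) - SAMPLES_PER_BIT + 1, SAMPLES_PER_BIT):
        chunk = audio[start:start + SAMPLES_PER_BIT]
        decisions.append(int(tone_energy(chunk, MARK_HZ) > tone_energy(chunk, SPACE_HZ)))
    return decisions

# Self-test: synthesize a known mark/space pattern and recover it.
pattern = [1, 0, 1, 1, 0, 0, 1, 0]
t_bit = np.arange(SAMPLES_PER_BIT) / FS
audio = np.concatenate([np.sin(2 * np.pi * (MARK_HZ if b else SPACE_HZ) * t_bit)
                        for b in pattern])
assert discriminate(audio) == pattern
print("recovered:", discriminate(audio))
```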
If you’ve got SDRSharp, here’s the raw IQ data for the packet (ZIP, 60.3 MB). I cropped the data to just the relevant portion. Alas, I can’t seem to get the decoder to recognize the packet. :( I’m not quite sure what the problem is yet… I’ve successfully decoded other AFSK data with this setup, so I suspect my signal was just too weak/noisy. It could be poor antenna pointing, but this was an easy pass. Some folks have had success improving SNR with shielding, but I haven’t been able to replicate their results. There are a number of knobs and dials in SDRSharp to adjust manual/automatic gain control, so I might need to tweak those. (Unfortunately that’s difficult, as there are only a few brief chances a day to catch KickSat.) It’s also possible that this is just slightly beyond the capabilities of a $20 RTL2832U dongle. Other options exist. I’d love to get a HackRF SDR board, but they’re not available yet. The FUNcube Dongle Pro+ can be had for about $200, but from comparisons I’m not sure exactly how much better it is in this band, or if it’s worth it. I’d love to hear opinions from hams who know this stuff better, or have tried both!
Amusing aside: while poking around in the 70 cm band for interesting things, I stumbled across Santa Clara County’s ARES/RACES packet radio BBS. (Apparently, Ted, that is indeed still a thing!) In fact, FO-29 is actually an orbiting BBS! It’s quaintly amusing in the Internet Age, but when it launched in 1996 it must have been amazing. I had just upgraded to a 2400 bps modem and discovered FidoNet and UUCP as ways to reach beyond the BBSes in my local area code.
That’s it for now. Over the next few weeks I’ll be refining my equipment and skills, and hope to capture some solid transmissions by the time my Sprite deploys. That will be even tougher to catch, but it’s a fun challenge!
So, I did this thing… It’s a little complicated to explain, but bear with me: I used a Software Defined Radio to capture the radio transmissions from a Bravo pH esophageal monitor, wrote a browser-based decoder using the Audio Data API, and reverse-engineered the broadcast data packets. As a bonus, I hope to do something similar to catch signals from a tiny satellite I helped fund on Kickstarter, which launches this weekend on SpaceX’s CRS-3 mission to the International Space Station.
Phew. Ok, now let’s break that down. :-)
Software Defined Radio (SDR) is central to all of this. It’s a pretty complex field — and I am by no means an expert — but the simplified basics are not hard to understand. Traditionally, a radio is built for a specific purpose with specific hardware. It’s effectively a black box customized to convert audio/video/data to or from a particular pattern of electromagnetic radiation in some particular part of the spectrum. Each box is different; you need one for satellite TV, one for FM radio, one for WiFi, one for GPS, and so on. There are myriad variations, and devices that might seem similar can actually be completely different. You’ve probably seen news reports about police and fire departments who are responding to the same disaster, but are literally unable to talk to each other because their radio systems are different.
SDR is a radical departure from all that. You still need a piece of hardware that can tune to a relevant slice of the radio spectrum, but it becomes a general-purpose device that relies on software to do all the application-specific bits. For example, you might ask such a device to tune to 66 MHz, and capture 3 MHz of bandwidth on either side. You then feed the result to a software NTSC decoder, et voilà, you’re watching TV (analog channel 4). The hardware doesn’t know anything about the contents of what it’s receiving, since it’s the software that deals with it. If your device can capture more bandwidth, you could even watch multiple channels at the same time. And since it’s just generic data being processed by software, it doesn’t need to happen in real-time. You can record a stream of RF data, and process it in different ways after the fact.
Until recently, SDR was only possible with fairly expensive equipment, which made it a niche hobby. But in the 2010-2012 timeframe, some folks discovered that a cheap USB device intended for digital TV reception (“watch TV on your laptop!”) contained surprisingly capable hardware that could be repurposed as a general-purpose SDR. Specifically, the RTL2832U chipset and a variety of tuner chips. For $10 to $20 you could get one of these mass-produced dongles that, with the right software, let you receive and decode all kinds of interesting transmissions from roughly 50 MHz to 1800 MHz.
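To give a sense of how little code it takes to drive one of these dongles, here’s a minimal capture-and-demodulate sketch using the pyrtlsdr bindings; the broadcast FM frequency is just a placeholder for any strong local station.

```python
import numpy as np
from rtlsdr import RtlSdr   # pip install pyrtlsdr

# Minimal RTL2832U capture sketch: grab a chunk of complex baseband samples
# around a broadcast FM station (a strong, easy test signal), then apply a
# crude FM discriminator. The 99.7 MHz frequency is a placeholder.
sdr = RtlSdr()
sdr.sample_rate = 2.048e6
sdr.center_freq = 99.7e6
sdr.gain = "auto"

samples = sdr.read_samples(256 * 1024)   # complex samples as a numpy array
sdr.close()

# The phase difference between successive samples is proportional to the
# instantaneous frequency -- i.e. the FM audio (before filtering/decimation).
audio = np.angle(samples[1:] * np.conj(samples[:-1]))
print(f"Captured {len(samples)} IQ samples; {audio.size} demodulated samples")
```

To actually listen you’d low-pass filter and decimate that down to an audio rate, but even this much shows the point: the hardware just hands you raw samples, and everything else is software.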
Here are just a few of the things possible:
There’s a whole world of analog and digital RF data being broadcast around us, which cheap Software Defined Radio hardware makes readily accessible.
Around the time I was starting to play with SDR, I had gone to my doctor because I was experiencing some of the symptoms of acid reflux. Or, as it’s more formally known, gastroesophageal reflux disease: GERD. [Spoiler: no big deal, weight loss + antacid and I’m all good.] One of the steps in the diagnosis is monitoring the acidity level in your esophagus over time. This used to involve inserting a tube into your nose and throat, leaving it there for a few days of measurement, and was generally quite unpleasant. Now they can just attach a tiny wireless sensor inside you; it sticks there for about a week, and then gets eliminated naturally. It’s only a couple of centimeters in size, and you don’t even notice it:
During the monitoring period, you carry around a receiver (which basically looks like a giant 1990-era pager). It’s supposed to be kept within 3 feet of you at all times, or else it makes an annoying beep when it loses the sensor’s signal. It records pH measurements every few seconds, and conveniently displays the last reading.
When the study is over, you return the receiver, your doctor downloads the data, and you get a nice little report with graphs and numbers to help your doctor make a diagnosis.
So that’s SDR and Bravo pH. Now, if you’re connecting the obvious dots like I was, you’re wondering if it might be possible to snoop on the sensor’s broadcasts to see what they contain. Indeed it is!
But the first step is finding the signal.
I wasn’t really sure where to start, but some Googling turned up a User’s Guide (doctor’s guide, really) for the system, and buried in an appendix was the info I needed:
Output & Transmission
EIRP: 17.6 μW (-47.53 dBm) at a 3-meter distance
Format: Amplitude-shift keying
Frequency: 433.92 MHz
Rate: 60 ms every 12 seconds
Bingo! All I needed to do was tune my dongle to around 433.92 MHz, and look for a bursty signal repeating about every 12 seconds. It was literally as simple as that — here’s a waterfall display from the GQRX app I was using, showing two transmissions (time is the vertical axis, frequency is the horizontal):
Ok, so now we get signal. But what’s in it? The brief bursts are obviously too fast for our meaty ears to discern meaning, so looking at the waveform in an audio editor was the easiest way to take a first look. I used Audacity on OS X:
Ah, there it is. Digital data. There are clear hi/lo levels, but what’s actually important is the length of the pulses. After examining a few more transmissions, the basic format of the data packet is apparent:
(Note that I’m slightly rounding the timings to what would seem to be likely values. The actual data is imprecise due to noise and rising/falling edges, on the order of tens of microseconds.)
I decoded a few packets by hand, both for fun and to validate the format. But it quickly became tiring to scribble down data like “pppppppppp111110111111101000000100110100011101000001100101s” and then convert it to hex (fbfa04d1d065), so I decided to write a tool to do it. In the browser, of course!
At the time, Firefox supported a simple low-level Audio API that was exactly what I was looking for. In a nutshell, you add a MozAudioAvailable event listener to an <audio> element, and the listener periodically gets an array of sample data as the media plays. I implemented a simple state machine to decode the data, based on a manually set threshold between the hi/lo states (and some fudge-factors to deal with the imprecision/noise previously noted). I’m sure there are more elegant and automatic ways to do this, but simple brute force was enough for my limited needs. The one annoying downside is that this API can only process in real-time(!); there’s no way to ignore playback speed and just get all the data as fast as possible. If you’d like to play around with it, here’s a live demo and the source on GitHub. (*cough* I’ve been so slow in finishing this post that the Audio API I’m using was removed in Firefox 28, in favor of the newer Web Audio API. So you’ll need an older Firefox, or just gaze upon the following screenshot.)
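For anyone without an old Firefox handy, here’s a rough Python equivalent of that state machine. The amplitude threshold and pulse-length cutoff are placeholders (the real timings came from measuring the waveform), but the approach is the same: threshold the audio into hi/lo, measure each high pulse, and classify it by length.

```python
import numpy as np

# Rough Python equivalent of the pulse-width decoder described above (the
# original tool was JavaScript on Firefox's old Audio Data API). The threshold
# and pulse-length cutoff below are placeholder values, not the measured
# timings from the actual Bravo capture.
AMPLITUDE_THRESHOLD = 0.3   # hi/lo decision level, set manually per recording
PULSE_CUTOFF_S = 200e-6     # assumed cutoff: longer high pulses decode as '1'

def decode_pulses(samples: np.ndarray, sample_rate: float) -> str:
    """Classify each high pulse as a '0' or '1' based on its length."""
    bits = []
    run_start = None
    for i, is_high in enumerate(samples > AMPLITUDE_THRESHOLD):
        if is_high and run_start is None:
            run_start = i                                  # rising edge
        elif not is_high and run_start is not None:
            length = (i - run_start) / sample_rate         # falling edge
            bits.append("1" if length > PULSE_CUTOFF_S else "0")
            run_start = None
    # Note: the real packets also have preamble pulses and a stop pulse that
    # would need to be stripped before grouping into bytes.
    return "".join(bits)

def bits_to_hex(bits: str) -> str:
    """Group a bit string into bytes and render as hex, e.g. 'fbfa04d1d065'."""
    usable = bits[: len(bits) - len(bits) % 8]
    return "".join(f"{int(usable[i:i + 8], 2):02x}" for i in range(0, len(usable), 8))
```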
Ok, so now I’ve got a bunch of decoded data values to examine, such as:
fbfa04b4b79b fbfa04b4a9a9 fbfa049b80eb fbfa04857a07 fbfa04858af7 fbfa048483ff
What do they mean?
The first 3 bytes (0xFBFA04) are always the same, so that’s presumably a serial number or unique ID (and the manual confirms that during setup, there’s a step to ensure that the receiver is getting the expected sensor ID).
The next two bytes must be the actual pH measurements. They are usually similar to each other, and by graphing the values I can see they follow the trend of the pH values reported on the receiver (which I was writing down when capturing the transmissions). Why two values? The manual says that a measurement is made every 6 seconds but a transmission only every 12, which I assume is done to save power. The pH is roughly obtained by dividing the byte’s value by 25 — but it looks like it’s somewhat non-linear or uncalibrated, as data for the lowest pH values needs to instead be divided by 30 to match what the receiver reports.
The last byte took a bit more effort to figure out. At first glance it appeared fairly random, so I assumed it was some kind of checksum. Validating medical data seems important, after all. Probably something simple to compute for an 8-bit microcontroller, so no fancy FEC or CRC magic… I fiddled around with a few guesses, but graphing the data led me to the answer:
The checksum value (green) looks like a stretched, inverted, and offset version of the average pH (yellow). How about ((pH1 + pH2) ^ 0xFF) + 7? (With everything truncated to 8 bits, i.e. modulo 256, since this is likely an 8-bit microcontroller.) That’s it! It correctly generates the observed checksum for each of the 144 packets I captured.
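Putting the whole interpretation together and running it against the sample packets above (the divide-by-25 pH scaling is the rough fit described earlier, so the pH numbers are only approximate):

```python
# Parse the captured packets and check the checksum hypothesis:
# checksum = (((pH1 + pH2) & 0xFF) ^ 0xFF) + 7, all in 8-bit arithmetic.
PACKETS = [
    "fbfa04b4b79b", "fbfa04b4a9a9", "fbfa049b80eb",
    "fbfa04857a07", "fbfa04858af7", "fbfa048483ff",
]

for packet in PACKETS:
    data = bytes.fromhex(packet)
    sensor_id, ph1, ph2, checksum = data[:3], data[3], data[4], data[5]
    expected = ((((ph1 + ph2) & 0xFF) ^ 0xFF) + 7) & 0xFF
    status = "OK" if checksum == expected else "MISMATCH"
    # The /25 scaling is the rough, possibly uncalibrated fit described above.
    print(f"{packet}  id={sensor_id.hex()}  pH~{ph1 / 25:.1f}/{ph2 / 25:.1f}  checksum {status}")
```

All six of the sample packets above check out.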
So with that, I’m able to decode, interpret, and validate the data packets. Neat. It’s not directly useful for anything, but made for a fun experiment.
Afterwards, I got to wondering if there might be some further technical details buried somewhere online to help explain or confirm what I found. I’ve seen patents and FCC filings used to glean data in other cases, so I went to look…
Patent US6689056 has a number of interesting tidbits. It indicates that the microcontroller in the sensor is probably a Microchip 12C672 (a member of their PIC family, which is similar to the Atmel AVR family familiar to Arduino folks). There’s a basic description of the packet format, but the only detail I hadn’t caught was that the 3-byte header is actually a 2-byte ID and a 1-byte message type (I only ever saw one type). It does confirm that the last byte is a checksum, but doesn’t go into how it’s computed.
On the FCC’s website, I found a 4/25/2001 application from Meditronics for the PHZ-BRAVO100, which has a number of close-up photos and an extremely detailed Test Report. It basically confirms what I had found, with additional info on the bit timing and packet format, and also reveals that there is a “transmitter status” message type that’s sent once an hour.
Now let’s shift from inner space to outer space — or at least low Earth orbit. Back in November 2011, a fascinating project appeared on Kickstarter: “KickSat – Your personal spacecraft in space.” Usually satellites are large vehicles that cost millions, but recently this has been made more affordable by the small, standardized CubeSat format (a 10 cm cube, weighing 1.3 kg). KickSat takes this a step further, by packing a CubeSat carrier with tiny “nanosatellites” (3 cm square, weighing a few grams). That brings the cost down to just $300 to sponsor a Sprite in orbit, broadcasting a custom callsign and other simple data. They don’t do much, but they help demonstrate the concept of using a fleet of cheap, simple sensors instead of a single expensive “Cadillac” spacecraft. For example, instead of predicting space weather using reports from a handful of satellites, you might use a huge number of cheap nanosatellites monitoring a wide area.
As a bona fide space nerd, I jumped at the opportunity. And now, after a long wait, KickSat is poised to launch in just a few days (March 30th), onboard the SpaceX CRS-3 resupply flight to the International Space Station. Assuming all goes well, the KickSat CubeSat carrier will be deployed immediately after second-stage cutoff. It orbits by itself for 16 days, to ensure wide clearance from the ISS, and then deploys its 104 Sprites. Including mine, which will be broadcasting “MOZFF”. They’ll orbit for a few weeks, and then burn up as they reenter the atmosphere. (No space junk!)
The project is publishing info about the satellite’s transmissions, as well as details on how to set up an inexpensive ground station using… That’s right, software defined radio (GNURadio + RTL2832U dongle). I’ve got my equipment ready, and will attempt to capture signals from KickSat while it’s in orbit. More on that after launch!