Candles Made Dangerous

DISCLAIMER: This experiment involves rather a lot of fire, and could cause a lot of damage or harm if not done properly. This information is provided for educational purposes only. Do not try this unless you know what you’re doing. Kids, get your parents’ permission (and help) before you do this. Have a plan to safely extinguish it, and a fire extinguisher in case that plan fails.

The image of a candle usually brings to mind serene, romantic scenes — candle-lit dinners, candles in the windows of homes from yesteryear, and life in general as it was a century or more ago, a more peaceful and relaxed era.

A nice, romantic candle. Let’s upgrade it!

To an engineer, though, a candle is fundamentally a machine for combining oxygen (air) and fuel (wax) through the process of combustion. Normally, this is a nice, regulated process, and the candle provides a calm, relaxing glow for several hours.

…but what if it could be “overclocked”…?

We have plenty of oxygen — it’s in the air. We also have plenty of fuel available — if perhaps not yet in the correct form for high-speed combustion. What we need is a way to combine these more quickly.

Wrap the candle tightly in a paper towel. Ideally, the towel should extend up to the top. (I’m not very good at this yet.)

Wrapping a paper towel tightly around the candle allows it to act as a wick — one with a much larger surface area than the candle’s original string wick. When the paper towel is lit, it heats up the wax next to it, melting it and causing it to flow into the unburned portion of the paper towel. From there, the same wicking action normally at work in the middle of the candle converts the entire outside of the candle into one big combustion zone.

Power equals energy divided by time. Release the same stored energy in a fraction of the time, and the power output goes up accordingly.
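To put rough numbers on the power-equals-energy-over-time idea, here's a quick back-of-the-envelope sketch. The wax mass and burn times are made-up assumptions, and the ~42 MJ/kg heat of combustion is just a typical ballpark figure for paraffin, not a measurement from this experiment:

```python
# Rough power estimate for normal vs. "overclocked" burns. The 50 g wax
# mass, burn times, and ~42 MJ/kg heat of combustion for paraffin are
# ballpark assumptions, not measurements.

WAX_MASS_KG = 0.050
HEAT_OF_COMBUSTION_J_PER_KG = 42e6        # typical for paraffin wax
energy_j = WAX_MASS_KG * HEAT_OF_COMBUSTION_J_PER_KG   # ~2.1 MJ total

def average_power_watts(burn_seconds):
    """Same stored energy, divided by how long the burn takes."""
    return energy_j / burn_seconds

print(round(average_power_watts(6 * 3600)))   # ~97 W over a six-hour burn
print(round(average_power_watts(10 * 60)))    # ~3500 W if it all goes in ten minutes
```

Same candle, same energy — but a burn that takes minutes instead of hours is a space heater, not a night light.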

Have fun — but stay safe!

Posted in DoNotTryThisAtHome, Hacks, Mad Science, Science | Leave a comment

Virtual Glasses

Virtual Reality, as presented by the Oculus Rift CV1 headset, is amazing. With two separate displays and the right optics and postprocessing, modern PCs can not only display amazing renderings of 3D scenes on a 2D screen — they can truly immerse you in the scene. Move your head to the right, and your view changes. Turn and tilt your head slightly left and back, and what you see tracks your movement perfectly. Look over your shoulder, and you’ll see what’s there. No 2D picture or video can really describe it adequately. It’s like being there.

Oculus Rift CV1 headset. (Image from Wikipedia; click for larger version)

The illusion is particularly uncanny in Flight Simulator. To an experienced Flight Sim enthusiast like me, the Rift is like opening a second eye that you never had before. Suddenly, everything is in true 3D. You no longer have to guess how far away the runway is — your eyes do the calculation for you. The dashboard is right there in front of you. If you position your setup right, you can even get it to match where your (real) control yoke is. The effect is very convincing.

However, those amazing, super-realistic 3D images are focused at optical infinity. The Rift is impressive, but it's not yet advanced enough to truly simulate optical depth other than by parallax. Your brain quickly fills in the details — but if you have trouble seeing distant objects in real life, you'll have the same problem in the Rift.

…So I learned that I’m nearsighted in VR, as well as real life. It figures.

Fortunately, there is now a fix! Thanks to the existence of online optical shops like Zenni Optical, and the very welcome work of Thingiverse user [jegstad], you can order prescription lenses for your CV1. Here’s how.

First, grab a copy of [jegstad]’s adapter from Thingiverse. This is an .stl file, which can be sliced and printed by just about any 3D printer out there. (This model is a pretty easy one to print, I found.)

[jegstad]’s lens holder for the CV1, ready to be sliced in Simplify3D

Next, you’ll need a copy of your distance-glasses prescription. Go to Zenni Optical, and order frame #550011. Color doesn’t matter; we won’t need the frame at all where we’re going. We’re after those nice, circular 43mm prescription lenses it holds.

Pick the options you want (you don’t want tint for these, but the oleophobic coating is probably a good bet to avoid smudges) and order the frames.

When the frames arrive, remove the lenses and snap them into the printed frame (remember which lens is which, if it matters for your vision as it does for mine). Note: If you have astigmatism (anything other than zero in the “CYL” column), make sure you mark down not only which lens is which, but also which orientation each is in, since you'll need to transfer them the same way.

Next, carefully remove the foam visor from the CV1. The lens holder will clip right in. (The fit looks questionable at first, but it's more secure than it looks once in place.)

The 3D-printed lens holder, clipped to the Oculus’ visor insert.

The Oculus visor insert with the lenses in the holder


Finally, carefully reattach the visor, making sure to keep the cable at the upper left positioned correctly.

Enjoy your new, properly-focused universes!

Posted in 3D Printing, Flight Simulator, Games, HOW-TO, Toys, Virtual Reality | Leave a comment


EPROMs are interesting devices. With the advent of Flash memory, they’re not in common use these days. However, they are extremely easy to integrate into hobby electronics, and can serve several functions, including a few off-label ones that I’m currently investigating.

A 64kb (64 kilobit, or 8kB) EPROM, containing 65,536 individual bits.

EPROM stands for Erasable Programmable Read-Only Memory. As the name implies, it can be erased and reprogrammed, but this is typically not done in circuit. Instead, specialized eraser and programmer units are used to reset the chips and then program them. The new semi-permanent ROM chips can then be used in a TTL circuit just like custom ROM chips: place the address on the address bus, lower ~CE and ~OE, and read the data from the data bus.
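That read cycle is simple enough to sketch as a toy software model. The 8 kB size and the behavior of the active-low enables here are generic assumptions; exact pinouts and timing vary by part:

```python
# Toy model of the EPROM read cycle described above. The 8 kB size and the
# active-low ~CE / ~OE behavior are generic; real pinouts vary by part.

EPROM_SIZE = 8 * 1024                     # 8 kB, e.g. a 64-kilobit device
memory = bytearray([0xFF] * EPROM_SIZE)   # erased chips read as all-ones

def read_cycle(address, ce_n, oe_n):
    """Return the byte at `address`, or None if the bus is left floating."""
    if ce_n == 0 and oe_n == 0:           # both enables asserted (low)
        return memory[address]
    return None                           # outputs tri-stated

memory[0x0000] = 0x42                     # pretend this byte was programmed
print(read_cycle(0x0000, ce_n=0, oe_n=0))   # -> 66 (0x42)
print(read_cycle(0x0000, ce_n=1, oe_n=0))   # -> None (chip deselected)
```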

EPROM chips are reprogrammed in a two-step process. First, the cells are erased to the all-ones state: any covering over the optical window is removed, and the chips are subjected to ultraviolet light, which quickly resets all of the exposed memory cells to the “1” state. Programming, done in a separate device, then consists of zeroing out every bit that shouldn't be a 1, much like Michelangelo removed the marble that wasn't part of his statues.
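The erase/program asymmetry can be sketched in a few lines. This is purely an illustrative model, not programmer firmware; the point is that programming can only clear bits, and only UV erasure can set them again:

```python
# Sketch of the erase/program asymmetry: UV erasure sets every cell to 1,
# while programming pulses can only clear bits (1 -> 0). Illustrative only.

def uv_erase(size):
    """UV exposure resets the whole array to the all-ones state."""
    return bytearray([0xFF] * size)

def program_byte(memory, address, value):
    """Programming can only pull bits low, never raise them."""
    memory[address] &= value

mem = uv_erase(4)
program_byte(mem, 0, 0x5A)
print(hex(mem[0]))          # -> 0x5a

program_byte(mem, 0, 0xF0)  # try to "overwrite" without erasing first
print(hex(mem[0]))          # -> 0x50, not 0xf0: cleared bits stay cleared
```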

This is all transparent (pardon the pun) to the user; you simply “bake” the chips in the UV “oven” and then use the programmer per the directions.

Once you understand the process, though, it’s possible to try more interesting experiments. EPROMs are inherently analog devices, with analog amounts of charge stored in each cell.

A closer look at the EPROM die, through a microscope. (Click for larger.)

In the past, others have made clever use of the inherent analog nature of EPROMs by using them to store “digilog” audio — sampled audio, stored as analog charge levels instead of binary representations. It's certainly one of the more unusual audio storage methods out there: instead of storing zeroes and ones at each address, eight analog charges were stored.

Due to their optical erasure method, EPROMs can also be used to sense light exposure. Although the data sheets state to “bake” the chips for roughly 45 minutes to an hour for erasure, I’ve found that as little as 20 to 30 seconds is often enough to at least make the chip read as erased. (It may be that longer erasure times help ensure that the 1s remain readable.)

They probably won’t be much use as imaging sensors — even if an image could be properly focused on the split halves of the die, the image would only be roughly 256 by 256, in grayscale. Not exactly high definition. But they might make interesting sunlight-exposure sensors.

Posted in Components, Digital, Electronics, Nostalgia | Leave a comment

Investigating VOR signals, Part 1

Even in the age of GPS, alternate forms of navigation are still around, and still in use. One such technology is VOR (VHF Omnidirectional Range) navigation. Despite the questionable acronym, VORs provide useful navigational information by allowing users (nearly always aircraft) to determine their bearing to or from a given nearby station. With two such bearings, position can be determined reasonably accurately.

This direction-finding ability relies on the combination of two signals from the VOR: an omnidirectional reference signal and a directional “variable” signal whose phase is swept around the full 360 degrees of azimuth. By measuring the phase difference between the two, a receiver can determine its bearing from the VOR. If the variable signal arrives in phase with the reference signal, the receiver is north of the VOR (since the variable signal is tuned to match the phase of the reference signal when directed north). If the signals are 180 degrees out of phase, the receiver is to the south. One degree of phase difference corresponds to one degree of azimuth.
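The phase comparison itself is easy to demonstrate with synthetic signals. The sample rate and the generated tones below are assumptions for illustration; a real receiver would first have to demodulate the two 30 Hz tones out of the RF signal:

```python
import math

# Synthetic demo of the VOR phase comparison. The 1200 Hz sample rate and
# the generated tones are assumptions; a real receiver would demodulate
# the two 30 Hz tones from the RF signal first.

FS = 1200    # sample rate in Hz (assumed)
F_NAV = 30   # both VOR navigation tones are 30 Hz
N = FS       # one second of samples (an integer number of cycles)

def tone(phase_lag_deg):
    """A 30 Hz tone lagging the reference by the given phase."""
    return [math.cos(2 * math.pi * F_NAV * n / FS - math.radians(phase_lag_deg))
            for n in range(N)]

def phase_deg(signal):
    """Phase of the 30 Hz component, by correlating against cos/sin."""
    i = sum(s * math.cos(2 * math.pi * F_NAV * n / FS) for n, s in enumerate(signal))
    q = sum(s * math.sin(2 * math.pi * F_NAV * n / FS) for n, s in enumerate(signal))
    return math.degrees(math.atan2(q, i))

reference = tone(0)
variable = tone(135)  # receiver sitting on the 135-degree radial
radial = (phase_deg(variable) - phase_deg(reference)) % 360
print(round(radial))  # -> 135
```

One degree of phase difference per degree of azimuth: the recovered phase lag is the radial.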

Theoretically, given a list of VOR frequencies and locations (readily available online), a GPS-like navigation device could be built to automatically scan the band from 108 to 118 MHz, find nearby VOR signals, determine bearings to them, and solve for position. Once the phase information is extracted from the signals, solving for receiver position is fairly straightforward. (Aircraft-based navigation systems, such as the FMCs on some Boeing aircraft, do this; the FMC can sometimes be seen automatically tuning VORs along the flight path to supplement GPS/INS navigation.)
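Given two decoded radials, the position solve really is straightforward. Here's a sketch on a flat x/y plane with invented coordinates; a real solver would also have to handle geodesy and magnetic variation:

```python
import math

# Two-station position fix on a flat x/y plane. Station placement and
# radials are invented for illustration; real navigation needs geodesy
# and magnetic-variation handling.

def fix_from_radials(p1, radial1_deg, p2, radial2_deg):
    """Intersect two bearing rays. Radials are degrees clockwise from north."""
    # Unit direction vectors: +y is north, +x is east.
    d1 = (math.sin(math.radians(radial1_deg)), math.cos(math.radians(radial1_deg)))
    d2 = (math.sin(math.radians(radial2_deg)), math.cos(math.radians(radial2_deg)))
    # Solve p1 + t*d1 == p2 + s*d2 as a 2x2 linear system (Cramer's rule).
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("radials are parallel; no unique fix")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Receiver on the 045 radial of a station at the origin and the 315 radial
# of a station at (10, 0): the rays cross at (5, 5).
x, y = fix_from_radials((0, 0), 45, (10, 0), 315)
print(round(x, 6), round(y, 6))  # -> 5.0 5.0
```

Note the degenerate case: if the two radials are (nearly) parallel, the lines never cross cleanly, which is why navigators prefer stations roughly 90 degrees apart.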

Unfortunately for hobbyist investigations, VORs are specifically designed not to radiate significant RF power at ground level. To minimize interference with other radio services, VOR transmitters include a ground plane at 10′ height, which serves to minimize radiated power at low angles. The specified “service volume” for VORs generally starts at about 1000′ AGL (Above Ground Level).

This makes collecting VOR signals tricky unless you have access to an airplane (or perhaps go to the trouble of mounting an RPi and an RTL-SDR radio on a quadcopter). Signal strength, and therefore readability, drops off quickly within a short distance of the VOR, unless you can somehow manage to be at a higher altitude.

This lack of low-altitude signal availability means that VOR navigation is impractical for ground navigation. However, receiving and decoding VOR signals is still a useful exercise — and with the increasing reliability of GPS and other global satellite-based navigation systems such as GLONASS, VOR transmitters may not be around for much longer.

So, to capture a VOR signal for analysis, a field trip was in order. If the VORs won’t transmit appreciable power at ground level, the only option that doesn’t involve chartering an airplane or duct-taping half a kilo of RF gear to a drone is to go pay one a visit.

The Bangor (BGR) VOR transmitter, showing the central omnidirectional antenna, the surrounding phased array, and the groundplane.

Counterpoise or no, this close to a VOR you can get a beautiful signal from ground level, even using a $20 RTL-SDR kit from Amazon and SDRSharp software.

SDRSharp spectrum and waterfall plot of the signal from the Bangor VOR. (Click for larger.)

The next step (which will have to wait until I can take a closer look on my workstation) is decoding the signals. One of the nicer features of SDRSharp is its ability to record a slice of the radio spectrum for later playback.

…But be careful what you ask for. Quadrature sampling at 32 bits produces four bytes of data per second, per Hertz of bandwidth sampled. At 16,000 kb/sec, this is roughly 2 MB per second. At higher bandwidths, it's even worse. Roughly ten minutes of recording produced several GB of data, before I realized I didn't have to sample half the nav spectrum.
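The arithmetic is easy to sanity-check. Assuming 4 bytes per complex sample and a complex sample rate roughly equal to the captured bandwidth (the 500 kHz figure below is just an example capture width, not necessarily the one used here):

```python
# Back-of-the-envelope I/Q recording sizes. Assumes 32-bit complex samples
# (4 bytes each) and a complex sample rate equal to the captured bandwidth.

BYTES_PER_SAMPLE = 4

def recording_bytes(bandwidth_hz, seconds):
    """Approximate raw recording size for a quadrature capture."""
    return BYTES_PER_SAMPLE * bandwidth_hz * seconds

# A 500 kHz capture is about 2 MB every second...
print(recording_bytes(500_000, 1))        # -> 2000000
# ...and well over a gigabyte in ten minutes.
print(recording_bytes(500_000, 10 * 60))  # -> 1200000000
```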

This post is dedicated to the memory of my grandfather, Millard C. Carr, who would have been 100 years old today, and who would have thoroughly enjoyed the idea of homebrew VOR navigation, even if he would have pointed out that it’s simpler to use GPS. We miss you, Granddad.

Posted in Analog, Aviation, RF | Leave a comment