The Case Of The Teleporting 737

I’ve recently started using NeoFly to add some economic and flight-monitoring realism to 737 flights in FS2020. NeoFly provides a simulated economy, where you can accept transit jobs between specified airports, set up hubs, pay for fuel and maintenance, earn reputation and clients, and so on. My virtual airline — Aurora Transit — mostly serves the US/Canada border region, with planned expansion to Alaska, the Aleutians, and Iceland.

I had just finished up a flight from Seattle to Philadelphia (not currently a hub for us, but a popular destination), and went to the NeoFly window to set up the next leg to our eastern Canadian hub, in Moncton.

The jobs available looked a little off — and I soon realized that this was because it thought my plane was at airport “9N2” instead of Philadelphia (KPHL). Had I landed at the wrong airport by mistake? No; Philadelphia is unmistakable from the air on a nice day like today, and we had just landed (using ILS autoland, no less!) on runway 9R, which is definitely part of KPHL.

Curious, I went to look up this “9N2” airport, and found out that it’s a seaplane base, just west of KPHL. I pulled up the NeoFly planner map, and it didn’t take long to figure out the problem.

Aerial view of KPHL and the area just west of it, showing the location of 9N2. (Click for larger.)
The white rectangles 1000′ from the left end of the runway mark the touchdown zone.

The symbols 9N2 and KPHL on the map represent the (single geometric point) locations that NeoFly has for those two airports. And the approach end of 9R (which is where the plane touches down, if all goes well) is actually closer to the 9N2 marker than it is to the KPHL one. So it must have thought I landed in the Delaware!

New company policy: When landing east at KPHL, we need 9L and not 9R. Or, land long!!

Posted in Algorithms, Aviation, Coding

Swiss Army Dolphin

Digital technology is ubiquitous in modern life. Electronic signs, traffic lights, wireless access points, pet microchips, contactless payment, ID, and access cards are as familiar to us as a Swiss Army Knife would have been in earlier times.

So it’s not too surprising that a modern Swiss Army Knife-like device would be something that can interact with this technology on its own terms, making it more accessible. Or at least more hackable. Most of these devices use some kind of wireless technology (RFID, NFC, Bluetooth, WiFi, infrared, or more proprietary VHF/UHF protocols). Wouldn’t it be cool to have a device that can interact with most of these?

The Flipper Zero, showing the initial startup screen.

The Flipper Zero is just such a device, packaged in a relatively innocent-looking Tamagotchi-like housing that somewhat resembles a 1990s electronic game, or maybe an older mp3 player. Unlike these vintage devices, however, it contains built-in hardware to interact with NFC, RFID, Bluetooth, Infrared, 1-Wire, and sub-1GHz RF.

There’s a gamelike (or Tamagotchi-like) aspect, too. The Flipper Zero is named both for its ability to “flip bits” in targeted devices and for its dolphin mascot. Most features of the device are presented by a friendly dolphin character (with a unique, per-device name) that guides you through their use, or at least appears in amusing cartoons on the screen as features like NFC card emulation are put to use. Interact with your Flipper daily, and your dolphin will remain “happy,” according to the directions.

Here are some of the things a Flipper Zero can do:

  • Scan pet microchips: I was able to verify that my cat’s microchip is in place, functioning, and registered to me. Flipper could also be useful in scanning local stray cats for microchips. (Newer microchips can sense the animal’s temperature, too.)
  • Clone and emulate payment, ID, and access cards: The Flipper can read RFID and NFC information, whether directly from a card or from an emulating device (I was able to copy my card information from my Garmin Venu watch, as well as credit, ID, and transit cards). Flipper can then “play back” these IDs to a reader. (I’ll have to try this on the door card readers and NFC vending machines at work.)
  • Emulate a USB keyboard and/or mouse via the USB cable. (This can be used to automatically install applications on a PC that Flipper is plugged into.)
  • Determine the frequency, and often the protocol, for wireless devices using sub-1GHz RF.
  • Play back DTMF tones (useful for phreaking, if you have a time machine, I guess.)
  • Act as a rudimentary oscilloscope or logic analyzer
  • Test servomotor devices, which use 50Hz pulse-width signaling to control angles/speeds
  • …and the usual CPU tricks: there’s a Snake game, dice emulator, Conway’s Life…

With an add-on ESP32-based WiFi dev board, the Flipper Zero can also interact with WiFi networks to access the Internet and/or do basic security testing.

As with all useful tools, Flipper has the potential to be used for legitimate purposes as well as more nefarious ones. It’s been called a “hack tool,” which is accurate enough. Use it for white-hat “hacking” — meaning learning more about good network security practices. Like a knife, Flipper also has the potential to help you get into trouble. Be a Jedi, not a Sith.

Posted in Digital Citizenship, Networking, Reviews, System Administration, Tools, Toys

Three-phase power demonstrator

I recently discovered Temu — a site selling miscellanea at often hard-to-believe low prices. A lot of what they sell is standard Dollar Store fare (plastic kitchen implements and small metal tea infusers and such), but they also have some interesting STEM items for sale.

One gizmo that caught my eye was a “three-phase power generator” demonstrator. It’s a set of nine coils grouped into three phases around a stator, with an outrunner magnet rotor. Spin the rotor and the coils generate three-phase power, suitable to light some LEDs or perhaps (with some power conditioning) charge a phone or other USB device.

A small three-phase generator. They even include an LED to show that it works.

The generator comes apart easily, to show the coil configuration.

To show the sequential activation of the coil groups, I connected three red LEDs with consistent polarity from A-B, B-C, and C-A. I drove the generator shaft with a cordless drill set to a relatively low speed (just enough to provide sufficient voltage to light the LEDs), and filmed it with my phone in “Slow Motion” mode. This was enough to see the phase sequence of the coil activations. (With another set of three LEDs in the reverse direction, you could see the other half of the three phases, as well.)

Posted in Uncategorized

Cache And Carry

Calculating sequences like the Fibonacci numbers sounds straightforward enough. If we call the nth Fibonacci number F(n), then they’re defined as F(0)=F(1)=1; F(n), for n greater than 1, is F(n-1)+F(n-2). So, F(2)=F(1)+F(0)=2; F(3)=F(2)+F(1)=3; etc.

The sequence is recursively defined, but if you code it up that way, you run into a problem. Naïve use of the recursive Fibonacci algorithm is horrendously inefficient. Say you’re computing F(5). Well, that’s F(4)+F(3). Easy. Except you need F(4) and F(3). F(4) is F(3)+F(2), and F(3) is F(2)+F(1). With each level of recursion, the number of calls to the function roughly doubles.

This is O(2^n): past some quite reasonable value of n, computing the function becomes prohibitively slow, even on an amazingly fast computer. Recursive Fibonaccis are manageable for F(5), hairy for F(10), and F(40) and above will slow even a modern Core i9 workstation to a crawl.

The solution is to cache your results. Whenever you calculate a Fibonacci number, store its result in memory. Now, when calculating F(5), we have F(4) and F(3) in memory, and merely have to perform an addition to calculate F(5), instead of the whole pyramid of recursive lookups. Recursive O(2^N) calculations like this are simply not feasible if done in the usual straightforward way — calculating one term of the sequence could take longer than the projected lifetime of the universe. With caching, each term is simply two array lookups and an addition, and happens in almost no time at all.
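A minimal memoized version might look like this (a hand-rolled dict cache for clarity; Python’s `functools.lru_cache` decorator would do the same job):

```python
cache = {0: 1, 1: 1}  # seed with the base cases F(0) = F(1) = 1

def fib(n):
    """Memoized Fibonacci: each value is computed exactly once, then cached."""
    if n not in cache:
        cache[n] = fib(n - 1) + fib(n - 2)
    return cache[n]

print(fib(40))   # 165580141, returned immediately
```

Each call past the base cases is now just a dictionary lookup or a single addition, so F(40), crippling for the naive version, comes back instantly.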

The drawback is that these arrays can get very large. Exploring the Hofstadter Q sequence, I read that the existence of numbers Q(n) in the sequence has been proven up to 10^10. Ten billion is a fairly low number in terms of these things — but every term in the sequence has to be stored, at eight bytes apiece for a 64-bit unsigned integer (32-bit numbers aren’t large enough), so that’s still some 80GB of memory.

The Hofstadter Q sequence is more convoluted than the Fibonaccis. It is defined as Q(1)=Q(2)=1, and Q(n>2)=Q(n-Q(n-1))+Q(n-Q(n-2)). So the preceding two terms tell the algorithm how far back in the sequence to go to find the two terms to add. With Fibonaccis, it’s enough to keep the previous two terms in memory. Not so for the Hofstadter Q sequence — it’s unclear as yet just how far back you would need to go, so you may need the whole sequence.
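Since Q(n) can reach arbitrarily far back, the natural approach is to build the whole sequence bottom-up, keeping every term. A sketch (the function name is mine; the list is 1-indexed, with a throwaway entry at index 0):

```python
def hofstadter_q(n_max):
    """Return a list q with q[n] = Q(n) for 1 <= n <= n_max; q[0] is padding.
    Q(1) = Q(2) = 1; Q(n) = Q(n - Q(n-1)) + Q(n - Q(n-2)) for n > 2."""
    q = [0, 1, 1]
    for n in range(3, n_max + 1):
        # The two preceding terms say how far back to reach for the addends.
        q.append(q[n - q[n - 1]] + q[n - q[n - 2]])
    return q

print(hofstadter_q(10)[1:])   # [1, 1, 2, 3, 3, 4, 5, 5, 6, 6]
```

Note that a Python list holds arbitrary-precision ints rather than packed 64-bit words, so a serious run toward 10^10 terms would want a typed array (e.g. a `numpy.uint64` array) or a lower-level language to hit the eight-bytes-per-term figure.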

It’s apparently not currently known whether Q(n) is even defined for all n (although it most likely is). According to the Online Encyclopedia of Integer Sequences (OEIS) page, Roman Pearce had computed that a(n) exists for n <= 10^10 as of August 2014.

I was able to extend this result to 2×10^10, and confirm that Q(n) does extend at least that far. It may be possible to extend this somewhat farther, but the cache is getting unwieldy for a workstation with “only” 64GB of physical memory — the commit size is over 200GB.

I’m reminded of Seymour Cray’s (NSFW but 100% true) quote about memory.

Posted in Algorithms, BASIC, Coding, Math