Bus Prognosticator

I recently came across an “ESP32 SEPTA Bus Monitor” project that queries a SEPTA Transit API and updates a large, scoreboard-like display with the predicted arrival time of the next SEPTA bus. Since project author [grandpasquarepants] lives right next to a bus stop, the format is perfect: they can not only see when the next bus will arrive, but share that information with other riders.

I live half a block from my bus stop. That’s too far for a display at home to be visible from the stop itself, but a similar clock would be a useful addition to my entryway, helping me decide whether to walk or wait for the bus when I head out. (If it’s more than a ten-minute wait, I’ll get to work faster by walking.)

So, as a test, I asked GPT-5.1 to take a look at the original ESP32 SEPTA Bus Monitor site and modify the code to simply print the information to the serial port. After adding my WiFi credentials to the sketch, this worked quite well, and a few prompts later, I had the world’s weirdest wall clock…

The Bus Prognosticator. (Now if only there were a bus inbound…)
Eventually, a bus is inbound!

When one or more buses are inbound to the stop, the estimated time until the first bus arrives is displayed. Sometimes there’s an incoming bus and therefore a time displayed; sometimes not. When there is, it usually counts down (bus making forward progress), but sometimes stands still (bus stuck in traffic) or even counts up (I don’t want to know). The script can be modified to show buses in both directions; as written, it only shows inbound buses (towards city center), since that’s all I use the bus line for.
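The direction filtering is simple enough to sketch. Here is an illustrative Python version of that logic only — the field names and data shape below are my invention, not the real SEPTA API response, which has its own JSON structure:

```python
def next_inbound_minutes(predictions):
    """Return minutes until the soonest inbound bus, or None if no bus is coming.

    predictions: list of dicts like {"direction": "inbound", "minutes": 7}
    (a hypothetical parsed shape, not SEPTA's actual schema).
    """
    inbound = [p["minutes"] for p in predictions if p["direction"] == "inbound"]
    return min(inbound) if inbound else None

sample = [
    {"direction": "outbound", "minutes": 3},
    {"direction": "inbound", "minutes": 12},
    {"direction": "inbound", "minutes": 31},
]
eta = next_inbound_minutes(sample)
print("next inbound bus:", "none" if eta is None else f"{eta} min")
```

Showing both directions would just mean dropping (or parameterizing) the `direction` check.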

Here’s the GitHub repository, including source code and .stl files for the wall-mount case, as shown. It seems pretty reliable, if you leave a minute or so of slack. Enjoy!

Posted in 3D Printing, Arduino, C, Networking, Projects, Tools

Flip Flash

A twin pack of eight-shot FlipFlash arrays.
(You got four flashes per side, then flipped it over to use the other four.)

Taking pictures used to cost actual money.

Even today, you do still need a camera — but we all have smartphones with us anyway. Back in the day, you not only needed the camera, but needed to buy film and then either have it processed (for more money) or process it yourself in a home darkroom (which cost a lot of money to set up, and more for the materials). Most people paid to have their pictures developed — Fotomat huts were a common sight in shopping-center parking lots. Polaroid pictures were self-developing, but also more expensive, and couldn’t be used for slides.

Even the lighting was often consumable. You could usually take daylight pictures outdoors using ambient light, but taking pictures indoors usually needed extra light. Professional photographers would lug around large umbrella lights with xenon strobe bulbs and exotic chargers that whined after every flash.

The consumer option was a lot more convenient — but, over time, a lot more expensive per shot. The answer was a multiple-shot flash cartridge, in which a stored electrical charge from the camera triggered the controlled combustion of fine zirconium wire in a pure-oxygen environment, producing a brief, bright flash of light.

The reaction consumed the filament and oxygen, ruining the bulb, but the genius idea was to harness the heat from this reaction to sever the connection to the currently-firing bulb, and also to enable the connection to the next bulb in the chain, so it would fire with the next picture taken. Although each individual bulb was single-use, you could take four (or five) pictures in a row, and then flip the flash cartridge over (thus the name) to use the others. Most FlipFlash cartridges came with colored dots on the back to indicate how many flashes remained.

Photo shops (and most drug stores and many grocery stores) sold these consumable flash cartridges — either the older Flashcube style or Kodak’s FlipFlash, used on the 126 Instamatic and other cameras. For a few bucks, you could get a two-pack of flash cartridges, good for a total of sixteen flashes (not counting the occasional misfire). That’s twenty cents per picture, in 1980s-era money, just to light the shot. ($3.33 would buy lunch, back then.)

When I got my first digital camera in 1997, I calculated that taking digital pictures and burning them to a CD-R for storage was roughly 600 times less expensive than shooting film. The resolution didn’t compete with film back then — but it does, now.

Removing all effective cost per picture taken (as well as the new ones being in digital form from the start) has not only allowed the explosion of multimedia social sites like Instagram and Facebook — it is a profound new tool to understand, record, and document our world. When students want a record of lab activities to include in the report, they simply take a picture and think nothing of it. 1980 would be amazed at how easy and inexpensive modern photography is.

Posted in Nostalgia, Tools

Nerd Sniped

xkcd is a gold mine of insightful thoughts and cool ideas. Even posts that introduce original ideas like “Nerd Sniping” usually have thought-provoking STEM content as a bonus.

As an EE-adjacent nerd, I sometimes think about the original Nerd Sniping problem itself: What is the resistance between two nodes a knight’s move apart on an infinite resistor grid?

xkcd: “Nerd Sniping”

While I still don’t know how I’d go about getting a closed-form solution to it (maybe you could set up a recurrence relation between rectangular rings of nodes?), it recently occurred to me that it wouldn’t take much math at all (just a moderately large amount of compute) to simulate such a setup at finite-but-large sizes. Create a (2N+3) by (2N+2) grid of nodes, where N is the number of rectangular rings added around the minimal 3×2 core. Each node is connected to its four Manhattan neighbors via an ideal resistance R (1 ohm in the original problem; I simulated it at 1k). Connect each node to ground via a 1nF capacitor, with no initial charge. (The two source nodes have no ground capacitor.)

Place two source nodes a knight’s move apart (dx=2, dy=1), centered in this grid, with one held at +1V and the other held at -1V. (Using symmetrical voltages makes for a nice color map.) Once per small timestep, note the voltage difference between each pair of neighboring nodes, flow Q=(dV/R)*dt coulombs of charge between them, and update the capacitor voltages accordingly. Then top the source nodes back off so they remain at +1V and -1V, noting how much charge (and therefore current) it takes to do this.

Eventually, everything will more or less stabilize. At that point, measure how much current (charge per timestep, divided by the timestep) is flowing into the positive node. This should exactly match the current flowing out of the negative node. (If not, something is wrong.) Since 2V is applied between the nodes, the equivalent resistance is 2.0 divided by the current in amps. Simulate for a millisecond or so if using 1k resistors, and even large networks stabilize.
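To show how little code the experiment needs, here is a compact Python sketch of the same idea (not the FreeBasic program described below). Rather than time-stepping the RC network, it jumps straight to the steady state: with no net current flowing into any capacitor, each non-source node must sit at the average of its neighbors (Kirchhoff’s current law), and Gauss-Seidel relaxation finds that state directly. The grid size, sweep cap, and tolerance are my own choices:

```python
R = 1.0        # ohms per grid resistor (1.0 reproduces the original xkcd numbers)
RINGS = 10     # rectangular rings around the minimal 3x2 core
W, H = 2 * RINGS + 3, 2 * RINGS + 2

V = [[0.0] * W for _ in range(H)]
# Two source nodes a knight's move apart (dx=2, dy=1), near the center.
sy, sx = H // 2, (W - 3) // 2
src = {(sy, sx): 1.0, (sy - 1, sx + 2): -1.0}
for (y, x), v in src.items():
    V[y][x] = v

def neighbors(y, x):
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < H and 0 <= nx < W:
            yield ny, nx

# Gauss-Seidel: sweep until the voltages stop moving.
for _ in range(20000):
    worst = 0.0
    for y in range(H):
        for x in range(W):
            if (y, x) in src:
                continue
            nbs = list(neighbors(y, x))
            new = sum(V[ny][nx] for ny, nx in nbs) / len(nbs)
            worst = max(worst, abs(new - V[y][x]))
            V[y][x] = new
    if worst < 1e-9:
        break

# Current into the +1V node; with 2V across the pair, R_eq = 2 / I.
# (The known infinite-grid answer is a bit above 0.77 ohms.)
i_in = sum((1.0 - V[ny][nx]) / R for ny, nx in neighbors(sy, sx))
r_eq = 2.0 / i_in
print(f"{W}x{H} grid: R_eq = {r_eq:.4f} ohm")
```

With ten rings (a 23×22 grid), the computed value already sits close to the infinite-grid answer; adding more rings only nudges it down toward the limit, since a finite grid with an insulating boundary can only have a higher resistance than the infinite one.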

I’ve increasingly found LLMs to be amazing coding assistants, even in languages I speak fluently. I know how to make a simulation like this, but it would probably have taken me several hours and would have been a more-or-less naïve discrete-timestep model. Collaborating with GPT-5-Thinking to create a simulation of this in FreeBASIC (I still understand the syntax nuances of FreeBASIC better, at least for now, but GPT-5 is far faster at coding), we came up with a discrete-time numerical simulation using Gauss-Seidel iteration and a visual heat map, after two or three bugfix iterations. The code GPT-5 came up with even uses some FreeBASIC graphics libraries I wasn’t aware of.

A heatmap of node voltages. Bright blue = +1.0V source; bright red = -1.0V source.
(The steady-state current is slightly lower here than for a larger — or infinite — grid.)

The steady-state current flowing into the network does increase as additional rectangular rings of nodes are added outside the original 3×2, but it quickly approaches a limit of about 2.587mA: that’s the value for a 4000×4000 grid, and a 400×400 grid already matches it to within about a microamp.

Given the 2.0V voltage difference applied, this means that R is about 0.773 ohms in the original xkcd problem, or about 773 ohms if using 1k resistors.

Posted in Analog, BASIC, Coding, EET201, Electronics, Lore, Machine Learning / Neural Networks, Science

Actually Open AI

Maybe we do have “ChatGPT at home,” now.

OpenAI recently released two actually-open LLMs, suitable for running on local, consumer-grade PC hardware. gpt-oss:20b and gpt-oss:120b appear to be among the most capable, efficient, and reliable local LLMs I’ve tested so far. They’re no GPT-5-Thinking, of course, but even the 20b model has handled all the logic puzzles I’ve thrown at it, including competently writing a C function to find the midpoint of a great-circle path anywhere on Earth.
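For flavor, here is a sketch of the standard approach to that midpoint task (in Python rather than C, and my own version rather than the model’s output): convert both endpoints to 3-D unit vectors, sum them, and convert the result back to latitude and longitude.

```python
import math

def great_circle_midpoint(lat1, lon1, lat2, lon2):
    """Midpoint of the shorter great-circle arc between two lat/lon points,
    in degrees. Undefined for antipodal points (the summed vector is zero)."""
    def to_vec(lat, lon):
        la, lo = math.radians(lat), math.radians(lon)
        return (math.cos(la) * math.cos(lo),
                math.cos(la) * math.sin(lo),
                math.sin(la))
    ax, ay, az = to_vec(lat1, lon1)
    bx, by, bz = to_vec(lat2, lon2)
    mx, my, mz = ax + bx, ay + by, az + bz   # sum points at the arc midpoint
    lat = math.degrees(math.atan2(mz, math.hypot(mx, my)))
    lon = math.degrees(math.atan2(my, mx))
    return lat, lon

# Midpoint of (0, 0) and (0, 90) is approximately (0, 45).
print(great_circle_midpoint(0.0, 0.0, 0.0, 90.0))
```

The vector-sum trick avoids the longitude wraparound headaches of averaging angles directly, which is exactly the kind of pitfall these logic puzzles probe.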

Performance, at least for the 20b model, is quite good considering the high quality of the responses. Inference runs at about 12-13 tokens per second on a Core i9 system with an RTX 4070 GPU. (128GB system RAM; ~16GB total used, so it nearly fits in the card’s 12GB of VRAM.) The 120b model runs at 4-5 tokens per second, which is fair considering it’s 6x larger. (I believe the 120b model uses a mixture-of-experts scheme to limit how much of the model is active at any one time.)

The ability to have local intelligent agents handling various tasks will open up a whole range of new, interesting projects. The next step is to try to get an idea of what kind of tasks various LLM model sizes can handle. qwen3:0.6b is really fast, but usually loses the plot when asked anything but a basic question. gpt-oss:120b is very capable, but communication is so slow that it might as well happen via Morse code.

Posted in Uncategorized