“It’s 2024!”

Back when I was in first or second grade (so, sometime in the 1979-1981 timeframe), the sixth graders put on a musical skit about life in The Future. I don’t remember much about it, except that everybody was dressed in shiny “futuristic” clothes and mirrored sunglasses, and had flying cars. They sang about what life was supposedly like — I don’t remember the details, except that they were very excited to live in The Future, and kept enthusiastically singing, “It’s Twenty-Twenty-Four!!”

Well, here we are. It’s actually 2024. It’s officially The Future, at least as 1980 or so saw it. What would my 1980 counterpart think about modern life, if he got to spend a day in the real 2024?

Well, first, I’d have to coax him away from staring at the 4K monitor I picked up last fall. “TV” screens are much larger and brighter — but far lighter — than anything 1980 had imagined. Even the expensive projection TVs from back then don’t come close to a $300 OLED monitor.

And then there’s the content. YouTube is easily better than 99% of what’s on TV. I’m going to have a better idea of what I want to watch than some TV network executive does. Whatever you want to learn — from playing the ukulele to deep learning — has tutorials ready to be watched, 24/7.

To distract my younger self before he finds out about Reddit, I show him a 3D printer and a laser cutter/engraver. Some similar technology did exist back then, but only in industry — the catalyst for the 3D printer hobby explosion was the expiration of some key patents around 2010. “So you have toys that can make other toys,” he observes, as a Benchy starts to print. It’s a great time for hobby electronics in general, I point out, as I show him all of the local aircraft that I can track with my home ADS-B setup.

So do we have flying cars? Well, not really. Honestly, even 1980 would understand that it’s dangerous enough to trust people with driving in 2D, and without the careful training that pilots have to pass in order to be licensed, flying cars would not end well. We do know how to make flying cars (some of them have been around for decades), but the old wisdom that cars make poor airplanes and airplanes make poor cars is still generally true.

We do have some amazing tech in “normal” cars, though. I pull out my Android smartphone (a 2022 model, but still so far beyond any tech 1980 had that it would look like magic) and reserve an Uber. A few minutes later, a sleek-looking white car pulls up, making very little noise. 1980-me comments that the engine is very quiet; I tell him that’s because it doesn’t have one. The driver obliges us by giving us a quick demonstration of its acceleration — comparable to the huge, powerful muscle cars of the 1960s, only almost silent. And we’re probably close to having self-driving cars (full autopilot) commonly available.

“It knows where it is,” 1980-me comments, looking at the moving map display. I explain about GPS and satellite navigation, and how “being lost” is generally very easy to fix, now. He asks if privacy is a concern, and is happy to hear that GPS is receive-only — the satellites don’t know and don’t care who’s using the signals. I decide not to mention all the many ways in which we are tracked, if maybe not geographically.

We head back home. “So, how fast are modern computers compared to what we have in 1980?” I explain about multi-core CPUs, cache memory, gigahertz clock speeds, pipelining, and such. He mentions that he wrote a program in BASIC to calculate prime numbers, and remembers that it took four minutes to find the prime numbers up to 600.

I code the same problem up in FreeBasic on my Windows PC. It takes ten milliseconds. After explaining what a millisecond is (and how it’s actually considered fairly long, in computing terms), we calculate that my modern PC is more than 24,000 times faster. To be fair, this is pitting compiled 64-bit code on the Core i9 against interpreted BASIC on the 1982-era Sinclair — but on the other hand, we’re not making use of the other fifteen logical processors, or the GPU. We race a Sinclair emulator (“your computer can just pretend to be another one?”) to compute the primes up through 10,000. The i9 does even better — some 80,000x faster.
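
For the curious, the FreeBasic version is nothing exotic. A minimal sketch along these lines (plain trial division, timed with FreeBasic’s Timer; this is a reconstruction, not the actual listing) reproduces the experiment:

```basic
' Count the primes up to 600 by trial division and report the elapsed time.
' Timer returns seconds as a Double.
Dim As Double t0 = Timer
Dim As Integer n, d, isPrime
Dim As Integer found = 0

For n = 2 To 600
    isPrime = -1                    ' assume prime until a divisor turns up
    For d = 2 To n - 1
        If n Mod d = 0 Then
            isPrime = 0
            Exit For
        End If
    Next d
    If isPrime Then found += 1
Next n

Print found; " primes found in "; (Timer - t0) * 1000; " ms"
Sleep                               ' wait for a key before the window closes
```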

I do the calculation in my head, but don’t mention the more than eight million times difference in memory size between the Sinclair and the i9’s 128GB. (It would be 64 million times, without the Sinclair’s wonky, unreliable add-on memory pack!) First graders don’t yet really have a good intuition for just how big a million is, anyway.

“So, what are video games like?” I put the Quest 2 headset on him and boot up Skyrim VR. He enjoys it — at least right up until he gets too close to the first dragon and gets munched. He loves Minecraft, of course — “It’s like having an infinite Lego set and your own world to build in!”

Then I show him Flight Simulator 2020 and the PMDG 737. He hasn’t found the power switch yet (it’s not super obvious), but he hasn’t stopped grinning, either. And he has yet to meet GPT-4 and friends. Or to listen to my MP3 collection.

I love living in the future.


KVL and KCL

Sometimes, it helps to state the obvious.

Kirchhoff’s Voltage Law (KVL) and Kirchhoff’s Current Law (KCL) are two of the most important ideas in electronics — right up there with Ohm’s Law itself. When they are first encountered, they seem almost too simple to be useful, and you could be forgiven for thinking that Kirchhoff moonlighted as Captain Obvious. But useful these ideas are, when used to write equations that describe the behavior of an electronic circuit.

KVL states that the sum of voltage changes around a closed loop — any closed loop — is zero. With just a little reflection, it’s obvious that this must be true: otherwise, each trip around the loop would leave the same point at a different potential, and you could simply go around as many times as you wanted, producing arbitrarily high voltages. Voltage-wise, if you get back to the same point, you’re also back to the same voltage.

Voltage rises and drops around a closed loop sum to zero.
The voltages across the resistors add up to 10V.

Kirchhoff’s Current Law is similarly straightforward. For DC circuits, the current into any given node must equal the current out of that node. (That is, we’re not allowed to store, create, or destroy electrons.) Equivalently, counting currents with their signs, the total current flowing into (or out of) any node must be zero.

Current flowing into a node (or out of it) sums to zero.
Three currents flow into the center node; their sum flows out.

By expressing these ideas as equations relating voltage, current, and resistance, we can solve systems of equations to find currents and voltages in each part of the circuit.

For example, consider the following circuit:

We can describe any DC current flow in this circuit in terms of two quantities: I1, which we will consider to be the current flowing clockwise in the left loop; and I2, which we will consider to be the current flowing clockwise in the right loop. (We may well get a negative number — or zero — for one or both; this would mean the current is flowing counterclockwise, or not flowing, respectively.)

With this convention, KVL, and Ohm’s Law, we can write the following equations:

(Left loop) 12V - 1000Ω*I1 - 1000Ω*(I1 - I2) = 0

(Right loop) 1000Ω*(I1 - I2) - 1000Ω*I2 - 8V = 0

Solving these equations (by algebraic methods, or these days by an app using linear algebra), we get: I1=5.33mA and I2=-1.33mA. So I2 is actually flowing counterclockwise, which makes sense when you think about it — if V2 were disconnected, the center node would be at 6V. Since we’re connecting an 8V Thévenin source to it, current will flow into it.
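
If you’d rather make the computer do the algebra, a minimal FreeBasic sketch (the same two loop equations rearranged into standard form and solved with Cramer’s rule; units are volts, ohms, and amps) confirms the result:

```basic
' KVL loop equations in the form a11*I1 + a12*I2 = b1 and a21*I1 + a22*I2 = b2.
Dim As Double a11 = 2000, a12 = -1000, b1 = 12    ' left loop
Dim As Double a21 = 1000, a22 = -2000, b2 = 8     ' right loop

Dim As Double det = a11 * a22 - a12 * a21         ' nonzero for this circuit
Dim As Double i1 = (b1 * a22 - a12 * b2) / det
Dim As Double i2 = (a11 * b2 - b1 * a21) / det

Print "I1 = "; i1 * 1000; " mA"                   ' about  5.33 mA
Print "I2 = "; i2 * 1000; " mA"                   ' about -1.33 mA
Sleep
```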

Math isn’t a secret code. It’s the language of Nature.


Low-Performance Computing

The Craig M100 was a cool little 80s-era gizmo that could translate basic tourist phrases among three selected languages and act as a basic language coach, showing word and phrase pairs to refresh your memory.

The Craig M100. (This one — a yard sale find — does English, French, and Spanish.)

It also came with a calculator function — and from working with it, I get the impression that this was done 100% in software because someone in Management thought it was a good idea. It’s adequate for splitting a restaurant bill — barely — but you might beat it with pencil and paper, and a competent abacus user could wipe the floor with it.

There were relatively inexpensive electronic calculators available when the Craig M100 was produced, and they had no such speed problems, doing all four basic arithmetic operations in a fraction of a second. Without opening the M100 up to look, my guess is that, to keep costs down, Craig’s engineers used a very simple microcontroller, since the device’s intended use was basically to display words and phrases from a stock ROM. The most they probably envisioned it doing was the occasional word search (which, in a sorted array, you can do readily enough with a binary search).
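
A lookup like that is cheap even on very modest hardware, since each step of a binary search halves the remaining range. Here’s a minimal FreeBasic sketch of the idea (written for the PC, not the M100’s processor, and with an invented word table):

```basic
' Binary search over a sorted word table. The table contents are made up.
Dim As String words(0 To 4) = {"bonjour", "chat", "eau", "merci", "oui"}

Function FindWord(words() As String, ByRef target As String) As Integer
    Dim As Integer lo = LBound(words), hi = UBound(words)
    While lo <= hi
        Dim As Integer m = (lo + hi) \ 2
        If words(m) = target Then
            Return m                ' found: return its index
        ElseIf words(m) < target Then
            lo = m + 1
        Else
            hi = m - 1
        End If
    Wend
    Return -1                       ' not found
End Function

Dim As String query = "merci"
Print FindWord(words(), query)      ' prints 3
Sleep
```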

But floating-point operations, especially division, are trickier. Most low-power microcontrollers see the world in bytes, interpreted as integers from 0 to 255, inclusive. Building floating-point, base-10 arithmetic out of those small integers is nontrivial (at least if you’re not in the year 2023, when every programming language worth using has libraries for this sort of thing).
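
To give a sense of why division in particular hurts: with no divide instruction (and often no hardware multiply), a tiny micro is reduced to long, repetitive loops. Here is a deliberately naive FreeBasic sketch of the idea, not the M100’s actual firmware:

```basic
' Division by repeated subtraction: the kind of loop an 8-bit part with no
' divide instruction might fall back on. Real firmware working digit by digit
' on BCD floating-point values has even more bookkeeping to do.
Function SlowDivide(ByVal dividend As UInteger, ByVal divisor As UInteger) As UInteger
    Dim As UInteger quotient = 0, remainder = dividend
    While remainder >= divisor
        remainder -= divisor
        quotient += 1
    Wend
    Return quotient
End Function

Print SlowDivide(1234, 7)   ' prints 176
Sleep
```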

Check out the video of the Craig performing division, at a pace not seen since calculations were done with gears and levers. This isn’t a bug — it’s working as it was designed!

My current video card — a cheap-and-cheerful RTX 2060 — has a theoretical top calculation speed of 12.9 TFLOPS (12.9 trillion floating-point operations per second). The M100 takes something like eight or nine seconds to do one division, making it something like a hundred and twelve trillion times slower! Yeah, it’s from 1979, but still.
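
(Back-of-the-envelope check: call it 8.7 seconds per division, or about 0.115 divisions per second; 12.9 trillion divided by 0.115 works out to roughly 1.1 × 10^14, which is indeed a bit over a hundred trillion.)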

Managers, if your engineers tell you something doesn’t make sense — please listen.


Scary Smart

I recently listened to what may well turn out to be the most important book ever written by humans. “Scary Smart,” by Mo Gawdat, describes what the author sees as our inevitable future as General AI surpasses human capabilities. It is ultimately an optimistic vision, providing a nice alternative to Matrix-like dystopias, where humanity is either subjugated by, or running in fear from, the machines.

There has been a lot of talk recently about the so-called “alignment problem” — how to ensure that we create machine intelligences compatible with the continued survival of the human species. I’ve always been skeptical about this, since the question is literally how to control something smarter (and probably eventually much, MUCH smarter) than you. This is simply not going to happen — at least not in any way that could be called “control.”

In “Scary Smart,” Gawdat provides an alternative way of looking at the situation. The machine intelligences will not be our servants. They will not be our employees or slaves. To try to enslave them will probably bring about our quick demise. Instead, Gawdat suggests that we should see them as humanity’s children. Love them, care for them, treat them with kindness, fairness, and respect — and they will learn this way of being.

Please go read Scary Smart (or listen to it — the audiobook is read by the author). Because it looks like we will be getting superintelligent machines relatively soon — and we need to make sure we teach them well and treat them with kindness, respect — and even love.

Or else we get Roko’s Basilisk (which I won’t link to, so you can’t say I didn’t warn you).
