jlm-blog
~jlm

17-Mar-2020

Things that aren’t going to happen in three weeks

Filed under: covid, science, sfba — jlm @ 21:07

• The SARS-2 (COVID) virus will burn out.

SARS-2 has demonstrated its ability to linger in populations, so even if the SFBA manages to rid itself of SARS-2, carriers entering the area will restart the local spread.

• There will be effective prophylaxis available.

Vaccines take much, much longer than that to develop, so we’re not going to see one that quickly. If we had a vaccine to SARS-1, there’d be a slim chance that it’d also immunize against SARS-2, but we’re out of luck there.

• Treatment will be substantially better.

Same as above. Treatments tailored specifically to a disease take time to develop and improve. The potential exceptions would be already-existing SARS, coronavirus-general, or broad-spectrum antivirals. But if any of those showed significant efficacy against SARS-2, we’d already know about it. So we shouldn’t expect revolutionary improvements in treatment, only incremental ones.

• The shelter-in-place order will be honored.

People are going to get fed up as the days, then weeks, drag by, and even today I saw kids practicing soccer on the Eastshore field.


Don’t get me wrong — there are good odds that a vaccine will be developed or that better treatments will arrive, and combining those with natural resistance means there’s a very good chance that the disease will be stopped. But the new treatments and prophylaxis will take longer than three weeks to happen. And sadly, that means this disease is unlikely to be stopped before it sweeps through our population, and a three-week shelter-in-place won’t change that.

17-Feb-2020

Unit tests are not enough

Filed under: programming — jlm @ 20:48

[I wrote this for another blog back in 2011, but it was lost to public view when that blog disappeared. It’s still valid today, though!]

In one of our datastructures, there’s an entry for a 64-bit unique resource identifier. This turned out to be a little too vague a description: Is it an 8-byte array, a 64-bit-wide integer, or an instance of a 64-bit UUID class? One of our developers thought one thing, and another thought another. No problem, right? The first compile by the one who chose wrong will show the type error. Except we’re using Python, so there’s no type checking until the code runs. Still no problem, because our devs wrote unit tests, so the type error will show up on their first run, right?

Unfortunately, unit tests help not an iota in situations like this. The developer expecting UUID objects writes unit tests which consume UUID objects, and the dev expecting byte arrays writes unit tests which produce byte arrays.

So, integration tests, then? It turns out our devs did these too, verifying that their code integrated with the fancy datastructure class, but this still didn’t catch the mismatch! This is because the datastructure class doesn’t actually do anything with the resource identifier that would trigger a type check: it just accepts it as an entry in a slot when an item is inserted, and gives it out as one of the slot entries when items are looked up, and Python is happy to let an instance of any type pass through unchecked.
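In compressed form, the situation looks something like this sketch (the names are illustrative, not from our actual codebase): each developer’s tests pass against the shared container, because the container never inspects what it stores.

```python
import uuid

# A deliberately type-agnostic container, like our datastructure class:
# it stores whatever it's given and hands it back unchecked.
class ResourceTable:
    def __init__(self):
        self._entries = {}

    def insert(self, name, resource_id):
        self._entries[name] = resource_id

    def lookup(self, name):
        return self._entries[name]

# Dev A believes resource IDs are uuid.UUID objects, and their tests
# (naturally) both produce and consume UUID objects -- so they pass.
def test_with_uuid_objects():
    table = ResourceTable()
    table.insert("disk0", uuid.uuid4())
    assert isinstance(table.lookup("disk0"), uuid.UUID)

# Dev B believes resource IDs are 8-byte arrays, and their tests
# both produce and consume byte arrays -- so they pass too.
def test_with_byte_arrays():
    table = ResourceTable()
    table.insert("disk0", b"\x00" * 8)
    assert table.lookup("disk0") == b"\x00" * 8

# Only when A's consumer meets B's producer does the mismatch appear:
def test_end_to_end():
    table = ResourceTable()
    table.insert("disk0", b"\x00" * 8)     # B's producing code
    rid = table.lookup("disk0")            # A's consuming code
    assert not isinstance(rid, uuid.UUID)  # surprise: it's bytes

test_with_uuid_objects()
test_with_byte_arrays()
test_end_to_end()
```

Each test suite is internally consistent; the inconsistency lives in the gap between them, which is exactly where unit and narrow integration tests don’t look.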

It’s only when we plug everything together that this trivial type mismatch shows up. So the moral: write end-to-end tests early. You need them to catch even problems as basic as this one, and the earlier you find problems, the better.

27-Jan-2020

My name is Mary, and I’m an electraholic

Filed under: fiction, humor — mary @ 20:59

Hi, I’m Mary and I’m an electron addict. I’ve been sustainable for 52 days.

I’d like to share my story. I first began to understand the gravity of my problem during the PG&E blackout in late October. For many of my friends it was an inconvenience, but for me it was an intervention. Within the first 24 hours I began to experience cravings. I craved hot coffee, hot water, any water, hot food; light to read by, light to find the bathroom, terrible TV shows, even terrible news. I became irritable, annoyed with my dog, my husband, even our cat just for being alive and invisible in the dark. I was anxious and jittery. How long could they legally turn power off? Where was the PUC when we needed them? Had Cliff repaired the voltage regulator on our generator correctly or were all our motors being ruined? Why did fires still start when power was supposedly shut off? How could we ever escape this dark prison?

Four and a half days I suffered acute withdrawal symptoms, and then, power was restored. I couldn’t wait to start using. I e-mailed. I showered. I washed and watered and cooked. The endorphins flowed. As a frail woman I could wash clothes, send my thoughts across miles, provide hot food for my family and bring water to a parched garden with such ease. The electrons were my slaves.

At the end of the day I went to bed exhausted, but not at ease. I clutched the remote, never wanting the sound and light to stop. I fretted about my supply of electrons. I needed to recharge more batteries and stash more water. Another intervention could occur anytime. I ordered a better generator/inverter on Amazon. I emailed Tesla. What I needed was a fourteen thousand dollar wall of batteries. That Tesla wall looked so lovely in the advertisement. All the precious electrons generated by our solar array could be safely stored there.

Still I could not sleep. Thank God for EA. The ad popped up just after I left the Tesla site. I called and my life changed. My wonderful sponsor helped me to let go and trust my higher power. She helped me see how I had harmed the planet and future generations while using. I accepted Mother Nature as my higher power and my sponsor helped me see Mother Nature’s generous hand in the golden persimmons and scarlet pomegranates, the change of seasons and the arrival of an atmospheric river. My sponsor helped me live sustainably.

Still, as I mentioned, I am a fragile woman. My hands tremble at the dimming of the day. I come here with an urgent need for a sponsor. My first sponsor, my beacon of hope and true north, relapsed on Thanksgiving. Her family refused to believe that a solar cooker nestled in the snow could roast a turkey and sadly they were right.

[This awesome story is by guest blogger Mary Myers. — JLM]

9-Jul-2019

Quine, Hofstadter, Gödel again

Filed under: math, philosophy — jlm @ 20:25

During some recent traveling, I re-read some sections of Douglas Hofstadter’s Gödel, Escher, Bach. Of particular interest was a step in the proof of Kurt Gödel’s first incompleteness theorem which involved substituting a variable of a formula with the formula itself. Hofstadter calls this peculiar operation “quining” after the logician W. V. Quine, who wrote extensively about self-reference. As with the previous times I read through that part, I noticed that the operation didn’t specify a variable for substitution like substitutions generally do, but instead performed substitution on all free variables, which is something I haven’t encountered anywhere else. This oddity wasn’t even mentioned, much less justified. Unlike after my previous reads, this time I decided to figure out why it was performed that way.

Now, the more interesting parts of Gödel’s Incompleteness Theorem involve what we now call Gödel coding to demonstrate that classes of logical systems can represent quining their own formulas and verifying their own proofs, the latter of which was very surprising at the time. But those parts turn out to be unnecessary for dealing with this small facet of Gödel’s proof, so let’s just deal with the proof’s final proposition, which is comparatively simple and straightforward: given that a logical system supports the operations of quining and proof verification (and logical conjunction and negation, and an existential quantifier), that logical system is incomplete (or inconsistent).
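As a sketch of that final proposition, in my own notation rather than the book’s: write Prf(p, f) for “p codes a proof of formula f,” and quine(f) for f with its own code substituted for every free variable. Then consider the formula with the single free variable x

```latex
U(x) \;\equiv\; \neg\,\exists p\;\mathrm{Prf}\bigl(p,\ \mathrm{quine}(x)\bigr)
\qquad\qquad
G \;\equiv\; \mathrm{quine}(U) \;=\; U(\ulcorner U\urcorner)
```

G asserts that quining U, which is exactly G itself, has no proof. If the system proves G, it proves a sentence asserting its own unprovability, and is inconsistent; if it doesn’t, then G is a true sentence the system can’t prove, and the system is incomplete. (Note that since U has only the one free variable, “substitute all free variables” and “substitute x” happen to coincide here.)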

(more…)

9-Feb-2019

Sleet again

Filed under: sfba — jlm @ 13:00

Having learned from my experience on Monday, I didn’t take my bike out shopping as usual, despite it being 48°F / 9°C (so probably warmer than Monday) and midday instead of evening. Then, in due course: ting-ting-ting-ting on my car’s body. I guess I need to save my biking in the rain for warmer parts of the year, even when it isn’t, strictly speaking, cold out.

4-Feb-2019

Sleet!

Filed under: biking, sfba — jlm @ 19:09

I biked home from work this day, as usual.
The bicycle has been my preferred mode of
personal transportation almost all my life.

It was raining.
I learned to bike in Portland.
I’m used to biking in the rain.

It was raining hard.
I was drenched. But one quickly learns that
once soaked, you can’t get any more wet.

The air was cool. High 40’s, I think.
That’s a good temperature to bike in. Not
uncomfortably cold, but cold enough to
pull your body heat away without sweating.

The rain was cold.
I’m from Portland. I can handle cold rain.
Suddenly, the rain stang!
What is this sorcery?
The rain beat a tattoo on my helmet.
Augh! Never mind the wet, this is sleet!
The weather forecast said nothing about sleet!
I have yet to enjoy being outdoors with sleet or hail.

2-Feb-2019

Less is more, CPU edition

Filed under: general, programming — jlm @ 17:11

I fixed an interesting bug recently. By way of background, I work on industrial robotics control software, which is a mix of time-critical software which has to do its job by a tight deadline (variously between 1 and 10 ms) or a safety watchdog will trigger, and other software that does tasks that aren’t time sensitive. We keep the timing-insensitive software from interfering with the real-time software’s performance by having the scheduler strictly prioritize the deadline-critical tasks over the “best effort” tasks, and further by massively overprovisioning the system with far more CPU and memory than the tasks could ever require.

The bug (as you may have guessed) is that the deadline was sometimes getting missed, triggering the watchdog. What makes the bug interesting is that it was caused by us giving the system too much excess CPU and memory!

The deadline overruns happen when the computer runs best-effort tasks. It only runs those tasks a small fraction of the time, but no matter, they shouldn’t be interfering with the deadline completion at all (and having our software fail its deadlines is unacceptable, of course). The real-time tasks’ peak usage is only about a quarter of the computer’s CPU capacity, and the system gives them that always: 100% of the time they want CPU, they get CPU. They are never delayed, and never have to wait on a resource held by a best-effort task. When they miss the deadlines, it’s always because they’ve gotten jobs that take the usual amount of work to complete, and had their usual amount of time to do it, yet the jobs somehow just take more time to finish. There’s no resource contention, nor are the caches an issue.

Yet when the best-effort tasks execute, the calculations done by the real-time tasks run slower on the same CPU for the same job with the same cache hit rate and with no memory or I/O contention. What’s going on? After hitting some false leads, I discovered that they’re going slower because the CPUs are running at a lower frequency. It turns out the CPUs’ clocks have been stepped down because they’re getting too hot. (Yes, the surrounding air is much warmer than it “should” be, but we don’t have a choice about that.)
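The throttling itself is easy to observe once you know to look for it. A minimal sketch, assuming Linux’s standard cpufreq sysfs layout (which is what our controllers run; the path and file names are the kernel’s, not ours):

```python
# Read a CPU's current clock frequency from the Linux cpufreq
# sysfs interface.  Returns the frequency in kHz.
def read_cur_freq_khz(cpu=0, sysfs="/sys/devices/system/cpu"):
    path = f"{sysfs}/cpu{cpu}/cpufreq/scaling_cur_freq"
    with open(path) as f:
        return int(f.read().strip())

# Sampling this once a second while the best-effort tasks run, e.g.
#     while True: print(read_cur_freq_khz()); time.sleep(1)
# shows the clock dipping below nominal right as the deadlines slip.
```

Correlating that dip with the watchdog alerts is what ruled out the false leads (contention, caches) and pinned the blame on thermal throttling.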

The computer is hot because all the CPUs are going at full blast, and they’re all going at full blast because the computer has enough memory to run every best-effort task at once. Half a minute later, the tasks are done, the computer cools off, the CPU frequency steps back up, and the deadline overruns cease. An hour later, the hourly background tasks all fire again, the fans spin up to full, but they’re not enough, the CPU frequency steps down, and the watchdog alerts about missed deadlines.

Okay, maybe we should change the background tasks so they execute in a staggered fashion. But before thinking about how I might do that, I tried disabling ⅔ of the computer’s CPUs instead. The hourly processing now takes 200 seconds instead of 30, but that’s still far below the 3600 available, which is what matters. Now the computer stays nice and cool, so the CPUs stay nice and quick, and the deadlines all get met. We were using too many CPUs to get the calculations we needed to get done fast, done fast. Who’d’a thunk?
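For the record, taking cores offline doesn’t need a BIOS visit: Linux exposes a per-CPU hotplug knob in sysfs. A sketch of the idea (requires root, and on most systems cpu0 itself can’t be offlined; the core count here is hypothetical):

```python
# Bring a CPU core online (True) or offline (False) via Linux's
# CPU-hotplug sysfs interface.
def set_cpu_online(cpu, online, sysfs="/sys/devices/system/cpu"):
    with open(f"{sysfs}/cpu{cpu}/online", "w") as f:
        f.write("1" if online else "0")

# Disabling two out of every three cores on a hypothetical 12-core box:
#     for cpu in range(1, 12):
#         if cpu % 3 != 0:
#             set_cpu_online(cpu, False)
```

The scheduler then spreads both the real-time and best-effort tasks over the cores that remain, and the reduced aggregate heat keeps the survivors at full clock.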

20-Oct-2018

Scheme got incrementally better

Filed under: programming — jlm @ 08:34

Almost a decade ago, I wrote about how the venerable and elegant but third-tier programming language Scheme had no way to access the program arguments that would work across different implementations, and pointed out that stumbling blocks of that size tend to hinder the adoption of a language. The last time I worked on a serious project in Scheme was only a little later, so not much news from the Schemeoverse has reached me since then. Which is why I only just learned that four years after that post (five years ago now), the Scheme community decided to fix this and standardized on option (c) from that post: (command-line) gets you your program’s arguments. Progress marches on! In the case of Scheme, that march is just veerrryyyy slow.

1-Sep-2018

Europe (probably) ditching DST

Filed under: politics, time — jlm @ 11:24

Lookee, lookee, seems like the Europeans have figured out how crazy DST is and are likely to drop it soon (by the standards of these things, so maybe in a couple years). Keep pushing for DST abolishment, everyone — progress is being made.

25-Aug-2018

Open source CPUs are not new

Filed under: general — jlm @ 23:25

There’s this video going around “An Open Source CPU!?” about SiFive’s implementation of RISC-V being “an open source CPU”, but it’s so duplicitous that I wanted to tear out my hair as I watched it. RISC-V is an interesting and worthy project, but the video is misleading in so many ways it’s hard to know where to begin.

For one thing, the x86-16 ISA wasn’t just hacked up in a few weeks from nothing. Intel’s architects spent months on it, and it was based on Intel’s 8-bit 8080 ISA, which was in turn informed by the experience of their earlier 8008 ISA. It had also already been on the market for a few years by the time IBM got around to the PC. For another thing, the x86-32 ISA was hardly lacking competition during its era of desktop dominance, with Motorola’s 68000, IBM’s Power Architecture, and DEC’s Alpha all big-name-backed ISAs trying to displace it. When the time came for 64-bit PCs, even Intel itself failed in its attempt to push a new kind of ISA for the PC market with its Itanium architecture.

Regarding the RISC/CISC divide, it is true that CISC processors don’t execute their ISA instructions directly in hardware now, but instead run RISC microcode (because RISC instruction-handling circuitry executes faster and is easier to design) for a CISC ISA interpreter. But that doesn’t mean RISC is best for application or system code, because microcode has no instruction cache! In the embedded systems and mobile-device markets, ARM had to abandon the “elegant” simplicity of their RISC ISA and supplement it with non-RISC “Thumb” and then “Thumb-2” extensions to raise its code density (important on small processors with small instruction caches) to keep x86 ISA microcontrollers, with their dense CISC code and ever-decreasing prices, at bay. As is often the case when competing technologies butt heads, the best solution is to find a way to use the advantages of both!

The video’s title is “An Open Source CPU!?” — and this is its biggest lie, because SiFive’s CPU being open is presented as something new, which is very much not true. Even if you dismiss all CISC designs as unworthy, there are open RISC specs with open source implementations that predate RISC-V, both grassroots and of commercial origin. Furthermore, SiFive’s CPU not only isn’t the first open source RISC processor, it’s not even the first open source RISC-V processor! You can get open source implementations of RISC-V here (VexRiscv) or here (ORCA) or here (Sodor).

This video is just an advertisement for SiFive pretending to be an independent informational video, spreading misinformation about the history and present state of microprocessors. (For your amusement, go and pause the video around 6m12s: the presenter is stating the inferiority of other open ISAs compared to RISC-V while showing a changelog on GitHub. But the changelog shown has nothing to do with another ISA; quite the opposite, it’s the changelog for the port of gdb to RISC-V! The presenter could have grabbed a screenshot from the GitHub pages of the ZPU project with negligible additional effort and shown something actually applicable to his claim, but once you’ve stepped off the straight-and-narrow path of honesty onto the wide highway of dishonesty, why bother?)

In short, that video is complete and utter BS, the presenter should be downvoted into YouTube oblivion, you should never pay one red cent for anything from SiFive, and RISC-V deserves better. It’s strong enough that it should be able to fight honestly, and having its proponents descend to dishonest shilling for one corporation is disgraceful.
