Thank you Captain Obvious

Filed under: econ, news, politics — jlm @ 17:14

From Jobless claims jump by 70,000 as virus starts to take hold (San Francisco Gate, by Martin Crutsinger via AP):

“The more aggressive coronavirus containment measures imposed in recent days involving the near total shutdown of the retail, leisure and travel sectors in some parts of the country are clearly starting to have a dramatic impact,” said Andrew Hunter, senior U.S. economist at Capital Economics.

Ya think?


What will happen in three weeks

Filed under: science, sfba — jlm @ 20:01

To be a bit more upbeat than my last post… What will happen is that the transmission rates of all communicable diseases will drop. We’ll have fewer colds, fewer flus. We’ve put the brakes on the spread of germs in general — at least until we go back to normal, and they go back to normal.


Things that aren’t going to happen in three weeks

Filed under: science, sfba — jlm @ 21:07

• The SARS-2 (COVID) virus will burn out.

SARS-2 has demonstrated its ability to linger in populations, so even if the SFBA manages to rid itself of SARS-2, carriers entering the area will restart the local spread.

• There will be effective prophylaxis available.

Vaccines take much, much longer than that to develop, so we’re not going to see one that quickly. If we had a vaccine to SARS-1, there’d be a slim chance that it’d also immunize against SARS-2, but we’re out of luck there.

• Treatment will be substantially better.

Same as above. Treatments tailored specifically to a disease take time to develop and improve. The potential exceptions would be existing antivirals: ones targeting SARS specifically, coronaviruses generally, or a broad spectrum of viruses. But if any of those showed significant efficacy against SARS-2, we’d already know about it. So we shouldn’t expect revolutionary improvements in treatment, only incremental ones.

• The shelter-in-place order will be honored.

People are going to get fed up as the days, then weeks, drag by, and even today I saw kids practicing soccer on the Eastshore field.


Don’t get me wrong — there are good odds that a vaccine will be developed or that better treatments will arrive, and combining those with natural resistance means there’s a very good chance that the disease will be stopped. But the new treatments and prophylaxis will take longer than three weeks to happen. And sadly, that means this disease is unlikely to be stopped before it sweeps through our population, and a three-week shelter-in-place won’t change that.


Unit tests are not enough

Filed under: programming — jlm @ 20:48

[I wrote this for another blog back in 2011, but it was lost to public view when that blog disappeared. It’s still valid today, though!]

In one of our datastructures, there’s an entry for a 64-bit unique resource identifier. This turned out to be a little too vague a description: Is it an 8-byte array, a 64-bit wide integer, or an instance of a 64-bit UUID class? One of our developers thought one thing, and another thought another. No problem, right? The first compile by the one who chose wrong will show the type error. Except we’re using Python, so there’s no type checking until the code runs. Still no problem, because our devs wrote unit tests, so the type error will show up on their first run, right?

Unfortunately, unit tests help not an iota in situations like this. The developer expecting UUID objects writes unit tests which consume UUID objects, and the dev expecting byte arrays writes unit tests which produce byte arrays.

So, integration tests, then? It turns out our devs did these too, verifying that their code integrated with the fancy datastructure class, but this still didn’t catch the mismatch! This is because the datastructure class doesn’t actually do anything with the resource identifier that would trigger a type check: it just accepts it as an entry in a slot when an item is inserted, and gives it out as one of the slot entries when items are looked up, and Python is happy to let an instance of any type pass through unchecked.

It’s only when we plug everything together that this trivial type mismatch shows up. So the moral: write end-to-end tests early. You need them to turn up problems even as basic as this, and the earlier you find problems, the better.
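Here’s a minimal Python sketch of the trap (all names are hypothetical, not our actual code): each dev’s unit tests pass, the integration test against the pass-through datastructure passes, and only the end-to-end wiring blows up.

```python
import uuid

class Registry:
    """Stand-in for the fancy datastructure: it never touches the
    resource ID, just stores it in a slot and hands it back."""
    def __init__(self):
        self._slots = {}

    def insert(self, key, resource_id):
        self._slots[key] = resource_id

    def lookup(self, key):
        return self._slots[key]

# Dev A read "64-bit unique resource identifier" as an 8-byte array.
def make_resource_id() -> bytes:
    return uuid.uuid4().bytes[:8]

# Dev B read it as a UUID object (UUID has .int; bytes does not).
def describe_resource(resource_id) -> str:
    return f"resource {resource_id.int:x}"

# Dev A's unit test: produces what Dev A expects.  Passes.
assert isinstance(make_resource_id(), bytes)

# Dev B's unit test: consumes what Dev B expects.  Passes.
assert describe_resource(uuid.UUID(int=0xCAFE)) == "resource cafe"

# Integration test against Registry: the ID flows through untouched.  Passes.
registry = Registry()
rid = make_resource_id()
registry.insert("disk0", rid)
assert registry.lookup("disk0") == rid

# End to end: the bytes finally reach code that wants a UUID, and only
# here does Python object.
try:
    describe_resource(registry.lookup("disk0"))
except AttributeError as err:
    print("caught only end-to-end:", err)
```

Nothing short of exercising the producer, the datastructure, and the consumer in one run forces the two interpretations to meet.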


My name is Mary, and I’m an electraholic

Filed under: humor — mary @ 20:59

Hi, I’m Mary and I’m an electron addict. I’ve been sustainable for 52 days.

I’d like to share my story. I first began to understand the gravity of my problem during the PG&E blackout in late October. For many of my friends it was an inconvenience, but for me it was an intervention. Within the first 24 hours I began to experience cravings. I craved hot coffee, hot water, any water, hot food; light to read by, light to find the bathroom, terrible TV shows, even terrible news. I became irritable, annoyed with my dog, my husband, even our cat just for being alive and invisible in the dark. I was anxious and jittery. How long could they legally turn power off? Where was the PUC when we needed them? Had Cliff repaired the voltage regulator on our generator correctly or were all our motors being ruined? Why did fires still start when power was supposedly shut off? How could we ever escape this dark prison?

Four and a half days I suffered acute withdrawal symptoms, and then, power was restored. I couldn’t wait to start using. I e-mailed. I showered. I washed and watered and cooked. The endorphins flowed. As a frail woman I could wash clothes, send my thoughts across miles, provide hot food for my family and bring water to a parched garden with such ease. The electrons were my slaves.

At the end of the day I went to bed exhausted, but not at ease. I clutched the remote, never wanting the sound and light to stop. I fretted about my supply of electrons. I needed to recharge more batteries and stash more water. Another intervention could occur anytime. I ordered a better generator/inverter on Amazon. I emailed Tesla. What I needed was a fourteen thousand dollar wall of batteries. That Tesla wall looked so lovely in the advertisement. All the precious electrons generated by our solar array could be safely stored there.

Still I could not sleep. Thank God for EA. The ad popped up just after I left the Tesla site. I called and my life changed. My wonderful sponsor helped me to let go and trust my higher power. She helped me see how I had harmed the planet and future generations while using. I accepted Mother Nature as my higher power and my sponsor helped me see Mother Nature’s generous hand in the golden persimmons and scarlet pomegranates, the change of seasons and the arrival of an atmospheric river. My sponsor helped me live sustainably.

Still, as I mentioned, I am a fragile woman. My hands tremble at the dimming of the day. I come here with an urgent need for a sponsor. My first sponsor, my beacon of hope and true north, relapsed on Thanksgiving. Her family refused to believe that a solar cooker nestled in the snow could roast a turkey and sadly they were right.

[ This awesome story is by guest blogger Mary Myers.   — JLM ]


Quine, Hofstadter, Gödel again

Filed under: math, philosophy — jlm @ 20:25

During some recent traveling, I re-read some sections of Douglas Hofstadter’s Gödel, Escher, Bach. Of particular interest was a step in the proof of Kurt Gödel’s first incompleteness theorem which involved substituting a variable of a formula with the formula itself. Hofstadter calls this peculiar operation “quining” after the logician W. V. Quine, who wrote extensively about self-reference. As with the previous times I read through that part, I noticed that the operation didn’t specify a variable for substitution like substitutions generally do, but instead performed substitution on all free variables, which is something I haven’t encountered anywhere else. This oddity wasn’t even mentioned, much less justified. Unlike after my previous reads, this time I decided to figure out why it was performed that way.

Now, the more interesting parts of Gödel’s Incompleteness Theorem involve what we now call Gödel coding to demonstrate that classes of logical systems can represent quining their own formulas and verifying their own proofs, the latter of which was very surprising at the time. But those parts turn out to be unnecessary for dealing with this small facet of Gödel’s proof, so let’s just deal with the proof’s final proposition, which is comparatively simple and straightforward: given that a logical system supports the operations of quining and proof verification (and logical conjunction and negation, and an existential quantifier), that logical system is incomplete (or inconsistent).
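The final proposition can be sketched as follows (my own ad-hoc notation for the standard construction, not Hofstadter’s rendering):

```latex
% quine(x): the code of the formula coded by x, with x itself substituted
%           for ALL of that formula's free variables -- the odd operation
%           noted above.
% Prov(y):  \exists p\, \mathrm{Proof}(p, y), i.e. "the formula coded by
%           y is provable".
\[
  F(x) \;\equiv\; \lnot \mathrm{Prov}\bigl(\mathrm{quine}(x)\bigr),
  \qquad
  G \;\equiv\; F\bigl(\ulcorner F \urcorner\bigr).
\]
% Quining \ulcorner F \urcorner substitutes \ulcorner F \urcorner for F's
% free variable x, so quine(\ulcorner F \urcorner) = \ulcorner G \urcorner,
% which gives
\[
  G \;\equiv\; \lnot \mathrm{Prov}\bigl(\ulcorner G \urcorner\bigr).
\]
% If the system proves G, then Prov(⌜G⌝) holds, so the system proves a
% statement it also refutes: inconsistency.  If it never proves G, then
% G is true yet unprovable: incompleteness.
```

One plausible reason for substituting all free variables, rather than a named one: it makes quining a one-argument operation on codes, so the arithmetized substitution never needs to carry a variable name along as a parameter.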



Sleet again

Filed under: sfba — jlm @ 13:00

Having learned from my experience on Monday, I didn’t take my bike out shopping as usual, despite it being 48°F / 9°C (so presumably warmer than Monday) and midday instead of evening. Then, in due course, came ting-ting-ting-ting on my car’s body. I guess I need to save my biking in the rain for warmer parts of the year, even though it’s not at present cold per se.



Filed under: biking, sfba — jlm @ 19:09

I biked home from work this day, as usual.
The bicycle has been my preferred mode of
personal transportation almost all my life.

It was raining.
I learned to bike in Portland.
I’m used to biking in the rain.

It was raining hard.
I was drenched. But one quickly learns that
once soaked, you can’t get any more wet.

The air was cool. High 40’s, I think.
That’s a good temperature to bike in. Not
uncomfortably cold, but cold enough to
pull your body heat away without sweating.

The rain was cold.
I’m from Portland. I can handle cold rain.
Suddenly, the rain stang!
What is this sorcery?
The rain beat a tattoo on my helmet.
Augh! Never mind the wet, this is sleet!
The weather forecast said nothing about sleet!
I have yet to enjoy being outdoors with sleet or hail.


Less is more, CPU edition

Filed under: general, programming — jlm @ 17:11

I fixed an interesting bug recently. By way of background, I work on industrial robotics control software, which is a mix of time-critical software which has to do its job by a tight deadline (variously between 1 and 10 ms) or a safety watchdog will trigger, and other software that does tasks that aren’t time sensitive. We keep the timing-insensitive software from interfering with the real-time software’s performance by having the scheduler strictly prioritize the deadline-critical tasks over the “best effort” tasks, and further by massively overprovisioning the system with far more CPU and memory than the tasks could ever require.

The bug (as you may have guessed) is that the deadline was sometimes getting missed, triggering the watchdog. What makes the bug interesting is that it was caused by us giving the system too much excess CPU and memory!

The deadline overruns happen when the computer runs best-effort tasks. It only runs those tasks a small fraction of the time, but no matter, they shouldn’t be interfering with the deadline completion at all (and having our software fail its deadlines is unacceptable, of course). The real-time tasks’ peak usage is only about a quarter of the computer’s CPU capacity, and the system gives them that always: 100% of the time they want CPU, they get CPU. They are never delayed, and never have to wait on a resource held by a best-effort task. When they miss the deadlines, it’s always because they’ve gotten jobs that take the usual amount of work to complete, and had their usual amount of time to do it, yet the jobs somehow just take more time to finish. There’s no resource contention, nor are the caches an issue.

Yet when the best-effort tasks execute, the calculations done by the real-time tasks run slower on the same CPU for the same job with the same cache hit rate and with no memory or I/O contention. What’s going on? After hitting some false leads, I discovered that they’re going slower because the CPUs are running at a lower frequency. It turns out the CPUs’ clocks have been stepped down because they’re getting too hot. (Yes, the surrounding air is much warmer than it “should” be, but we don’t have a choice about that.)

The computer is hot because all the CPUs are going at full blast: every best-effort task is executing at once, since the computer can fit them all in memory. Half a minute later, the tasks are done, the computer cools off, the CPU frequency gets stepped back up, and the deadline overruns cease. An hour later, the hourly background tasks all go again, the fans spin up to full, but they’re not enough and the CPU frequency steps down, hence the watchdog alerts about missed deadlines.

Okay, maybe we should change the background tasks so they execute in a staggered fashion. But before thinking about how I might do that, I tried disabling ⅔ of the computer’s CPUs instead. The hourly processing now takes 200 seconds instead of 30, but that’s still far below the 3,600 available, and that’s what matters. Now the computer stays nice and cool, so the CPUs stay nice and quick, and the deadlines all get met. We were using too many CPUs to get the calculations we needed to get done fast, done fast. Who’d’a thunk?
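The failure mode fits in a toy model (every number below is invented for illustration; the real system is far messier):

```python
# Toy thermal-throttling model: illustrative constants, not real hardware.
BASE_GHZ = 3.0        # clock speed while the package stays cool
THROTTLED_GHZ = 2.0   # clock speed after thermal throttling kicks in
HEAT_LIMIT = 8.0      # overheats when this many CPU-loads of heat accumulate
RT_LOAD = 0.25        # real-time tasks use about 1/4 of each CPU
RT_JOB_MCYCLES = 2.5  # work per real-time job, in millions of cycles
DEADLINE_MS = 1.0

def rt_job_ms(active_cpus: int, batch_running: bool) -> float:
    """Milliseconds one real-time job takes, under a crude model where
    heat tracks total CPU load and excess heat halves-ish the clock."""
    heat = active_cpus * (1.0 if batch_running else RT_LOAD)
    ghz = THROTTLED_GHZ if heat > HEAT_LIMIT else BASE_GHZ
    return RT_JOB_MCYCLES / ghz   # Mcycles / GHz == milliseconds

# 12 CPUs, batch idle: cool, fast, deadline met.
assert rt_job_ms(12, batch_running=False) <= DEADLINE_MS
# 12 CPUs, hourly batch saturating them: throttled, deadline missed.
assert rt_job_ms(12, batch_running=True) > DEADLINE_MS
# Disable 2/3 of the CPUs: the batch can no longer overheat the package.
assert rt_job_ms(4, batch_running=True) <= DEADLINE_MS
```

Staggering the batch would attack the heat spike over time; dropping CPUs attacks the same spike from the capacity side, at the cost of a slower but still comfortably in-budget batch.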


Scheme got incrementally better

Filed under: programming — jlm @ 08:34

Almost a decade ago, I wrote about how the venerable and elegant but third-tier programming language Scheme had no way to access the program arguments that would work across different implementations, and pointed out that that kind of large stumbling block tends to hinder the adoption of a language. The last time I worked on a serious project in Scheme was only a little later, so not much news from the Schemeoverse has reached me since then. Which is why I only just learned that four years after that post (five years ago now), the Scheme community decided to fix that and standardized on option (c) from that post: (command-line) gets you your program’s arguments. Progress marches on! In the case of Scheme, that march is just veerrryyyy slow.

Powered by WordPress