jlm-blog
~jlm

2-Feb-2019

Less is more, CPU edition

Filed under: general, programming — jlm @ 17:11

I fixed an interesting bug recently. By way of background, I work on industrial robotics control software, which is a mix of time-critical software which has to do its job by a tight deadline (variously between 1 and 10 ms) or a safety watchdog will trigger, and other software that does tasks that aren’t time sensitive. We keep the timing-insensitive software from interfering with the real-time software’s performance by having the scheduler strictly prioritize the deadline-critical tasks over the “best effort” tasks, and further by massively overprovisioning the system with far more CPU and memory than the tasks could ever require.

The bug (as you may have guessed) is that the deadline was sometimes getting missed, triggering the watchdog. What makes the bug interesting is that it was caused by us giving the system too much excess CPU and memory!

The deadline overruns happen when the computer runs best-effort tasks. It only runs those tasks a small fraction of the time, but no matter, they shouldn’t be interfering with the deadline completion at all (and having our software fail its deadlines is unacceptable, of course). The real-time tasks’ peak usage is only about a quarter of the computer’s CPU capacity, and the system gives them that always: 100% of the time they want CPU, they get CPU. They are never delayed, and never have to wait on a resource held by a best-effort task. When they miss the deadlines, it’s always because they’ve gotten jobs that take the usual amount of work to complete, and had their usual amount of time to do it, yet the jobs somehow just take more time to finish. There’s no resource contention, nor are the caches an issue.

Yet when the best-effort tasks execute, the calculations done by the real-time tasks run slower on the same CPU for the same job with the same cache hit rate and with no memory or I/O contention. What’s going on? After hitting some false leads, I discovered that they’re going slower because the CPUs are running at a lower frequency. It turns out the CPUs’ clocks have been stepped down because they’re getting too hot. (Yes, the surrounding air is much warmer than it “should” be, but we don’t have a choice about that.)
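
If you want to catch this sort of thing in the act on Linux, the cpufreq entries in sysfs are one place to look. Here’s a minimal sketch of a frequency check (not our actual monitoring code, and assuming your cpufreq driver exposes scaling_cur_freq for cpu0):

#include <stdio.h>

/* Read cpu0's current clock frequency (reported in kHz) from cpufreq. */
int main(void) {
    FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq", "r");
    if (f == NULL) {
        perror("scaling_cur_freq");
        return 1;
    }
    unsigned long khz;
    if (fscanf(f, "%lu", &khz) == 1)
        printf("cpu0: %lu kHz\n", khz);
    fclose(f);
    return 0;
}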

The computer is hot because all the CPUs are going at full blast, because all of the best-effort tasks are executing because the computer can fit them all in memory. Half a minute later, the tasks are done, the computer cools off, the CPU frequency gets stepped back up, and the deadline overruns cease. An hour later, the hourly background tasks all go again, the fans spin up to full, but they’re not enough and the CPU frequency steps down, hence the watchdog alerts about missed deadlines.

Okay, maybe we should change the background tasks so they execute in a staggered fashion. But before thinking about how I might do that, I tried disabling ⅔ of the computer’s CPUs instead. The hourly processing now takes 200s instead of 30, but it’s still far below 3600, and that’s what matters. Now the computer stays nice and cool, so the CPUs stay nice and quick, and the deadlines all get met. We were using too many CPUs to get the calculations we needed to get done fast, done fast. Who’d’a thunk?
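
In case you’re wondering what “disabling CPUs” means mechanically: on Linux you can take cores offline through sysfs CPU hotplug. A minimal sketch, assuming root, a kernel built with hotplug support, and that the core in question isn’t cpu0 (which usually can’t be taken offline):

#include <stdio.h>

/* Take a CPU core offline (or bring it back) by writing to its sysfs
 * "online" file.  Needs root and kernel CPU-hotplug support. */
static int set_cpu_online(int cpu, int online) {
    char path[128];
    snprintf(path, sizeof path, "/sys/devices/system/cpu/cpu%d/online", cpu);
    FILE *f = fopen(path, "w");
    if (f == NULL) {
        perror(path);
        return -1;
    }
    fprintf(f, "%d\n", online ? 1 : 0);
    fclose(f);
    return 0;
}

int main(void) {
    /* For example, take cpus 2 and 3 out of service. */
    set_cpu_online(2, 0);
    set_cpu_online(3, 0);
    return 0;
}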

20-Oct-2018

Scheme got incrementally better

Filed under: programming — jlm @ 08:34

Almost a decade ago, I wrote about how the venerable and elegant but third-tier programming language Scheme had no way to access the program arguments that would work across different implementations, and pointed out that this sort of large stumbling block tends to hinder the adoption of a language. The last time I worked on a serious project in Scheme was only a little later. So not much news from the Schemeoverse has reached me since then. Which is why I just learned that four years after that post, five years previous to the present day, the Scheme community decided to fix that and standardized on option (c) from that post: (command-line) gets you your program’s arguments. Progress marches on! In the case of Scheme, that march is just veerrryyyy slow.

5-Oct-2017

Tutorial for building .so files

Filed under: linux, programming — jlm @ 18:34

Exactly how one goes about building .so files* isn’t widely understood, and the documentation on it is overcomplicated IMHO. So, I figure the web needs a tutorial demonstrating how to build an extremely simple .so file. Et voilà, here’s a step-by-step guide for building a .so file on Linux or a similarly flavored Unix using a GNU-compatible toolchain.

We’re going to make two .o files, “call_me.o” and “die.o”, and combine them into a single “call_me_and_die” shared library. Making a library is very much like making an executable, just with some tweaks to the build commands and some extra steps at the end. To make our “call_me_and_die” library, we start the way we would with a “call_me_and_die” executable: writing a .h file for the common function signatures. File so_tut.h:

extern void call_me(void);
extern void die_die_die(void);
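
The bodies of call_me() and die_die_die() aren’t the interesting part; any ordinary C will do. For concreteness, here’s one plausible pair of implementation files (just stand-ins; yours can be anything):

/* call_me.c */
#include <stdio.h>
#include "so_tut.h"

void call_me(void) {
    printf("call_me() was called\n");
}

/* die.c */
#include <stdio.h>
#include <stdlib.h>
#include "so_tut.h"

void die_die_die(void) {
    fprintf(stderr, "die_die_die(): goodbye, cruel world\n");
    exit(1);
}

From there, a GNU-toolchain build typically boils down to compiling with -fPIC and linking with -shared, e.g. gcc -fPIC -c call_me.c die.c followed by gcc -shared -o libcall_me_and_die.so call_me.o die.o.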

(more…)

11-Sep-2017

EBUSY: Another decade, and still sucking

Filed under: general, linux, programming — jlm @ 12:40

Today’s xkcd riffs on a common frustration about busy files: programs that can’t do their intended operations on them rarely (if ever) specify the file that’s busy or tell the user which programs are using the file — and without that knowledge, the user can’t fix the problem.

There are tools that can help you discover those things, as I described how to use ten years ago, but I’d already been running into EBUSY problems for ten years by then, and the old guard before me for another ten years beyond that. Why haven’t things improved?

Sometimes file_operation(filename) fails — hey, this happens, busy files are a thing, ain’t nothing your program can do about that. But report to the user that file_operation on filename failed for reason strerror(errno)! Not reporting the filename is just sloppy programming.
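
Doing it right costs one line. A minimal sketch in C, using unlink() as a stand-in for whatever file_operation actually is:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Try the operation; on failure, report *which* file failed and why. */
int remove_or_complain(const char *filename) {
    if (unlink(filename) != 0) {
        fprintf(stderr, "unlink(%s) failed: %s\n", filename, strerror(errno));
        return -1;
    }
    return 0;
}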

It’s not like the tools have gotten any better. What, you think it’s acceptable to make the user trace your program, looking through a haystack of every system call to find the needle which is the program’s critical failure? Is the work you’re putting on the user’s shoulders really less than the work of sticking the filename in the error message? This has sucked for two generations now. Please help make it suck less.

4-Mar-2017

twitcode #4: reverse diff — using the right tool for the job

Filed under: programming — jlm @ 20:29

Mercurial’s hg diff command supports a --reverse option which shows the regular diff output except it reverses the sense of the comparison — i.e., it goes from the “destination” to the “source” (git diff supports this action too, but as the flag “-R”). Most of the time you want the ordinary “forward” sense, but occasionally the reverse sense comes in handy, and that’s why that option’s there. On rare occasion, I’ll even want to do this to files not under version control, but the regular system diff doesn’t support this feature.

So, after hitting this deficiency again recently, I decided to write up my own reverse-diff command which would swap its last two arguments and call diff. I started with the shell, as dealing with command arguments and calling programs is its forte. But it turned out to be surprisingly difficult to do stuff like copy the argument list or mess around with the end and near-end of the argument list, which I thought would be dead-simple operations. After futzing around with shell variables and parameters and the various options for variable/parameter expansion for something like 25 minutes, I came to my senses and did it in something like two minutes using C, where nothing’s going to interpret any kind of data as anything unless you explicitly request it to, and array manipulation is built in with clean syntax. All I had to do was swap argv[argc-1] and argv[argc-2] then execvp("diff", argv), easy peasy.

And if I golf argc and argv into c and v, then it fits in 130 chars [source]:
#include <unistd.h>
int main(int c, char**v) {
 if (c>2) { char*t=v[c-2]; v[c-2]=v[c-1]; v[c-1]=t; }
 return execvp("diff", v);
}

I could probably omit the check for ≥2 arguments, as the system diff doesn’t support the convention that a missing file argument means to use stdin (instead, it treats a filename of - (single hyphen) as representing stdin), but perhaps it’ll be used by somebody who’s installed an enhanced diff program.

I’m also amused that each of my “twitcodes” has been in a different language: shell, perl, python, and now C.

29-Dec-2016

A puzzle about C’s stdio

Filed under: programming — jlm @ 18:06

I found the C puzzles webpage by Gowri Kumar to be a very interesting collection of oddities of the C language and some of its basic libraries. If you work with C for fun or profit, I encourage you to go and give them a try. I found very few of them to produce behavior I hadn’t expected, which could be a symptom of overfamiliarity with C. I did find a few surprises though, which I felt warranted further investigation. (more…)

28-May-2016

twitcode #3: New mail in mbox

Filed under: programming — jlm @ 09:12

Once upon a time, people’s interactions with computers (those few people who got to interact with computers directly) were mostly through a teletype: a combination of a keyboard where they could type instructions to the computer and a printer where it gave the responses back to them. This model is tenaciously clung to by a handful of still-active projects such as gdb, but the bulk of its use nowadays is from command shells (bash, zsh, cmd.exe), because command-response interaction is much easier to specify, record, automate, examine, modify, and perform remotely in a teletype style than in a GUI style.
(more…)

24-Nov-2015

twitcode #2: decoding MIME

Filed under: programming — jlm @ 12:12

Messing around with some mail handling scripts, I was surprised I didn’t find any good ways to decode MIME as a stream filter. Ten minutes later, I have 13 lines of Perl which do it in 201 characters in my normal non-terse style. It’s great for normal use, but a tiny bit of golfing fits it in a tweet’s 140-character limit:

$ cat mime_decode.pl
#!/usr/bin/perl -w
use strict; use utf8; use MIME::WordDecoder; 
binmode(STDOUT, ":utf8");
while (<>) { print mime_to_perl_string($_); }
$ wc mime_decode.pl
  4  16 137 mime_decode.pl

Good thing there was already a method which does all the real work…

23-Dec-2012

Addressing the fragile base class problem

Filed under: programming — jlm @ 21:47

I’ve been thinking about the fragile base class problem lately. (Yes, I know it’s almost Christmas. My mind works mysteriously.) I started thinking by analogy to APIs, because the interface a superclass gives a subclass is in fact an API, even if it’s not called that. So, the superclass’s API changes, breaking the subclass, just like a regular API’s change can break a client. How do we deal with this with regular APIs? If we are to make a compatibility-breaking change (which introducing any member into a superclass potentially is), we version the API so that a client requesting version 1 semantics gets them while only clients written against the newer semantics will request version 2. We could do the same kind of thing with class inheritance if we mark everything with revision numbers, which we reference when inheriting.

class base@2 {
    void start@1();
    void stop@1();
    void idle@2();
};

class child@1 extends base@1 {
    void idle@1();
    void park@1();
};

Here’s our classic case of a fragile base class. child subclassed base and defined the new method idle(), then later base was extended with its own method idle(). Normally, this would cause a problem — the new stop() implementation might call idle(), say, and child’s idle() won’t have been written with overriding a then-nonexistent base::idle() in mind. But with these revision markings, we say that child only overrides methods marked as being in revision 1 of base. So, when stop() calls idle(), it gets base::idle, not child::idle, and when park() calls idle(), the call resolution goes the other way.

The problem I see with this, though, is that when going through an indirect superclass, it can be unclear which revision of it applies.

class grandparent@3 {
    void method@2();
};

class parent@2 extends grandparent@2;

class child@1 extends parent@1 {
    void method@1();
};

Uh-oh. Should child’s method() override grandparent’s? If parent@1 extended grandparent@2, then yes. But if it extended grandparent@1, then no. So do we need to list the parent class revisions for every revision of the child class? I’d hope there’d be a better way. Perhaps we’d be relying on an IDE to handle the revision numbers for us; keeping them updated is just a dumb task, so in that case the IDE could maintain the manifest of parent revisions too.

6-Oct-2011

Better SelectableChannel registration in Java NIO

Filed under: programming — jlm @ 12:01

Java NIO’s Selector class is surprisingly difficult to use with multiple threads. Everyone who tries it encounters mysterious blocking, much of which is due to it sharing a lock with SelectableChannel.register. So, if you happen to try to register a channel in a thread other than the selector thread, it blocks that thread until the select is done. Boo.

So, this is a NIO rite of passage of sorts: finding this misfeature and then looking up how to work around it. The usual answer is to keep a concurrent queue (e.g. a ConcurrentLinkedQueue) of pending registrations, and have your select loop process that queue between select calls. Uggggleeee. It occurred to me that, with a synchronization lock, we can do better.

To register a channel and get the SelectionKey:

  synchronized(registerLock) {
      // Kick the selector out of any select() in progress (or make the next
      // select() return immediately) so that register() below won't block.
      selector.wakeup();
      key = channel.register(selector, operations, attachment);
  }

And in the select loop:

  // "before": whatever other per-iteration work the loop does
  synchronized(registerLock) {}  // empty block: wait out any in-progress registration
  // "between": registerLock released, select() not yet entered
  numEvents = selector.select(timeout);
  // "after": process the selected keys, then loop

If the loop is before or after when our registration block takes the registerLock, we’re fine, as having the registerLock prevents the loop from reaching select until we’ve registered and released the lock. If the loop is inside select(), then the wakeup() will cause it to exit select, and it won’t re-enter because we hold the registerLock, so we’re fine.

The tricky case is when the loop has the registerLock or is between releasing the registerLock and entering select(). In these cases, the registration block takes the registerLock and races with the loop over select() and wakeup(). Fortunately, the NIO designers anticipated that programmers would have a desire to ensure a Selector wasn’t selecting, even if the wakeup was called in the window between checking it was okay to enter select and actually entering it. Selector.select() returns immediately if wakeup() had been called after that Selector’s prior select(). So, our race doesn’t matter, the select() always exits, and we’re safe.

This is so much simpler than building up a queue of registrations and processing them in the select loop, and we get the SelectionKey right away, that I wonder if I’m missing something. Why is the textbook technique to use a concurrent queue, instead of a synchronization lock like this?
