
2-Feb-2019

Less is more, CPU edition

Filed under: general, programming — jlm @ 17:11

I fixed an interesting bug recently. By way of background, I work on industrial robotics control software, which is a mix of time-critical software that has to do its job by a tight deadline (variously between 1 and 10 ms) or a safety watchdog will trigger, and other software doing tasks that aren’t time-sensitive. We keep the timing-insensitive software from interfering with the real-time software’s performance by having the scheduler strictly prioritize the deadline-critical tasks over the “best effort” tasks, and further by massively overprovisioning the system with far more CPU and memory than the tasks could ever require.
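
If you want a concrete picture of that strict prioritization, here’s a minimal sketch, assuming a Linux host and the POSIX scheduling API (the post doesn’t name our actual platform, so treat this as illustrative rather than our real setup):

    import os

    # Illustrative only (assumes Linux): put a deadline-critical process in a
    # real-time scheduling class. SCHED_FIFO tasks always preempt ordinary
    # SCHED_OTHER ("best effort") tasks; this needs root or CAP_SYS_NICE.
    def make_realtime(pid: int = 0, priority: int = 50) -> None:
        os.sched_setscheduler(pid, os.SCHED_FIFO, os.sched_param(priority))

    # Best-effort work stays in the default class at priority 0.
    def make_best_effort(pid: int = 0) -> None:
        os.sched_setscheduler(pid, os.SCHED_OTHER, os.sched_param(0))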

The bug (as you may have guessed) is that the deadline was sometimes getting missed, triggering the watchdog. What makes the bug interesting is that it was caused by us giving the system too much excess CPU and memory!

The deadline overruns happen when the computer runs best-effort tasks. It only runs those tasks a small fraction of the time, but no matter: they shouldn’t interfere with deadline completion at all (and having our software fail its deadlines is unacceptable, of course). The real-time tasks’ peak usage is only about a quarter of the computer’s CPU capacity, and the system gives them that always: 100% of the time they want CPU, they get CPU. They are never delayed, and never have to wait on a resource held by a best-effort task. When they miss their deadlines, it’s always because they’ve gotten jobs that take the usual amount of work, and had the usual amount of time to do them, yet the jobs somehow just take longer to finish. There’s no resource contention, nor are the caches an issue.

Yet when the best-effort tasks execute, the calculations done by the real-time tasks run slower on the same CPU for the same job with the same cache hit rate and with no memory or I/O contention. What’s going on? After chasing down some false leads, I discovered that they’re going slower because the CPUs are running at a lower frequency. It turns out the CPUs’ clocks have been stepped down because they’re getting too hot. (Yes, the surrounding air is much warmer than it “should” be, but we don’t have a choice about that.)
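
If you’re wondering how you’d spot this, here’s a minimal sketch, assuming a Linux box that exposes cpufreq through sysfs (again, an assumption on my part): compare each core’s current clock against its advertised maximum.

    import os
    from pathlib import Path

    # Illustrative Linux-only check: a core whose current clock sits well
    # below its maximum while under load is likely being thermally throttled.
    def khz(cpu: int, name: str) -> int:
        return int(Path(f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/{name}")
                   .read_text())

    for cpu in range(os.cpu_count()):
        cur = khz(cpu, "scaling_cur_freq")
        top = khz(cpu, "cpuinfo_max_freq")
        if cur < top:
            print(f"cpu{cpu}: {cur} kHz of {top} kHz -- throttled?")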

The computer is hot because all the CPUs are going at full blast, and all the CPUs are busy because the computer has enough memory to run every best-effort task at once. Half a minute later, the tasks are done, the computer cools off, the CPU frequency gets stepped back up, and the deadline overruns cease. An hour later, the hourly background tasks all fire again, the fans spin up to full, but they’re not enough, the CPU frequency steps down, and the watchdog again alerts about missed deadlines.

Okay, maybe we should change the background tasks so they execute in a staggered fashion. But before thinking about how I might do that, I tried disabling ⅔ of the computer’s CPUs instead. The hourly processing now takes 200 seconds instead of 30, but that’s still far below the 3600 seconds it has available, and that’s what matters. Now the computer stays nice and cool, so the CPUs stay nice and quick, and the deadlines all get met. We were using too many CPUs to get the calculations we needed to get done fast, done fast. Who’d’a thunk?
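
For reference, taking cores out of service is a one-liner per core on Linux, assuming CPU hotplug support (another assumption; cpu0 typically can’t be offlined, and this needs root):

    import os
    from pathlib import Path

    # Illustrative: park a core by writing "0" to its sysfs "online" file
    # ("1" brings it back).
    def set_cpu_online(cpu: int, online: bool) -> None:
        path = Path(f"/sys/devices/system/cpu/cpu{cpu}/online")
        path.write_text("1" if online else "0")

    # e.g. keep the first third of the cores and disable the other two thirds:
    total = os.cpu_count()
    for cpu in range(total // 3, total):
        set_cpu_online(cpu, False)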

