Socrates calls on his *dimótis* (δημότης) Crito and some others to serve as character witnesses, and this term is a problem for translators. No-one translates it right. They either use a circumlocution like “man of the same district as myself”, pick something awkward like “fellow-burgher”, or invent the word “demesman”, creating an English term just to translate *dimótis* into. Well, respectable English, that is. Because English already has a word for *dimótis*. The problem is no-one in the rarefied high halls of the ivory tower would ever use it, because it’s associated with stigmatized dialects and has thereby become stigmatized itself.

Crito was Socrates’s *homie*.

The `random_shuffle` algorithm has a landmine associated with it. I was using it with a `vector`, which I’ll refer to as `v`. The call which shuffles `v`’s contents is `random_shuffle(v.begin(), v.end(), rfunc)`, where `rfunc` is a function object which, in the case of `int`-indexable `vector`s, accepts a single `int` `n` and returns a random `int` in the range [0, n). What’s the obvious way to get a random `int` in some range? `uniform_int_distribution`, of course. And `uniform_int_distribution(0, n)` generates a random `int` in the range [0, n], not [0, n).
Besides the need to use `uniform_int_distribution(0, n-1)` being counter-intuitive, some other attributes make this bug frustrating. While `random_shuffle` requires `rfunc`’s return value to be in [0, n), it doesn’t check that it is, even when debugging is enabled — instead, it’s our old friend undefined behavior, no compiler diagnostic provided. `vector` bears some blame too, as it’s happy to silently munge data pointed to by an out-of-range iterator even with debugging enabled, optimization off, and warnings enabled — and again, undefined behavior, no compiler diagnostic provided. If 0 is a valid value of a vector element (naturally, for me it was), then the effect of the bug when `v.size()` is less than `v.capacity()` is to get a shuffled `vector` in which one of its elements has been replaced by 0, and the downstream effects of that can easily be too subtle to notice (naturally, for me they were). So, you only get crashes when `v.size() == v.capacity()` and something in `v.data` corrupts and gets corrupted by whatever’s just past it in memory — which is a freakin’ rare occurrence!

How do you debug this kind of thing? Since it’s an intermittent bug which any given run of the program is highly unlikely to trigger, turn on core dumps so that when it eventually does occur, you have the dump of the program state available to debug right there. Turn on debugging so you can get the most out of your corefiles. Stick in code which lets you track what you need to track. Here, this shim function is instructive: `int f() { int rand_val = uid(rand_eng); return rand_val; }`. You need to remove `-O` to prevent calls to `f()` from being replaced by calls to `uid(rand_eng)` and use `-O0` to prevent the transient variable `rand_val` from disappearing. (`volatile` probably would work also.)

Don’t forget that this is *terrible* API design, though. We have a library which has a generator of random `int`s and a consumer of random `int`s, and their conventions are different enough that the obvious way to combine them is buggy, yet similar enough that they’ll never trigger compiler warnings (much less errors). That’s not a good API choice, that’s a trap for API users! It’s good to have obvious ways to perform the operations API users will want to perform *provided they work correctly*. Since API design is hard, often you can’t do that. In those cases where you can’t make an obvious use that works, don’t introduce the obvious use in the first place — it’s far worse than having no obvious use at all, because if you make an obvious way to perform an operation, people *will* perform the operation that way, simply because it’s the obvious way. It should *never* be wrong. That there are so many ways to perform unsafe operations that are not obvious, not caught by C++ compilers, and not caught at runtime is the thing I dislike most about this programming language. Today I learned one more way to do it, and I should be well beyond that point.

**Combinatorics:** There are _{12}C_{1} ways to have the ace and one other spade, without regard to order. There are _{13}C_{2} ways to deal two spades. So the probability is _{12}C_{1}/_{13}C_{2} = 12/78 = 2/13.

**Trickery:** The chance the first card *isn’t* the ace is 12/13. The chance that the second card, dealt from the remaining 12 cards, also isn’t the ace is 11/12. So the chance that neither is the ace is (12/13)⋅(11/12) = 11/13. Thus the chance that one of the first two cards *is* the ace is 1 − 11/13 = 2/13.

**Conditional probability:** The chance that the first card is the ace is 1/13 and the chance it isn’t is 12/13. If we represent the event of the first card being the ace as **1** and the event the second card is the ace as **2**, then that is *P*(**1**) = 1/13 and *P*(~**1**) = 12/13. Now, *P*(**2**|~**1**) = 1/12. ∴ *P*(**1** or **2**) = *P*(**1**) + *P*(**2**|~**1**)⋅*P*(~**1**) = 1/13 + (1/12)⋅(12/13) = 1/13 + 1/13 = 2/13.

**Unconditional probability:** *P*(**1** or **2**) = *P*(**1**) + *P*(**2**) − *P*(**1** and **2**) = 1/13 + 1/13 − 0 = 2/13.

They all generalize into dealing *n* cards out of *m*, but the one using unconditional probability not only is the simplest, it generalizes the most cleanly. But most of us found it the least satisfying solution. Maybe because it was *too* simple?

This little enigma lives in the `Logic` module of the standard library of the Lean proof verifier:
```
theorem iff_not_self : ¬(a ↔ ¬a)
  | H => let f h := H.1 h h
         f (H.2 f)
```

OK, the fact being proved isn’t enigmatic — it’s stating that a proposition’s truth value is never the same as its own negation’s — but the proof thereof sure is. If that looks to you like some incomprehensible formal logic circling to eat its own tail, that’s because it kinda is.

However, it’s written *extremely* densely, which inhibits comprehension — which is sad, because the way it works is very interesting. It’s quite the rabbit hole, though, so let’s take our trusty spade and start digging. The very first thing we encounter is that Lean implicitly converts the `¬(a ↔ ¬a)` into `(a ↔ ¬a) → False`. This is because Lean uses dependent type theory to represent logical statements and their proofs, so we need to discuss how these kinds of theories are used with formal logic.

Lean isn’t alone in doing this. Type theories are a common way to implement formal logic in proof assistants/verifiers, by having the *types* represent mathematical statements and *instances* of a type represent proofs of that statement. An advantage of this technique is that a proof of `P∧Q` is trivial to generate from proofs of `P` and `Q` by modelling it with a type constructor as a product type, and similarly, a proof of `P∨Q` modeled as a sum type is trivial to construct from a proof of either `P` or `Q`. Also, everyone who’s programmed has some intuition regarding type theory even if they’ve never studied it.
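For instance, conjunction really is introduced by pairing and disjunction by injection. A quick sketch in Lean (the `example`s and the hypothesis names `hp`, `hq` are mine):

```
-- A proof of P ∧ Q is literally a pair of proofs; a proof of P ∨ Q is an
-- injection of a proof of one side into the sum.
example {P Q : Prop} (hp : P) (hq : Q) : P ∧ Q := ⟨hp, hq⟩
example {P Q : Prop} (hp : P) : P ∨ Q := Or.inl hp
```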

There are disadvantages as well, though. It’s very easy to get confused between the various abstract ensembles of types, the specific types you’re working with, and the instances of these types you’re trying to construct. Problems spanning across these levels usually have solutions that are very short (slightly tweaking a statement to trigger pattern matching, or finding just the right arguments to a type constructor) but difficult to find. And while the proof assistants can often provide hinting within any one level, they generally can’t provide any guidance in spanning across them.

The other big disadvantage is they don’t handle negation as well as they handle the rest of the logical constructs. This is because there’s no relationship between a proof of `P` and of `¬P` since at most one will exist, and having no relationship in logic means there’s nothing for a type theory to model. The typical way to handle this is to introduce *False* as either an instance of type *Bool* or as a type which by definition has no instances, and make it subject to special rules like `False→P` holds for every proposition `P` and there are no tautologies `Q` for which `Q→False` holds. Once *False* is defined, `¬P` gets defined to be `P→False`.

And that’s why we’re now looking at

```
theorem iff_not_self : (a ↔ ¬a) → False
  | H => let f h := H.1 h h
         f (H.2 f)
```

The next question is what is that `|` doing there? I can’t find this in Lean’s documentation anywhere, which seems to always show theorems being proven by assigning them a proof with `:=`. (Lean’s syntax is undocumented and I’m not the only one bothered by that.) Experimentation indicates this is a function-builder syntax, which we can write with typical lambda notation by

```
theorem iff_not_self : (a ↔ ¬a)→False :=
  λ H => let f h := H.1 h h
         f (H.2 f)
```

What’s this? Our theorem is a function? In this case, yes, because of how implication is handled. The way it works is that `P→Q` is modeled as a function mapping proofs of `P` into proofs of `Q`. Here, we have a function that maps proofs of `a ↔ ¬a` to proofs of *False*. Does this mean we get to see a proof of *False*? Only if we can provide the function with a proof of `a ↔ ¬a` — but those don’t exist. Constructing a function that maps `P` to *False* is constructing a *refutation* of `P`, which serves as a proof of `¬P`.

It might be clearer to add a type annotation to `H`:

```
theorem iff_not_self : (a ↔ ¬a)→False :=
  λ (H : a ↔ ¬a) =>
    let f h := H.1 h h
    f (H.2 f)
```

Or maybe it just adds clutter. That probably depends on how familiar you are with Lean (or closely related systems like Coq). For this article, I’ll keep them in the proofs, while retaining awareness that Lean can infer them automatically.

The next thing we dig up are the mysterious fields of `H`, `H.1` and `H.2`. It directly follows from `P↔Q` that `P→Q` and `Q→P`, so for convenience Lean makes those available as the fields `1` and `2` of “`Iff` types”. We can give those names, and include the unnecessary but clarifying type annotations:

```
theorem iff_not_self : (a ↔ ¬a)→False :=
  λ (H : a ↔ ¬a) =>
    have ana : a → ¬a := H.1
    have naa : ¬a → a := H.2
    let f h := ana h h
    f (naa f)
```

Up until now, our digging has been tedious de-sugaring of Lean syntax and annotating of directly inferable types. But now we hit paydirt and actually get to the fun bit: doing logic! First, it’ll be helpful to convert the `¬a` into `a→False` and give a name to `naa f`:

```
theorem iff_not_self : (a ↔ ¬a)→False :=
  λ (H : a ↔ ¬a) =>
    have ana : a → (a→False) := H.1
    have naa : (a→False) → a := H.2
    let f h := ana h h
    let naaf := naa f
    f naaf
```

This reveals what’s going on with that `f` function. `ana` is a function which takes two proofs of `a` and returns a proof of *False*. So what `f` does is take a proof of `a` (called `h`), pass it to `ana` twice, and return the resulting proof of *False*, giving `f` the type `a→False`. We can annotate that:

```
theorem iff_not_self : (a ↔ ¬a)→False :=
  λ (H : a ↔ ¬a) =>
    have ana : a → (a→False) := H.1
    have naa : (a→False) → a := H.2
    have f : a→False := λ h => ana h h
    let naaf := naa f
    f naaf
```

What does `naa` do? It takes a function that refutes `a` and returns a proof of `a`. And we just saw that `f` is a function which refutes `a`! So `naaf := (naa f)` is a proof of `a`.

Finally, since `f` is a function that maps proofs of `a` to *False*, `f naaf` is a proof of *False*. Wait, earlier I said that there were no proofs of *False*, yet here one is. The thing is, the proof was constructed from a proof of `a ↔ ¬a` (named `H`), so what we’ve really proven is that `(a ↔ ¬a)→False`, i.e., `¬(a ↔ ¬a)`, and QED.

So, here we see an extremely elegant proof that uses gemination (`ana h h`) to generate a refutation (`f : a→False`) that establishes an attestation (`naaf := naa f`) that generates a contradiction (`f naaf`). And all that elegance is hidden away because it’s written as `¬(a ↔ ¬a) | H => let f h := H.1 h h; f (H.2 f)` from a misplaced value on extreme brevity.
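For comparison, the same argument can be re-spelled in Lean’s tactic language, one named step at a time (my sketch; the name `iff_not_self'` is mine, chosen to avoid clashing with the library’s theorem):

```
theorem iff_not_self' {a : Prop} : ¬(a ↔ ¬a) := by
  intro H                                  -- H : a ↔ ¬a; the goal is now False
  have f : a → False := fun h => H.1 h h   -- gemination: a would refute itself
  exact f (H.2 f)                          -- f refutes a, so H.2 f proves a
```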

`swf` is dead, a closed-source project shut down by its owner (Adobe) a few years ago. Kazerad, best known as the artist-author of the web comic Prequel Adventure, wrote a touching eulogy for it on Twitter here.

[Read it “unrolled” (all 15 parts together) here.]

What’s the limit as *n* approaches infinity of *nx*⋅(^{nx}√*x* – 1)?

The right insight makes it work out rather elegantly, but suppose your math muse is AWOL cavorting in Cancun today — is there a more systematic way to approach these kinds of problems? It turns out that the tools of Robinson’s hyperreal analysis are very good at attacking this stuff.

Since we’re looking at a limit as *n* approaches infinity, the first thing to do is to simply set *n* to infinity — or rather, to an arbitrary infinite hyperreal (all the positive infinite hyperreals are indistinguishable in a deep sense). The second step is to write everything in terms of infinitesimals, so we introduce the infinitesimal *m* = 1/*n* and our expression becomes *x*⋅((^{x}√*x*)^{m} – 1)/*m*.

Now we want to get everything expressed as polynomials in infinitesimals (here just *m*) of as low an order as lets things work out, so we expand the Maclaurin series of all the non-polynomial-in-*m* stuff. Here that’s only *f*(*m*) ≝ (^{x}√*x*)^{m}, which expands to *f*(0) + *f*′(0)⋅*m* + *O*(*m*²) = 1 + (ln ^{x}√*x*)⋅*m* + *O*(*m*²), because *f*′(*m*) = (ln ^{x}√*x*)⋅(^{x}√*x*)^{m}.

Plugging this back into *x*⋅((^{x}√*x*)^{m} – 1)/*m* gives *x*⋅((ln ^{x}√*x*)⋅*m* + *O*(*m*²))/*m* = *x*⋅((ln ^{x}√*x*) + *O*(*m*)) = *x*⋅(ln ^{x}√*x*) + *x*⋅*O*(*m*). Now, *x*⋅(ln ^{x}√*x*) = ln (^{x}√*x*)^{x} = ln *x*. Also, *x*⋅*O*(*m*) = *O*(*m*) since *x* is finite. So *nx*⋅(^{nx}√*x* – 1) simplifies into ln *x* + *O*(*m*).

Since *m* is infinitesimal, *O*(*m*) disappears when we transfer back into ordinary reals and make *n* a limit index variable once again, and we have lim_{n→∞} *nx*⋅(^{nx}√*x* – 1) = ln *x*. Now, this is certainly more work than substituting in *w* ≝ 1/*nx* and applying l’Hôpital’s rule, but the general technique works on all kinds of limit calculations where *a-ha!* moments become few and far between.

The game supports “achievements”, where whenever your gameplay meets the condition of some challenge (some very easy, some nightmarishly hard, most of medium difficulty), it records this in a log file. Of course, met achievements aren’t all that the log file contains, but it’s nice enough to start each “met achievement conditions” record with “`ACHIEVEMENT`” and a marginally-descriptive achievement name, followed by other relevant information. Unfortunately, you can’t get a list of your fulfilled achievements merely by running `grep '^ACHIEVEMENT ' hyperrogue.log` because the game logs when you fulfill an achievement’s conditions separately for every run, not just the first run to fulfill it. So, to extract the records of each achievement’s first fulfillment, we ignore any record with an achievement name we’ve seen before and print out the line otherwise (*i.e.*, if we haven’t seen it before), which is nigh-trivial in `awk` [link]:

```
/^ACHIEVEMENT / {
    if (!($2 in ach)) {
        ach[$2] = 1;
        print;
    }
}
```

At 86 characters, it’s the fifth example from me of “twitcode” (programs under 140 bytes). And being in `awk`, it’s the fifth language I’ve twitcoded as well.

“Where are you going from here?”

“I’ll be staying here in London for the next five days.”

“You’re lodging in London?”

“Yes.”

“Why are you coming here?”

“Primarily to visit my family.”

“You’re visiting family, who live in London?”

“Yeah.”

“And when are you leaving?”

“I fly back home on July 1^{st}.”

“You’re flying back to the US on the 1^{st} of July?”

“Yep.”

“And how are you getting to London?”

“I bought a ticket for a National Express bus.” [The “Tube” was shut down due to a labor strike.]

“You’re taking a National Express bus to London?”

“Yup.”

[Dropping the skeptical act] “Very good, welcome to the UK.”

Maybe the idea is that somebody fabricating their answers will feel a need to elaborate instead of simply confirming? Whatever. The odder bit was the trip back, where each security officer asked me a strange question.

“Where is your journey starting from?”

“Uh, what?”

“Where is your flight leaving?”

[Borrowing the tone of his countryman] “You want to know the airport my flight departs from?”

“Yes.”

“From this airport, London Heathrow.”

“Very good.”

And then, from ICE:

“What’s in London?”

“Uh, what?”

“You said you flew in from London, what’s there?”

“Well, there are a *lot* of things in London: Parliament, and a stretch of the River Thames, and several bridges over the river, and millions of people, one of whom is a first cousin, and his wife, and —” [at this point, the agent cuts me off, and I don’t even get a “very good”. I guess that’s a British thing.]