jlm-blog

9-Jul-2019

Quine, Hofstadter, Gödel again

Filed under: math, philosophy — jlm @ 20:25

During some recent traveling, I re-read some sections of Douglas Hofstadter’s Gödel, Escher, Bach. Of particular interest was a step in the proof of Kurt Gödel’s first incompleteness theorem which involved replacing a variable of a formula with the formula itself. Hofstadter calls this peculiar operation “quining”, after the logician W. V. Quine, who wrote extensively about self-reference. As on my previous reads, I noticed that the operation didn’t specify a variable for substitution the way substitutions generally do, but instead substituted for all free variables at once, which is something I haven’t encountered anywhere else. This oddity wasn’t even mentioned, much less justified. Unlike after my previous reads, this time I decided to figure out why it was done that way.
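To make that operation concrete, here is a toy Python sketch of quining as described: substitute a copy of the whole formula for every free variable at once. (Quotation stands in for real Gödel numbering here, and the formula strings are my own invention, not Hofstadter’s.)

    import re

    def quine(formula, free_vars):
        # Substitute a quoted copy of the WHOLE formula for every free
        # variable at once -- the all-variables substitution described
        # above, rather than substitution at one named variable.
        quoted = '"' + formula + '"'
        pattern = r'\b(' + '|'.join(map(re.escape, sorted(free_vars))) + r')\b'
        return re.sub(pattern, lambda m: quoted, formula)

    print(quine('x is unprovable', {'x'}))
    # prints: "x is unprovable" is unprovable

The output has the familiar self-referential shape Quine studied: a sentence that talks about its own quotation.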

Now, the more interesting parts of Gödel’s proof involve what we now call Gödel coding, used to demonstrate that broad classes of logical systems can represent quining their own formulas and verifying their own proofs, the latter of which was very surprising at the time. But those parts turn out to be unnecessary for dealing with this small facet of the proof, so let’s just deal with its final proposition, which is comparatively simple and straightforward: given that a logical system supports the operations of quining and proof verification (plus logical conjunction, negation, and an existential quantifier), that logical system is incomplete (or inconsistent).


15-Jun-2016

Super Breakout Guy

Filed under: philosophy — jlm @ 17:27

Fiction (especially comic books and movies) often includes characters with “superpowers”, abilities intrinsic to them which are beyond those of ordinary people, powers which no humans in the real world have. There are millions of possible superpowers (and some settings, like the X-Men universe, go so far as to include hordes of undeveloped background characters with minor superpowers), but I’d like you to consider just one: Super Breakout Guy, with the power to break out of any confines! No prison holds him; he can always escape. When marooned on a deserted island, the rough and shark-infested waters separating it from the distant mainland don’t block him. Why, he can even escape from more abstract limits: He slips the surly bonds of gravity and takes flight. He sends 200-character tweets. Freedom, heck yeah! He has even broken out of this hypothetical situation and is right here right now!

Well, no. Of course not. Hypothesizing someone able to break out of hypothetical situations doesn’t make them exist. Hypothetically, someone moving from hypothetical to actual would be actual, but actually they remain hypothetical (and that’s no surprise). See this Usenet Oracle post for an especially amusing take on this.

And this is why the Ontological Argument is unpersuasive. It defines God as “a being than which no greater can be conceived”, and puts forward for consideration a “being than which no greater can be conceived, and which exists”. Conceptually, this being exists. Hypothetically, Super Breakout Guy is actual. But conceiving of something able to become greater than a mere conception has as little power as hypothesizing someone able to break out of hypotheticals: in order to break out of the realm of mere conceptions, such a being must first be admitted to be actual; otherwise it is only conceivable that it could be greater than mere conceptions are.

(What if Super Breakout Guy breaks out of the bounds of logic?)

18-Apr-2010

Economic musings

Filed under: econ, philosophy — jlm @ 20:05

There’s been noise and worry about deflation lately. The fear deflation sparks has always seemed strange to me, with the industry I know best — electronics — being both very energetic and highly deflationary. I’m familiar with the theory of how a deflationary spiral saps economic growth, with Japan’s economy being the prime example.

But looking closer at Japan’s economy: It was “stagnant”, by which economists mean its production was steady, and that steady production was actually quite high. In any field other than economics, meeting an objective well and steadily would be considered very good; but the norm in economics is growth, so steady first-world-level production is considered a failure. (Coming out of a deep recession, a steady production level doesn’t sound that bad after all.)

How much production do we want? Is this even the right question? I’m remembering Dijkstra’s complaint that programmers were proud of how large a program they had written, when instead they should have been ashamed of needing so much code to accomplish their goal. Is GDP as faulty a metric as LOC? Instead of being proud that we produce $47,000 while Japan only manages $33,000, should we instead be looking at why it takes us $47,000 to have a full and fulfilling life while Japan only needs $33,000? Are we really full and fulfilled with our lives, and getting a better life out of that $47,000 than Japan is out of its $33,000? Is increasing that number to $50,000 the best way to improve our lives? Or is there a better measurement that we should be looking at? Certainly more GDP helps immensely: it gives us more resources to spend on our goals. But I worry that by treating GDP itself as the goal, we foolishly sacrifice “life value” for GDP, instead of spending our GDP to improve our lives.

28-Jan-2008

Newcomb’s Paradox, part II

Filed under: philosophy — jlm @ 10:46

It’s been pointed out that I’m not really addressing what’s wrong with the two arguments that give you different “correct” choices for the Newcomb scenario. So here we go…

Expected Outcome
This argument assumes that the predictor has foreseen our choice [with high probability]. If, however, we assume that free-willed choices are by definition unpredictable, then such foresight is a logical impossibility.
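For concreteness, here’s a quick Python sketch of the expected-outcome arithmetic. The payoffs aren’t stated above, so I’m assuming the usual ones from Nozick’s formulation: $1,000,000 in the opaque box if one-boxing was predicted, and $1,000 always in the clear box.

    def expected_values(p):
        # p = probability the predictor foresaw our choice correctly
        one_box = p * 1_000_000                # opaque box full when it's right
        two_box = (1 - p) * 1_000_000 + 1_000  # full only when it's wrong, plus $1,000
        return one_box, two_box

    for p in (0.5, 0.9, 0.99):
        print(p, expected_values(p))
    # One-boxing has the higher expectation whenever p > 0.5005,
    # so this argument rests entirely on such foresight being possible.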

Dominant Outcome
This argument assumes that the choice we make is not correlated with the predictor’s choice, because that would involve reverse causation. However, if we assume that we’re not free-willed, or that freely willed choices can be predicted (whatever that would involve), then there’s a common cause: Our mind’s tendency to select choice X in the scenario causes both our selection of X when we’re in the scenario and the predictor’s anticipation of that selection. So, our choice wouldn’t be independent of the predictor’s, and dominance wouldn’t hold.

27-Jan-2008

Musings on Newcomb’s Paradox

Filed under: philosophy — jlm @ 11:45

So, I’ve been thinking about Newcomb’s Paradox lately.
I think part of the issue is that it conflates a couple of issues, and it might be useful to consider them separately. So, think on these paradoxes:

Determinism vs. Nondeterminism
Consider a superhuman predictor and a fair coin. The predictor predicts what the coin will show, then you flip it. The predictor is [nearly] always right.

Removal of free will
A computer has been programmed to maximize its expected score as a player in the Newcomb scenario, given that the predictor has a copy of the program to analyze, run in simulation or on another computer, etc. How will it play?

My take on Newcomb’s paradox? It reduces to the question of whether free will makes our choices inherently unpredictable, and the paradox is thorny because free will isn’t well-defined enough to provide a clear answer.
If we have no free will, it’s just the computer scenario. If we assume free-willed actions are inherently unpredictable, then the existence of a predictor contradicts that assumption, just as it contradicts the assumption that the outcome of a fair coin flip cannot be predicted.
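As a sketch of the computer scenario, here’s a toy Python version (my own construction, using the standard $1,000,000 / $1,000 payoffs) in which the predictor predicts by literally running a copy of the player’s program:

    def player():
        return "one-box"              # the strategy under test

    def predictor(program):
        return program()              # perfect prediction by simulation

    def payoff(choice, prediction):
        opaque = 1_000_000 if prediction == "one-box" else 0
        clear = 1_000
        return opaque if choice == "one-box" else opaque + clear

    print(payoff(player(), predictor(player)))   # 1000000

    # If player() returned "two-box", the simulation would predict that
    # too, and the payoff would fall to 1,000 -- so a program maximizing
    # its expected score one-boxes.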

19-Jul-2006

Latest thoughts

Filed under: philosophy — jlm @ 13:37

Why is it that things we find very easy to do in our minds (process language, identify people, etc.) are things we can’t get machines to do, but things we find very cumbersome (arithmetic, data manipulation, etc.) we can make machines to do extremely well?

This dilemma, that the stuff we do easily isn’t easy to automate, leads to all kinds of optimism about how human-like computers and other machines can be made. So what’s the root of it? I think it’s because we don’t know how we understand language, identify faces, etc. We do it, but we don’t know how! For all of recorded history, we’ve been refining mathematics and improving manufacturing processes, so we know in fine detail how to do arithmetic, how to weave cloth, etc., and we teach it to the next generation and write it down in books. Whereas we don’t teach our children how to understand language or how to identify their parents and friends; these are mysteries that our brains handle for us without being taught, processes whose internals we have no visibility into. We can make a step-by-step guide to multiplying numbers, and make a machine to follow it. But we don’t have step-by-step guides to language. The steps are inside our brains and we get only the output. We don’t understand how we understand language, and so find it impossible to make a machine that does it.
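To illustrate the contrast, the multiplication guide really can be written down and handed to a machine. Here’s a toy Python rendition of grade-school long multiplication (my own example):

    def long_multiply(a, b):
        # The explicit, teachable procedure: take each digit of b,
        # form a shifted partial product, and sum the partial products.
        total, shift = 0, 0
        while b > 0:
            digit = b % 10
            total += digit * a * 10 ** shift
            b //= 10
            shift += 1
        return total

    assert long_multiply(1234, 5678) == 1234 * 5678

There is no analogous dozen-line guide for recognizing your mother’s face.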


Update: For the ~0 of you who’ll read this, I’ve just learned that this is called Moravec’s paradox.

19-Nov-2005

Philosophic musings

Filed under: philosophy — jlm @ 21:28

So, what is free will, really? Well, there are enough answers to that question that another one can’t hurt. I consider free will to be the ability to consider the choices that one may make and to decide among them. That would make the sensation of free will the feeling that one is considering choices and deciding between them.

This really boils down to what “ability” is, though, and for all that it seems superficially simpler, it seems to be a fundamentally tougher concept. I had corn flakes for breakfast today, but I could have had Cheerios — or could I have? This happened this morning, and the past is fixed, so I couldn’t have had Cheerios, because I didn’t. What does it mean to say “I could have had Cheerios” when “I had Cheerios” is false?

It means I was considering what to eat, and Cheerios was a choice under consideration. When we make choices, we model (consider) the result of the choices, and it’s to that model that “I could have …” refers.

The human mind is extremely complicated and poorly understood. Let’s consider a far simpler system: A chess program on a computer. Does it have free will? Well, it needs to make decisions: Does it move this pawn, sac that knight, develop that rook? It calculates the results of all these possible actions, as thoroughly as it is able, and decides upon a move.

It goes through a lot of effort, but the programmer who designed it could tell you exactly why it chose the move: It calculated this series of moves as being the best for both sides as near as it could figure, resulting in this board position, which it evaluated as superior to the board positions from the other possible moves, blah blah blah. The point is that, as a consequence of its programming, it couldn’t have done anything other than queen takes pawn.

Those of us outside the program see it that way, but the program has to consider all these other moves. It has to model their consequences. It does a ton of calculation, and eventually makes a decision. It “could” have sacrificed the knight, in the sense that the decision mechanism has to consider that possibility. It “couldn’t” have done so, in the sense that its decision mechanism has to reject it.
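A Python caricature of that decision mechanism (a toy of my own, not any real engine’s search) makes both senses visible: every move is considered and modeled, and every move but one is rejected:

    def choose_move(legal_moves, evaluate):
        considered = {m: evaluate(m) for m in legal_moves}  # model every option
        best = max(considered, key=considered.get)          # reject all but one
        return best, considered

    scores = {"move pawn": 0.1, "sac knight": -0.3, "queen takes pawn": 0.8}
    move, options = choose_move(scores.keys(), scores.get)
    print(move)   # queen takes pawn -- it "couldn't" have played otherwise,
                  # yet it "could" have: every option was on the table.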

I “could” have played hooky from work yesterday, in the sense that my decision mechanism considered the possibility and modeled the consequences. I “couldn’t” have played hooky, in the sense that my decision mechanism rejected it. That feeling of your decision mechanisms in action is the sense of free will.
