### Data from an ISC game against Nyx

Some screenshots of late positions from a game I played with Nyx today:

I wasn’t sure whether to play something which would definitely leave me a bingo lane on my next turn (the best I could see at the time was DICE D1) or something which readily allowed blocks (I had DICI(N)G in mind). Earlier, I had been looking at ISC’s incomplete but helpful description of Nyx’s algorithm. For guidance with the above position, I brought it up in-game.

Reading this didn’t make it clear to me how Nyx would respond to my plays. I wanted to play DICE, because that seemed to win with some draws, whereas DICING seemed to lose to optimal play. (I wasn’t factoring in disconnected 9s after a possible block parallel to DICING, and as usual, I think much of my reasoning was felt rather than consciously exhaustive.) However, I got excited about DICING upon realizing that it would teach me something about Nyx. I played it:

The screenshot just above shows Nyx’s rack after DICI(N)G. I might play UMBER or TUBER here, to limit bingos. Instead, Nyx took 12 seconds to play

### CACOETHES

According to Google’s dictionary, a cacoethes is an irresistible urge to do something inadvisable.

I had thought “irresistable” was the standard spelling; actually the standard spelling is “irresistible”, though TWL19 does allow the -ABLE variant. But RESISTIBLE is the only spelling it accepts of the shorter word: there, the dictionary committee restrained themselves.

### Sitting on Your Knees

I just realized that, for my current desk arrangement, sitting on the floor on my knees, with some blankets and pillows to cushion and elevate, actually works pretty well! By placing my laptop atop books, I can get a good eye level. I’m curious to see how this goes, and excited to experiment more aggressively with different postures in the future.

### Loops between Python Objects

Epistemic Status: Trying to figure things out, so this is mostly for my own sake. Don’t trust this post, and please correct me if you see a mistake.

Wanting to implement the data structure in this post, I wrote the following Python (3.5.3) code. What do you expect it to do?

```
class Node:
    def __init__(self, name, links=[]):
        # (A mutable default argument like links=[] is shared across
        # calls -- a classic Python pitfall, though harmless here.)
        self.name = name
        self.links = links

class Link:
    def __init__(self, kind, value, children):
        self.kind = kind
        self.value = value
        self.children = children
        for child in self.children:
            # Each child node gets a back-reference to this link,
            # creating a reference cycle between node and link.
            child.links.append(self)

n = Node("Grizzelda")
l = Link("pimpled", 3, [n])
n.name = "Shmizzelda"
print(l.children[0].name)
```

The answer is that it runs fine, and the `print` statement outputs “Shmizzelda”. The name changes correctly, and there is also no problematic infinite recursion of a node being in a link being in a node being in a… Why does this work?

I don’t totally understand, but I have a more sensible guess than I did before, thanks to a tutorial.

When you do `l.children`, Python (the interpreter) effectively performs an attribute lookup, roughly `getattr(l, "children")`, on the object that the name `l` refers to. The data is stored on the individual Link and Node objects themselves. And when you do `l = Link("pimpled", 3, [n])`, the data gets stored on the newly created Link object (and the Node object it references); `l` and `n` are just the names of the link and node, not the link and node themselves.

The names act like pointers (I wouldn’t be surprised if this is how they’re actually implemented under the hood). So there is no infinite recursion, and when the data that the “pointer” is “pointing” to gets updated, we receive the updated data upon “dereferencing” it.
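To make that concrete, here’s a minimal sketch (my own toy example, not part of the Node/Link structure) showing that names are references to objects, and that a reference cycle is created instantly, with nothing recursing:

```python
# Two names bound to the same list object: a mutation made through
# one name is visible through the other.
a = [1, 2, 3]
b = a
a.append(4)
print(b)  # [1, 2, 3, 4]

# A reference cycle: the list contains a reference to itself.
# Construction is instantaneous; no copying, no infinite regress.
x = []
x.append(x)
print(x[0] is x)  # True
```

This is the same mechanism that lets the node sit inside the link while the link sits inside the node.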

### Explanations

Sometimes, I would like to explain something to someone. This may happen before some task can be completed, or maybe just as a natural part of the conversation.

Of course, my understanding of the thing will vary. I may understand it very well (how to add numbers). Maybe I can formally manipulate the concept, but don’t have a fast intuition for it (manipulating complicated summations). Other times still, I’m just learning about it myself.

I recently was explaining diagonalization arguments to a friend. In particular, how they can be applied to prove many different statements: they can show that the reals and natural numbers have different cardinalities, that some propositions are independent of Peano arithmetic, and that some programs cannot halt. They can show still more things that I don’t understand yet.
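As a toy illustration (my own sketch, not from that conversation), the core diagonal trick can even be run as code: given any enumeration of infinite binary sequences, flipping the diagonal yields a sequence that differs from every enumerated row.

```python
def diagonal_flip(enumeration, n):
    """Bit n of the sequence built by flipping the diagonal of the
    enumeration, where enumeration(i, j) is bit j of sequence i."""
    return 1 - enumeration(n, n)

# A hypothetical enumeration, just for demonstration:
# sequence i has bit j equal to (i + j) % 2.
enum = lambda i, j: (i + j) % 2

# The diagonal-flip sequence disagrees with sequence n at position n,
# so it cannot equal any sequence in the enumeration.
print(all(diagonal_flip(enum, n) != enum(n, n) for n in range(1000)))  # True
```

The same skeleton, with different interpretations of "sequence" and "enumeration", underlies the cardinality, independence, and halting results mentioned above.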

While explaining this to my friend, I was still trying to work out some of the formalism in my own head. (I’m still not sure I can write out a complete formal proof of Gödel’s theorem.)

This was a pretty tough situation. I noticed I was concentrating pretty hard. There was a slight worry in the back of my mind that I was going too quickly for my friend, thereby defeating the purpose of explanation. But I glossed over that feeling, and maybe rightly — trying to explain it perfectly to my friend might have caused me to make a mistake. It was hard to get the math right and manage to think about what my friend was thinking. (My friend did seem to end up getting it, though it’s often hard to tell whether someone understands.)

Understanding and explanation can be very hard without being combined. I’ve already talked about difficulties with understanding math, but when the math isn’t an issue, explanation is still challenging. Many people, myself included, have failed to show kids how to add one-digit numbers, even after spending a lot of time on it.

CFAR taught me a useful way of thinking about this problem: try to model the person, and make sure you don’t get too fixed on one hypothesis. I’ve thought about this a lot, and extended the idea for myself.

If you’re trying to explain something to one person, model them, keeping in mind that they have a complex brain you don’t understand. Most of their state is hidden. You only get some observations, but they can be very useful, since you can take advantage of near-universal human traits and shared background.

With this framing, generating multiple hypotheses doesn’t feel like a lot of work (though it may be that I’ve also gotten better at the skill of generating hypotheses in the past few months). This frame makes it seem that of course you should have multiple hypotheses! You’re talking to someone who’s been observing things for years, using a powerful system designed by evolution over billions of years. It’s to be expected that a lot of your first impressions will be way off base. You should still use your first impressions; just be willing to revise them, relentlessly.

Another thing that seems helpful is acting, or as John Salvatier called it, first-person modeling. Embodying the person and pretending you are them, in real time, seems very powerful for getting things across.

How to do this while also explaining something complex, though? It feels possible, but currently it gets tiring. I’ll certainly keep practicing. Explaining is fun.

### Sum of first n cubes is square of sum of first n numbers

I drew this for a friend and figured I’d post it, though it’s nothing new. It’s a proof without words that $(1+\cdots+n)^2 = 1^3 + \cdots + n^3$, made using YouIDraw.
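For anyone who wants to double-check the picture numerically, the identity is easy to verify for small $n$ (a quick sanity check, not a proof):

```python
# Check (1 + ... + n)^2 == 1^3 + ... + n^3 for the first fifty n.
for n in range(1, 51):
    lhs = sum(range(1, n + 1)) ** 2
    rhs = sum(k ** 3 for k in range(1, n + 1))
    assert lhs == rhs, n
print("holds for n = 1..50")
```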

### Avoid Seeing Your Own Face on Video Chats

I have searched multiple times for ways to avoid seeing my face on video chats, as it’s distracting. I never found options to control this on Google Hangouts or Skype. Recently, I found a simple solution, which works in Google Chrome on my MacBook Pro running macOS 10.12.6. Just zoom out by holding down the command key and pressing the minus key (“-”). Your conversation partner’s video remains the same size, while the video of you gets really small. With this solution, I can see my own video well enough to know whether they can see me, but not so well as to be distracted.

### Computation isn’t mere (Draft)

Epistemic Status: Highly uncertain, mostly impressions from reading about AI safety, without much rigorous backing.

Mathematicians often refer to a problem as now being “merely computational”. My thinking style has been bred mostly by mathematicians, and not programmers, so what follows is a wild guess. I think programmers probably find this statement strange. Mathematicians tend to be more concerned with principle, programmers with practice.

I think complexity theorists might also find it weird when a mathematician calls something “merely computational”. A Fervent Defense of Frequentist Statistics contains a helpful intuition pump:

> My high-level argument regarding Dutch books is that I would much rather spend my time trying to correspond with reality than trying to be internally consistent. More concretely, the Dutch-book argument says that if for every bet you force me to take one side or the other, then unless I’m Bayesian there’s a collection of bets that will cause me to lose money for sure. I don’t find this very compelling. This seems analogous to the situation where there’s some quant at Jane Street, and they’re about to run code that will make thousands of dollars trading stocks, and someone comes up to them and says “Wait! You should add checks to your code to make sure that no subset of your trades will lose you money!” This just doesn’t seem worth the quant’s time; it will slow down the code substantially, and instead the quant should be writing the next program to make thousands more dollars. This is basically what Dutch-booking arguments seem like to me.

I’ve noticed this come up in the debates about how we should build safe Artificial Intelligence systems. My sense is that this intuition is the source of tension between views of people who are approaching this more mathematically, e.g. researchers at MIRI working on the Agent Foundations Agenda, and people working on machine learning at places like DeepMind and OpenAI. It might also relate to the disagreement between Hanson and Yudkowsky, though I’m even less sure of this.

### New Approaches (Draft)

There’s something satisfying about learning a new way to approach things. Knowing you are different than you were before because you have this approach is exciting. New thinking tools can be very powerful as well, but they are not universally so. Being able to exercise judgment and realize when a tool is bad is important (said judgment is of course also a tool).

Not everything is about learning a new way to think. Every day we walk around, go to the bathroom, say “hi”, and plenty of other routine things. These can be approached in a new way, and sometimes that’s useful, but often, we’re just reusing a familiar pattern that works well.

And when you are actually doing something new, there are different degrees of newness. Take meeting people. Meeting someone for the first time can be quite routine, especially if neither of you is excited about the other. Now imagine talking to someone in an unfamiliar country, whose language you are struggling to learn. You have to strain a lot more in this case.

There are plenty of applications for this principle. This post was inspired by thoughts about programming. You can solve a small problem, which uses syntax and concepts you already know well. This problem may be new, and require some effort, but it’s not as big a stretch as learning a library for web scraping when you’ve never done web programming before. And learning functional programming, when you’ve only done imperative and object-oriented programming before, is another leap up the newness ladder.

The more unfamiliar a new framework is, the bigger your time investment, and, it seems, the bigger your chance of failure. The gap you need to cross is wider. Talking to people who know the subject well can help, but even good teachers and references succumb to the illusion of transparency.

For me at least, when trying to learn a thing, it’s easy to get my head in the clouds, and be very confused in a subverbal way. One thing that might help here is trying to ground the new topic concretely. Doing some wordless musing isn’t always bad, but it probably shouldn’t take too much learning time — let’s say about 10%.

### Rehearse the Improvement

Epistemic status: Playing around with stuff on a sample size of one.

Related: Attention control, Update from the Suckerpunch

Noticing mistakes is an appealing habit to me. If I can tell that I’ve just done something I dislike, I have more power to not do such things in the future. This by itself isn’t enough, though. You have to also have an idea of what a better choice would’ve been, and how you could make that choice.

For example, when playing piano, you may play something that doesn’t sound good. First of all, it might not be obvious what would sound better. So you sleuth around, play at a slow tempo, weight your hands differently during a passage, change the mental aesthetic you have around the piece while playing, listen to recordings.

It’s not enough just to know what sounds better. You have to know how to make that sound. Listening to good recordings might be best for finding awesome sounds, but it’s worst at teaching you how to make the sound yourself. And experimenting at a slow tempo can help you find pleasing sounds, but leave you unable to make them when playing at speed.

Even knowing what it feels like to do the better option is not enough. You have to actually choose that option consistently. I’ve struggled with this, often having bad habits that persist. In part, I’m not actually sure certain habits are bad. But what about examples where a bad habit stays there, despite my knowing it’s around?

• Not filling out an application for a promising opportunity
• Being late
• Putting off a whole assignment until the last day, when I remember similar behavior ending badly in the past
• Playing a certain passage of a piece with a technical mistake
• Continuing to work even though I’m tired

These mistakes may involve several pieces that I don’t understand. But I recently noticed a common behavior, which is definitely involved in some of those tasks, maybe all of them. When I err, I often think about the error afterward for a while, sometimes in the long-term as well as the short-term. And when I do, I tend to replay a mental video of myself making the mistake. By playing that video, I think I’m reinforcing the less desirable behavior. That is, here’s the sequence of TAPs (aka implementation intentions) I’ve been doing:

Make some type of mistake –> Notice that I made a mistake

Notice that I made a mistake –> Think of the better version of the action

Think of the better version of the action –> Think more about the mistake I made and worry that I made it

But I want to change the last step to:

Think of the better version of the action –> Mentally rehearse what the better version of the action would’ve looked like in the previous situation

What’s cool is, if you have access to the memory of the situation, you can sometimes change it into a pretty accurate visualization of what the actual behavior looks like. I’m a little worried about this part of the habit — what if it increases my rate of false memories? I’m not worried enough to not try it out, though.

Knowing about this now, I anticipate being late to things less often in the future. (I don’t have a great way of checking this later, since measurement seems annoying.) For lateness, my main problem has been knowing that continuing to do something (e.g. play piano at home, read at school) will make me late to something else, then doing it anyway. Usually, I’ve noticed the mistake by the time I’m getting ready to go. So here’s the sequence of TAPs for lateness:

Be uncomfortably likely to be late for something –> Notice that

Notice the uncomfortable likelihood –> Think of an early enough event that could trigger me to get ready to go in the future

Think of this event –> Imagine having responded correctly to it in the past, and maybe imagine responding correctly in some imaginary situations too

I don’t think this will completely solve my lateness, but I’m excited.
