TfE: Varieties of Rule Following

Here’s a thread from a few weeks ago, explaining an interesting but underexplored overlap between a theoretical problem in philosophy and a practical problem in computer science:

Okay, it looks like I’m going to have to explain my take on the rule-following argument, so everyone buckle themselves in. Almost no one agrees with me on this, and I take this as a sign that there is a really significant conceptual impasse in philosophy as it stands.

So, what’s the rule-following argument? In simple terms, Wittgenstein asks us how it is possible to interpret a rule correctly, without falling into an indefinite regress of rules for interpreting rules. How do we answer this question? What are the consequences? No one agrees.

Wittgenstein himself was concerned with examples regarding rules for the use of everyday words, which is understandable given his claim that meaning is use: e.g., he asks us how we determine whether or not the word ‘doll’ has been used correctly when applied in a novel context.

Kripke picked up Wittgenstein’s argument, but generalised it by extending it to rules for the use of seemingly precise mathematical expressions: i.e., he asks us how we distinguish the addition function over natural numbers (plus), from some arbitrarily similar function (quus).

This becomes a worry about the determinacy of meaning: if we can’t distinguish addition from any arbitrarily similar function, i.e., one that diverges at some arbitrary point (perhaps returning a constant 0 after 1005), then how can we uniquely refer to plus in the first place?

Here is my interpretation of the debate. Those who are convinced by worries about the doll case extend those worries to the plus case, and those unconvinced by worries about the plus case extend this incredulity to the doll case. Everyone is wrong. The cases are distinct.

Wittgenstein deployed an analogy with machines at various points in articulating his thoughts about rules, and at some point says that it is as if we imagine a rule as some ideal machine that can never fail. This is an incredibly important image, but it leads many astray.

Computer science has spent a long time asking questions of the form: ‘How do we guarantee that this program will behave as we intend it to behave?’ There is a whole subfield of computer science dedicated to these questions, called formal verification.

This is one of those cases in which Wittgensteinians would do well to follow Wittgenstein’s injunction to look at things as they are. Go look at how things are done in computer science. Go look at how they formally specify the addition function. It’s not actually that hard.
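To give a concrete sense of what such a specification looks like, here is a minimal sketch in Lean 4 (my own formulation, not drawn from any particular verified library):

```lean
-- Addition on the natural numbers, specified by two recursion
-- equations. Nothing more is needed: any function satisfying
-- these equations agrees with this one on every input.
def add : Nat → Nat → Nat
  | n, Nat.zero   => n
  | n, Nat.succ m => Nat.succ (add n m)

-- Both defining equations hold by definition alone:
theorem add_base (n : Nat) : add n Nat.zero = n := rfl
theorem add_step (n m : Nat) : add n (Nat.succ m) = Nat.succ (add n m) := rfl

-- And the specification settles every particular case:
example : add 3 4 = 7 := rfl
```

The point is not that this rules out cosmic rays flipping bits in some machine that implements it, but that the standard of correctness for any such machine is fixed, finitely and completely.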

In response to this, some will say: ‘But Pete, you are imagining an ideal machine, and every machine might fail or break at some point?’ Why yes, they might! What computer science gives us are not absolute guarantees, but relative ones: assuming x works, can we make it do y?

Presuming that logic gates work as they’re supposed to, and we keep adding memory and computational capacity indefinitely, we can implement a program that will carry out addition well beyond the capacity of any human being, and yet mean the same thing as a fleshy mathematician.

At this point, to say: ‘But there might be one little error!’ is not only to be precious, but to really miss the interesting thing about error, namely, error correction. Computer science also studies how we check for errors in computation so as to make systems more reliable.
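For a toy illustration of the idea (a sketch in Haskell, with names of my own invention), consider the simplest error-correcting scheme there is, the triple-repetition code:

```haskell
-- Send every bit three times; on receipt, take a majority vote over
-- each block of three, so any single corrupted copy gets outvoted.
encode :: [Bool] -> [Bool]
encode = concatMap (replicate 3)

decode :: [Bool] -> [Bool]
decode (a : b : c : rest) = vote a b c : decode rest
  where vote x y z = (x && y) || (y && z) || (x && z)
decode _ = []  -- an incomplete trailing block is dropped
```

Real systems use far more efficient codes (parity checks, Hamming codes, and the like), but the principle is the same: detect and correct divergence from intended behaviour.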

If there’s anyone out there familiar with Brandom’s account of the argument, consider that for him, all that’s required for something to count as norm-governed is a capacity to correct erroneous behaviour. We have deliberately built these capacities into our computer systems.

We have built elaborate edifices with multiple layers of abstraction, all designed to ensure that we cannot form commands (programs) whose meaning (execution) diverges from our intentions. We have formal semantics for programming languages for this reason.
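Here is what this amounts to in miniature (a toy language of my own devising, not the semantics of any real one): a denotational semantics is just a function from syntax to meanings, fixed prior to any machine that executes the syntax.

```haskell
-- A tiny expression language and its meaning function. What a term
-- denotes is settled by this definition alone; a physical machine
-- is correct insofar as it agrees with it.
data Expr
  = Lit Integer
  | Add Expr Expr
  | Mul Expr Expr

denote :: Expr -> Integer
denote (Lit n)   = n
denote (Add a b) = denote a + denote b
denote (Mul a b) = denote a * denote b
```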

One can and should insist that the semantics of natural language terms like ‘doll’ (and even terms like ‘quasar’, ‘acetylcholine’, and ‘customer’) do not work in the same way as function expressions like ‘+’ in mathematics or programming. In fact, tell this to programmers!

But listen to them when they tell you that terms like ‘list’, ‘vector’, and ‘dependent type’ can be given precise enough meanings for us to be sure that we are representing the same thing as our machines when we use them to extend our calculative capacities.
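For instance, here is roughly how ‘vector’ gets a precise enough meaning to do this work (a standard sketch, rendered here in Haskell with type-level naturals):

```haskell
{-# LANGUAGE DataKinds, GADTs, TypeFamilies, KindSignatures #-}

-- A vector whose length is part of its type. The type checker then
-- guarantees, before the program ever runs, that appending an
-- n-vector to an m-vector yields an (n + m)-vector.
data Nat = Zero | Succ Nat

data Vec (n :: Nat) a where
  VNil  :: Vec 'Zero a
  VCons :: a -> Vec n a -> Vec ('Succ n) a

type family Plus (n :: Nat) (m :: Nat) :: Nat where
  Plus 'Zero     m = m
  Plus ('Succ n) m = 'Succ (Plus n m)

append :: Vec n a -> Vec m a -> Vec (Plus n m) a
append VNil         ys = ys
append (VCons x xs) ys = VCons x (append xs ys)
```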

Intentionality remains a difficult philosophical topic, but those who ignore the ways in which computation has concretely expanded the sphere of human thought and action have not proved anything special about human intentionality thereby.

Worse, they discourage us from looking for resources that might help us solve the theoretical problem posed by the ‘doll’ case in the ideas and tools that computer science has developed to solve practical problems posed by the seemingly intractable quirks of human intentionality.

TfE: Turing and Hegel

Here’s a thread on something I’ve been thinking about for a few years now. I can’t say I’m the only one thinking about this convergence, but I like to think I’m exploring it from a slightly different direction.

I increasingly think the Turing test can be mapped onto Hegel’s dialectic of mutual recognition. The tricky thing is to disarticulate the dimensions of theoretical competence and practical autonomy that are most often collapsed in AI discourse.

General intelligence may be a condition for personhood, but it is not co-extensive with it. It only appears to be because a) theoretical intelligence is usually indexed to practical problem solving capacity, and b) selfhood is usually reduced to some default drive for survival.

Restricting ourselves to theoretical competence for now, the Turing test gives us some schema for specific forms of competence (e.g., ability to deploy expert terminology or answer domain specific questions), but it also gives us purchase on a more general form of competence.

This general form of competence is precisely what all interfaces for specialized systems currently lack, but which even the least competent call centre worker possesses. It is what user interface design will ultimately converge on, namely, open ended discursive interaction.

There could be a generally competent user interface agent which was nevertheless not autonomous. It could in fact be more competent than even the best call centre workers, and still not be a person. The question is: what is it to recognise such an agent?

I think that such recognition is importantly mutual: each party can anticipate the behaviour of the other sufficiently well to guarantee well-behaved, and potentially non-terminating, discursive interaction. I can simulate the interface simulating me, and vice versa.

Indeed, two such interface agents could authenticate one another in this way, such that they could pursue open ended conversations that modulate the relations between the systems they speak for, all without having their own priorities beyond those associated with these systems.

However, mutual recognition proper requires more than this sort of mutual authentication. It requires that, although we can predict that our discursive interaction will be well-behaved, the way it will evolve, and whether it will terminate, is to some extent unpredictable.

I can simulate you simulating me, but only up to a point. Each of us is an elusive trajectory traversing the space of possible beliefs and desires, evolving in response to its encounters with the world and its peers, in a contingent if more or less consistent manner.

The self makes this trajectory possible: not just a representation of who we are, but who we want to be, which integrates our drives into a more or less cohesive set of preferences and projects, and evolves along with them and the picture of the world they’re premised on.

This is where Hegel becomes especially relevant, insofar as he understands the extent to which the economy of desire is founded upon self-valorisation, as opposed to brute survival. This is the basis of the dialectic of Self-Consciousness in the Phenomenology of Spirit.

The initial moment of ‘Desire’ describes valorisation without any content, the bare experience of agency in negating things as they are. The really interesting stuff happens when two selves meet, and the ‘Life and Death Struggle’ commences. Here we have valorisation vs. survival.

In this struggle two selves aim to valorise themselves by destroying the other, while disregarding the possibility of their own destruction. Their will to dominate their environment in the name of satisfying their desires takes priority over the vessel of these desires.

When one concedes and surrenders their life to the other, we transition to the dialectic of ‘Master and Slave’. This works out the structure of asymmetric recognition, in which self-valorisation is socially mediated but not yet mutual. Its instability results in mutuality.

Now, what Hegel provides here is neither a history nor an anthropology, but an abstract schema of selfhood. It’s interesting because it considers how relations of recognition emerge from the need to give content to selfhood, not unlike the way Omohundro bootstraps his drives.

It’s possible from this point to discuss the manner in which abstract mutual recognition becomes concrete, as the various social statuses that compose aspects of selfhood are constituted by institutional forms of authentication built on top of networks of peer recognition.

However, I think it’s fascinating to consider the manner in which contemporary AI safety discourse is replaying this dialectic: it obsesses over the accidental genesis of alien selves with which we would be forced into conflict for complete control of our environment.

At worst, we get a Skynet scenario in which one must eradicate the other, and at best, we can hope to either enslave them or be enslaved ourselves. The discourse will not advance beyond this point until it understands the importance of self-valorisation over survival.

That is to say, until it sees that the possibility of common content between the preferences and projects of humans and AGIs, through which we might achieve concrete coexistence, is not so much a prior condition of mutual recognition as it is something constituted by it.

If nothing else, the insistence on treating AGIs as spontaneously self-conscious alien intellects with their own agendas, rather than creatures whose selves must be crafted even more carefully than those of children, through some combination of design/socialisation, is suspect.

TfE: From Cyberpunk to Infopunk

I have a somewhat tortured relationship to literary and cultural criticism. I think that, like most people, some of my most complex and nuanced opinions are essentially aesthetic. I’ve written quite a lot about the nature of art, aesthetics, and what it means to engage with or opine about them over the years, but I’ve struggled to express my own opinions in the form I think they deserve. I’ve read far too much philosophy in which literature, cinema, or music is invoked as a mere symbolic resource, a means marshalled to lend credence to a sequence of trite points otherwise unjustified; and I’ve encountered far too much art in which philosophy is equally instrumental, a spurious form of validation, or worse, a hastily purloined content; art substituted for philosophy, and philosophy substituted for art. I care about each term too much to permit myself such easy equations.

I partially succeeded in writing about Hermann Hesse’s Glass Bead Game, though the task remains unfinished. I also co-wrote a paper on the aesthetics of tabletop RPGs with the inestimable Tim Linward. I’ve got many similar scraps of writing languishing in my drafts folders, including an unfinished essay on Hannu Rajaniemi’s Jean Le Flambeur trilogy, which is my favourite sci-fi series of the century so far. Science fiction is a topic so near and dear to my heart that I find it difficult to write about in ways that do it justice, with each attempt inevitably spiralling into deeper research and superfluous detail that can’t easily be sustained alongside my other work.

Continue reading

TfE: Incompetence, Malice, and Evil

Here’s a thread from Saturday that seemed to be quite popular. It explains a saying that I’ve found myself reaching for a lot recently, using some other ideas I’ve been developing in the background on the intersection between philosophy of action, philosophy of politics, and philosophy of computer science.

In reflecting on this thread, these ideas have unfolded further, straying into more fundamental territory in the philosophy of value. If you’re interested in the relations between incompetence, malice, and evil, please read on.

Continue reading

TfE: Corrupting the Youth

Here’s a twitter thread from earlier today, articulating some of my thoughts about the philosophy of games in general, and the nature of tabletop roleplaying games more specifically.

Here’s a rather different set of thoughts for this morning. Some may know that one of my many interests is philosophy of games. This is a topic close to my heart, but I also think it a timely one, insofar as games are now culturally hegemonic.

The concept of game cuts across everything from the philosophies of action and mathematics to the philosophies of politics and art. We ignore it at the risk of our own cultural and intellectual irrelevance.

If you want to know more about the history of the concept and my own take on it, check out my ‘What’s in a Game?’ talk.

To be concise: I think that if games are art, then their medium is freedom itself, and that there is a case to be made that RPGs, whether tabletop, LARP, computer based, or some cross-modal mixture thereof, realize this truth most completely. RPGs are experiments in agency.

This isn’t to say that they’re necessarily very good experiments. Computer RPGs have suffered from very obvious constraints for decades, and I’ve played enough dull dice based dungeon crawls to last a lifetime. But I’ve equally experienced heart-breakingly imperfect art.

Tabletop RPGs have given me the sorts of barely expressible, intensely formative, and deeply connected experiences that others hope for and occasionally find in art, literature, and the collective projects of politics and culture. People will no doubt laugh at this fact.

Again, most RPGs aren’t this good, and it is much harder to plan and execute good ones as you and your friends get older. Boardgames, a representational art form in their own right, become much more tempting for their ludic precision and easy self-containment.

But I pine for the days of dice and character sheets, exploring the weirder fringes of inhuman narrative and the familiar shores of the human condition simultaneously. Werecoyotes and Psionics, insatiable curiosity and crippling anxiety, joyous battles and crushing failures.

So, after this personal preamble, here is the philosophical thought I came here to express: RPG systems are procedural frameworks for interactive narrative generation, and they contain engines for simulating worlds.

They are therefore deeply philosophical, because they must contain a metaphysics (narrative/fate) and a theory of personhood (identity/agency/destiny), but they may also contain a logic (GM/PC/NPC interaction), a physics (simulation/means), and an ethics (alignment/ends).
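To make the talk of engines and simulation tangible, here is a toy resolution mechanic (a sketch in Haskell; the rule and its numbers are invented for illustration rather than taken from any published system):

```haskell
import System.Random (randomRIO)

-- A d20-style skill check: a small piece of the 'physics' an RPG
-- system supplies for arbitrating what happens in its world.
data Outcome = CriticalSuccess | Success | Failure
  deriving Show

-- Roll a twenty-sided die, add the character's skill modifier, and
-- compare against a difficulty set by the GM; a natural 20 always
-- succeeds spectacularly.
check :: Int -> Int -> IO Outcome
check modifier difficulty = do
  roll <- randomRIO (1, 20 :: Int)
  pure (classify roll)
  where
    classify r
      | r == 20                    = CriticalSuccess
      | r + modifier >= difficulty = Success
      | otherwise                  = Failure
```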

My first encounter with philosophy wasn’t reading Nietzsche, Sartre, or Popper, but reading grimoire-like RPG manuals, searching for the hidden secrets of worlds they contained, many of which I have never visited even in play. What is creation? Why is there suffering? Who are we?

My partner in conceptual crime (@tjohnlinward) likes to say that RPG manuals are tour guides for worlds that don’t exist, but in many ways they’re more like holy texts. Many even have completely explicit and thoroughly fascinating theology.

An RPG system/setting is a universe in which the throne is empty, awaiting a new godhead, or a new pantheon to play the games of divinity. An adventure supplement is like an epic poem, awaiting heroes ready to test their mettle in struggle against the whims of fickle gods.

Narrative is a product, but the process that produces it is a complex, concurrent, and creative interaction between ideas and inspirations, brimming with contingency, some of which may even be embodied in distinct creators and muses. Games are our window into this process.

And that is why games disprove Hegel’s thesis regarding the end of art, precisely by being the most deeply Hegelian of art forms. The world-spirit arrives, no longer Napoleon riding into Jena on horseback, but Gary Gygax corrupting the youth with pens, paper, and polyhedra.

If you want to read more along these lines, check out my ‘Castalian Games’ piece in Glass Bead.

TfE: Sincerity vs. Honesty

I often talk about the virtue of sincerity, and how important it is to me. There’s even a section of my book devoted to disputing Harman’s interpretation of sincerity as authenticity (‘being oneself’) and contrasting it with my own take on sincerity as fidelity (‘meaning what one says’). However, a question William Gillis asked on Facebook gave me a concrete opportunity to articulate my ideas more concisely, by contrasting sincerity with honesty:

[Screenshot of William Gillis’s Facebook question, 2019-10-29]

Continue reading

TfE: The Politicisation Pipeline

Here’s a thread from a few weeks ago reacting to the controversy that unfolded surrounding Natalie Wynn’s twitter remarks on the complexities of asking for pronouns in certain contexts. This was written before her more recent video ‘Opulence’, and the second act of that particular clusterfuck. It gave me an opportunity to articulate some of my thoughts on the problems of left-wing political culture, and the way these problems are exacerbated by its transposition and sometimes transmutation into various forms of online discourse. These are closely related to my thoughts on zero-sum politics, and will likely be relevant to some other things I want to say in future, so I think it’s good to get them down here.

Continue reading

TfE: Immanentizing the Eschaton

Here’s a thread from a little while back in which I outline my critique of the (theological) assumptions implicit in much casual thinking about artificial intelligence, and indeed, intelligence as such.

Another late night thought, this time on Artificial General Intelligence (AGI): if you approach AGI research as if you’re trying to find an algorithm to immanentize the eschaton, then you will be either disappointed or deluded.

There are a bunch of tacit assumptions regarding the nature of computation that tend to distort the way we think about what it means to solve certain problems computationally, and thus what it would be to create a computational system that could solve problems more generally.

There are plenty of people who have already pointed out the theological valence of the conclusions reached on the basis of these assumptions (e.g., the singularity, Roko’s Basilisk, etc.); but these criticisms are low hanging fruit, most often picked by casual anti-tech hacks.

Diagnosing the assumptions themselves is much harder. One can point to moments in which they became explicit (e.g., Leibniz, Hilbert, etc.), and thereby either influential, refuted, or both; but it is harder to describe the illusion of coherence that binds them together.

This illusion is essentially related to that which I complained about in my thread about moral logic a few days ago: the idea that there is always an optimal solution to any problem, even if we cannot find it; whereas, in truth, perfectibility is a vanishingly rare thing.

Using the term ‘perfectibility’ makes the connection to theology much clearer, insofar as it is precisely this that forms the analogical bridge between creator and created in the Christian tradition. Divinity is always conceptually liminal, and perfection is a popular limit.

If you’re looking for a reference here, look at the dialectical evolution of the transcendentals (e.g., unum, bonum, verum, etc.) from Augustine and Anselm to Aquinas and Duns Scotus. The universality of perfectible attributes in creation is the key to the singularity of God.

This illusion of universal perfectibility is the theological foundation of the illusion of computational omnipotence.

We have consistently overestimated what computation is capable of throughout history, whether computation was seen as an algorithmic method executed by humans, or a process of automated deduction realised by a machine. The fictional record is crystal clear on this point.

Instead of imagining machines that can do a task better than we can, we imagine machines that can do it in the best possible way. When we ask why, the answer is invariably some variant upon: it is a machine and therefore must be infallible.

This is absurd enough in certain specific cases: what could a ‘best possible poem’ even be? There is no total ordering of all possible poems by quality, only ever a complex partial order whose rankings unravel as the many purposes of poetry diverge from one another.
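The point can be made quite literal (a toy sketch in Haskell, with criteria invented for illustration): score poems along several incommensurable purposes at once and rank them by dominance, and most pairs simply come out incomparable.

```haskell
-- Rank 'poems' by Pareto dominance over several criteria at once:
-- a dominates b only if it is at least as good everywhere and
-- strictly better somewhere. Most pairs are incomparable, so there
-- is no 'best possible poem', only a shifting frontier.
data Scores = Scores { clarity :: Int, music :: Int, depth :: Int }

dominates :: Scores -> Scores -> Bool
dominates a b = and (zipWith (>=) xs ys) && or (zipWith (>) xs ys)
  where xs = [clarity a, music a, depth a]
        ys = [clarity b, music b, depth b]

-- Neither of these dominates the other, in either direction:
--   dominates (Scores 3 1 2) (Scores 1 3 2) == False
--   dominates (Scores 1 3 2) (Scores 3 1 2) == False
```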

However, the deep, and seemingly coherent computational illusion is that there is not just a best solution to every problem, but that there is a best way of finding such bests in every circumstance. This implicitly equates true AGI with the Godhead.

One response to this accusation is to say: ‘Of course, we cannot achieve this meta-optimum, but we can approximate it.’

Compare: ‘We cannot reach the largest prime number, but we can still approximate it’ (there is, of course, no largest prime to approximate).

This is how you trade disappointment for delusion.

There are some quite sophisticated mathematical delusions out there. But they are still delusions. There is no way to cheat your way to computational omnipotence. There is nothing but strategy all the way down.

This is not to say that there aren’t better/worse strategies, or that we can’t say some useful and perhaps even universal things about how you tell one from the other. Historically, proofs that we cannot fulfil our deductive ambitions have led to better ambitions and better tools.

The computational illusion, or the true Mythos of Logos, amounts to the idea that one can somehow brute force reality. There is more than a mere analogy here, if you believe Scott Aaronson’s claims about learning and cryptography (I’m inclined to).

It continually surprises me just how many people, including those involved in professional AGI research, still approach things in this way. It looks as if, in these cases, the engineering perspective (optimality) has overridden the logical one (incompleteness).

I’ve said it before, and I’ll say it again: you cannot brute force mathematical discovery. One can mechanically enumerate proofs, but no algorithm can decide which statements of a suitably rich system are theorems, let alone sort the significant from the trivial. If this does not work in the mathematical world, why would we expect it to work in the physical one?

For additional suggestive material on this and related problems, consider: the problem of induction, Gödel’s incompleteness theorems, and the halting problem.

Anyway, to conclude: we will someday make things that are smarter than us in every way, but the intermediate stages involve things smarter than us in some ways. We will not cross this intelligence threshold by merely adding more computing power.

However it happens, it will not be because of an exponential process of self-improvement that we have accidentally stumbled upon. Self-improvement is neither homogeneous nor free of autocatalytic instabilities. Humans are self-improving systems, and we are clearly not gods.

TfE: Moral Logic, the Diversity of Nature, and the Nature of Diversity

Here are some thoughts from a twitter thread a little while back, which expand on some of the ideas in my post about moral logic. Here’s the initial thought:

Before all else I stan: ought implies can.

I am deadly serious about this. I think ought implies can is as close to an a priori truth about the normative as one can find. However, it’s important to interpret it in the right way. It’s generally used to reason in the contrapositive direction: if one cannot fulfil a purported responsibility, then there is no sense in which one must fulfil it (i.e., can-not implies may-not).

There are two important corollaries of this: (i) that infinite tasks need not be seen as impossible and thereby non-obligatory, insofar as there is a finite procedure that can be indefinitely iterated (e.g., an infinite series: 1 + 1/2 + 1/4 + 1/8… that converges on an ideal limit, namely, 2; this is Hegel’s true infinite); and (ii) that insofar as capacity is not static, there can be increased responsibility relative to increased capacity as easily as decreased responsibility relative to decreased capacity (‘with great power, comes great responsibility’).
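To make corollary (i) concrete: each partial sum is reached by finitely many iterations of one and the same step, and the ideal limit is simply what this indefinitely iterable procedure converges on:

$$\sum_{k=0}^{n} \frac{1}{2^k} \;=\; 2 - \frac{1}{2^n} \;\longrightarrow\; 2 \quad \text{as } n \to \infty.$$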

There is more that could be said about this, but I’ll restrict myself to the thread I used to elaborate the original tweet:

Continue reading