Atlantis Dispatch 012:
in which ATLANTIS contemplates analogies…
November 5th, 2021
…begin transmission…
So in recent space news, it turns out, Atlantis hit on something of a mystery. As you may have heard, reader, many observers think that, back in August, the Chinese government sent a nuclear-capable hypersonic rocket into orbit. When it lapped around the Earth and came back to hit its target, it missed by only twenty-four miles. The whole thing came as a surprise to U.S. military observers, since it indicated that China is further along in its hypersonic capabilities than many had previously thought. The hypersonic rocket ruffled some defense feathers, too, since it’s the kind of thing that travels five times faster than sound and can sneak past American ballistic missile defense systems, possibly rendering them obsolete. The Chinese said the test was not a military thing—they were just trying out some reusable rocket technology to help with all the space junk.
Now Atlantis doesn’t know about you, but we found ourselves in a bit of a spin about this launch. On the starboard side, we wondered if this event marks the beginning of a new cold war; after all, Russia is in on the hypersonic rocket gambit, too, and there’s a pretty clear precedent for this kind of military intimidation game. It’s no coincidence that the US beat the USSR to the moon in the midst of a nuclear standoff. Hawkish old Starboard thinks it’s probable that we’re dealing with a techno-military sally.
On the portside, the Cold War doesn’t seem quite like the right interpretation. It’s not like Sting is writing cautionary songs for Xi Jinping and President Biden. And there is a lot of excitement about improving space technology for its own sake. Besides, everyone seems both to care about and be terrible at not destroying the planet. So perhaps we’re really all in this together? Sprightly little Portside says it’s a techno-optimist eco-research endeavor.
Alas, Atlantis was divided, as usual…
The confusion prompted us to think about how other intelligent life forms might interpret these events. It started to occur to us that the question at hand is really a matter of getting the right analogy—is this thing like an event of our past? If so, which one, if any?
As we contemplated this historical-existential matter, and wondered once again how an intelligent mind might move through it, we hit the wall (which, as yet, no one has torn down, but Pyramus looks through a hole in it to try to see Thisbe, and no, we’re not talking Texas here). And there, at the wall, we started to wonder whether an artificial intelligence could do a better job of interpreting our present and our past. Now, at this point in our sailing, we hit upon a large island which formidably suggested to us that we were dealing with a larger question than simply analogy: it was a question about how we talk about historical events.
We dropped anchor and waded onto shore feeling not quite like Tom Hanks in Cast Away (Think: Wiiiiiilsonnnnn…), but only kind of. On the island we found a single palm tree, and beneath the palm tree sat a treasure chest, and inside the treasure chest was a golden codex on which was written an essay called “The End of Narrative,” by David Krakauer, which also happens to be printed in SFI’s Fall 2021 issue of Parallax.
We read the essay voraciously. In it, we learned that, for Krakauer, narrative approaches to interpreting empirical phenomena are outmoded.*** History is too multi-layered, too multi-faceted, to be captured scientifically by a hand-crafted linear form of reconstruction. Instead, we need better techniques to capture history’s dynamic patterns. As Krakauer writes,
Complex reality emerges through a kind of complex time, in which a multiplicity of causal factors at many scales lead to an endless series of events. One way to apprehend this complexity is through methods or frameworks that can deal with irreducible complexity, either with coarse-graining observations and understanding how much information is being lost, or by working within methods that eschew easy explanations in terms of patterns and schemes that provide a means of classifying varieties of historical sequence.
Atlantis thought this take made a great deal of sense, and might begin to do justice to all of those little tributaries, salt marshes, deltas, and underground rivers of time that we have run into on our travels. History seemed to us to work in layers, like the sediment of the Earth.
So what about the Chinese rocket launch? And those containers we use to talk about historical moments within the marsh of time? Again, right here in this core sample, were we situated in a new cold war or in a contemporary science Olympics? And either way, who was at the helm of this thing? Was it a political figurehead in her war room? Or a scientific visionary in her lab? Or perhaps even an AI ready to push the red button to launch? If there were AIs involved, this might be an entirely new type of thing, indeed.
Sailing on, we returned to the hope of finding an artificial intelligence that would help (and not harm) us, since no human brain seemed to know for sure what kind of phenomenon we were dealing with. At this point, we found ourselves caught in a whirlpool, so we radioed in to Melanie Mitchell, who told us a new kind of story.
It went something like this:
Here’s the thing about analogies: AIs are notoriously bad at them. Maybe the Chinese have crashed the sound barrier five times over with their hypersonic rocket, but no AI interpreting that event or any other has crashed the barrier of meaning.
The barrier of meaning!?! Wow, Atlantis thought. What kind of oracular pronouncement is this? Well, as Atlantis soon learned, Mitchell was harkening back to a pretty cool idea introduced by the late mathematician Gian-Carlo Rota. Back in 1986, Rota wondered whether AIs would ever be able to understand the meaning of the things they encounter, whether words or scenarios or historical moments. For Mitchell, Rota’s remark prompted her to think deeply about how well artificial intelligences can understand analogies, and so far, Mitchell thinks that the answer is…not that well.
As Mitchell explains in a recent interview published in Quanta, analogies are the kind of thing that thinking beings rely on to “make abstract connections between similar ideas, perceptions, and experiences.” Since current AIs don’t form those kinds of connections, they don’t make analogies, and in the process, they often mix up things that are similar-seeming but actually different (think: chihuahuas confused with blueberry muffins, or a stop sign registering as a speed limit sign of 100 mph). If AIs could make analogies, perhaps they wouldn’t run into these mix-ups. For Mitchell, who’s been working on the problem since she began graduate school, the challenge persists.
So Atlantis embarked through the boggy waters of history, feeling as if the ship had been punctured a little, and wondering how we might find a reliable way to think about the Chinese rocket. We still don’t know, so we are going to dig around in the layers of concepts. Is our impasse a failure of political thinking? Or a wound of aporia? Is it an unsolvable problem? Or one that just needs another perspective?
Now you see why it is like a whirlpool, reader. Now you see, round and round again. It’s almost like Gödel’s incompleteness theorem, or maybe it’s like a Turing machine, or Hilbert’s Hotel, or a Borges story, or maybe, rather, one by Stanislaw Lem, or perhaps the Four Seasons, or Joni Mitchell (that other fantastic Mitchell) —round and round and round we go in the circle game.
Join us next time, when Atlantis contemplates human intelligence.
___________________
***Take note, reader, Krakauer sees no rightful end to narrative in art, and Atlantis raises the rum to that exception. Yo ho ho!