Like animals, video game AI is stupidly intelligent

We tend to think of real and virtual spaces as being worlds apart, so why is it that I can’t stop seeing an octopus arm in 2008’s spectacular Dead Space ‘Drag Tentacle,’ the alien appendage of developmental hell? Beyond the surface xeno-weirdness, it’s what clever animation and this neural marvel have in common that has me interested. Because an octopus arm is infinitely flexible, it faces a unique challenge: how do you move an arm to a given set of x, y, z coordinates and a certain orientation when it has infinite degrees of freedom in which to do it? And how might the octopus arm tackle its virtual cousin’s task of grabbing the player, who could be anywhere in the room – free even to move as the animation first plays?

You simplify. Michael Davies, a former Dead Space developer and now a senior core engineer at Sledgehammer Games, took me through the likely digital solution. The drag tentacle is rigged with an animation skeleton – bones to twist and contort it so that animation and code can bend it into different shapes. A trigger box is placed across the full width of the level Isaac needs to be grabbed from, and a pre-canned animation is authored specifically to reach to its centre. Finally, to line the animation up with the player, inverse kinematic calculations are run on the last handful of tentacle bones, attaching the tentacle’s pincer bone to Isaac’s ankle bone while blending the animation so it still looks natural.
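None of this is the game’s actual code, but here’s a minimal sketch of the idea in Python, flattened to two dimensions and two bones: a canned animation that only ever aims for the corridor centre, with an IK correction towards the player’s ankle blended in as the grab approaches. The two_bone_ik solver, the bone lengths, the 80% blend ramp and the positions are all illustrative assumptions, not anything lifted from Dead Space itself.

```python
import math

def two_bone_ik(l1, l2, tx, ty):
    """Analytic two-bone IK in the plane: returns (root, elbow) joint angles
    so the tip of the second segment lands on (tx, ty), reach permitting."""
    d = math.hypot(tx, ty)
    d = max(abs(l1 - l2) + 1e-6, min(l1 + l2 - 1e-6, d))  # clamp to reachable range
    cos_elbow = (l1 * l1 + l2 * l2 - d * d) / (2 * l1 * l2)  # law of cosines
    elbow = math.pi - math.acos(cos_elbow)
    cos_root = (d * d + l1 * l1 - l2 * l2) / (2 * d * l1)
    root = math.atan2(ty, tx) - math.acos(cos_root)
    return root, elbow

def lerp(a, b, t):
    return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)

# Made-up numbers: the authored grab animation always aims at the corridor
# centre; the player's ankle happens to be a little off to one side.
authored_target = (2.5, 0.0)   # where the canned animation "thinks" the pincer goes
player_ankle    = (2.2, 0.7)   # where the pincer actually needs to end up

for frame in range(11):
    t = frame / 10                # normalised animation time, 0..1
    blend = min(1.0, t / 0.8)     # ramp the IK correction in over the first 80%
    target = lerp(authored_target, player_ankle, blend)
    root, elbow = two_bone_ik(1.5, 1.5, *target)
    print(f"t={t:.1f}  target=({target[0]:.2f}, {target[1]:.2f})  "
          f"root={math.degrees(root):6.1f} deg  elbow={math.degrees(elbow):6.1f} deg")
```

The real thing works on a full 3D skeleton and presumably blends bone rotations rather than a single target point, but the shape of the trick – author the motion once, correct it at the tip – is the same.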

The octopus, conversely, constrains each of its flexible arms’ infinite degrees of freedom to three: two (x and y) for the direction of the arm, and one (the speed) for the predictable unravelling of the arm. Unbelievably, to simplify fetching, the octopus turns an infinite limb into a human-like virtual joint by propagating neural activity concurrently from its ‘wrist’ (at the grasped object) and from the base of the arm, forming an ‘elbow’ where the two waves meet – i.e. exactly where it needs to be for the action.
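Purely to make that meeting-point trick concrete, here’s a toy back-of-the-envelope sketch, with the (entirely made-up) assumption that the two activation waves start at the same moment and travel at constant speeds along the arm.

```python
def elbow_position(arm_length, v_base, v_wrist):
    """Distance from the base at which two activation waves meet, if one
    starts at the base travelling at v_base and the other at the 'wrist'
    travelling at v_wrist, both setting off at the same time."""
    # Base wave covers x = v_base * t; wrist wave covers arm_length - x = v_wrist * t.
    # Solving for x gives the meeting point: the improvised 'elbow'.
    return arm_length * v_base / (v_base + v_wrist)

# Made-up numbers: a 40 cm arm, waves travelling at 10 cm/s and 15 cm/s.
print(f"'Elbow' forms {elbow_position(40, 10, 15):.1f} cm from the base")
```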

So what’s the ‘exciting’ parallel? The octopus arm is doing the natural equivalent of a pre-canned animation – outsourcing the collapse of degrees of freedom to its body so that it doesn’t have to rely on a central brain that couldn’t cope with the load. Similarly, the drag tentacle leans on an animation skeleton to collapse its degrees of freedom like a human arm, and on pre-canned animation à la octopus, only directly tracking the player and blending its animation at the last moment – outsourcing the work to the ‘body’ of the animation and the ‘behaviour’ of the scripting.

And it’s not just these appendage cousins. Encoding a virtual world and nature’s job of encoding and navigating the real one are both, fundamentally, exercises in simplification.

Nature having to deal with physics is such a drag.

The only Go champion ever to score a win against Google DeepMind’s ‘AlphaGo’ AI recently retired, declaring AI an entity that simply ‘can’t be defeated’. And yet, according to some researchers, even the most powerful neural networks possess at best the intelligence of a honeybee. How do you square these statements? I’d bet that if any one contingent of the population is most sceptical of the potential dangers of AI, it’s people who play video games. We’re hobbyist AI crushers. No article on how humanity was only placed on this Earth to create God’s true image in AI would ever convince us otherwise. After all, how can gamers be expected to shake in the presence of these neural network nitwits when we’ve been veritably cosseted by the virtual equivalent of ants with guns?