Proposing a Spatial Artificial Intelligence

Since studying architecture, I’ve been fascinated by the thought that “space” can serve as a creative medium, and that buildings and cities are merely the physical artifacts we construct to exploit it. Buildings aren’t space, but they can manifest spatial ideas and impart experiences that express those ideas.

Similarly, the so-called “fabrics” of cities can be quite distinct from one another. Anyone who has walked through the concrete canyons of downtown Manhattan could readily contrast that experience with the old Las Vegas strip or the suburbs of Atlanta.

(Downtown Manhattan Photo by Nikoloz Gachechiladze on Unsplash | Las Vegas Strip Photo by André Corboz on Wikimedia Commons)

Abstract conceptions of “space” as inspiration

And the question of what “space” actually is has also been examined by philosophers, mathematicians, and geographers for centuries. Is space equivalent to a void? Is it the distance between objects? Does it have structure? Many concepts have been proposed over the years, each with its own origins.

I’ll focus on two of them that I find particularly interesting: “smooth” and “striated” space, defined by Deleuze and Guattari in their work A Thousand Plateaus: Capitalism and Schizophrenia.

Is space equivalent to a void? Is it the distance between objects? Does it have structure?

These categories surface during their discussion of the nature of “the State” and “the War Machine,” before the chapter dedicated to a deeper conceptual dive into them. They elaborate using an analogy with two board games: Go and chess.

They write, “The ‘smooth’ space of Go, as against the ‘striated’ space of chess,” and point out that “chess is a game of the State.” In chess, pieces exhibit distinct, intrinsic behaviors and move in an almost formal way. In Go, by contrast, the pieces are anonymous; they derive their power from extrinsic factors: their relationships to each other, and the topology and connectivity of their neighbors. Deleuze and Guattari continue:

“Finally, the space is not at all the same: in chess, it is a question of arranging a closed space for oneself, thus of going from one point to another, of occupying the maximum number of squares with the minimum number of pieces. In Go, it is a question of arraying oneself in an open space, of holding space, of maintaining the possibility of springing up at any point: the movement is not from one point to another, but becomes perpetual, without aim or destination, without departure or arrival.” (Deleuze and Guattari 352-353)
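To make that contrast a bit more concrete, here’s a minimal Python sketch of my own (the function names and board representations are just illustrative, not from the book or any engine): a knight’s options are intrinsic, fixed offsets stamped onto the striated grid, while a Go stone’s liberties are extrinsic, discovered by walking the connectivity of its neighbors.

```python
# A rough illustration (my own, not from the text): chess movement as intrinsic,
# piece-carried rules versus Go strength as an extrinsic property of connectivity.

# Chess: a knight's legal moves are fixed offsets applied to the striated 8x8 grid.
KNIGHT_OFFSETS = [(1, 2), (2, 1), (2, -1), (1, -2),
                  (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knight_moves(square):
    """Destinations reachable from (file, rank), each coordinate 0..7."""
    f, r = square
    return [(f + df, r + dr) for df, dr in KNIGHT_OFFSETS
            if 0 <= f + df < 8 and 0 <= r + dr < 8]

# Go: a stone's group and its liberties depend entirely on neighboring points.
def group_liberties(board, point, size=19):
    """Count empty points adjacent to the group containing `point`.
    `board` maps (x, y) -> "B" or "W"; empty points are simply absent."""
    color = board[point]
    group, liberties, frontier = {point}, set(), [point]
    while frontier:
        x, y = frontier.pop()
        for nbr in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nbr[0] < size and 0 <= nbr[1] < size):
                continue
            if nbr not in board:
                liberties.add(nbr)        # an empty neighbor is a liberty
            elif board[nbr] == color and nbr not in group:
                group.add(nbr)            # same-colored neighbor joins the group
                frontier.append(nbr)
    return len(liberties)

print(knight_moves((0, 0)))                                   # [(1, 2), (2, 1)]
print(group_liberties({(3, 3): "B", (3, 4): "B"}, (3, 3)))    # 6
```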

AI to reason about game-space

To reiterate: “the space is not at all the same.” We usually think of games as collections of rules, but if we instead think of them as embodying different types of “space,” then the reasoning used to play them (the predicting, strategizing, positioning, deciding, and so on) should also reflect the qualities of those types of space.

The “smooth” space of Go, as against the “striated” space of chess…

(An animation of how a knight would traverse all the squares on a chessboard.)

And regardless of whether the players are humans or bots, this reasoning isn’t fungible between games. Assuming the human players are only experienced at their respective games, we wouldn’t expect a chess grandmaster to perform competitively against a 9-dan Go player at Go, and vice versa. Each understands the dynamics of their own craft in a distinct way.

AI for a particular game-space

Similarly, the AI systems that garnered the highest-profile AI-defeats-human headlines for chess and Go were also quite different. Deep Blue, which defeated Garry Kasparov in 1997, was an expert system built on brute-force search and handcrafted evaluation, while AlphaGo, which defeated Lee Sedol in 2016, relied on deep neural networks trained with modern deep learning techniques (alongside Monte Carlo tree search).

Even though today’s chess engines like Leela Chess Zero also use deep neural networks, we still wouldn’t expect a chess model to perform well at Go out of the box unless it were explicitly trained to do so (as general algorithms like AlphaZero and MuZero are, game by game). An AI trained on Terrace would be yet another story, as would Quoridor, and so on.

Game    Space      AI
Chess   striated   Deep Blue (expert system, 1997)
Go      smooth     AlphaGo (deep neural network, 2016)

So an AI can reason through a game’s mechanics and its game-space, and the two appear intrinsically linked. What if we applied this thinking to real-world scenarios and their corresponding spaces as well?

After all, spatial thinking is everywhere…

Besides board games, spatial thinking is all around us.

Fitting furniture into a room is an obvious everyday example. Each sofa consumes some of the available floor area, so we can only fit so many pieces.

Marketers and entrepreneurs talk about “entering a space” when they mean “sell to another customer segment,” or say “this is a crowded space” to mean “too many competitors.”

Information flows through a company via official communication channels but also through gossip networks, which often contain “the real story.”

And going back to cities, the spatial reasoning required to safely drive through suburban Atlanta wouldn’t necessarily apply to walking through the busy canyons of Manhattan.

Moreover, a dense urban environment of sidewalks, streets, and city blocks presents contrasting and overlapping spatial qualities, selectively exposed depending on whether you’re a car, pedestrian, cyclist, or pigeon. Automated traffic lights flash their signals to guide vehicles and people through the streets, but the same lights serve as mere resting places for pigeons.

And developing a city demands taking yet another perspective and solving its associated spatial puzzles (e.g., compare the god’s-eye view of SimCity to the first-person view of Grand Theft Auto).

Real-world spaces are complicated

Each of these scenarios is complicated. Spatial designers like architects, urban planners, industrial engineers – and yes, game designers – need to work through these dynamics, which always seem to involve multiple factors and actors that intermingle and compete.

Games might be the simplest category, but even the enormous-yet-finite game-space of the 361-point Go board takes some of the most advanced neural networks ever built to traverse.

There’s something else curious about these scenarios; they all involve managing three things, sketched in code further below:

  1. a kind of limited supply of space (like lots on a city block or parking spots in a parking lot),
  2. rules by which it is occupied or consumed (e.g., only one building per lot, a floor area ratio of 3.0, one car per spot), and
  3. rules by which it is traversed (e.g., motor vehicles move forward along a vehicle lane, pedestrians stay on the sidewalk or in the crosswalk).

(Map from Stamen Design showing a portion of downtown Manhattan, NYC.)
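To make those three ingredients concrete, here’s a minimal, hypothetical sketch in Python (the class, field, and graph names are mine, purely for illustration, not a real planning tool): lots as a limited supply of buildable area, an occupancy rule enforcing one building per lot under a floor area ratio cap, and traversal rules expressed as a graph of which travel modes may use which edges.

```python
# A toy model of the three ingredients above; all names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Lot:
    area: float                   # limited supply: square meters of land
    far_limit: float = 3.0        # occupancy rule: floor area ratio cap
    built_floor_area: float = 0.0
    occupied: bool = False

    def build(self, floor_area: float) -> bool:
        """Occupy the lot only if the rules allow: one building per lot, FAR respected."""
        if self.occupied or floor_area > self.area * self.far_limit:
            return False
        self.occupied, self.built_floor_area = True, floor_area
        return True

# Traversal rules as a graph: which modes of travel may use which directed edges.
street_graph = {
    ("corner_a", "corner_b"): {"car", "bike"},          # vehicle lane
    ("corner_b", "corner_a"): {"bike"},                 # one-way for cars
    ("corner_a", "corner_b_sidewalk"): {"pedestrian"},  # sidewalk / crosswalk
}

def allowed_moves(node, mode):
    """Destinations a given mode (car, bike, pedestrian) may reach from `node`."""
    return [dst for (src, dst), modes in street_graph.items()
            if src == node and mode in modes]

lot = Lot(area=500.0)
print(lot.build(2000.0))                  # False: exceeds the FAR cap (max 1500 m2)
print(lot.build(1200.0))                  # True: within the cap; the lot is now occupied
print(allowed_moves("corner_a", "car"))   # ['corner_b']
```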

And at least three perspectives become apparent: occupying space, traversing it, and designing it.

Is there something common to all spatial reasoning that might warrant its own niche category of AI? This is what I mean by spatial AI.

Spatial AI is the way

So spatial AI is artificial intelligence developed for spatial reasoning.

I believe spatial AI should encompass more than what it might first sound like: algorithms for faster 3D modeling, powering virtual reality, or enabling geospatial predictions. Such methods tend to treat space rather simplistically, as little more than 3D point data or latitude-longitude coordinates.

Spatial AI speaks to the reasoning employed while occupying, traversing, and designing in space.

In contrast, spatial AI speaks to the reasoning employed while occupying, traversing, and designing in space (whatever concept of space we adopt). It thus embraces these existing AI methods but still aspires to do more.

More Inspiration from Game Theory and Systems Thinking

Beyond abstract notions of space, we can take a bit of inspiration from game theory, the mathematics of decision-making amidst competing incentives, which applies not only to games but also to business, economics, biology, and more.

…And also systems theory, which provides frameworks to model the dynamics of complex systems.

(Systems flow diagrams from Thinking in Systems: A Primer by Donella Meadows.)
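To get a feel for what those diagrams look like computationally, here’s a tiny, hypothetical stock-and-flow sketch in Python: a single stock nudged each step by an inflow and an outflow, the basic building block Meadows assembles into larger systems. The scenario and numbers are made up purely for illustration.

```python
# A minimal stock-and-flow loop; the scenario (cars parked downtown) is invented.
def simulate(stock, inflow_rate, outflow_fraction, steps):
    """Step the stock forward: stock(t+1) = stock(t) + inflow - outflow."""
    history = [stock]
    for _ in range(steps):
        inflow = inflow_rate                # e.g., cars arriving per step
        outflow = outflow_fraction * stock  # cars leaving, proportional to the stock
        stock = stock + inflow - outflow
        history.append(round(stock, 1))
    return history

# The stock settles toward inflow_rate / outflow_fraction (here 100 / 0.25 = 400).
print(simulate(stock=50, inflow_rate=100, outflow_fraction=0.25, steps=10))
```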

While these fields are tailored for strategic planning, they aren’t easily applicable to the kinds of decisions spatial designers need to make.

A timely need for new AI methods…

We’re in the midst of a generative AI explosion. It’s an almost magical discovery, and the hype cycle is real. And we know new technologies can inadvertently cause significant ripple effects. To take a spatial example, Uber and Lyft – pitched as convenient apps for hailing on-demand rides or offering your services as a driver – unexpectedly altered driving patterns in major cities and created additional traffic congestion.

(Redrawn from Gartner Places Generative AI on the Peak of Inflated Expectations on the 2023 Hype Cycle for Emerging Technologies)

And there’s no shortage of discussion about generative AI’s potential ill effects, including safety, misinformation, disinformation, deepfakes, copyright infringement, excessive energy usage, water consumption, and the like. Beyond these issues, “agentic AI” may soon flood the already bot-riddled Internet with autonomous, personalized proxies acting on our behalf. Companies like Agility Robotics and Figure AI aim to replace human laborers with humanoid robots.

In what ways can spatial designers anticipate and respond to the effects of AI on the built environment?

These developments will no doubt cause a host of ripple effects through the physical, tangible world as well. So in what ways can spatial designers anticipate and respond? By the very nature of their roles, they are well positioned to impact the built environment, even entire communities and landscapes. For example, can we make a spatial AI to help pitch and design new third places, potentially breathing new social life into public space?

These new tools need enough sophistication to model the complexities of an AI-enabled future in the ways spatial designers think.

Yet many of these designers still work with point-and-click modeling and drafting software grafted from other professions. Without a proactive self-tooling strategy, we could get swept up in Big Tech’s agenda – or even be relegated to mere aestheticists – having been lured into believing AI image generators and text-to-whatever apps are the only way to adopt AI.

The potential challenges also come with opportunities.

Luckily, new hosted platforms and the underlying enabling technologies of emerging AI are more accessible and flexible than ever. Companies like Hugging Face, Meta, and Google DeepMind are leading the way with open-weight AI models, public data, and open algorithms. Text-to-code capabilities can even smooth over the complexities of writing new algorithms and training new models.


This means we aren’t beholden to the hype cycle, and computational designers don’t need PhDs in machine learning to invent something meaningful.

This “spatial AI” concept is obviously early and unproven. I’m teaching a seminar on it at Columbia University GSAPP in Spring of 2024, which starts by exploring the vocabulary of spatial relationships (words like above/below/adjacent to/… or whole building programs and zoning codes), testing how far we can push large language models to reason through spatial rules, and authoring our own spatial semantics.
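As a small taste of what “authoring our own spatial semantics” could look like, here is a hypothetical Python sketch of my own devising: a few of those vocabulary words (above, below, adjacent to) encoded as computable predicates over simple 2D boxes. The class and function names are purely illustrative.

```python
# A hypothetical encoding of a tiny spatial vocabulary; names and geometry are illustrative.
from dataclasses import dataclass

@dataclass
class Box:
    name: str
    x: float      # lower-left corner
    y: float
    w: float      # width
    h: float      # height

    @property
    def top(self):
        return self.y + self.h

    @property
    def right(self):
        return self.x + self.w

def above(a: Box, b: Box) -> bool:
    """True when `a` sits entirely higher than `b`."""
    return a.y >= b.top

def below(a: Box, b: Box) -> bool:
    return above(b, a)

def adjacent_to(a: Box, b: Box, tol: float = 0.01) -> bool:
    """True when the boxes touch along an edge (within a tolerance) without overlapping."""
    h_touch = abs(a.right - b.x) <= tol or abs(b.right - a.x) <= tol
    v_touch = abs(a.top - b.y) <= tol or abs(b.top - a.y) <= tol
    h_overlap = a.x < b.right and b.x < a.right
    v_overlap = a.y < b.top and b.y < a.top
    return (h_touch and v_overlap) or (v_touch and h_overlap)

roof = Box("roof", x=0, y=9, w=10, h=1)
wall = Box("wall", x=0, y=0, w=10, h=9)
print(above(roof, wall), adjacent_to(roof, wall))   # True True
```

Definitions like these give us unambiguous ground truth against which a language model’s spatial answers could be checked.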

There’s a lot of work to do, like cataloging the current spatial algorithms, scrutinizing more data structures, and diving further into how to make spatial concepts computable. I’m not sure how far we can take this, but hopefully we’ll see it yield something interesting soon.