Generative AI Value Propositions for Computational Designers

Spectrum of Intervention

  1. Productivity: work faster
  2. Take away drudgery and rote work
  3. New interaction models (e.g. chat with your data)
  4. Elevate thinking
  5. New AI models

For Design and Designers

And more relevant for designers, notions of who actually authors creative works and who holds their ownership rights are already being challenged in US courts.

Tools

Even so, will designers continue to be subject to the tools and agendas of commercial entities like Autodesk and Adobe?

The danger is that when Adobe ships generative AI features in Photoshop, it signals to designers what a good design process should be, rather than simply what more they can do.

This is akin to incentivizing writing by the attention it gets. Does that lead to better writing, or to clickbait titles and curt hot-take opinions?

(reference to something on tools)

Value prop: Generative AI changes one’s mindset from that of an author to that of an editor or curator.

Speaking specifically to how GPT and other similar models aid programming: they enable designers to ask conceptual questions about computational problems and get specific answers expressed as code. (To my surprise, they can even use specific frameworks like P5.js.) ChatGPT is particularly good at explaining the code it generates as well, so it’s not just a generate-and-done experience but a learning one. (As we’ve seen, it certainly doesn’t obviate the need for instructors!)

So as long as the designer knows what to ask for, i.e. what outcome they want (e.g. “I have a graph of Cities as nodes and Shipping Lanes as edges. Give me an algorithm to traverse the graph and find potential supply chain routes.”), they can get a formal-language output that represents it. Designers still must know how to incorporate the resulting code into the rest of their projects.
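
To make that concrete, here is a minimal sketch, in Python, of the kind of answer such a prompt might produce: a depth-first enumeration of simple paths between two cities. The city names, shipping lanes, and function names below are illustrative assumptions, not output from any particular model.

```python
# A minimal sketch of the kind of code such a prompt might return.
# City names, lanes, and function names are illustrative only.

from collections import defaultdict

def build_graph(shipping_lanes):
    """Build an undirected adjacency map from (city_a, city_b) pairs."""
    graph = defaultdict(set)
    for a, b in shipping_lanes:
        graph[a].add(b)
        graph[b].add(a)
    return graph

def find_routes(graph, origin, destination, path=None):
    """Enumerate simple paths (no repeated cities) from origin to destination
    via depth-first search. Each path is one potential supply chain route."""
    path = (path or []) + [origin]
    if origin == destination:
        return [path]
    routes = []
    for neighbor in graph[origin]:
        if neighbor not in path:  # avoid revisiting a city (no cycles)
            routes.extend(find_routes(graph, neighbor, destination, path))
    return routes

# Example usage with made-up cities and lanes:
lanes = [("Shanghai", "Singapore"), ("Singapore", "Rotterdam"),
         ("Shanghai", "Los Angeles"), ("Los Angeles", "Rotterdam")]
graph = build_graph(lanes)
for route in find_routes(graph, "Shanghai", "Rotterdam"):
    print(" -> ".join(route))
```

The point is less the algorithm itself than that the designer’s conceptual framing (cities as nodes, lanes as edges) maps directly onto a formal-language result they can then adapt.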

Aside from making computational designers more productive and helping students learn, it can potentially elevate their mindsets, for example by helping them avoid painful context switches between frustrating implementation details and more desirable conceptual thinking. And even when they do hit such inevitable frustrations, they can turn back to the LLM for help.

Text generation changes one’s mindset from that of an author to that of an editor. (link to mindset stuff?)

If we take this further, AI could unlock latent opportunities in design computation, like authoring semantically rich models and building a canonical foundation of computable spatial primitives for such models. That’s another story, though.
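
As a purely hypothetical illustration of what a computable spatial primitive could look like, the sketch below pairs geometry with machine-readable semantics, so that both traditional algorithms and language models could reason over the same object. Every class name, field, and value here is an assumption made up for this example.

```python
# Hypothetical sketch of a "computable spatial primitive":
# geometry plus machine-readable semantics in one object.
# All names and values are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class SpatialPrimitive:
    name: str                             # human-readable label
    polygon: list[tuple[float, float]]    # 2D footprint as (x, y) vertices
    semantics: dict = field(default_factory=dict)  # e.g. {"use": "meeting"}

    def area(self) -> float:
        """Computable property: shoelace formula over the footprint."""
        pts = self.polygon
        return abs(sum(x1 * y2 - x2 * y1
                       for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]))) / 2.0

room = SpatialPrimitive(
    name="Conference Room 2A",
    polygon=[(0, 0), (6, 0), (6, 4), (0, 4)],
    semantics={"use": "meeting", "occupancy": 12},
)
print(room.area())  # 24.0
```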

Approach-wise, I believe there’s an opportunity now for the program to engage AI proactively at multiple levels: yes, by (carefully) using ChatGPT, Bard, Claude 2, and the like, but also by researching and building new models and corresponding spatial data, by exploring potential synergies between LLMs and more traditional computational models that are provably “correct” (i.e. what we already do, like parametric models in Grasshopper), and so on.
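
One way to picture such a synergy, sketched very loosely below: a language model proposes candidate parameters, and a deterministic parametric model evaluates them against hard, checkable constraints. The function names, parameters, and constraint values are all invented for illustration, and propose_parameters merely stands in for an actual LLM call.

```python
# A hedged sketch of one possible LLM / parametric-model synergy:
# the LLM proposes candidates; a deterministic model verifies them.
# All names and values below are invented for illustration.

def propose_parameters():
    """Stand-in for an LLM proposing design parameters from a natural-language brief."""
    return [
        {"floor_count": 8, "floor_height_m": 3.5},
        {"floor_count": 20, "floor_height_m": 4.0},
    ]

def parametric_model(params):
    """Deterministic model: computes overall building height from the parameters."""
    return params["floor_count"] * params["floor_height_m"]

def satisfies_constraints(height_m, max_height_m=45.0):
    """Hard, checkable constraint, e.g. a zoning height limit."""
    return height_m <= max_height_m

valid = []
for candidate in propose_parameters():
    height = parametric_model(candidate)
    if satisfies_constraints(height):
        valid.append((candidate, height))

print(valid)  # only candidates the deterministic model verifies survive
```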