AI’s Values in 2023

Especially in the past few years, artificial intelligence has achieved significant advancements, which have led to remarkable outcomes in medical imaging, drug discovery, autonomous vehicles, climate modeling, operational management, research, urban surveillance, and more.

Sneaky and Surreptitious

In a way, it’s even more remarkable that AI’s presence can go unnoticed. Large-scale surveillance systems employ cameras and sensors that continuously observe public places, and the enabling algorithms scrub the accumulated data to identify and track individuals, most often without our conscious knowledge, until it shows up on the evening news. Shipping logistics now takes advantage of AI capabilities that reduce fuel consumption and shorten shipping times, but do consumers really realize it?

2022’s AI Rockets

And then in 2022, OpenAI’s ChatGPT and DALL-E (and similar products from other companies) showcased the new potential of generative AI methods, which have captivated the world with striking creative abilities – or perhaps, the surprisingly convincing mimicry of such abilities.

With generative AI, mere text prompts can appear to instigate deep research and write entire essays, as if a dutiful ghostwriter responded to our need. With the same algorithms, we can converse with chatbots as we might with a human, finally challenging the famous “Turing Test,” the ever-elusive AI gold standard. Similar textual prompts induce the generation of hundreds of high-fidelity images, as if talented digital artists were somehow imagining and painting in hurried unison. New tools like GitHub Copilot and models like Code LLaMa can author and debug source code, changing traditional software engineering practices and expanding coding’s accessibility.

Collapsing Roteness

AI’s value propositions often focus on the classic gains afforded the individual by automation. For example, instead of color grading or rotoscoping manually for weeks, a film editor or team of animators may simply declare their creative intent and wait mere minutes for it to manifest. Rather than days spent researching, a single prompt may yield relevant scholarly results in seconds (more relevant than those from typical search engines). Hours of frustrating debugging can be curtailed with a prompt that may track down a source code bug in moments.

Delegating such rote tasks to AI might just save our sanity. Every generation since the Industrial Revolution has experienced major new forms of automation that collapse the time needed for the soul-sucking drudgeries of work, yet each has unfortunately tended to produce more drudgery in its place.

Printers and word processing software helped us create documents en masse, yet they also created the deluge of paperwork that defined much of the Twentieth Century’s knowledge work. Programming seemed like the ultimate automation, yet it spawned entire industries and academic pursuits, each with its own time-consuming tasks like debugging and repetitious coding assignments.

And here we are again, where rote tasks can be collapsed into prompts, simple instructions with the power to automate our computer systems with the semantics of natural language, making such capabilities available to nearly everyone.

Capitalist Productivity and The Labor Shift

Relatedly, and perhaps loudest in our contemporary world, such AI advancements are being heralded as the newest wave of business value, increasing labor’s productivity and thus sharpening capitalism’s competitive levers. In the US, where people’s health is measured in dollars as “lost productivity,” is this not the ultimate value we could aim for?

And indeed, generative AI is driving an uncomfortable and challenging shift in labor, as any “disruptive” technology may do. No doubt jobs will be lost and others created. For knowledge workers, a new “AI capital” embodied in custom models may eat away at precious “human capital.”

Do we expect the “gains” afforded by AI to be translated back to workers’ leisure time, less stress, and fair wages? Or do we expect the human aspects to still be quantified as capital and managed as such?

Profit Motives Amidst Bias and Disinformation

Generative AI has moved so fast in the last year that most of us are still trying to catch up. A number of prominent technologists even published a call to slow down the development of LLMs, so society and governance could reason about the implications. However, this doesn’t stop venture capitalists and startups from rapid opportunistic invention. The motivation to sell-sell-sell in the tech industry still reigns, and while I love innovation, I fear this wave comes from an innovation-supply-oriented (“Isn’t this cool? Please pay now.”) perspective rather than a necessity-demand-oriented (“Here are the societal problems we’ve finally solved.”) perspective.

I believe we should work to ensure technology is a human benefit, not merely a capital market.

This urge to wrap technologies with startups means those technologies often ship prematurely for some uses. Bias, fallacies, and factual mis- and disinformation (so-called “hallucinations”) pervade the completions generated by leading language models, which eerily seem to reflect back upon us the best and worst of our Internet-documented society.

And since the popular underlying large language models (LLMs) like GPT-4 are so expensive to train (on the order of 100s of millions of USD for that model), questions arise as to whether market pressure will prove effective enough to motivate “Big Tech” to responsibly address these controversial behaviors and thus tack against what merely profits them.

Stereotyping, by definition?

It’s no secret that AI carries bias. One such analysis shows the embedded bias in generative imagery from text-to-image tools like MidJourney, showcasing societal stereotypes drawn from across the Internet and exaggerated by commercialization, by differing cultural attitudes toward publishing photographs of oneself, by the sexualization of women, by Westernization, and by much more.

Because such generative AI works with pixels and embeddings, the requisite associations that power the algorithms occur at a superficial level, without regard for accuracy or our efforts to combat our own stereotypes. Short, generic prompts produce images that are often strikingly similar, lacking the diversity we desire to ensure the vast multitude of humanity is well represented, while also displaying curious, nonsensical mixing of details.

This suggests to me that today’s AI is even serving as a computational definition of stereotyping, one that looks to statistical prevalence of such details without the ability to incorporate the underlying concepts or to judge its own approaches.
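That “computational definition” can be made concrete with a toy sketch. The vectors below are hand-made for illustration (not from any real model): imagine they were learned purely from word co-occurrence statistics on the Internet. The system then “stereotypes” by preferring whichever pairing is nearer in vector space, with no underlying concepts and no ability to judge its own associations.

```python
# Toy illustration (hypothetical hand-made vectors, NOT a real model) of
# stereotyping as statistical association: nearness in embedding space
# reflects co-occurrence prevalence in the training data, nothing more.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Imagine these 3-d "embeddings" were learned purely from how often the
# words appear together in Internet text.
emb = {
    "doctor": [0.9, 0.1, 0.3],
    "nurse":  [0.2, 0.9, 0.3],
    "man":    [0.8, 0.2, 0.1],
    "woman":  [0.1, 0.8, 0.1],
}

# The "model" prefers whichever pairing was statistically more prevalent
# in its data: a computational stereotype, with no concepts behind it.
for job in ("doctor", "nurse"):
    scores = {g: cosine(emb[job], emb[g]) for g in ("man", "woman")}
    print(job, "->", max(scores, key=scores.get))
```

Nothing in this sketch knows what a doctor or a nurse is; it only measures which associations dominated its data, which is exactly the behavior the analyses of text-to-image tools surface at scale.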

Perhaps that nonsensical mixing was due to mistaken tagging, but even those “mistakes” also reflect our own bias, as inexpensive human labor produces the tags upon which text-to-image algorithms rely.

Mitigations

The need to mitigate these critical problems is becoming more urgent, particularly as companies formulate and implement their AI strategies. Should systemic bias reach beyond the completions and images we generate for our daily tasks and somehow infect the actions of major corporations providing the services of modern life, the consequences of such potentially flawed judgment could be even more far reaching.

Somehow, we must find sustainable ways to democratize AI’s evolution and open its governance. While the centralization of this technology’s power may seem a foregone conclusion, glimmers of hope are emerging from organizations like the Technology Innovation Institute and HuggingFace. Even some tech startups are starting to address these problems: Anthropic focuses on AI safety, and even Meta is taking steps to address data bias.

The next year is going to tell us a lot about this technology, which seems only to develop faster every day. In particular, scrutinizing the values that generative AI espouses will help us steer this technology with wisdom, especially those of us wanting to create and modify such AI models for good. But what Herculean effort will that take?

Updated 2023 Oct 10 to include “Stereotyping, by definition?”