The Hypothetical Image

Response from Midjourney 5 for the prompt, “Seance of the Digital Archive.”

Part One: Underutilized.

When you think of the history of generative AI, what comes to mind? It might be the early robotics labs of Marvin Minsky at MIT. Maybe you go further back, to Norbert Wiener and cybernetics. Personally, I think of something far less exciting: the origins of the insurance industry.

Today, what was once Edward Lloyd’s coffee shop is home to a small Sainsbury’s grocery store and a Pret a Manger. But in 1689, it was a hangout for sailors and those in the shipping trade. Patrons treated the shop like an office. They’d discuss work between sips, and Lloyd would listen. Eventually, Lloyd published what he’d heard in a small newsletter for the maritime industry; later, he would station observers at various ports to report on the movement of ships. It was a means of offering news — and driving business for coffee sales. The paper folded, but was later resurrected by his sons as Lloyd’s List.

Over time, the data gathered about the maritime industry would balloon as retired ship captains offered expertise, and as auctions of boats took place on site. There was a special value in this data: businessmen and sea captains would place bets on which boats would return and which boats would fail. When 130 enslaved Africans were killed on a British ship to preserve fresh water for the crew, the result was a scandal — for the insurance claims. Nobody ever filed a suit on behalf of those who had been kidnapped and murdered. They placed bets, and they sued each other over how those bets should be settled. 

Lloyd’s was the epicenter for the insurance of the slave trade — estimated to account for 80 to 90% of its business dealings. Insuring a vessel simply meant placing a bet on whether its human cargo survived the voyage. Various payments could be made, depending on how people died. Lloyd’s was a coffee shop, but it was also a hub of data surveillance and financial speculation, where wealthy men could earn money over coffee on the death and exploitation of slaves.

19th Century illustration of Lloyd’s coffee house.

These bets came to inform a peculiar and macabre kind of mathematics, a means of establishing risk. Past performances of captains and individual boats, the treacherousness of certain waters, were all factored in. By quantifying past events and analyzing them for patterns, Lloyd’s created a relationship between what had happened and what was likely to happen next, and it is widely cited as the birthplace of predictive analytics. Today, Lloyd’s of London continues to make its profit off of insurance, data, and risk.

Today, predictive analytics remains the basis of an economic ideology that underwrites much of our digital world, and generative AI is no exception. It is an industry built on an exploitation of bodies and labor at a distance. It has become so ingrained into our economies that we have forgotten what data is, and what the technology and financial industries do with it. 

I would not compare the current regime of statistics to the destruction wrought by slavery. I only want to acknowledge the origins and ideologies that derive from that place — ideologies of distance from the real-world effects of abstraction. Generative AI owes more to this history of data analytics than to any history of AI. It is less about figuring out autonomous systems and more about automated pattern analysis. Those patterns strip away much of the world, and in part two of this essay — which comes next week — I’ll explore the ways that ideologies of AI reject the emotional meaning of the world.

But first I want to talk about Big Data and the insistence on displacing labor. Without the massive expansion of data, generative AI tools could not exist. The rebranding of data analytics to AI severs a historical narrative and perspective about where AI actually comes from, distorting the way we make sense of it and our perception of its risks.

Social Media Was an Insurance Agent

Gathering data is an exercise of power. It starts by reducing the world, and people, to samples of behavior. Then it imposes rules, assigns categories, and limits or allows sets of actions. This emphasis on abstraction at the expense of living people reflects a deep disengagement from human joy or suffering. It has driven the tech industry to develop tools that empower that abstraction. It distances us from lived experience and connection, and leads people to believe in a kind of digital simulation of politics at the expense of local communities. 

Online, the right to collect samples from our previously private lives started out as a casual coffeeshop kind of agreement. Social mediation services would extract value from our presence, and we could use the site for free. Meanwhile, in the San Francisco of the 2010s, scores of consultants made careers advising on data monetization and tapping the power of underutilized digital assets. This data wasn’t useful on its own; it had to be activated. The reanimation of this data was achieved through predictive analytics. The idea was simple: gather enough data, from our credit card purchases and grocery store cards, and companies would find useful patterns in it. They’d use those patterns to figure out when to push sales, send us an email, or show us an ad on Instagram.

Just as data about the survival of ships would guide gamblers to certain bets, so it was believed that data could guide a vast number of decisions through sheer force of predictive rationality. At the height of this social-data fetishism, we believed that if statisticians could only average enough polls, they could predict elections. Facebook could look at communication patterns of its users to figure out who had a crush on someone and if it was reciprocated. Enough data could predict your sexuality, whether you were pregnant, whether your city was about to have a flu outbreak.

Through dopamine-inducing feedback loops, social media sites mined this data from us. They encouraged users to share information about themselves constantly, with smartphones reporting locations and even gamifying self-reports. Apps like Foursquare offered points for telling people where you were and how often you went there, information that was sold directly to online advertisers.

Under the social media model, people provided content for free, and companies sold the ads. Writers were reduced to content, and reactions to that content created data that helped social media further analyze, predict, and target its users. At the heart of this practice was the view that writers and artists could be reduced to signals. The real value was mining people’s response to those signals. Today, companies are aiming to remove artists and writers from the loop entirely — it turns out, even free labor was too expensive.

It wasn’t just social media. On the backend of capitalism, the business to business (“B2B”) world sought our data too.

“Imagine knowing which elements of a multi-channel marketing campaign are effective and which aren’t, or automatically triggering personalized Web pages and offers based on a visitor’s clickstream and purchase history, or ... matching the right action to the right credit account at just the right time. You can, with MarketSmart.” (Cited by Golumbia, 2009). 

In pure terabytes, the vast majority of archived human knowledge is sales receipts. By the end of the first decade of the 21st century, data would be able to predict entire sentences, then books, that an author might write, or imagine a thousand museums full of a single painter’s work. In 2013, Walmart was gathering data about 20 million transactions per day. Cars collect data about the routes we take.

Generative AI might best be understood as a rebranding of Big Data and Predictive Analytics. The principles are the same as data analytics, and so are the underlying economics. Companies collect billions of data points, process them through massive data centers, and identify lucrative patterns.

Rather than the movement of ships or people, Big Data’s predictive analytics are tuned to our words and images.

Data as an Economy

Not content with the information posted across the walled gardens of Facebook or Twitter, generative AI companies claim dubious rights over images that were never offered to them. These images and words were shared by individuals to a range of platforms, part of that long forgotten agreement to trade our data for their services.

This data is no longer merely evaluated or presented to advertisers as pie charts or graphs. Words are generated based on their statistical likelihood to follow other words. We have images that emerge from random noise, based on central tendencies within the vast archives of our online visual culture. The 'Data Gold Rush' has made this kind of training data a new frontier for leveraging underutilized assets. It’s an important one, because advertising revenue on social media is slowing down and users are growing skeptical about using them at all. 

Websites that once hosted large files for users to share with others (for the low, low cost of watching an ad) are leveraging their underutilized assets by selling or using this data for training AI. Getty Images, not content with Stability AI’s alleged scraping of millions of its images, trained on its own archives to build a Diffusion model that generates Getty-esque images. Where once our data was gathered to serve us ads, it is now gathered to serve us chatbots and clip art.

“But that’s not AI, that’s capitalism!” I hear you shout. Well yes. AI exists, and was built, under capitalist demands for profitability, not for public interest. The data analytics industry, and generative AI, is built on a long-standing regime of extracting wealth from information, and a reliance on cheap labor to maximize the information it has collected. Treating generative AI as a revolutionary new tool, or somehow independent of the system in which it originated, makes this connection to historical patterns harder to see, and harder to resist. 

Generative AI offers a few twists. Many come in the form of chatbots that obscure what’s happening in the software by emphasizing a conversational interface. The ability to talk in real time, in simple language, to a machine is a noteworthy evolution.

But the idea that they are more “intelligent” than Google’s AdSense system is a deliberate misunderstanding.

Sure, they write words instead of identifying audiences. But they don’t actually provide answers to our questions. They present statistical extrapolations, a verbal equivalent of a machine betting on the survival of a ship that is out to sea. The questions we ask are the ship, and patterns associated with that ship’s past journeys through language are gathered through data analytics to predict likely outcomes. They steer themselves toward longer, authoritative responses.

The interface is designed to resemble a conversation, but it isn’t. It’s a summary of data most commonly associated with the words that come before a question mark. 
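
To make the wager concrete, here is a minimal sketch of that logic: a toy bigram model in Python that “answers” by returning whichever word most often followed the previous one in its training text. The tiny corpus and function names are invented for illustration; production chatbots use neural networks trained on vastly larger archives, but the bet on likely continuations is the same kind of bet.

```python
from collections import Counter, defaultdict

# A stand-in corpus; real systems train on a large share of the public web.
corpus = "the ship sailed and the ship returned and the crew rested".split()

# Tally which word historically followed which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Place the bet: return the statistically likeliest next word."""
    candidates = follows.get(word)
    if not candidates:
        return "<no data>"
    return candidates.most_common(1)[0][0]

print(predict_next("the"))   # -> "ship": the safest wager, not an answer
```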

The data used as the source of these analyses comes from the same places that all data analytics has always come from. Offering our data online was once voluntary, even if the question mark was silent. The most critical of us knew that going online was subjecting ourselves to a surveillance economy — tracked and measured (but the memes were free). The ghost of Edward Lloyd is still eavesdropping on our idle coffee house chatter, but now he’s placing bets on what we’ll say next.

What the walled gardens of social media networks collected — and sold to one another — was always served up under the illusion of anonymity. Even if we thought our data was identifiable (and it almost always is), we took comfort that no one would ever care to see it. It was one drop in the ocean of data surveillance. Orwellian as it was, mass surveillance suggested anonymity.

We were so used to that agreement of trading data for their services that many of us forgot it was an agreement at all. The services have become deeply integrated with our lives: foolishly, I once committed to logging into my bank account with Facebook, a practice I’ve had to work to unwind. Today the terms have shifted, and the agreements we made seem poorly chosen.

Fake Decentralization 

I remember when people liked Facebook and Twitter. Silicon Valley positioned itself as punk. It was rebelling against authorities and systems that stifled creative expression, limited participatory access to media, and conspired to charge us more for pointless overhead. Then it won, and embodied all of those behaviors. The hybrid of hippie politics, techno-optimism and individualism that Richard Barbrook and Andy Cameron defined as the Californian Ideology had raised a new generation of children. For them, the dotcom bust that ended the first web boom was a bump in the road to the exponential fortunes of Web 2.0 tech companies.

Briefly, many of us living in San Francisco reaped the rewards everywhere we turned. It was the revival of disintermediation: demolishing pesky intermediaries that stood between ourselves and cheaper goods, services, and content. We had forums to share ideas, unmediated by the elite spellchecking of minimum wage copy editors at our local newspapers. People were annoyed with the concentration of power within the media, and social media belonged to us — we, the social! 

Just like “social,” “democratization” took on a peculiar definition. An earlier generation might have learned from our efforts at “democratizing” Iraq and Afghanistan. But this wasn’t that generation, and it took everyone by surprise. In Silicon Valley, it meant undermining the overhead associated with a legacy business and shifting the cost to everyday people — the “social.” In 2015, an internal IBM memo summed it up well: 

“Digital disruption has already happened! The world's largest taxi company owns no taxis (Uber), Largest accommodation provider owns no real estate (Airbnb), Largest phone companies own no telco infra (Skype, WeChat), World's most valuable retailer has no inventory (AliBaba), Most popular media owner creates no content (Facebook), Fastest growing banks have no actual money (SocietyOne), Largest movie house owns no cinemas (Netflix), Largest software vendors don't write the apps (Apple & Google).”

This physical, truly social world would be eroded by every virtual turn. Democratization would reduce overhead and costs against the institutional actors: books would be cheaper without heating a bookstore. The end of an era of “brick and mortar” was celebrated. But the arc of this democratization bent toward denser concentrations of power, not away from them. The result has been further erosion of wages and value in almost every industry it has touched. The common connecting fabric of these services was a strange paradox: the belief that a centralized actor could facilitate a democratization of power.

The great disintermediaries became a new intermediary. It wasn’t a people’s revolution, it was a coup.

It isn’t a fluke of Web 2.0 or dot-com bubbles. In their book, Power & Progress, Daron Acemoglu and Simon Johnson point out that computation has long been associated with promises of productivity and economic prosperity. Yet, since the introduction of the “democratized” personal computer, the facts show us something else. 

“Digital technologies became the graveyard of shared prosperity,” they write. “Wage growth slowed down, the labor share of national income declined sharply, and wage inequality surged starting around 1980” (255). 

With computer makers relying on large corporations for the bulk of their contracts, their products reflected the concerns of those clients. For Facebook, and most companies of the data analytics era, any attempt at service was a front. The real business was data analytics.

For anyone who embraced this radical disruption of malfunctioning institutions, disappointment followed. Facebook’s algorithmic sorting would eventually punish thoughtful content that didn’t trigger profitable online arguments. Instagram would prioritize images containing blue skies or exposed skin at the expense of other images. Twitter would turn into X. Bandcamp would collapse. WeWork would start kicking people out of unprofitable office spaces after luring companies to give up their own real estate.

Everything was subsidized by Venture Capital with a thirst for data, redirected to ever-powerful sorting algorithms and marketing backends. Around 2018, a new breed of data grab started to appear: apps that would put funny lips or haircuts on your face, swap your gender, add wrinkles or remove them. Face-swap apps didn’t just want to know where you were or where you were shopping. They wanted your face. Ideally, many pictures of your face. Cheap gimmicks of trick photography were how they would get it. Your raw images began stockpiling in face recognition systems, used in surveillance — a kind of idle coffee-house chatter that would eventually be turned against minorities as “crime prediction” tools.

Then Big Data came to democratize art. We can expand IBM’s list: the most prolific producer of visual art has no artists (Stable Diffusion), and the biggest producer of words has no writers (OpenAI). 

Monticello in California

Much of America’s technological lineage can be traced to Monticello, Thomas Jefferson’s sprawling estate maintained by people he had kidnapped and enslaved. Politically, Jefferson was constantly wringing his hands about the ethical quandary of slavery, and did abolish the international slave trade (while keeping people enslaved at home).

As a way to accommodate this hypocrisy of conscience, Jefferson relied on a set of ingenious technologies for mediation. One could come have dinner with Jefferson and discuss the moral quandary of slavery while being served wine from a technological apparatus, loaded by slaves in the basement and delivered through a small elevator that ensured no one was disturbed by the slaves’ presence.

Likewise, one would be served a meal through a rotating door, on whose shelves the food would appear as if by magic, with no need to acknowledge the service of the people kept hostage in the kitchen.

Technology has long been designed at the expense of those whose living is earned through service, for the benefit of those who pay to be served. The interface is a tool for obscuring human labor behind screens. These screens enforce a kind of spectacle — they make food or goods appear on your doorstep, without any interaction with the chefs or the drivers. They allow you to hire workers to process data for your startup for pennies on the dollar. These interfaces strike at the nerve of social connection, creating diffused networks where the individual elements of the underlying system are completely obscured, rendering cohesion and solidarity nearly impossible.

For those in power and control, the act of labor has a way of dehumanizing the laborers: they move from people to expenses. Many of the activities targeted for replacement are therefore not considered “human” at all, chiefly because technology encourages us to ignore our reliance on other humans. There seems to be almost a resentment of labor in the halls of technology: tech is never meant to empower workers to strike, it is meant to replace workers who someday might strike.

There was something easy about handing over access to our browsing habits and shoe size in exchange for being shown new websites and shoes, even as we knew it was all killing bookstores, record stores and community interaction. Once it killed the stores, it came for the music, then the books, then the art and photography.

This was shocking for many, because it was not only labor, but cultural memory. Photography is a technology of remembrance. It connects us to others. Sever that, and what have you got? The damage is emotional, not material. Yet our attachments to these imaginary, virtual worlds are so slippery, so mediated by spectacle, that our attachment to the world inside of screens feels dangerous to acknowledge.

“Love Doesn’t Scale.”

The Techno-Optimists seem to assure us, simultaneously, of the value and disposability of our online communities, our shared artworks, our online conversations. They want us to share, because sharing has always been the tool they’ve used to drive monetization.

AI adds another dubious layer to this relationship. It severs the community entirely. Instead, we talk to the AI, make art with the AI. What’s monetized is not technology as an intermediary: charging us a communication tax in the form of sampling our data and showing us ads is Web 2.0. AI is Web 3.0, a simulation of the web based on the ghost of interactions past. Social media was a platform where we were all trapped in digital walls and surveilled in order to build this next communication regime.

At my most cynical, my most paranoid, I find myself fretting about this trajectory. I don’t believe that the internet connected us to each other. I find it has isolated us, trading physical proximities for ideological ones. Now AI promises to further constrain those relationships, to move us from a time when one could speak to and hear from many to a time when we can speak only to ourselves: one-to-none communication, a throwback to the days of yelling at the TV, but now the TV can adjust.

If the trend line of this historical trajectory of leveraging underutilized assets continues unabated, then the future seems bleak. Industry will aim to further tighten and constrain our interactions online until we are surrounded by engagement engines, pumping out material that keeps us typing and sharing. Based on the ideology of individualism, the tech industry seems likely to tighten the boundaries of these online systems so that we don’t need anyone else to do the things we love. We will type for the machine that surveils us, share with simulated audiences.

Denying our dependencies on others is not a means of amplifying human potential. It is a tool for rejecting that potential. I see this as the true existential risk of AI. Not the machines, which simply hum math into pixels. I am convinced that those who build the technology of generative AI will aim to replace, not empower, the communities and interactions we find ourselves valuing most today. Already generative AI is hijacking the impulse of empathy and conversation. If history is any indication, the next logical step of the system’s evolution would be to control it.

Part Two: The Aestheticization of Big Data

Let’s look again at the image I created with Midjourney.

I wanted an image that would be true, no matter what it looked like. I settled on this: “seance of the digital archive.” 

A seance, because by sending the prompt, a digital archive is reanimated. The information lies still in the archive of the World Wide Web, recorded and set aside. For the medium of Diffusion models, it represents a world it seeks to resurrect. 

In a literal way, Diffusion models begin by destroying that archive. Every image has information stripped away, step by step, and the distribution of digital noise is measured and calculated. This calculated decay is studied and traced back to the original image, as if following scattered breadcrumbs back along a ruined trail. The journey back to the training image is preserved as a mathematical formula: the algorithm, the rules that dictate what the computer does.

As a medium, the models are then tasked with communicating, or expressing, the essence of these degraded images. This resurrection is always incomplete. It’s all orchestrated by applying the logic of a billion decaying images in reverse. It starts with noise, but it’s noise that belongs to nothing: literally random. The connect-the-dots game of remaking the image is based on the hazy recollection of the archive, tracing finer and finer lines into the abstractions that emerge. These tracings don’t follow the logic of cultural memory; they follow the logic of an arbitrary world whose shape is carved by pattern-finding.
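
To make that destruction-and-retracing concrete, here is a toy sketch of the forward (noising) and reverse (denoising) steps in Python with NumPy. The noise schedule, array shapes, and placeholder denoiser are illustrative assumptions: a minimal sketch of the diffusion idea, not the actual code of Midjourney or any production model.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000                                  # number of decay steps
betas = np.linspace(1e-4, 0.02, T)        # how much noise each step adds
alphas_bar = np.cumprod(1.0 - betas)      # cumulative share of signal remaining

def forward_diffuse(image, t):
    """Destroy: blend the image with Gaussian noise at step t."""
    noise = rng.standard_normal(image.shape)
    noisy = np.sqrt(alphas_bar[t]) * image + np.sqrt(1.0 - alphas_bar[t]) * noise
    return noisy, noise   # training grades a model on recovering `noise`

def denoiser(x, t):
    """Placeholder for the learned model, the 'mathematical formula' above.
    This one guesses zero noise, i.e. it remembers nothing."""
    return np.zeros_like(x)

def reverse_step(x, t):
    """Retrace: subtract the predicted noise to estimate the lost image."""
    predicted = denoiser(x, t)
    return (x - np.sqrt(1.0 - alphas_bar[t]) * predicted) / np.sqrt(alphas_bar[t])

# Generation begins from noise that belongs to nothing:
x = rng.standard_normal((64, 64))
estimate = reverse_step(x, T - 1)
```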

In the image above, we see a young girl in front of a pile of photographs. The room is dark, lit by a red lamp. There are images on the wall behind her. Her hands sit folded as if a seance is taking place. But is it true? Are the hands folded as if in a seance? Or is this only my mind making connections to the prompt I see above it? 

That kind of trickery is baked into these systems. But like all images, our interpretations rely on the entanglements of our minds with culture. I can infer that the hands are folded as if in a seance, because I have seen seance photographs before. Other things strike me as unusual. The images in piles, physical photographs to represent the digital archive.

We can read an AI image in a number of ways. In this case, I would turn to the prompt. The prompt is the starting point for carving these noisy pixels into images. In the training data — the digital archive from which this seance was performed to reanimate — the phrase “archive” conjures thousands of images. None of these images are replicated in this generated image. Instead, they congeal, loosely, into a collection of associations. Every image with this label becomes a kind of synonym for the thing it represents. No one image represents “archive,” but the cluster of all these images builds a central abstraction. The machine will sample from this abstraction as it steers through the noise. 
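
Here is a toy sketch of that clustering, under the assumption that images are mapped into an embedding space, as in CLIP-style systems. The random vectors, names, and steering loop below are invented stand-ins for illustration, not anything from Midjourney’s training set or sampler.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend these are learned embeddings of every training image
# captioned "archive": each sits near, but never at, a shared center.
archive_embeddings = rng.standard_normal((10_000, 512)) * 0.3 + 1.0

# No single image represents "archive"; the centroid of the cluster does.
central_abstraction = archive_embeddings.mean(axis=0)

def steer(x, strength=0.1):
    """Nudge a noisy candidate a small step toward the abstraction."""
    return x + strength * (central_abstraction - x)

# Sampling repeatedly nudges pure noise toward the cluster's center:
x = rng.standard_normal(512)
for _ in range(50):
    x = steer(x)
```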

If we look closely at the image generated above, we can see aspects of the image that must have been connected to my prompt — “Seance of the Digital Archive.” We can then search datasets to see what images are stored there, images resurrected by this seance.

The images in the training data for archive include images of World War 1 bombings, family photographs of the German Wehrmacht, and individual portraits of Holocaust victims. These are inscribed into the image above alongside the hands of a seance found in spiritualist archives, and the lighting drawn from episodes of the Archie reboot, Riverdale. This is not a collage of images but a collage of documentation stripped of context, photographs without memory. It is stitching with cultural debris, pop culture and trauma woven into a single tapestry, the threading of the needle predicted pixel by pixel.


To the left: Images of German Wehrmacht Soldiers on holiday, from WW2. Part of the training data associated with archives, as discovered by a keyword search of LAION 5B.

This collection was on sale at an auction website and the image remained in the training data.


Images of victims of the Holocaust, taken at a Holocaust museum in Germany and included in LAION 5B training data.

To the left: Portraits of People Killed in the Holocaust.jpg, an image in the training data associated with archives, as discovered by a keyword search of LAION 5B. Compare to the portion of the generated image, below. Not a direct copy, but informed by this and millions of other images.


To the left: Midjourney’s “Describe” feature allows you to post a generated image and see what keywords might have created it; that is, it is a way to reverse-engineer the prompt of an image.

The “Seance of the Digital Archive” image consistently gave two sets of terms: the first was “Jewish heritage,” though this was never part of the original prompt.

The second was “Zachary Hayden,” an actor with a bit role in the TV show “Riverdale.” An image search for that actor associated these two scenes, one of which includes a seance and one of which includes a red lamp. The lighting effects are also reminiscent of the “Seance of the Digital Archive” image seen above.


These are not one-to-one translations, of course. That’s not how diffusion works. There’s nothing hidden in the name of this process: these images are diffused, and this diffusion etches itself as a loose set of traces and outlines associated with the images we see here.

None of the training data is being destroyed. But I would argue that it is nonetheless being desecrated. It’s an empty ritual of erasure. Every AI image is built on images that came before, but those originals are completely severed from any connection to meaning. The images are stripped down and sold for parts — for any essence that might inform these new statistics.

If ruins are a monument to those who burned them down, then maybe the Empire of the Image is at an end. The “democratization” of images is upon us: the frenzy of the image. Image-mania. AI creates new images from the ruins. A new culture comes in its place: optimistic, futuristic, absent of melancholy, devoid of death. But this techno-optimism comes at the expense of acknowledgement and response.

The images of the Wehrmacht on holiday live side by side with images of their victims. In that way, Diffusion is dis-integration. The meaning of historical images is derived not purely from what is depicted, but from what is understood by the viewer. Disconnect images from their social meaning — treat them solely as data, rather than cultural artifacts, as tools for remembrance — and you erase their significance.

As Pierre Nora, writing long before Diffusion models, has noted in Realms of Memory:  

"Hallucinatory re-creations of the past are conceivable only in terms of discontinuity. The whole dynamic of our relation to the past is shaped by the subtle interplay between the inaccessible and the non-existent. If the old ideal was to resurrect the past, the new ideal is to create a representation of it. Resurrection, no matter how complete, implied a careful manipulation of light and shadow to create an illusion of perspective with an eye to present purposes. Now that we no longer have a singular explanatory principle, we find ourselves in a fragmented universe. At the same time, nothing is too humble, improbable or inaccessible to aspire to the dignity of historical mystery. We used to know whose children we were; now we are the children of no one and everyone."

This passage takes on new meanings in the age of AI generated artworks. We can look at a person in any of these images and see nobody’s child and everybody’s child all at once. Victims and perpetrators of the Holocaust fused into new, shared bodies. The past is a ruin from which nothing is mourned and everything is a playground.  

No Mourning for Synthetic Ruins

But ruins, at least, can bring to us a sense of peace. Walk around the long-abandoned columns of Rome or the former site of a castle in Japan and you might be moved to contemplation. Georg Simmel, sociologist of the ruins, wrote in 1911 of the remnants of Rome that the pleasure found in ruins rises from the tensions they place between obliteration and form — between what was and what remains:

“This antagonism [of] letting one side preponderate as the other sinks into annihilation, nevertheless offers us a quietly abiding image, secure in its form. The aesthetic value of the ruin combines the disharmony, the eternal becoming of the soul struggling against itself, with the formal satisfaction, the firm limitedness of the work of art. For this reason, the metaphysical-aesthetic charm of the ruin disappears when not enough remains of it to let us feel the upward-leading tendency. The stumps of the pillars of the Forum Romanum are simply ugly and nothing else, while a pillar crumbled - say, halfway down - can generate a maximum of charm.” (384)

The pleasure of this tension is absent from the accelerated decay of the digital image. The ruin of the image isn’t natural decay as we have seen over a century of film. It is imposed on the archive: a razing of meaning, like scraping the names of old leaders from the parks once the new regime comes in. It is a gesture that says, “this is yours no longer.” 

Within this process is hidden a denial of death, a rejection of meaning in that massive breakdown of online visual culture. There is an emphasis on endless new life built on piles of what has been built, socially, online: now broken down and discarded. But traces of the ruin linger, and so do its politics. The specific politics of this ruination, and the way we "read" these ruins, amount to an aestheticization of Silicon Valley ideologies, which focus on perpetual growth while denying the ruin and rubble left behind by its pursuit. An aesthetics of capture, sampling, and prediction.

In Mirror Stage, Nora N. Khan and Peli Grietzer raised the question of surveillance: “Prediction and correlation analysis come, also, with a semantic style, one that affirms the infallibility of the prediction. We slowly lose track of what did happen in favor of what was most likely to happen.”

Diffusion models aestheticize what data analytics has always done. They alienate a sliver of the world, abstract it through measurement, and predict corollaries. They turn a photo into a representation of what it represents, rather than a reference to the slice of time depicted. This is the “mechanical” process — what Diffusion does, as an apparatus. But it is also a cultural process, in which the prompter evokes symbols — literally, words — to create a representation of those words, rather than any depiction of real events. It is a request to extrapolate what an image might be, based on the data previously gathered. It is a hypothetical.

People, as a human presence, are removed from the image as bodies to be remembered. As training data, the image is stripped of this connection to memory. The person remembered is erased to become rearranged in the forms and structures of new bodies which do not exist but are speculated to exist. 

It is not a coincidence that generative AI’s ideologies resemble a new manifestation of a long-running set of myths. An AI image is a result of data analysis and prediction models, and data is a language of reduction. Diffusion models reduce images to data by reducing them to noise. This is literally true, but also appropriately metaphorical. If postmodernism was defined by a kind of instability of definitions and orientations after the collapse of any consensus toward shared meaning, then what we’re in now is surely an attempt to reinforce a new aesthetics of progress, through acts of reduction, control, datafication, and regeneration, built on a literal erasure of the history on which these systems were built.  

Trauma Collage

A close-up of the generated “archive.”

There was a moment in the writing of this essay when I came across footage of a small village built by the US military for tests in the 1950s. Unable to build a complete mock city in the desert, the Army Corps of Engineers built a few objects that might occur in a city: a fraction of a bridge. A Frankenstein building of various materials and building methods. A handful of vehicles and mannequins; train tracks in the middle of nothing.

Then they dropped a nuclear bomb on it. The idea was to study the detonation and its impacts: the way the buildings and bridges and mannequins fell apart. In some hopeful moment, engineers had decided that these tests of deterioration would guide them to stronger materials or building arrangements. In the end, the entire city was obliterated. 

I was watching this and thinking of diffusion models — the vast visual corpus of the internet engaged as a simulated deterioration study. On the one hand, it is a vastly destructive action, to drown images of atrocity and bubblegum into the same billboard, to take pictures of people we love and strip them of that connection, to gaze on an image without any reference or linear context, to deprive an image of its story. 

It’s different when the trauma belongs to us — when it shapes our imagination. But when we do this to others, it’s desecration. We know history’s atrocities, and to forget them would be bliss — a return to ignorance, if such forgetting were possible, and if it were achieved by some equilibrium of justice instead of the erasure of whatever evoked memories of those atrocities. Until then, of course, we navigate the trauma of others by holding its images gingerly, at the corners. We respect the solemnity of it.

Yet, somewhere in the training data of your AI images are the contours of Auschwitz and Abu Ghraib. Emmett Till. Photographs of children killed in Rwanda. 

Tamara Kneese, in Death Glitch, reminds us that “Mourning is always mediated in some way,” and that digital and physical heirlooms alike depend on storytelling and caregiving to maintain their legacies, to preserve that function across generations.

What do we hold in collective memory? What work and care do we owe to these images, and what to those whose memories they sustain? Photographs are a technology of remembrance. They are themselves a form of taking-from, which is why photography is both a tool and form of power. They may strip away the dignity of agency over our own bodies, or grant us visibility in times of erasure. They evoke the memory of the dead. It is a purely imaginary sense of things, and yet, we recoil at the desecration of that memory. 

After contemplating this process in comparison to nuclear blasts in the Nevada desert, I had a second thought: does any of this actually matter? This is an invisible process, hidden in some enormous computer cluster. We don’t see it happen, and most of us don’t even know when or if it did. One could blissfully generate remixes of victims and perpetrators of historic atrocities into people with funny hats, and we’d never know they were trauma collages. If the meaning of photographs is socially constructed, built by reference, then if there is no trace of that reference, no trace of that erasure — is it even erasure at all?

Some would argue that the virtual, simulated desecrations of memory do not really matter. Obscured through the comforting distance of technological mediation, we never really confront it. The image is torn apart, but not destroyed. If the images we make don’t mock those tortured at Abu Ghraib, does it matter that those prisoners are dissolved into the stew of generative AI? Our original memory remains. Never mind that it has been stripped of context and industrialized. These problems, they suggest, are imaginary, because they cannot be seen. No measurement, no reality.

But perhaps we can defend the imaginary as the actual place in which we live our digital lives. We’re in a century of screens luring us to interact with content. All of that is imaginary. Think hard enough about what we’re doing and the whole scaffold is absurd. Moving a clump of plastic around with our hands. Responding viscerally to tiny light bulbs changing colors, packed densely enough to look like information. 

The screens react to us, and we find ourselves in a wild oscillation between observer and actor. We respond, the screen responds, and we respond to what has changed: a cybernetic circuit. 

Any technology that relies on the human imagination to function is a deeply social system. Anderson’s imagined community suggests that even nations exist only in the shared imagination of media references: the newspaper reader envisioning themselves in a network of similarly informed citizens, seeing the news reported from the position of their shared nationhood and history. Of course, these communities have always been exclusive: some imagine themselves steering the nation, others see themselves in tow. The imagination isn’t a perfect vessel. But it is the primary vessel that we have.

The ruination of digital objects — or communities — is a virtual exercise of power over these entanglements. It displaces one imaginary for another: the messy for the predictable, the free for the monetized. Respect for these worlds of feeling is quickly dispensed with when it comes time to measure. Digital objects are, by definition, not “precious” under any capitalist definition of value. As a result, a sense of inauthenticity lingers over the entirety of our digital experiences, especially, perhaps, the experiences of others. 

An image being sold on the Adobe Stock Photography website as of October 12, 2023. It is an AI generated image depicting a Palestinian refugee who does not exist.

Today you can buy AI generated stock photographs representing a variety of historical traumas. The image above is one such example, an image of a Palestinian refugee that doesn’t exist. There are all kinds of additional images of the conflict, from bombs to debris. Here the flow is reversed: the vast sea of images, from pop culture to stereotypes of refugees and Palestinians, is mobilized to erase what they represent. The reality of conflict is displaced and the images that document that reality are cheapened, made less real. Aestheticized images of refugees are commodified and sold in ways that ultimately undermine lived trauma.

It moves us from the abstract, invisible technological process into the real world of image circulation and distribution: the real world of media power.

Dis-Integration

To see ourselves — our traumas and atrocities alike — blended into a single database, reanimated purely for the production of aesthetic pleasure, taps into the impulse Benjamin observed in the stylings of fascism. One is dissolved into the collective whole through the mobilization of images. AI moves the image from a technology of remembering to a tool where the graveyard can become a playground for self-expression, where collective responsibility dissolves.

Data, too, is imaginary. Sample the bits of the world we can measure. Discard outliers, and stick to whatever center emerges in the data. With context and emotion stripped away from these isolated samples of our entangled world, we’re asked to trade the meaning of things for the mean of things.
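
A minimal illustration of that reduction, with invented numbers: trim whatever falls outside the middle of the distribution, then keep the mean. Whatever the odd measurement meant, it is gone from the record.

```python
import numpy as np

measurements = np.array([2.1, 2.3, 1.9, 2.0, 2.2, 9.7])  # 9.7 is someone's story

# Discard the tails, stick to whatever center emerges:
low, high = np.percentile(measurements, [5, 95])
kept = measurements[(measurements >= low) & (measurements <= high)]

center = kept.mean()   # the mean of things, standing in for the meaning of things
print(center)
```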

The value of data is in its predictive power: when it is tested, and reproduced, it suggests that the next time we test it, whatever we measure will behave similarly. It is a method of determining what is reliable in an unstable world. It is dangerous to use data to mold the world into the shape of those predictions. 

Data measured through standard, rigid categories of observation is an abstracted fiction. Data might do things, might predict things, but so does the human imagination. Value exists in the things we define and negotiate in a social world. Imagination is the space where we navigate this collective negotiation. Collecting data that denies one set of values — such as safety, justice, identity, kindness — whittles the world into bits and pieces framed by the equally slippery values of predictions.

Diana Forsythe noted, observing AI engineers in the 1990s,

“'Knowledge' means explicit, globally-applicable rules whose relation to each other and to implied action is straightforward. Knowledge in this sense is a stable entity that can be acquired and transferred. It can be rendered machine-readable and manipulated by a computer program. I believe that, in effect, 'knowledge' has been operationally redefined in artificial intelligence to mean 'what can be programmed into the knowledge base of an expert system'. Star has commented that computer scientists 'delete the social'.  I would add that they delete the cultural as well.” (Forsythe 465)

Khan and Grietzer suggest that the artistic response may be to frame and “highlight the blur,” to create frames of thinking that oppose reduction and prediction. In that light, I wonder about my own set of tools — the glitch, designed to subvert the system entirely, to trick the system out of the security of its predictions in order to render scenes of confusion and disorder. Part of me is still lured into the seductive entanglement of these things.

In the absence of a forward-looking and socially engaged imagination, too much generative AI re-evokes false memories. It endlessly recreates yesterday as a form of making tomorrow. It does this at the expense of connection and mourning. It’s an erasure of accountability dressed up in techno-positivity. Glitch is meant to reveal the feedback loop that chaos drives in these predictive systems: when the data is absent, when the data is made weird, when the model is starved of data altogether, we force the predictive engine into a double-bind, a forced breakdown in the logic of analytics.

At least, I hope.  

The Hypothetical Image

Images from the generated “Archive.”

This may not be a philosophical statement, but a personal one. I hear that my concerns are imaginary but I prefer to say my concerns are the imaginary. The imagination is the space where artists work.

The imaginary worlds of generative AI feel bleaker to me every day. A surrealism without a subconscious, rendered with the aesthetic predictability of its training data: advertisements and clip art fused with atrocity footage and family snapshots. All of the images are extensions of the visual melange, hypothetical images based on all images prior. They are paired with a sense that the origins do not matter, that labor does not matter, that any obligation to citation or history does not matter.

The Generative AI artist believes in these hypothetical images as if they were actually images. That’s the trick of it. AI images aren’t images at all. They are guesses at what images might someday come. It reminds me of envisioning a city on top of its ruins. Because it is constrained by the past, with no remembrance of what took place, nostalgia is at the heart of these hypothetical images: “utopia in reverse,” as Andreas Huyssen puts it. Using prompts to navigate samples of historical forms, the image rearranges tropes without recollection and with imprecise control. AI art as we see it commonly practiced relies on the artist’s memory of past cultural forms in order to resurrect and reanimate them. Star Wars. Wes Anderson. Fashion shoots.

It’s more than just the behind-the-scenes process of the diffusion technologies. It’s also the way these are mobilized: the culture surrounding the generative AI apparatus.

“Everything is a remix,” they say. The unspoken follow-up: “and nothing matters.” The implication is that everything is ours. Imagination is redefined as the reproduction of all previous patterns, the animation of all past forms into new arrangements. Because these new images are built on a rejection of the traumas of history, I can’t help but be disturbed. This imagination is the perpetuation of a sterilized past. Ruins are razed to be rebuilt as theme park versions of themselves. The reminders are dismissed as phantoms.

This text originally ran over two weeks in the Cybernetic Forests Newsletter. You can sign up here.