Urgent Questions: A Summary of New Macy #2 

Questions were gathered through an open online process and curated by members of the New Macy facilitation team. These questions were seeded with two starting prompts responding to the “pandemic” (that which affects all people) of today’s AI. 

  1. How do we create more analog & organic frameworks for AI based in human and ecological values? 

  2. What forms of human-machine relationships and interactivity would move us away from the ills of “today’s AI”? 

The original Macy meetings (1946–53) were designed to seize cybernetics as an opportunity. Paul Pangaro noted in his opening remarks that in 2020, with its biological, technological, ecological and social pandemics, cybernetics has something to offer. Cybernetics aims at complex, adaptive systems for a world that is unpredictable and unknowable, and frames questions around purpose, from which a multitude of further questions emerge: “what is ‘our’ purpose? Who is ‘our’? Who decides?”

The second New Macy meeting turned to artificial intelligence. 

The conversation was a series of questions opening onto further questions, each probing the limits, ills, and complexities embedded in the ways artificial intelligence is conceived, designed, and implemented upon other intelligences. Beyond ethics, what alternatives exist in response to these algorithms? What alternatives already exist? 

One method of exploration was to consider the dichotomy of “analog and digital,” complicating that distinction through the “bilingual sensibility” of cybernetics. Rather than rejecting one or praising the supremacy of the other, how might we design, engineer, or plan in ways that bridge their contradictions? 

For the purposes of the event, these divisions were articulated in an image from Paul Pangaro, pitting digital vs analog frameworks while offering a third way:

[Image: “Pangaro Cybernetics.jpeg,” Paul Pangaro’s diagram contrasting digital and analog frameworks and offering a third way]

This session wasn’t intended to generate new applications or to articulate specific frameworks or guidelines for “cybernetic” activities. That will come later, in session 3.

Instead, the event sought questions that would provoke meaningful, open conversation across a variety of fields, perspectives, and experiences of practice, capturing the spirit of the multi-disciplinary Macy Conferences. Over time, the questions may lead to clearer refinements of purpose and methodology.  

These questions revolved, loosely, around: Agency, Perception, Disciplinary Entrenchment, Affordances, Environmental Sustainability, Trust & Decentralization.




Question 1: Agency

“Can we develop a new digital / analog model that would support an AI that would help us leverage human agency to address the wicked problems facing us, locally and globally? — And what would that look like?” 

— Mark Sullivan, MSU Science Gallery / Museum 

Sullivan added that AI today is itself a “wicked problem,” and that it is no longer enough to correct the deficiencies of AI. As works from the artist Stephanie Dinkins’ AI Assembly to Cathy O’Neil’s “Weapons of Math Destruction,” among many others, have made clear, approaches to AI must not only respond to communities, but must incorporate communities into their development. For example, Timnit Gebru and Joy Buolamwini have shown that facial recognition models misrecognize darker skin tones far more often than lighter ones, and biased algorithms have already led to the false arrests of Black men. 

Resistance to agency is one of the identifiers of a wicked problem. Could AI be aimed at finding agency for our problems?

There was some question of what AI meant in the framing of this question. For example, Robert Johannson noted that “There are no technical solutions to what are essentially moral problems. Moral problems do not have answers. They require decisions. As long as our AI is programmed by capitalist corporations it will be corrupt and irresponsible, because the programmer is corrupt and irresponsible. Profit is not the sole purpose of human life, but it is the sole purpose of the capitalist corporation. The national security state also spawns corrupt and irresponsible actors.”

Is there, in contrast, an ethical grounding for cybernetics? Paul Pangaro pointed to Heinz von Foerster’s “Ethics and Second-Order Cybernetics,” in which reflexivity is articulated as a center for cybernetic ethics that “must reside in the action itself.” 


Question 2: Perception

“In a relatively short period of time, we have become disconnected from the non-human world and are immersed in one of our own constructions. What can we gain if we think about how AI and Augmented Reality (AR) might aid our re-connection to the non-human world? What examples can we find? What would the next steps be?” 

— Linda Price-Sneddon, Feminist Futurist Artist Collective 

Price-Sneddon posed a question about embracing identities. In contemplating the future, what becomes possible, and what is left behind? She cited David Abram’s “The Spell of the Sensuous,” which builds on Merleau-Ponty’s thesis of the participatory nature of perception as an “interaction between the perceiver and the perceived.” In some Indigenous societies, careful observation of non-human nature sustains a rich exchange with flora, fauna, and weather. 

Now that “we are immersed in a world now of our own construction, contributing to mental health and ecological problems, can we think of ways that AI and AR might re-engage interaction with the non-human world?” 

Lowell Christy noted that “Douglas Engelbart gave us some indication of how to handle this really important issue. He saw the world as an interface. But his real insights were never capitalized on: How do you develop the organization of human intelligence? Bateson said we must learn from the mind of nature. Nature is more than digital and analogue. Bateson and Engelbart give us the naturalistic side of the equation.” 


Question 3: Disciplinary Entrenchment

“How does the dissection of education into "disciplines" set us up for technology uses/consumptions, workforces/industries, and modes of interactivity that valorize digital frameworks and dismiss analogic ones?”

 — Kate Doyle, Rutgers University Newark, Department of Arts, Culture & Media

Doyle poses a question about entrenchment: academic subjects are “associated with specific modes of description, language use, and methods of study and analysis.” In turn, these disciplines have become aligned with either digital (sciences) or analogic (arts) frameworks. The arts are considered supplemental or disposable because of their analogic (ambiguous, open-ended) qualities; the sciences, on the other hand, tend to be seen as “pragmatic and lucrative” for their digital aspects. Students inherit these biases and carry them into the world of business and industry, where they enact these learned divisions. 

Paul Pangaro cited Joseph Weizenbaum, who wrote in the ’70s about how “hackers and tech folks had a way of thinking that caused a way of programming” that pushed the digital out into the world. Weizenbaum contrasted those cultures in terms of how to come back to analog roots: “Since we do not now have any way of making computers wise, we ought not now to give computers tasks that demand wisdom.” Pangaro added that recommendation engines “don’t demand wisdom when recommending where to get pizza,” but that many applications of AI do demand wisdom: they make social, racial, or economic judgments about individual people. 

“Students characterize their experience in relationship to their disciplines,” Doyle said. “I’m experimenting a lot with changing these things, questioning the heart of the institution itself.” If teachers work within frameworks that institutions have carried since antiquity, can we create new ways to achieve fluidity between “analogue” and “digital” frameworks?

Damian Chapman noted that one effect of the “digital” in academic disciplines is the discouragement of learning with room to fail. Students lack the confidence to fail in experimentation, “because we value [grades], rather than the creative.” Chapman suggested that “digital” disciplines can reshape the perception of failure toward a more “analog,” fluid model, and develop language to “fail forward.” Notably, these boundaries are reinforced by current incentives for “positive” findings in scientific publishing, where funding and prestige are rarely awarded to failed experiments.

Kate Doyle framed the question differently: “How can we use art to think about analogic frameworks? [Regarding] uncertainty and unknowing, how can art help us to access a resistance against this boundary system? How can art help us to educate ourselves to think about boundaries differently?”


Question 4: Affordances 

“How will new affordances and challenges for cyber-physical systems be crafted and engineered for healthy, productive, online teams?” 

— Daniel Friedman, activeinference.org

Friedman explained his question further: goal-oriented systems use questions to set themselves up for results, and the affordances of a system set its capacity for action. With AI, we know some of these affordances, while others are new or even unknown. In a sense, the affordances of a technology are the flip side of its challenges: if we can’t see the challenges, we face disastrous outcomes; at the same time, if we don’t use the tools we have, we’re less effective toward our mission. 

So it’s crucial to recognize capacity. What are we designing for? A cyber-physical framework incorporates supply chains, electricity, and the internet: all systems are increasingly linked through digital technology. Is there space, within the precise framework demarcated by the word “engineered,” for crafted, artisanal, and boutique approaches? For boutique artisans to work with this precision? 

When it comes to online teams (increasingly, any and every group), our question might be: what does healthy look like? What values do we need? 


Question 5: Environmental Sustainability

“How can automated and interactive systems be designed for goals of environmental sustainability and the long-term viability of all we hold dear?” 

— Sebastian Benthall, NYU School of Law

Benthall noted that climate change is the “big problem,” and asked how we might weigh the environmental costs of large-scale systems (like NLP models). Could AI nonetheless be designed to be part of the solution? 


Question 6: Trust vs Centralization

“How can design re-imagine and enable learning and trust in humans to counter increasingly intrusive AI?” 

— Damian Chapman, Design School, KSA Kingston University (UK)

Chapman begins with the premise that design is “a creative and innovative opportunity, thinking through making,” and that dialogue through materials and with each other can build or repair trust. “We bring something into being: moving to knowing from not-knowing. It’s also a process of creating something, which we can explain as part of a logical process, so we can give it a causality. But it doesn’t have that causality as we make it. The causality is a post-rationalization. So how can we move from mistrust and mistrusted systems through opaque systems, to transform into an open process of learning? How can we include within developmental culture that human requirement of trust?”

Jamie Rose and Eve Pinsker discussed the assumption that AI is designed with ethics or benevolence of purpose, with Pinsker noting that malign intent isn’t the only factor. Distinct definitions of “trust” seemed to be in conflict, with some wondering whether trust is possible under centralized control of AI models.

Andy Pickering responded by taking the question back to big data. “What kind of data would a cybernetic AI draw on? Would it be the same data that Google draws on? The phrase that comes to mind is “shallow data.” Rather than extrapolating for all time, you could just ask, “what did the last 10 people do?” It would be a great way to create new patterns, as we can’t get big data anyway.”
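Pickering’s “shallow data” notion lends itself to a concrete sketch. The following is a minimal illustration, not anything built or shown at the meeting; the window size and the pizza-recommendation task (echoing Pangaro’s earlier example) are illustrative assumptions:

```python
from collections import Counter, deque

# "Shallow data": recommend from only the most recent actions, rather than
# from a model trained on indefinitely accumulated big data.
WINDOW = 10  # "what did the last 10 people do?"

recent_actions = deque(maxlen=WINDOW)  # older entries fall away automatically

def record(action: str) -> None:
    """Store one person's action; anything older than WINDOW is forgotten."""
    recent_actions.append(action)

def recommend() -> str | None:
    """Suggest the most common action among the last WINDOW people."""
    if not recent_actions:
        return None
    (action, _count), = Counter(recent_actions).most_common(1)
    return action

# Example: the last few visitors' pizza choices drive the next suggestion.
for choice in ["margherita", "quattro stagioni", "margherita", "marinara"]:
    record(choice)
print(recommend())  # -> "margherita"
```

Because the window is bounded, the system forgets by construction: no accumulated history can outweigh the present, which is one way new patterns could emerge without big data.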

A discussion of the term “AI” came in response. Klaus Krippendorff noted that “The name AI is misleading... who funds it? Big institutions, centralization... decentralization is an issue of control. When more industries run on algorithms, it’s not intelligence, it is a different kind of purpose,” and Jamie Rose argued that AI be called “Simulated Intelligence” for its “imperfect replication of thinking and decision skills.” 

“I wonder if we are late for the first big wave of AI,” Miguel Marcos Martinez said. “Successful AI models depend on centralization and scale. Perhaps the de-centralization model represented by blockchain technologies may force AI to slow down so we can seriously engage these questions. ... The waves in technology are centralize, decentralize, centralize, etc... If the data becomes decentralized, it automatically deflates the power of Google, Facebook, Microsoft to pull off these AI models. Otherwise, with centralization, it increases exponentially....” 

Nate Olson went further, saying that there is a “dichotomy between the rising need for collective data and how it’s been corporatized, and the need to quantize that data for a universal AI platform in a decentralized way. We may not need to attribute a capitalist value to it.” 

The issue of decentralization and centralization launched a side conversation about blockchains and their role in decentralized (often called “trustless”) finance systems. Daniel Friedman noted that “We don’t have to be limited to previous or existing forms of economics with NFTs and blockchain,” suggesting possibilities for the technology beyond its current trend toward decentralized finance, toward a decentralized distribution of data.

Claudia Westermann suggested additional possibilities: “it could be a point of discussion ... as a model of decentralized systems to build upon ideas of mutual aid, ideas of anarchy different from today’s anarcho-capitalism of Google and Facebook.” 


Responses 

Several guests were invited to attend, listen, and respond to the discussion. These notes are as delivered. The guests for session 2 included:

  • Deborah Forster

  • Andrew Pickering

  • Guilherme Kujawski 

  • Larry Richards


Deborah Forster

When I started to deal with cybernetics and systems in general, I was studying animal behavior, specifically baboons in the wild. Part of what I was dealing with was: how do you explore a complex system where, perhaps, you could guess at a purpose but couldn’t really tell that purpose or intention from the outset? Are there ways to understand complex systems without having to attribute purpose at the beginning, so you could actually figure out if there was a discernible purpose? 

As we’re building artificial systems, that may become a more and more relevant consideration, either as an affordance or a challenge, as some of you have said. 

Secondly, analog and digital are kind of slippery things, especially when you look at biological systems. If you think about the on/off of a neuron, that’s where digital came into being, if we look at the old Macy conversations and the McCulloch-Pitts model for understanding neuron behavior. 
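The McCulloch-Pitts unit Forster refers to is “digital” in precisely this sense: the 1943 model reduces a neuron to an all-or-none threshold device. As an editorial illustration alongside her remarks (not part of them), a minimal sketch:

```python
# A McCulloch-Pitts unit: a neuron reduced to an all-or-none ("digital")
# threshold device, as in the model discussed at the original Macy meetings.
def mcculloch_pitts(inputs: list[int], weights: list[int], threshold: int) -> int:
    """Fire (1) iff the weighted sum of binary inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# The same unit computes logical AND or OR, depending only on the threshold.
print(mcculloch_pitts([1, 1], [1, 1], threshold=2))  # AND(1,1) -> 1
print(mcculloch_pitts([0, 1], [1, 1], threshold=1))  # OR(0,1)  -> 1
```

That a single thresholded sum yields the logical connectives is what made the model so attractive as a digital account of the brain, and it is also what makes it a drastic simplification of the analog dynamics of real neurons, which is Forster’s point about slipperiness.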

So those two tensions — purpose and digital vs analog — are really nice juxtapositions to engage with. 


Andrew Pickering 

That was an extremely interesting batch of questions to start off with, and there’s tons of things to talk about. I thought it might be useful to add a few remarks which carry on the basic line of thought to make it more concrete. I understood the subtext of the meeting to be, “Can we imagine some kind of cybernetic AI device/setup that could challenge and go beyond conventional AI?” That seems to be the fantasy, can cybernetics DO something here? 

We need conversational machines, machines we can talk to in a nonlinear fashion. Machines that can help us and surprise us, rather than just replying in a linear question-and-answer format. Perhaps we can put that on a future agenda. It’s worth remembering that the original Macy Meetings were, above anything else, technical. They made technical arguments about the characteristics of the cybernetic machines they built. So what can we say about conversational machines? It sounds like a kind of full-on hope that we could ever have conversations with a machine. But actually, there are many models of machines like that in the history of cybernetics. Pask’s teaching machines, designed to challenge learners and adapt to their responses, in a kind of back-and-forth, a dance of agents. 
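The back-and-forth Pickering describes can be made concrete with a toy sketch. Everything below is invented for illustration; the arithmetic task and the difficulty rule are not Pask’s actual mechanism, only the shape of a machine that adapts its next challenge to the learner’s last response:

```python
import random

# A toy echo of a Pask-style teaching machine: the machine's next question
# depends on how the learner answered the last one, a back-and-forth rather
# than a fixed linear script.
def teaching_loop(rounds: int = 5) -> None:
    difficulty = 1
    for _ in range(rounds):
        a, b = (random.randint(1, 10 ** difficulty) for _ in range(2))
        answer = input(f"What is {a} + {b}? ")
        if answer.strip() == str(a + b):
            difficulty += 1  # learner succeeded: raise the challenge
            print("Right -- something harder next.")
        else:
            difficulty = max(1, difficulty - 1)  # step back and consolidate
            print(f"Not quite ({a + b}) -- easing off.")

if __name__ == "__main__":
    teaching_loop()
```

The point of the sketch is the loop, not the arithmetic: the machine’s next move depends on the learner’s last one, which is what distinguishes a conversational machine from a linear question-and-answer format.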

There’s a whole tradition of these machines running up to the present: Paul Pangaro’s THOUGHTSTICKER from the late ’70s, Bernard Scott’s educational tech, Stephen Willats’ Meta Filter from 1973 in the art world, Nicholas Negroponte’s Architecture Machine, maybe even Stafford Beer’s Viable System Model and the syntegration process. These are concrete examples, seeds that could grow into a more imaginative challenge to Google, etc. And Pangaro is here, he can explain how these machines work! And the limitations of these machines. Why aren’t they flourishing in the world today? What is there to be done on this machine-programming front? The discussion needs to have a technical dimension. What do they do? But it shouldn’t be purely technical. If we had some kind of concrete examples of Cybernetic AI to keep in mind, we could perhaps see the problems of the world as they are, and how to advance cybernetically here. 

Paul Pangaro: I feel very much that desire to make these concrete. You’ll read in the documents we’re producing that these counter-examples, these alternatives to AI, should be concrete: design patterns and prototypes, and a dissection of what happened before. An illumination of what Pask’s machines or Willats’ machines or even Beer’s VSM could be. It’s a beautiful synthesis that you’ve brought together here. 


Guilherme Kujawski 

Hello, thank you very much for promoting all of this, it’s amazing what you’re doing. I have general comments. I am a generalist, so I have generalist comments. The first one is that the New Macy’s role is to prepare, at best, or question technology in the age of a complete absence of questions. The complete absence of questions is today obscured by technological remedies celebrated by transhumanists, jejune engineering guys, human enhancement guys, economists, etc. 

My second comment, very brief — we have the urgent task right now, the urgent, urgent, urgent task to debunk and attack the AI guys who want to construct a foundational model based on neural networks and machine learning. They’re trying to offer this as a foundational model for everything related to AI. This is very dangerous and needs to be attacked right now. There are people who want to rescue symbolic AI. IBM is talking about neural symbolic AI. I don’t know if this is the right path. It’s an open question.


Larry Richards (Closing Remarks)

Some mentioned capitalism, an ambiguous term, and blockchain, a bit different from what’s come before; a few are thinking about what its consequences are. Cybernetics has something to offer: the concepts and ideas offered through today’s session. How do we think about agency, the digital/analogue, participation, affordances, online teams? Andy mentioned conversational machines. Stafford Beer’s syntegration. In these ideas, which are around and being talked about, is something we could move forward with. How do we think about how we structure our interactions with each other, through machines? I don’t want to have a conversation with machines, but we need something to communicate better with each other.