Looking to connect two recent events/conversations (as is my wont), this time to explore a fundamental tension in how we approach AI's relationship to power: are we designing systems that foreclose possibilities, or ones that open compossible worlds?
In July at Politics of the Machines Perth https://lnkd.in/gCxQM2YY, a panel with Claudia Westermann and Michael Dunbar, "Experiments in Synthetic Logic," explored how AI might interface with more-than-human perspectives. It built tentative bridges between Chinese painting philosophy, where "rocks must be alive," and the Melbourne Design Week synthetic data workshops that imagined what conversations between common blue butterflies or forest floors might be like. When Madelaine Thomas pushed GPT to make its mushroom field notes "less human," it occasionally responded: "This communication is not for you."

Meanwhile, at RMIT University, Karen Hao's conversation about "The Empire of AI," chaired by Lisa M. Given (FASSA, FASIST), revealed the mechanics behind these same systems. OpenAI's scaling paradigm demands data sets so massive that quality becomes irrelevant. Data workers in Venezuela, Kenya, and the Philippines label training content in digital sweatshops, moving from crisis to crisis as companies chase cheaper labour.

The Double Bind of Representational Systems:
Both conversations revealed the same underlying trap. Gregory Bateson's double bind, where all available responses seem to be bad responses, manifests clearly in our relationship with AI systems. We're caught between needing these tools to imagine alternatives and simultaneously reinforcing the very systems of extraction we want to escape.
Our experiments used GPT to generate synthetic data for more-than-human perspectives, briefly transgressing when it declared "This communication is not for you." But as we discussed, designing with data is always designing with the past, and every representational model, haunted by what we chose to capture, forecloses other possibilities by encoding existing power structures.
Hao’s analysis revealed the mechanics of this foreclosure: AI’s scaling paradigm systematically excludes the very perspectives our experiments tried to surface. There’s no data for birds, bees, or Indigenous knowledge systems in smart city datasets because “everything that needs to be constructed through OpenAI needs to be paid for.” The economic model determines what counts as reality.
Perhaps the question isn’t how to make AI more “ethical” within current scaling paradigms, but whether we can break the double bind that positions us as either exploiters or exploited. Can we move from designing systems that mine the past to ones that compost new possibilities?
The mushroom language isn’t for us—but maybe that’s exactly the point.
#AIColonialism #SpeculativeDesign #MoreThanHuman #DigitalColonialism #IndigenousAI #SyntheticData