Abstracted Intelligence: AI, Intellectual Labour, and Berkeley’s Legacy in Public Policy

This was meant to be a review of Revolutionary Mathematics by Justin Joque, but it became an essay on one of his points. A friend sent me a great review—so I’m off the hook. Joque’s book examines the radical potential of mathematics to reshape society, critiquing conventional practice and positioning math as a tool for social change. He explores its intersections with culture and activism, urging us to rethink its role beyond traditional frameworks. For me, it sparked deeper questions about thinking itself—how knowledge, data epistemology, and human insight are fundamentally threatened by our growing reliance on the technology of ghostly inference, where intellectual labour is not merely automated but restructured, displacing those who once performed it while subtly embedding the very biases and inequalities it claims to transcend.

Joque’s reference to George Berkeley (March 1685 – January 1753) piqued my curiosity, especially as Berkeley’s critique in The Analyst (1734) challenged the abstract nature of infinitesimals in calculus, a debate I had just revisited while reading Wittgenstein. Infinitesimals are, in a sense, like quarks or clouds: elusive and intangible. But unlike quarks, which we can at least observe through their effects, or clouds, which we can still see, infinitesimals remain purely abstract, with no direct manifestation. Berkeley argued that these unobservable entities lacked any connection to the empirical world, undermining their validity. His critique feels remarkably relevant today, especially with the rise of Artificial Intelligence (AI; see the note below). As machines increasingly make decisions based on data, the human dimension of intellectual labour risks being reduced to mere computational tasks. Just as Berkeley questioned mathematical abstractions, we must consider the implications of this abstraction for human intelligence in the AI era.

The rise of artificial intelligence (AI) has become one of the defining phenomena of the 21st century, promising to revolutionize intellectual and manual labour across sectors; however, this promise comes with an implicit threat: the displacement of human thought and expertise by computational models, transforming the nature of governance and intellectual work. The increasingly widespread belief in AI as an agent of efficiency and progress echoes earlier philosophical debates about the nature of knowledge, reality, and the human condition. From the critique of metaphysical abstraction in the Enlightenment to contemporary concerns about automation, the tension between human intellect and technological systems is palpable.

Artificial Intelligence in this essay refers to a broad family of technologies, spanning augmented intelligence, large language models (LLMs), and other related computational tools that enhance decision-making, learning, and data processing. These include machine learning, deep learning, and natural language processing systems that assist or augment human intelligence through computer algorithms.

This philosophical concern sits at the intersection of metaphysics and epistemology, where Bayesian probability offers a framework for assessing belief and knowledge. As machines take over decision-making, Bayesian inference can model how human understanding is increasingly reduced to probabilistic reasoning, driven by data rather than lived experience. Berkeley’s infinitesimals, too small to observe directly, mirror AI’s abstraction; Bayesian probability likewise depends on unseen or abstract factors. Just as Berkeley questioned mathematical abstractions, we must scrutinize the abstraction of human intelligence through AI systems and their probabilistic reasoning.
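To make that framing concrete, here is a minimal sketch of Bayesian updating, with purely hypothetical numbers; notice that nothing outside the three probabilities can influence the result:

```python
# A minimal sketch of Bayesian belief revision (all numbers hypothetical):
# a decision system updates its confidence in a hypothesis H as data D
# arrive, with no reference to the lived context behind the data.

def update(prior: float, likelihood: float, evidence: float) -> float:
    """Bayes' rule: P(H|D) = P(D|H) * P(H) / P(D)."""
    return likelihood * prior / evidence

prior = 0.30            # assumed initial belief in H
p_data_given_h = 0.80   # assumed P(D|H)
p_data = 0.50           # assumed marginal probability of the data, P(D)

posterior = update(prior, p_data_given_h, p_data)
print(f"posterior belief in H: {posterior:.2f}")  # 0.48
```

Whatever history, context, or experience is not already encoded in P(D|H) and P(D) simply cannot reach the posterior; the machinery is complete without it.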

AI systems, particularly in governance, often prioritize efficiency over nuance, leading to challenges in addressing complex social issues. For example, AI-based predictive policing models aim to reduce crime by analyzing past data to forecast criminal activity. However, these systems can perpetuate biases by over-policing certain communities or misinterpreting patterns. In Canada, this is evident in the overrepresentation of Indigenous communities in crime statistics, where AI-driven policies may misdiagnose the root causes, such as historical trauma or systemic discrimination, instead of addressing the socio-cultural context that fuels these disparities.
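A rough illustration of that feedback loop, with every parameter invented for the sketch: two districts have identical true offence rates, but the historical record is skewed, patrols follow recorded crime, and crime is recorded where patrols are present:

```python
# Hypothetical feedback-loop sketch: the initial skew in *recorded* crime
# never self-corrects, even though the true rates are identical.

true_rate = [10.0, 10.0]   # actual offences per period in each district
recorded = [12.0, 8.0]     # skewed historical record (district 0 over-policed)
patrol_budget = 10.0

for period in range(5):
    total = sum(recorded)
    # The model allocates patrols in proportion to past recorded crime.
    patrols = [patrol_budget * r / total for r in recorded]
    # Offences enter the record only where patrols are present to see them.
    observed = [t * 0.1 * p for t, p in zip(true_rate, patrols)]
    recorded = [r + o for r, o in zip(recorded, observed)]
    share = recorded[0] / sum(recorded)
    print(f"period {period}: district 0 holds {share:.0%} of the record")
```

The 60/40 split persists indefinitely even though the underlying rates are equal; the model's output ratifies its own input, which is what perpetuating bias means in practice.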

The implementation of AI in public service delivery also poses risks of oversimplification, especially when addressing the needs of vulnerable groups. For instance, in Canada, Indigenous communities have historically faced barriers in accessing health care, education, and social services. AI systems may identify general patterns of need based on demographic data, but they often fail to recognize specific local and cultural factors that are critical in understanding these needs. By relying solely on data-driven models, policymakers risk overlooking essential aspects of accessibility, such as language, geography, or traditional knowledge systems, which are integral to Indigenous communities’ well-being. This could lead to recommendations that do not effectively support their unique requirements.

Furthermore, while AI can process vast amounts of data, its inability to understand cultural nuances means that these models often miss the lived realities of marginalized groups. For example, the challenges faced by immigrants and refugees in Canada are deeply rooted in socio-cultural factors that are not always captured in statistical datasets. AI systems designed to assess eligibility for settlement programs or integration services may overlook the role of social capital, support networks, or personal resilience—factors crucial for successful integration into Canadian society. As a result, AI can produce one-size-fits-all solutions that neglect the complexity of individual experiences, further deepening inequality.

These examples underscore the limitations of AI in governance. While AI systems can process vast amounts of data, they lack the cultural sensitivity and emotional intelligence required to address the intricacies of human experience. Human oversight remains crucial to ensure that AI-driven decisions do not ignore the lived realities of marginalized communities, particularly Indigenous peoples and immigrants in Canada. The challenge is not just technical, but ethical—ensuring that AI serves all citizens equitably, taking into account diverse cultural and social contexts. It is essential that AI is integrated thoughtfully into governance, with a focus on inclusivity and the preservation of human agency.

Return for a moment to Berkeley. In The Analyst he argues that infinitesimal quantities, too small to be perceived, cannot validly be used in reasoning, since they detach mathematics from tangible reality. For Berkeley, mathematical concepts must be rooted in empirical experience to be meaningful, and infinitesimals fail this test: they are incapable of direct observation or sensory experience.
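The object of his attack can be shown in one line. Here is the fluxional reasoning of his day, sketched for the derivative of x²:

```latex
% The increment o is treated as nonzero when we divide by it, then as
% zero when we discard it -- what Berkeley derided in The Analyst as
% reasoning with "the ghosts of departed quantities."
\[
  \frac{(x+o)^{2} - x^{2}}{o} \;=\; \frac{2xo + o^{2}}{o} \;=\; 2x + o
  \;\longrightarrow\; 2x \qquad (\text{letting } o = 0).
\]
```

The increment o must be something when we divide and nothing when we conclude; it was this double standard, not the answer 2x, that Berkeley refused to accept.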

AI has begun to transform the landscape of intellectual labour, particularly in fields that heavily rely on data analysis. Where human analysts once crafted insights from raw data, AI systems now process and distill these findings at unprecedented speeds. However, the value of human expertise lies not only in the speed of calculation but in the depth of context that accompanies interpretation. While AI systems can detect patterns and correlations within data, they struggle to navigate the complexities of the lived experience—factors like historical context, cultural implications, or social nuances that often turn a dataset into meaningful knowledge.

Data analytics, now increasingly dependent on algorithmic models, also underscores this divide. Machine learning can spot trends and produce statistical conclusions, yet these models often fail to question underlying assumptions or identify gaps in the data. For instance, predictive analytics might flag trends in employment patterns, but it is the human analyst who can explore why certain trends occur, questioning what the numbers don’t tell us. AI is exceptional at delivering quick, accurate results, but without the reflective layer of human interpretation, it risks presenting a skewed or incomplete picture—particularly in the realm of social data, where lived experiences are often invisible to the machine.
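As a toy illustration of that blind spot (numbers invented for the example): a trend computed from payroll records alone can look healthy, while work that never enters those records stays invisible to the model:

```python
# Hypothetical example: a "trend" computed from administrative data
# that simply omits part of the population it claims to describe.

years = [2019, 2020, 2021, 2022]
payroll_jobs = [100, 97, 101, 104]  # thousands, payroll records only

# A naive trend line sees a steady recovery...
slope = (payroll_jobs[-1] - payroll_jobs[0]) / (years[-1] - years[0])
print(f"trend: {slope:+.1f}k jobs per year")  # +1.3k jobs per year

# ...but gig and informal work never appears in payroll records at all.
# The model cannot question a gap it cannot see; that remains the
# analyst's job.
```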

As AI continues to infiltrate sectors like healthcare, immigration, criminal justice, and labour economics, it is increasingly tasked with decisions that once relied on human intellectual labour. However, these systems, built on historical data, often fail to account for the subtle shifts in context that data analysis demands. Machine learning systems may flag patterns of healthcare access based on prior records, but they might miss changes in societal attitudes, emerging public health challenges, or new patterns of inequality. These are the kinds of factors that require a human touch, bridging the gap between raw data and its true significance in real-world terms.

This shift is also reshaping the role of data analysts themselves. Once, data analysts were the interpreters, the voices that gave meaning to numbers. Today, many of these roles are becoming increasingly automated, leaving the human element more on the periphery. As AI systems dominate the decision-making process, intellectual labour becomes more about overseeing these systems than about active analysis. The danger here is the erasure of critical thinking and judgment, qualities that have historically been central to intellectual work. While AI excels at scaling decision-making processes, it lacks the ability to adapt its reasoning to new, unforeseen situations without human guidance.

As AI continues to evolve, its influence on governance and intellectual work deepens. The history of data-driven decision-making is marked by human interpretation, and any move toward a purely algorithmic approach challenges the very foundation of intellectual labour. The increasing reliance on AI-driven processes not only risks simplifying complex social issues but also leads to the marginalization of the nuanced understanding that human intellectual labour brings. This tension between machine efficiency and human insight is not merely a technological concern but a philosophical one—a challenge to the nature of work itself and the role of the intellectual in an age of automation.

This shift invites a reconsideration of the historical context in which intellectual labour has developed, a theme that is crucial in understanding the full implications of AI’s rise. The historical evolution of data analysis, governance, and intellectual work has always involved a negotiation between human cognition and technological advancement. As we look toward the future, we must ask: in an age increasingly dominated by machines, how will we ensure that human experience and judgment remain central in shaping the decisions that affect our societies? This question points toward an urgent need to ground AI in a historical context that recognizes its limitations while acknowledging its potential.

As AI becomes more central in shaping political and social policies, particularly regarding immigration, there are concerns about its ability to reflect the complex realities of diverse communities. The reliance on AI can lead to oversimplified assumptions about the needs and circumstances of immigrants, especially when addressing their integration into Canadian society. AI systems that analyze immigration data could misinterpret or fail to account for factors such as socio-economic status, cultural differences, or regional disparities, all of which are critical to creating inclusive policies.

This evolving landscape signals a deeper erosion of the social contract between Canadians and their governments. In immigration, for example, particularly in light of the 2023–2026 Data Strategy and the findings of CIMM – Responses to the OAG’s Report on Permanent Residents, ensuring human oversight becomes increasingly crucial. Without it, there is a risk of diminishing the personal, human elements that have historically been central to governance. The shift towards automated decision-making could alienate citizens and weaken trust in political institutions, as it overlooks the nuanced needs of individuals who are part of the democratic fabric.

AI’s increasing role in governance marks a shift toward the disembodiment of knowledge, where decisions are made by abstract systems detached from the lived experiences of citizens. As AI systems analyze vast amounts of data, they reduce complex human situations to numerical patterns or algorithmic outputs, effectively stripping away the context and nuance that are crucial for understanding individual and societal needs. In this framework, governance becomes a process of automating decisions based on predictive models, losing the human touch that has historically provided moral, ethical, and social considerations in policy formulation.

The consequences of this abstraction in governance are far-reaching. AI systems prioritize efficiency and scalability over qualitative, often subjective, factors that are integral to human decision-making. For example, immigration decisions influenced by AI tools may overlook the socio-political dynamics or personal histories that shape individuals’ lives. When policy decisions become driven by data points alone, the systems designed to serve citizens may end up alienating them, as the systems lack the empathy and contextual understanding needed to address the full complexity of human existence. This hollowing out of governance shifts power away from human oversight, eroding the ability of democratic institutions to remain responsive and accountable to the people they serve.

The COVID-19 pandemic served as a catalyst for the rapid integration of AI in governance and society. As governments and businesses shifted to remote work models, AI tools were leveraged to maintain productivity and ensure public health safety. Technologies like contact tracing, automated customer service bots, and AI-driven health analytics became critical in managing the crisis. This acceleration not only enhanced the role of AI in public sector decision-making but also pushed the boundaries of its application, embedding it deeper into the governance framework.

The pandemic also saw the domestication of AI through consumer devices, which became central to everyday life. With lockdowns and social distancing measures in place, reliance on digital tools grew, and AI-powered applications—like virtual assistants, fitness trackers, and personalized recommendation systems—found a more prominent place in households. These devices, which had once been seen as niche, became essential tools for managing work, health, and social connections. The widespread use of AI in homes highlighted the shift in governance, where decision-making and the management of societal norms increasingly came under the control of automated systems, marking a techno-political shift in how people interact with technology.

In revisiting Berkeley’s critique of infinitesimals, we find philosophical parallels with the rise of AI. Berkeley questioned the very foundation of knowledge, suggesting that our perceptions of the material world were based on subjective experience, not objective truths. Similarly, AI operates in a realm where data is processed and interpreted through systems that may lack subjective human experience. AI doesn’t “understand” the data in the same way humans do, yet it shapes decision-making processes that affect real-world outcomes, creating an abstraction that can be detached from human experience.

This disconnection between machine and human experience leads to the dehumanization of knowledge. AI systems operate on algorithms that prioritize efficiency and optimization, but in doing so, they strip away the nuanced, context-driven understanding that humans bring to complex issues. Knowledge, in this sense, becomes something disembodied, divorced from the lived experiences and emotions that give it meaning. As AI continues to play a central role in governance, the process of knowledge becomes more mechanized and impersonal, further eroding the human dimension of understanding and ethical decision-making. The philosophical concerns raised by Berkeley are mirrored in the ways AI reshapes how we conceptualize and act on knowledge in a tech-driven world.

The rapid integration of AI into intellectual labour and governance presents a profound shift in how decisions are made and knowledge is structured. While AI offers the promise of efficiency and precision, its growing role raises critical concerns about the erosion of human agency and the humanistic dimensions of governance. As AI systems replace human judgment with algorithmic processes, the risk arises that complex social, political, and ethical issues may be oversimplified or misunderstood. The hollowing out of governance, where decision-making is increasingly abstracted from lived experiences, mirrors the philosophical critiques of abstraction seen in Berkeley’s work. The human element, rooted in experience, judgment, and empathy, remains crucial in the application of knowledge. Without mindful oversight, the adoption of AI in governance could result in a future where technology governs us, rather than serving us. To navigate these challenges, preserving human agency and ensuring that AI tools are used as aids rather than replacements is essential to maintaining a just and ethical society.

Berkeley’s philosophy of “immaterial ghosts”, where the immaterial influences the material world, aligns with Gerhard Richter’s cloud paintings at Ottawa’s National Gallery of Canada, which evoke a similar sense of intangible presence. Both focus on the unseen: Berkeley’s spirits are ideas that influence our perceptions, while Richter’s clouds, as abstract forms, suggest the unknowable and elusive. In this way, Berkeley’s invisible world and Richter’s cloudscapes both invite us to confront the limits of human understanding, where the unseen shapes the visible.

Nancy Baker Cahill’s CORPUS

Augmented Reality (AR) reshapes art by blending the physical and digital, turning viewers into co-creators through interactive, fluid experiences. Unlike the timeless works at the Prado or MOCA, which remain fixed in tradition, AR art evolves with the viewer’s actions and expands storytelling beyond galleries. This mirrors Don Quixote’s quest, where the boundary between reality and imagination blurs, inviting the viewer/reader into a co-constructed narrative. From a data literacy perspective, AR encourages intuitive engagement with data, highlighting its layered, real-time interaction. The contrast between the National Gallery or the Prado’s permanence and AR’s dynamic possibilities underscores how art can bridge tradition and technology, much as Quixote challenges the conventions of his time.

A Digital Sculpture of Entangled Futures

CORPUS, the towering Augmented Reality (AR) figure anchored on the Hammer Museum’s sculpture terrace, invites us to consider a future of blended, embodied entanglements between human, machine, flora, and microbiome. This virtual sculpture is not just an exploration of technology’s role in art, but a bold proposition that challenges the very fabric of how we perceive reality, identity, and our place within the web of life. Through its glowing, dynamic form, CORPUS offers a visual and conceptual disruption of the boundaries that separate the organic from the technological, the physical from the digital, and the human from the non-human. I haven’t visited this exhibition, only seen the videos online showing the augmented form.

Posthumanism and Entanglement: Blurring the Boundaries

At the heart of CORPUS is a theme of posthumanism, which imagines a future where distinctions between humanity and its technological and natural counterparts dissolve. The sculpture’s hybrid form—where human anatomy is fused with the organic patterns of plants and microorganisms—resonates with the speculative visions of thinkers like Rosi Braidotti and Donna Haraway. These theorists have argued that the boundaries separating the human from the non-human, the living from the non-living, are increasingly porous in our digital and ecological age.

CORPUS mirrors this vision. Its body—a glowing, ever-shifting mass—is a site of constant transformation, where the organic and technological are not in opposition but in symbiotic entanglement. It is not merely a human figure distorted by technology, but a representation of a future where these elements are indistinguishable from one another. This idea of entanglement is also a critique of human exceptionalism, echoing Haraway’s Cyborg Manifesto, which challenges us to move beyond the anthropocentric view and recognize that humans are intertwined with machines, animals, and the earth. The microbiome and flora elements embedded in the figure further highlight the importance of non-human life forms, which often remain invisible in traditional representations of the human body.

The work speaks to a future where humans are no longer the central agents of their own story. In this context, CORPUS could be seen as an embodiment of the posthuman—not in the sense of the end of humanity, but in the reimagining of human identity as something that is no longer defined in isolation but as part of a broader, interconnected system.

In this reimagined future, CORPUS envisions a world where the boundaries between the human, the technological, and the natural dissolve—similar to the themes explored in iconic movies such as Ghost in the Shell and The Matrix. Just as Major Motoko Kusanagi in Ghost in the Shell grapples with the merging of her human consciousness and cybernetic body, CORPUS embodies the notion that human identity is no longer isolated but exists as part of a complex, interconnected system. Similarly, in The Matrix, the characters are forced to confront the artificial nature of their reality, where the distinction between the human mind and the digital world is increasingly blurred. Both narratives question human exceptionalism and highlight the fluidity of identity in a world shaped by technology. In the context of CORPUS, this fusion of organic and technological elements invites viewers to rethink humanity’s place, urging a shift away from the idea of isolated agency toward an understanding of human existence as intricately tied to the larger ecological and technological systems around us. Just as in Ghost in the Shell and The Matrix, CORPUS forces us to confront a posthuman future where identity is fluid, collective, and symbiotic.

The Digital and the Physical: The Dissonance of Augmented Reality

The AR nature of CORPUS elevates the conversation from one of purely thematic resonance to a dynamic, immersive experience that directly engages the viewer. Situated on the Hammer’s terrace on Wilshire Boulevard, the sculpture is accessible only through a smartphone, creating a layered reality—a fourth wall (conveniently the name of the app used) that separates the viewer from the object of perception. This mediating technology forces us to reconsider how technology alters our relationship with space and materiality, echoing the profound dissonance between reality and illusion explored in Don Quixote. Like Cervantes’ Don Quixote, whose delusions of grandeur blur the line between fantasy and reality, CORPUS creates a digital world where the physical and the virtual collide, making us question the boundaries between what is real and what is simulated.

This tension between the tangible and the digital resonates with Jean Baudrillard’s theories on hyperreality. Baudrillard argues that in a world dominated by digital technologies, simulations can become more “real” than the realities they represent, blurring the line between the authentic and the fabricated. Similarly, by requiring viewers to experience the sculpture through the digital lens of their smartphones, CORPUS critiques how technology mediates and reshapes our interaction with the world. Its immaterial nature—existing only in virtual space—challenges the permanence and physicality traditionally associated with sculpture, urging us to reconsider what art can become in a digital age where the boundaries of reality are increasingly unstable.

This interplay between the digital and the physical suggests a broader shift in art toward the ephemeral and virtual. Like Don Quixote’s quest, which is both shaped by the tangible world and transformed by his fantastical perceptions, CORPUS reveals how digital spaces increasingly dominate artistic creation and our engagement with reality. It also echoes David Cronenberg’s Videodrome, where the merging of flesh and media interrogates the transformative—and often unsettling—power of technology over perception and identity. The sculpture’s intangibility evokes the digital sublime, an immersive experience that bridges the virtual and physical, making us question not only the nature of reality but our evolving place within it.

Marshall McLuhan’s idea that “the medium is the message” illuminates CORPUS’s reliance on AR. The smartphone, essential to experiencing the sculpture, shapes both its perception and meaning, turning the act of mediation into the message itself. CORPUS exists as a networked experience, where the viewer’s engagement is both empowered and constrained by technology. Like the television in Videodrome (the story can be seen as an homage to McLuhan), the smartphone extends the self, transforming perception into an integral part of the artwork and highlighting McLuhan’s belief that our tools actively redefine human experience.

Ecological Themes and the Reclamation of the Microbial World

Another compelling layer of CORPUS lies in its engagement with ecological themes, particularly through the integration of microbiomes and plant life. Unlike works where organic elements serve as aesthetic embellishments, here they are foundational, suggesting a vision where human and non-human entities coexist as equals. This echoes the growing field of eco-futurist art, aligning with thinkers like Sophie Yeo, who explores how climate art grapples with humanity’s role in ecological crises. CORPUS takes this further, embedding these invisible forms of life as intrinsic to its being, emphasizing the interdependence that sustains life on Earth.

By foregrounding the Anthropocene—the era defined by humanity’s reshaping of the planet—the sculpture critiques the hubris of technological dominance, instead advocating for a symbiosis between the organic and the synthetic. The blurring of these boundaries reflects an urgent need to rethink our ecological future, not as one of domination but of integration. In this way, CORPUS positions itself as both an artistic speculation and a call to action, challenging us to imagine coexistence as a path to survival.

The Viewer’s Role: Participating in the Entanglement

The experience of CORPUS is inherently (necessarily?) participatory, requiring the viewer to activate the sculpture through their smartphone—a device as central to contemporary life as it is to the artwork’s existence. In this way, the viewer is not merely an observer but an essential participant in the entangled system that CORPUS represents. The sculpture’s form and presence depend on this technological mediation, collapsing the boundary between object and observer. This interaction recalls the work of artists like Rafael Lozano-Hemmer, whose installations integrate the audience into the artwork itself, transforming passive observation into active engagement.

By relying on the viewer’s participation, CORPUS blurs distinctions between art, audience, and technology. The sculpture exists only through interaction, creating a shared space where the digital and physical intertwine. In doing so, it moves beyond traditional notions of art as static and complete, embracing a co-constructed reality shaped by networks of interconnected forces. Much like the flux of technological, ecological, and human systems, CORPUS offers an evolving expression of how meaning emerges collaboratively, reshaping the viewer’s role into that of an active participant in the art’s becoming.

In this way, CORPUS becomes not just a visual encounter but a metaphor for the larger ecological and technological networks that define contemporary existence. Much like Don Quixote, whose perceptions of the world are shaped by his idealized visions of chivalry and adventure, CORPUS challenges our perceptions of reality—an experience that cannot be understood without the mediation of technology. Just as Don Quixote’s quest is inextricable from his delusions, our contemporary lives cannot be disentangled from the digital and natural systems that shape them. The viewer, by engaging with the artwork, becomes an active participant in this entangled future, implicating themselves in the very realities CORPUS seeks to depict, much as Quixote’s journey invites us to explore the blurred boundaries between fantasy and reality.

CORPUS is a radical meditation on the future of human existence, framed within the interwoven relationships between nature, technology, and identity. It interrogates boundaries—both literal and metaphorical—urging us to reconsider our role in a world that increasingly resists easy categorization. Much like the intertwining of the real and the imagined in Don Quixote’s adventures, CORPUS blurs the lines between the organic and the synthetic, the human and the non-human. Through its glowing, mutable form and its invocation of ecological and post-humanist themes, the sculpture invites us to ask not what we can separate, but what we can bring together. This vision mirrors the shifting ideals of a world where boundaries dissolve, and like Quixote’s quest for a higher truth, it challenges us to reimagine what it means to be human in an increasingly complex, interconnected universe.