There was a time when images were evidence. A medieval map was not just a representation but a claim to knowledge, an argument about how the world was structured. A Renaissance painting revealed divine order; a photograph proved that something was. Today, we face a new kind of image—one that does not record but generates, one whose authority does not come from witnessing reality but from statistical inference. AI-generated imagery does not document the world; it thinks the world.
For centuries, knowledge was structured around categories. Aristotle, Linnaeus, and later the Encyclopédistes built systems to organize the world, classifying nature, history, and human thought into legible hierarchies. Even with the rise of empirical science, knowledge remained something accumulated, structured, and verified through observation.
Midjourney image #prompt = [coffee in St. Peter’s Square --ar 1:1]
The algorithm, however, does not organize knowledge in this way. It does not categorize the world from above but learns patterns from within. Unlike an 18th-century taxonomist, an AI system does not define a tiger by its stripes or its feline characteristics—it simply processes vast quantities of data, detecting statistical correlations that allow it to recognize a tiger without ever defining it.
This is a profound shift. Knowledge, once built through observation and classification, is now generated by inference. The AI-generated image follows this logic. It does not capture a moment, as a photograph once did, nor does it interpret a subject, as a painting might. Instead, it predicts what an image should look like, based on probabilities. The result is something fundamentally different from representation: an image that emerges from a machine’s internal logic rather than from reality itself.
For centuries, images were linked to material constraints: pigments on a canvas, light on film, a chemical process that left behind a physical trace. Even digital images, while infinitely replicable, still maintained a relationship to a source—a photograph taken, a frame captured. AI-generated imagery untethers itself from this history. It is not a copy but an invention, synthesized from a dataset of other images, none of which serve as the original.
This is not just a technological change; it is an epistemological one. If we once sought truth in the documentary image, where do we look now? If an AI can generate a face that has never existed, what happens to our belief in the evidentiary power of the portrait? And if an algorithm can create art indistinguishable from human creativity, what happens to the very idea of authorship?
We might think of AI as a historian of its own kind—one that does not preserve the past but extracts patterns from it. The great archives of human culture—museums, libraries, film reels—once functioned as repositories of collective memory. AI, trained on these vast datasets, does not remember but predicts. It does not curate the past; it recombines it.
The implications of this shift extend beyond aesthetics. In medicine, AI does not diagnose based on fixed categories but on pattern recognition, seeing correlations that escape human detection. In law, AI systems sift through precedent not to enforce continuity but to optimize decisions. Across disciplines, knowledge is becoming less about interpretation and more about computation.
Yet there is something unsettling in this. AI-generated imagery reminds us that knowledge, long thought to be something we built, structured, and controlled, may now be something we train—a vast statistical model that does not explain but predicts, does not reason but generates.
Midjourney image #prompt = [grid of a single leaf --ar 1:1]
If the image was once a window onto the world, AI has made it a hall of mirrors, endlessly reflecting a logic we do not fully understand. The question is no longer whether these images are real, but rather: whose reality do they belong to?
The paradox of concrete as both the symbol of modernity and its antithesis—destruction—has been beautifully and vividly re-examined in the 2024 documentary by Viktor Kossakovsky, Architecton. The film opens with the ravaged remains of concrete structures in Ukraine, setting the stage for an exploration not only of architecture’s relationship to materiality but of its role in the broader narrative of progress and decay. Through this lens, Kazimir Malevich’s geometrically pure forms gain new resonance, shifting from abstract utopian ideals to poignant metaphors for the tension between stability and fragility inherent in all human endeavours.
The ruins of Baalbek stand stark against the infrared sky, their massive columns diminished yet unwavering. Their presence in the landscape is both imposing and ghostly, a relic of human ambition that now exists in a state of suspension, neither fully intact nor wholly lost. In the documentary’s meditation on concrete and stone, Baalbek serves as a distant counterpoint—where modern concrete is cast and shaped to fit the needs of the present, these ancient stones, quarried and placed millennia ago, endure as both triumph and ruin, a reminder that all architecture, no matter how permanent it seems, is ultimately subject to time.
One striking quotation from the documentary reads: “After water, concrete is the most widely used substance on Earth.” This simple statement highlights concrete’s ubiquity and significance in shaping the modern world. Water—life’s most fundamental element—has long been the basis of human survival and connection with nature, while concrete, as the second most used material, represents humankind’s drive to dominate and define its surroundings. Yet, despite its ubiquity, concrete’s eventual decay exposes a different truth: the same forces that humans attempt to master—through architecture, engineering, and design—are ultimately beyond control. Concrete, while seemingly permanent, is just as vulnerable as the stone it mimics, subject to the ravages of time, war, and nature.
In one particularly striking image, a solitary man with a wheelbarrow is dwarfed by a massive block of stone, carved millennia ago and abandoned. This visual echoes the evocative imagery of Michelangelo’s Prisoner statues, housed in Florence’s Accademia Gallery. These figures, half-formed, trapped in their stone prison, seem to struggle towards liberation, embodying both the act of creation and the stasis of unfulfilled potential. The abandoned stone, much like these unfinished figures, occupies a space between being and non-being, between intention and entropy. The stone seems to call out for a form that has not yet been realized, just as the massive concrete structures in the documentary gesture toward what could have been—monuments of progress now succumbing to time and violence. In this way, both the material and its artistic potential exist in a state of suspended animation, caught between the historical force of their creation and the inevitable dissolution of all things.
Integral to this exploration is the use of infrared imagery, a technological choice that disrupts our traditional understanding of built structures. Infrared, often used to reveal hidden heat signatures, transforms concrete buildings into spectral forms. What was once solid, monumental, and permanent is reduced to an ethereal presence, a visual manifestation of the invisible energies and decay beneath the surface. It’s as though the material itself is attempting to communicate its vulnerability—an image of architecture that exposes itself not as a static entity, but as a system of energies, histories, and eventual dissolution.
A crucial scene in the film—an extended sequence of a massive rockslide—underscores the inherent power of stone, nature’s counterpoint to human architecture. As colossal boulders cascade down the mountainside, the camera lingers on the massive, unyielding force of the stone. This raw, natural destruction stands in stark contrast to the calculated, human-made beauty of classical architecture. The imagery here is a reminder that stone, while emblematic of permanence, is also vulnerable to the overwhelming forces of nature. This stark juxtaposition of classical ruins, once thought to be eternal, returning to the earth, punctuates the fragility of human ambition and the fleeting nature of monumental achievement.
The pairing of concrete and rock, two materials that symbolize permanence, with such violence and collapse speaks to their liminal nature. Both substances, when used for habitation or as symbols, straddle the boundary between human-made constructs and the natural world. They strain the traditional distinctions between subject and object, man and nature—two concepts that architecture has long worked to contain and define. Concrete, as both a building material and a symbol of modernity, offers the illusion of control over nature. Yet, it is precisely this illusion that makes it so susceptible to forces beyond our grasp. Rock, though an ancient and seemingly immutable material, can also become a harbinger of destruction when untethered from human will. These materials blur the boundaries of the architectural discourse, pointing to an inherent instability between humanity’s ambitions and the larger natural forces at play.
Concrete, though seemingly durable, is as much a material of transience as it is of permanence. The structures it creates can endure for centuries, but the very process of their construction—through human labour, environmental forces, and the inevitable decay—ensures their eventual dissolution. In this, concrete is emblematic of the human condition: the striving for permanence caught in the endless flux of change and decay.
Through this lens, the interaction between concrete and rock becomes a reflection of the tension between human intention and natural forces. These materials are not mere objects to be shaped or controlled but are agents in their own right, influencing the spaces they inhabit. When viewed through infrared, they reveal themselves not as passive backdrops but as active participants in the construction of meaning. Concrete’s malleability and rock’s permanence, when combined, create a tension that straddles the boundary between subject and object, a dialectic that architecture itself has long sought to transcend. If technologies shape our understanding of reality, then the use of infrared here forces us to confront the complex interplay between human creation and the natural world.
Malevich’s Architecton, in this context, becomes more than a study of abstract form. It serves as a blueprint for reconsidering the purpose and meaning of architecture in a time when the very materials that define our spaces are constantly in flux. If the built environment is constantly being reshaped by forces both seen and unseen, then architecture is not a static monument but an ongoing negotiation between humanity and the materials that constitute it. And in the suspended forms of stone and concrete, we find a reminder that art, too, lies at the intersection of creation and destruction—a space where form is constantly being struggled into existence, only to eventually fade back into the material world.
The argument that multimodal AI collapses the text-image distinction is, at first glance, compelling. However, this claim relies on an implicit assumption that such a distinction was ever stable or clearly demarcated. A closer examination reveals that AI’s generative processes do not so much “collapse” the distinction as they do mechanize an already-existing instability—one that has long been explored through avant-garde literary and artistic practices, particularly in asemic writing.
Throughout the 20th century, artists and writers repeatedly disrupted the supposed boundary between text and image. Dadaist collage, Surrealist automatic writing, and concrete poetry all foregrounded the materiality of language, demonstrating that text could function visually as much as linguistically. In Lettrism, pioneered by Isidore Isou in the 1940s, letters were untethered from conventional phonetic or semantic meaning, transformed into visual compositions. Henri Michaux’s asemic ink drawings similarly dissolved the distinction between writing and mark-making, demonstrating that the act of inscription need not resolve into legibility. These historical precedents complicate the article’s central claim: rather than producing an unprecedented collapse, AI merely accelerates and mechanizes a longstanding artistic impulse to question the division between reading and seeing.
Asemic writing resists the tyranny of meaning, inviting the reader into an interpretative space where language dissolves into pure form.
If asemic writing operates through intentional illegibility, inviting interpretation while resisting definitive meaning, AI-generated text-image hybrids do not resist meaning so much as they produce an excess of it. The logic of machine learning generates outputs that are overdetermined by probabilistic associations rather than by authorial intent. Cy Twombly’s gestural inscriptions, for instance, suggest meaning without fully disclosing it; their power lies in their resistance to linguistic capture. By contrast, AI-generated multimodal outputs do not refuse meaning but generate an abundance of semiotic possibilities, saturating the interpretative field. The article does not fully account for this distinction, treating AI’s multimodal capabilities as a collapse rather than an overproduction, a shift from resistant ambiguity to computational fluency.
What is most fundamentally altered by AI is not the existence of an intermediary space between text and image but the industrialization of indeterminacy itself. Asemic writing historically resists institutional legibility, positioning itself against systems of meaning-making that demand clear semiotic functions. AI, however, converts indeterminacy into a computational process, endlessly producing outputs that are neither fully readable nor wholly visual but are nevertheless monetized and instrumentalized. Where the illegibility of Chinese wild cursive calligraphy or Hanne Darboven’s sprawling numerical texts was once a site of aesthetic resistance, AI-driven multimodality turns this ambiguity into a product, systematizing what was once an act of refusal.
By severing the link between signifier and signified, asemic writing exposes the visual unconscious of text, revealing writing as an act of mark-making rather than communication.
Rather than signaling the collapse of the text-image distinction, AI-driven multimodality reveals how this boundary has always been porous. The article’s central argument overlooks the long history of artistic and literary practices that have anticipated and complicated the very phenomenon it describes. A more nuanced approach would recognize that AI does not dissolve the distinction between text and image so much as it absorbs their instability into a system that operationalizes ambiguity at scale, transforming what was once a site of aesthetic and conceptual resistance into an automated process of production.
This was meant to be a review of Revolutionary Mathematics by Justin Joque, but it became an essay on one of his points. A friend sent me a great review—so I’m off the hook. Joque’s book examines the radical potential of mathematics to reshape society, critiquing conventional practice and positioning math as a tool for social change. He explores its intersections with culture and activism, urging us to rethink its role beyond traditional frameworks. For me, it sparked deeper questions about thinking itself—how knowledge, data epistemology, and human insight are fundamentally threatened by our growing reliance on the technology of ghostly inference, where intellectual labour is not merely automated but restructured, displacing those who once performed it while subtly embedding the very biases and inequalities it claims to transcend.
Joque’s reference to George Berkeley (March 1685 – January 1753) in his book piqued my curiosity, especially as Berkeley’s critique in The Analyst (1734) challenged the abstract nature of infinitesimals in calculus, an idea I had just re-read in Wittgenstein. Infinitesimals are, essentially, like quarks or clouds—elusive and intangible—but unlike quarks, which we can at least observe through their effects, or clouds, which we can still see, infinitesimals remain purely abstract, with no direct manifestation. Berkeley argued that these unobservable entities lacked connection to the empirical world, undermining their validity. This critique feels remarkably relevant today, especially with the rise of Artificial Intelligence (AI: see note below). As machines increasingly make decisions based on data, the human dimension of intellectual labour risks being diminished to mere computational tasks. Just as Berkeley questioned mathematical abstractions, we must consider the implications of this abstraction on human intelligence in the AI era.
The rise of artificial intelligence (AI) has become one of the defining phenomena of the 21st century, promising to revolutionize intellectual and manual labour across sectors; however, this promise comes with an implicit threat: the displacement of human thought and expertise by computational models, transforming the nature of governance and intellectual work. The increasingly widespread belief in AI as an agent of efficiency and progress echoes earlier philosophical debates about the nature of knowledge, reality, and the human condition. From the critique of metaphysical abstraction in the Enlightenment to contemporary concerns about automation, the tension between human intellect and technological systems is palpable.
Artificial Intelligence in this essay refers to a broad range of technologies, including artificial intelligence (AI) proper, augmented intelligence, large language models (LLMs), and other related computational tools that enhance decision-making, learning, and data processing capabilities. These technologies encompass machine learning, deep learning, and natural language processing systems that assist or augment human intelligence using computer algorithms.
This philosophical concern is rooted in the intersection of metaphysics and epistemology, where Bayesian probability can offer a framework for assessing belief and knowledge. As machines take over decision-making, Bayesian inference could be used to model how human understanding is increasingly reduced to probabilistic reasoning, driven by data rather than lived experience. The concept of “infinitesimals” in Berkeley’s work, too small to observe directly, mirrors AI’s abstraction, with Bayesian probability similarly depending on unseen or abstract factors. Just as Berkeley questioned mathematical abstractions, we must scrutinize the abstraction of human intelligence through AI systems and their probabilistic reasoning.
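To make the mechanics concrete, here is a minimal sketch of a single Bayesian update in Python. The scenario and the numbers are entirely hypothetical, chosen only to show how, in this framework, belief becomes a ratio of probabilities revised by data rather than by lived experience.

```python
# A minimal sketch of Bayesian updating; all numbers are hypothetical,
# chosen only to illustrate the mechanics of revising belief with data.

def bayes_update(prior: float, likelihood: float, false_alarm: float) -> float:
    """Posterior P(H|E) from prior P(H), likelihood P(E|H), and P(E|~H)."""
    evidence = likelihood * prior + false_alarm * (1.0 - prior)
    return likelihood * prior / evidence

# A system that starts 1% confident in a hypothesis and sees evidence that is
# 90% likely if the hypothesis is true but 5% likely even if it is false:
posterior = bayes_update(prior=0.01, likelihood=0.90, false_alarm=0.05)
print(f"P(hypothesis | evidence) = {posterior:.3f}")  # ~0.154
```

Note how little of the world survives the reduction: everything the system "knows" is carried by three numbers, which is precisely the abstraction at issue.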
AI systems, particularly in governance, often prioritize efficiency over nuance, leading to challenges in addressing complex social issues. For example, AI-based predictive policing models aim to reduce crime by analyzing past data to forecast criminal activity. However, these systems can perpetuate biases by over-policing certain communities or misinterpreting patterns. In Canada, this is evident in the overrepresentation of Indigenous communities in crime statistics, where AI-driven policies may misdiagnose the root causes, such as historical trauma or systemic discrimination, instead of addressing the socio-cultural context that fuels these disparities.
The implementation of AI in public service delivery also poses risks of oversimplification, especially when addressing the needs of vulnerable groups. For instance, in Canada, Indigenous communities have historically faced barriers in accessing health care, education, and social services. AI systems may identify general patterns of need based on demographic data, but they often fail to recognize specific local and cultural factors that are critical in understanding these needs. By relying solely on data-driven models, policymakers risk overlooking essential aspects of accessibility, such as language, geography, or traditional knowledge systems, which are integral to Indigenous communities’ well-being. This could lead to recommendations that do not effectively support their unique requirements.
Furthermore, while AI can process vast amounts of data, its inability to understand cultural nuances means that these models often miss the lived realities of marginalized groups. For example, the challenges faced by immigrants and refugees in Canada are deeply rooted in socio-cultural factors that are not always captured in statistical datasets. AI systems designed to assess eligibility for settlement programs or integration services may overlook the role of social capital, support networks, or personal resilience—factors crucial for successful integration into Canadian society. As a result, AI can produce one-size-fits-all solutions that neglect the complexity of individual experiences, further deepening inequality.
These examples underscore the limitations of AI in governance. While AI systems can process vast amounts of data, they lack the cultural sensitivity and emotional intelligence required to address the intricacies of human experience. Human oversight remains crucial to ensure that AI-driven decisions do not ignore the lived realities of marginalized communities, particularly Indigenous peoples and immigrants in Canada. The challenge is not just technical, but ethical—ensuring that AI serves all citizens equitably, taking into account diverse cultural and social contexts. It is essential that AI is integrated thoughtfully into governance, with a focus on inclusivity and the preservation of human agency.
Berkeley argues that these "infinitesimal" quantities, which are too small to be perceived, cannot be validly used in reasoning, as they detach mathematics from tangible reality. For Berkeley, mathematical concepts must be rooted in empirical experience to be meaningful, and infinitesimals fail this test by being incapable of direct observation or sensory experience.
AI has begun to transform the landscape of intellectual labour, particularly in fields that heavily rely on data analysis. Where human analysts once crafted insights from raw data, AI systems now process and distill these findings at unprecedented speeds. However, the value of human expertise lies not only in the speed of calculation but in the depth of context that accompanies interpretation. While AI systems can detect patterns and correlations within data, they struggle to navigate the complexities of the lived experience—factors like historical context, cultural implications, or social nuances that often turn a dataset into meaningful knowledge.
Data analytics, now increasingly dependent on algorithmic models, also underscores this divide. Machine learning can spot trends and produce statistical conclusions, yet these models often fail to question underlying assumptions or identify gaps in the data. For instance, predictive analytics might flag trends in employment patterns, but it is the human analyst who can explore why certain trends occur, questioning what the numbers don’t tell us. AI is exceptional at delivering quick, accurate results, but without the reflective layer of human interpretation, it risks presenting a skewed or incomplete picture—particularly in the realm of social data, where lived experiences are often invisible to the machine.
As AI continues to infiltrate sectors like healthcare, immigration, criminal justice, and labour economics, it is increasingly tasked with decisions that once relied on human intellectual labour. However, these systems, built on historical data, often fail to account for the subtle shifts in context that data analysis demands. Machine learning systems may flag patterns of healthcare access based on prior records, but they might miss changes in societal attitudes, emerging public health challenges, or new patterns of inequality. These are the kinds of factors that require a human touch, bridging the gap between raw data and its true significance in real-world terms.
This shift is also reshaping the role of data analysts themselves. Once, data analysts were the interpreters, the voices that gave meaning to numbers. Today, many of these roles are becoming increasingly automated, leaving the human element more on the periphery. As AI systems dominate the decision-making process, intellectual labour becomes more about overseeing these systems than about active analysis. The danger here is the erasure of critical thinking and judgment, qualities that have historically been central to intellectual work. While AI excels at scaling decision-making processes, it lacks the ability to adapt its reasoning to new, unforeseen situations without human guidance.
As AI continues to evolve, its influence on governance and intellectual work deepens. The history of data-driven decision-making is marked by human interpretation, and any move toward a purely algorithmic approach challenges the very foundation of intellectual labour. The increasing reliance on AI-driven processes not only risks simplifying complex social issues but also leads to the marginalization of the nuanced understanding that human intellectual labour brings. This tension between machine efficiency and human insight is not merely a technological concern but a philosophical one—a challenge to the nature of work itself and the role of the intellectual in an age of automation.
This shift invites a reconsideration of the historical context in which intellectual labour has developed, a theme that is crucial in understanding the full implications of AI’s rise. The historical evolution of data analysis, governance, and intellectual work has always involved a negotiation between human cognition and technological advancement. As we look toward the future, we must ask: in an age increasingly dominated by machines, how will we ensure that human experience and judgment remain central in shaping the decisions that affect our societies? This question points toward an urgent need to ground AI in a historical context that recognizes its limitations while acknowledging its potential.
As AI becomes more central in shaping political and social policies, particularly regarding immigration, there are concerns about its ability to reflect the complex realities of diverse communities. The reliance on AI can lead to oversimplified assumptions about the needs and circumstances of immigrants, especially when addressing their integration into Canadian society. AI systems that analyze immigration data could misinterpret or fail to account for factors such as socio-economic status, cultural differences, or regional disparities, all of which are critical to creating inclusive policies.
This evolving landscape signals a deeper erosion of the social contract between Canadians and their governments. In immigration, for example, particularly in light of the 2023–2026 Data Strategy and the findings of CIMM – Responses to the OAG’s Report on Permanent Residents, ensuring human oversight becomes increasingly crucial. Without it, there is a risk of diminishing the personal, human elements that have historically been central to governance. The shift towards automated decision-making could alienate citizens and weaken trust in political institutions, as it overlooks the nuanced needs of individuals who are part of the democratic fabric.
AI’s increasing role in governance marks a shift toward the disembodiment of knowledge, where decisions are made by abstract systems detached from the lived experiences of citizens. As AI systems analyze vast amounts of data, they reduce complex human situations to numerical patterns or algorithmic outputs, effectively stripping away the context and nuance that are crucial for understanding individual and societal needs. In this framework, governance becomes a process of automating decisions based on predictive models, losing the human touch that has historically provided moral, ethical, and social considerations in policy formulation.
The consequences of this abstraction in governance are far-reaching. AI systems prioritize efficiency and scalability over qualitative, often subjective, factors that are integral to human decision-making. For example, immigration decisions influenced by AI tools may overlook the socio-political dynamics or personal histories that shape individuals’ lives. When policy decisions become driven by data points alone, the systems designed to serve citizens may end up alienating them, as the systems lack the empathy and contextual understanding needed to address the full complexity of human existence. This hollowing out of governance shifts power away from human oversight, eroding the ability of democratic institutions to remain responsive and accountable to the people they serve.
The COVID-19 pandemic served as a catalyst for the rapid integration of AI in governance and society. As governments and businesses shifted to remote work models, AI tools were leveraged to maintain productivity and ensure public health safety. Technologies like contact tracing, automated customer service bots, and AI-driven health analytics became critical in managing the crisis. This acceleration not only enhanced the role of AI in public sector decision-making but also pushed the boundaries of its application, embedding it deeper into the governance framework.
The pandemic also saw the domestication of AI through consumer devices, which became central to everyday life. With lockdowns and social distancing measures in place, reliance on digital tools grew, and AI-powered applications—like virtual assistants, fitness trackers, and personalized recommendation systems—found a more prominent place in households. These devices, which had once been seen as niche, became essential tools for managing work, health, and social connections. The widespread use of AI in homes highlighted the shift in governance, where decision-making and the management of societal norms increasingly came under the control of automated systems, marking a techno-political shift in how people interact with technology.

In revisiting Berkeley’s critique of infinitesimals, we find philosophical parallels with the rise of AI. Berkeley questioned the very foundation of knowledge, suggesting that our perceptions of the material world were based on subjective experience, not objective truths. Similarly, AI operates in a realm where data is processed and interpreted through systems that may lack subjective human experience. AI doesn’t “understand” the data in the same way humans do, yet it shapes decision-making processes that affect real-world outcomes, creating an abstraction that can be detached from human experience.
This disconnection between machine and human experience leads to the dehumanization of knowledge. AI systems operate on algorithms that prioritize efficiency and optimization, but in doing so, they strip away the nuanced, context-driven understanding that humans bring to complex issues. Knowledge, in this sense, becomes something disembodied, divorced from the lived experiences and emotions that give it meaning. As AI continues to play a central role in governance, the process of knowledge becomes more mechanized and impersonal, further eroding the human dimension of understanding and ethical decision-making. The philosophical concerns raised by Berkeley are mirrored in the ways AI reshapes how we conceptualize and act on knowledge in a tech-driven world.
The rapid integration of AI into intellectual labour and governance presents a profound shift in how decisions are made and knowledge is structured. While AI offers the promise of efficiency and precision, its growing role raises critical concerns about the erosion of human agency and the humanistic dimensions of governance. As AI systems replace human judgment with algorithmic processes, the risk arises that complex social, political, and ethical issues may be oversimplified or misunderstood. The hollowing out of governance, where decision-making is increasingly abstracted from lived experiences, mirrors the philosophical critiques of abstraction seen in Berkeley’s work. The human element, rooted in experience, judgment, and empathy, remains crucial in the application of knowledge. Without mindful oversight, the adoption of AI in governance could result in a future where technology governs us, rather than serving us. To navigate these challenges, preserving human agency and ensuring that AI tools are used as aids rather than replacements is essential to maintaining a just and ethical society.
Berkeley’s philosophy of “immaterial ghosts”, where the immaterial influences the material world, aligns with Richter’s cloud paintings at Ottawa’s National Gallery of Canada, which evoke a similar sense of intangible presence. Both focus on the unseen: Berkeley’s spirits are ideas that influence our perceptions, while Richter’s clouds, as abstract forms, suggest the unknowable and elusive. In this way, Berkeley’s invisible world and Richter’s cloudscapes both invite us to confront the limits of human understanding, where the unseen shapes the visible.
In black and white, architecture transforms into pure form—sharp lines and intricate textures stand out, while windows become portals to another world. The absence of colour forces the eye to focus on structure, light, and shadow, revealing the timeless beauty of built environments.
Looking up at the National Gallery of Canada in Ottawa, the stark contrasts of its glass and stone façade come to life in black and white. The sharp edges and sweeping curves of the architecture create a powerful dialogue between light and shadow, revealing the gallery’s majestic presence.
Looking up at the Maman statue outside the National Gallery of Canada, its towering, spider-like form becomes an intense study in contrast. The black and white frame emphasizes the intricate details of its legs and body, casting dramatic shadows that evoke both awe and vulnerability.
Really enjoyed this work by Virginie Brunelle the other night at the National Arts Centre. After a pretty awesome meal at 1Rideau, I sat down for a sensory explosion of both visual and audio sensations.
In Fables, Virginie Brunelle creates a visceral exploration of chaos and resilience, where contemporary feminine archetypes collide in a raw, primal dance. Drawing from her background in violin, Brunelle intricately weaves rhythm and movement, pushing the boundaries of traditional dance. The performers’ bodies, mostly naked and raw, amplified by their breath and cries, move through a sonic landscape composed by Philippe Brault and performed live by Laurier Rajotte on the piano, embodying a world in turmoil yet yearning for hope and humanity.
A particularly striking element is the immersive audio experience in the opening set, where a cast member swings a microphone close to the dancers, amplifying their physicality. This not only heightened the intimacy of the piece but also allowed me to feel the dancers’ movements—every breath, every collision became a tactile experience. Very immersive. The live soundscape intertwines with the dancers’ raw physicality, drawing the audience deeper into the emotional urgency of the piece. This fusion of sight and sound creates a profound connection, turning the stage into a space where chaos, music, and movement converge in a shared sensory reality.
Or how I am spending too much time on ChatGPT. ChatGPT is now being incorporated into the Bing Search engine BTW.
And forget the hand-wringing about teachers being underpaid to read computer-generated text, or the oft-repeated end of the humanities; most dangerously, this will make it easier for people to write malware.
In terms of data literacy, literacy is literacy, right? So I asked ChatGPT to give me a simplified overview of what it does when a user types text into its interface. As an aside, I have been testing it with gibberish much akin to the Turkish thriller Hot Skull (see another post): communication (ergo civilization) depends on iterative translation of understandable context and content – the Chomskean reference in the first episode of the show, I assume.
As you can see below from ChatGPT’s response, it is straightforward semantic analysis, but this semiology is, to me, not the same as that for humans: machine semiotics ≠ human semiotics. How it differs is what interests me most. Can machine dynamic semantics be trained solely using iterative utterance pairs? What can I learn about human learning from learning about machine learning trying to mimic human learning? Am I caught in a cyclical redundancy error?
When I receive a user's input, here is a more detailed explanation of the process I go through:
For example, I know that verbs are often used to describe actions or states of being, while nouns are used to name people, places, or things. Adjectives are used to describe nouns, and adverbs are used to describe verbs, adjectives, and other adverbs.
1. Tokenization: The first step is to split the input into individual words and punctuation marks, a process called tokenization. This allows me to analyze the input at the word level.
2. Part-of-speech tagging: Next, I assign a part of speech (such as noun, verb, adjective, etc.) to each word in the input, based on its definition and its role in the sentence.
3. Dependency parsing: I then analyze the relationships between the words in the input, taking into account their grammatical roles and the structure of the sentence. This allows me to understand the meaning of the sentence as a whole.
4. Sense disambiguation: If a word has multiple meanings, I use the context of the sentence and the rest of the input to determine which meaning is most appropriate.
5. Entity recognition: I also try to identify any named entities (proper nouns) in the input, such as people, places, and organizations.
6. Response generation: With all of this information, I can then use my knowledge and understanding of language to generate an appropriate response. This might involve looking up information, using my pre-defined responses to similar inputs, or using my understanding of the user's intentions to formulate a response.
In order to determine the role that a word is playing in a sentence (also known as its part of speech), I use my understanding of grammar and the relationships between words.
By analyzing the relationships between words in a sentence and their definitions, I can determine the part of speech of each word and understand the overall meaning of the sentence.
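Steps 1, 2, 3, and 5 map fairly neatly onto the classical NLP pipeline, and a few lines of Python make them concrete. Below is a minimal sketch using the spaCy library and its en_core_web_sm model; to be clear, this is my own illustration of the traditional pipeline, not a window into ChatGPT’s actual internals, which are neural rather than rule-based.

```python
# A sketch of steps 1-3 and 5 as a classical NLP pipeline, using spaCy.
# Setup (assumed): pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Michael asked Alice to trace the robberies in Ottawa.")

# Steps 1-3: tokenization, part-of-speech tagging, dependency parsing.
for token in doc:
    print(f"{token.text:10} pos={token.pos_:6} dep={token.dep_:10} head={token.head.text}")

# Step 5: named entity recognition.
for ent in doc.ents:
    print(ent.text, "->", ent.label_)  # e.g. Ottawa -> GPE

# Step 4, sense disambiguation, has no built-in spaCy equivalent;
# a sketch of it follows ChatGPT's answer below.
```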
I followed up with a question about item 4, which seems a key step in how it assigns meaning. How does it connect content with context?
Sense disambiguation is the process of identifying the correct meaning of a word in a given context. Words can often have multiple meanings, and it's important to choose the right one in order to understand the intended meaning of a sentence.
For example, the word "bass" can be a noun referring to a type of fish, or it can be a noun referring to a low-frequency sound or a range of notes in music. In order to disambiguate the sense of the word, it's necessary to consider the context in which it is used. If the word appears in a sentence about music, it is more likely to refer to a range of notes, while if it appears in a sentence about fishing, it is more likely to refer to a type of fish.
There are a number of techniques that I can use to disambiguate the sense of a word, including analyzing the surrounding words and phrases, considering the overall topic of the conversation, and using my general knowledge about the world. By using these techniques, I can choose the most appropriate meaning for a word and better understand the intended meaning of the sentence.
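Out of curiosity, here is what the oldest version of this idea looks like in code: the classic Lesk algorithm, which picks the WordNet sense whose dictionary gloss overlaps most with the surrounding words. It is a crude and famously brittle approximation of what ChatGPT describes, but NLTK ships it, and it runs on the same “bass” example.

```python
# Simplified Lesk word-sense disambiguation via NLTK's WordNet data.
# A crude classical baseline, not what ChatGPT does internally.
import nltk
from nltk.wsd import lesk

nltk.download("wordnet", quiet=True)  # one-time download of WordNet

for sentence in ("He tuned the bass before the concert began.",
                 "He caught a bass in the lake at dawn."):
    context = sentence.lower().split()   # naive tokenization is enough here
    sense = lesk(context, "bass")        # sense with the best gloss overlap
    print(sentence)
    print("  ->", sense, "|", sense.definition() if sense else "no sense found")
```

Lesk is wrong surprisingly often, even on toy sentences like these, which is itself instructive: gloss overlap is a thin proxy for the kind of context ChatGPT claims to use.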
I asked ChatGPT to come up with the plot for a cop buddy movie between a person (me) and an AI program. While it won’t win points for originality, I am really intrigued by the two “mishaps and misadventures”, i.e. the fire alarm and the vending machine.
I’m also interested that I didn’t provide the name or gender of the AI but I wonder if the word “sassy” would have been used if the name chosen wasn’t a “feminine” name. This may be an example of learned bias being reinforced.
The movie follows the adventures of two unlikely partners: a bumbling police detective named Michael and a sassy artificial intelligence named Alice. When Michael is assigned to a high-profile case involving a string of robberies, he is paired with Alice, a newly developed AI designed to assist with investigations.
At first, Michael is skeptical of Alice's capabilities and unwilling to trust a machine to help with his work. But as they team up to track down the perpetrators, Michael starts to see that Alice is more than just a machine - she is a valuable partner with her own unique skills and insights.
As they work together to solve the case, Michael and Alice encounter a series of hilarious mishaps and misadventures, including accidentally setting off the building's fire alarm while trying to hack into a computer, and getting into a heated argument with a vending machine over the price of a snack.
Despite the challenges they face, Michael and Alice always manage to come out on top thanks to their combined wit and determination. In the end, they successfully crack the case and bring the perpetrators to justice, becoming the most unlikely crime-solving duo in the city. As they celebrate their victory, Michael realizes that he has learned to appreciate and trust Alice's unique abilities, and the two vow to continue working together to keep the city safe and laughing.
Each time I practice, I look to one or more of the following thoughts: what do I confront? What do I transform? What do I integrate or innovate? I always like to have a clearly stated intention when I begin.
I intend to assemble here and in subsequent posts, accessible objects that I use in my daily ritual.
I begin each day during brahma muhurta, one hour and thirty-six minutes before the rise of the sun. This is the penultimate muhurta, a period of 48 minutes – 1/30th of a day. Sunrise in Ottawa today was at 7:38.
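For anyone who wants to work out their own window, the arithmetic fits in a few lines of Python; the date below is arbitrary, and only the 7:38 sunrise comes from this post.

```python
# Brahma muhurta: the penultimate 48-minute muhurta before sunrise,
# i.e. it begins 1 h 36 min before sunrise and ends 48 min before it.
from datetime import datetime, timedelta

MUHURTA = timedelta(minutes=48)          # 1/30th of a 24-hour day

sunrise = datetime(2024, 1, 15, 7, 38)   # arbitrary date; Ottawa sunrise from this post
start = sunrise - 2 * MUHURTA            # two muhurtas before sunrise
end = sunrise - MUHURTA                  # one muhurta before sunrise

print(start.strftime("%H:%M"), "to", end.strftime("%H:%M"))  # 06:02 to 06:50
```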
I will digress: this word muhurta in Sanskrit is made up of two sounds that refer to the immediate and to how this immediacy functions as an ordering—a perpetual now found in thinkers familiar in the Western tradition, from Heraclitus to Nietzsche to Latour.
I will order my bed and brush my teeth before I begin my wake-up routine of pranayama and meditation. I was told a long time ago that monks always start by ordering their space in preparation for practice, which makes a lot of sense to me even though I am no monk. And while my meditation always begins during brahma muhurta, my pranayama may end after sunrise.
My go-to app in the morning for meditation is Oak. I don’t use it any other time except for my morning routine. It is a habit. It’s free and does exactly what I needed it to do when I started this ritual while at home during the pandemic. It shows my progress (good for motivation, since I easily depreciate my accomplishments) and has both breathing and meditation timers, including box breathing. I don’t use it for Tummo sessions, which I will describe in another post.
Oak is pretty basic, but it has a nice GUI, and breathing shouldn’t be anything but basic anyway. iOS only. I did pay for the course, and it was worth the 2 bucks I paid, if for nothing else, as an offering to initiate my ritual.
My current streak. It’s on my old phone too, so I always bring it with me when I travel: even though the app is on my new devices, my streak record is only on this phone, and I cannot break the chain!*
*Breaking the chain refers to Jerry Seinfeld's rule about being successful at things by using a visual indication, in his case a calendar where he puts a big X each day he writes jokes. By having a visual cue, it was much easier for me to pay attention to creating rituals and habits. It is ritual, according to Oscar Wilde, that is the origin of religion and, for us moderns, the basis of Cognitive Behavioural Therapy too.
I was using pillows on the floor or a chair and decided to upgrade my meditation setup during the pandemic. Again, like breathing, this should be simple. I started on a chair and lying down. And even my zabuton from Half Moon, while comfortable, isn’t perfect, but all you really need to do is sit. Or lie down. Pretty simple. Don’t buy anything; all you need is your breath.
I still enjoy and watch this series: 7 Days to Mindfulness with Kirat Randhawa. I appreciate her calm demeanour in this series, along with some guidance on topics such as finding your centre and how clarity and curiosity work in the meditation process, and its use in day-to-day life when I am not in meditation. I also watch Netflix’s Headspace Guide to Meditation, which was very accessible and enjoyable. It was a great beginner resource for me when I initially found it, a perpetual beginner.
I come back to these teachings once in a while and find new things in them, each time! Next up, my pranayama practice and books!