Can AI help us to understand how the human mind works, and how architects think? This essay argues that although human intelligence and artificial intelligence are very different things, analogies can be drawn. AI, it could be argued, can potentially offer us insights into how the human mind works, and how architects are trained to think. The concept of “architecturalisation” is introduced here to describe how architects have a tendency not only to translate elements of nature into architectural forms, but also to translate abstract philosophical concepts into architectural forms, often leading to a profound misunderstanding of those concepts.
The term often used to describe AI-generated images is “machine hallucinations”. Although we might think that “machine hallucinations” are fundamentally different to human hallucinations, there are surprising similarities, and we can make direct comparisons between how AI operates and how humans operate.1 In fact, AI sees the world through the lens of the data on which it has been trained, just as we see the world through the lens of our own training. For example, if we were to ask anyone how many colours there are in a rainbow, the chances are that they would reply “seven”, even though there are an infinite number of colours in the spectrum that constitutes a rainbow.2 This is because we are trained to understand that there are seven colours in a rainbow. If we were to ask any architect what a “functionalist” building looks like, the chances are they would describe a white building on pilotis with a flat roof, even though flat roofs are not very functional in most countries because they tend to leak. This is because we are trained at architecture school to understand that functionalist buildings have flat roofs.
"We don't just passively perceive the world, we actively generate it. The world we experience comes as much, if not more, from the inside out as from the outside in.”
In this respect we are not dissimilar to Neural Networks (NNs),3 which also see the world as they have been trained. In Gloomy Sunday (2017), computational artist Memo Akten shows how a NN interprets objects through a particular lens based on its training dataset. If trained on flowers, the NN will read flowers into everything. As Akten observes, “The picture we see in our conscious mind is not a mirror image of the outside world, but is a reconstruction based on our expectations and prior beliefs.”4

Illustration 1. Memo Akten, "Gloomy Sunday" (2017). Credit: Memo Akten.
What can AI tell us about the mind of the architect?
Predictive Perception
The idea that we are conditioned to see the world in a certain way comes under the theory of “predictive perception”, a theory that is gaining traction in neuroscience. According to neuroscientist Anil Seth, the brain is locked inside the “bony vault of the skull” without light or sound.5 The brain therefore makes its “best guess” about what is outside, trying to make sense of a “constant barrage of electrical signals which are only indirectly related to things out there in the world”.6 Perception is therefore highly subjective. The brain does not simply receive signals from the outside. It actively partakes in making sense of what it is perceiving:
“Instead of perception depending largely on signals coming into the brain from the outside world, it depends as much, if not more, on perceptual predictions flowing in the opposite direction. We don't just passively perceive the world, we actively generate it. The world we experience comes as much, if not more, from the inside out as from the outside in.”7

Illustration 2. Keisuke Suzuki, Anil Seth, Hallucination Machine, (2017). Credit: Anil Seth, University of Sussex.
Seth argues that the brain, therefore, makes predictions – or “hallucinations”, as he calls them – about what it is perceiving. But these hallucinations need to be reined in so as to prevent incorrect predictions. There needs to be a degree of control involved. Seth illustrates this with a video processed using a DeepDream algorithm that shows how overly strong perceptual predictions can lead to weird hallucinatory perceptions, where the viewer comes to read images of dogs into everything that they see.8 This leads Seth to conclude that if hallucination is a form of uncontrolled perception, then perception itself must be a form of “controlled hallucination”:9
“If hallucination is a kind of uncontrolled perception, then perception right here and right now is also a kind of hallucination, but a controlled hallucination in which the brain’s predictions are being reined in by sensory information from the world. In fact, we’re all hallucinating all the time, including right now. It’s just that when we agree about our hallucinations, we call that reality.”10
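Seth’s image of predictions “reined in by sensory information” can be sketched as a simple feedback loop, in which a top-down expectation is corrected by a weighted prediction error. The code below is a purely didactic toy, not a model of the brain; the variable names, the single scalar state and the fixed sensory weight are all illustrative assumptions.

```python
# Toy illustration of "controlled hallucination": a top-down prediction
# is corrected by bottom-up sensory evidence, weighted by how much the
# evidence is trusted. A didactic sketch only, not a model of the brain.

def perceive(prediction, sensory_input, sensory_weight=0.3):
    """Blend the prior prediction with incoming sensory evidence."""
    error = sensory_input - prediction          # prediction error
    return prediction + sensory_weight * error  # reined-in update

percept = 0.0   # prior expectation (the "hallucination")
world = 10.0    # the actual state of the world
for _ in range(20):
    percept = perceive(percept, world)
```

With the sensory weight at zero, the percept never updates, which corresponds to uncontrolled hallucination; with the weight near one, perception is dominated by the signal from outside. Ordinary perception, on this account, sits somewhere in between.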
What is interesting here is that Seth also uses DeepDream to illustrate the way in which we see the world through a form of predictive perception. Moreover, just as we use the expression ‘machine hallucinations’ to describe the images generated through deep learning, Seth uses the expression ‘controlled hallucinations’ to describe the process of predictive perception on the part of humans. According to this logic, when we look at the world we are actively hallucinating, similar to how machines are ‘hallucinating’ when they generate images. There is an implicit parallel being drawn, then, between human perception and computational image generation. The two would appear to be not as dissimilar as we might at first imagine.
Our perception of reality comes to us “via a detour through the maze of the imagination”; it is coloured and distorted by our imagination, no less than our outlook on the world is distorted by the way that we have been trained to view the world.
Equally, we could compare Seth’s notion of the “controlled hallucination” at work in our perception of the world with Slavoj Zizek’s notion of “fantasy”, which serves as a lens through which we perceive the world. Zizek’s view is grounded in Lacanian thinking. According to Lacan, we do not access the “Real” except in moments of jouissance.11 What we take for the real is not the real in itself, but an appearance of reality.12 In fact, our perception of reality comes to us “via a detour through the maze of the imagination”; it is coloured and distorted by our imagination, no less than our outlook on the world is distorted by the way that we have been trained to view the world, as the two experiments by Seth and Akten illustrate. For Zizek, then, our perception of reality is a fantasy of reality.13 Fantasy plays a key role in how we understand reality. Indeed, fantasy, for Zizek, is literally constitutive of what we call reality: “Far from being a kind of fragment of our dreams that prevents us from ‘seeing reality as it effectively is’, fantasy is constitutive of what we call reality: the most common bodily ‘reality’ is constituted via a detour through the maze of imagination.”14
What virtual reality reveals is not how “virtual” virtual reality is, but rather how virtual our understanding of reality is.
Zizek goes further and speculates whether, as we use computation to simulate human thought ever more closely, an inversion might take place, such that human thought begins to emulate a computer programme, and our own understanding of the world itself becomes a model:
“What if this ‘model’ is already a model of the ‘original’ itself, what if human intelligence itself operates like a computer, is ‘programmed’, etc.? The computer raises in pure form the question of semblance, a discourse which would not be a simulacrum: it is clear that the computer in some sense only ‘simulates’ thought; yet how does the total simulation of thought differ from ‘real’ thought?”15
In other words, Zizek is speculating that we might be living in a simulation, an argument originally floated by sociologist Jean Baudrillard and made famous in the movie The Matrix, but since supported by other more recent philosophical arguments.16 Likewise, we can draw comparisons between this view and the way that Seth and other neuroscientists understand our conception of the world as a constructed one, based on prior beliefs and experiences, such that what we perceive is also a model – a simulation – of the world. This leads Zizek to conclude that it would be wrong to denigrate virtual reality as a lesser form of reality. What virtual reality reveals is not how “virtual” virtual reality is, but rather how virtual our understanding of reality is: “The ultimate lesson of virtual reality is the virtualization of the very true reality.”17

Illustration 3. Fernando Salcedo, Master of Architecture, FIU, Architectural Hallucinations, (2020). Credit: Fernando Salcedo, Florida International University
Architecturalisations
Could we compare our training as architects to the training of NNs? Florida International University (FIU) architecture student Fernando Salcedo conducted an AI experiment, Architectural Hallucinations (2020), similar to Gloomy Sunday, training a NN on a research centre designed by Zaha Hadid Architects (ZHA). The NN then reads architectural forms into everything it sees, including items of clothing, through a process similar to predictive perception. In this weirdly distorted view, a crumpled t-shirt is read as a warped ZHA project. Importantly, the NN reads these forms through a filter. This experiment raises some interesting questions. Is this perhaps how architects see the world, reading potential buildings into everything that they see? In other words, is the training of an architect similar to the training of a NN?
To claim that this is the case would be an argument based on pure analogy. Nonetheless, it is tempting to pursue this line of enquiry further. Might this experiment offer insights, for example, into the nature of inspiration itself – the act of reading the world through a particular lens and then re-expressing that vision through design? Are we trained to interpret external forms just as a NN is trained to interpret images? Equally, might we also hallucinate architectural designs based on our training, just like a NN?
Architects tend to architecturalise the world and read it in architectural terms.
We could describe this process as a form of “architecturalisation”. In effect, architects tend to architecturalise the world and read it in architectural terms. As with DeepDream, they are trained on a specific dataset. They see the world in terms of potential buildings. This allows architects to be inspired by various non-architectural items – such as biological entities and geological formations – and incorporate them into their architectural expressions. This might explain, for example, how Jørn Utzon was inspired by the billowing sails of yachts in Sydney Harbour and read them as potential vaults for the design of his opera house. This principle is not dissimilar to reading faces into clouds, or to making sense of pointillist paintings by joining the dots.
By extension, this would also explain how architects tend to aestheticise the world, reading it in aesthetic terms, and rinsing it of economic, social and political considerations.18 Why is it, for example, that many architects tend to privilege design concerns over economic concerns, even though economic factors are the driver of any design? Indeed, why are there so few economic references in books on architecture, apart from the price on their back covers? It is as though architects tend to see the world through a rose-tinted, aestheticising lens.
The gaze of the architect is not neutral. It has been trained, no less than a neural network has been trained.
It is important to understand, then, that the gaze of the architect is not neutral. It has been trained, no less than a neural network has been trained. Whether we understand this conditioned outlook in Seth’s terms as a form of controlled hallucination or in Zizek’s terms as being constituted through fantasy, it is clear that architects see the world not as it is but as they are trained to see it. But is this so dissimilar to the message behind the term “deconstruction” coined by Derrida?
Derrida argues that our perception of the world carries with it certain biases, just as the data used in AI carries with it certain biases.19 Our perception of the world is therefore constructed. And this constructed perception is a distorted one, as we have seen with the standard interpretation of a rainbow. What needs to be exposed, then, is how our understanding of the world has been constructed. And this constructed way of understanding the world itself needs to be deconstructed. The same issue applies to architecture.
This might also help to explain, for example, why architects so often misinterpret philosophical concepts, such as Gilles Deleuze’s “fold”, as though they referred to architectural forms, when they have nothing to do with architecture.20 The same happened with Jacques Derrida. Architects seemed to think that deconstruction referred to the construction of buildings, whereas in fact it refers to our constructed understanding of the world, such as believing that flat roofs are functional. As Derrida puts it, there is an “architecture of architecture”. Our understanding of construction is itself constructed.
These misguided attempts to read architectural form into abstract concepts are perfect examples of the problem of architecturalisation.
It might also explain the mistake that some architects make in assuming that terms referring to the digital – such as “discrete” – refer to potential architectural forms, even though the digital itself is immaterial and without form.21 It might likewise explain how historians, reading that Big Data is messy, assume that the architectural style of the age of Big Data must also be messy.22 These misguided attempts to read architectural form into abstract concepts are perfect examples of the problem of architecturalisation.
In the Mirror of AI
Let us be clear. The meaning of any term varies according to its context. For example, machine learning and human learning are quite different, as are artificial intelligence and human intelligence. There is a significant difference between AI and human intelligence, not least because AI does not possess consciousness – at least for now. We must be cautious of projecting human attributes onto AI – and vice versa. Likewise, machine hallucinations and human hallucinations are not the same.
Digital simulations can often help us to understand analogue behaviours.
There are moments, however, when NNs offer insights into human intelligence, and parallels present themselves. Of course, in such instances, parallels remain just parallels. A behaviour in one domain does not explain a similar behaviour in another. But nonetheless, digital simulations can often help us to understand analogue behaviours. For example, it was not until artificial life expert Craig Reynolds simulated the flocking behaviour of birds using “boids” in 1986 that we could understand the behaviour of actual birds.23 It is as though AI might become a mirror in which to understand human intelligence.
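Reynolds’ insight was that flocking needs no central choreography: each boid follows only three local steering rules – separation, alignment and cohesion – and the flock emerges. The sketch below is a minimal Python illustration of those three rules; the parameter values, the simple Euler time step and the list-based representation are illustrative assumptions, not Reynolds’ original implementation.

```python
import random

# Minimal sketch of Craig Reynolds' three boid rules (1986):
# separation, alignment and cohesion. Parameter values are arbitrary.

def step(boids, sep=1.5, neighbour_r=10.0, dt=0.1):
    """Advance each boid one time step. A boid is [x, y, vx, vy]."""
    new = []
    for b in boids:
        ax = ay = 0.0
        neighbours = [o for o in boids if o is not b and
                      (o[0]-b[0])**2 + (o[1]-b[1])**2 < neighbour_r**2]
        if neighbours:
            # Cohesion: steer towards the local centre of mass
            cx = sum(o[0] for o in neighbours) / len(neighbours)
            cy = sum(o[1] for o in neighbours) / len(neighbours)
            ax += 0.05 * (cx - b[0]); ay += 0.05 * (cy - b[1])
            # Alignment: match the average velocity of neighbours
            avx = sum(o[2] for o in neighbours) / len(neighbours)
            avy = sum(o[3] for o in neighbours) / len(neighbours)
            ax += 0.05 * (avx - b[2]); ay += 0.05 * (avy - b[3])
            # Separation: steer away from boids that are too close
            for o in neighbours:
                dx, dy = b[0]-o[0], b[1]-o[1]
                if dx*dx + dy*dy < sep*sep:
                    ax += dx; ay += dy
        vx, vy = b[2] + ax*dt, b[3] + ay*dt
        new.append([b[0] + vx*dt, b[1] + vy*dt, vx, vy])
    return new

random.seed(1)
flock = [[random.uniform(0, 20), random.uniform(0, 20),
          random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(30)]
for _ in range(100):
    flock = step(flock)
```

No boid “knows” about the flock, yet coherent collective motion appears, which is precisely why the simulation proved such a useful mirror for the behaviour of real birds.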
Might AI also help us, then, to understand the mind of the architect?
This essay was originally published as "Machine Hallucinations: Architecture and Artificial Intelligence" in AD, Volume 92, Issue 3 (May/June 2022), pp. 66-71.
Bio
Neil Leach is a British architect and theorist. He is a Professor at FIU, Tongji and EGS, and has also taught at Harvard, AA, SCI-Arc, Columbia, USC and Cornell. He is a co-founder of DigitalFUTURES and an academician within the Academy of Europe. He is also the recipient of two NASA research grants exploring 3D printing technologies for the Moon and Mars. He has published over 40 books, including On the Art of Building in Ten Books (MIT, 1988), Rethinking Architecture (Routledge, 1997), The Anaesthetics of Architecture (MIT, 1999), Camouflage (MIT, 2006), Digital Cities (Wiley, 2009), Computational Design (Tongji, 2017) and Architecture in the Age of Artificial Intelligence (Bloomsbury, 2022).
Notes
1 However, we should be conscious of the risk of anthropomorphising computer operations by using human-centric language to describe their operations.
2 Indeed, according to neuroscientist Anil Seth, our whole notion of colour is itself a construct, generated by the brain: “DigitalFUTURES”.
3 See Neil Leach, Architecture in the Age of Artificial Intelligence: An Introduction to AI for Architects (London: Bloomsbury, 2022).
4 Ibid.
5 Seth, p. 75
6 Seth, “Your Brain Hallucinates Your Conscious Reality”, TED Talk, April 2017, shorturl.at/gpR37
7 Ibid. See also Leach, Camouflage, pp. 140-141.
8 Seth and his team have explored how altered states of consciousness help us to understand underlying conscious perception: Keisuke Suzuki, Warrick Roseboom, David J. Schwartzman and Anil K. Seth, “A Deep-Dream Virtual Reality Platform for Studying Altered Perceptual Phenomenology”, Scientific Reports 7, no. 15982 (2017), DOI:10.1038/s41598-017-16316-2.
9 Rick Grush has previously used the notion of “controlled hallucination”. Grush acknowledges, however, that the expression had been coined originally by Ramesh Jain in a talk at UCSD. Rick Grush, “The Emulation Theory of Representation: Motor Control, Imagery, and Perception”, Behavioral and Brain Sciences 27 (2004): 377–442.
10 Seth, “Your Brain Hallucinates Your Conscious Reality”.
11 Jouissance is the bittersweet moment that flares up in aesthetic contemplation. See Leach, Camouflage, p. 232. See also Dylan Evans, An Introductory Dictionary of Lacanian Psychoanalysis (London: Routledge, 1996): 92.
12 This is precisely what Lacan has in mind when he says that fantasy is the ultimate support of reality: “’reality’ stabilizes itself when some fantasy-frame of a ‘symbolic bliss’ forecloses the view into the abyss of the Real.” Slavoj Zizek, “From Virtual Reality to the Virtualisation of Reality”, in Leach, Designing for a Digital World: 122.
13 Idem.
14 Zizek, “From Virtual Reality to the Virtualisation of Reality”, 122.
15 Ibid., 45.
16 Nick Bostrom, “Are You Living in a Computer Simulation?”, Philosophical Quarterly 53, no. 211 (2003): 243-55.
17 Idem.
18 Neil Leach, The Anaesthetics of Architecture (Cambridge, MA: MIT Press, 1999).
19 Melanie Mitchell, Artificial Intelligence: A Guide for Thinking Humans (New York: Farrar, Straus and Giroux, 2019), 106-8.
20 “The concept of the fold allows Deleuze to think creatively about the production of subjectivity, and ultimately about the possibilities for, and production of, non-human forms of subjectivity.” Simon O’Sullivan, “The Fold”, in Adrian Parr (ed.), The Deleuze Dictionary (Edinburgh: Edinburgh University Press, 2012), 107.
21 Neil Leach, “There Is No Such Thing as a Digital Building: A Critique of the Discrete”, in Gilles Retsin (ed.), Discrete: Reappraising the Digital in Architecture (AD, Profile 258, 2019), 139.
22 Mario Carpo, The Second Digital Turn: Design Beyond Intelligence (Cambridge, MA: MIT Press, 2017).
23 Craig Reynolds, “Flocks, Herds, and Schools: A Distributed Behavioral Model”, Computer Graphics 21, no. 4 (July 1987): 25-34.
List of Images
Illustration 1. Memo Akten, Gloomy Sunday (2017). In this experiment, Akten shows how a neural network trained on images of ocean waves, fire, clouds and flowers will read them into everything that it sees. Credit: Memo Akten
Illustration 2. Keisuke Suzuki, Anil Seth, Hallucination Machine, (2017). A video processed using a DeepDream algorithm illustrates how overly strong perceptual predictions can lead to weird hallucinatory perceptions, where individuals come to read images of whatever they have been trained on into everything that they see – in this case images of dogs. Credit: Anil Seth, University of Sussex.
Illustration 3. Fernando Salcedo, Master of Architecture, FIU, Architectural Hallucinations, (2020). A study using a CycleGAN trained on images of the King Abdullah Petroleum Studies and Research Center, Riyadh, Saudi Arabia, to reinterpret images of clothing on the left to produce a hallucination of other possible designs for that building. Credit: Fernando Salcedo, Florida International University.
Illustrations 4, 5, 6. Giovanna Pillaca, Futuristic Temple in India, 2021. Image generated through CLIP and VQGAN based on the prompt “Futuristic Temple in India”, and the pre-prompt “Ma Yansong, Morphosis Architects and Wolf Prix”. Credit: Giovanna Pillaca.
