A technological experiment recently published in the International Journal of Architectural Computing, conducted by researchers Matías del Campo and Sandra Manninger of the University of Michigan, opens up new possibilities for architecture. According to Del Campo's statements to the specialised outlet Tech Xplore, the experiment responds to "a long obsession with the idea of cross-pollinating the fields of architecture and Artificial Intelligence (AI)". In essence, it is a new application of AI to architectural design: the use of convolutional neural networks (CNNs) for the automatic generation of architectural designs.
CNNs are layered algorithmic systems that allow computers to "learn" from examples and, by learning, to recognise complex objects and shapes such as the human face (facial recognition, which is becoming increasingly widespread, is based on this technology), animals or, in the case of the University of Michigan experiment, architectural styles. The fundamental step that Del Campo and Manninger took in their research was to turn the passive shape-recognition ability of CNNs into an active one, capable of generating new shapes from those it has learned to recognise. To do this, they turned to generative techniques such as DeepDream, a method developed at Google that amplifies the patterns a trained network detects in an image, producing hallucinatory, dream-like imagery.
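The core idea behind DeepDream can be sketched without any deep-learning framework: instead of adjusting the network's weights, one performs gradient ascent on the input image itself, so that whatever features a layer responds to get amplified. The minimal NumPy sketch below is a generic illustration of that technique, not the authors' code; the linear map `W` is a toy stand-in for a trained convolutional layer, and the flat vector `img` stands in for an image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "layer": a random linear feature detector standing in for a trained CNN layer.
W = rng.standard_normal((8, 16))

# Toy "image": a flat 16-pixel vector with random initial content.
img = rng.standard_normal(16)

def activation(x):
    # How strongly the layer responds to the input.
    return 0.5 * np.sum((W @ x) ** 2)

before = activation(img)

# DeepDream's core loop: gradient ascent on the INPUT, not the weights.
for _ in range(50):
    grad = W.T @ (W @ img)                 # analytic gradient of activation w.r.t. the input
    img = img + 0.01 * grad / (np.linalg.norm(grad) + 1e-8)  # normalized ascent step

after = activation(img)  # the layer now "sees" its preferred patterns more strongly
```

In a real DeepDream pipeline the gradient would come from backpropagation through a trained CNN, but the logic is the same: the input is nudged, step by step, toward whatever the network already recognises.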
In the first stage of their work, they experimented with 2D images, the simplest case. They applied their method to artistic styles in painting, transforming a source image so that it adopted the style of a classical painter such as Rembrandt. The next step was to apply these transformation algorithms to 3D geometry, in this case to architectural models.
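Style transfer of this kind typically compares feature statistics rather than raw pixels: an image "matches" Rembrandt's style when the correlations between its feature maps resemble those of a Rembrandt painting. A small sketch of the standard Gram-matrix style loss follows; it is a generic illustration of the technique, not the published experiment, and the random arrays merely stand in for feature maps that a trained network would extract.

```python
import numpy as np

def gram_matrix(features):
    # features: (channels, height, width) feature maps from some network layer.
    # The Gram matrix captures which channels co-activate, i.e. the "style".
    f = features.reshape(features.shape[0], -1)
    return f @ f.T / f.shape[1]

def style_loss(a, b):
    # Mean squared difference between the two images' feature correlations.
    return np.mean((gram_matrix(a) - gram_matrix(b)) ** 2)

rng = np.random.default_rng(1)
style_feats = rng.standard_normal((4, 10, 10))    # stand-in for the style image's features
content_feats = rng.standard_normal((4, 10, 10))  # stand-in for the source image's features

same = style_loss(style_feats, style_feats)  # identical features: zero style loss
diff = style_loss(style_feats, content_feats)  # different features: positive loss
```

In practice this loss is minimised by gradient descent on the source image, pulling its feature correlations toward the painter's while a separate content term keeps the original composition in place.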