If an aesthetic object is created without any human purpose, do we witness an accidental product of nature or do we perceive new forms of artificial intentions?
The project Helin investigates the relationship between individual artistic intention and the collective nature of human expression. By blending the concepts of human, nature and technology, we seek to discuss a holistic concept of natural phenomena. If an aesthetic object is created without any human leitmotif, do we witness an accidental product of nature, or do we perceive new forms of organic intention? Can art be considered natural if it occurs even in the absence of individual intent?
According to Immanuel Kant, nature follows no purpose but only creates the illusion of one ("Zweck"). Art, in contrast, is initiated by a human purpose to shape a finite artefact. Using intelligent yet highly autonomous technology, we remove this human desire from the process of creation and witness an oxymoron: using technology to perceive natural art.
Our technical approach is based on a novel procedure for deep learning in 3D space. The corresponding custom network architecture is trained on 120,000 sculptures and generates new, alternative sculptures 30 times per second. The sculpture Helin thus embodies an organic data mirror emerging from our collective historical heritage.
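The project text does not disclose how the network is driven at interactive rates, so the following sketch only illustrates the kind of real-time latent exploration such a generator implies: spherically interpolating between random latent vectors to produce one smooth frame of input per generated sculpture. The 128-dimensional latent size is an assumption, and the actual generator call is omitted.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors."""
    omega = np.arccos(np.clip(
        np.dot(z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return z0
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(0)
z_a = rng.standard_normal(128)   # assumed latent dimensionality
z_b = rng.standard_normal(128)

# 30 interpolation steps = one second of output at the stated frame rate;
# each frame vector would be fed to the (omitted) generator network.
frames = [slerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 30)]
```

Spherical rather than linear interpolation is the common choice here because it keeps intermediate vectors on the same norm shell the Gaussian prior concentrates on.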
We enable insights into historical, spatial data of human expression and translate this assembled intelligence into a natural, tangible artefact of heavy dark marble. The material is a central element of the artwork: it renders a physical snapshot of an endless stream of exchanged information into space.
A custom deep learning approach for training on three-dimensional shapes was developed at our studio and draws on a diverse set of digital tools for advanced spatial reconstruction. Our ten-month research phase, based on classical Generative Adversarial Networks, resulted in a new concept for compressing three-dimensional data into a lower-dimensional container. This lets us heavily reduce the load on our neural networks and explore intelligent insights into spatial artefacts in real time. We call the corresponding method RayGAN.
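The internals of RayGAN are not published, so the following is only a plausible sketch of the compression idea the name suggests: casting rays from a bounding sphere toward the shape and storing the travelled distance at the first surface hit, which folds a 3D surface into a 2D distance map that a conventional image GAN could train on. The stand-in signed distance field, resolution and ray count are all assumptions.

```python
import numpy as np

def bust_sdf(p):
    # Stand-in signed distance field: a unit sphere.
    # A real pipeline would evaluate the SDF of a scanned bust here.
    return np.linalg.norm(p, axis=-1) - 1.0

def ray_distance_map(sdf, res=64, radius=2.0, steps=64):
    """Cast one ray per (theta, phi) cell from a bounding sphere toward
    the origin; record the distance travelled to the first surface hit."""
    theta = np.linspace(0.0, np.pi, res)        # polar angle
    phi = np.linspace(0.0, 2.0 * np.pi, res)    # azimuth
    T, P = np.meshgrid(theta, phi, indexing="ij")
    origins = radius * np.stack([np.sin(T) * np.cos(P),
                                 np.sin(T) * np.sin(P),
                                 np.cos(T)], axis=-1)
    dirs = -origins / radius                    # unit rays aimed at the origin
    dist = np.zeros((res, res))
    for _ in range(steps):                      # sphere tracing
        d = sdf(origins + dist[..., None] * dirs)
        dist += np.maximum(d, 0.0)
    return dist                                 # the 2D "container" of the 3D shape

img = ray_distance_map(bust_sdf)                # rays hit the unit sphere at distance 1
```

The payoff of such a representation is that the GAN only has to model a `res × res` image instead of a dense voxel grid, which is consistent with the reduced network load described above.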
Our production pipeline includes commercial tools such as Houdini and Cinema 4D. However, the core of our approach lies in a set of advanced custom procedures and algorithms written in C#, Python and GPU shader languages such as HLSL (DirectX 11) and GLSL. We are especially proud to have crafted one of the highest-resolution concepts available for 3D GANs and to have developed a confident case advocating for the artistic contribution to scientific research.
The dataset used to train our network consists of publicly available 3D shapes related to historical busts. To remove bias from the dataset and establish a universal view of human expression, we included material from all accessible epochs and cultures, drawing on online sources such as Scan the World. Our original source material consists of nearly 10,000 individual shapes. An applicable dataset for our network, however, requires intense manual correction and generative extension. Each sculpture had to be re-oriented to face the z-axis, re-meshed to create a uniform topology and normalized in three-dimensional space. Furthermore, we used different 3D noise functions and subdivisions to create new arrangements and permutations of the original historical busts. After a process of three months, we were able to extend the dataset to a total of 120,000 sculptures and finally train our network. To explore the final generated latent space of our collective historical art, we wrote a custom pipeline offering real-time control and feedback 25 times per second.
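The normalization and noise-permutation steps above can be sketched on a bare vertex array. This is not the studio's pipeline, only a minimal illustration of the two operations named in the text; the sinusoidal displacement stands in for a production 3D noise such as Perlin or simplex, and the amplitude and frequency values are arbitrary.

```python
import numpy as np

def normalize_vertices(v):
    """Center a vertex array of shape (N, 3) and scale it into the unit cube."""
    v = v - v.mean(axis=0)            # move the centroid to the origin
    extent = np.abs(v).max()
    return v / (2.0 * extent)         # all coordinates now lie in [-0.5, 0.5]

def noise_permutation(v, amplitude=0.02, frequency=3.0, seed=0):
    """Displace vertices with smooth pseudo-random noise to derive a new
    permutation of the original bust (sinusoidal stand-in for 3D noise)."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, size=3)
    return v + amplitude * np.sin(frequency * v + phase)

# Dummy point cloud standing in for a scanned bust's vertices.
verts = np.random.default_rng(1).uniform(-3.0, 3.0, size=(1000, 3))
canonical = normalize_vertices(verts)
variant = noise_permutation(canonical)
```

Different seeds (and different noise functions) yield distinct variants of the same canonical shape, which is how a source set of roughly 10,000 busts can be extended by an order of magnitude.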
Art Direction: Christian Mio Loclair
AI Artist: Meredith Thomas
Generative Design: Helin Ulas
Production Management: Celia Bugniot
Management: Thomas Johann Lorenz
Camera: Marco Petracci
Photo: Marco Petracci
Video Production: Ali Naddafi
Sound Composition: Christian Losert
Robot Operations: Tor Art