A neon-noir automotive showcase created in Unreal Engine 5, blending modern car design with cyberpunk aesthetics. The project features custom neon material systems, dynamic ray-traced reflections, and atmospheric lighting to create an immersive retro-futuristic environment.
Technical Specifications
Unreal Engine 5
NVIDIA RTX Ray Tracing
Custom Neon Material System
Dynamic Lighting Setup
Real-time Rendering
This visualization explores the intersection of classic automotive presentation and synthwave aesthetics, creating a unique atmosphere through careful attention to lighting, materials, and environmental design.
Revolutionizing character animation, ACE TTS brings MetaHumans to life with realistic voice modulation, creating immersive experiences in gaming, film, and other interactive media.
Introducing the innovative ACE TTS plugin, which revolutionizes how you create lifelike MetaHumans with stunning voice and lip sync capabilities.
This powerful tool utilizes NVIDIA ACE technology to enhance your users’ experience by making their virtual characters even more realistic and engaging.
This video showcases V2 of the ComfyUI Style-Transfer node, designed to create unique, experimental visuals using TensorFlow’s Neural Style Transfer technology. 🎨✨
Watch how you can transform your content into captivating, artistic images that will surprise and engage your audience. From custom nodes to a seamless workflow, this tutorial covers everything you need to create stunning visuals for your projects.
Hello and welcome.
Today I want to demonstrate a style transfer workflow using ComfyUI.
As you may know, I’ve been working on a fast style transfer node.
First of all, you need to stop your ComfyUI runtime.
Then copy and paste these commands into your terminal.
I will provide a link in the description.
This is just a workaround: as you can see, there are conflicts between protobuf versions.
Anyway, we’ll run Comfy again using python main.py, and hopefully this will be resolved.
So we can refresh the window. It seems that the fast style transfer node has been loaded successfully.
You can find this node in my ComfyUI node toolbox.
You need to download it as a zip, or git clone it into your ComfyUI directory, which in my case is right here.
There are a bunch of custom nodes here already, but you need to paste this whole folder into the custom nodes directory.
There’s also the fast style transfer itself; this is the original fast style transfer repo. The code is actually based on the original one from the TensorFlow tutorial.
But I’ve modified it a bit, and you can also download it onto your machine and run it locally: git clone the repo and set up the conda environment with just these two commands, and then you should be able to run it on your machine.
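For context, here is a minimal sketch of what the underlying fast style transfer looks like in plain TensorFlow, assuming the TF Hub arbitrary-image-stylization model that the official tutorial points to for fast stylization. The file paths and sizes are placeholders, not the node’s actual internals:

```python
# Hedged sketch of fast style transfer via TF Hub; not the node's real code.
import tensorflow as tf
import tensorflow_hub as hub

def load_image(path, max_dim=512):
    """Load an image, scale its longest side to max_dim, add a batch axis."""
    img = tf.io.read_file(path)
    img = tf.image.decode_image(img, channels=3, dtype=tf.float32)
    scale = max_dim / max(img.shape[0], img.shape[1])
    new_size = [int(img.shape[0] * scale), int(img.shape[1] * scale)]
    return tf.image.resize(img, new_size)[tf.newaxis, :]

# Magenta's arbitrary-image-stylization model, referenced by the TF tutorial.
hub_model = hub.load(
    "https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2"
)
content = load_image("input.jpg")  # placeholder path
style = load_image("style.jpg")    # placeholder path
stylized = hub_model(tf.constant(content), tf.constant(style))[0]
tf.keras.utils.save_img("stylized.jpg", stylized[0])
```

Treat this as a sketch of the technique rather than the node’s implementation; the custom node wraps this kind of call inside a ComfyUI graph.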
So let’s go back into ComfyUI.
We have two images: one is the input image, and the other is the style reference image. We can see that it’s a bit low quality, but let’s try and run this.
I’m doing a bit of color conversion here, but the main backbone of the effect is the fast style transfer. Then we’re moving into the VAE encoder, and I’m also injecting latent noise into the latent space, conditioned by a CLIP text encoder in which I describe the image above; basically the style image.
You can use ChatGPT, or your own words, to describe what this is doing, or what style this actually is. Be careful with the noise strength; you need to play around with that as well, but usually injecting less noise is easier, and the style will stay more coherent with the original image.
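Conceptually, the noise injection step amounts to blending Gaussian noise into the encoded latent. A rough sketch of the idea, with illustrative names rather than ComfyUI’s actual node internals:

```python
# Conceptual sketch of latent noise injection; illustrative names only.
import torch

def inject_latent_noise(latent: torch.Tensor, strength: float,
                        seed: int = 0) -> torch.Tensor:
    """Blend seeded Gaussian noise into an encoded latent.

    Lower strength keeps the output closer to the source image;
    higher strength gives the sampler more freedom to restyle it.
    """
    generator = torch.Generator().manual_seed(seed)
    noise = torch.randn(latent.shape, generator=generator)
    return latent + strength * noise.to(latent.device, latent.dtype)

# e.g. noisy_latent = inject_latent_noise(vae_latent, strength=0.3)
```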
The CFG (classifier-free guidance) value defines how much weight is given to the text encoder, in the sense of how much the text description affects the final output versus the input image. I hope this is clear.
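For reference, classifier-free guidance is usually implemented as a simple interpolation between the unconditioned and conditioned noise predictions. This is the standard formulation, shown for clarity, not code from the workflow itself:

```python
# Standard classifier-free guidance formulation, for clarity only.
import torch

def apply_cfg(noise_uncond: torch.Tensor, noise_cond: torch.Tensor,
              cfg_scale: float) -> torch.Tensor:
    # cfg_scale = 1.0 follows the conditioned prediction as-is; higher
    # values push the denoiser harder toward the text conditioning.
    return noise_uncond + cfg_scale * (noise_cond - noise_uncond)
```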
And then finally, I’m also upscaling the final output.
Now let’s just run this. We can see it’s generating the first images; they’re pretty small, I think, and it’s using a bit more VRAM now since it’s also encoding these images into latent space again. I’ll fast-forward here until the generation is done.
Okay, the images have completed; we can see them down here. This is the effect, though the image has quite a low resolution.
Let’s try with another image. If we go into our folder again, I think this one is interesting. By the way, in Comfy you can shift-select nodes and then align them. Let’s change this prompt.
Let’s ask Ollama to modify the prompt: ollama run llama3.1, I think.
Ah, actually we have a portrait here, so what I’ll do is paste this in. I don’t really know the overall picture, but I’ll just go with this and connect it up.
Okay, so ChatGPT is much better at writing Stable Diffusion prompt descriptions, but this runs locally, it’s free, and it doesn’t send any data to OpenAI for them to gather.
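If you’d rather script this step than use the terminal, here’s a hedged sketch of calling a local Ollama instance over its default HTTP API on localhost:11434. The model name and prompt text are placeholders:

```python
# Sketch: asking a local Ollama instance to rewrite a style prompt.
# Assumes Ollama's default HTTP API; model and prompt are placeholders.
import json
import urllib.request

def ollama_generate(prompt: str, model: str = "llama3.1") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ollama_generate("Describe this style image as a Stable Diffusion prompt."))
```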
So let’s paste this in here and run this again; queue it up. We’ll use a random seed; I think this is interesting. I’ll fast-forward again and see you when this is done.
Okay, the generations have finished. They’re interesting, but not quite what we want. I think something is wrong here; let’s try a different sampler, and we can try loading this quantized model, which will require less VRAM.
I’m not particularly good at maths, but let’s set this to 0.3. We’re rounding this value, so 0.4, subtracting the two latents. I’ll fast-forward again and see you when this has finished rendering.
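The latent subtraction here is just elementwise tensor math. Since the exact node wiring isn’t visible in the transcript, treat this as an illustrative pattern only:

```python
# Illustrative latent arithmetic; the video's exact node setup is unclear.
import torch

latent_a = torch.randn(1, 4, 64, 64)  # e.g. the encoded input image
latent_b = torch.randn(1, 4, 64, 64)  # e.g. the injected noise latent
blended = latent_a - 0.4 * latent_b   # weighted subtraction of the two latents
```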
Okay, the generations are done, and we’re getting pretty much the same results as before. How can I improve this? Well, first of all, we can copy the complete prompt here and paste it into the CLIP conditioning, just to give it some weight.
Something is happening with the sampling; I just want to see what happens if we bypass this node. I’ll generate again and fast-forward when this is done.
Not quite what we want, but I think it’s interesting. I think the latent noise is doing some funky stuff with the output image, so feel free to bypass all of these nodes. We’ll run the queue again, and I’ll fast-forward yet another time.
Okay, so this is with those nodes bypassed. Thank God, we did it; it’s done. So we’re just bypassing the noise injection node and upscaling the image, but we still get some artifacts in there.
Anyway, I hope you found this introduction demo interesting. I think I like this one.
I’ll save this to my downloads folder, and I hope you learned something or found it interesting. Links will be in the description. I have a ComfyUI workflow suite which will host these workflows, and I’ll also try to update the GitHub README with the appropriate workflow and link that up in the YouTube description.
Please help support the channel.
If you like it, share it.
Thanks for watching.
Hope to see you in the next one.
Goodbye.
🚀 Push the boundaries of creativity with ComfyUI’s groundbreaking Style-Transfer Node, designed to generate unique, experimental visuals using TensorFlow’s Neural Style Transfer.
Transform your content into captivating, artistic images that surprise and engage your audience.
Download the custom node and workflow.
Blend Celtic mythology with contemporary storytelling through Unreal Engine 5’s latest rendering capabilities.
🌈 Psychedelic Rapture
Immerse yourself in the transcendent journey of “Psychedelic Rapture”
as it takes you on a sonic exploration of mind-bending beats and ethereal melodies.
This captivating soundtrack seamlessly blends psychedelic trance with ambient sounds,
creating a euphoric experience that will elevate your senses and transport you to another realm.
This student semester project focuses on the creation of immersive, interactive, and engaging virtual environments. It serves as a demonstration of various tools available in Unreal Engine 5, showcasing the use of Lumen, World Building, Lighting, Materials, and Blueprints.
Key Features:
Lumen: Enhanced visual realism with dynamic global illumination
World Building: Construction of detailed and expansive virtual worlds
Advanced Lighting: Creating depth and atmosphere
Materials: Achieving lifelike textures and reflections
Blueprints: Implementing interactive elements and gameplay mechanics
Through this project, we showcase the potential of Unreal Engine 5 in creating virtual environments that captivate users and push the boundaries of realism.
My real-time showreel, created by Limbicnation, is a captivating display of interactive and immersive content rendered in Unreal Engine.
This showcase highlights my best work utilizing Unreal Engine technology.
Step into the future with “Quantum Passage”, a sci-fi animated short that brings to life a world of adventure and possibility. Utilizing the latest technology from Unreal Engine 5, “Quantum Passage” offers a visual feast for sci-fi fans, set against a backdrop of stunning graphics and captivating storytelling. Get ready to explore the frontiers of imagination and join me on this intergalactic journey.
This project is a short video that showcases how generative AI models can be used to create visually stunning art in a short period of time. The theme revolves around “Heaven and Hell,” where powerful demonic energies are depicted as emanating from within the earth, symbolizing the eternal battle between good and evil. Through the use of cutting-edge AI, the evil is portrayed as ultimately being vanquished for eternity. 💥
Technical Details & Credits
Special thanks to jpsaiart for providing initial prompt information, which I adapted slightly for this creation. I’ve used my own voice, though it still sounds a bit robotic due to the voice-cloning technology I used.
This video was generated using the Disco Diffusion Model, running on my Nvidia RTX 3090Ti Graphics Card. For those interested, I’m considering creating a tutorial to help others set up a similar workflow.
Artwork: Gero Doll | Limbicnation
Music: “Heaven and Hell (Part 2)” by Jeremy Blake from the YouTube Audio Library.
Masks and machines are older than we think they are. They’re part of the fabric of everyday life. Masks are what you wear when you don’t want to be seen, and machines are what we use to perform everyday tasks.
There is something very special about sunflowers and bees, and I like to see them. I like to think that if we had the technology and tools to observe them, we would be amazed at the dance of these tiny animals. 🐝
It is early days, but could these new digital assets be the subject of a brighter future? The question is, what value do these digital assets hold? I might argue that it is the immutable signature and transaction, linked into one gigantic network that evaluates and stores the data of the specific value at the given time.
“Illusion” was an experiment and metaphor depicting digital currencies like Bitcoin (BTC) and Ethereum (ETH) linked with machine learning. In this surreal film, I would like to show the potential of digital assets and machine learning algorithms to generate unique digital items (NFTs).
NFTs might change the digital art market and are already doing so, kickstarting it into a wider range of acceptance as a true art form. As we know, the machine needs to be programmed in the very first stages. The initiator has the script to direct the machine.
From that point onwards, every other step is based on the algorithm. Each prediction is based on a previous assumption. This makes the process volatile, and this is the case with crypto assets as well. Who knows what the outcome will be? It is a building site, if you will, and it will be determined by the community, or by the people at large.
Cryptoart has proven to be a true store of value in terms of digital art and paves the way towards a new digital economy in the art world!
Masks and machines are older than we think. There is a specific obscurity about them. Mythologically, the fascination with masks has accompanied humanity since the dawn of time; Neolithic stone-age masks date back 9,000 years! The film CovidMasks depicts a psychotic state of perception through an unaffected being. The point of view determines the absurdity of a deprecated perception as a collective whole; the collective whole’s delusion of reality would define this specific state of perception more precisely. Something as powerful as a mask and a machine combined, however, could trigger an even more futuristic concept of the fusion of man and machine. This video clearly depicts a dystopian outcome of a “cyborg human” fully dominated by the machine entity: the AI fusion quantum computer!
The moral of the story is that in a future where masks hide emotions, we lose a crucial anchor for evaluating and perceiving characters. This lack of veracity is a cognitive violation, altering and corrupting our sense of reality. As a result, this could lead to a complete loss of empathy on both sides of the character dynamic – that is, if machines ever reach the intelligence required to become sentient. However, for this to happen, the host (the biological agent) would need to merge with and ultimately gain control over the carbon-based and organic life forms.
For the 30th anniversary of the fall of the Berlin Wall, I developed three video art installation pieces together with Monomango Studio and Jens Wischnewsky.
The project was created using StyleGAN. The model was trained on a dataset of 50K+ images.
Furthermore, you can read more about the machine-learning model here -> StyleGAN Paper
For the Retune Studio Visit I developed a VR experience for the Oculus Quest.
We explored the VR space on the Quest and, as a result, came up with a simple idea for interaction: by exploring space, color, and sound in an abstract environment, we managed to create a lo-fi but engaging experience.
Personally, I think VR has the potential to dissolve the borders between art, games, technology, and experience.
Credits:
Creative director: Lois Kainhuber
Art and Code: Gero Doll
Graphic design / Art direction: Julia Parchimowicz
Director / Animator: Gero Doll
Music & Sound Design: Olivier Girardot
Mastering: Jan Driver
(excerpt)
The film addresses the vast negative impact on a society based on capital and greed. The emphasis is on the reckless approach of capital corporations towards the environment, aiming to maximize profits. It is clear that constant growth is linked with imminent and predictable collapse, which in turn contributes to an ongoing circle of destruction and reconstruction.
In this scenario of climate war and the collapse of societies, a fictional cyborg character witnesses these events, which are linked to the reckless power of greed for oil and other natural resources. This results in large-scale conflict and world war, ultimately involving nuclear weapons.
In return, the living planet counteracts with extreme weather and natural disasters: volcanic eruptions, earthquakes, hurricanes, tornadoes, drought, and many unforeseen consequences. The film does not convey a particular story, nor is it directly linked to reality, but the style and the images are projected in a coherent order to evoke the notion of a cyber-dystopian future, staged in a sci-fi scene with a ticking time bomb until doomsday. It is a collapse humans could have avoided, but history has proven them wrong: a metaphorical nightmare, into the abyss of dark forces at work, which are hauntingly real in the world’s current state.
Some might argue that conspiracies are necessary to bring about change for the better. At its core, the film addresses the most concerning components of the destruction of the environment and civilisation in the 21st century: nuclear war, followed by climate change and other disruptive modern technology which governments implement and use on societies. There are many more, and this film was made from a merely subjective standpoint. It is a future nobody would want to live in, even though these are only computer graphics and can be viewed from a “safe distance”. This is a relatively new medium and art form, more relevant in today’s societies and art galleries than ever before. The film acts as a subliminal warning. It is for the character in the film to decide how to start acting, or to re-live the nightmare over and over again.
Designed to inform and influence Finnish policy makers, the “Virtual Forest” explores how current forestry management strategies are contributing to climate breakdown. The six-minute experience, led by the voice of Timo Vesala, Professor of Meteorology at the University of Helsinki, guides users through a hyper-real forest environment, combining verified science, creativity, and domestic politics in one powerful story.
I was commissioned by Neeeu Spaces GmbH as an Environment and 3D Artist, responsible for realizing the environment in VR.
For this project, my role was to create an entirely real-time VR experience in Unity 3D. The viewer is centered, experiencing an ever-evolving environment and contemplating changes, physically and digitally.
CREDITS:
Concept: Lois Kainhuber, Tim Stolzenburg, Olivier Fröhlich, Veith Michel
Spatial Planning and Development: Tim Stolzenburg
Art Direction: Lois Kainhuber
Creative Producer: Veith Michel
Motion Design: Nino Matthei, Jan Weber
Unity coding and design: Gero Doll
Compositing: Kevin Müller
Sound Design: Olivier Fröhlich
Live Visuals: Jan Weber
Film Shooting “Anything, Anytime”
Director: Lois Kainhuber
1st AD: Veith Michel
360° DOP: Kevin Müller
Choreography: Jakob Yaw
Dancer: Jakob Yaw, Joy Alpuerto Ritter, Georgina Leo Melody
A brief reflection on, and retrospective of, internal notions, nightmares, and dreams. A complex mixture of the human psyche and how it relates to the natural processing of information.
Superstition and magic. The occult. How people around the world, and especially in Africa, relate to this old and malicious superstition throughout human history. Death is the indirect inverse of war. The film reveals the subconscious of an eminent struggle of life and death. It reveals a metaphysical world of fractured memories and dreams hidden deep from what our senses can perceive; a metaphysical world believed to directly or indirectly affect reality (or what is perceived to be “real” by the human eye).

The eye, as the viewer of all things, plays an important part in the film. It is the foreseer, much like the mind’s eye which overlooks all “inner” events. The eye in the film represents a more universal observer, a mystical being of sorts, not to be confused with any ideological or religious concepts or purpose. Nature is radical, shows no mercy, and acts out of pure necessity! Its origin is uncertain, but it has been around since what humans call the dawn of time. The eye is the scavenger of the soul; it takes in whatever visible light frequencies are allowed into its scope of observation.

In the film, the viewer explores the possibility of cultural and social elements focusing on oblivious societies. It might not seem important to the so-called first-world countries: for example, the ethical and social principles which were brought to Africa by the Europeans, with Africans living under the foundation of the Western biblical establishment without realizing what they were confronted with. The film depicts a turning point in what happens to societies, or what remains “unseen” to the mind’s eye, when dominating religious organizations form a so-called “dictatorship of thought” or “dictatorship of belief”, dictating what is to be understood through religious indoctrination. On unconscious or psychological levels, we remain at the mercy of our imagination, leaving the mind vulnerable to other states of consciousness and thereby also to ideology.
As half of West Coast cloud rap duo Blue Sky Black Death, Kingston already has a wealth of musical experience. Today, he arrives with a new sound and new experience in his side project, 88 Ultra. Collaborating with Seattle producer No Merci, “Wish For More” finds Kingston shooting a soothing dose of deep-space shoegaze and glacial electronics, melding both into a foundation where instrumental and cinematic elements collide. Shot by Seattle-based director Stephan Gray in Santiago, Chile, the video was inspired by the song’s title, playing on the idea that people always want more regardless of their socio-economic standing. The video’s wide-scope shots are met with warped animation courtesy of Berlin-based Gero Doll, and produced by Brooklyn-based dreambear.
FUTUREBIRDS is a fast moving film about controversial and future predictions on our environment and species. The specific ideology reflected in this film is not linked to any direct events, but rather is the sum of realities perceived by an infected bird.
Short interpretation by Beck Chung of “FUTUREBIRDS” on The Creators Project.
Concept / Animation / Art direction – Gero Doll
Music & Sound Design: Olivier Girardot
More info here on Future Apocalypse: Inside Gero Doll’s Sci-Fi Short ‘FUTUREBIRDS’
Advanced Truth – Synopsis
Advanced Truth is a film depicting “The Nature of Things”: visualizing the subconscious world of imagination and meta imagery, and using this “unpredictability” and these “accidents” as a metaphor for the internet. Things that seem unreal and utterly uncanny, yet resonate with many of us by stimulating the limbic system with the ancient fears and emotions of the “human animal”. This imagination is revealed by the possibilities of today’s revolutionizing computer technology and software. More important is the code, the “driving force” behind this software; maybe a sort of universal code, if you will. This “code” was also used in a metaphorical sense in the film, by triggering and capturing generative motion. The main concept behind the film Advanced Truth is to use the 3D environment of an application and to film the animated objects and events as in real life, as though we would experience them in virtual reality. Let’s see what the future brings, and how this new technology and Advanced Truth will hopefully affect the consciousness of future humans in a positive way.
The neurological alignment of the subconscious mind.
This is a work in progress of a film and an interactive experience that will reflect notions of the current state of the human being. It is also a speculation on fragments of the nature of birth and death. It is a journey into exploring the subconscious mind in particular, and the profound link between dreams and the waking state.
The film, like the interactive experience, also reflects human conflicts rooted in personal and partner relations. Furthermore, the experience reveals core motives and interpretations of today’s global conflicts and the challenges humans face with the environment, other nations, and religions.
The “Headhand” will be a central element in the film as in the interactive experience, depicting the correlation between what the head (in this case the mind) and the hand (the tool executing the task) are capable of doing. War, for example, is the most definite proof in this case; however, there is little to be done about it, since the hand is ultimately the slave of the head and will obey the orders dictated by the dominating head, the ego and beyond.
Dreams might play a “key role” in understanding and revealing the relationship of dreams to the waking state.
Love, for example, might appear as the strongest bond between cells. This becomes questioned as the character reveals inner fears, which result in troubled emotions towards the other, also largely dictated by dreams and subconscious events. This might sound like the frantic statement of a somewhat deluded character, but it is highly linked to the limbic system (also known as the emotional centre of the human brain), which motivates these reactions from an ancient point of view. This is how I think dreams in particular might play a “key role” in understanding the relationship of the “dream state” to reality, or more so to the “waking state”. In this sense, dreams can manipulate our approach to perception, and ultimately to reality too, which could also give us access to predicting certain notions of the future.
If a person is aware of the current state of consciousness and that of the dream, he or she is more likely to understand the profound link between dreams and reality.
Most of the time we are not conscious of our actions; at night our minds become like radars, receiving vast amounts of information transformed and interpreted by our minds into what we call dreams. This information is merely reflected in the video and interactive experience, as in any art form, I guess. However, this essay is not based on any scientific proof; it is a mere reflection on what art is capable of revealing in the 21st century using new media art technology.
The Unity webplayer plug-in is needed for this experience, which you can get here.
Buzzard is a short, character-driven animation inspired by video game characters.
The main character, Buzzard, runs through a chaotic landscape, accompanied by bizarre and mostly ugly, stupid characters. ENJOY 😀
Visuals: Gero Doll
Music: Olivier Girardot
Some original 3D models are from EXTW by David OReilly (davidoreilly.com/downloads/), altered and remixed by Gero Doll.
Logan’s Run is a short experimental music animation I created over the past three months.
Initially, the idea was to create a film depicting a character lost in its own subconscious.
Story:
The main character, Logan, walks with a handicapped motion through a lonely, surreal environment as she battles to survive…
Nonetheless, this does not keep her from going on.
Suddenly, strange creatures with jagged heads and marshmallow bodies creep up on her.
At the same time, crippled and manipulated spring heads attached to melting bodies go crazy.
Robot-like machines drive in circles and bump into each other: a hallucinatory trip loaded with unexpected character animation.
At the end, the “floating clown”, peacefully and cheerfully drifting through the calm, thin breeze, suddenly interrupts the chaos.
However, the floating clown shows the negative effects of consumerism, as he is surrounded by orbiting junk.
Projection mapping for the Mercedes-Benz Citan at the RAI Amsterdam, 2012.
My Role: 3D Animation and Design.
Client: Mercedes Benz
Agency: Phocus Brand
Producer: Chris Lenz
Head of 3D: Chris Smallfield
Lead Animator and Designer: Alex Janke, Gero Doll
3D Artists: Gero Doll, Nico Uthe, Nico Kahmann, Jonas Dahl
Music: Michael Kadelbach
Production: ACHT Frankfurt, Visual Catering
Client: Toshiba
Together with Matthias Winkelmann (Lead Designer), we realized this clip in a quite short production time of about two weeks. Art direction by Julian F. Krueger.