Bridge to the Ethereal: Automotive Look Development
This research and development project explored new visual approaches for premium automotive visualization, focusing on creating surreal environments that complement sophisticated vehicle design. The technical R&D concentrated on solving integration challenges between photorealistic automotive rendering and stylized, impossible landscapes.
My process involved experimental compositing techniques that maintain material authenticity while placing the vehicle within dreamlike contexts. The development included extensive testing of lighting scenarios—from golden desert backlighting to dramatic red accents—establishing a consistent technical framework while allowing for environmental variation.
Development Process
This series documents the iterative development of environmental elements and lighting scenarios. Each image represents a key milestone in creating the surreal automotive landscapes, focusing on the interaction between precision engineering and fantastical surroundings.
Technical Visualization
These Unreal Engine wireframe renderings document the technical implementation behind the surreal landscapes. Each visualization demonstrates specific challenges in material integration, lighting systems, and environmental design that were solved during development to create coherent interactions between the vehicle and its fantastical surroundings.
👥 CLIENT:
NSYNK – https://www.nsynk.de
Project completed as contracted technical visualization work
A neon-noir automotive showcase created in Unreal Engine 5, blending modern car design with cyberpunk aesthetics. The project features custom neon material systems, dynamic ray-traced reflections, and atmospheric lighting to create an immersive retro-futuristic environment.
Technical Specifications
Unreal Engine 5
NVIDIA RTX Ray Tracing
Custom Neon Material System
Dynamic Lighting Setup
Real-time Rendering
This visualization explores the intersection of classic automotive presentation and synthwave aesthetics, creating a unique atmosphere through careful attention to lighting, materials, and environmental design.
This video showcases V2 of the ComfyUI Style-Transfer custom node, designed to create unique, experimental visuals using TensorFlow’s Neural Style Transfer technology. 🎨✨
Watch how you can transform your content into captivating, artistic images that will surprise and engage your audience. From custom nodes to a seamless workflow, this tutorial covers everything you need to create stunning visuals for your projects.
Hello and welcome.
Today I want to demonstrate a style transfer workflow using ComfyUI.
As you may know, I’ve been working on a fast style transfer node.
First of all, you need to stop your ComfyUI runtime.
Then copy and paste these commands into your terminal.
I will provide a link in the description.
This is just a workaround.
You can see there are conflicts between protobuf versions.
Anyway, we’ll run Comfy again using python main.py.
And hopefully this will be resolved.
So we can refresh the window.
It seems that the fast style transfer node has been loaded successfully.
You can find this node in my ComfyUI node toolbox.
You need to download this as a zip, or git clone it, into your ComfyUI directory, which in my case is right here.
There are already a bunch of custom nodes here. You need to paste this whole folder into the custom_nodes directory.
There’s also the fast style transfer. This is the original fast style transfer repo; my code is actually based on the original TensorFlow tutorial.
But I’ve modified it a bit, and you can also download it onto your machine and run it locally. Just git clone it and set up the conda environment with these two commands, and then you should be able to run it on your machine.
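As a reference point, here is a minimal sketch of fast arbitrary style transfer along the lines of that TensorFlow tutorial, using the pre-trained stylization model from TensorFlow Hub. This is not the code of my node; the file names and the 512-pixel size are placeholders.

```python
# Minimal sketch of fast style transfer with TensorFlow Hub, roughly following
# the TensorFlow "Fast Style Transfer for Arbitrary Styles" tutorial.
# File paths and sizes are placeholders, not values taken from the actual node.
import tensorflow as tf
import tensorflow_hub as hub

def load_image(path, max_dim=512):
    """Load an image file as float32 in [0, 1] with a batch axis."""
    img = tf.io.read_file(path)
    img = tf.image.decode_image(img, channels=3, dtype=tf.float32)
    img = tf.image.resize(img, (max_dim, max_dim), preserve_aspect_ratio=True)
    return img[tf.newaxis, ...]

# Pre-trained arbitrary image stylization model from TensorFlow Hub.
model = hub.load("https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")

content = load_image("input.jpg")            # the content image
style = load_image("style_reference.jpg")    # the style reference image

# The module returns a sequence; the first element is the stylized image batch.
stylized = model(tf.constant(content), tf.constant(style))[0]
tf.keras.utils.save_img("stylized.png", stylized[0].numpy())
```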
So let’s go back into ComfyUI.
We have two images: one is the input image and the other is the style reference image. We can see it’s a bit low quality, but let’s actually try and run this.
I’m doing a bit of color conversion here, but the main backbone of the effect is the fast style transfer. Then we’re moving into the VAE encoder, and I’m also injecting latent noise into the latent space, together with conditioning from the CLIP text encoder, where I describe the image above, basically the style image. You can use ChatGPT, or your own words, to describe what this is doing or what style this actually is.
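As an aside for readers curious how such a node plugs into ComfyUI: below is a simplified, hypothetical skeleton of a custom node registration. The class name, parameters, and the placeholder blend inside `apply` are illustrative only, not the actual fast style transfer node code.

```python
# Hypothetical skeleton of a ComfyUI custom node, showing how a node like the
# fast style transfer one is wired into the graph. Names are illustrative only.
import torch

class FastStyleTransferSketch:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "content_image": ("IMAGE",),  # ComfyUI images are [B, H, W, C] float tensors
                "style_image": ("IMAGE",),
                "style_strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0}),
            }
        }

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "apply"
    CATEGORY = "image/style"

    def apply(self, content_image, style_image, style_strength):
        # Placeholder body: a real node would run the style transfer model here.
        # For illustration, blend the content with the resized style reference.
        style_resized = torch.nn.functional.interpolate(
            style_image.movedim(-1, 1), size=content_image.shape[1:3]
        ).movedim(1, -1)
        out = (1.0 - style_strength) * content_image + style_strength * style_resized
        return (out,)

# ComfyUI discovers nodes through these module-level mappings.
NODE_CLASS_MAPPINGS = {"FastStyleTransferSketch": FastStyleTransferSketch}
NODE_DISPLAY_NAME_MAPPINGS = {"FastStyleTransferSketch": "Fast Style Transfer (sketch)"}
```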
Be careful with the noise strength; you need to play around with that as well, but usually injecting less noise is easier to work with.
So the style will stay more coherent with the original image. The CFG (classifier-free guidance) value defines how much weight is given to the text conditioning, in other words how strongly the text description affects the final output relative to the input image. I hope this is clear.
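For reference, here is a minimal sketch of the classifier-free guidance combination the sampler performs internally; this is standard CFG, not code taken from the workflow itself.

```python
# Minimal sketch of classifier-free guidance (CFG): the sampler blends the
# model's unconditional and conditional predictions at each step.
import torch

def apply_cfg(pred_uncond: torch.Tensor, pred_cond: torch.Tensor, cfg_scale: float) -> torch.Tensor:
    # cfg_scale = 1.0 adds no extra push toward the text conditioning;
    # higher values give the text prompt more influence over the output.
    return pred_uncond + cfg_scale * (pred_cond - pred_uncond)
```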
And then finally, I’m also upscaling the final output.
Now let’s just run this.
We can see it’s generating the first images; they’re pretty small, I think. This is using a bit more VRAM now, since it’s also encoding these images into latent space again.
I will fast forward time here until the generation is done.
Okay.
The images have finished generating.
We can see them down here.
This is the effect.
This image has quite low resolution.
Let’s try with another image.
If we go into our folder again, I think this one is interesting.
By the way, in Comfy, you can shift-select nodes.
And then align them.
Change this prompt.
Let’s ask Ollama: ollama run llama3.1, I think.
I’ll ask it to modify the prompt, and run that.
Ah, actually we have a portrait here. So what I’ll do is paste this. I don’t really know the overall picture, but I’ll just go with this and connect it up.
Okay, so ChatGPT is much better at describing Stable Diffusion prompts, but this runs locally, it’s free, and it doesn’t send any data to OpenAI for them to gather, et cetera. So let’s paste this in here. Okay.
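If you’d rather script this step than type into the terminal, Ollama also exposes a local HTTP API on its default port; here is a rough sketch, where the model name and prompt text are just examples.

```python
# Rough sketch of asking a local Ollama model to rewrite a prompt via its HTTP
# API instead of the terminal. Model name and prompt wording are examples only.
import json
import urllib.request

payload = {
    "model": "llama3.1",
    "prompt": "Rewrite this as a short Stable Diffusion style prompt: a portrait photo",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```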
So let’s run this again, queue up.
We’ll use a random seed, I think this is interesting.
I will fast forward time again and I will see you when this is done.
Okay, so the generations have finished, they’re interesting, but not quite what we want.
I think something is wrong here. Let’s try a different sampler, and we can try loading this quantized model. This will require less VRAM.
Okay, I’m not particularly good at maths, but let’s set this to 0.3. We’re rounding this value, so 0.4, and subtracting the two latents. I will fast forward time again and see you when this has finished rendering.
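For what it’s worth, here is a rough sketch of what subtracting two latents amounts to under the hood; the helper name and the 0.4 weight are illustrative, mirroring the values mentioned above rather than the exact node behaviour.

```python
# Hedged sketch: ComfyUI latents are dicts holding a "samples" tensor, and
# subtracting two latents is plain element-wise tensor math on those samples.
import torch

def subtract_latents(latent_a: dict, latent_b: dict, weight: float = 0.4) -> dict:
    a, b = latent_a["samples"], latent_b["samples"]
    # Subtract a weighted copy of the second latent from the first.
    return {"samples": a - weight * b}
```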
Okay, so the generations are done and we’re getting pretty much the same results as before.
How can I improve this? Well, first of all, we can copy the complete prompt here and paste it into the CLIP conditioning, so it gets some weight.
Something is happening with the sampling; I just want to see what happens if we bypass this node. Okay.
I’ll generate again and fast forward the time when this is done.
Not quite what we want, but I think it’s interesting. I think the latent noise is doing some funky stuff with the output image.
So feel free to bypass all of these nodes.
And we’ll run the queue again.
And I will fast forward time yet another time.
Okay, so this is with those nodes bypassed. We did it, and then we changed the nodes. Okay, it’s done.
So that’s just bypassing the noise injection node and upscaling the image. We still get some artifacts in there.
Anyway, I hope you found this introduction demo interesting.
I think I like this one. Okay.
I’ll save this to my downloads folder, and I hope you learned something or found it interesting.
Links will be in the description.
So try to find my ComfyUI workflow suite, which will host these workflows.
And I will also try to update the GitHub README with the appropriate workflow.
I will put this in here and also link that up in the YouTube description.
Please help support the channel.
If you like it, share it.
Thanks for watching.
Hope to see you in the next one.
Goodbye.
🚀 Push the boundaries of creativity with ComfyUI’s groundbreaking Style-Transfer Node, designed to generate unique, experimental visuals using TensorFlow’s Neural Style Transfer.
Transform your content into captivating, artistic images that surprise and engage your audience.
Download the custom node and workflow.
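If you want to queue the downloaded workflow programmatically rather than through the browser UI, ComfyUI also exposes a local HTTP endpoint; here is a rough sketch, assuming the workflow was exported in API format ("Save (API Format)" in the UI) and saved under a placeholder filename.

```python
# Optional: queue a workflow through ComfyUI's local HTTP API.
# Assumes ComfyUI is running on the default port and that
# "style_transfer_workflow.json" (placeholder name) is an API-format export.
import json
import urllib.request

with open("style_transfer_workflow.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # response includes the queued prompt id
```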
This project is a short video that showcases how generative AI models can be used to create visually stunning art in a short period of time. The theme revolves around “Heaven and Hell,” where powerful demonic energies are depicted as emanating from within the earth, symbolizing the eternal battle between good and evil. Through the use of cutting-edge AI, the evil is portrayed as ultimately being vanquished for eternity.💥
Technical Details & Credits
Special thanks to jpsaiart for providing initial prompt information, which I adapted slightly for this creation. I’ve used my own voice, though it still sounds a bit robotic due to the voice-cloning technology I used.
This video was generated using the Disco Diffusion Model, running on my Nvidia RTX 3090Ti Graphics Card. For those interested, I’m considering creating a tutorial to help others set up a similar workflow.
Artwork: Gero Doll | Limbicnation
Music: “Heaven and Hell (Part 2)” by Jeremy Blake from the YouTube Audio Library.
Masks and machines are older than we think they are. They’re part of the fabric of everyday life. Masks are what you wear when you don’t want to be seen, and machines are what we use to perform everyday tasks.
It is early days, but could these new digital assets be the subject of a brighter future? The question is: what value do these digital assets hold? I might argue that it is the immutable signature and transaction, linked into one gigantic network, that evaluates and stores the data of a specific value at a given time.
“Illusion” was an experiment and a metaphor depicting digital currencies like Bitcoin (BTC) and Ethereum (ETH) linked with machine learning. In this surreal film, I wanted to show the potential of digital assets and machine learning algorithms to generate unique digital items (NFTs). NFTs might change the digital art market, and are already doing so, by kickstarting it into a wider range of acceptance as a true art form. As we know, the machine needs to be programmed in the very first stages. The initiator has the script to direct the machine.
From that point onwards, every other step is based on the algorithm. Each prediction is based on a previous assumption. This makes the process volatile, and this is the case with crypto assets as well. Who knows what the outcome will be? It is a building site, if you will, and it will be determined by the community, or by the people at large. Cryptoart has proven to be a true store of value in terms of digital art and paves the way towards a new digital economy in the art world!
For the 30th Anniversary of the fall of the Berlin Wall, I developed 3 Video Art installation pieces together with Monomango Studio and Jens Wischnewsky.
The project was created using StyleGAN; the model was trained on a dataset of 50K+ images.
Furthermore, you can read more about the machine-learning model here: StyleGAN Paper
For this project, my role was to create an entirely real-time VR experience in Unity 3D. The viewer is centered in and experiences an ever-evolving environment, contemplating changes both physical and digital.
CREDITS:
Concept: Lois Kainhuber, Tim Stolzenburg, Olivier Fröhlich, Veith Michel
Spatial Planning and Development: Tim Stolzenburg
Art Direction: Lois Kainhuber
Creative Producer: Veith Michel
Motion Design: Nino Matthei, Jan Weber
Unity coding and design: Gero Doll
Compositing: Kevin Müller
Sound Design: Olivier Fröhlich
Live Visuals: Jan Weber
Film Shooting “Anything, Anytime”
Director: Lois Kainhuber
1st AD: Veith Michel
360° DOP: Kevin Müller
Choreography: Jakob Yaw
Dancer: Jakob Yaw, Joy Alpuerto Ritter, Georgina Leo Melody
A brief reflection on and retrospective of all internal notions, nightmares and dreams. A complex mixture of the human psyche and how it relates to the natural processing of information.
Superstition and magic. The occult. How people around the world, and especially in Africa, have related to this old and malicious superstition throughout human history. Death is the indirect inverse of war. The film reveals the subconscious of an imminent struggle between life and death. It reveals a metaphysical world of fractured memories and dreams hidden deep beyond what our senses can perceive, a world believed to directly or indirectly affect reality (or what is perceived to be “real” by the human eye).

The eye, as the viewer of all things, plays an important part in the film. It is the foreseer, much like the mind’s eye which overlooks all “inner” events. The eye in the film represents a more universal observer, a mystical being of sorts, not to be confused with any ideological or religious concept or purpose. Nature is radical, shows no mercy and acts out of pure necessity! Its origin is uncertain, but it has been around since what humans call the dawn of time. The eye is the scavenger of the soul; it takes in whatever visible light frequencies are allowed into its scope of observation.

In the film, the viewer explores cultural and social elements focusing on oblivious societies. These might not seem important to the so-called first-world countries: for example, the ethical and social principles brought to Africa by the Europeans, with Africans living under the foundation of the Western biblical establishment without realizing what they were confronted with. The film depicts a turning point in what happens to societies, and what remains “unseen” to the mind’s eye, when dominating religious organizations form a so-called “dictatorship of thought” or “dictatorship of belief,” dictating what is to be understood through religious indoctrination. On unconscious or psychological levels, we remain at the mercy of our imagination, leaving the mind vulnerable to other states of consciousness and thereby also vulnerable to ideology.