Stylizing with AI: The Future of Video Editing?
Peter Fodor
Founder & CEO at AppAgent │ I help mobile games grow their user base & revenue
This week’s newsletter is an excerpt from an article written by AppAgent Motion Designer, Polina Anisimova.
As a motion designer, I'm all about creating videos that grab your attention. And to stylize my work, I'm trying out some AI tools - no expert here, just starting to play around with them.
So today, I'm testing Stable Diffusion, EbSynth, and Disco Diffusion.
Every video is basically just a sequence of frames. So my plan is to stylize one frame and then apply that style to the rest of them.
Stable Diffusion
First, I installed Stable Diffusion on my laptop in order to have more control over the result. Then I used After Effects to render my video as a png sequence. Now, it's time to see what these AI tools can really do!
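If you don't have After Effects handy, ffmpeg can produce the same kind of numbered PNG sequence. Here's a small Python sketch that just builds the ffmpeg command; the file paths and frame rate are placeholders, not values from my project:

```python
# Sketch: extracting a numbered PNG sequence from a video with ffmpeg.
# After Effects does this via its render queue; ffmpeg is a free alternative.

def png_sequence_cmd(video_path: str, out_dir: str, fps: int = 24) -> list:
    """Build (but don't run) an ffmpeg command that dumps every frame as a PNG."""
    return [
        "ffmpeg", "-i", video_path,
        "-vf", f"fps={fps}",
        # %04d gives zero-padded names (frame_0001.png, ...) so frames sort correctly
        f"{out_dir}/frame_%04d.png",
    ]

cmd = png_sequence_cmd("clip.mp4", "frames", fps=24)
print(" ".join(cmd))
```

Run the resulting command with `subprocess.run(cmd)` (or paste it into a terminal) to get the frames on disk.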
When stylizing video frames, the creative possibilities are virtually limitless, with a wide range of settings to tweak, prompts to try, and style models to explore. But we can't let Stable Diffusion get too creative here, because that leads to inconsistent results across frames. In practice, that means keeping the img2img denoising strength low, so each generated frame stays close to its source frame.
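In Stable Diffusion's img2img mode, the denoising strength controls how far each output may drift from its source frame, and a fixed seed further helps frame-to-frame consistency. Here's a minimal sketch of a batch request payload, assuming the AUTOMATIC1111 web UI running locally with its API enabled (the endpoint and field names come from that project; the URL, prompt, and values below are illustrative):

```python
import base64

# Assumes the AUTOMATIC1111 Stable Diffusion web UI is running with --api;
# /sdapi/v1/img2img and these field names come from that project.
API_URL = "http://127.0.0.1:7860/sdapi/v1/img2img"  # hypothetical local instance

def img2img_payload(frame_png: bytes, prompt: str) -> dict:
    """Build an img2img request that stays close to the source frame."""
    return {
        "init_images": [base64.b64encode(frame_png).decode("ascii")],
        "prompt": prompt,
        "denoising_strength": 0.35,  # low = less "creativity", more consistency
        "seed": 1234,                # fixed seed, for the same reason
        "steps": 20,
    }

payload = img2img_payload(b"\x89PNG...", "watercolor illustration, soft palette")
print(sorted(payload.keys()))
```

You would POST this payload once per frame of the PNG sequence, reusing the same prompt and seed throughout.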
With the settings adjusted to our liking, it's time to hit the Generate button and let the AI work its magic. Once it's done, we'll have a stylized png sequence of our video saved in the output directory.
All we have to do now is head back to After Effects, import that sequence, adjust the speed, and switch on frame blending for a smoother result.
Ok, we have our video now, but is there a way to make it better? Let's try EbSynth.
EbSynth
EbSynth propagates the style of one or more hand-stylized keyframes across every frame of a video. Using just one stylized keyframe doesn't hold up for complex movements, such as a head turning or eyes closing. To handle those, you'll need more keyframes, and EbSynth blends between them to create a smooth transition.
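EbSynth's actual synthesis is patch-based and far more sophisticated, but the keyframe-blending idea is easy to picture: between two stylized keyframes, each in-between frame gets a weighted mix of the two. A toy Python sketch of that weighting (my illustration, not EbSynth's algorithm):

```python
def blend_weight(frame: int, key_a: int, key_b: int) -> float:
    """Weight of keyframe B for a frame lying between keyframes A and B (linear)."""
    if frame <= key_a:
        return 0.0
    if frame >= key_b:
        return 1.0
    return (frame - key_a) / (key_b - key_a)

def blend_pixel(pa: tuple, pb: tuple, w: float) -> tuple:
    """Cross-fade one RGB pixel between the two stylized keyframes."""
    return tuple(round(a * (1 - w) + b * w) for a, b in zip(pa, pb))

# Frame 15 sits halfway between keyframes 10 and 20:
w = blend_weight(15, 10, 20)
print(w)                                          # 0.5
print(blend_pixel((0, 0, 0), (200, 100, 50), w))  # (100, 50, 25)
```

The more keyframes you stylize, the shorter each blend span becomes, which is why extra keyframes smooth out complex motion.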
Check out this example, which demonstrates the difference between Stable Diffusion's img2img batch function and EbSynth.
My results were not perfect, but I was only testing the software. With more effort and careful selection of stylized keyframes that match the source footage, the results can be impressive.
Now let's test Disco Diffusion.
Disco Diffusion
This tool runs as a Colab notebook backed by your Google Drive. All we have to do is upload a video (not a png sequence), adjust some settings (there are a loooooot of them… thankfully, they provide a helpful cheat sheet with explanations), and write our prompts.
After that, we just run the code and wait. In no time, a new folder appears on our Drive with the png sequence, plus a final video file once all frames are rendered.
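Among all those settings, only a handful matter for video input. The sketch below names them the way I remember them from the public Disco Diffusion notebook (animation_mode, video_init_path, extract_nth_frame); treat the names as approximate and check the cheat sheet for your version, and note the path and prompt are placeholders:

```python
# Sketch of the Disco Diffusion settings relevant to video input.
# Setting names follow the public notebook as best I recall; verify them
# against the cheat sheet bundled with your notebook version.

settings = {
    "batch_name": "StylizedClip",      # output folder name on Drive
    "animation_mode": "Video Input",   # read a video instead of a single init image
    "video_init_path": "/content/drive/MyDrive/clip.mp4",  # placeholder path
    "extract_nth_frame": 1,            # 1 = stylize every frame of the video
    "text_prompts": {0: ["a watercolor city at dusk"]},    # illustrative prompt
}

for key, value in settings.items():
    print(f"{key}: {value}")
```

Raising extract_nth_frame (e.g. to 2 or 3) trades smoothness for much shorter render times, which is handy for quick tests.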
One thing I really appreciate about this AI tool is that it delivers a consistent style across all frames and creates smooth transitions between them, making it perfect for videos that require seamless motion, like TikTok or Instagram Reels.
Of course, you can adjust the settings to your liking, but there are also some drawbacks to consider. For one, you don't have much control over the final style.
So to sum it up, while there is no one-size-fits-all solution for creating the perfect video using just one stylized frame or text prompts, there are many different approaches you can take. The tools I've shown here are just the beginning of the process. What's exciting is that there are seemingly endless ways to stylize videos by combining different tools and techniques.
This was just a sneak peek. For more details and technical specifications, check out the full article here.
Today's Insights
Indie filmmaker Karen X. Cheng points out the financial benefits that generative AI tools may offer others in her industry:
“Being an indie filmmaker these days can be SO hard and SO expensive. To buy or rent expensive gear and lighting equipment, to build sets, source wardrobe, location, etc etc. I'd like to see more tools democratize this process, and make filmmaking more affordable. While this tech is VERY early days right now (and the results are more suitable for storyboards rather than finished production), I do see this tech rapidly getting better, and this is going to open up so many possibilities for creators.”
Check out how she’s experimenting with Runway GEN-1 in her filmmaking HERE.
Mind-blowing News
Let’s see what next week will bring, stay tuned (and subscribe)!
Comments

Senior Growth Marketing Manager at Freeletics, Freelance Consultant | Work with me - 41anton.com (1y):
"That's crazy."

UX/UI Designer / Art Director / Full Stack Designer (1y):
"Looks interesting, but still too rough for real-world application. The results are too random and you lack control over the effect. And imagine if the client wanted even a minor change :D"

Founder, Sovran | Modular video ad platform for UA and Growth teams to 10x creative testing velocity with 76% less manual work (1y):
"Amazing! Deforum is also good for video input, and you can provide a video mask so it doesn't stylize certain areas. ControlNet batch img2img works too. I have to test EbSynth next."

Illuminated Sculptures for the Home | virtualens.art (1y):
"I know these are video tests. The potential for major-league AI special effects will be huge in the near future!"

Seasoned Media & Gaming Professional | Dedicated to Elevating Your Gaming Experience with Expert Insights (1y):
"It looks really cool!"