How I Made My First AI Film using Dream Machine by Luma Labs and Other Tools: A Detailed Breakdown
By Amos (Guzman) Cheruiyot

Hello, film and AI enthusiasts! Welcome back to another edition, where I break down my process of making "The Sad Story," my first AI-made film. Let's first check out the poster, then continue below it.

Now on YouTube:

Here is the procedure I used to make this film:

  1. I first acquired the script by adapting an X (formerly Twitter) post by Wandia Njoya.
  2. I then asked Alvina Gachugu, a voiceover artist, to narrate the story.
  3. Used Stylar AI to describe real images (reverse prompting).
  4. Used Microsoft Bing to generate images according to the script.
  5. Used Magnific AI to upscale those images.
  6. Used Dream Machine by Luma Labs AI to convert the resulting images into videos, i.e. animate them (see the sketch just after this list).
  7. Used the CapCut Web App to simulate a newsroom and news intro.
  8. Used the CapCut Web App to lip-sync a built-in AI character to the news script and news audio.
  9. Used Artflow AI to build a clone of myself and generate an image of the clone in a news-report environment.
  10. Used Speechify AI to clone my voice.
  11. Used HeyGen AI to lip-sync my image clone to my voice clone.
  12. Used ElevenLabs AI to generate sound effects (SoundFX).
  13. Used Artflow AI to generate the narrator avatar (presenter mode).
  14. Used Puppetry AI to lip-sync the main narrator avatar to the prerecorded voice audio.
  15. Used the CapCut Video Upscaler to upscale the resulting video.
  16. Used the CapCut Desktop App to edit (piece together all the elements) and create transitions and text effects.
  17. Used TextFx.co to add 3D effects to text.
  18. Acquired the music score from the YouTube Audio Library.
  19. I then uploaded the full film to YouTube.
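A quick aside on step 6: I did all of the image-to-video work by hand in the Dream Machine web app, but Luma also offers an API, so this step could in principle be scripted. The snippet below is only a minimal sketch, assuming Luma's Python client (`lumaai`), its `generations.create` / `generations.get` calls, and the environment variable and image URL shown; none of these names come from my actual workflow, so treat them as assumptions and check Luma's current docs before relying on them.

```python
# Hypothetical sketch of step 6 (upscaled still -> short video clip).
# Assumptions: the `lumaai` Python client, its generations API, the
# LUMAAI_API_KEY environment variable, and the placeholder image URL.
import os
import time

from lumaai import LumaAI  # assumed package: pip install lumaai

client = LumaAI(auth_token=os.environ["LUMAAI_API_KEY"])  # assumed env var

# Request an image-to-video generation: the upscaled still from step 5 is
# passed as the starting keyframe, and the prompt describes the motion.
generation = client.generations.create(
    prompt="Slow cinematic push-in, sombre mood, soft evening light",
    keyframes={
        "frame0": {
            "type": "image",
            "url": "https://example.com/upscaled-still.png",  # hypothetical URL
        }
    },
)

# Poll until the render finishes, then print the resulting video URL.
while generation.state not in ("completed", "failed"):
    time.sleep(10)
    generation = client.generations.get(id=generation.id)

if generation.state == "completed":
    print("Video ready:", generation.assets.video)
else:
    print("Generation failed:", generation.failure_reason)
```

If you go this route, the polling loop is the part to watch: renders can take a few minutes, so poll gently instead of hammering the endpoint.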

That's all!

Until next time, bye!


About the author: https://bio.link/guzman_pictures

