Tutorial series for how to use Stable Diffusion both on Google Colab and on your PC with Web UI interface - Install, Run, DreamBooth, Train, Models

Here is the list of videos, in the order to follow:

All videos are very beginner friendly, skipping no steps and covering pretty much everything.

Playlist link on YouTube: Stable Diffusion - Dreambooth - txt2img - img2img - Embedding - Hypernetwork - AI Image Upscale

https://www.youtube.com/watch?v=AZg6vzWHOTA

1.)

Easiest Way to Install & Run Stable Diffusion Web UI on PC by Using Open Source Automatic Installer



https://www.youtube.com/watch?v=aAyvsX-EpG4

2.)

How to use Stable Diffusion V2.1 and Different Models in the Web UI - SD 1.5 vs 2.1 vs Anything V3



https://www.youtube.com/watch?v=Bdl-jWR3Ukc

3.)

Zero To Hero Stable Diffusion DreamBooth Tutorial By Using Automatic1111 Web UI - Ultra Detailed



https://www.youtube.com/watch?v=mfaqqL5yOO4

4.)

How To Do Stable Diffusion LORA Training By Using Web UI On Different Models - Tested SD 1.5, SD 2.1



https://www.youtube.com/watch?v=s25hcW4zq4M


5.)

How to Inject Your Trained Subject e.g. Your Face Into Any Custom Stable Diffusion Model By Web UI



https://www.youtube.com/watch?v=-6CA18MS0pY

6.)

How to Run and Convert Stable Diffusion Diffusers (.bin Weights) & Dreambooth Models to CKPT File



https://www.youtube.com/watch?v=mnCY8uM7E50

7.) If you don't have a strong GPU for training, you can follow this tutorial to train on a Google Colab notebook, generate a ckpt file from the trained weights, download it, and use it in the Automatic1111 Web UI

Transform Your Selfie into a Stunning AI Avatar with Stable Diffusion - Better than Lensa for Free


https://www.youtube.com/watch?v=2yGGorOxtbA

8.)

How to Use SD 2.1 & Custom Models on Google Colab for Training with Dreambooth & Image Generation




The topics covered in these videos are as follows:

Easiest Way to Install & Run Stable Diffusion Web UI on PC by Using Open Source Automatic Installer

  • 0:00 Intro - we are going to show 2 ways to install web UI on the PC
  • 0:28 Developer of the Automatic Installer of the Stable Diffusion Web UI
  • 0:31 Explanation of why we can trust this installer
  • 0:58 Where to download the automatic installer
  • 1:26 How to start installing with the automatic installer; running it as administrator is the key
  • 2:08 The installer EXE's opening screen and starting the installation
  • 2:28 How you can find our Discord link to join and ask any questions
  • 3:25 Where you can find more information related to different Stable Diffusion models and vae files
  • 3:50 Pick the Web UI installation folder and start installation
  • 5:29 How to send a shortcut of the installed launcher to the desktop and start the Stable Diffusion Web UI application
  • 6:06 The launcher interface and settings
  • 7:23 Which other video you should watch to understand the Stable Diffusion Web UI better
  • 9:16 The web interface is started and we do our first image generation
  • 10:46 Entering prompt to start image generation
  • 11:10 How to upscale an image by using AI - awesome quality
  • 12:04 Starting manual installation of the Stable Diffusion Web UI from its GitHub folder
  • 14:48 Fixing the "another version of this product is already installed" error during Python installation
  • 15:55 How to verify the installed and actively selected Python version (see the snippet after this list)
  • 16:19 How to change the default Python via the PATH environment variable
  • 17:58 How to run the Web UI from the manually installed folder after installing Python
  • 18:44 Fixing an error encountered while running the Web UI
  • 20:15 How to install downloaded Stable Diffusion models
  • 21:17 How to open the started Web UI application
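
As a quick reference for the Python version checks above, these two Windows commands print the installed version and show which python.exe the PATH resolves first (a minimal sketch; the reported paths depend on your machine):

    python --version
    where python

If the wrong version is listed first, move the desired Python's folder higher in the PATH variable, as shown in the video.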

How to use Stable Diffusion V2.1 and Different Models in the Web UI - SD 1.5 vs 2.1 vs Anything V3

  • 0:00 Introduction to the video
  • 0:38 Official page of Stability AI who released Stable Diffusion models
  • 1:14 How to download the official Stable Diffusion version 2.1 model (768x768 pixels)
  • 1:44 How to copy the downloaded version 2.1 model into the correct Web UI folder
  • 2:05 Where to download the necessary .yaml files, which are the configuration files of Stable Diffusion models
  • 2:41 Where and how to save the .yaml file in our Web UI installation
  • 3:53 Modification of command parameters in the webui-user.bat file to properly run version 2.1 (see the example after this list)
  • 4:55 What are command line arguments and where to find their full list
  • 5:28 The importance of messages displayed in the command window of the Web UI app
  • 6:05 Where to switch between models in the Stable Diffusion Web UI
  • 6:36 Test results of version SD (Stable Diffusion) 1.5 with generic keywords
  • 7:18 The important thing you need to be careful about when testing and using models
  • 8:09 Test results of version SD (Stable Diffusion) 2.1 with generic keywords
  • 9:20 How to load and use Analog Diffusion and its test results with generic keywords
  • 9:57 Where to get the .yaml file for version 1.x based models and how to use it
  • 10:36 Test results of Stable Diffusion Anything V3
  • 11:28 Where you can find different Stable Diffusion models
  • 12:17 Ending speech of the video
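
For reference, here is a minimal sketch of what the modified webui-user.bat can look like for the 2.1 model. The exact flags depend on your GPU; --no-half is an assumption that helps cards which otherwise render black images with the 768-pixel model:

    @echo off
    set PYTHON=
    set GIT=
    set VENV_DIR=
    rem --xformers speeds up attention; --no-half avoids the black-image
    rem issue some GPUs show with the 768-pixel 2.1 model
    set COMMANDLINE_ARGS=--xformers --no-half
    call webui.bat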

Zero To Hero Stable Diffusion DreamBooth Tutorial By Using Automatic1111 Web UI - Ultra Detailed

  • 0:00 Introduction to this grand-master-level yet most beginner friendly Stable Diffusion DreamBooth tutorial using the Automatic1111 Web UI
  • 3:11 How to install DreamBooth extension to the Web UI
  • 4:09 How to update installed extensions on the Web UI
  • 4:35 Introduction to DreamBooth extension tab
  • 4:45 Training model generation for DreamBooth
  • 5:34 How to download official SD model files
  • 6:21 Training model selection and settings tab of the DreamBooth extension
  • 7:36 What is training steps per image (epochs)
  • 8:24 Checkpoint saving frequency
  • 9:15 What is training batch size in DreamBooth training and how to set it properly
  • 10:47 Set gradients to none when zeroing
  • 11:24 Gradient checkpointing
  • 12:04 Image processing and resolution
  • 12:39 Horizontal flip and Center crop
  • 12:50 What is Sanity sample prompt and how to utilize it to understand overtraining
  • 13:30 Best options to set in Advanced tab of DreamBooth extension
  • 14:22 Step Ratio of Text Encoder Training
  • 14:49 Concepts tab of the DreamBooth extension
  • 15:27 How to crop images from any position with Paint.NET or use the birme.net website
  • 17:22 Setting training dataset directory
  • 17:44 What are classification images
  • 18:46 What is Instance prompt
  • 19:05 How to and why to pick your instance prompt as a very rare word (very crucial)
  • 21:52 Class of the subject
  • 22:15 Everything about class prompt
  • 22:55 Sample prompt
  • 23:30 Class images per instance
  • 25:00 Number of samples to generate
  • 26:27 Teaching multiple concepts in one run
  • 28:24 Saving tab
  • 29:10 How to generate checkpoints during training
  • 30:52 Generating class images before starting training
  • 33:28 What is batch size in txt2img tab
  • 36:09 Start training
  • 38:25 First samples/previews of training
  • 39:13 Sanity prompt sample
  • 39:54 How to understand overtraining with sanity samples
  • 40:34 How to properly prepare your training dataset images
  • 43:15 Checkpoint saving during training
  • 44:30 What is the LR (learning rate) displayed in cmd during training
  • 45:38 How to continue / resume training if an error occurs or you cancel it
  • 46:41 We started to overtrain and how we noticed it
  • 48:24 How to start generating our subject (face) images from best trained checkpoint
  • 50:09 What is prompt strength / attention / emphasis and how to increase it (see the prompt example after this list)
  • 51:17 How to increase image quality with negative prompts
  • 51:50 How to get your taught subject with the correct prompting
  • 52:31 What is CFG and why should we increase it
  • 52:54 How to try multiple CFG scale values by using the X/Y plot
  • 54:54 Analyzing CFG effect
  • 56:03 How to test different artist styles with different CFG scales by using X/Y plot
  • 1:00:47 How to use prompt matrix
  • 1:02:54 Prompts from file or text box to test many different prompts
  • 1:03:57 Generate thousands of images while sleeping
  • 1:04:22 PNG info to learn used prompts, CFG, seed and others
  • 1:07:00 Extras tab to upscale images by using AI models with awesome quality
  • 1:09:54 How to improve eye and face quality by using GFPGAN
  • 1:11:35 How to continue training from any saved ckpt checkpoint
  • 1:12:06 How to upload your trained model to Google Colab and generate images there
  • 1:14:19 How to teach a new subject to your already trained model
  • 1:15:55 How to use filewords for training
  • 1:21:52 What is fine-tuning and how it is done
  • 1:23:10 Hybrid training
  • 1:24:39 How to understand the out-of-memory error
  • 1:25:39 Lowest GPU VRAM settings
  • 1:27:35 How to batch preprocess images
  • 1:31:47 How to generate very accurate descriptions by using the GIT large model
  • 1:33:19 How to inject your trained subject into any custom / new model
  • 1:37:36 Where the model hash is written and how to compare hashes
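
As a quick reference for the prompt strength / attention part above: the Automatic1111 Web UI parses parentheses as emphasis, where (word) multiplies attention by 1.1, ((word)) applies that twice, and (word:1.4) sets the weight explicitly. A minimal sketch, with ohwx standing in for whatever rare instance token you trained:

    photo of ohwx person, (sharp focus:1.3), ((highly detailed))
    Negative prompt: blurry, deformed, extra fingers, bad anatomy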

How To Do Stable Diffusion LORA Training By Using Web UI On Different Models - Tested SD 1.5, SD 2.1

  • 0:00 Introduction speech
  • 1:07 How to install the LoRA extension to the Stable Diffusion Web UI
  • 2:36 Preparation of training set images by properly sized cropping
  • 2:54 How to crop images using Paint.NET, an open-source image editing software
  • 5:02 What is Low-Rank Adaptation (LoRA)
  • 5:35 Starting preparation for training using the DreamBooth tab - LoRA
  • 6:50 Explanation of all training parameters, settings, and options
  • 8:27 How many training steps equal one epoch
  • 9:09 Save checkpoints frequency
  • 9:48 Save a preview of training images after certain steps or epochs
  • 10:04 What is batch size in training settings
  • 11:56 Where to set LoRA training in SD Web UI
  • 13:45 Explanation of Concepts tab in training section of SD Web UI
  • 14:00 How to set the path for training images
  • 14:28 Classification Dataset Directory
  • 15:22 Training prompt - how to set what to teach the model
  • 15:55 What is Class and Sample Image Prompt in SD training
  • 17:57 What is Image Generation settings and why we need classification image generation in SD training
  • 19:40 Starting the training process
  • 21:03 How and why to tune your Class Prompt (generating generic training images)
  • 22:39 Why we generate generic regularization images with the class prompt
  • 23:27 Recap of the setting up process for training parameters, options, and settings
  • 29:23 How much GPU, CPU, and RAM the class regularization image generation uses
  • 29:57 Training process starts after class image generation has been completed
  • 30:04 Displaying the generated class regularization images folder for SD 2.1
  • 30:31 The speed of the training process - how many seconds per iteration on an RTX 3060 GPU
  • 31:19 Where LoRA training checkpoints (weights) are saved
  • 32:36 Where training preview images are saved and our first training preview image
  • 33:10 Deciding when to stop training
  • 34:09 How to resume training after training has crashed or you close it down
  • 36:49 Lifetime vs. session training steps
  • 37:54 After 30 epochs, resembling images start to appear in the preview folder
  • 38:19 The command line printed messages are incorrect in some cases
  • 39:05 Training step speed, a certain number of seconds per iteration (IT)
  • 39:25 Results after 5600 steps (350 epochs) - it was sufficient for SD 2.1
  • 39:44 How I'm picking a checkpoint to generate a full model .ckpt file
  • 40:23 How to generate a full model .ckpt file from a LoRA checkpoint .pt file
  • 41:17 Generated/saved file name is incorrect, but it is generated from the correct selected .pt file
  • 42:01 Doing inference (generating new images) using the text2img tab with our newly trained and generated model
  • 42:47 The results of SD 2.1 Version 768 pixel model after training with the LoRA method and teaching a human face
  • 44:38 Setting up the training parameters/options for SD version 1.5 this time
  • 48:35 Re-generating class regularization images since SD 1.5 uses 512 pixel resolution
  • 49:11 Displaying the generated class regularization images folder for SD 1.5
  • 50:16 Training of Stable Diffusion 1.5 using the LoRA methodology and teaching a face has been completed and the results are displayed
  • 51:09 The inference (text2img) results with SD 1.5 training
  • 51:19 You have to do more inference with LoRA since it has less precision than DreamBooth
  • 51:39 How to give more attention/emphasis to certain keywords in the SD Web UI
  • 52:51 How to generate more than 100 images using the script section of the Web UI
  • 54:46 How to check PNG info to see used prompts and settings
  • 55:24 How to upscale using AI models
  • 56:12 Fixing face image quality, especially eyes, with GFPGAN visibility
  • 56:32 How to batch post-process
  • 57:00 Where batch-generated images are saved
  • 57:18 Conclusion and ending speech

How to Inject Your Trained Subject e.g. Your Face Into Any Custom Stable Diffusion Model By Web UI

  • 0:00 Introduction to how to inject / merge / combine your models by using checkpoint merger
  • 1:48 Start of the tutorial
  • 1:57 The training dataset used for my face-trained model
  • 2:12 The image quality of the default trained model (SD 1.5 official version)
  • 2:44 How to inject your trained info from your trained model into a new custom model
  • 3:04 What are the primary, secondary, and tertiary models
  • 3:32 The strategy for extracting your trained subject from the trained model and injecting it into a new custom model
  • 4:31 What is the Checkpoint Merger multiplier
  • 5:01 The Add Difference selection (see the sketch after this list)
  • 5:25 How to use newly merged model
  • 5:54 How to select proper prompt strength and CFG value for the new subject injected model
  • 9:22 How to join our discord channel to ask anything and get support for free
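
For the Add Difference mode above, what the Checkpoint Merger effectively computes per weight tensor is primary + (secondary - tertiary) x multiplier: with a multiplier of 1.0, the full difference your training added on top of the base model is injected into the custom model. A minimal Python sketch of that arithmetic (an illustration of the formula, not the Web UI's actual code):

    import torch

    def add_difference(a: torch.Tensor, b: torch.Tensor,
                       c: torch.Tensor, m: float) -> torch.Tensor:
        # a: primary (custom model), b: secondary (your trained model),
        # c: tertiary (the base model b was trained from), m: multiplier
        return a + (b - c) * m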

How to Run and Convert Stable Diffusion Diffusers (.bin Weights) & Dreambooth Models to CKPT File

  • 0:00 How to run Stable Diffusion Diffusers (.bin weight) models
  • 0:41 How to install Visual Studio Community edition and Python
  • 1:26 How to compose a Visual Studio Python project
  • 2:40 How to install necessary libraries in a Visual Studio Python project such as CUDA enabled/compiled PyTorch, Torch, Diffusers, Transformers
  • 5:43 How to install Accelerate in a Visual Studio Python project
  • 6:04 How to run / start Hugging Face Diffusers or any type of Hugging Face Python project inside a Visual Studio Python project
  • 9:12 How to convert Stable Diffusion Diffuser project / model into a ckpt file
  • 10:00 How to download / clone entire repository of a Hugging Face model while preserving its structure and file names by using Git Bash
  • 11:29 How to download and use the convert_diffusers_to_original_stable_diffusion.py script to generate a ckpt file (see the command after this list)
  • 14:04 How to load the generated ckpt file into the Automatic1111 Web UI application
  • 15:31 How to fix the "size mismatch for model.diffusion_model.input_blocks" error that occurs with the newly generated ckpt file
  • 16:13 We generated our first artworks with lambdalabs/dreambooth-avatar
  • 16:29 How to teach your own face to the lambdalabs/dreambooth-avatar model with DreamBooth to generate your avatar portrait artwork
  • 17:59 How to join our Discord channel to get help and support us on Patreon
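
For reference, the conversion step above boils down to a single command. This is a sketch under assumed paths (the script lives in the scripts folder of the Hugging Face diffusers repository; --half is optional and roughly halves the file size by saving fp16 weights):

    python convert_diffusers_to_original_stable_diffusion.py ^
        --model_path ./dreambooth-avatar ^
        --checkpoint_path ./dreambooth-avatar.ckpt --half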

Transform Your Selfie into a Stunning AI Avatar with Stable Diffusion - Better than Lensa for Free

  • 0:00 Introduction and the content
  • 2:04 How to check that you have enough Google Drive space
  • 2:18 Starting to prepare the Google Colab notebook for Stable Diffusion DreamBooth model training
  • 5:01 Register, log in, and generate a Hugging Face token for training of the AI model
  • 6:56 Continuing the training setup of Stable Diffusion DreamBooth
  • 7:41 Configuring the settings of the Stable Diffusion DreamBooth model
  • 9:08 Providing our own photos to teach the model our face
  • 11:58 How to install Paint.NET for image cropping
  • 12:12 How to crop and prepare your training images by using Paint.NET
  • 13:36 How to bulk-crop images online with the birme.net website
  • 14:50 How to quickly check all image dimensions in a folder by using sort options (width, height) in the detailed folder view
  • 15:26 How to upload training images to Google Colab
  • 17:53 Starting training with the final settings / options
  • 18:11 Optimal parameters for training of Stable Diffusion Dreambooth model AI
  • 22:42 Training of the AI model done
  • 25:55 Exiting the application/notebook completely and starting again to show how to use the trained model
  • 28:00 Starting to generate AI / Lensa app magic avatars
  • 30:20 First avatar is generated and displayed
  • 31:14 Explanation of the positive / inference prompt input to generate avatar images
  • 32:44 Explanation of the negative prompt input (to avoid such images) and the guidance_scale & num_inference_steps parameters of Stable Diffusion (see the sketch after this list)
  • 36:47 New avatars with different guidance_scale parameter
  • 37:34 How to generate colored portrait avatars / profile images
  • 38:57 Showcasing different styles of generated avatars
  • 40:00 Debated usage of artist styles
  • 40:26 Continue to generate more artworks
  • 42:18 Prompts for Asian-style artworks such as Korean, Japanese, or anime
  • 44:45 A long-haired, different eye color version of me
  • 46:43 Adding armor keyword
  • 47:22 Further tuning of input prompt
  • 51:21 Cherry picking the results
  • 54:27 How to generate hundreds or thousands of magic avatars as a batch
  • 55:59 Clip of ~200 Stable Diffusion AI generated avatars
  • 57:40 Ending talk and discussion
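
For the inference parameters discussed above, here is a minimal diffusers sketch. The weights folder and the zwx token are assumptions standing in for wherever the notebook saved your trained model and whatever instance token you used:

    import torch
    from diffusers import StableDiffusionPipeline

    # load the DreamBooth-trained weights saved by the Colab notebook
    pipe = StableDiffusionPipeline.from_pretrained(
        "/content/stable_diffusion_weights/800",  # hypothetical output folder
        torch_dtype=torch.float16,
    ).to("cuda")

    images = pipe(
        prompt="portrait of zwx person as a fantasy knight, intricate armor",
        negative_prompt="blurry, deformed, bad anatomy",
        guidance_scale=9.0,      # how strongly the prompt is followed
        num_inference_steps=50,  # denoising steps
        num_images_per_prompt=4,
    ).images
    for i, img in enumerate(images):
        img.save(f"avatar_{i}.png")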

How to Use SD 2.1 & Custom Models on Google Colab for Training with Dreambooth & Image Generation

  • 0:00 How to use different custom models in the Google Colab notebook
  • 1:06 How to fix the revision parameter to make custom models work in the Stable Diffusion Google Colab DreamBooth notebook
  • 2:16 How to use Stable Diffusion 2.1 on the Google Colab Dreambooth notebook
  • 2:58 How to change vae files to get better quality results with Stable Diffusion
  • 3:24 How to do inference (image generation) with Stable Diffusion 2.1 on the Google Colab Dreambooth notebook
  • 4:03 Necessary code changes to run version 2.1 on the Stable Diffusion Google Colab DreamBooth notebook (see the sketch after this list)
  • 4:40 Generating 768x768 pixels images with SD 2.1 on the Google Colab notebook
  • 5:09 How to switch back to SD version 1.5 on the Stable Diffusion Google Colab DreamBooth notebook
  • 6:05 You don't have to do training to do inference (generating images)
  • 6:19 You can do training on different custom models as well
  • 6:30 When I say version 1.5, I refer to all models trained from 1.x Stable Diffusion models
  • 7:04 The code changes between major version numbers (1.x vs 2.x)
  • 7:27 Ending speech of the tutorial guide
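
The 2.1 code changes discussed above essentially amount to pointing the pipeline at the official 2.1 repository and generating at 768x768. A minimal sketch (the prompt is just an example):

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-1",  # official 768-pixel 2.1 weights
        torch_dtype=torch.float16,
    ).to("cuda")

    image = pipe("a professional portrait photo", height=768, width=768).images[0]
    image.save("sd21_768.png")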
