Designer Input - 03 "How to use DALL-E 3, Meta AI"
Hello there,
This Week’s Topics:
Do you want to read the full version of the Designer Input? Sign up here.
Meta had big announcements this week, including:
Instagram has introduced two new AI-powered features:
Restyle allows users to transform their photos by applying different visual styles using text prompts. For example, you can type "watercolor" to make your photo look like a watercolor painting.
Backdrop uses AI to change the background scene of a photo while keeping the main subject in place. You can use a prompt like "in a street in Paris" to put your subject against a new background.
The generated images will be labeled as AI-created to distinguish them from human-made content.
In addition, you can now chat with 28 new celebrity chatbots across Instagram, Messenger, and WhatsApp. You can ask them questions, get advice, and use them like personal ChatGPTs.
Audio conversations with the bots are expected to launch next year.
DALL-E 3 is available - for free!
After the announcement of DALL-E 3 last week, it is now publicly available to anyone inside the Bing Image Creator. And yes, it's completely free.
If you have used Bing Image Creator before, you may need to open it in a different web browser to get access to the DALL-E 3 model instead of the older 2.5 version.
But according to a Twitter post by a developer, DALL-E 3 will be rolling out to everyone over the next few days. So if you're seeing this later than Monday, you probably don't need to download a new browser.
I've been using it since yesterday, and it's quite impressive for many use cases. You can create images with accurate in-image text, memes, and photos in a wide range of styles.
DALL-E 3 brings clear improvements in style variety and in how faithfully the generated image follows the text prompt.
However, I'm less impressed with architectural images. I couldn't reach the same level of quality that Midjourney and SDXL deliver. DALL-E 3 is still pretty good here, but the results aren't as exciting as the other kinds of examples.
You can see two images generated with the same prompt - one from Midjourney and one from DALL-E 3.
For the architectural image, Midjourney was able to capture more realistic lighting, shadows, and material details compared to DALL-E 3, which looks more simplified.
So for architectural visualization, it seems Midjourney and Stable Diffusion may still have an edge over DALL-E 3 right now. DALL-E 3 produces amazing results for many use cases, but detailed architectural renders may be one area where it still has room to improve.
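If you'd rather run DALL-E 3 from a script instead of the Bing Image Creator web page, OpenAI also exposes it through its paid Images API. This is a minimal sketch using the official openai Python package; the prompt and size are just illustrative, and this is a separate (not free) route from the Bing one described above:

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Generate a single image with DALL-E 3; prompt and size are illustrative.
result = client.images.generate(
    model="dall-e-3",
    prompt="a watercolor-style photo of a brick townhouse on a rainy Paris street",
    size="1024x1024",
    n=1,  # DALL-E 3 accepts only one image per request
)

print(result.data[0].url)  # temporary URL of the generated image

The free Bing Image Creator route needs no code at all; the API route is only useful once you want DALL-E 3 inside an automated workflow.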
Let me know your thoughts!
3 Tools for Designer 2.0s
1) Rayon: Your software for collaborative space design. Create floor plans and mood boards together with your team on the web. (Freemium)
2) Speckle: Extract and exchange data in real time between the most popular AEC applications using its tailored connectors. (Free)
3) LoRA Roulette: Combine two random LoRAs, load them into SDXL, and generate images with them. (Free) See the sketch below for what that combination looks like in code.
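To give a sense of what LoRA Roulette is doing under the hood, here is a rough sketch of stacking two LoRAs on top of SDXL with Hugging Face's diffusers library. The repo IDs, adapter names, prompt, and adapter weights are placeholders I've made up for illustration; this is my own reconstruction of the workflow, not LoRA Roulette's actual code:

import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base model (fp16 on GPU to keep memory manageable).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Load two LoRAs as named adapters. The repo IDs below are placeholders --
# swap in whichever two LoRAs the roulette hands you.
pipe.load_lora_weights("some-user/first-style-lora-sdxl", adapter_name="style_a")
pipe.load_lora_weights("some-user/second-style-lora-sdxl", adapter_name="style_b")

# Activate both adapters at once and weight their influence.
pipe.set_adapters(["style_a", "style_b"], adapter_weights=[0.7, 0.7])

image = pipe(
    "a cozy reading nook with a large window and soft morning light",
    num_inference_steps=30,
).images[0]
image.save("lora_mix.png")

The two adapter_weights values are the main knob: pushing one toward 1.0 and the other toward 0.3 lets one style dominate instead of blending them evenly.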
Stories Worth Reading - Picks for You
Thank you for reading and see you next time!
Do you want to read the full version of the Designer Input? Sign up here.