ChatGPT 4o + Reve → Moodboards.
Looks like they can handle color codes.
Using moodboards (MBs) here as a coherence test, to see what these tools can do.

TOOLS:
+ ChatGPT 4o: Prompt + MB gen
+ Reve: MB gen
+ Magnific: Upscale

PROCESS:
01. Select hex codes
02. Build prompt
03. Generate + iterate
04. Upscale

GPT + REVE PROMPT EX:
"Create a vertical mood board collage in 3:4 ratio with a clean, minimalist layout on a soft ice-blue (E6EFF4) background. The central image is a stylish Black man in a white technical puffer jacket (F6F8F9), navy beanie (1F2F45), and white snow goggles, facing right. Surrounding images include: a jagged mountain peak in grayscale (1F2F45 shadows); an Arctic seascape with floating icebergs in muted cyan (B8D5DD); a close-up of the puffer jacket's quilted texture (F6F8F9); a close-up texture shot of thick gray knit wool fabric (5D5F62 & 2E2F31); portraits of men wearing deep navy winter jackets (2B3A4C) and beanies; two minimalist flat design illustrations of men in winter clothing matching the same color palette; and color palette swatches arranged vertically: ice blue (E6EFF4), pure white (F6F8F9), stone gray (C3C6CE), and navy black (1F2F45). Use soft, diffused lighting with no harsh shadows. Composition should feel balanced, with clean separation between images, and emphasize Arctic exploration, winter fashion, and textile detail."

CHATGPT THOUGHTS:
+ Very coherent with structured prompts
+ Translates the hex codes well
+ They are off subtly, but very close (see the sketch below for a quick way to measure the drift)
+ Great blend of lifestyle/illustration
+ Super clean composition

REVE THOUGHTS:
+ Composition is great
+ Less coherent to the prompt, but still very good
+ Added some creative detailing
+ Including the hex codes as text in the image

OVERALL THOUGHTS:
+ Was impressed with the execution
+ Played with some different prompts
+ It performed well on both platforms
+ The hex-code translation is multi-use
+ You can change the color scheme without changing much else in the image
+ The idea here is to get from 0 to 1
+ Would use this as inspiration for composition, or for building an MB wireframe

In the concepting phase, if I could get from blank page to first draft, I see that as a win.

Will be staying in the rabbit hole for a while.

PS: We went deep on this on the Midjourney: Fast Hours podcast (link in comments).

#midjourney #chatgpt #openai #ai #innovation
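The "off subtly, but very close" point is easy to quantify. Below is a minimal TypeScript sketch that compares the target hex values from the prompt against swatches eyedroppered from the generated board; the sampled values shown are hypothetical placeholders, not measurements from this output.

```typescript
// Quick check of how far a generated swatch drifts from the target hex.

function hexToRgb(hex: string): [number, number, number] {
  const n = parseInt(hex.replace("#", ""), 16);
  return [(n >> 16) & 0xff, (n >> 8) & 0xff, n & 0xff];
}

// Euclidean distance in RGB space (0 = identical, ~441 = black vs. white).
function colorDistance(a: string, b: string): number {
  const [r1, g1, b1] = hexToRgb(a);
  const [r2, g2, b2] = hexToRgb(b);
  return Math.hypot(r1 - r2, g1 - g2, b1 - b2);
}

const targets = ["E6EFF4", "F6F8F9", "C3C6CE", "1F2F45"]; // palette from the prompt
const sampled = ["E3EDF5", "F4F6F8", "C6C8CF", "22304A"]; // hypothetical eyedropper reads

targets.forEach((t, i) =>
  console.log(`${t} vs ${sampled[i]}: distance ${colorDistance(t, sampled[i]).toFixed(1)}`)
);
```

Anything in the low double digits reads as "the same color" at moodboard scale, which matches the "off subtly, but very close" result.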
Systematiq Ai
Marketing Services
Brooklyn, New York · 1,207 followers
Do Less...Create More...and Increase ROI with AI.
About us
Do less, create more, and increase ROI with AI.

AI-powered design solutions for agencies, creative teams, and brands.

Wtf do we do?:
+ Private 1:1 Midjourney Training
+ Corporate Midjourney Training
+ DFY AI Creative Services
+ Operational AI Consulting
+ Virtual Webinars
+ Public Speaking

How We Work:
+ We look inside your business to understand how things work and why.
+ We lay out a customized roadmap that helps your business understand how AI can be used to accelerate innovation and competitive advantage.
+ We develop and implement a bespoke workflow and roadmap with your team to harness the power of AI and become a force multiplier in your business.

Let's chat.
- Website: https://systematiq.beehiiv.com/subscribe
- Industry: Marketing Services
- Company size: 2-10 employees
- Headquarters: Brooklyn, New York
- Type: Partnership
- Founded: 2023
- Specialties: ai, midjourney, chatgpt, marketing, operations, advertising, design, and innovation
Locations
- Primary: US, New York, Brooklyn, 11222
Posts
-
ChatGPT 4o → This is mindblowing.
Wireframing ad templates to creative.
It feels like everything has changed.
This opens up a ton of possibilities, and new ways to think about workflows.
(I don't normally feel like this.)

PROCESS:
01. Wireframe an ad template
02. Provide GPT details/brief
03. Provide GPT product images/inspo
04. Generate + post-produce
05. Iterate from template

GPT PROMPT TEMPLATE USED:
**Full prompt in comments section**

WIREFRAME:
+ Structure everything for placement
+ Utilized bounding boxes for spacing
+ Provides a visual direction for GPT
+ (A sketch of how that structure can be written out is below)

PROVIDE GPT DETAILS:
+ I gave it product/lifestyle images
+ Product description + features
+ So it could generate copy + icons

GENERATE + POST-PRODUCE:
+ This wasn't perfect
+ It needed some editing
+ With a less complex template, it might not need it

ITERATE FROM TEMPLATE:
+ Using a template makes this transferable
+ You could do this for different products/brands
+ Because the structure is provided
+ And it provides a path to operationalize

QUICK THOUGHTS:
+ This is for exploration purposes, and showing what's possible
+ It feels like this changes workflows
+ Having this within Chat makes it intuitive
+ It can work with data or previous templates
+ You can blend "marketing/data" + creative
+ You can also have it analyze winning templates
+ And provide more structured templates to iterate on

My head is spinning…

This type of process isn't just for ads. It can be translated to other creative: thumbnails, email marketing, infographics, etc.

Still comes down to the marketer + the idea. Delivering the right message, to the right audience, at the right time.

Going to explore this way further.

Have fun.

PS: Will be talking more about this on Midjourney: Fast Hours (link in comments).

#midjourney #chatgpt #openai #ai #innovation
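For the bounding-box idea above, here is a hypothetical TypeScript sketch of how an ad-template wireframe could be written out as structured data before handing it to GPT. The field names, ratio, and values are illustrative, not the actual template from the post (that prompt lives in the comments).

```typescript
// Hypothetical wireframe spec: bounding boxes as fractions of the canvas,
// so placement and spacing survive the handoff into the GPT prompt.

interface Box {
  id: string;      // e.g. "headline", "product", "cta"
  x: number;       // left edge, 0-1 of canvas width
  y: number;       // top edge, 0-1 of canvas height
  w: number;       // width, 0-1
  h: number;       // height, 0-1
  content: string; // what GPT should place there
}

const adTemplate: { ratio: string; boxes: Box[] } = {
  ratio: "4:5",
  boxes: [
    { id: "headline", x: 0.08, y: 0.06, w: 0.84, h: 0.14, content: "Benefit-led headline copy" },
    { id: "product",  x: 0.20, y: 0.25, w: 0.60, h: 0.45, content: "Hero product image" },
    { id: "features", x: 0.08, y: 0.74, w: 0.84, h: 0.12, content: "Three feature icons + labels" },
    { id: "cta",      x: 0.30, y: 0.88, w: 0.40, h: 0.08, content: "CTA button: Shop Now" },
  ],
};

// Serialize for the prompt so the structure is explicit, not implied.
console.log(JSON.stringify(adTemplate, null, 2));
```

Writing the template this way is also what makes the "iterate from template" step transferable: swap the `content` strings for a different product or brand and the structure stays put.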
-
Midjourney + Google Ai Studio + SORA
Mashing tools… and getting creative.
Just for weekend fun.

TOOLS:
+ Midjourney: Base image
+ Google Ai Studio: Image variants
+ Sora: Storyboard
+ Topaz AiVid: FPS + resolution increase
+ Topaz Starlight: Creative upscaling
+ MM Audio: Sound FX
+ Udio: Music

PROCESS:
01. Gen base images in MJ
02. Google Ai Studio for different angles
03. Layer images in Sora storyboard
04. Evenly space "keyframes" in Sora
05. Use text prompts to connect images
06. Generate

Midjourney Prompt:
side view, plexiglass translucent futuristic car in a sharp studio, the body of the car is completely see-through soft aesthetic contrasting against the background, smooth curves and contours, deep textured off-road tires, soft studio lighting, high precision photography, clean and sleek, accenting shadows, clear wiring and mechanical engineering contrasting inside the anatomy of the car --chaos 5 --ar 3:2 --quality 2 --profile 9ikiq5k yvrnvah 7lgpc2a --stylize 850

Google Ai Studio Prompt:
Let's get a [perspective] of this exact car in a white studio.

Sora Storyboard Prompt Structure:
the camera moves smoothly [direction] to a [shot type] shot as the car morphs.

QUICK THOUGHTS:
+ Google Ai Studio is great
+ It keeps the images very consistent from shot to shot / angle to angle
+ (This has more application beyond cars)
+ Used Sora for more creative interpolation
+ Wanted more glitchy/fast motion and less structure as the car morphed from shot to shot
+ So I tried using Sora's flaws here as a strength

CONTEXT:
+ The car loses fidelity between frames
+ But that's what I was going for
+ It's not perfect by any means
+ But wanted to test new workflows
+ Especially now that Sora ditched the credit system
+ It's time to find more uses for it

PS: Lmk if you want a longer tutorial on this.

Have fun.

#midjourney #Veo2 #kling #runwayml #ai #innovation
-
MIDJOURNEY → Quality of life update.
"Smart Select" is now within "Editor."
They really needed this.

PROCESS:
01. Open "Edit"
02. Upload an image
03. Choose "SELECT"
04. Highlight the area
05. Select "Remove" or "Isolate"
06. Add prompt + generate

QUICK THOUGHTS:
+ It's super easy + works relatively well
+ It's great for editing small features, or staying closer to edges
+ It's not without faults
+ It's heavily reliant on text prompting
+ Sometimes the changes are minimal
+ If you want to add something specific, try adding img-prompts or SREFs to the prompt

ADDITIVE EXAMPLE:
+ Let's say you want to add a "palm tree"
+ If you highlight an area + add a text prompt, and the palm tree doesn't generate…
+ Try attaching an image of a palm tree to the prompt, as an img-ref or an SREF, to force it in

Still patiently waiting on version 7. Apparently it will be here in <2 weeks. But we've heard that story before.

Have fun.

PS: More MJ resources in the comments section.

#midjourney #runwayml #kling #claude #ai #innovation
-
CLAUDE 3.7 + SORA → 3D to V2V.
SORA Remix does a "decent" job here.
This could be better… consistency across gens is tough though.

PROCESS:
01. Create 3D model in Claude 3.7
02. Screen record the model
03. Upload into SORA
04. Select "Remix"
05. Add text prompt
06. Generate

CLAUDE PROMPT:
can you code a 3d version of an F1 car in a studio environment in three.js?

SORA REMIX PROMPT:
replace with an F1 car, dark sleek studio with black reflective floors, single-source lighting

QUICK THOUGHTS:
+ Remix is one of Sora's best features
+ Great for making small/big changes
+ Was impressed with the render
+ Still has some things that are wrong
+ The biggest problem is consistency, especially from generation to generation
+ I can see this being helpful for pre-viz, or concepting different ideas or paths
+ Still see this as a way to "fail quicker", considering the gen time is <1 min
+ Anyone who's worked in design knows you have to balance your creative ideas and the time it takes to chase those ideas
+ That's where I'm seeing the value here

It's a fun one to play with, and might open up some additional ideas. (A rough sketch of the kind of three.js scene this returns is below.)

Have fun with it.

#midjourney #runwayml #kling #ai #innovation #Veo2
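For context on step 01, here is a minimal TypeScript sketch of the kind of three.js scene that prompt tends to produce. It is a simplified stand-in, not Claude's actual output: the "car" is just a placeholder box, and the values (colors, light intensity, camera position) are assumptions to tweak.

```typescript
// Minimal stand-in: dark studio, reflective floor, single key light,
// and a placeholder "car" to screen-record for the Sora Remix step.
// Assumes `npm install three` (+ @types/three) and a bundler.

import * as THREE from "three";

const scene = new THREE.Scene();
scene.background = new THREE.Color(0x111111); // dark studio backdrop

const camera = new THREE.PerspectiveCamera(45, innerWidth / innerHeight, 0.1, 100);
camera.position.set(4, 2, 6);
camera.lookAt(0, 0.5, 0);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// Glossy black floor + single-source lighting, matching the remix prompt.
const floor = new THREE.Mesh(
  new THREE.PlaneGeometry(40, 40),
  new THREE.MeshStandardMaterial({ color: 0x0a0a0a, roughness: 0.15, metalness: 0.8 })
);
floor.rotation.x = -Math.PI / 2;
scene.add(floor);

const keyLight = new THREE.DirectionalLight(0xffffff, 2.5);
keyLight.position.set(5, 8, 5);
scene.add(keyLight, new THREE.AmbientLight(0xffffff, 0.15));

// Placeholder car body; Claude's real output builds this from many primitives.
const car = new THREE.Mesh(
  new THREE.BoxGeometry(3, 0.4, 1.2),
  new THREE.MeshStandardMaterial({ color: 0xcc0000, roughness: 0.4 })
);
car.position.y = 0.3;
scene.add(car);

// Slow turntable spin so the screen recording has some motion for Remix to track.
function animate(): void {
  requestAnimationFrame(animate);
  car.rotation.y += 0.005;
  renderer.render(scene, camera);
}
animate();
```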
-
We're just getting weird now.
iPhone + Magnific + Runway + Sora.
(This was just for fun.)
Probably could cut a few of these steps. Trying to conceptualize it better.

PROCESS:
01. iPhone: Record video
02. Runway: Remove background
03. Runway: Duplicate video + mirror
04. Runway: Extract 1st frame
05. Magnific: Structure Ref + BG gen
06. Runway: Restyle
07. Runway: Layer BG + restyle
08. Sora: Remix

MAGNIFIC RESTRUCTURE PROMPT:
2-person fighter arcade game aesthetic, blue ninja, yellow ninja, palette swapping, digitized sprites, 16-bit

MAGNIFIC MYSTIC PROMPT:
2-person fighter arcade game arena, palette swapping, digitized sprites, 16-bit

QUICK THOUGHTS:
+ There's probably a more efficient way
+ Tried finding a process with swappable arenas
+ RW's Remove BG/Green-Screen feature is decent; hadn't played with it much
+ RW also has a video editor for layering
+ RW tends to smooth out generations
+ Tried using Sora's Remix to texture it, and give it a more pixelated/vintage feel
+ I'm sure there are other use cases for this, not just recreating video games
+ V2V will play a bigger role moving forward
+ Trying to find more creative ways to use it

It was a fun little project… no agenda… no plan… just seeing what's possible.

Still needs some work.

Go play.

#midjourney #runwayml #kling #ai #innovation #veo2
-
CLAUDE + MAGNIFIC + RUNWAY WORKFLOW
3D-to-Video (full tutorial) ↓
From this post: https://lnkd.in/eDyV92hU
(More context below)

PROCESS:
01. Build 3D renders in Claude 3.7
02. Program camera movements (a sketch of one way to do this is below)
03. Screen record the render
04. Upload video to Runway Gen-3
05. Extract 1st frame
06. Magnific Structure Ref. the 1st frame
07. Upload in Runway Restyle
08. Generate

CLAUDE PROMPT STRUCTURE:
can you code a 3d version of [subject + env] in three.js?

INITIAL CLAUDE PROMPT:
can you code a 3d version of an epic castle atop a mountain plateau in a valley in three.js?

MAGNIFIC STRUCT. REF PROMPT:
editorial photo, epic castle on a plateau, intricate rocky textures and fine details, immaculate New Zealand landscape, white marble castle, high precision photography

MAGNIFIC SETTINGS:
+ Model: Mystic 2.5
+ Structure Reference
+ Structure Strength: 52%
+ Resolution: 2K
+ Creative Detailing: 75%
+ Engine: Magnific Sharpy

CONTEXT:
+ This is far from perfect or without error
+ I have no idea what I'm doing with 3D
+ I have a lot of respect for 3D artists; their craft is extremely meticulous
+ This is a novice attempt at a complex task
+ I'm hoping there is some use for this, in concepting or pre-viz potentially
+ At the very least… knowing that it's possible

QUICK THOUGHTS:
+ I've tried this with a few different subjects + environments
+ It's replicable for most things
+ Make sure to add "three.js" for Claude, or else it might not render the model
+ Assuming this can be done in most 3D software, without the need for Claude
+ Runway needs an mp4 input file for Restyle, but it works off the 3D structure very well
+ I can imagine any 3D artist (who's open to AI) could have a lot of fun playing with this

Yes, you can do this with traditional I2V, but 3D allows more controlled structure.

I'm sure this translates to design pipelines… these tools allow us to "fail quicker", iterate, and ideate in a different way. I see benefit in that… others might not.

This is also not immune to criticism. It's not immaculate quality or concepting. Just feels like something that could help.

Try it out.

#midjourney #runwayml #kling #claude #ai #innovation
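On step 02: Claude can add camera controls on request, but a simple timed orbit also gives a clean, repeatable path to screen record. The TypeScript sketch below assumes the `scene`, `camera`, and `renderer` from Claude's generated three.js file already exist; the target point, radius, and timing are placeholder values to tune per model, not part of the original workflow.

```typescript
// Sketch of "program camera movements": a slow automated orbit around the
// model so the screen recording has a consistent camera path for Restyle.

import * as THREE from "three";

// Assumed to be defined by the generated three.js scene.
declare const scene: THREE.Scene;
declare const camera: THREE.PerspectiveCamera;
declare const renderer: THREE.WebGLRenderer;

const target = new THREE.Vector3(0, 1, 0); // roughly the castle's center
const radius = 12;                         // orbit distance
const height = 5;                          // camera height
const secondsPerOrbit = 20;                // slow enough for Restyle to track

const clock = new THREE.Clock();

function animate(): void {
  requestAnimationFrame(animate);

  // Advance the orbit by elapsed time, not frame count, so the recorded
  // motion speed stays consistent regardless of the display's FPS.
  const angle = (clock.getElapsedTime() / secondsPerOrbit) * Math.PI * 2;
  camera.position.set(
    target.x + radius * Math.cos(angle),
    height,
    target.z + radius * Math.sin(angle)
  );
  camera.lookAt(target);

  renderer.render(scene, camera);
}
animate();
```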
-
So this is kinda ridiculous…
Claude 3.7 + Magnific + Runway Restyle.
(More context below)

PROCESS:
01. Build 3D renders in Claude 3.7
02. Program camera movements
03. Screen record the render
04. Upload video to Runway Gen-3
05. Extract 1st frame
06. Magnific Structure Ref. the 1st frame
07. Upload in Runway Restyle
08. Generate

INITIAL CLAUDE PROMPT:
ok here's a crazy one… can you make a 3d model replica of an epic castle, so we can have full 360-degree camera motion around the entire thing?

Just kept chatting + editing from there. Asked for camera control functions. And screen recorded a video.

MAGNIFIC STRUCT. REF PROMPT:
high precision cinematic still, epic castle, white marble city, in a New Zealand open landscape, bright natural light, deep contrast and textures

CONTEXT:
+ This is far from perfect or without error
+ I have no idea what I'm doing with 3D
+ I have a lot of respect for 3D artists; their craft is extremely meticulous
+ This is a novice attempt at a complex task
+ I'm hoping there is some use for this, in concepting or pre-viz potentially
+ At the very least… knowing that it's possible

QUICK THOUGHTS:
+ I was unaware Claude could do this
+ It's a relatively easy process to edit the model
+ Magnific does a great job with the structure ref
+ 3D models don't have to be created in Claude
+ Assuming this could be done with any 3D software, but you'd need a video file of the output
+ Runway just needs an mp4 file for Restyle
+ I can imagine any 3D artist (who's open to AI) could have a lot of fun playing with this

Not sure what this means for design pipelines… I apologize in advance for butchering terms. Like I said, I'm an infant in the 3D space.

This is also not immune to criticism. It's not immaculate quality or concepting. Just feels like another tipping point.

PS: Can do a longer tutorial for those who are interested.

#midjourney #runwayml #kling #ai #innovation #claude
-
RUNWAY RESTYLE → Video game nostalgia.
Unexpected/fun little workflow.
(There are probably better ways to do this.)
But something I was playing with.

TOOLS:
+ VEO2: Video gen
+ Premiere: Pixelation + FPS decrease
+ Magnific: Structure Reference image
+ Runway: Restyle V2V

PROCESS:
01. Generate video in Veo2
02. Upload into Runway Gen-3
03. Extract 1st frame in RW
04. Magnific Structure Ref. the 1st frame
05. Upload Veo2 vid into Premiere
06. Increase pixelation + drop FPS
07. Upload vid + ref image into RW
08. Generate

WHY:
+ RW tends to smooth the aesthetic
+ It's working off edge detection
+ If the edges are smooth, it gens smooth
+ I wanted rougher/blocky edges
+ So adding the pixelation gave it a blocky texture, and dropping it to 18 FPS helped
+ (A sketch of the pixelation step outside Premiere is below)

QUICK THOUGHTS:
+ This is a totally random use case
+ But the overall "edge theory" might help someone with other uses
+ I am a novice with Premiere, and there are probably better workflows

Somewhat of an interesting workaround. Curious if anyone else has found this in testing.

Have fun.

#midjourney #runwayml #Veo2 #ai #innovation #kling
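The pixelation + FPS drop was done in Premiere in this workflow, but the same "blocky edges" idea can be sketched with the standard browser Canvas API: downscale each frame, then upscale with smoothing off. This TypeScript snippet is a simplified alternative, not the actual post-production step; the `<video id="src">` element, `PIXEL_SIZE`, and `FPS` values are assumptions to tweak.

```typescript
// Pixelate a playing video: draw each frame onto a tiny canvas, then scale it
// back up with smoothing disabled so edges go blocky, refreshed at ~18 FPS.

const video = document.getElementById("src") as HTMLVideoElement;

const small = document.createElement("canvas"); // low-res intermediate
const out = document.createElement("canvas");   // full-res blocky output
document.body.appendChild(out);

const PIXEL_SIZE = 8; // bigger = blockier
const FPS = 18;       // mimic the dropped frame rate from the post

function renderFrame(): void {
  const w = video.videoWidth, h = video.videoHeight;
  if (!w) return; // metadata not loaded yet

  small.width = Math.max(1, Math.floor(w / PIXEL_SIZE));
  small.height = Math.max(1, Math.floor(h / PIXEL_SIZE));
  out.width = w;
  out.height = h;

  const sctx = small.getContext("2d")!;
  const octx = out.getContext("2d")!;

  sctx.drawImage(video, 0, 0, small.width, small.height); // downscale
  octx.imageSmoothingEnabled = false;                     // keep hard edges
  octx.drawImage(small, 0, 0, w, h);                      // upscale, blocky
}

// Re-render at a fixed low rate instead of every display refresh.
setInterval(renderFrame, 1000 / FPS);
```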
-
RUNWAY RESTYLE → New feature alert.
It's just fun + kinda addicting.
Using Veo2 + Midjourney + Runway.
Opening the doors for motion capture.

WHAT IT DOES:
01. You input a video
02. You add an image reference
03. RW "restyles" your video

PROCESS:
01. Generate or curate a video
02. Upload the video to Runway Gen-3
03. Runway will extract the 1st frame
04. Download the 1st frame
05. Retexture the 1st frame
06. Upload the retextured frame
07. Generate

RETEXTURING:
+ You can use Midjourney Retexture
+ Or tools like Magnific Mystic 2.5 with a "Structure Reference"
+ Basically any tool utilizing ControlNet

MIDJOURNEY RETEXTURE PROCESS:
01. Go to the "Edit" tab
02. Upload the first frame
03. Select "Retexture"
04. Add a text prompt
05. Generate

MIDJOURNEY PROMPT:
8-bit video game style --s 25 --ar 5:3

QUICK THOUGHTS:
+ It's super fast… typically 30s
+ 20-second max video upload
+ Outputs at 720p, 24 FPS
+ Sometimes the output can be choppy; using Topaz to increase FPS can help
+ Output can stray from the ref image; it's very input-image specific
+ It's a super fun use for V2V; you can get creative with it
+ Runway's previous V2V was good, but having more control with a ref image is great

Going down the rabbit hole on this. More uses to come.

Have fun.

#midjourney #runwayml #Veo2 #kling #ai #innovation