Tempest Digital, Inc.

Media Production

San Francisco, CA · 522 followers

We provide tax incentives for global M&E, Film, VFX, Animation, & IT projects. Contact Alejandro Franceschi for details.

About us

We offer up to 40% in tax incentives for cinema, streaming, TV / broadcast, VFX, animation, production, post, video games, and more. We do not have the same restrictions one would find in the EU, UK, or AUS: we do not require theatrical distribution as a prerequisite for higher-tiered incentives, nor do we restrict clients to either production or post; you may mix that ratio as required for your project. We provide turnkey services for clients who wish to handle a project themselves, as well as white-glove service for everything from pre-production to final deliverables. We adhere to the highest safety and security protocols for encrypting and passing data, on 100% fiber-optic services, for moving as much data as you need, worldwide, at the speed of light. We are mostly decentralized, and since we do not carry the traditional overhead of other facilities, we can scale services, compute, and storage up or down by the second, including scalable, AAA, experienced, and vetted professionals for all aspects of your project. Please reach out to me, Alejandro Franceschi, directly via LinkedIn InMail and/or LinkedIn Chat, as I do not personally engage via email for exploratory discussions. Reach me at: https://www.linkedin.com/in/alejandrofranceschi/ Thank you. Best, Alejandro Franceschi

Industry
Media Production
Company size
2-10 employees
Headquarters
San Francisco, CA
Type
Privately held
Founded
2006
Specialties
HD Video, Post Production, Video Editing, Animation, Motion Graphics, Digital Visual Effects, Compositing, Stereoscopic 3D, Podcasting, Video Compression, Optical Disk Authoring, Social Media Marketing, Graphic Design, Web Design, Audio Compression, Chromakey, VFX, Animation, and Virtual Production

Locations

Tempest Digital, Inc. employees

Updates

  • #Unilever and its beauty and wellness brands, which include #TRESemmé, #Dove, #Vaseline and #Clear, are now leaning harder into #DigitalTwin photography and #videography. While many artists at certain brands or #fashion houses can still manage to get multi-million dollar budgets for now, at the end of the day they answer to shareholders, and for shareholders it's always going to be about the bottom line and results.

    In February, Unilever reported its full-year results for 2024, showing a 2.3% increase in turnover to $65.5 billion. While volumes were up, the rise largely came from cost-cutting and margin expansion. Pressure to remain profitable despite rising ad and media costs was key to their decision to incorporate innovative technologies that make the process smarter, faster, cheaper, and more personalized across a hyper-fragmented ad space, from mobile to connected TV (CTV). Their 2030 G-A-P (Grow, Accelerate, Power) initiative intends to remove duplication across functions and reduce unnecessary SKUs, while also driving end-to-end efficiency across departments.

    Here is what Unilever shared about their experience going into the Digital Twin and #GenerativeAI space:
    - Product imagery is being created 2X faster and 50% cheaper, with 100% #brand consistency and faster #content creation.
    - The TRESemmé Thailand use case demonstrated an 87% decrease in content-creation costs and a 5% lift in purchase intent.

    The results have been excellent and well received, so it should come as no surprise that Unilever now has some 500 AI applications across numerous divisions, including product R&D, #marketing, #SupplyChain optimization, #innovation and #CustomerService!

    The 3D and XR web is right around the corner. These investments and the new practices that follow will drive those new mediums, along with the respective creative practices being tested in R&D and innovation labs around the world. I've had the good fortune to be speaking with pioneers in this space, and I hope to be able to share more about what they/we might be doing soon. This is only the beginning!

  • Just when you thought you were getting comfy with #Flux, #Midjourney, etc., along comes #Reve, which is now ranking as the most powerful of them all. Here are a few reasons why. 👇

    #ReveImage is a new model trained from the ground up to excel at #promptadherence, #aesthetics, and #typography. Let's start with the "Artificial Analysis Leaderboard," which currently ranks Reve (French for "dream") *above* Recraft, Google, Flux, #Minimax, Midjourney, #Ideogram, #StabilityAI 3.5 Turbo, etc., to name a few (weigh their perspective as you will): https://lnkd.in/ggN-FXaH Code-named #Halfmoon, it presently ranks above *every other model* on ImgSys: https://lnkd.in/gkNSPegA

    HIGHLIGHTS:
    - $0.01 per credit/image, and ALL images are only one credit.
    - You retain copyright.
    - *Rapidly* creates high-quality images.
    - Accurately portrays celebrities and recognizable characters!
    - Effectively generates extensive, readable, and stylistically consistent text.
    - Produces polished layouts, typography, branding, and UI/UX designs!
    - Generates hands more accurately than many competing models. (Why hasn't this issue been solved by *everyone* by now?)
    - Precisely follows instructions thanks to clear prompt understanding.
    - Supports detailed prompts through an extensive context window.
    - Includes an Enhance tool that transforms simple inputs into detailed prompts, improving the final imagery.
    - Provides easy refinement and iteration via the Edit Prompt and Instruct features.

    Does it have all the bells and whistles yet? No. #ComfyUI? Not yet; it's only Tuesday. Wait a few days or weeks. #Video? Not with this system as it is. There may be, but meanwhile, use the superior output to make better-quality #generativevideo #content on the platforms of your preference. Oh, you were wondering where to sign up? It's right here: https://preview.reve.art/
    _
    *If you appreciate content like this, I value your sharing it, commenting, liking, and/or following or connecting with me here on LinkedIn. It's the only way the algorithm "sees" me, so you can get this from me to stay ahead of the curve, FREE! Thank you!

  • Fix your comps in #UnrealEngine with this little comp trick.

    View Dean Yurke ❤️🕹️🎥's profile

    Writer / Director / Digital Artist

    UE5 Media Texture BLACK EDGE NIGHTMARE? ONE NODE, 55 SECONDS to FIX! Trying a different format with this one: a "Quick Tip." Thanks to Fergus Mulligan, who suggested I make a video just about how I unpremultiply edges in media textures with the Divide node, as it's a compositing thing that not a lot of Tech Artists know about, so I thought I'd give it a go. To visit my channel, click here: www.youtube.com/deanyurke And if you found the video useful, please subscribe and I'll make more! P.S. If you have video suggestions, please let me know. And as always, shares are very much appreciated!
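
    For anyone curious about the math behind that Divide node: premultiplied textures store RGB already scaled by alpha, so compositing them as straight alpha darkens the fringe (the "black edge"). Below is a minimal numpy sketch of the same unpremultiply operation; this is my illustration of the concept, not Dean's actual UE5 material graph.

    ```python
    import numpy as np

    def unpremultiply(rgba: np.ndarray, eps: float = 1e-6) -> np.ndarray:
        """Undo premultiplied alpha: divide RGB by alpha, the numerical
        equivalent of feeding RGB and A into a single Divide node."""
        rgb, a = rgba[..., :3], rgba[..., 3:4]
        straight = rgb / np.maximum(a, eps)   # guard against alpha == 0 pixels
        return np.concatenate([np.clip(straight, 0.0, 1.0), a], axis=-1)

    # A half-covered edge pixel: premultiplied RGB reads as dark gray...
    edge = np.array([[[0.25, 0.25, 0.25, 0.5]]])
    print(unpremultiply(edge))  # ...but the straight color is 0.5 gray at 50% alpha
    ```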

  • Napster Broke Music. AI Might Break Everything Else. The recent #OpenAI announcement is bold—but not exactly surprising. With CEO Sam Altman allegedly confirming that the company trained on #copyright-protected material, the implications could be vast. I don't think he would have, unless blanket legal immunity is already in place behind closed doors, or something similar. Some questions naturally follow:
    • Is that immunity retroactive? Does it protect past employees, just current ones, or the company only?
    • If OpenAI goes public, does that protection stay intact? Are they getting similar legal protections at the government level to execute this maneuver?
    • Is this a one-company deal, or will similar allowances be quietly extended to its competitors? Will OpenAI oversee them if they comply by sharing basically everything that makes them competitive?

    If OpenAI has been granted this kind of legal cushion, it gives them a massive edge over every other AI company that trained on similar data—many of which are now their direct rivals, and many founded by ex-OpenAI staff. Does this allow Sam to legally discredit those rivals under the guise of compliance or enforcement? Could it even force the AI field to consolidate under shared IP, or else face liability? The threat of legal entanglement may have just become a powerful lever for shaping the future of the industry.

    And what if OpenAI doesn't win the market? Is the U.S. government now so entangled that it doesn't matter who builds the best model, because it gets funneled to OpenAI anyway? We may be seeing a future where federal contracts—not customers—determine market dominance.

    Let's not forget: Mira Murati, former CTO, was once asked point-blank if OpenAI trained on copyrighted material. Her response was a shrug, a mugging face, and basically: "I don't know." That moment feels less like a dodge now and more like a warning sign. As the CTO, how could she *not* have known?

    The broader fear? This signals not just legal protection—but a potential overhaul of copyright and #intellectualproperty law itself. If that happens, the floodgates could open, leading to widespread experimentation, appropriation, and chaos across the internet. This could make #Napster look quaint. Every major content stakeholder—#musiclabels, #filmstudios, #authors, #gamedevelopers, #newspapers, platforms like #YouTube and #Google—is likely mobilizing legal armies right now. Whether that amounts to anything is another story. For example, I am certain #Disney has some thoughts…

    And what if none of it matters? With government relationships & defense contracts allegedly in play, OpenAI may not need to win hearts & minds—or customers. This isn't just a copyright fight. It may be a systemic, page-one rewrite. We're entering uncharted territory now, so buckle up, it's gonna be a wild ride! Forbes: https://lnkd.in/df_NNDxB

  • What happens when James Cameron & the VFX mind behind Titanic & Avatar team up with Stability AI? They build tools that might actually replace legacy Hollywood production & post.

    In late 2024, I met Hanno Basse, the new CTO of Stability AI. He showed me the early work on their updated UI. It simplifies the complexity of ComfyUI while keeping its strengths—like #ControlNets & #IPadapters. Imagine if #ComfyUI, #Photoshop, #AfterEffects, & #StabilityAI all had a baby. The resultant hybrid is intuitive to use, w/ additional deep controls for artists & technical teams.

    In late September 2024, James Cameron joined Stability's board. But here's the headline that hasn't gotten nearly enough attention: Rob Legato, who has collaborated multiple times w/ James, has joined Stability AI as its Chief Pipeline Architect. If you're not in the industry, you might not recognize his name—but his work, you absolutely do. Rob has been nominated 5 times for the Best #VFX Oscar & has won 3: Titanic, Hugo, & The Jungle Book. He also developed the #virtualcinematography techniques that made a tiny franchise known as #Avatar possible. His résumé includes Apollo 13, The Lion King, & dozens more. He's won #Emmys, #BAFTAs, & totals 29 industry award wins, w/ 27 nominations. Now he's building *the* next-gen tools for generative AI at one of the most important companies in the space. This isn't just about #artistry, but #infrastructure. This is a moment we'll look back upon as an inflection point in the industry.

    Let's get nerdy & cover *why this matters.* Most generative platforms only output 8-bit content. That won't cut it for #cinema standards, #streaming #postproduction, & #VAD for LED Volumes; they demand 4K - 16K HDR. Under Rob's guidance, they'll likely pioneer 16-bit float EXR files w/ proper RGBA layers, metadata, color pipelines in linear (not sRGB 2.4), & I/O designed for HDR10+ & Dolby content, w/ fully encrypted I/O for studio standards—true cinema-grade workflows. I'd expect a full rework of the #UI logic behind ComfyUI, likely intended to play nicely with #Nuke & other Foundry products (maybe #Adobe, but the former permits #python, so maybe not). I wouldn't be surprised if all of this leads to robust APIs that enable necessary, high-end, modular composability, w/ faster iterative releases, #DCC & real-time tools integration, agnostic interoperability, collaboration, & scalability. And yes—all this likely moves the company to profitability, in time.

    If I were any similar platform, I'd be paying attention. Anyone ignoring the professional VFX production pipeline, or pretending ComfyUI alone is enough, is in for a rude awakening. And to those inside or outside #Hollywood—ask yourself this: Would YOU bet AGAINST James Cameron AND Rob Legato? If your answer is "no," then you appreciate this move could very well become *the* new Hollywood standard. Announcement link: https://lnkd.in/gPvyjaQa
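
    To make the bit-depth point concrete, here is a minimal sketch of writing a 16-bit half-float, linear-light RGBA frame with the OpenEXR Python bindings. This illustrates the file format the post describes, not Stability's actual pipeline.

    ```python
    import numpy as np
    import OpenEXR, Imath

    h, w = 1080, 1920
    # Linear-light RGBA (not sRGB-encoded): the working space cinema pipelines expect.
    rgba = np.random.rand(h, w, 4).astype(np.float32)

    header = OpenEXR.Header(w, h)
    half = Imath.Channel(Imath.PixelType(Imath.PixelType.HALF))
    header['channels'] = {c: half for c in 'RGBA'}   # 16-bit float per channel

    out = OpenEXR.OutputFile('frame_0001.exr', header)
    out.writePixels({c: rgba[..., i].astype(np.float16).tobytes()
                     for i, c in enumerate('RGBA')})
    out.close()
    ```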

  • The #opensource ACE++ setup can use just one reference image with #diffusion models to create a consistent character, even as the environment, angles, clothing, lighting, etc., change!

  • #StabilityAI releases Stable Virtual Camera: Multi-View Video Generation with 3D Camera Control.

    From Stability AI, key takeaways:
    - Introducing Stable Virtual Camera, currently in research preview. This multi-view diffusion model transforms 2D images into immersive 3D videos with realistic depth and perspective—without complex reconstruction or scene-specific optimization.
    - The model generates 3D videos from a single input image (or up to 32), following user-defined camera trajectories as well as 14 other dynamic camera paths (templates), including 360°, Lemniscate, Spiral, Dolly Zoom, Move, Pan, and Roll.
    - Multiple aspect ratios: capable of producing videos in square (1:1), portrait (9:16), landscape (16:9), and other custom aspect ratios without additional training.
    - Long video generation: ensures 3D consistency in videos up to 1,000 frames, enabling seamless loops and smooth transitions, even when revisiting the same viewpoints.

    Model limitations: in its initial version, Stable Virtual Camera may produce lower-quality results in certain scenarios. Input images featuring humans, animals, or dynamic textures like water often lead to degraded outputs. Additionally, highly ambiguous scenes, complex camera paths that intersect objects or surfaces, and irregularly shaped objects can cause flickering artifacts, especially when target viewpoints differ significantly from the input images.

    Stable #VirtualCamera is available for research use under a Non-Commercial License.
    Research: https://lnkd.in/g2rzUd2P
    Weights (on #HuggingFace): https://lnkd.in/gEbPDPZg
    Code on #Github: https://lnkd.in/gKNTBv7s
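
    If "user-defined camera trajectory" sounds abstract: a trajectory is just an ordered sequence of camera poses. Here is a minimal numpy sketch that builds a 360° orbit as camera-to-world matrices; it illustrates the general idea only and assumes nothing about Stable Virtual Camera's actual input API.

    ```python
    import numpy as np

    def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
        """Camera-to-world pose looking from `eye` toward `target` (-Z forward)."""
        fwd = target - eye
        fwd = fwd / np.linalg.norm(fwd)
        right = np.cross(fwd, up)
        right = right / np.linalg.norm(right)
        true_up = np.cross(right, fwd)
        c2w = np.eye(4)
        c2w[:3, 0], c2w[:3, 1], c2w[:3, 2], c2w[:3, 3] = right, true_up, -fwd, eye
        return c2w

    def orbit_360(n_frames=120, radius=2.0, height=0.3):
        """One full orbit around the origin -- the spirit of the '360°' template."""
        angles = np.linspace(0.0, 2.0 * np.pi, n_frames, endpoint=False)
        eyes = np.stack([radius * np.cos(angles),
                         np.full(n_frames, height),
                         radius * np.sin(angles)], axis=-1)
        return np.stack([look_at(e, np.zeros(3)) for e in eyes])

    poses = orbit_360()
    print(poses.shape)  # (120, 4, 4) -- one pose matrix per frame
    ```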

  • What if you could train a Gaussian Head at up to 630 images per second, rendering at up to ~398 FPS, in real time, on a single RTX 3090 GPU? You can now, thanks to the State Key Lab of CAD & CG, Zhejiang University. Astonishingly, the work below is provided under an #MITlicense!

    The method can take a single 2-3 minute video and, in ~81 seconds, produce a batch of high-fidelity Reduced #GaussianBlendshapes. It preserves wrinkle-level skin deformations, teeth, even reflections in eyewear, with amazing temporal consistency. The ablation on the #ReducedBlendshapes preserves deformations of the topology even when the expressions are at their most extreme, especially around the lips and even inside the mouth (teeth, tongue)! They accomplish this by compressing the model down to only 20 blendshapes—see the sketch of that linear blendshape idea below.

    However, cross-identity transformations might be a weakness. In some examples they don't look highly responsive or accurate, yet at the end of the video the converse is true, and it's not really clear why. If anyone can clarify, I'd appreciate learning from you in the comments.

    Due to its efficiency, one can watch the on-the-fly reconstruction of the Gaussian Head using readily available cameras, hardware, and software. This is visible at the end of the video and is incredibly satisfying to watch. I wouldn't be surprised if this were used in #XR, #videogames, #VFX, and just about anywhere else one would want to create or use such an #avatar.

    #RGBAvatar: Reduced Gaussian Blendshapes for Online Modeling of Head Avatars:
    Research: https://lnkd.in/dRT2TENb
    Code: https://lnkd.in/dnBuBZak
    Project Page: https://lnkd.in/db_cMwcg
    _
    #GaussianHead #opensource #research #avatars #nvidia #3Dreconstruction
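
    Why do only 20 blendshapes make this so fast? Because each frame of the avatar is just a weighted sum of parameter deltas. A minimal numpy sketch of that linear blendshape blend follows; the array sizes are my illustrative assumptions, not the paper's exact layout.

    ```python
    import numpy as np

    # Illustrative sizes (assumptions): K reduced blendshapes, N Gaussians,
    # P parameters per Gaussian (position, rotation, scale, opacity, color, ...).
    K, N, P = 20, 100_000, 14

    base    = np.zeros((N, P), dtype=np.float32)                     # neutral head
    deltas  = (np.random.randn(K, N, P) * 0.01).astype(np.float32)   # per-shape offsets
    weights = np.random.rand(K).astype(np.float32)                   # expression coeffs

    # One einsum per frame: base + sum_k w_k * delta_k. This linearity is
    # what keeps driving the head cheap enough for real-time rendering.
    frame = base + np.einsum('k,knp->np', weights, deltas)
    print(frame.shape)  # (100000, 14)
    ```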

  • #MeshMatch for #Maya looks rather incredible!

    View David Liebard's profile

    Animation | CG | VFX | Programmer | Rigging Expert | Artist | Tech Explorer. Maya since 1998. Studio character rigging supervisor & developer at Illumination on 14 feature films.

    After 8 months of work, countless nights, and endless iterations, I'm beyond excited to finally share my registration tool for Maya (morph and retarget between different topologies): "MESH MATCH." I know many of you have been waiting for this feature in Maya, and I truly hope it proves useful. Watch the video below for details. This is just the beginning: Mesh Match is built to evolve, and eventually it will integrate automatic facial recognition with AI. Mesh Match tools, videos, example scenes, etc., will be available for download on Gumroad soon. I am looking for beta testers; if you are interested, please fill out this form: https://lnkd.in/gwhrQCE4 --- If you find it interesting, please share—every bit of exposure helps! --- Link to HD video: https://lnkd.in/egh_ZJwe #maya #register #registration #deformer #plugin #codinglive
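
    For readers new to registration: the core problem is transferring a deformation from one mesh to another with a different vertex count and layout. Below is a deliberately naive nearest-neighbour sketch of that idea; this is not David's algorithm (production tools use far smoother interpolation, e.g. RBF or non-rigid ICP), just the concept of morphing across topologies.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def transfer_offsets(src_rest, src_posed, dst_rest):
        """For each target vertex, borrow the displacement of the closest
        source vertex in the rest pose. src/dst are (N, 3) vertex arrays
        with different N -- no shared topology required."""
        tree = cKDTree(src_rest)
        _, idx = tree.query(dst_rest)   # nearest source vertex per target vertex
        return dst_rest + (src_posed - src_rest)[idx]

    # Toy example: a morph on a dense source transferred to a sparser target.
    src_rest  = np.random.rand(5000, 3)
    src_posed = src_rest + np.array([0.0, 0.05, 0.0])   # uniform upward offset
    dst_rest  = np.random.rand(800, 3)
    print(transfer_offsets(src_rest, src_posed, dst_rest).shape)  # (800, 3)
    ```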
