The #WSJ poses the question, "What Makes You Human?" The answer provided by #OpenAI's #SamAltman is precisely the complementary technology we provide to #CognitiveAI. Source: https://lnkd.in/dB7-FPDj _ Message me directly for a demo. | #eai #ai #emotion #emotionalintelligence #personalitytype
Tempest Digital, Inc.
Media Production
San Francisco, CA · 505 followers
We provide tax incentives for global M&E, Film, VFX, Animation, & IT projects. Contact Alejandro Franceschi for details.
About us
We offer up to 40% in tax incentives for cinema, streaming, TV/broadcast, VFX, animation, production, post, video games, and more. We do not have the same restrictions as one would find in the EU, UK, or AUS: we do not require theatrical distribution as a prerequisite for higher-tiered incentives, nor do we restrict client options to either production or post; one may mix that ratio as required for your project. We provide turnkey services for clients who wish to handle a project themselves, as well as white-glove services for everything from pre-production to final deliverables. We adhere to the highest safety and security protocols for the encryption and transfer of data, over 100% fiber-optic service, for moving as much data as you need, worldwide, at the speed of light. We are mostly decentralized, and since we do not carry the traditional overhead of other facilities, we can scale services, compute, and storage up or down by the second, including scalable, AAA, experienced, and vetted professionals for all aspects of your project. Please reach out to me, Alejandro Franceschi, directly via LinkedIn InMail and/or LinkedIn Chat, as I do not personally engage via email for exploratory discussions. Reach me at: https://www.linkedin.com/in/alejandrofranceschi/ Thank you. Best, Alejandro Franceschi
- Industry
- Media Production
- Company size
- 2-10 employees
- Headquarters
- San Francisco, CA
- Type
- Privately Held
- Founded
- 2006
- Specialties
- HD Video, Post Production, Video Editing, Animation, Motion Graphics, Digital Visual Effects, Compositing, Stereoscopic 3D, Podcasting, Video Compression, Optical Disk Authoring, Social Media Marketing, Graphic Design, Web Design, Audio Compression, Chromakey, VFX, Animation, and Virtual Production
Updates
-
"Generative Omnimatte: Redefining Video Layering for Unprecedented Creative Flexibility" Hear ye, hear ye, all you #GraphicDesigners, #MotionDesigners, #Compositors, and other #Creatives, here is the next evolution in #GenAI content assistance and/or creation: LAYERS! In a media landscape driven by innovation, #GenerativeOmnimatte is setting a new benchmark for #videoediting capabilities. This cutting-edge technology decomposes videos into detailed #RGBA layers, capturing both objects and their natural effects—shadows, reflections, and more. (For those who had criticized #DepthCrafter2 or not being able to isolate shadows…) Imagine a tool that empowers artists to isolate, modify, or even remove elements with unmatched precision. This is a milestone. Whether you're working on #VFX for a #blockbuster, crafting immersive# XR experiences, or designing assets for #videogames (take a frame from a video, or multiple frames, to create #3DGS, #4DGS, or #3DModels) Omnimatte’s layer-centric approach ensures seamless integration and editing flexibility, without the need for frame-by-frame intervention. The implications? Faster workflows, elevated #creative control, and the ability to handle dynamic scenes without sacrificing quality. Generative Omnimatte isn't just a technical advancement—it's creative liberation for #storytellers across industries who have been waiting for this particular type of tool. Is it perfect? No. However, given how this baseline needed to be achieved first, you can be assured that something even better will be along in a few weeks to a few months, at most. However, for garbage roto and general pre-vis or other types of #storyboarding for reviews and approvals, this is nothing short of incredible. Watch the video to see the magic unfold. How could this reshape your processes, #pipelines, #workflows, etc.? Project website: https://lnkd.in/gmsbVd2x No code available! This is coming from #GoogleDeepMind. This could be a highly valuable product, but given how #Google tends to release products, who knows? I would bet money that #Meta and #Llama come up with something similar for an upcoming version of #SegmentAnything (aka #SAM), but there are some hacks of that already in use in some VFX pipelines. However, to export automatically into layers, for an RGBA EXR sequence? That would be gold! _ #generativeai #generativevideo #genai #videoproduction #production #postproduction #animation #3D #omnimatte #exr
"Generative Omnimatte: Redefining Video Layering for Unprecedented Creative Flexibility" Hear ye, hear ye, all you #GraphicDesigners, #MotionDesigners, #Compositors, and other #Creatives, here is the next evolution in #GenAI content assistance and/or creation: LAYERS! In a media landscape driven by innovation, #GenerativeOmnimatte is setting a new benchmark for #videoediting capabilities. This cutting-edge technology decomposes videos into detailed #RGBA layers, capturing both objects and their natural effects—shadows, reflections, and more. (For those who had criticized #DepthCrafter2 or not being able to isolate shadows…) Imagine a tool that empowers artists to isolate, modify, or even remove elements with unmatched precision. This is a milestone. Whether you're working on #VFX for a #blockbuster, crafting immersive# XR experiences, or designing assets for #videogames (take a frame from a video, or multiple frames, to create #3DGS, #4DGS, or #3DModels) Omnimatte’s layer-centric approach ensures seamless integration and editing flexibility, without the need for frame-by-frame intervention. The implications? Faster workflows, elevated #creative control, and the ability to handle dynamic scenes without sacrificing quality. Generative Omnimatte isn't just a technical advancement—it's creative liberation for #storytellers across industries who have been waiting for this particular type of tool. Is it perfect? No. However, given how this baseline needed to be achieved first, you can be assured that something even better will be along in a few weeks to a few months, at most. However, for garbage roto and general pre-vis or other types of #storyboarding for reviews and approvals, this is nothing short of incredible. Watch the video to see the magic unfold. How could this reshape your processes, #pipelines, #workflows, etc.? Project website: https://lnkd.in/gQrafdXH No code available! This is coming from #GoogleDeepMind. This could be a highly valuable product, but given how #Google tends to release products, who knows? I would bet money that #Meta and #Llama come up with something similar for an upcoming version of #SegmentAnything (aka #SAM), but there are some hacks of that already in use in some VFX pipelines. However, to export automatically into layers, for an RGBA EXR sequence? That would be gold! _ #generativeai #generativevideo #genai #videoproduction #production #postproduction #animation #3D #omnimatte #exr #compositing #compositor #dneg #ilm #ilmvfx #ves #mpc #eyelinestudios #mgmstudios #amazonstudios #digitaldomain #whiskytreevfx #aswf #academysoftwarefoundation
-
#LumaAI introduced the latest version of the #DreamMachine #GenAI #imaging and #video model. Learn more at: https://lnkd.in/gc_QVHuK

The focus is on the "Luma Photon" model. It claims not to require any quirky "prompt engineering". Instead, one may be as simple or as specific as one likes, in one's own voice, and explore the results in a fluid manner. One may also bring in one's own unique images, style, and character references. One can create the start *and* end frames of an intended video from Photon, so the output video by the same model ends up being more temporally cohesive.

While it's not clear what is new about the model beyond the marketing hype on the page, it seemingly generates high-resolution, detailed, and creatively composed images at "8X the efficiency and speed of comparable models". Nothing linked to that statement demonstrates any benchmarks, but time will tell.

It's available now on web and #iOS. What will you create?
-
Jobs these days are a numbers game, so it's inevitable that #AI recruiters meet AI #avatars of the candidate, deployed in an effort to scale job applications. This makes a lot of jobs unattainable for those without the resources to mimic this type of approach. Companies such as #Amazon have scrapped their AI recruiters after noticing the bias they showed against women. Any guess as to why? What do you think? Dystopian hell? PKD novella? Brazil (the film)? Yes or no? https://lnkd.in/gg-TXDqp
Award-winning AI & Automation Expert | Keynote Speaker, Influencer & Best-Selling Author | Forbes Tech Council | 2 million+ followers | Follow me to thrive in the age of AI and become IRREPLACEABLE ✔️
AI hiring avatars will soon be talking to AI candidate avatars who were screened by an AI recruitment agent 😳 That agent previously reviewed the candidate's CV, which was also generated by AI 🤖 Welcome to this AI-driven future. But beware: this convenience has a cost. Overreliance on AI risks eroding human creativity, empathy, and critical thinking. We call it "AI obesity," and it's already impacting businesses and individuals alike. Let us strive to be AI-wise: harnessing AI's power while sharpening our human edge. Join the IRREPLACEABLE community and read the Book: www.irreplaceable.ai #AI #Innovation #FutureOfWork #3CompetenciesOfTheFuture #HumanityFirst
-
#FigureAI and its #robotics team showcase the #Figure02 update that's being trained and iterated upon at a #BMW #manufacturing plant in Spartanburg, South Carolina. Significance? For eight years in a row (through 2022, the latest records), it has been the largest #automotive exporter in the USA. Imagine when they can scale output to whatever the market will bear. Faster. Cheaper. Error rates down to perhaps a decimal point. 24/7/365, with no unions. (Source: https://lnkd.in/gWrHPcr9)

Imagine a plant of theirs where robots simply make clones of themselves for everyday, multi-purpose, civilian, military, and civil-service use, 24/7/365. Imagine #Tesla doing the same thing? Both of them have proprietary AI systems, and all the variables that come with that.

Learn more about Figure #AI and its $675M in funding, at a $2.6B valuation. It's backed by #OpenAI. I wonder what will power the Figure0X for #enterprise and/or home use?
-
#ComfyUI is likely poised to be the Nuke of #GenAI. The latest release has some great updates, check them out! 👇 https://lnkd.in/gZNnEwpG _ #GenAI #TextToImage #TextTo3D #ImageTo3D #GenAIvideo #textures #3D #animation #imaging #vfx
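Part of what earns it the "Nuke of #GenAI" comparison is that it is graph-based and scriptable. As a minimal sketch of driving it from a pipeline, assuming a default local server on 127.0.0.1:8188 and a graph exported via the UI's "Save (API Format)" option (the file name here is hypothetical):

```python
# Minimal sketch: queue a ComfyUI workflow over its local HTTP API.
# Assumes a default server at 127.0.0.1:8188 and a graph exported
# from the UI via "Save (API Format)" as workflow_api.json.
import json
import urllib.request

with open("workflow_api.json") as f:
    workflow = json.load(f)

# Node inputs (seeds, prompt text, file paths) can be tweaked before
# queueing by editing workflow["<node_id>"]["inputs"]; ids are graph-specific.

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    # The response includes a prompt_id you can use to poll /history.
    print(json.load(resp))
```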
-
Yesterday I was given a project, and the audio was almost as bad as in the example below from #Adobe. The client was expecting miracles, but I knew from experience that the #audio was still going to S-U-C-K, because they didn't record it properly. Garbage in, garbage out. 🤷🏻‍♂️

Previously, I had used #AdobePodcast to clean up audio for another #videoeditor I know, who was in a tight bind, and it worked out rather well. I gave it another shot, and the result was easily 70%+ better. I could live with it, and I knew the client would be happy.

Yet as soon as it finished, Adobe gave me access to their #EnhanceSpeechV2 model. "OK," I thought, and I smashed the twirly to V2, let it process, and OMG, it's **absolute magic.** The client couldn't believe it. I couldn't believe it. I didn't need a full-blown sound engineer, at a higher cost with OT rates, to try to make it salvageable. No hours of fussing around in Audition. I simply dragged and dropped, waited about a minute for 16 minutes of audio(!), and voilà! **AUDIO MAGIC!**

Try it out for yourself! https://podcast.adobe.com/
-
Here is a quick trailer highlighting #UnrealEngine 5.5. Chief among the updates are major advances in Sequencer, Niagara, #MegaLights, Control Rig (I used to dream of such a tool in undergrad and grad school), and a whole lot more. If you'd like to dig through the documentation, you can find it here: https://lnkd.in/gBTT_ZRv
-
Inquire within to learn more about SOTA #AffectiveComputing.

The application is anything where humans interface with machines, in any mode, on any platform, OS, device, or #mechatronic system. Fully tech-agnostic. No discernible latency. Backed by a family of #patents covering emotion across #hardware and #software.

Imbue the cognition of AI from the likes of #OpenAI, #Anthropic, #Gemini, etc., with the ability to comprehend and respond across the full gamut of human #emotions: all 64 trillion parameters, every 1/10 of a second, in real time. Without the ability to understand, respond, remember, and, in parallel, synthesize an intelligence that also accepts the same sensory inputs as humans (and more), there cannot be anything remotely resembling human-level intelligence, let alone #AGI. (Please see the video clip above, with the brilliant Yann LeCun.)

We offer the next milestone required for the development of advanced, human-centric, human-like AI. For private sessions on Series-A funding, please inquire directly. Thank you.

-

#media #entertainment #xr #metaverse #videogames #avatar #digitalhuman #robotics #humanoidrobotics #emotionalintelligence #callcenter