XR Studios

Entertainment Provider

Los Angeles, CA · 493 followers

We provide tax incentives for global M&E, Film, VFX, Animation, & IT projects. Contact Alejandro Franceschi for details.

About us

We offer up to 40% in tax incentives for cinema, streaming, TV/broadcast, VFX, animation, production, post, video games, and more. We do not have the same restrictions one would find in the EU, UK, or Australia: we do not require theatrical distribution as a prerequisite for higher-tiered incentives, nor do we restrict clients to either production or post; you may mix that ratio as your project requires. We provide turnkey services for clients who prefer to manage a project themselves, as well as white-glove service covering everything from pre-production to final deliverables. We adhere to the highest safety and security protocols for encrypting and transferring data, over 100% fiber-optic connections, so you can move as much data as you need, worldwide, at the speed of light. We are largely decentralized, and because we do not carry the traditional overhead of other facilities, we can scale services, compute, and storage up or down by the second, including scalable teams of vetted, experienced AAA professionals for every aspect of your project. Please reach out to me, Alejandro Franceschi, directly via LinkedIn InMail and/or LinkedIn Chat, as I do not engage via email for exploratory discussions. Reach me at: https://www.dhirubhai.net/in/alejandrofranceschi/ Thank you. Best, Alejandro Franceschi

Industry
Entertainment Provider
Company size
2-10 employees
Headquarters
Los Angeles, CA
Type
Privately Held
Founded
2006
Specialties
Virtual Reality, Augmented Reality, Mixed Reality, Animation, 3D, VR, AR, 360 Video, Visual Effects, Computer Vision, Video Production, Video Post Production, Stereoscopic Production, Stereoscopic Depth Conversion, B.P.O., Film Studio Pipelines, Machine Learning, metaverse, avatar, artificial intelligence, ai, decentralized, tax incentives, filmmaking, cinema, vfx, and visual effects

Locations

XR Studios employees

Updates


    #Lionsgate is going to dump its catalog into #Runway, for the future of #cinema. Expect *every* studio to follow, no matter how much they protest at first. They will cave because of the economics of it. Studios are corporations nowadays that answer to shareholders, *not artists.* This is the end of Hollywood as we have known it. It will collapse upon itself, and a new thing will emerge. Between now and then, it’s going to be vicious, swift, brutal, and ugly as all Hell.

    Alejandro Franceschi

    I have been talking to IP holders, as well as investors, about crafting a new vision for #cinema. Today the first studio domino falls: Lionsgate. You can bet that others will follow in order to compete.

    Soon, hyper-personalized series and movies, complete with ads and product placements, will be served to you on demand. Some of them will even include you, your friends, and/or family: as you upload clips of your voices, videos, scans, etc., the Generative Engine can insert you as the lead, or a supporting character, of the programs you want to watch. In some instances, it might be fully synthetic content. For studios with big catalogs, there is a lot of money to be made in letting people re-re-re-re-watch #StarWars with themselves as Han, Leia, or even Luke. Maybe *you* want to be Frodo and take the ring to Mordor, but this time via an alternate route? This time you'll encounter new characters, new monsters, and, depending on the pipeline, real-time interaction with fellow audience members who are also participants on the other side of the world. Or perhaps your kid wants to be Dorothy today in "The Wonderful Wizard of Oz." Alternatively, perhaps a variant of the film opens up as an open world in which you can participate, like #Cyberpunk2077. It's a #marketing team's dream come true, and a nightmare for everyone in legal.

    Make no mistake that this is an end, as well as a new beginning. The revolution will be generated, not televised. We're still open for discussions on various aspects of our creative and technical approaches. For those who are interested, please message me directly on LinkedIn. https://lnkd.in/gYzMUJF7 _ #lionsgate #xr #filmproduction #postproduction #vfx #streaming #ai #genai #generativevideo

    Lionsgate signs deal to train AI model on its movies and shows

    theverge.com


    #Arcane is one of those rare gems that is not only an #adaptation, but a masterful technical accomplishment, and a singular aesthetic achievement that elevates it into a masterpiece. Season 2 is finally coming to #Netflix. _ #adultanimation #vfx #videogame #leagueoflegends


    This #genai #commercial is so well made, and crafted with so much #joy, that it truly deserves to go viral. I love that they used every tip, trick, hack, technique, and workflow available to get it done! I enjoy working on teams that embrace playing with all the cards in the deck, or are at least willing to try new things! Congratulations to all! _ Lovis Odin + DOGSTUDIO/DEPT? + team, for The Coca-Cola Company #genaivideo #generativevideo #comfyui #animatediff #vfx #stopmotion #openai #dalle3 #blender #procreate #animatedDiff #stablediffusion #runway #magnific #adobe #aftereffects #compositing https://lnkd.in/gjHnHdZJ

    Lovis Odin

    AI/3D Interactive Designer - ComfyUI specialist - Speaker, teacher - AI/3D prototyping ! - you need AI for your everyday work ! - Modeling - Texturing - Animation - CEO lovis.io - Freelance

    After long months, I am so happy to share probably one of the biggest and earliest projects (back in March) to use mostly ComfyUI, plus other AI tools, to generate a full animated film. This was a real team project made with DOGSTUDIO/DEPT? for The Coca-Cola Company. The second part is the making-of! Hope you will like it!


    Elevating AI-Generated Video Quality with VEnhancer. Say hello to #VEnhancer, a new framework designed to improve the quality of AI-generated videos by enhancing both #spatial and #temporal resolution. Unlike traditional methods, VEnhancer improves existing #TextToVideo results by refining visual details and creating smoother motion, effectively eliminating artifacts and flickering. This beats attempting to deflicker in #compositing, having to author precisely matched start and end frames with #GenAI (which can be challenging), or bringing in a whole additional suite of 3D skills or staff.

    Built on a pretrained video diffusion model, VEnhancer uses a specialized #video #ControlNet to condition on low-resolution, low-frame-rate videos, enabling flexible #upscaling in both space and time. Its approach leverages #spacetime data augmentation and video-aware conditioning, allowing for stable, end-to-end training.

    The results speak for themselves: VEnhancer outperforms current state-of-the-art methods for video super-resolution, enabling tools like VideoCrafter-2 to achieve top rankings in #videogeneration benchmarks. For anyone seeking to improve the quality and fluidity of AI-generated videos, VEnhancer offers a powerful, flexible solution. #Github code download: https://lnkd.in/genb_qsW _ #VideoEnhancement #GenerativeAI #VideoEditing #CreativeTech #GenAI #GenAIVideo #AIvideo

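To make the space-time framing concrete, here is a toy, non-learned baseline in NumPy: linear frame blending for temporal upsampling plus nearest-neighbour spatial upscaling. This is the kind of naive interpolation whose blur and ghosting a diffusion-based enhancer such as VEnhancer is designed to avoid; the function and shapes below are illustrative, not part of the VEnhancer API.

```python
import numpy as np

def naive_spacetime_upscale(frames: np.ndarray, s: int = 2) -> np.ndarray:
    """Upsample a video of shape (T, H, W, C) by factor s in time and space.

    Temporal: insert s-1 linearly blended in-between frames per pair.
    Spatial: nearest-neighbour upscaling via np.repeat.
    A learned enhancer replaces both steps with synthesized detail;
    this is only the baseline it improves on.
    """
    t, h, w, c = frames.shape
    out_frames = []
    for i in range(t - 1):
        a = frames[i].astype(np.float32)
        b = frames[i + 1].astype(np.float32)
        for k in range(s):
            alpha = k / s
            out_frames.append((1 - alpha) * a + alpha * b)
    out_frames.append(frames[-1].astype(np.float32))
    video = np.stack(out_frames)               # ((T-1)*s + 1, H, W, C)
    # Repeat rows and columns for the spatial factor.
    return video.repeat(s, axis=1).repeat(s, axis=2)

clip = np.zeros((4, 8, 8, 3))                  # 4 frames of 8x8 RGB
print(naive_spacetime_upscale(clip).shape)     # (7, 16, 16, 3)
```

Note the temporal output length is (T-1)*s + 1, since only in-between frames are synthesized; the endpoints are kept as-is.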


    Redefining Volumetric Video: Introducing "Robust Dual Gaussian Splatting"

    Volumetric video has the potential to reshape the future of digital experiences, offering fully immersive content that allows users to navigate freely within virtual environments. However, its adoption has been slowed by the challenges of current techniques, primarily the need for painstaking manual cleanup to stabilize complex 3D mesh sequences.

    The new "Dual Gaussian Splatting" method tackles these challenges head-on, providing a fresh approach to capturing and rendering human-centric volumetric videos. Utilizing 81 calibrated industrial cameras, the technique captures highly dynamic human performances with remarkable detail. At its core is a dual Gaussian representation that compresses motion and appearance data separately, allowing for efficient storage and real-time, high-fidelity rendering. The results are impressive: up to a 120X compression ratio without sacrificing quality, enabling seamless playback of complex scenes.

    By avoiding the heavy manual interventions required in existing workflows, the method streamlines the process and significantly lowers the barrier to entry for creatives and developers. It also integrates smoothly with VR platforms and CG engines, enhancing both playback and editing in various immersive scenarios. From #interactive #storytelling to advanced gaming and beyond, "Dual Gaussian Splatting" could be the key to unlocking the full potential of volumetric video, making immersive experiences more accessible and engaging than ever before. As we continue to blur the lines between the digital and real worlds, methods like these offer a glimpse into the future of how we create, share, and experience content. There is much more to discuss about this technique and the medium than LinkedIn leaves room for.

    If you are interested in volumetric media for your production, which could also mean volumetric content for #virtualproduction, I'm available for consultations. "Robust Dual Gaussian Splatting for Immersive Human-centric Volumetric Videos", Project Page: https://lnkd.in/ga32H5qf _ #volumetricvideo #dualgaussiansplatting #gaussiansplatting #3DGS #4DGS #volumetric #XR #metaverse #vr #ar #mixedreality #virtual #immersive #production #postproduction #animation #vfx #streaming #videogames

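The "compress motion and appearance separately" idea can be illustrated with back-of-envelope storage arithmetic. All parameter counts below are hypothetical stand-ins, not figures from the paper; the reported 120X ratio additionally relies on further compression of the two streams.

```python
# Storage comparison: naive per-frame Gaussians vs. a dual representation
# that stores appearance once and only a compact motion track per frame.
# All counts are illustrative assumptions.

N_GAUSSIANS = 200_000     # Gaussians in the captured human performance
FRAMES = 3_000            # ~100 s at 30 fps
FULL_PARAMS = 59          # position, rotation, scale, opacity, SH colour
MOTION_PARAMS = 7         # per-frame position + rotation only
BYTES = 4                 # float32

# Naive: re-store every Gaussian parameter every frame.
naive = N_GAUSSIANS * FULL_PARAMS * FRAMES * BYTES

# Dual: full appearance once, compact motion per frame.
dual = (N_GAUSSIANS * FULL_PARAMS * BYTES
        + N_GAUSSIANS * MOTION_PARAMS * FRAMES * BYTES)

ratio = naive / dual
print(f"naive {naive / 1e9:.1f} GB vs dual {dual / 1e9:.1f} GB -> {ratio:.1f}x")
```

Even this crude split already yields a several-fold saving; entropy coding and quantization of the motion stream is what pushes real systems toward the much larger ratios quoted above.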


    #Blender and #AI art #workflows. If you’ve missed it, check out my latest workflow - how to create Depth #ControlNets for #StableDiffusion. Works in any UI! _ #generativeAI #generativevideo #animation #vfx #opensource #genai

    Albert Bozesan

    Filmmaker & AI Wrangler @ Storybook Studios · Bestselling Writer · Work recognized by Forbes, 3sat, SZ, XPLR Media

    98.5% like ratio! Thank you for your super positive response to my latest YouTube tutorial! I’m glad Blender is bringing value to your AI art workflows. If you’ve missed it, check out my latest workflow - how to create Depth ControlNets for Stable Diffusion. Works in any UI! Link to the vid is in the comments - make sure you sort by “Most Recent” so LinkedIn doesn’t hide it from you. #generativeAI #stablediffusion #controlnet
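As a sketch of the depth-map half of that workflow: a rendered Z pass (metric distance per pixel, e.g. from Blender) must be normalized and inverted into the near-bright, far-dark 8-bit map that depth ControlNets for Stable Diffusion expect (a MiDaS-style convention). The helper below is an illustrative assumption, not code from the tutorial.

```python
import numpy as np

def zpass_to_controlnet_depth(z: np.ndarray, clip_far: float = np.inf) -> np.ndarray:
    """Convert a metric Z pass into an 8-bit inverted depth map.

    Near objects become bright, far objects dark, matching the
    conditioning images depth ControlNets are trained on.
    """
    z = np.minimum(z.astype(np.float32), clip_far)  # tame huge 'sky' distances
    z_min, z_max = z.min(), z.max()
    norm = (z - z_min) / max(z_max - z_min, 1e-8)   # 0 (near) .. 1 (far)
    inverted = 1.0 - norm                           # 1 (near) .. 0 (far)
    return (inverted * 255).astype(np.uint8)

# Toy render: a plane receding from 1 m to 10 m.
z = np.linspace(1.0, 10.0, 512 * 512, dtype=np.float32).reshape(512, 512)
depth_map = zpass_to_controlnet_depth(z)
print(depth_map.dtype, depth_map.min(), depth_map.max())  # uint8 0 255
```

The resulting array can be saved as a grayscale PNG and fed to any depth ControlNet UI as the conditioning image.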


    Not even two weeks ago, I shared #Google #DeepMind's #GameNGen running #Doom from a specialized #LLM. It should come as no surprise that similar efforts are underway elsewhere, and that #neuralrendering is taking us to the next level of #media and #entertainment. Be sure to stay for the end of the trailer demo below, where they show off a take on the insanely complex #GhostOfTsushima from #Sony as a product of this engine, and it's jaw-dropping.

    #GameGen-O is an amazing #GenAI model that's reshaping how we think about creating #openworld #videogames. It is the first #diffusiontransformer model designed specifically for generating entire game worlds from scratch, simulating everything from character interactions to dynamic environments and complex storylines. GameGen-O doesn't just generate content; it brings a new level of #interactive control, allowing for #realtime #gameplay #simulation.

    Behind GameGen-O is the first-ever Open-World Video Game Dataset, OGameData, built from the ground up by collecting data from over a hundred next-gen open-world games. (#3Dmodels still aren't dead: models like this need training data, so next-gen image fidelity, #vfx, etc. still need to be made, though in time that will be less necessary than at present.) The dataset is curated using a pipeline that sorts, scores, filters, and captions game data to create a rich foundation for the model. One has to wonder which games were used, and whether consent was provided, because that's a thorny problem. The research paper notes that #LightSpeedStudios, part of #Tencent, participated in this effort, and that much of the training took place in the #PROC, where #copyright effectively goes out the window.

    The training of GameGen-O happens in two key stages. First, it undergoes pretraining on OGameData, learning to generate and extend video content in an open-domain setting. In the second stage, the model is fine-tuned with a specialized module, #InstructNet, so it can generate future game sequences that follow structural instructions.

    GameGen-O represents a major leap forward in using #generativeAI models for video game creation. It opens up new possibilities by combining #creative generation with interactive capabilities, challenging the limitations of traditional #rendering techniques. And this is not "just" the future of gaming: you are also looking at the imminent future of hyper-personalized #cinema, very likely starting as #branching narrative content for #streaming platforms. If you would like to discuss the latter, my team has been exploring this for a while. Please DM me directly via LinkedIn; I cannot reply to emails. Thank you.


    Project Page: https://lnkd.in/gcaYYAnE
    Github (code will be released soon, so bookmark it): https://lnkd.in/gTnrEbc3
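The sort/score/filter/caption curation pipeline described for OGameData can be sketched as a toy. All names, fields, and scoring details here are assumptions for illustration, not the paper's implementation; in the real pipeline a vision-language model writes the captions.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    game: str
    seconds: float
    quality: float          # hypothetical quality score in [0, 1]
    caption: str = ""

def curate(clips: list[Clip], min_quality: float = 0.5) -> list[Clip]:
    """Toy sort -> score -> filter -> caption pipeline."""
    # Filter: drop clips below the quality threshold.
    kept = [c for c in clips if c.quality >= min_quality]
    # Sort: highest-quality clips first.
    kept.sort(key=lambda c: c.quality, reverse=True)
    # Caption: attach a text description for training.
    for c in kept:
        c.caption = f"{c.seconds:.0f}s of {c.game} gameplay"
    return kept

raw = [Clip("OpenWorldA", 12.0, 0.9), Clip("OpenWorldB", 7.0, 0.3),
       Clip("OpenWorldC", 20.0, 0.7)]
dataset = curate(raw)
print([c.game for c in dataset])    # ['OpenWorldA', 'OpenWorldC']
```

The point is the ordering of concerns: scoring and filtering decide what the model ever sees, and captions pair each surviving clip with the text conditioning used during pretraining.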


    We knew it was coming… KEYFRAMES for your #GenAI media! BOOM! - #animation #genaivideo #video #vfx https://lnkd.in/gD4cqkf6

    Christoph 'gizmo' Mütze

    Freelance Designer & Developer | Real-Time Graphics Specialist | Data Viz

    Hey everyone! Thanks for all the encouraging feedback! It means the world to us! We are working hard to get a set of tools into your hands that give you unparalleled control and creative freedom. If you have reached out to us and we haven't gotten back to you, please be patient. This is all a bit overwhelming at the moment, but we will adjust. Give us a little more time to sort things out, thanks! Cheers, zebrapunk & gizmo

Similar pages

See jobs