Challenging problems. Unimaginable scale. Doing what’s never been done, then doing it again. Different builds different. Let’s build together. https://bit.ly/3WA5PIn
About us
Meta builds technologies that help people connect, find communities, and grow businesses. When Facebook launched in 2004, it changed the way people connect. Apps like Messenger, Instagram and WhatsApp further empowered billions around the world. Now, Meta is moving beyond 2D screens toward immersive experiences like augmented and virtual reality to help build the next evolution in social technology. We want to give people the power to build community and bring the world closer together. To do that, we ask that you help create a safe and respectful online space. These community values encourage constructive conversations on this page:
- Start with an open mind. Whether you agree or disagree, engage with empathy.
- Comments violating our Community Standards will be removed or hidden, so please treat everybody with respect.
- Keep it constructive. Use your interactions here to learn about and grow your understanding of others.
- Our moderators are here to uphold these guidelines for the benefit of everyone, every day.
- If you are seeking support for issues related to your Facebook account, please reference our Help Center (https://www.facebook.com/help) or Help Community (https://www.facebook.com/help/community).
For a full listing of our jobs, visit https://www.metacareers.com
- Website
- https://www.metacareers.com/
- Industry
- Software Development
- Company size
- 10,001+ employees
- Headquarters
- Menlo Park, CA
- Type
- Public Company
- Specialties
- Connectivity, Artificial Intelligence, Virtual Reality, Machine Learning, Social Media, Augmented Reality, Marketing Science, Mobile Connectivity, Open Compute, and Metaverse
Locations
Employees at Meta
Updates
-
Meet the React team at Meta, building cutting-edge experiences with React and React Native! At Meta Connect 2024, we showcased exciting projects built with these frameworks, including Instagram and Facebook for Meta Quest, the Meta Horizon mobile app, and Meta Spatial Editor. Learn more: https://bit.ly/3NkBY0T #LifeAtMeta #MetaConnect #AugmentedReality
-
Today is World Mental Health Day! At Meta, we're committed to building community and bringing people together. Taking care of ourselves is essential for caring for others. Check out the top tips from Meta employees on combating stress and boosting mental health in our latest reel, with a little help from their Ray-Ban Meta smart glasses! What self-care activity will you try today? #LifeAtMeta #WorldMentalHealthDay #MentalHealthMatters
-
For these women, building the future of AR glasses is more than tackling career-defining challenges and creating a bold vision. It's establishing a north star for building rich AR experiences. Learn how Selena S., Anaid G., and Jossie T. are building Orion behind the scenes: https://bit.ly/4eBOzZA #Orion #AR #ARglasses #HispanicHeritageMonth
-
Embrace the cozy spirit of October: enjoy warmth, comfort, and a boost in productivity. #ImaginedwithAI
-
Meta reposted
Today we're excited to premiere Meta Movie Gen: the most advanced media foundation models to date. Developed by AI research teams at Meta, Movie Gen delivers state-of-the-art results across a range of capabilities. We're excited for the potential of this line of research to usher in entirely new possibilities for casual creators and creative professionals alike.
More details and examples of what Movie Gen can do: https://go.fb.me/00mlgt
Movie Gen research paper: https://go.fb.me/zfa8wf
Movie Gen models and capabilities:
- Movie Gen Video: a 30B parameter transformer model that can generate high-quality and high-definition images and videos from a single text prompt.
- Movie Gen Audio: a 13B parameter transformer model that can take a video input, along with optional text prompts for controllability, and generate high-fidelity audio synced to the video. It can generate ambient sound, instrumental background music and foley sound, delivering state-of-the-art results in audio quality, video-to-audio alignment and text-to-audio alignment.
- Precise video editing: using a generated or existing video and accompanying text instructions as input, the model can perform localized edits such as adding, removing or replacing elements, or global changes like background or style changes.
- Personalized videos: using an image of a person and a text prompt, the model can generate a video with state-of-the-art results on character preservation and natural movement.
We're continuing to work closely with creative professionals from across the field to integrate their feedback as we work towards a potential release. We look forward to sharing more on this work and the creative possibilities it will enable in the future.
-
Celebrating #HispanicHeritageMonth is about "remembering we're honoring everyone, but we're honoring ourselves as well," says Karla V., Program Manager at Meta. For Karla, making time to build community helps to open doors for your community. It's highlighting that "representation and success can have different shapes and form" and that there are different career paths one can have. "Someone might want to be a director and that's awesome, and I will be rooting for that person, while for others being a program manager like myself will be good enough, and that's a form of success as well." Thank you, Karla, for being an incredible leader and reminding us that success can come in many ways. #LHHM #LifeatMeta
-
Meta at UNGA 2024: Accelerating Global Progress. Last month, world leaders gathered at the 2024 United Nations General Assembly, and we were honored to share our contributions to a better future. Key highlights:
- No Language Left Behind: our AI model supporting 200 languages
- Partnership for Global Inclusivity in AI: joining forces with the US State Department
- Llama Impact Grants and Awards: supporting innovative AI solutions
Learn more about how we can leverage technology to create a more equitable future for all: https://bit.ly/3TS23rK #UNGA79 #UNGA2024 #SustainableDevelopment #OpenSourceAI
-
Meta reposted
Llama 3.2 features our first multimodal Llama models with support for vision tasks. These models can take in both image and text prompts to deeply understand and reason about their inputs, and they are the next step towards even richer agentic applications built with Llama. More on all of our new Llama 3.2 models: https://go.fb.me/14f79n
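As a rough illustration of that image-plus-text interface, here is a minimal sketch using the Hugging Face transformers integration of the Llama 3.2 vision models (the MllamaForConditionalGeneration class, available from transformers v4.45) with the meta-llama/Llama-3.2-11B-Vision-Instruct checkpoint. The image path and question are placeholder assumptions for illustration, not part of the original post.

```python
# Minimal sketch: send an image plus a text prompt to a Llama 3.2 vision
# model via Hugging Face transformers (v4.45+). "photo.jpg" and the
# question are placeholder assumptions, not from the post.
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"

# Load the model in bfloat16 and place it on available devices.
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("photo.jpg")  # placeholder image path

# Chat-style message: an image slot followed by the text prompt.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```

The same processor interleaves the image tokens with the text, which is what lets the model reason over both modalities in a single prompt.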