Astor Perkins featured in Forbes: The deep tech revolution creating a new class of wealth

"Space maverick" Scott Amyx's skills are niche – so niche that only a handful of people in the world possess his depth of knowledge about deep tech. In layperson's terms, he's a leader in the deep tech VC market, investing in complex scientific ideas and space technology that will one day change the world as we know it.

Amyx – a Forbes Business Council member and TEDx speaker – believes that deep tech will create the next industrial revolution, leading to profound changes in how we live and work and creating a new class of mega-wealth. SpaceX alone, for example, has a market valuation of US$140 billion ($198.8 billion) – a figure predicted to increase substantially when the company goes public.

"When we look at… the digital revolution that kind of spanned the 2000s until recently – we think that what we're looking at now [with deep space] is probably going to eclipse all of that combined," Amyx says. "We are very much at the precipice of [going from] being a one-planet species to being an interplanetary species. We're going to build habitats [on other planets], and the long-term plan is to have a habitat on the moon, as well as on Mars and other planets."

Just as previous industrial revolutions created tycoons like the Rockefellers, Amyx predicts that the deep space revolution will see a new class of entrepreneurs – only this time, they will be trillionaires. "For the first time, we're going to see mega industries [with players] like SpaceX and Starlink, where we're going to create trillionaires, not billionaires. Elon Musk will be just one of a few," Amyx says.

While 95 per cent of VCs "pile money into the same thing", such as Web3, crypto, SaaS and fintech, Amyx says deep space is a potentially massive market for innovation that very few VC experts understand. Astor Perkins's recent deep tech investments include D-Orbit, Metawave and Lunar Station. "We are focused on backing mavericks and solving some of the hardest problems facing humanity on Earth and in space. And specifically, we look at areas of climate change mitigation… longevity, human survival, and deep space."

When it comes to space travel and tourism, Amyx says history can lend a clue to where the real money will be made by the next generation of entrepreneurs. "When people came to California during the gold rush, it wasn't the people that were mining for gold that became rich. It was actually the merchants. The same goes for the moon," he says.

https://lnkd.in/e547_7zq

#forbes #space #deeptech #vc #startup
Astor Perkins
Venture Capital and Private Equity Principals
New York, New York · 1,057 followers
We back mavericks solving some of the hardest problems facing humanity on Earth and in space.
About us
A VC fund focused exclusively on deep tech & human survival. We back mavericks solving some of the hardest problems facing humanity on Earth and in space.

Astor Perkins is a venture capital fund focused on deep tech, sustainability & survival. We partner with global leaders on their mission to build and protect the future cities on Earth and in space. In the complex sectors of deep tech and human survival, we believe no other firm offers the depth and breadth of domain expertise, experience and mission-critical advice to achieve best-in-class results. Our relentless pursuit of excellence in our specialization and impact is unrivaled in the industry.

Sector Coverage
Survival & longevity, biotech, life sciences, health, agtech, foodtech, climate change mitigation & adaptation, sustainability, impact investing, AI/ML/DL/NN, robotics, autonomy, autonomous vehicles, cybersecurity, renewable energy & storage, hydrogen, nuclear fusion, quantum computing, quantum information/teleportation, space, rockets, satellites, satellite servicing, space tourism, space stations, lunar bases, asteroid mining, and terraforming.
- Website
-
https://astorperkins.com/
External link for Astor Perkins
- Industry
- Venture Capital and Private Equity Principals
- Company size
- 2-10 employees
- Headquarters
- New York, New York
- Type
- Partnership
- Founded
- 2020
- Specialties
- Venture Capital, Space, Human Survival, BioTech, HealthTech, MedTech, Startups, Private Equity, Longevity, Life Sciences, Health, AgTech, FoodTech, Climate Change Mitigation & Adaptation, Sustainability, Impact Investing, AI, Machine Learning, Deep Learning, Neural Networks, Robotics, Autonomy, Autonomous Vehicles, Cybersecurity, Renewable Energy, Solar, Wind, Hydrogen, Storage & Batteries, Nuclear Fusion, Quantum Computing, Quantum Information, Quantum Teleportation, Rockets, Satellites, Satellite Servicing, Space Tourism, Space Stations, Lunar, Asteroid Mining, and Terraforming
Locations
-
Primary
New York, New York 10001, US
Astor Perkins employees
Updates
-
Model-Based Transfer Learning for Contextual Reinforcement Learning

Deep reinforcement learning (RL) is a powerful approach to complex decision making. However, one issue that limits its practical application is its brittleness, sometimes failing to train in the presence of small changes in the environment. Motivated by the success of zero-shot transfer—where pre-trained models perform well on related tasks—we consider the problem of selecting a good set of training tasks to maximize generalization performance across a range of tasks. Given the high cost of training, it is critical to select training tasks strategically, but it is not well understood how to do so. We hence introduce Model-Based Transfer Learning (MBTL), which layers on top of existing RL methods to effectively solve contextual RL problems. MBTL models the generalization performance in two parts: 1) the performance set point, modeled using Gaussian processes, and 2) performance loss (generalization gap), modeled as a linear function of contextual similarity. MBTL combines these two pieces of information within a Bayesian optimization (BO) framework to strategically select training tasks. We show theoretically that the method exhibits sublinear regret in the number of training tasks and discuss conditions to further tighten regret bounds. We experimentally validate our methods using urban traffic and standard continuous control benchmarks. The experimental results suggest that MBTL can achieve up to 50x improved sample efficiency compared with canonical independent training and multi-task training. Further experiments demonstrate the efficacy of BO and the insensitivity to the underlying RL algorithm and hyperparameters. This work lays the foundations for investigating explicit modeling of generalization, thereby enabling principled yet effective methods for contextual RL. https://lnkd.in/eMDB7GEF
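The post describes MBTL only at a high level, so below is a small, hypothetical Python sketch of the selection loop under stated assumptions: a one-dimensional context space, a stubbed train_and_evaluate() standing in for expensive RL training, a Gaussian process model of source-task performance (the set point), and a generalization gap that is linear in context distance, combined in a simple UCB-style acquisition. Names and constants are illustrative rather than taken from the paper.

```python
# Hypothetical sketch of MBTL-style training-task selection (not the authors' code).
# Assumed pieces: a 1-D context space, a GP over source-task performance, and a
# generalization gap modeled as slope * |source_context - target_context|.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

contexts = np.linspace(0.0, 1.0, 50)   # candidate training tasks (e.g., traffic settings)
slope = 0.8                             # assumed linear generalization-gap coefficient

def train_and_evaluate(ctx: float) -> float:
    """Stub for expensive RL training; returns source-task performance in [0, 1]."""
    return 1.0 - 0.5 * (ctx - 0.3) ** 2  # placeholder response surface

trained_x, trained_y = [], []
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-3)

for step in range(5):                   # training budget: 5 tasks
    if trained_x:
        gp.fit(np.array(trained_x)[:, None], np.array(trained_y))
        mu, sigma = gp.predict(contexts[:, None], return_std=True)
    else:                               # no data yet: flat prior with high uncertainty
        mu, sigma = np.zeros_like(contexts), np.ones_like(contexts)

    # Performance already "covered" on each target task by the tasks trained so far,
    # i.e. source performance minus the linear generalization gap.
    covered = np.full_like(contexts, -np.inf)
    for x, y in zip(trained_x, trained_y):
        covered = np.maximum(covered, y - slope * np.abs(contexts - x))

    # UCB-style acquisition: pick the source task whose optimistic transferred
    # performance raises the average coverage across all target tasks the most.
    optimistic = mu + sigma
    gains = [np.mean(np.maximum(optimistic[i] - slope * np.abs(contexts - src), covered))
             for i, src in enumerate(contexts)]
    next_ctx = float(contexts[int(np.argmax(gains))])

    trained_x.append(next_ctx)
    trained_y.append(train_and_evaluate(next_ctx))
    print(f"step {step}: train on context {next_ctx:.2f}")
```

The sketch keeps the paper's two-part structure (GP set point plus linear gap) but uses a deliberately simple acquisition rule; the actual MBTL acquisition and regret analysis are in the linked paper.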
-
Goldman Funds Take $900 Million Hit on Northvolt Funds managed by Goldman Sachs Asset Management are set to write off almost $900 million on investments in Swedish battery maker Northvolt AB, which filed for bankruptcy protection this week, the Financial Times reported, citing letters to investors it has seen. The US bank’s private equity funds, which have at least $896 million in exposure to Northvolt, will write that down in its entirety at the end of the year. Goldman’s holdings made it the second-largest shareholder in Northvolt, which is seeking to restructure under Chapter 11 proceedings in the US. The move capped months of talks with owners, customers and creditors. Goldman was spearheading an investor group trying to rescue the company. The bid failed, leaving the battery maker with just one week’s cash in its accounts. https://lnkd.in/eERTMejf
Goldman Funds Take $900 Million Hit on Northvolt, FT Says
bloomberg.com
-
Measurements of the quantum geometric tensor in solids

Understanding the geometric properties of quantum states and their implications in fundamental physical phenomena is a core aspect of contemporary physics. The quantum geometric tensor (QGT) is a central physical object in this regard, encoding complete information about the geometry of the quantum state. The imaginary part of the QGT is the well-known Berry curvature, which plays an integral role in topological, magnetoelectric and optoelectronic phenomena. The real part of the QGT is the quantum metric, whose importance has come to prominence recently, giving rise to a new set of quantum geometric phenomena such as anomalous Landau levels, flat band superfluidity, excitonic Lamb shifts and the nonlinear Hall effect. Despite the central importance of the QGT, its experimental measurement has so far been restricted to artificial two-level systems. Here, we develop a framework to measure the QGT in crystalline solids using polarization-, spin- and angle-resolved photoemission spectroscopy. Using this framework, we demonstrate the effective reconstruction of the QGT in the kagome metal CoSn, which hosts topological flat bands. Establishing this momentum- and energy-resolved spectroscopic probe of the QGT is poised to significantly advance our understanding of quantum geometric responses in a wide range of crystalline systems. https://lnkd.in/ewDXNY3b
Measurements of the quantum geometric tensor in solids - Nature Physics
nature.com
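As background for readers, the standard textbook definition of the QGT for a Bloch band is sketched below; its real part is the quantum metric and its imaginary part encodes the Berry curvature. This is general background (sign and factor-of-1/2 conventions vary), not notation taken from the paper.

```latex
% Standard definition of the quantum geometric tensor for a Bloch band |u_n(k)>.
\[
  Q^{(n)}_{\mu\nu}(\mathbf{k})
    = \big\langle \partial_{k_\mu} u_n \big|
      \bigl( 1 - \lvert u_n \rangle\langle u_n \rvert \bigr)
      \big| \partial_{k_\nu} u_n \big\rangle
    = g^{(n)}_{\mu\nu}(\mathbf{k}) - \tfrac{i}{2}\, F^{(n)}_{\mu\nu}(\mathbf{k})
\]
% g is the quantum metric (real part); F is the Berry curvature (from the imaginary part,
% up to the sign and 1/2 conventions used).
```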
-
Perform outlier detection more effectively using subsets of features

This article is part of a series on the challenges of identifying outliers in data and the techniques that may be used to do so, including articles related to using PCA, Distance Metric Learning, Shared Nearest Neighbors, Frequent Patterns Outlier Factor, Counts Outlier Detector (a multi-dimensional histogram-based method), and doping. This article also contains an excerpt from my book, Outlier Detection in Python.

We look here at techniques to create, instead of a single outlier detector examining all features within a dataset, a series of smaller outlier detectors, each working with a subset of the features (referred to as subspaces).

A number of technical challenges appear in outlier detection. Among these are the difficulties that occur where data has many features. As covered in previous articles related to Counts Outlier Detector and Shared Nearest Neighbors, where we have many features we often face an issue known as the curse of dimensionality. This has a number of implications for outlier detection, including that it makes distance metrics unreliable. Many outlier detection algorithms rely on calculating the distances between records in order to identify as outliers the records that are close to few other records and far from most other records.

To address these issues, an important technique in outlier detection is using subspaces. The term subspaces simply refers to subsets of the features. In the example above, if we use the subspaces A-B, C-D, E-F, A-E, B-C, B-D-F, and A-B-E, then we have seven subspaces (five 2d subspaces and two 3d subspaces). Creating these, we would run one (or more) detectors on each subspace, so would run at least seven detectors on each record.

We've seen, then, a couple of motivations for working with subspaces: we can mitigate the curse of dimensionality, and we can reduce the risk that anomalies based on a small number of features go undetected because those features are lost among many others. There are a number of other advantages to using subspaces with outlier detection as well. These include... https://lnkd.in/exfdadZC
Perform Outlier Detection More Effectively Using Subsets of Features
towardsdatascience.com
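Since the excerpt stops before any code, here is a small, hypothetical sketch of the subspace technique under stated assumptions: scikit-learn's IsolationForest as the base detector, the seven subspaces named in the post, and a simple max-over-subspaces aggregation of per-record outlier scores (the article itself may construct subspaces and combine scores differently).

```python
# Hypothetical illustration of subspace-based outlier detection (not the article's code).
# Assumptions: a toy numeric dataset and scikit-learn's IsolationForest as the base
# detector; any detector exposing fit() / score_samples() could be substituted.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(500, 6)), columns=list("ABCDEF"))
df.loc[0, ["B", "D", "F"]] = [6.0, 6.0, 6.0]   # plant an anomalous record

# The seven subspaces from the post: five 2d and two 3d subsets of the features.
subspaces = [["A", "B"], ["C", "D"], ["E", "F"], ["A", "E"],
             ["B", "C"], ["B", "D", "F"], ["A", "B", "E"]]

scores = np.zeros(len(df))
for cols in subspaces:
    det = IsolationForest(random_state=0).fit(df[cols])
    # score_samples() is higher for inliers, so negate it to get an outlier score,
    # then aggregate across subspaces by taking the maximum per record.
    scores = np.maximum(scores, -det.score_samples(df[cols]))

print(df.index[np.argsort(scores)[-5:]].tolist())  # indices of the 5 strongest outliers
```

Keeping each detector to two or three features sidesteps the curse of dimensionality within each detector, and taking the maximum across subspaces means a record needs to look anomalous in only one subspace to receive a high overall score.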
-
Combining AI and Crispr Will Be Transformational In 2025, we will see AI and machine learning begin to amplify the impact of Crispr genome editing in medicine, agriculture, climate change, and the basic research that underpins these fields. It’s worth saying upfront that the field of AI is awash with big promises like this. With any major new technological advance there is always a hype cycle, and we are in one now. In many cases, the benefits of AI lie some years in the future, but in genomics and life science research we are seeing real impacts right now. In my field, Crispr gene editing and genomics more broadly, we often deal with enormous datasets—or, in many cases, we can’t deal with them properly because we simply don’t have the tools or the time. Supercomputers can take weeks to months to analyze subsets of data for a given question, so we have to be highly selective about which questions we choose to ask. AI and machine learning are already removing these limitations, and we are using AI tools to quickly search and make discoveries in our large genomic datasets. In my lab, we recently used AI tools to help us find small gene-editing proteins that had been sitting undiscovered in public genome databases because we simply didn’t have the ability to crunch all of the data that we’ve collected. A group at the Innovative Genomics Institute, the research institute that I founded 10 years ago at UC Berkeley, recently joined forces with members of the Department of Electrical Engineering and Computer Sciences (EECS) and Center for Computational Biology, and developed a way to use a large language model, akin to what many of the popular chatbots use, to predict new functional RNA molecules that have greater heat tolerance compared to natural sequences. Imagine what else is waiting to be discovered in the massive genome and structural databases scientists have collectively built over the recent decades. These types of discoveries have real-world applications. For the two examples above, smaller genome editors can help with more efficient delivery of therapies into cells, and predicting heat-stable RNA molecules will help improve biomanufacturing processes that generate medicines and other valuable products. In health and drug development, we have recently seen the approval of the first Crispr-based therapy for sickle cell disease, and there are around 7,000 other genetic diseases that are waiting for a similar therapy. AI can help accelerate the process of development by predicting the best editing targets, maximizing Crispr's precision and efficiency, and reducing off-target effects. In agriculture, AI-informed Crispr advancements promise to create more resilient, productive, and nutritious crops, ensuring greater food security and reducing the time to market by helping researchers focus on the most fruitful approaches. https://lnkd.in/eytTQgc8
Combining AI and Crispr Will Be Transformational
wired.com
-
Ray Kurzweil believes humanity will achieve longevity escape velocity in just five years

Computer scientist and futurist Ray Kurzweil believes humanity will achieve "longevity escape velocity" in just five years. The concept basically states that due to medical and technological advances, we will soon reach a point where our life expectancies lengthen by more than one year per year, effectively giving us time back on the clock. This is a very controversial concept, and one that—even if possible—would require widespread access to cutting-edge medical technology. "Past 2029, you'll get back more than a year. Go backwards in time," Kurzweil said in an interview with the venture capital and private equity firm Bessemer Venture Partners. "Once you can get back at least a year, you've reached longevity escape velocity." That may seem like a remarkably near future, but Kurzweil seems convinced, largely because medical advancement seems to be speeding up. If it's even possible to achieve, it would not mean that everyone around the world would suddenly experience dramatically extended lives. That would require everyone to have access to the very height of cutting-edge medical technology and infrastructure, which is highly unlikely to happen in the span of five years. As an example of that unlikelihood, tuberculosis—a disease we have known how to treat and prevent for decades—kills more people per year worldwide than any other infectious disease. The existence of medical treatments and advances is not synonymous with their widespread implementation. It's true that medicine is advancing rapidly, as is technology. And if the past has anything to say about the future, those advances will likely continue to extend average life expectancies. But as enticing as the idea of longevity escape velocity is, it's still just a prediction for now. Death and taxes, at this point in time, both remain inevitable. https://lnkd.in/ewDWEW88
-
Introducing Anthropic's Model Context Protocol Anthropic is open-sourcing the Model Context Protocol (MCP), a new standard for connecting AI assistants to the systems where data lives, including content repositories, business tools, and development environments. Its aim is to help frontier models produce better, more relevant responses. As AI assistants gain mainstream adoption, the industry has invested heavily in model capabilities, achieving rapid advances in reasoning and quality. Yet even the most sophisticated models are constrained by their isolation from data—trapped behind information silos and legacy systems. Every new data source requires its own custom implementation, making truly connected systems difficult to scale. MCP addresses this challenge. It provides a universal, open standard for connecting AI systems with data sources, replacing fragmented integrations with a single protocol. The result is a simpler, more reliable way to give AI systems access to the data they need. Three major components of the Model Context Protocol for developers: - The Model Context Protocol specification and SDKs - Local MCP server support in the Claude Desktop apps - An open-source repository of MCP servers Claude 3.5 Sonnet is adept at quickly building MCP server implementations, making it easy for organizations and individuals to rapidly connect their most important datasets with a range of AI-powered tools. To help developers start exploring, we’re sharing pre-built MCP servers for popular enterprise systems like Google Drive, Slack, GitHub, Git, Postgres, and Puppeteer. Instead of maintaining separate connectors for each data source, developers can now build against a standard protocol. As the ecosystem matures, AI systems will maintain context as they move between different tools and datasets, replacing today's fragmented integrations with a more sustainable architecture. https://lnkd.in/grPMihv7
Introducing the Model Context Protocol
anthropic.com
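To make the server concept concrete, here is a minimal, hypothetical sketch assuming the FastMCP helper from the open-source Python SDK; module paths and decorator names may differ between SDK versions, so treat it as a shape rather than a verified snippet.

```python
# Minimal sketch of an MCP server exposing one tool, assuming the FastMCP helper
# from the Python SDK; the exact API may vary by SDK version.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("notes")  # server name advertised to connecting clients

@mcp.tool()
def search_notes(query: str) -> list[str]:
    """Return note titles matching the query (stubbed with in-memory data)."""
    notes = ["Q3 roadmap", "Board meeting notes", "Vendor contracts"]
    return [n for n in notes if query.lower() in n.lower()]

if __name__ == "__main__":
    mcp.run()  # serve over stdio so a local client such as Claude Desktop can connect
```

A client that speaks MCP can then list and call this tool without any connector code specific to the underlying data source, which is the point of the protocol.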
-
Next-generation combination approaches for immune checkpoint therapy

Immune checkpoint therapy has revolutionized cancer treatment, leading to dramatic clinical outcomes for a subset of patients. However, many patients do not experience durable responses following immune checkpoint therapy owing to multiple resistance mechanisms, highlighting the need for effective combination strategies that target these resistance pathways and improve clinical responses. The development of combination strategies based on an understanding of the complex biology that regulates human antitumor immune responses has been a major challenge. In this Review, we describe the current landscape of combination therapies. We also discuss how the development of effective combination strategies will require the integration of: small, tissue-rich clinical trials, to determine how therapy-driven perturbation of the human immune system affects downstream biological responses and eventual clinical outcomes; reverse translation of clinical observations to immunocompetent preclinical models, to interrogate specific biological pathways and their impact on antitumor immune responses; and novel computational methods and machine learning, to integrate multiple datasets across clinical and preclinical studies for the identification of the most relevant pathways that need to be targeted for successful combination strategies. https://lnkd.in/eAEwkpXj
Next-generation combination approaches for immune checkpoint therapy - Nature Immunology
nature.com
-
Amazon Web Services Launches Quantum-Computing Advisory Program Amazon Web Services launched Quantum Embark, an advisory program that aims to prepare customers for society’s shift towards quantum computing. The program—which builds upon the company’s existing quantum-computing service, Amazon Braket—will take users through up to three modules designed to provide people being introduced to the field with context, guidance and expertise. https://lnkd.in/egD8JrWY
Amazon Web Services Launches Quantum-Computing Advisory Program
wsj.com