Take Data Science Models To The Next Level
Recently, organizations have been leveraging artificial intelligence (AI), machine learning (ML), and optimization (OPT) models to improve organizational decision-making. Many model-building tools — such as SageMaker, Azure ML, DataRobot, and Dataiku — let you easily build, train, and tune models, as well as apply out-of-the-box models to specific datasets.
However, the journey from development sandbox to impactful operational tool for users remains precarious. One-off AI/ML/OPT models may be developed as point solutions for specific business needs or exposed on an API endpoint, but many organizations lack the overarching framework to ensure that data, models, and decisions are captured and deployed across use cases — resulting in fragmentation and limited learning.
Unlocking the full value of your data science models means going beyond serving an API endpoint. It means driving decision-making processes and systems at scale.
Achieving this state requires a trustworthy data foundation, full-fidelity feedback loops between consumers and model builders, safe mechanisms for writing back to systems of action, shared security and lineage frameworks across teams — and much more. Foundry can make this possible.
How it works
Palantir Foundry provides a complete operating system for AI/ML. For data scientists and AI/ML/OPT teams, Foundry offers deep operational connectivity and dynamic feedback loops between consumers and model builders. For business teams, Foundry enables technical and non-technical users alike to interact with key model levers, search and discover available data, test “what-if” scenarios, run large-scale simulations, and make decisions through entirely customizable user-facing applications.
Whether you choose to bring your own model building tools or use FoundryML, our built-in modeling framework, the Foundry operating system helps you unlock the full potential of your models in three steps.
1) Integrate
Foundry builds on the foundational work from your data science teams, combining relevant data and models using a suite of interoperable connectors to external systems.
For data, Foundry enables bi-directional connection to your enterprise data platform as well as your operational and transactional systems (e.g., ERP, CRM, MES, Asset Config, Edge, and more). Similarly, for models, integration can occur via native platform integrations (e.g., AWS SageMaker, Azure ML, DataRobot, or Databricks) or by importing your model artifact directly into Foundry (as code, libraries, or trained models).
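To make the model-import path concrete, here is a minimal, hypothetical sketch of wrapping an externally trained artifact behind a uniform prediction interface. The `ModelAdapter` class, the `predict_df` method, and the pickle-based loading are illustrative assumptions, not Foundry's actual API; the point is simply that a thin adapter lets a platform call the model the same way regardless of where it was trained.

```python
# Hypothetical sketch: wrap an externally trained model artifact behind a
# uniform tabular interface. Names here are illustrative, not Foundry APIs.
import pickle

import pandas as pd


class ModelAdapter:
    """Load a serialized model and expose a DataFrame-in / DataFrame-out predict."""

    def __init__(self, artifact_path: str):
        # e.g. a scikit-learn estimator trained and pickled in another tool
        with open(artifact_path, "rb") as f:
            self._model = pickle.load(f)

    def predict_df(self, features: pd.DataFrame) -> pd.DataFrame:
        out = features.copy()
        out["prediction"] = self._model.predict(features)
        return out


# Usage sketch:
# adapter = ModelAdapter("churn_model.pkl")
# scored = adapter.predict_df(feature_batch)
```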
Once data and models have been integrated, Foundry provides a range of operational AI/ML capabilities intended to complement model building.
Extensive support for versioning and branching of data and models, for example, enables your data scientists to switch quickly between data-centric workflows (where the model is fixed and iterations focus on improving the data) and model-centric ones (where the data is fixed and iterations focus on enhancing the model). The ability to pivot along both dimensions accelerates analytic improvements while maintaining engineering rigor throughout experimentation.
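To make the distinction concrete, the sketch below expresses the two iteration modes in plain scikit-learn terms: one helper holds the model fixed and searches over candidate data versions, the other holds the data fixed and searches over candidate models. The helper functions and the dictionary-based dataset format are assumptions for illustration; Foundry's actual branching and versioning machinery is not shown.

```python
# Illustrative only: the two iteration modes described above, without any
# platform-specific versioning machinery.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score


def best_dataset(model, candidate_datasets):
    """Data-centric loop: hold the model fixed, iterate on improved data versions."""
    return max(
        candidate_datasets,
        key=lambda d: cross_val_score(model, d["X"], d["y"], cv=5).mean(),
    )


def best_model(dataset, candidate_models):
    """Model-centric loop: hold the data fixed, iterate on models / hyperparameters."""
    X, y = dataset["X"], dataset["y"]
    return max(
        candidate_models,
        key=lambda m: cross_val_score(m, X, y, cv=5).mean(),
    )


# Usage sketch (X_train, y_train assumed to exist):
candidate_models = [LogisticRegression(max_iter=1000), GradientBoostingClassifier()]
# winner = best_model({"X": X_train, "y": y_train}, candidate_models)
```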
2) Bind
Once integrated, Foundry binds your models to the Foundry Ontology. The Ontology sits atop the digital assets in Foundry and connects them to their real-world counterparts, ranging from physical assets like manufacturing plants, equipment, and products to concepts like customer orders or financial transactions. In many settings, the Ontology serves as a digital twin of the organization.
The Ontology helps orchestrate the flow of data and models through operational workflows and enables collaboration among data scientists, AI/ML/OPT, business, and operational teams on a shared substrate. Models — and their features — can be bound directly to the primitives and processes that drive the business. The Ontology then allows them to be governed, released, and injected directly into core applications and systems — without additional adapters or glue-code — and served in-platform (batch, streaming, or query-driven) or externally.
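As a rough illustration of what "binding" means in practice, the sketch below attaches a model's output to typed business objects rather than returning raw score arrays, so downstream applications read a governed property instead of an anonymous prediction. The `Equipment` class and `score_equipment` function are hypothetical stand-ins, not Ontology APIs.

```python
# Hypothetical sketch: bind model output to business objects as a property.
# Equipment and score_equipment are illustrative, not Foundry Ontology APIs.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Equipment:  # stand-in for an Ontology object type
    equipment_id: str
    runtime_hours: float
    vibration_rms: float
    failure_risk: Optional[float] = field(default=None)  # model-derived property


def score_equipment(model, assets: list[Equipment]) -> list[Equipment]:
    """Attach each asset's predicted failure risk so applications read one governed field."""
    features = [[a.runtime_hours, a.vibration_rms] for a in assets]
    for asset, risk in zip(assets, model.predict_proba(features)[:, 1]):
        asset.failure_risk = float(risk)
    return assets
```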
3) Operationalize
Once you bind your AI/ML/OPT models to the Foundry Ontology, you can unleash their full potential. By leveraging Ontology objects and native application builders, Foundry provides the primitives needed to design, configure, and deploy AI-infused workflows. As operators, business processes, and systems make decisions and take action, the results are written back into the Ontology — providing unprecedented feedback loops for model monitoring, evaluation, re-training, and MLOps.
This advanced operationalization helps to “close the loop” between AI/ML/OPT and operations in several ways:
Foundry integrates data and models from external systems, binds them to the organization’s Ontology, and embeds them in operational workflows and applications.
Foundry captures decisions made in the operational sphere and writes data back to both the Ontology and systems of action, providing a “closed loop” between AI/ML/OPT and operations.
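A minimal sketch of such a closed loop, under the assumption that each operator decision and its eventual outcome are recorded next to the original prediction: the captured records become labeled examples for monitoring, evaluation, and re-training. The `DecisionRecord` type and in-memory log are illustrative assumptions; in Foundry this role is played by write-back to the Ontology and systems of action.

```python
# Illustrative sketch of a decision write-back / feedback loop. The record type
# and in-memory log are assumptions, not Foundry constructs.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class DecisionRecord:
    object_id: str
    prediction: float                 # model recommendation at decision time
    action_taken: str                 # operator's actual decision (write-back)
    outcome: Optional[float] = None   # observed result; later becomes a training label
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


feedback_log: list[DecisionRecord] = []


def record_decision(object_id: str, prediction: float, action: str) -> DecisionRecord:
    """Capture the decision alongside the prediction that informed it."""
    rec = DecisionRecord(object_id, prediction, action)
    feedback_log.append(rec)
    return rec


def training_examples() -> list[tuple[float, float]]:
    """Decisions with observed outcomes feed monitoring, evaluation, and re-training."""
    return [(r.prediction, r.outcome) for r in feedback_log if r.outcome is not None]
```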
Doubling Down on Your Data Science Investments
We built Foundry to help organizations more deeply connect their data, analytics, and operations. Integrating your data science models with Foundry lets you begin augmenting and amplifying existing data and model investments in hours. Binding your models to the Ontology moves them out of the lab and onto the front lines of the organization, enabling technical and non-technical users alike to interact with model levers. Finally, operationalization closes the loop with operators, continuously improving models through rich end-user feedback and driving ongoing learning.
Learn more about how Foundry can help you unlock the power of your data science models and get in touch with a member of our Foundry product team.