Data Squared Submits Response to Federal AI Action Plan RFI
Jon Brewton
Founder and CEO - USAF Vet; M.Sc. Eng; MBA; HBAP. Data Squared has created the only hallucination-resistant and fully explainable AI solution development platform in the world.
Executive Summary
Data Squared USA Inc. (Data2) respectfully submits this comprehensive response to the Office of Science and Technology Policy's Request for Information on the Development of an Artificial Intelligence (AI) Action Plan.
As a Service-Disabled Veteran-Owned Small Business (SDVOSB) specializing in explainable AI solutions for government and defense applications, we applaud the Trump Administration’s commitment to ensuring United States leadership in artificial intelligence through Executive Order 14179.
In this response, we address core issues limiting scaled AI adoption in government and provide concise policy recommendations that can guide the Federal AI Action Plan. Specifically, we highlight four areas of critical concern:
1. The fragmentation of data across agencies and systems.
2. The importance of security and trust in government AI.
3. The need for high explainability and traceability in government AI solutions.
4. The importance of meeting stringent accuracy requirements, especially for mission-critical applications.
The Data2 reView Platform delivers the critical capabilities essential for scaled government AI solutions:
1. Transparent Reasoning: Every inference can be traced back to its source data with explicit citations.
2. High-Level Accuracy: Graph-based data structures “connect the dots” for language models to provide the context necessary for mission-critical accuracy.
3. Secure Collaboration: Through integration with zero-trust frameworks, reView enables collaboration across security boundaries while maintaining strict control over information access.
4. Flexible Integration: reView connects with existing systems and data sources without requiring wholesale replacement, enabling incremental adoption.
5. Continuous Learning: The platform evolves its understanding as new data is ingested, building increasingly rich context over time.
I. Introduction
AI can drastically improve government efficiency and effectiveness in many areas, particularly national security, cybersecurity, logistics, and secure management of supply chains. However, many agencies encounter serious challenges when exploring high-value AI use cases.
One of the most frequently cited obstacles is data quality and data fragmentation. Large agencies often manage hundreds or thousands of separate information systems, each with its own proprietary data model and copy of the underlying data. This build-up of data systems over years or decades leads to massive duplication of records, inconsistent naming conventions, and conflicting metadata standards. Simply put, data in this state is not suitable for use in an AI system.
Security and trust are equally important considerations. Government data often involves sensitive personally identifiable information or national security content. Black-box AI systems that cannot provide transparent reasoning leave agencies vulnerable to adversarial manipulation, data poisoning, or misinterpretation of outcomes. Inaccurate or untrustworthy AI tools can erode public confidence, impede inter-agency collaboration, face legal or regulatory scrutiny, and delay meaningful AI implementations.
Explainability and traceability are non-negotiable requirements. Agencies require a clear understanding of how AI systems arrive at conclusions, especially when those conclusions inform major policy decisions or sensitive operational contexts. In mission-critical domains such as defense or cybersecurity, 99 percent accuracy should be the standard against which systems are evaluated.
Our knowledge graph-based AI infrastructure addresses these requirements by enforcing data provenance, clarifying the chain of reasoning, and enabling advanced security protocols that guard against unauthorized data access, modification, or use.
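To make the provenance requirement concrete, the following minimal Python sketch (hypothetical structures and names, not the reView implementation) shows how attaching a source document to every fact lets an answer cite the exact data it relied on:

```python
from dataclasses import dataclass

# Hypothetical structures for illustration only; Data2's internal data
# model is not published here.
@dataclass
class Fact:
    statement: str
    source_document: str   # provenance: where this fact came from
    classification: str    # e.g. "UNCLASSIFIED", "CUI"

@dataclass
class Inference:
    conclusion: str
    supporting_facts: list  # every fact consulted to reach the conclusion

def answer_with_citations(inference: Inference) -> str:
    """Return an answer that explicitly cites every source it relied on."""
    citations = "; ".join(f.source_document for f in inference.supporting_facts)
    return f"{inference.conclusion} [sources: {citations}]"

facts = [
    Fact("Part 7A is supplied only by Vendor X", "ERP_export_2024.csv", "CUI"),
    Fact("Vendor X shipments are delayed three weeks", "logistics_report_031.pdf", "CUI"),
]
print(answer_with_citations(Inference("Part 7A availability is at risk", facts)))
```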
II. The State of AI Infrastructure in Government
Government efforts to adopt AI are often hindered by aging infrastructure. Many agencies rely on traditional relational databases designed and implemented decades ago. As a result, data lives in silos and is rarely governed by standardized ontologies or schemas, leading to duplicate spending, increased security risk, and patchwork modernization initiatives.
Procurement processes compound these challenges. Contracts for AI often assume agencies can simply layer an algorithm on top of existing systems. In reality, data typically lacks consistent structure, and legacy systems do not track information lineage or provenance. AI models need to reference metadata and cross-system relationships in order to deliver high-quality insights. Without an infrastructure designed for integrated data, agencies incur major reengineering costs at each new AI deployment.
Security requirements create additional complexity. Classified systems impose compartmentalization rules that make data sharing difficult. Many AI tools lack mechanisms to enforce need-to-know access, requiring agencies to replicate data in separate environments or rely on fragile, case-by-case integrations. Such fragmentation also hampers explainability because system owners do not have a holistic view of how an AI model processes information.
III. Critical Dependencies for Scaled AI Implementation
Scaling AI in the federal government requires strategies that address the following dependencies:
1. Data Unification and Quality: Data must be cleaned, resolved, and standardized. This is essential for multi-source integration, efficient model training, and accurate inference. Without rigorous data governance and modernization, the cost to integrate and prepare information is untenable.
2. Security and Trust: AI solutions must incorporate security by design, with comprehensive provenance tracking, real-time auditing, and fine-grained access controls. Black-box models that cannot provide transparent explanations of their reasoning add substantial risk. Agencies need the ability to trace outcomes back to the source data and the relevant inference steps to ensure systems are reliable.
3. Workforce and Skills: Government personnel will need the ability to oversee AI projects, evaluate outputs, and manage the full lifecycle of AI tools. Dedicated training programs and new career paths can ensure agencies are able to manage AI responsibly.
4. Procurement and Governance Frameworks: Existing procurement models are not designed for such rapidly evolving technologies. Agencies need new contract vehicles, specialized oversight boards, and flexible pilot programs that help them evaluate and integrate AI with minimal bureaucratic hurdles.
IV. The Importance of AI Explainability and Traceability
Explainability and traceability are requirements for building safe, reliable AI systems. It is essential to be able to understand how any AI system arrived at a decision or recommendation.
Traceability is equally important for improving AI models over time. If a system’s outputs are repeatedly inaccurate or incomplete, agencies need to be able to pinpoint where the underlying data or reasoning failed. This is only possible when each step of the process is understood and every inference can be linked back to its source data points. Knowledge graphs provide the level of transparency necessary to understand all of these steps in a complex AI system.
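As an illustration of this kind of traceability (a hypothetical sketch, not Data2's actual logging format), each inference step below records the graph nodes it consumed, so an incorrect output can be walked back to the exact data that needs re-examination:

```python
from dataclasses import dataclass

@dataclass
class TraceStep:
    step: str              # e.g. "entity_resolution", "risk_scoring"
    input_node_ids: list   # graph nodes this step consumed
    output: str

def find_sources_of(trace: list, bad_output: str) -> list:
    """Return the source node ids behind the step that produced a bad output."""
    for record in trace:
        if record.output == bad_output:
            return record.input_node_ids
    return []

trace = [
    TraceStep("entity_resolution", ["crm:4411", "erp:9007"], "Vendor X == Vendor X LLC"),
    TraceStep("risk_scoring", ["contract:223"], "risk=LOW"),
]
# If the LOW risk score turns out to be wrong, this points straight at the data:
print(find_sources_of(trace, "risk=LOW"))   # -> ['contract:223']
```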
V. Achieving High Accuracy in Mission-Critical Applications
Government operations can have life-or-death implications. In scenarios such as contested logistics, border security, or cyber operations, an AI system that is 90 percent accurate is not sufficient. Such use cases require at or near 99 percent accuracy, plus the ability to measure, validate, and continuously improve performance in real operational conditions.
Mission-critical AI systems must also incorporate zero-trust security and contain built-in checks for unexpected or adversarial inputs. The cost of a single high-impact error outweighs the benefit of quick deployment of a partially tested system. This reinforces the need for high accuracy and transparency mechanisms that can trace errors back to their root cause.
VI. Why Knowledge Graphs Are Essential
Knowledge graphs are a proven method for unifying disparate data, maintaining provenance, and enabling query and analysis across domains. A knowledge graph structure can represent data as nodes and relationships in a highly flexible schema, acting as a strong complement to existing structured database systems. This makes it possible to incorporate new data types and semantics without extensive and perpetual re-engineering of existing data environments.
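The sketch below illustrates the point using the open-source networkx library rather than Data2's production stack, with made-up node names: a new, unstructured source is attached to an existing graph without any schema migration.

```python
import networkx as nx  # illustrative only; not Data2's production technology

g = nx.MultiDiGraph()

# Structured records from an existing relational system
g.add_node("supplier:42", kind="Supplier", name="Vendor X", source="ERP_export_2024.csv")
g.add_node("part:7A", kind="Part", source="ERP_export_2024.csv")
g.add_edge("supplier:42", "part:7A", key="SUPPLIES", since="2021")

# Later, an unstructured source appears: simply add new node and edge types.
g.add_node("report:031", kind="IntelReport", source="logistics_report_031.pdf")
g.add_edge("report:031", "supplier:42", key="MENTIONS", confidence=0.92)

# Cross-source question: which documents mention supplier 42?
print([doc for doc, _, k in g.in_edges("supplier:42", keys=True) if k == "MENTIONS"])
```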
The transparent, node-and-relationship design of Data2’s reView platform enables:
1. Data Unification. Knowledge graphs excel at combining structured and unstructured data. They can maintain context by linking related entities, which means analysts can quickly see the chain of evidence that supports any finding.
2. Advanced Reasoning. Graph-based reasoning supports both symbolic logic and statistical inference, providing deeper insights and uncovering non-obvious patterns.
3. Explainability. Each node, relationship, and property in a knowledge graph can be tagged with source data, security classification, and other metadata. This ensures that every inference path is traceable.
4. Security and Access Control. Graph data models can enforce granular role-based access, preserving classification levels and ensuring only the right individuals see sensitive data.
These properties make reView an optimal approach for implementing mission-critical AI, and for allowing agencies to meet accuracy, explainability, and security targets more effectively than with traditional data architectures.
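A minimal sketch of point 4 above, again using networkx with hypothetical classification levels and node names, shows how node-level metadata can drive role-based filtering so that a query only traverses what the caller is cleared to see:

```python
import networkx as nx  # illustrative sketch; levels and node names are hypothetical

LEVELS = {"UNCLASSIFIED": 0, "CUI": 1, "SECRET": 2}

g = nx.DiGraph()
g.add_node("unit:alpha", classification="UNCLASSIFIED")
g.add_node("mission:delta", classification="SECRET")
g.add_edge("unit:alpha", "mission:delta")

def visible_subgraph(graph: nx.DiGraph, clearance: str) -> nx.DiGraph:
    """Return only the nodes (and edges between them) at or below the clearance."""
    allowed = [n for n, d in graph.nodes(data=True)
               if LEVELS[d["classification"]] <= LEVELS[clearance]]
    return graph.subgraph(allowed).copy()

print(list(visible_subgraph(g, "UNCLASSIFIED").nodes()))  # ['unit:alpha']
print(list(visible_subgraph(g, "SECRET").nodes()))        # both nodes visible
```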
VII. Knowledge Graphs for AI Agents and Agentic Workflows
As Arvind Jain, Glean co-founder & CEO, aptly noted:
"For agentic AI, there's so much friction that comes with accessing data in the underlying environment... Success only works after you've solved for this problem."
AI agents are increasingly being deployed to handle dynamic workflows that involve continuous data ingestion, multi-step decision-making, and close collaboration with other systems or humans. As agencies build more advanced agentic solutions, a Data2 knowledge graph offers a powerful way to ensure that each AI-driven action is accurate, explainable, and traceable.
When an AI agent consults a Data2 knowledge graph, it grounds its reasoning in structured data that captures relationships between entities. Instead of relying solely on statistical patterns, the agent can reference factual connections that are continuously updated.
A Data2 knowledge graph also improves multi-hop reasoning by allowing agents to traverse relevant nodes and edges to find non-obvious correlations. For instance, an agent processing a complex financial fraud case might link information from transaction logs, user profiles, and regulatory databases to detect subtle anomalies. This process provides a more definitive basis for conclusions, helping AI agents reach the 99% accuracy level required by many mission-critical applications.
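A simplified version of that fraud example (hypothetical data and relationship names) shows how a two-hop connection that is easy to miss statistically becomes an explicit, citable chain of evidence in a graph:

```python
import networkx as nx  # illustrative only

g = nx.Graph()
g.add_edge("txn:9001", "account:A17", relation="DEBITED_FROM")
g.add_edge("account:A17", "person:Doe", relation="OWNED_BY")
g.add_edge("person:Doe", "watchlist:OFAC-2024", relation="LISTED_ON")

# The agent traverses the graph instead of guessing from statistical patterns.
path = nx.shortest_path(g, "txn:9001", "watchlist:OFAC-2024")
for u, v in zip(path, path[1:]):
    print(f"{u} -[{g.edges[u, v]['relation']}]-> {v}")
```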
As with any other AI use case, explainability is critical when AI agents operate in mission-critical government scenarios. A Data2 knowledge graph creates explicit records of how an AI system arrives at conclusions by logging which nodes, relationships, and properties were utilized. This capability allows investigators, auditors, or analysts to trace and analyze each inference step.
In the future, government AI deployments may require multiple agents to work together. A Data2 knowledge graph serves as a shared, up-to-date information layer where agents can store and retrieve facts without duplicating data. A graph’s standardized structure and well-defined relationships reduce ambiguity during inter-agent communication. New agents introduced into the environment can immediately tap existing information, accelerating their onboarding and improving overall system performance.
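As a hypothetical sketch of that shared information layer, two agents below publish facts into one graph, and any agent, including a newly onboarded one, reads the same facts with their provenance rather than keeping duplicate private copies:

```python
import networkx as nx  # illustrative only; agent and node names are invented

shared = nx.DiGraph()  # a single knowledge layer shared by all agents

def publish(agent: str, subject: str, relation: str, obj: str) -> None:
    """An agent records a fact; provenance notes which agent added it."""
    shared.add_edge(subject, obj, relation=relation, added_by=agent)

def lookup(subject: str) -> list:
    """Any agent reads the same facts, each tagged with its contributor."""
    return [(obj, d["relation"], d["added_by"])
            for _, obj, d in shared.out_edges(subject, data=True)]

publish("logistics_agent", "port:LB", "CONGESTION_LEVEL", "high")
publish("intel_agent", "port:LB", "LOCATED_IN", "region:CA")
print(lookup("port:LB"))  # both facts are visible to every agent, no duplication
```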
Data2 knowledge graphs form a robust backbone for AI agents and agentic workflows. They enable higher levels of accuracy by grounding AI in factual, up-to-date data, while offering built-in capabilities for transparent and explainable decision-making. This approach is increasingly vital for federal agencies that need AI solutions to be both high-performing and responsible.
VIII. Policy Recommendations
The following recommendations can guide the development of a comprehensive AI Action Plan that addresses data fragmentation, security needs, and the high bar for explainability and accuracy:
1. Establish a Federal Intelligence Interoperability Program
a. Create a cross-agency working group to define common data ontologies for shared mission areas such as defense, public health, and financial oversight.
b. Encourage pilot implementations that demonstrate how a graph data model can improve AI accuracy, reduce integration costs, and provide transparency.
2. Strengthen Security and Trust for AI Deployments
a. Require AI systems in mission-critical contexts to provide clear provenance tracking and explainable reasoning.
b. Fund research on detecting adversarial data manipulation in graph-based environments.
c. Deploy zero-trust architectures to ensure that sensitive data is only accessible based on need-to-know, even within AI systems.
3. Modernize Procurement and Governance
a. Update acquisition rules to mandate that AI solutions demonstrate alignment with agency data standards, including the ability to integrate graph structures.
b. Create new, flexible contracting vehicles that enable ongoing pilot projects, iterative development, and faster adoption of proven solutions.
c. Ensure that AI projects undergo explainability and security evaluations as part of the procurement and renewal process.
4. Invest in Training and Workforce Development
a. Develop specialized training for government analysts, program managers, and executives to understand the fundamentals of graphs, explainable AI, and secure data practices.
b. Encourage professional certifications for AI project oversight.
c. Provide funding for small business set-asides to foster innovation in explainable AI solutions.
5. Target High-Impact Use Cases for Rapid Prototyping
a. Identify five to ten cross-agency challenges where improved data integration would have a transformative effect on operational efficiency or public service quality.
b. Sponsor proof-of-concept projects that harness our reView platform to combine multiple data sources, showcase near-99 percent accuracy in real scenarios, and demonstrate robust security.
IX. Conclusion
The federal government stands at a pivotal juncture in adopting advanced AI systems. Addressing data fragmentation, implementing robust security measures, and ensuring explainability will be essential for scaling AI to mission-critical contexts. By placing Data2's reView platform at the core of AI infrastructure, agencies can unite previously siloed data sources, enable transparent reasoning with complete traceability, and deliver the high-accuracy outcomes the American public expects and deserves.
We appreciate the opportunity to contribute to this important policy initiative. As a Service-Disabled Veteran-Owned Small Business developing sovereign American AI solutions, Data2 is uniquely positioned to collaborate with the federal government in designing and deploying secure, transparent AI systems that advance national priorities while achieving measurable gains in efficiency and effectiveness.