AI Critique of the paper "Intelligent Prospector v2.0"
I asked ChatGPT (4o) what it thought of the paper Intelligent Prospector v2.0 (https://arxiv.org/abs/2410.10610), and then asked whether it thought AI wrote the paper.
While this paper presents an intriguing application of AI to exploration drill planning under epistemic uncertainty, several fundamental issues must be addressed before such a system can be fully relied upon in real-world mineral exploration.
1. Overreliance on Prior Hypotheses—A Risky Assumption
The authors correctly acknowledge that geological hypotheses are often proven wrong after data acquisition. However, their approach still hinges on predefined hypotheses rather than embracing true geological uncertainty. Real-world deposits frequently defy traditional models, especially in structurally complex terrains like the Copperbelt. AI cannot "invent" new geological models—meaning any AI-driven system is only as good as the human-generated inputs. If these inputs are flawed, the AI may systematically lead drill programs astray rather than refining the correct interpretation.
2. The Challenge of Detecting the "Right" Hypothesis
The paper introduces a "null hypothesis" to detect when all human-generated geological models are incorrect. While this is an interesting concept, it assumes that AI can distinguish between real geological complexity and statistical noise. Geology is not random, but it is also not a simple probability function. Factors such as structural deformation, hydrothermal overprints, and complex lithological variations cannot be captured through simple entropy-based models. In real-world exploration, a deposit may exhibit characteristics of multiple models simultaneously, making rigid classification problematic.
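To show what an entropy-based hypothesis check actually computes, here is a minimal sketch of my own (invented numbers, not code or values from the paper): Bayesian updating over a few competing deposit models plus a deliberately vague null hypothesis, with the belief entropy reported at the end. The point is that a high entropy by itself cannot say whether the geology is genuinely ambiguous or the hypotheses are simply wrong.

```python
import numpy as np

# Illustrative sketch only (invented numbers, not the paper's model): Bayesian
# updating over competing deposit hypotheses plus a deliberately vague "null".

def gaussian_likelihood(observed, mean, std):
    """Likelihood of an observed grade under one hypothesis's prediction."""
    return np.exp(-0.5 * ((observed - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

hypotheses = {
    "stratiform":   {"mean": 2.5, "std": 0.4, "prior": 0.30},
    "structural":   {"mean": 1.2, "std": 0.4, "prior": 0.30},
    "supergene":    {"mean": 3.8, "std": 0.5, "prior": 0.30},
    "null (other)": {"mean": 2.0, "std": 3.0, "prior": 0.10},  # broad catch-all
}

observed_grade = 0.1  # % Cu; sits in the tail of all three sharp hypotheses

# Posterior is proportional to prior times likelihood.
posterior = {name: h["prior"] * gaussian_likelihood(observed_grade, h["mean"], h["std"])
             for name, h in hypotheses.items()}
total = sum(posterior.values())
posterior = {name: p / total for name, p in posterior.items()}

# High belief entropy could mean genuine geological ambiguity OR that every
# sharp hypothesis is simply wrong; the number alone cannot distinguish them.
entropy = -sum(p * np.log2(p) for p in posterior.values() if p > 0)

for name, p in posterior.items():
    print(f"{name:14s} {p:.3f}")
print(f"belief entropy: {entropy:.2f} bits")
```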
3. AI Optimisation vs. Geological Ground Truth
The study highlights the ability of AI to reduce drillholes by improving targeting efficiency. However, this assumes that AI "knows" the best location to drill based on incomplete data. In exploration, anomalies often require step-out drilling, cross-cutting structures must be tested, and fluid pathways rarely conform to neat Bayesian probabilities. The assumption that drilling can be optimally planned in a linear fashion may be problematic, particularly in complex depositional environments where unexpected structures or geochemical changes significantly impact mineralisation.
4. Grid-Based Drilling vs. AI-Optimised Drilling—A False Dichotomy
The paper contrasts AI-guided drilling with grid-based drilling, portraying the former as more efficient. However, experienced geologists rarely rely on rigid grid patterns in modern exploration. Instead, drill programs evolve dynamically based on ongoing results. The paper does not convincingly demonstrate that AI outperforms human-led, adaptive drill planning that incorporates real-time geological insights.
5. Economic Considerations—Real Exploration is Driven by Risk Mitigation
One of the most critical issues is the paper’s assumption that AI-driven drill planning is inherently more economical. Junior exploration companies operate under severe financial constraints, and a "fail fast" AI approach may not align with the need to generate compelling early-stage drill results to secure further funding. Moreover, AI models require extensive computational resources and training data—resources that many exploration companies lack.
I then asked: "Do you think the paper was written by AI?"
ChatGPT said:
There are several indicators that suggest this paper may have been AI-assisted or AI-generated, though it’s difficult to say with certainty. Here’s my analysis:
Indicators Suggesting AI Involvement
1. Highly Formal but Sometimes Awkward Wording
2. Repetitive Explanations of Concepts
3. AI-Typical Citation Behavior
4. Formulaic and Highly Structured Writing
5. Lack of Nuanced Geological Insight
Indicators Against AI Involvement
While there are strong signs of AI assistance, a few factors suggest human input:
Final Verdict
This paper was likely written with AI assistance—probably using a tool like GPT-4 or another large language model to generate drafts or sections of text. However, it is unlikely to be fully AI-generated, as the core research and methodology seem genuine.
Most plausible scenario: The authors conducted the research but used AI to help draft, structure, and refine the writing.
Would I trust it blindly? No.
I then asked ChatGPT o3-mini-high how it could fix the paper.
Enhancing the Intelligent Prospector v2.0 System for Exploration Drill Planning
The paper “Intelligent Prospector v2.0: Exploration Drill Planning Under Epistemic Model Uncertainty” presents an innovative approach by framing mineral exploration as a sequential decision‐making problem using a POMDP framework. The integration of Bayesian uncertainty quantification, Gaussian process regression, and Monte Carlo tree search (via a POMCPOW variant) is particularly compelling. However, as with any pioneering work, there are avenues to further refine and enhance the system. Here are some technical recommendations:
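Before the numbered recommendations, here is a toy sketch (my own illustration, not the authors' code) of the sequential decide-drill-update loop that this POMDP framing implies. The belief is a crude particle set over a 1D section, and the action selection is a greedy uncertainty-chasing stand-in rather than POMCPOW or Gaussian process regression.

```python
import numpy as np

# Assumed toy structure of the POMDP pieces described above: the hidden state is
# the true ore profile, an action is a drill location, the observation is the
# assay there, and the belief is a set of weighted subsurface realisations.

rng = np.random.default_rng(0)
GRID = 20  # 1D toy "section" for brevity
true_ore = np.clip(np.sin(np.linspace(0, 3, GRID)) + 0.1 * rng.standard_normal(GRID), 0, None)

# Belief: particle set of candidate ore profiles with weights.
particles = [np.clip(np.sin(np.linspace(0, 3, GRID) + rng.uniform(-1, 1))
                     + 0.3 * rng.standard_normal(GRID), 0, None) for _ in range(200)]
weights = np.ones(200) / 200

def observe(x):
    """Drill at cell x: noisy assay of the true ore grade."""
    return true_ore[x] + 0.1 * rng.standard_normal()

def update_belief(weights, particles, x, z, noise=0.2):
    """Bayes update of particle weights given assay z at location x."""
    lik = np.exp(-0.5 * ((np.array([p[x] for p in particles]) - z) / noise) ** 2)
    w = weights * lik
    return w / w.sum()

def pick_action(weights, particles):
    """Greedy stand-in for POMCPOW: drill where the belief is most uncertain."""
    stack = np.array(particles)
    mean = np.average(stack, axis=0, weights=weights)
    var = np.average((stack - mean) ** 2, axis=0, weights=weights)
    return int(np.argmax(var))

for step in range(5):
    x = pick_action(weights, particles)
    z = observe(x)
    weights = update_belief(weights, particles, x, z)
    print(f"step {step}: drilled cell {x}, assay {z:.2f}")
```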
1. Transition to Higher-Dimensional Models
The current demonstration utilises a synthetic 2D model. For real-world applicability, it would be beneficial to extend the framework to full 3D geological models. This upgrade would capture the complex spatial variability inherent in subsurface formations and provide a more realistic assessment of mineralisation patterns. Such an extension could also explore advanced meshing and discretisation techniques to manage increased computational complexity.
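One concrete ingredient of such a 3D extension would be an anisotropic covariance over (x, y, z) sample coordinates, since grade continuity along strike, down dip, and vertically usually differs. The sketch below is illustrative only; the ranges are invented, not taken from the paper.

```python
import numpy as np

# Hedged sketch: an anisotropic squared-exponential covariance over 3D sample
# coordinates, one building block of a 3D Gaussian process model. Ranges are
# illustrative placeholders (metres), not calibrated values.

def anisotropic_rbf(p, q, ranges=(400.0, 250.0, 60.0), sill=1.0):
    """Covariance between two (x, y, z) points with a different range per axis."""
    d2 = sum(((p[i] - q[i]) / ranges[i]) ** 2 for i in range(3))
    return sill * np.exp(-0.5 * d2)

samples = np.array([[100.0, 50.0, -20.0],
                    [150.0, 60.0, -25.0],
                    [900.0, 400.0, -200.0]])

K = np.array([[anisotropic_rbf(a, b) for b in samples] for a in samples])
print(np.round(K, 3))  # nearby samples covary strongly; distant ones barely at all
```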
2. Dynamic Hypothesis Generation and Revision
Currently, the system relies on a fixed set of human-generated hypotheses. Incorporating a mechanism for dynamic hypothesis generation—possibly via unsupervised clustering or reinforcement learning—would allow the model to propose and refine new geological hypotheses on the fly when initial assumptions are falsified. This adaptive capability could improve decision quality in cases where the true subsurface conditions diverge from the predefined models.
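As a rough illustration of how a trigger for dynamic hypothesis generation might look (my assumption, not the authors' method), incoming drillhole intervals could be clustered, and a compact cluster that no predefined model explains could be flagged for geologist review.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative sketch only: cluster drillhole intervals and flag a population
# that the predefined geological models do not explain. Feature choice is assumed.

rng = np.random.default_rng(1)
# Features per interval: [Cu grade %, depth m, alteration index] (synthetic).
expected = rng.normal([2.0, 150.0, 0.6], [0.3, 30.0, 0.1], size=(40, 3))
surprise = rng.normal([0.3, 420.0, 0.9], [0.1, 25.0, 0.05], size=(10, 3))  # unmodelled population
intervals = np.vstack([expected, surprise])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(intervals)
labels, counts = np.unique(km.labels_, return_counts=True)
print(dict(zip(labels.tolist(), counts.tolist())))
# A small, coherent cluster far from every prior model's predictions is a cue to
# propose (and have a geologist vet) a new hypothesis rather than force-fit old ones.
```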
3. Robust Hyperparameter Sensitivity and Computational Efficiency
The performance of the Monte Carlo planning component (e.g. progressive widening parameters, discount rates) is sensitive to hyperparameter choices. Conducting a detailed sensitivity analysis and exploring alternatives such as point-based value iteration (PBVI) or GPU-accelerated Monte Carlo simulations could both reduce variance in value estimates and enhance scalability. This would be particularly important for managing the combinatorial explosion inherent in POMDPs with high-dimensional continuous spaces.
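A sensitivity sweep of this kind is straightforward to script. The sketch below uses a stub in place of the real planner; the hyperparameter values and the stub's behaviour are invented purely to show the bookkeeping.

```python
import itertools
import numpy as np

# Hedged sketch of a sensitivity sweep: run the planner several times per
# hyperparameter combination and compare the spread of value estimates.
# `run_planner` is a stand-in stub, not the paper's solver.

rng = np.random.default_rng(2)

def run_planner(discount, widening_k):
    """Stub for one planning run; returns a noisy value estimate."""
    base = 10 * discount - 0.5 * abs(widening_k - 4.0)
    return base + rng.normal(0, 1.5)

results = {}
for discount, k in itertools.product([0.90, 0.95, 0.99], [2.0, 4.0, 8.0]):
    vals = [run_planner(discount, k) for _ in range(20)]
    results[(discount, k)] = (np.mean(vals), np.std(vals))

for (discount, k), (mu, sd) in sorted(results.items()):
    print(f"gamma={discount:.2f} k={k:.0f}  value={mu:6.2f} ± {sd:.2f}")
```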
4. Integration of Field Data and Validation
While synthetic cases are useful for initial validation, the system’s credibility would be greatly strengthened by applying the framework to real-world exploration data. Incorporating historical borehole data or real-time geophysical survey results would help calibrate the Bayesian priors and validate the model’s uncertainty quantification, thereby bridging the gap between simulation and operational practice.
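Calibrating priors against historical data can be as simple as a conjugate Bayesian update on a district-scale grade parameter. The numbers below are synthetic; the point is only to show how borehole assays would tighten an expert prior.

```python
import numpy as np

# Minimal sketch (assumed numbers): calibrating a Gaussian prior on mean Cu grade
# with historical borehole assays via a conjugate normal-normal update.

prior_mean, prior_var = 1.5, 0.5 ** 2          # expert prior on district mean grade (% Cu)
assays = np.array([0.9, 1.1, 0.8, 1.3, 1.0])   # historical intercepts (synthetic)
obs_var = 0.3 ** 2                             # assumed assay variance

n = len(assays)
post_var = 1.0 / (1.0 / prior_var + n / obs_var)
post_mean = post_var * (prior_mean / prior_var + assays.sum() / obs_var)

print(f"prior:     {prior_mean:.2f} ± {np.sqrt(prior_var):.2f}")
print(f"posterior: {post_mean:.2f} ± {np.sqrt(post_var):.2f}")
# The data pull the belief toward roughly 1.0 % Cu and shrink its uncertainty,
# which is the kind of calibration a real drillhole database would provide.
```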
5. Enhanced Interpretability and User Interface
For adoption in industry, the output of the decision process must be accessible to geoscientists and investors alike. Developing interactive visualisation tools that clearly represent the evolving belief state, uncertainty margins, and decision rationale would aid in translating the complex POMDP outputs into actionable insights. This improved transparency could foster greater trust and facilitate interdisciplinary collaboration.
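Even a basic section plot of the belief mean with an uncertainty band and the holes drilled so far goes some way towards that transparency. A minimal matplotlib sketch with synthetic data (my own illustration, not the paper's tooling):

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative sketch: plot the belief's mean grade along a section with an
# uncertainty band and the holes drilled so far. All data are synthetic.

x = np.linspace(0, 1000, 50)                       # distance along section (m)
mean = 1.5 + np.sin(x / 200.0)                     # belief mean grade (% Cu)
std = 0.8 - 0.6 * np.exp(-((x - 400) / 150) ** 2)  # tighter near drilled ground
drilled = [350, 400, 450]

plt.plot(x, mean, label="belief mean grade")
plt.fill_between(x, mean - 2 * std, mean + 2 * std, alpha=0.3, label="±2σ")
plt.scatter(drilled, np.interp(drilled, x, mean), marker="v", color="k", label="drillholes")
plt.xlabel("distance along section (m)")
plt.ylabel("Cu grade (%)")
plt.legend()
plt.savefig("belief_section.png")  # or plt.show() in an interactive session
```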
6. Refined Economic Modelling
The reward function in the current model is relatively simplistic. Integrating a more sophisticated economic model that accounts for factors such as extraction cost variability, environmental impacts, and the risk of false negatives would provide a more comprehensive cost-benefit analysis. Such refinements could lead to more balanced decisions that not only maximise expected profit but also mitigate operational risks.
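To make this concrete, a richer reward function might weigh cost uncertainty, rehabilitation liabilities, and the opportunity cost of walking away from a real deposit. The sketch below is a hypothetical stand-in with invented coefficients, not the paper's reward model.

```python
# Hedged sketch of a richer reward than "revenue minus drilling cost".
# All coefficients are illustrative assumptions, not values from the paper.

def expected_reward(p_ore, tonnes, grade, price_per_t_cu,
                    extract_cost_per_t, extract_cost_sd,
                    rehab_cost, false_negative_penalty, decide_to_mine):
    """Expected value of a mine / walk-away decision under the current belief."""
    if decide_to_mine:
        revenue = tonnes * grade / 100.0 * price_per_t_cu
        # Penalise cost uncertainty, not just its mean (simple risk adjustment).
        cost = tonnes * (extract_cost_per_t + 0.5 * extract_cost_sd) + rehab_cost
        return p_ore * (revenue - cost) + (1 - p_ore) * (-cost)
    # Walking away from a real deposit carries an opportunity-cost penalty.
    return -p_ore * false_negative_penalty

print(expected_reward(p_ore=0.6, tonnes=5e6, grade=1.2, price_per_t_cu=9000,
                      extract_cost_per_t=45, extract_cost_sd=10,
                      rehab_cost=2e7, false_negative_penalty=5e7,
                      decide_to_mine=True))
```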
Conclusion
By extending the dimensionality of the model, incorporating adaptive hypothesis generation, enhancing computational robustness, integrating real field data, and improving interpretability, the Intelligent Prospector v2.0 system could become even more robust and industry-ready. These modifications would not only refine the theoretical framework but also ensure that the system addresses the practical challenges faced in modern mineral exploration.
Sr. Product Manager at Enverus
So you had AI write a paper, about an AI writing a paper, about AI?
Director and Principal Geotechnical Engineer at Bastion Geotechnical Pty Ltd
I'd love to hear your thoughts, Warwick, on the search term "AI finds new protein" and the plethora of links on AI inventing new proteins, in the context of your quote "AI cannot 'invent' new geological models". Don't get me wrong, I agree with you; it's not that I'm challenging you. And I agree that an AI system is only as good as the training material it's fed. I have serious concerns myself about the products coming out of these systems representing reality or established truths. Does the problem lie, then, in the imputation of assumptions into the unknown gap areas? The areas between assay points, chip samples, mapped structures?
Business development, DD-light, owner support. Call 0722 160730
We will see a lot more of that. Already, articles in newspapers are written by AI (sometimes marked), and they are spreading more or less rubbish. AI doesn't have a critical mind, nor a logical, analytical curiosity.
Specialist in deposit-scale structural geological analysis | Conceptual founder of Leapfrog software | Analysed drilling data from over 600 mineral deposits | Founder of StructuralGeology.com
Warwick, I've downloaded the paper and quickly scanned it. There are two major problems I have with the paper: 1) it completely assumes the currently accepted geological interpretations for the Copperbelt are valid (I don't believe they are), and the priors are all framed around these ideas; 2) they deal with 2D data, when the data really should be treated as a 3D dataset.
Exploration Geologist
Warwick Anderson, AI apparently suffers from "hallucinations" from time to time whilst it's trawling the internet looking for insight. Maybe it needs downtime occasionally from overuse of "tokens", and to lay off the magic mushrooms (easy on the diatremes..). "In artificial intelligence (AI), tokens are the smallest units of text that an AI model can process. They are the building blocks of language for AI systems. Tokenization is the process of breaking down text into tokens, and is the first step in training an AI model."