Parker Challenge - the better news
To say that some people were surprised at the range of estimation and classification results from this year’s Parker Challenge would be an understatement. The headline outcomes were startling. That is not too unexpected in such an uncontrolled environment. The competition rules allowed anyone and everyone to enter. There was no stipulation of competency, no limitation on experience, expertise or even profession. It was a free-for-all.
There’s no doubt that some of the noise was created by people who, under more normal circumstances, would not qualify to report a mineral resource estimate. We do need to be careful, however, as there are certain business models that work exactly like that. Some high(ish) profile, experienced person who can nominally call themselves a Competent Person wins a consulting job, or works on a site somewhere. The actual estimation work, and possibly the classification, is largely outsourced to junior, less experienced people with minimal supervision. The ‘Competent Person’ then ‘signs off’. With the right controls, checks and balances in place, that might work. But all too often the day-to-day pressure of more work gets in the way…
But what about people who are more experienced? Those of us who have spent a few years fighting geology, software and statistics to a standstill to create models that can be relied on (or so we hope). How does experience change the outcome? We would expect less variation, right? As Jacqui Coombes discusses in her 2013 PhD thesis, both length and breadth of experience matter (the 15-2-5 concept).
I thought I’d check the results and see if there was anything interesting. Again, please remember the limitations of this work: the relatively low number of entries, the nature of this as a competition, and the volunteer work behind the scenes. The results are interesting and worth further investigation and discussion, but I make no claim that they are definitive or representative.
Here’s what I looked at today. I took the winning entry; a second anonymous model that did not make the final due to the level of documentation and a few other matters; and three models estimated by the judges, including one by yours truly. These five models were all estimated by people (or teams) where the lead person exceeds the 15-2-5 ideal. That is, they have more than 10 years industry experience including 5 directly involved with resource estimation. They have all generated more than 15 estimates for more than 2 different commodities, and they’ve all been involved in reconciling operational performance against those estimates (yes, I know… I don’t believe in reconciliation!)
Of the five estimates, three were by people with more than 30 years in the industry and the other two were by people with more than 15.
Interestingly, the amount of effort (measured as time) involved was widely variable, from my own very quick model (copper only, done in 3 hours) to the winner, whose team worked over weeks.
The following graphic shows a breakdown of the results. The first thing you will notice is the relatively tight grouping of the grade-tonnage curves. I’ve presented these here without any consideration of the resource classification, and I’ve limited the volume to the 0.1% Cu iso-surface that was used to assist us during the judging process. At a zero cut-off the range of estimates is -6% to +8% compared to the average of all five models. At a grade closer to the possible economic cut-off (0.5%) the range is -7% to +9%. Still quite large, but lying within the naive expectation of a +/-10% outcome.
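For readers who want to reproduce this kind of comparison, the spread is easy to compute once each entry is reduced to block tonnes and grades. The sketch below uses made-up block models (these are NOT the actual challenge results) to show one way of deriving the percentage range of tonnage about the mean of several models at a given cut-off:

```python
import numpy as np

# Five hypothetical block models: tonnes and Cu grade per block.
# All figures are invented for illustration only.
rng = np.random.default_rng(7)
models = [
    {"tonnes": rng.uniform(4000, 6000, 500),
     "grade": rng.lognormal(mean=-0.9, sigma=0.5, size=500)}
    for _ in range(5)
]

def grade_tonnage(tonnes, grade, cutoff):
    """Total tonnes and mean grade above a cut-off (tonnage-weighted)."""
    mask = grade >= cutoff
    t = tonnes[mask].sum()
    g = (tonnes[mask] * grade[mask]).sum() / t if t > 0 else 0.0
    return t, g

for cutoff in (0.0, 0.5):
    tonnages = [grade_tonnage(m["tonnes"], m["grade"], cutoff)[0] for m in models]
    mean_t = np.mean(tonnages)
    spread = [(t - mean_t) / mean_t * 100 for t in tonnages]
    print(f"cut-off {cutoff}% Cu: tonnage spread "
          f"{min(spread):+.1f}% to {max(spread):+.1f}% about the mean")
```

The same loop over a grid of cut-offs gives the full grade-tonnage curves for plotting.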
That’s good news I think! A nice little set of estimates demonstrating that given the same data, a group of experienced people reach a reasonable consensus on the tonnes and grade of the mineralisation. There’s still a problem though. Look at the slices through the different models. There’s quite a bit of variability in the grade geometry. While the higher-grade zones share a common location on the eastern side of the mineralised zone, these estimates all have slightly different connectivity and anisotropy. In other words, the local estimation precision is low and there’s a lot of uncertainty about the way the grade is spatially distributed.
Why is that a problem? It is the precision of this local grade distribution that we rely upon for just about any mine design and scheduling approach. Thus, the ore reserves for these estimates would all start from a slightly different position. That may impact the design, the mining sequence and even the mining method. Those decisions have flow-on and iterative impacts on mining equipment selection, fleet size, stockpile strategy, and ultimately the design of the ore treatment plant. You may need different blending strategies or different surge capacities depending on the estimated production variability… A variability that is almost certain to be different from the reality when operations begin.
So we might be close on global tonnes and grade across a range of cut-offs, but that’s only the beginning of the way estimation noise can impact our industry.
I’d like to talk about my quick and dirty estimate a bit. As I mentioned above, it took me about 3 hours from start to finish. If you were at the conference you will know there was an entry that took 6 hours, and I thought it was close to the winner in quality. How can such quick models perform so well?
In my case there are a few factors. Number one is experience. The years of working in this space mean I have developed skills and a bespoke set of tools that let me tackle this sort of thing. It helps that I’ve worked in porphyry copper systems from Cadia to Escondida and a number of others. That hands-on knowledge helps with reality checking and knowing what is and is not reasonable.
The second factor is my approach to developing an estimation domain. That age-old question of ‘what is a domain’ strikes again. Given that I was short on time, I threw away the purist rule book. No agonising over geology (!), no detailed investigations of stationarity or other considerations. Instead I created long composites from the assay data (20m) to smooth out any underlying data issues and variability, and built a simple grade shell at a grade threshold that looked suitable (there’s that experience kicking in again). I tweaked that volume a bit, trimming off stuff that looked unlikely and smoothing around the edges. All those decisions were made on my understanding of porphyry copper grade distribution which, compared to something like narrow-vein gold, is quite benign. I mean, the coefficient of variation was less than 1. That’s a rare luxury in this line of work.
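For anyone unfamiliar with compositing, the step described above amounts to length-weighted averaging of assays into fixed 20m downhole intervals, followed by a quick coefficient-of-variation check. A minimal sketch, with invented column names and toy data rather than the actual challenge drill holes:

```python
import numpy as np
import pandas as pd

# Toy assay table; the column names and values are assumptions for illustration.
assays = pd.DataFrame({
    "hole_id": ["DH001"] * 10,
    "from_m":  np.arange(0, 20, 2.0),
    "to_m":    np.arange(2, 22, 2.0),
    "cu_pct":  [0.3, 0.5, 0.8, 0.6, 1.1, 0.9, 0.4, 0.7, 1.0, 0.5],
})

def composite(df, length=20.0):
    """Length-weighted downhole composites on fixed intervals."""
    df = df.assign(
        length=df.to_m - df.from_m,
        bin=(df.from_m // length).astype(int),      # which composite interval
        metal=df.cu_pct * (df.to_m - df.from_m),    # grade x length accumulator
    )
    g = df.groupby(["hole_id", "bin"], as_index=False)[["metal", "length"]].sum()
    g["cu_pct"] = g.metal / g.length
    return g[["hole_id", "bin", "cu_pct"]]

comps = composite(assays, 20.0)
cv = assays.cu_pct.std() / assays.cu_pct.mean()  # coefficient of variation
print(comps)
print(f"assay CV = {cv:.2f}")                    # well under 1 for this toy data
```

Long composites like this deliberately trade local resolution for stability, which is exactly the smoothing effect described above.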
I used that quick volume to flag the data. Again, taking a shortcut, I expanded the sample selection by 20m around the 3D solid to allow adjacent samples to inform the internal grade estimate. A soft-boundary approach - more smoothing.
For the variogram… I took another short-cut. No experimental variograms. No variogram fan, no investigation into anisotropy. None of the stuff I tell everyone else to do. Instead I took an informed guess… I looked at the geometric anisotropy of the 3D solid I created and set the range of the variogram to 75% of the size of the domain. I set the nugget to 10% and assigned 60% of the sill to 40% of the total range.?
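That guess can be written down as a concrete model. The sketch below encodes the rules described above (10% nugget, 60% of the sill reached at 40% of the total range, ranges set to 75% of the domain size) as a two-structure spherical variogram. The domain dimensions are hypothetical placeholders, not the actual solid:

```python
# Hypothetical domain extents in metres (x, y, z) - NOT the actual solid.
domain_dims = (800.0, 500.0, 400.0)

total_ranges = tuple(0.75 * d for d in domain_dims)   # 75% of the domain size
variogram = {
    "nugget": 0.10,                                   # 10% of the total sill
    "structures": [
        # 60% of the sill reached at 40% of the total range
        {"sill": 0.60, "ranges": tuple(0.40 * r for r in total_ranges)},
        # remaining 30% of the sill at the full range
        {"sill": 0.30, "ranges": total_ranges},
    ],
}

def spherical(h, a):
    """Spherical variogram structure, normalised to a sill of 1."""
    if h >= a:
        return 1.0
    r = h / a
    return 1.5 * r - 0.5 * r ** 3

def gamma(h_major):
    """Total variogram value along the major axis for the guessed model.
    (Strictly gamma(0) = 0; this sketch returns the nugget at h = 0.)"""
    g = variogram["nugget"]
    for s in variogram["structures"]:
        g += s["sill"] * spherical(h_major, s["ranges"][0])
    return g

print(gamma(0.0))                        # 0.1 (the nugget)
print(round(gamma(total_ranges[0]), 3))  # 1.0 (the full sill at the full range)
```

Nugget plus structure sills sum to 1, so the model is expressed as a proportion of the total sill - multiply through by the data variance for an absolute-sill version.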
Yes, I guessed the variogram. Experience comes into play yet again. You see, I have a virtual database of variogram models sitting in my head from over 30 years of estimations, reviews and due diligence. I joke that if you tell me your deposit type and the coefficient of variation, I can approximate the variogram model. Ouch.
It was the same for the search neighbourhood. Just a guess. I thought about the grade continuity I was looking for in the final model, went to the look-up tables in my head, and pulled out a min/max number of samples and a number per drill hole.
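For illustration, a guessed neighbourhood of that kind might be written down like this. Every number here is an invented placeholder, not a value from my actual entry:

```python
import math

# Hypothetical search neighbourhood, in the spirit of the "look-up tables in
# my head" described above. None of these figures are from the real entry.
search = {
    "ranges_m": (450.0, 300.0, 240.0),  # e.g. roughly the variogram ranges
    "min_samples": 8,
    "max_samples": 24,
    "max_per_hole": 4,                  # forces data from several drill holes
}

def enough_data(n_samples, n_holes, s=search):
    """Would a block pass the neighbourhood test? A deliberately crude check."""
    min_holes = math.ceil(s["min_samples"] / s["max_per_hole"])
    return n_samples >= s["min_samples"] and n_holes >= min_holes

print(enough_data(10, 3))  # True: 10 samples spread over 3 holes
print(enough_data(6, 2))   # False: too few samples
```

Limiting samples per hole is a common way to stop a single closely-sampled hole dominating a block estimate; the trade-off is more smoothing from distant holes.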
In this discipline experience is a central ingredient. It makes it much easier to make decisions and understand consequences. It makes it possible to see the end-to-end process from data to estimate and have a good idea of what works and what does not.
And that’s another problem.
We need some sort of industry-supported and profession-supported development program and I think we need it urgently. We need to help ourselves make the most of the time spent when we are making estimates, maximising the learning and skills development. Resource estimation is a hands-on field and it has a wide purview. You need knowledge of geology, statistics, geostatistics, mining operations, mine planning, design and scheduling, mineral processing, reconciliation, and increasingly machine learning. You need knowledge of ESG matters. You need to know about economics, market factors, and a smattering of organisational design. You need to span the space from discipline specialist to industry generalist.
How do we ensure we have the people, knowledge and skills the future world needs? To me it needs a major change. A shake-up. We need to look at our current practices and discard those that do not work while emphasising those that do. I’m still contemplating this space.
Back to my 3 hour estimate.
At the conference someone asked, tongue in cheek I’m sure, if the judges on the speaking panel thought they would have been able to win the competition. There were a few laughs and some general banter but no one was willing to say yes or no. Given the years of experience each and every panellist, each and every judge, brought to the competition, I’d say yes! The judges could have submitted winning entries.
But not mine! Why? Reporting and documentation. 3 hours from start to finish means shortcuts galore. I was running things on the fly, in my head, and not even contemplating being able to replicate the result. I’d disqualify my entry on that basis alone…
You may have noticed that I haven’t mentioned classification… All the evaluations above ignore the resource classification. The next graphic shows the same slice through 4 of the 5 models with the classification. There are a lot of similarities in the first three. The shapes for the highest confidence classification are quite similar, but that is where it ends. Each of the entries has a different perception of the resource risk. My entry is all classified as Inferred. Why? It’s a function of my quick estimate. No way would I allow anyone to try and convert that to an Ore Reserve! I’m happy enough with the global outcome but don’t push it… The fourth image is the 6 hour estimate. Here it looks like the entrant has taken a similar approach with a lot of Inferred, but they have generously allowed some Indicated where there is concentrated drilling and even some Measured centred on a few small holes. The central two models take a middle road.
So even though the global tonnes and grade in an identical volume are within +/-10%, once the estimate is classified using industry-standard terminology the outcomes can be (and often are) wildly different. Hmmm… Maybe that industry-standard terminology needs a bit of a shake-up too…
That’s it for now.
Key takeaways:
More later!