Parker Challenge (iv) - some of the stuff I left out
Restless flycatcher (or resource estimator)

The Parker Challenge keynote presentation was tough, but probably not for the reasons you may think. For me the biggest hurdle was deciding what to leave out. There was so much that could be said! As it was, my first draft included more than 80 PowerPoint slides… I trimmed that to about 70, then 60, and finally 50. Still too many slides and not enough time to discuss everything. Even now, there are more ideas and concepts we can draw from the Challenge, and the analysis continues, albeit at a fairly slow pace.

I thought you may like to read about some of the concepts I left out of the conference presentation. So let’s talk about human factors, noise, variability and risk in mineral resource estimation.

The aim of the Parker Challenge was to take a first step towards characterising a type of variation in resource estimation that is rarely discussed. We know, almost intuitively, that different people and different teams will generate different resource estimates. I mean… it’s fairly common knowledge in parts of the industry that if you want a ‘generous’ estimate with a high proportion of ‘Indicated’ as opposed to ‘Inferred’ resource, there are certain people all too willing to help. It’s also pretty easy to see how someone working internally can get a bit carried away, either through enthusiasm or through pressure from others.

What we don’t really understand is just how different those cases are. We have not studied and published the ‘person-to-person’ variability.

But that is only one factor we need to consider. It’s one component of the noise in the system. Where else might we find systemic variability that we have never openly investigated? It’s pervasive. We only need to look with the right mindset to see it.

Here is one of my favourites. Kahneman et al. (Noise: A Flaw in Human Judgment) call it ‘pattern noise’. It’s the variance that occurs when the same person making the same judgement (estimate, in our case) exhibits some pattern of bias without knowing it. Say, for example, that when I estimate gold I am always more conservative in my decisions than when I estimate copper. That could be a stable type of difference, or it might be a short-term thing caused by some environmental nudge (like recent news of a failed gold operation).

Then there is ‘occasion noise’. Does your mood affect the estimation decisions you make? What about the order you make those decisions in, or the order data is presented? Or maybe you are affected by the number of days you’ve worked, or the hour of the day. Do you make your decisions with more confidence in the morning and less at the end of the day?

Occasion noise… is the estimate a ‘Monday marvel’ or a ‘Friday failure’?

This all may sound very esoteric; surely it doesn’t matter if I’m in a bad mood when I’m investigating domains or determining my variogram model? Maybe, maybe not. We don’t know. Worse, the answer will be different from case to case, deposit to deposit and estimate to estimate. Some decisions are more important than others in different types of mineralisation and commodities.

But we don’t talk about these factors, much less have an understanding of their degree of influence.

There is one type of noise that I know affects estimation results, and other things for that matter. It’s culture. You see this type of influence when a project is ‘too good to fail’. Or when everyone is under pressure and nothing seems to be going right. Or when the boss has a gung-ho approach or when high risk is tacitly accepted as normal. These cultural tones influence us and pervade our practices and results.

Like the group-think that often sidetracks a feasibility study, causing the team to find a ‘viable’ solution no matter how unlikely it may be (instead of saying the project is not viable).

And there’s yet another type of noise… when someone uses the same ‘workflow’ and algorithm on every occasion, regardless of the nature of the mineralisation or the proposed mining approach. You know… always MIK, or always OK, or always ID^2.1254987. Dogmatic adherence to a single methodology, through ignorance, expedience or ill-informed belief. Often it’s because they have a system that will churn out a set of numbers fast with little intervention.

Let’s face it. As humans working in human-centric systems with sparse data and high uncertainty, we will always have variance and noise in our efforts - at least until we have machines that can do better :)

All of these noisy concepts are ideas for future Parker Challenge investigations. To me, they are the forgotten cousin of resource risk. We agonise over small parametric optimisations designed to improve the estimate by less than 5%, and yet… we fail to consider the way our own nature affects those same estimates, a factor that may well exceed 5%.

And then we amplify both the signal and the noise by applying a subjective resource classification… a very blunt tool for a very complex problem….

Why do I think this all matters? Let’s try a thought experiment. Choose one of the two following options.

  1. Your share of the outcomes of 1,000,000 people each betting $100.00 on a coin toss.
  2. Your share of 1 person betting $1,000,000 on 100 consecutive coin tosses.

Are the outcomes the same? No. There is something fundamentally different.

In the first case we can use pretty basic probability theory to work out the expected value. Each participant has a 50:50 chance of winning or losing. Your share of the pool is relatively unaffected by the individual wins and losses.

What about the second case? A naive view would say there are the same number of coin tosses, each with the same probability distribution - right? The difference is in the consequence of the outcomes. A single tail when the bet was on heads, and the entire prize pool is gone. It’s a ‘stop’ function. One outcome leads to a complete dead end.
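
The asymmetry between the two cases is easy to demonstrate with a quick simulation. The sketch below is illustrative only; the function names and the choice of modelling ruin as a zero payout are my own assumptions, not part of the Challenge.

```python
import random

random.seed(42)

def ensemble_bet(n_people=1_000_000, stake=100.0):
    """Case 1: many independent bettors, one toss each.
    Individual wins and losses largely cancel across the pool,
    so the average outcome per person sits near zero."""
    pool = sum(stake if random.random() < 0.5 else -stake
               for _ in range(n_people))
    return pool / n_people

def sequential_bet(stake=1_000_000.0, n_tosses=100):
    """Case 2: one bettor, 100 consecutive tosses, calling heads
    every time. A single tail is an absorbing 'stop' function:
    the prize pool is gone and no later toss can recover it."""
    for _ in range(n_tosses):
        if random.random() < 0.5:  # tails
            return 0.0             # ruined; a complete dead end
    return stake                   # survive all tosses: p = 0.5**100

print(ensemble_bet(100_000))  # close to zero
print(sequential_bet())       # almost surely 0.0
```

The same number of 50:50 tosses, but one path has an absorbing state: the expected values may look similar on paper, while the realised outcomes could not be more different.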

There are some decisions that are ‘stop’ functions in the field of resource estimation. Not identifying them, or worse, thinking they are like the first coin toss, where one result doesn’t have a large impact on the global result, will lead to major disaster.

I most often see these stop functions in four aspects of resource estimation:

  1. In the base geological data. This is mostly when there is so much noise or variation in the data that attempting to draw any conclusion is simply flawed. It may be mixing different generations of data or sample types. More often it’s working with badly characterised geological descriptions - descriptions that are biased by our past experience and expertise. It may also be the old ‘fooled by randomness’ or ‘seeing patterns where none exist’ problem.
  2. In the interpretation and domaining stages. There is a high frequency of stop functions at this point. They range from the development of 3D spatial solids that are internally inconsistent through to over-fitted solutions from sparse data. I have a plethora of examples where the domain curtails any possibility of a halfway acceptable estimation outcome. From lodes where the strike is more than 30 degrees from the evidence of the rocks, through to people throwing away data because it ‘doesn’t fit our expectations’.
  3. In the choice of estimation approach. Building estimation models that are technically correct but practically useless. More on this in a separate article, I think, but for now, imagine building an MIK model using 14 indicator thresholds where 10 of those thresholds are below any expected economic cut-off… yes, I’ve seen it done.
  4. Lastly, in the classification stage. Like the domaining decision, classification is replete with stop functions. I’d argue it’s probably more affected by this type of problem than any other aspect of resource estimation. Why? Two key things:

  • Classification is an expert decision driven by very loose guidelines where there is rarely any feedback; and
  • The very system itself is almost inherently designed to cause major failures. As a tool for communicating risk, our classification systems are woefully inadequate for the demands of the 21st century.

The concept of a single decision (or choice) which results in a complete failure is critical when it comes to understanding resource estimation. It’s also the reason the Parker Challenge version 1 was an end-to-end competition. Can you see why? If we want to understand how different an estimate can be, given a single set of data, we must look across the entire system.

Those stop functions mean the errors are not additive. The variance is not additive. You cannot separate a single aspect, say classification, from the ensemble whole without biasing the result.
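
The non-additivity is easy to show numerically. The sketch below is a toy model, not Challenge data: the 10% and 5% stage noise levels, the 2% ‘fatal domaining’ probability and the choice of zero as the failure outcome are all my own illustrative assumptions.

```python
import random
import statistics

random.seed(1)
N = 20_000

# Two stages studied in isolation: a 10% (1-sigma) relative error
# from domaining and 5% from grade estimation, on a nominal 100 units.
domain_only = [100.0 * (1 + random.gauss(0.0, 0.10)) for _ in range(N)]
grade_only  = [100.0 * (1 + random.gauss(0.0, 0.05)) for _ in range(N)]

def end_to_end():
    """The full workflow: stage errors compound multiplicatively,
    and with small probability the domaining choice is a 'stop'
    function -- here crudely modelled as a worthless (zero) estimate."""
    if random.random() < 0.02:  # a fatal domaining decision
        return 0.0
    return 100.0 * (1 + random.gauss(0.0, 0.10)) \
                 * (1 + random.gauss(0.0, 0.05))

full = [end_to_end() for _ in range(N)]

var_components = (statistics.pvariance(domain_only)
                  + statistics.pvariance(grade_only))
var_full = statistics.pvariance(full)
print(var_components)  # roughly 125
print(var_full)        # well above the component sum
```

Even in this crude model, the end-to-end variance is far larger than the sum of the component variances, because the stages interact and the stop function adds a failure mode that no component, studied alone, would reveal.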

Nonetheless, there can be value in looking at component variability. It’s likely that future challenges will try to do both: examine the individual components and the whole.

And that will be a challenge of a different type.

Tsogkhuu Iderzana

Resource Geologist

Great to be here following Parker Challenge news. Is there any possibility to find the slides?

Richard Sulway

Group Principal Geologist, Mine and Resources

The issue I have is yes, the data is there, but for myself, and I suspect countless others who have done consulting in their past/current lives and done lots of “check estimate” jobs, including yourself, what were the big drivers of the differences? The results for me, looking back at the old days, were mostly 1. wrong (significant mistakes) or 2. different but close, in the range of not material. Most were close. Aside from extreme examples, e.g. coarse gold, the issue for me was largely a case of whether the people doing the job had the experience, knowledge and peer review - sanity checks. What is ‘close’ is another debate, I accept that. Parker for me was more about experience rather than natural variation. Maybe that’s just my biases, roll on the discussion.

Andre Wulfse

Mineral Resource Geologist

Thanks for posting Scott - it would be nice to see a quantitative breakdown of the variance by domain volume, prior to the black art of interpolation and extrapolation of grades. How much of a role did the various geological modelling techniques (and software?) play?

Wes Roe

Exploring how explorers explore.

The Parker Challenge’s bias (multiple biases!) discussion here is another fascinating layer of debate affecting resource estimation, on top of what was in the 50 original slides. Keep it coming Scott, it’s the study that keeps on giving.

Well written Scott, lots of good examples. Thanks for sharing.
