Bogsat Competency
I started writing this post a couple of weeks ago, looking to integrate some of the ideas and concepts I’ve come across during some recent reading around consciousness, thinking, complexity and artificial intelligence. As I wrote, the story grew. And grew. And grew.
Ridiculous, Scott! This is simply too long. No one will read the whole thing (especially without pictures). So, I’m going to split these ideas into a series of posts. Conceptually I’m still rolling these ideas around in my head - maybe some comments and commentary from everyone will help solidify some of the more nebulous aspects.
Let me start by stating that this series is about ‘competency’, initially framed by the ideals raised (but not well defined) in the JORC Code. I suspect, however, that these thoughts extend well beyond resource and reserve estimation and reporting - anywhere we expect ‘competent’ performance, perhaps.
It is a concept that seems relevant for the times. We live in a high-stakes era where a single simple error can have dire consequences, and as we rely more and more on automation that exposure only grows. Our world is more intricately intertwined than ever before.
Here’s the challenge as I see things. How can we base a code of practice around ‘competency’ if we do not define ‘competency’ first? I’ve always been a fan of taking time to frame the question before trying to find a solution. So, exactly what -is- ‘competency’? Oh, you can look up the definition in any number of dictionaries. You’ll get something along the lines of ‘suitably skilled, knowledgeable and experienced to undertake some task or purpose’. In some cases the definition is Gödelian, looping back on itself as in ‘the quality of being competent’.
But none of those definitions -really- tell me much. I mean, what is ‘suitably’ in this context? It’s a judgement and I might have vastly different standards and expectations compared to yours. Exactly what skills, knowledge and experience are we talking about? And why is it -those- skills, knowledge and experience? Are they the best ones? How do we know?
Let’s face it, the idealised ‘competency’ model in the JORC Code is little more than a definition using the bogsat method. (Bunch Of Gits Sitting Around a Table). As for the ‘5 years relevant experience’… where did that number come from? What makes 5 so special? Why not 6? Or 5 1/2? Does something magic happen for everyone at that 5 year mark? What about our less fortunate fellows who seem to work the same ‘year’ again and again, never learning and improving?
No, you can pretty much guarantee that both ‘relevant experience’ and ‘5 years’ come from the “it seemed like a good idea at the time” school of thought. And much as I am a fan of heuristics I also have a healthy suspicion about using them and never going back to check if they are working.
To make matters worse, the JORC Code has changed little of its wording around ‘competency’ in more than 40 years. I’d argue someone who could validly claim they were ‘competent’ in 1989 would be looked at askance if they claimed ‘competency’ in 2022. The world has moved on, as have our thoughts around ‘best practice’ resource estimation. And technology is rapidly challenging any traditional idea of ‘competency’.
What to do? What to do?
Well, we can continue with the bogsat approach with a new bunch and a new table… try another heuristic and see what happens. I’m not sure that’s a great idea in 2022 when we are all supposed to be so ‘modern’, so ‘rational’, so ‘data-driven’. Or, and here’s a wild idea, we could talk to some people who have the ‘relevant experience’ to help us define ‘competency’.
But who -are- those people? And why should we trust them?
These are the thoughts spinning in my poor ageing head. This is not a new problem, surely not! So I started looking around. Now, the science nerd in me was immediately suspicious of sociological approaches to understanding competency. Too much speculation and not enough h-a-r-d d-a-t-a. That probably reflects some deep-seated prejudice that I need to haul out into the light one day! But… who else needs to define ‘competence’ in a very rigorous way?
And then it dawned… the idea of ‘competence’ must be heavily entwined with the field of artificial intelligence! I mean, who wants an incompetent self-driving car to take them on their next road trip? Surely the AI and computer science mob has looked at this idea! How else could they understand if they are winning or losing in the quest to usurp humans from the world? Of course… in the quest for AI inspired insight I was led straight back to the sociology and psychology arena…
As it happens I’ve been following a narrative thread for the last 12+ months and that thread has taken me right to the pointy end of artificial general intelligence (AGI). There’s been some really frightening and equally inspiring work happening in that field. Frightening because it is all too easy to see how things could go really, really wrong at the individual and societal scale. Inspiring because there are some bright people looking this challenge squarely in the eye and trying to prevent a virtual Terminator world. Hmm…
Guess what? In the course of their research they need to think about ‘competence’. Oh, they don’t call it that, but conceptually there is a lot of similarity. How well an artificial agent performs some task is a critical success factor. How it gets there is just as important in many cases. What’s more, many of the methods involved suffer from problems your average resource and reserve specialist is familiar with - like sparse events (or signal)… and bad (or misleading) data… or the problems that arise when novices imitate experts without understanding why the expert took a certain approach, leaving them more vulnerable to mistakes and error.
Let’s head down the rabbit hole and see what we can discover. It’s a fascinating trip. I’m certainly no expert guide but I’ll do my best and try to highlight the analogies as we go…
In 1958, New Scientist published an article “Machines Which Learn”. One of the phrases in that story stood out for me - “when machines are required to perform complex tasks it would often be useful to incorporate devices whose precise mode of operation is not specified initially, but which learn from experience how to do what is required. It would then be possible to produce machines to do jobs which have not been fully analysed because of their complexity.”
Take that paragraph and replace ‘machine’ with ‘people’ and you can see the parallels. We have a complex objective and the road from here to there is unspecified. How we achieve competency and how we create efficacious estimates are questions in the same domain.
From those 1950s beginnings of ‘perceptrons’ and the idea of ‘machine learning’, the AI world exploded - well, in fits and starts, with numerous dead ends and even more numerous bad outcomes. It’s these dead ends and poor results I find most informative. Not so much in themselves, but more because of the lessons they hold for the idea of ‘competency’ and modelling in general.
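To make that 1958 phrase concrete - a machine “whose precise mode of operation is not specified initially, but which learns from experience” - here is a minimal sketch of a perceptron. It is my own illustrative example, not from the New Scientist article: nobody writes the rule for logical AND into the code; the weights are nudged from examples until the behaviour emerges.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a single perceptron on (inputs, label) pairs with 0/1 labels.
    The 'mode of operation' (the weights) is not specified up front -
    it is adjusted from experience, one error at a time."""
    n = len(samples[0][0])
    w = [0.0] * n   # weights start with no knowledge at all
    b = 0.0         # bias term
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # -1, 0 or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# 'Experience': examples of logical AND, with no rule stated anywhere
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

The point of the toy is the parallel: we judge the result as ‘competent’ only by its performance on the task, because the internal rule was never written down by anyone.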
Here’s an example we are all familiar with in 2022 and it directly relates to ‘competency’. Would you be happy taking a trip in a self-driving car, barrelling at 100kph down the M1? I think that question provides a pretty sharp binary division. Yes or no… it depends on so many things, not the least being your ability to trust not only the competence of your auto-pilot but the competence of the team of people standing behind its performance. If you are like me, you’d want to know a fair bit about how that self-driving system worked and what ‘unknown unknowns’ it might have moulded into its head.
Now, some people will just say ‘hey, it’s a self-driving car! Let’s go’. There’s not much I can do for them - risk takers the lot (from my perspective). Those with a bit of hesitation… they are more interesting. What will it take to convince them it’s safer in the self-driving model than in the car driven by the flawed human? Isn’t it strange… we accept that we humans are flawed but we demand perfection from our creations… there seems to be some belief we share that acting together we compensate for each other’s weaknesses.
This picture of someone reluctant to trust their life and the lives of their loved ones to the ‘black box’ of the self-driving car strikes me as having parallels in our ‘competency’ challenge. I mean, if I -really- want to know how the AI works, I’d best go back to school and delve into the decades of research - and that would only scratch the surface of what could go wrong with these systems! It’s the same for a non-specialist investor and the resource industry. Worse… it’s the same for 90%+ of corporate leaders, financiers and other stakeholders… they need to ‘trust the competent person’ and, by implication, trust that the competent person is well versed in the strengths and weaknesses of their models.
And therein lies one part of the problem. We have systems that automate the estimation process. How many people know what lies inside those automated assistants? I’m looking at you, you ‘competent person’! Do you know how a variogram is calculated? Do you know the accuracy of your drill hole survey and the desurvey algorithm used to calculate those xyz coordinates for every sample? Do you know the impact of location error, analytical error, parametric uncertainty? Have you quantified those factors? And that’s the easy part… the challenge and degrees of freedom only increase when you cross into the world of modifying factors! Woe betide the poor person ‘competent’ for the ore reserve! The uncertainty involved and the sheer number of scenarios available make reserve estimation seriously problematic. Exploring the scenario and alternative space in the shift from resource model to reserve estimate is a task I think well suited to some AI assistance!
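For anyone who hesitated at the variogram question: there is no deep magic inside the button the software gives you. A bare-bones sketch of an omnidirectional experimental semivariogram follows - my own simplified illustration (fixed-width distance bins, no direction or tolerance controls, names invented for the example), not the implementation of any particular package.

```python
import math

def experimental_variogram(coords, values, lag_width, n_lags):
    """Omnidirectional experimental semivariogram.
    For each lag bin h: gamma(h) = sum (z_i - z_j)^2 / (2 * N(h))
    over all sample pairs whose separation falls in that bin."""
    sums = [0.0] * n_lags
    counts = [0] * n_lags
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):           # every unique pair of samples
            d = math.dist(coords[i], coords[j])
            k = int(d // lag_width)          # which distance bin this pair lands in
            if k < n_lags:
                sums[k] += (values[i] - values[j]) ** 2
                counts[k] += 1
    # (lag midpoint, gamma, pair count) for each populated bin
    return [((k + 0.5) * lag_width, sums[k] / (2 * counts[k]), counts[k])
            for k in range(n_lags) if counts[k] > 0]
```

Twenty-odd lines - and yet every choice in it (bin width, pairing, the half in the denominator) quietly shapes the model downstream. That, in miniature, is the ‘what lies inside the automated assistant’ problem.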
This is one of the first lessons I see. We can learn about understanding what can go wrong. How can decisions we make affect the outcome? And that’s a question about both our estimates -and- our definition of competency! If we take some action around defining competency, what will be both the intended and unintended outcomes? Are we at risk of ‘rewarding A while expecting B’?
I think we do need to improve our conception of competency, but I also worry about that last question… rewarding A and expecting B. If and when we change the rules we’d best be very certain we close the loopholes - or else we may simply make matters worse!
More to follow.
Always an interesting read. The open source software world has argued that transparency and openness are in part an answer to trust and confirmation of competence. If you can see and review, then you can at least gauge through your own lens. But as humans, we also trust based on experience and probability. It’s the only thing that allows us to trust the hundreds of people barrelling towards us at closing rates exceeding 200km/hr on two-lane highways on rainy nights… we just believe that we will get home safe.
Director and Principal Mining Geologist at Rose Mining Geology
Reminds me of Zen and the Art of Motorcycle Maintenance, but the discussion there is about "what is quality". We all know something is poor quality when we see it, but how do you define it?
Specialist in Mineral Resource and Ore Reserve Estimation at Geostokos Limited
Many decades ago, someone asked me what I thought of "expert systems". My answer: "show me your expert…"
Chief Geoscientist at INX-K2fly
A definition of intelligence, let alone an 'artificial' one, is also worthy of some debate…