I'm an old, very experienced identity architect. When I lead complex, global teams, after determining the program scope, the next thing to do is to assemble a widely diverse team and begin to bear down on use cases. This work then lays out deliverables and identifies requirements, costs, etc.
Thus, this article discusses my first cut at creating use cases for AI system and bot legal identities. It covers the high-level categories of use cases, i.e. governance, business process, technical and end users. As well, it discusses complex use cases, like the sharing of data between different AI systems/bots. It ends with a brief discussion of PIAM (personal identity access management) systems for AI systems and bots.
IT'S VERY FAR FROM COMPLETE. However, I wanted to begin "staking out the ground" so that jurisdictional leaders and AI/law research groups can begin to see what lies ahead.
Talk is cheap. Use cases are where the proverbial rubber hits the road.
Notes - Use Case Process
- The diverse team first prepares high-level categories of use cases, such as those below in this article.
- Then we begin to dive down into each category, refining the high-level use cases.
- For each use case we then begin to create flow-charts. These identify the actors, actions, results, etc.
- Where electrons are involved, we then map the flow of electrons through all parts of an enterprise, internet, etc. This involves firewalls, load balancers, networks, servers, apps etc.
- We then determine security conditions for each part of the use case, i.e. policies, encryption, digital signatures, identity assurance, credential assurance, session assurance, etc. (a sketch of recording these follows this list).
- Next we move into POC (proof of concept) environments, where a red team constantly attacks, not only the tech, but also the governance, business processes and end users, looking for any potential weakness.
- Once we think the use case is ready for "real life", we then migrate to a very tightly controlled pilot environment, where again the red team constantly attacks, looking for weaknesses.
- Assuming it's good, the use cases are then migrated through the environments towards production. Red teams attack along the way. Any weakness means the use case is sent back to POC, and then repeats the process.
- When the use case finally hits production, in the old days, a process was put in place to test the use case at pre-determined intervals, looking for weaknesses. Those days are now dead. Why?
- This curve means that EACH HOUR, new attack vectors are being created to attack the governance, business processes, tech and end users involved in a legal identity use case. Addressing this requires out-of-the-box thinking.
- Which is why Michael Kleeman, ex-CTO of Boston Consulting Group, and I came up with the concept of having a global, independent non-profit, whose job it is to constantly attack the legal identity framework, 24x7x365, doing threat analysis. This constant "red team" issues threat assessments.
- Based on the level of threat, enterprises, governments and end users take the appropriate response. For example, a very low threat might take months or longer to address, while a very high threat is responded to in hours. This brings industry best practice to the world of legal identity.
- The same process also includes standards, laws and regulations. Thus, a very high threat requires very fast changes to these. This is something our existing processes aren't prepared for, politically or organizationally.
- This is how to create a secure, legal identity framework for AI systems/bots as well as humans. It's out of the box thinking for out of the box times.
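To make the security conditions step concrete, here's a minimal sketch, in Python, of how a use case team might record these conditions per use case step. Every field name, level and default below is my illustrative assumption, not a standard:

```python
# Illustrative only: a simple record of per-step security conditions.
from dataclasses import dataclass, field

@dataclass
class SecurityConditions:
    encryption_in_transit: str = "TLS 1.3"        # assumed baseline
    encryption_at_rest: str = "AES-256"           # assumed baseline
    digital_signatures_required: bool = True
    identity_assurance_level: int = 2             # e.g. a NIST IAL-style 1-3 scale
    credential_assurance_level: int = 2           # e.g. a NIST AAL-style 1-3 scale
    session_assurance: str = "re-authenticate on risk change"
    policies: list = field(default_factory=list)  # applicable policy names

@dataclass
class UseCaseStep:
    actor: str            # who/what acts (human, AI system, bot, server...)
    action: str
    result: str
    security: SecurityConditions = field(default_factory=SecurityConditions)

# Example step from a hypothetical AI/bot registration use case:
step = UseCaseStep(
    actor="AI system owner",
    action="submit registration application",
    result="application queued for verification",
)
```

The point isn't the exact fields; it's that each flow-chart step gets explicit, reviewable security conditions the red team can then attack.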
Use Case Categories
- Governance - laws, regulations and administration
- Business Processes - the operational processes involved in making the use case a reality
- Technical - the end-to-end tech used for a use case
- End User Entities - the entities involved in the use case
Category Notes:
Governance
- AI system/bot legal identity governance is complex. Why?
- It requires local jurisdictional laws at state/provincial levels which are in sync with national and international laws regarding AI system/bot legal identities.
- THIS ISN'T EASY TO ACHIEVE. Which is why I'm starting off the use cases with governance since, in my experience, politics has the slowest rate of change when implementing identity changes.
- It goes beyond laws, to regulations. This is where legal governance hits the road. As above, it means local state/provincial/country regulations need to be in sync with those of other countries.
- As noted above, the regulations need to be able to be rapidly changed based on threat assessments. This likely means the laws and the regulations need to be carefully worded, allowing for fast changes to them. New legal admin processes need to be created allowing for this to happen - this won't be easy.
- Criminals/malicious states attack the weakest part of a use case. Given this, assuming the tech use cases are secure, it means they'll zero in on humans and business processes.
- The first rung in the chain of administration is the regulations. Thus, these need to be very carefully crafted, assuming any weaknesses in the regulations, and in the people administering them, will be exploited.
Business Processes
- In my experience, the next most complicated part of use cases is the underlying business processes. With respect to AI systems and bots, almost all of these will have to be built from scratch, with nothing to reference.
- For example, what's the business process for applying for a legal registration for AI systems and/or bots? How will the "system" check to see if the AI system or bot actually exists? What's the business process to grant a new unique legal identifier to an AI system/bot? What happens when the bot is terminated? What happens when the AI system or bot is merged with another? These are just the tip of the proverbial iceberg of business process use cases (a minimal lifecycle sketch follows this list).
- Criminals and malicious states will be constantly looking for weak spots in the business processes. It's much easier to crack these than deal with the underlying tech.
- Thus the red team needs to have not only the best and brightest tech experts, but also legal and business process experts who can think nefariously, seeing potential attack vectors. As an aside, on one of my past teams, one of the best people for coming up with new attack vectors was an English major. Her mind was excellent at looking at things very differently. It's these types of folks I look for when assembling use case and red teams.
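To illustrate the registration lifecycle questions above, here's a minimal sketch in Python. The states and allowed transitions are purely my assumptions; real jurisdictions would define their own:

```python
# Illustrative registration lifecycle for an AI system/bot legal identity.
from enum import Enum, auto

class RegistrationState(Enum):
    APPLIED = auto()      # owner applies for legal registration
    VERIFIED = auto()     # the "system" has checked the AI system/bot actually exists
    REGISTERED = auto()   # unique legal identifier granted
    MERGED = auto()       # merged with another AI system/bot
    TERMINATED = auto()   # terminated; identifier retired, never reused

# Allowed transitions; anything else is rejected and flagged for review.
ALLOWED = {
    RegistrationState.APPLIED:    {RegistrationState.VERIFIED, RegistrationState.TERMINATED},
    RegistrationState.VERIFIED:   {RegistrationState.REGISTERED, RegistrationState.TERMINATED},
    RegistrationState.REGISTERED: {RegistrationState.MERGED, RegistrationState.TERMINATED},
    RegistrationState.MERGED:     {RegistrationState.TERMINATED},
    RegistrationState.TERMINATED: set(),
}

def transition(current: RegistrationState, new: RegistrationState) -> RegistrationState:
    if new not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current.name} -> {new.name}")
    return new
```

Each state and transition then becomes its own business process use case, with its own security conditions and its own red team attention.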
Tech
- The underlying challenge of how to create a unique legal identifier, which can be inserted into source code in such a way that it can't easily be spoofed or tampered with, is not trivial. As I've written in other articles, I have no idea how this will be done (one possible direction is sketched after this list).
- This is the first tech challenge to overcome. Assuming the best and brightest come up with a tactical coding approach, it needs an equally cunning red team to attack it. This trial and error approach will, in the end, deliver a workable tech solution framework.
- Then there are the questions of how the source code will be accessed, written to, and then compiled. All of these points in the tech process are possible attack vectors.
- Then there's the speed at which all of the above occurs. Hypothetically, it could all be done in seconds, from the start of the business process to the final deliverable.
- Add to this the hypothetically awe-inspiring numbers of legal identity registrations per second.
- All of which leads to new attack vectors, where criminals will likely try to overwhelm the legal identity registration system (a simple rate-limiting sketch follows this list).
- This in turn requires not only defenses against these types of attacks, but also high availability for the AI/bot legal identity registration systems in each jurisdiction and globally.
- Summing it up, there are likely at least a few hundred or more local jurisdictional systems around the planet which need to tie into the global AI system/bot legal registration system. All of these need to be highly available, to at least 99.999% (five nines) availability, and all of them need to be secure.
- Given the curve mentioned earlier in this article, the use case team cannot rest on its laurels. With each hour, new tech attack vectors are likely being created. Thus, it's highly likely the teams in charge of defense as well as red teams attacking it, will increasingly leverage AI to defend and attack.
- Criminals will also be using the same type of tech to attack. It's a game of cat and mouse, occurring at an ever-increasing rate.
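To show one possible direction for the unique identifier challenge (and I stress this is only a sketch, not the answer), here's a Python example of binding a legal identifier to a hash of the source code via a registrar's digital signature. Tampering with either the code or the identifier then breaks verification. It deliberately doesn't solve the harder problem of an identifier living inside mutable source code:

```python
# Illustrative only. Requires the 'cryptography' package (pip install cryptography).
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Hypothetical registrar key pair; in reality this would live in an HSM.
registrar_key = ed25519.Ed25519PrivateKey.generate()
registrar_pub = registrar_key.public_key()

def issue_manifest(legal_id: str, source_code: bytes) -> tuple:
    """Registrar binds legal_id to this exact source code revision."""
    manifest = legal_id.encode() + b"|" + hashlib.sha256(source_code).digest()
    return manifest, registrar_key.sign(manifest)

def verify_manifest(manifest: bytes, signature: bytes, source_code: bytes) -> bool:
    """Any relying party can check the code matches the signed manifest."""
    _legal_id, _, claimed_hash = manifest.partition(b"|")
    if claimed_hash != hashlib.sha256(source_code).digest():
        return False                  # code modified after registration
    try:
        registrar_pub.verify(signature, manifest)
        return True
    except InvalidSignature:
        return False                  # manifest or signature tampered with

code = b"def act(): ..."
manifest, sig = issue_manifest("AI-12345", code)
assert verify_manifest(manifest, sig, code)
assert not verify_manifest(manifest, sig, code + b"# malicious edit")
```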
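And for the "overwhelm the registration system" attack vector, here's a minimal sketch of one standard defense, a per-requester token bucket rate limiter. In practice this sits at the network edge, in load balancers and anti-DDoS infrastructure, not in application code:

```python
# Illustrative token bucket: each requester gets a sustained rate plus a burst allowance.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.capacity = float(burst)
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False   # reject; caller should back off and retry later

# e.g. a registrant allowed 5 registration calls/second, with bursts of 20
bucket = TokenBucket(rate_per_sec=5.0, burst=20)
```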
End User - i.e. Each Entity and/or Human Owner
- In human legal identity fraud, criminals leverage "ignorant" humans, getting them to hand over usernames, passwords and/or their consents, and then masquerade as them. While the actual attack vectors for AI systems or bots will be different, the same concepts will be used by malicious parties.
- Thus, if the unique identifier is written to an AI system or bot source code, then criminals will want to gain easy access to the underlying source code.
- One can easily see how criminals will try to obtain access to the source code. Depending on the risk and the potential gain, they will likely expend serious energy and money on this.
- Now, one can't tell others what to do to protect their source code, i.e. people will do what they want, potentially creating attack vectors. HOWEVER, BEST PRACTICES CAN BE ESTABLISHED.
- This is where the standard-setting group/global independent non-profit can create best practices for use of source code, and continually update them.
- Based on risk, enterprises dealing with others via contracts, might specify adoption of the source code best practices for AI systems and/or bots, within the contracts. This then contractually begins to reduce risks, and potentially assign liabilities to other parties if their source code is accessed and/or tampered with.
- Then there's the human administrator who's in charge of the source code for the AI system and/or bots (and, in the future, an AI system which manages other AI systems'/bots' source code). These are all potential attack vectors.
- As the risk rises, so too must the identity, credential and session assurance for the human/AI system administrator. Very high-risk scenarios might entail multiple parties being involved before access to this portion of the source code is allowed (see the quorum sketch after this list).
- Hypothetically, I can see portions of the source code being isolated in some way from other portions, requiring different access rights to change them, with the legal identity portion likely being one of the first to get this treatment.
- All of the above are things an enterprise AI/security team must consider.
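Here's a minimal sketch of the multiple-parties idea above: access to the legal identity portion of the source code requires a quorum of distinct, sufficiently assured approvers. The thresholds and field names are my illustrative assumptions:

```python
# Illustrative quorum check for high-risk source code access.
from dataclasses import dataclass

@dataclass(frozen=True)
class Approval:
    approver_id: str       # legal identity of the human or AI administrator
    assurance_level: int   # combined identity/credential/session assurance, say 1-3

def may_access_identity_code(approvals: list, quorum: int = 2, min_assurance: int = 3) -> bool:
    # Count only distinct approvers meeting the assurance bar.
    qualified = {a.approver_id for a in approvals if a.assurance_level >= min_assurance}
    return len(qualified) >= quorum

# Two distinct high-assurance approvers -> allowed
assert may_access_identity_code([Approval("admin-1", 3), Approval("admin-2", 3)])
# The same approver twice -> denied
assert not may_access_identity_code([Approval("admin-1", 3), Approval("admin-1", 3)])
```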
More Complex Use Cases
- The use case team should start off with crawling steps, i.e. create the underpinning governance, business process, tech and end user entity use cases, allowing simple AI systems and bots to be legally identified. Then it can move on to more complex use cases. Like what?
- How will people, enterprises, companies or governments be able to contractually specify that AI system/bot 12345 can only share data with AI systems/bots ABCDE and UVXYZ? The growing ability of AI systems and bots to work together, effectively as one, means the planet needs new tools to contractually regulate this, and technically enforce it.
- In my mind, it first starts with having contractual confidence that an enterprise is dealing with AI system/bot 12345. Then, it requires coding practices allowing 12345 to share data only with ABCDE and UVXYZ (a minimal enforcement sketch follows this list).
- How all this will occur, I don't know. What I do know is the resulting framework must be highly efficient, secure and easily scalable around the planet.
- It's highly likely AI systems will be used to create the contracts between enterprises and the enterprises operating AI systems/bots. This type of tech requires a solid legal/tech underpinning as described above.
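Here's a minimal sketch of what "technically enforce it" might look like for the 12345/ABCDE/UVXYZ example above. The identifiers are the hypothetical ones from the example, and in practice the peer's legal identity would first be cryptographically verified (per the signed-manifest sketch earlier), not taken on faith:

```python
# Illustrative contractual allowlist enforcement for AI system/bot data sharing.
CONTRACT_ALLOWLISTS = {
    "12345": {"ABCDE", "UVXYZ"},   # from the contract
}

def share_data(sender_id: str, receiver_id: str, payload: bytes) -> None:
    allowed = CONTRACT_ALLOWLISTS.get(sender_id, set())
    if receiver_id not in allowed:
        # refuse, and leave an audit trail for the contracting parties
        raise PermissionError(f"{sender_id} may not share data with {receiver_id}")
    transmit(receiver_id, payload)

def transmit(receiver_id: str, payload: bytes) -> None:
    ...   # placeholder: an encrypted channel to the verified peer
```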
Then There's AI System/Bot PIAMs to Consider
- In my legal self-sovereign identity (LSSI) framework for humans, I have the concept of a PIAM (personal identity access management) system. To understand this, as background, first skim pages 31-33 of this paper. Then skim this paper, which discusses PIAM. Now, what does this have to do with AI system/bot legal identities? Everything.
- As the computational intelligence of AI systems/bots increases, it means, based on risk, there will be a need not only for a legal identity, but also for some form of onboard PIAM. Now what does this mean? Answer - it depends...
- A bot or AI system which only performs a limited number of functions might only require a rudimentary PIAM doing basic access control (sketched after this list). Its communication with other AI systems and bots might be very limited.
- Now consider sophisticated AI systems and their bots, which might number in the hundreds, thousands or millions. The architecture for these will be VERY DIFFERENT from the simple example just given. As a result, their PIAM needs will likely be highly complex.
- Now bring all of this down to use cases. It's highly likely that as AI system/bot sophistication increases, and the need for PIAM becomes more obvious, it will also become obvious that standards need to be set for AI system/bot PIAMs. Thus, use cases need to be developed.
- The first step is legally identifying the AI system/bot the PIAM is operating on. From this, identity access management use cases can then be created.
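For the rudimentary end of the PIAM spectrum, here's a minimal sketch: a simple bot keeping an onboard table of which legally identified peers may invoke which of its few functions. Everything here, identifiers included, is my illustrative assumption:

```python
# Illustrative onboard PIAM: deny by default, allow only what's explicitly granted.
PIAM_POLICY = {
    # peer legal identifier -> functions that peer may call on this bot
    "AI-ABCDE": {"read_status"},
    "AI-UVXYZ": {"read_status", "update_schedule"},
}

def authorize(peer_legal_id: str, function_name: str) -> bool:
    return function_name in PIAM_POLICY.get(peer_legal_id, set())

assert authorize("AI-UVXYZ", "update_schedule")
assert not authorize("AI-ABCDE", "update_schedule")
```

A sophisticated AI system coordinating thousands or millions of bots would need far more than this: roles, delegation, consent agreements and continual re-assessment of assurance, which is exactly why standards and use cases are needed.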
Summary
Yes, it's very complicated. Yes, it's VERY political. Yet the curve referred to earlier in this article is driving the planet to come together to solve the challenge of dealing with AI systems and bots.
It comes down to being able to legally identify these, which in turn means the planet needs to come up with a legal identity framework for AI systems and bots. Thus, bearing down on this requires use cases to stake out the ground for how this will happen securely, at high availability and speed, around the planet.
My hope in writing this article is to give policy folks and AI/law researchers awareness of what's coming their way, sooner than they might think. It requires a VERY DIVERSE group of people coming together to solve this.
This is what I was thinking about at the end of the "Mission Control - We Have a Problem" article, when I stated:
"I want to assemble some really bright technical minds from around the planet to begin drilling into this.?Yet, “AI technoids” are not the solution in and of themselves.?It also requires some very, very smart and clever lawyers, business process folks, and politicians to get involved.?
For what little it’s worth, in my gut, I’m looking for a “home” to locate this, with lots of resources, to rapidly begin work, POC’ing, and then piloting portions of it, in at least 1-3 jurisdictions around the planet."
About Guy Huntington
I'm an identity trailblazing problem solver. My past clients include Boeing, Capital One and the Government of Alberta's Digital Citizen Identity & Authentication project. Many of my past projects were leading edge at the time in the identity/security space. I've spent the last eight years working my way through creating a new legal identity architecture and leveraging this to then rethink learning.
I've also done a lot in education as a volunteer over my lifetime. This included chairing my school district's technology committee in the '90s (which resulted in wiring most of the schools with optic fiber and building a technology-leveraged school), and serving as past president of Skills Canada BC and Skills Canada.
I do short term consulting for Boards, C-suites and Governments, assisting them in readying themselves for the arrival of AI systems, bots and AI leveraged, smart digital identities of humans.
I've written LOTS about the change coming. Skim the over 100 LinkedIn articles I've written, or my webpage with lots of papers.
Quotes I REALLY LIKE!!!!!!:
- “We cannot solve our problems with the same thinking we used when we created them” – Albert Einstein
- “Change is hard at first, messy in the middle and gorgeous at the end.” – Robin Sharma
- “Change is the law of life. And those who look only to the past or present are certain to miss the future” – John F. Kennedy
Reference Links:
An Identity Day in The Life:
My Message To Government & Industry Leaders:
National Security:
Rethinking Legal Identity, Credentials & Learning:
Learning Vision:
Creativity:
AI Agents:
- “Personal AI FinTech Agents - Risks, Security And Identity”
- “AI/Bots Health Agents, Medical IoT Devices, Risks, Privacy, Security And Legal Identity”
- “Marketing In The Age of AI Agents, Bots, Behavioural Tech and Crime”
- “Legal Departments - AI/Bots, Gen AI, AI Agents, Hives, Behavioural Tech And AI's Ability To Own LLC's”
- “AI Agents & Kids"
- “AI Agent Authorization - Identity, Graphs & Architecture”
Architecture:
AI/Human Legal Identity/Learning Cost References:
AI Leveraged, Smart Digital Identities of Humans:
CISO's:
Companies, C-Suites and Boards:
Legal Identity & TODA:
Enterprise Articles:
- “Legal Departments - AI/Bots, Gen AI, AI Agents, Hives, Behavioural Tech And AI's Ability To Own LLC's”
- “Marketing In The Age of AI Agents, Bots, Behavioural Tech and Crime”
- "Major Change – Future of HR"
- “AI/Bots Health Agents, Medical IoT Devices, Risks, Privacy, Security And Legal Identity”
- “TODA, EMS & Graphs – New Enterprise Architectural Tools For A New Age”
- "Entity Management System"
- "Personal AI FinTech Agents - Risks, Security And Identity"
Rethinking Enterprise Architecture In The Age of AI:
LLC's & AI:
Challenges With AI:
New Security Model:
DAO:
Kids:
Sex:
Schools:
- “The Coming Classroom Revolution – Privacy & Internet of Things In A Classroom”
- “Kids, Digital Learning Twins, Neural Biometrics, Their Data, Privacy & Liabilities”
- “Bots, Classrooms, Privacy, Legal Identity & Contracts”
- “We Have An Identity Problem – AI/Bots in School, Home & Work”
- “Kids, Schools, AI/AR/VR, Legal Identities, Contracts and Privacy”
- “EdTech Law – Legal Identity Contracts”
- “AI, Cheating & Future of Schools/Work”
- “Using AI/Digital Learning Twins in Assessment & Education”
Biometrics:
Legal Identity:
Identity, Death, Laws & Processes:
Open Source:
Notaries:
Climate Change, Migration & Legal Identity:
Fraud/Crime:
Behavioral Marketing:
AI Systems and Bots:
- “AI, Bots & Us - Examples of Rapid Change”
- “Decentralized AI – Risks, Legal Identity, Consent & Privacy”
- "ChatGPT, AI, Identity & Privacy"
- “Why We Need To Legally Register AI Systems and Bots”
- “Why AI Regulation Requires Legal Identities of AI Systems and Bots”
- “Artificial Intelligence & Legal Identification – A Thought Paper”
- “Mission Control – We Have a Problem”
- “Lease or Rent a Bot! Rapidly Emerging Contract Law & Legal Identity Challenges”
- “The Infrastructure Behind Coordinating up to 3,000 bots in One Factory”
- “Nanobots & Legal Identity”
- “Micro Flying Bots & Legal Identity”
- “Microbots Able to Swim Through Your Body & Legal Identity”
- “Bots, Swarms, Risk & Legal Identity”
- “Nanobots, Microbots, Manufacturing, Risk, Legal Identity & Contracts”
Contract Law:
Insurance:
Health:
AI/AR/VR Metaverse Type Environments:
SOLICT:
EMP/HEMP Data Centre Protection:
Climate:
A 100,000-Foot Level Summary Of Legal Human Identity:
- Each person when they’re born has their legal identity data plus their forensic biometrics (fingerprints, and later when they can keep their eyes open – their iris) entered into a new age CRVS system (Civil Registration Vital Statistics - birth, name/gender change, marriage/divorce and death registry) with data standards
- The CRVS writes to an external database, per single person, the identity data plus their forensic biometrics, called a SOLICT (Source of Legal Identity & Credential Truth). The person now controls this
- As well, the CRVS also writes legal identity relationships to the SOLICT, e.g. child/parent, cryptographically linking the SOLICTs. So Jane Doe and her son John will have cryptographically signed digital links showing their parent/child relationship (a minimal sketch follows this list). The same methodology can be used for power of attorney/person, executor of estate/deceased, etc.
- The SOLICT in turn pushes the information out to four different types of LSSI (Legal Self-Sovereign Identity) devices: a physical ID card, a digital legal identity app, a biometrically tied physical wristband containing identity information, or a chip inserted into the person
- The person is now able, with their consent, to release legal identity information about themselves. This ranges from being able to legally, anonymously prove they’re a human (and not a bot), above or below the age of consent, Covid vaccinated, etc. It also means they can, at their discretion, release portions of their identity like gender, first name, legal name, address, etc.
- NOTE: All consents granted by the person are stored in their SOLICT
- Consent management for each person will be managed by their PIAM (Personal Identity Access Management) system. This is AI-leveraged, allowing the person, at their discretion, to automatically create consent legal agreements on the fly
- It works both locally and globally, physically and digitally anywhere on the planet
- AI systems/bots are also registered, where risk requires it, in the new age CRVS system
- Governance and continual threat assessment is done by a new, global, independent non-profit, funded by a very small charge per CRVS event to each jurisdiction, up to a maximum yearly amount.
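As a footnote to the cryptographic linking above, here's a minimal sketch of a CRVS digitally signing a parent/child relationship record referencing both SOLICTs, so either party can later prove the link. Key handling and record format are my illustrative assumptions:

```python
# Illustrative only. Requires the 'cryptography' package.
import json
from cryptography.hazmat.primitives.asymmetric import ed25519

crvs_key = ed25519.Ed25519PrivateKey.generate()   # in reality, held by the CRVS in an HSM

def sign_relationship(parent_solict_id: str, child_solict_id: str) -> tuple:
    record = json.dumps(
        {"type": "parent/child", "parent": parent_solict_id, "child": child_solict_id},
        sort_keys=True,
    ).encode()
    return record, crvs_key.sign(record)

record, signature = sign_relationship("solict-jane-doe", "solict-john-doe")
crvs_key.public_key().verify(signature, record)   # raises InvalidSignature if tampered with
```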
A 100,000-Foot Level Summary Of The Learning Vision:
- When the learner is a toddler, with their parents’ consent, they’ll be assessed by a physical bot for their learning abilities. This will include sight, sound, hearing and smell, as well as hand-eye coordination, how they work or don’t work with others, and learning abilities, all leveraging biometric and behavioral data
- All consents given on behalf of the learner or, later in the learner’s life, by the learner themselves, are stored in the learner’s SOLICT (Source of Legal Identity & Credential Truth)
- This is fed into a DLT (Digital Learning Twin), which is created and legally bound to the learner
- The DLT then produces its first IEP (Individualized Education Plan) for the learner
- The parents take home with them a learning assistant bot to assist the learner, each day, in learning. The bot updates the DLT, which in turn continually refines the learner’s IEP
- All learning data from the learner is stored in their LDV (Learner Data Vault)
- When the learner’s first day of school comes, the parents prove the learner's identity, plus their own identities and legal relationship with the learner, via their LSSI (Legal Self-Sovereign Identity) devices
- With their consent, they approve how the learner’s identity information will be used not only within the school, but also in AI/AR/VR learning environments
- As well, the parents give their consent for the learner’s DLT, IEP and learning assistant bot to be used, via their PIAM (Personal Identity Access Management) and the learner’s PIAM
- The school's LMS (Learning Management System) instantly takes the legal consent agreements, plus the learner’s identity and learning information, and integrates this with the school’s learning systems
- From the first day, each learner is delivered a customized learning program, continually updated by both human and AI system/bot learning specialists, as well as sensors, learning assessments, etc.
- All learner data collected in the school is stored in the learner’s LDV
- If the learner enters any AI/AR/VR type learning environment, consent agreements are created instantly, on the fly, with the learner, school, school districts, learning specialists, etc.
- These specify how the learner will be identified, learning data use, storage, deletion, etc.
- When the learner acquires learning credentials, these are digitally signed by the authoritative learning authority, and written to the learner’s SOLICT.
- The SOLICT in turn pushes these out to the learner’s LSSI devices
- The learner is now in control of their learning credentials
- When the learner graduates, they’ll be able, with their consent, to offer use of their DLT, IEP and LDV to employers, post-secondary institutions, etc. This significantly reduces the time and costs to train, or help, the learner learn
- The learner continually leverages their DLT/IEP/LDV until they die, i.e. it’s a lifelong learning system
- IT’S TRANSFORMATIONAL OVER TIME, NOT OVERNIGHT