Defining the purview of an Algorithmic Assessment – Environment

Last week, I attended an IEEE Summer School on Autonomous Robotic Systems in Prague and was fortunate to have discussions with many researchers working on cutting-edge methodologies for UAVs: path planning, coordination, simulation, deployment, obstacle avoidance, and more. We even got a chance to implement algorithms in a challenge: accomplish a set of tasks, simulate paths to ensure the safety of the drones in a pre-designed environment, and deploy them on real UAVs! The use case was inspection of electrical cables with UAVs, and the goal was to have multiple UAVs achieve a set of goals while avoiding obstacles, take pictures at specific viewpoints, and land safely. Not a lot of intelligence was deployed, but the algorithms had to accomplish the tasks while factoring in all the constraints and optimizing for the best outcome. We implemented a version of the shortest-path algorithm, and it was fascinating to see how different teams approached the problem! Here is a short video to give you a feel for the challenge!
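To make the shortest-path idea concrete, here is a minimal sketch of Dijkstra's algorithm over a toy waypoint graph. The graph, node names, and flight-time weights are all illustrative assumptions, not the actual challenge setup or our team's code.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over a weighted graph given as
    {node: [(neighbor, cost), ...]}. Returns (total_cost, path)."""
    pq = [(0, start, [start])]  # (cost so far, current node, path so far)
    visited = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(pq, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []  # goal unreachable

# Toy graph: nodes are inspection viewpoints, weights are flight times.
waypoints = {
    "base": [("pylon_a", 4), ("pylon_b", 7)],
    "pylon_a": [("pylon_b", 2), ("landing", 8)],
    "pylon_b": [("landing", 3)],
}
print(shortest_path(waypoints, "base", "landing"))
# → (9, ['base', 'pylon_a', 'pylon_b', 'landing'])
```

In the real challenge, the cost function would also have to encode the constraints mentioned above (obstacle clearance, battery, and viewpoint coverage), which is where the teams' approaches diverged.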

Verifying and validating algorithms for autonomous systems is challenging, especially when the models are deployed at the edge! The context of use and the assumptions made are important to constrain the problem enough that the verification problem becomes tractable. In the drone challenge, it was fascinating to see how many variables were involved and how hard it was to test without constraining the problem significantly. I shared some of these challenges in my talk there!


Now, coming back to the XYZ bank use case we have been discussing. In editions 22 and 23 of this newsletter, we talked about defining the purview of the assessment through the prism of data and models. In this edition, we will discuss aspects of the environment. Most enterprises have well-defined software stacks. Decision support systems are typically made available to end users through applications or tools that are integrated into the decision-making process. For example:

  • The application could be end-user facing, where an applicant inputs their information and gets an automatic decision on whether or not they will be given credit. The decision model could be updated by the bank periodically.
  • It could involve a human in the loop (for example, the bank receives an application by mail; a processor enters the customer’s credentials to authenticate the user, pulls the applicant’s credit score and other details, and then uses the application to determine eligibility). This may be needed when the applicant’s risk profile warrants additional review.
  • The applicant data could be batched and sent to a back office where eligibility is processed. The back office may use a custom client application, with the application hosted behind an API that only authorized clients are permitted to use.
  • The institution may use an alternative scoring mechanism provided by an external vendor (for example, a fintech that aggregates credit data) and use an ensemble or some other decision criterion to make decisions.
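The last pattern, blending an internal score with an external vendor's score, can be sketched as a simple weighted ensemble. The weight, threshold, and score values below are hypothetical placeholders, not figures from the XYZ bank case.

```python
def credit_decision(internal_score, vendor_score, threshold=650, weight=0.7):
    """Blend the bank's internal score with an external vendor's score.
    Returns (decision, blended_score). All numbers are illustrative."""
    blended = weight * internal_score + (1 - weight) * vendor_score
    decision = "approve" if blended >= threshold else "refer_for_review"
    return decision, round(blended, 1)

# 0.7 * 680 + 0.3 * 600 = 656.0, which clears the 650 threshold.
print(credit_decision(680, 600))  # → ('approve', 656.0)
```

Note that each extra component (here, the vendor score and the blending weights) is one more thing the assessment has to cover: who sets the weight, and what happens when the vendor feed is unavailable?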

Because there are multiple components in the workflow, many aspects must be considered when assessing risks. For example:

  1. How is data handled?
  2. How is the validity of the data checked?
  3. How do we ensure that the right versions of the models/APIs are invoked?
  4. How do we track which version of the model was used to make decisions?
  5. If an application is denied and the applicant asks for the reasons for denial, how should that be handled?
  6. How do we ensure that the model and the access points are secure?
  7. What quality and availability of service can we guarantee?
  8. When decisions are made, how are they monitored and tracked?
  9. How are anomalies detected?

These questions are discussion starters, but the gist is that the environment in which the algorithmic system is deployed needs to be assessed formally. We will discuss the fourth element, process, in the next newsletter.

Keep on learning!

Want to learn more? QuantUniversity is offering a LIVE Algorithmic Auditing course on August 22nd, 9:30 AM to 4:00 PM, in partnership with PRMIA. If you are interested, check out the details here: https://prmia.org/Shared_Content/Events/PRMIA_Event_Display.aspx?EventKey=8906

Many of these topics will be elaborated in the AI Risk Management book published by Wiley. Check for updates here -> https://lnkd.in/gAcUPf_m

Subscribe to this newsletter and share it with your network -> https://www.dhirubhai.net/newsletters/ai-risk-management-newsletter-6951868127286636544/

I am constantly learning too :) Please share your feedback and reach out if you have any interesting product news, updates or requests so we can add it to our pipeline.

Sri Krishnamurthy?

QuantUniversity

#machinelearning #airiskmgt #ai

Richard Self

Leadership and Keynote Speaker and member of the Data Science Research Centre at University of Derby

2 years ago

Points 3, 4 and 5 are particularly important. I raised these points several years ago at the beginning of this process. At last they are becoming mainstream questions about the #risks of using ML systems and models in fintech. They are black boxes, leading to a lack of explanations. Fintech needs to learn that ML is the lazy and quick approach. What they need in order to meet governance and compliance requirements is to use fully deterministic algorithms based on the decision criteria that are the basis of the analysis. It is then easy to explain the decision to the customer. Nice simple if-then-else decision trees that are self-explanatory.
