Car Crash or New Superhighway? – AI, Audits, Accountability, and the FCA
Global Relay
Leader in compliant communications archiving, messaging, supervision, information governance, and eDiscovery.
A recent speech from the Chief Executive of the Financial Conduct Authority (FCA), Nikhil Rathi, has laid the groundwork for a pivot towards becoming a ‘digital regulator’. The speech covered the opportunities – and risks – the FCA sees in Big Tech and Artificial Intelligence (AI), and how the regulator is attempting to balance the clear need for regulation where these technologies intersect with finance against the desire to encourage innovation.
Interestingly, the FCA’s steps towards becoming a digital regulator have drawn the attention of the National Audit Office (NAO). The NAO has flagged “significant changes … to the FCA’s regulation of the sector” as the rationale for the audit, and will be assessing the FCA’s suitability and adaptability when it comes to balancing innovation and regulation.
The FCA’s proposed direction opens up a lot of questions: is new regulation needed, how will existing legislation apply to emerging technologies, and where does accountability for AI sit – with users, with developers, or with firms and their managers?
Rob Mason, our Director of Regulatory Intelligence, shares his insight on whether regulators are getting it right on AI and Big Tech, and what this audit means for the FCA.
--------------------------------------------------------------
It’s a fair challenge for the FCA to be audited, and it will help ensure their approach remains valid as their remit expands to cover new topics and emerging technologies. But from my experience – from both sides of the boardroom table – being scrutinized by another body will not be painless, and no audit ever concludes that nothing is wrong. Whatever the NAO finds will be made public, and with the FCA seeking to become the ‘digital regulatory leader’, it will hope the results of this audit validate those credentials.
It seems likely that the main focus of this audit will be the FCA’s Big Tech and AI agenda. Regulating AI and Big Tech is an almost impossible position, and both areas carry a wealth of regulatory risks. While the timelines suggest Rathi’s speech alone did not prompt the NAO’s attention, there’s definitely a connection to be made between the FCA’s proposed direction of travel and this audit.
What the FCA seems to intend is to leverage existing regulation against emerging technologies. Legislation similar to the SMCR regime should make senior managers and stakeholders nervous when it comes to AI, explainability, and accountability. If the regime requires that management is ultimately responsible for the activities of the firm, there will be a scramble to do more due diligence on AI tools that are already in use.
Currently, many won’t be able to explain how or why the AI models their organization uses actually work. Machine Learning and AI algorithms learn ‘on the job’, reacting to changes in the market and making decisions from those inputs. But they can’t have morality encoded into them, and could end up engaging in practices human traders would know to avoid. When it comes to regulation and compliance, if something looks like spoofing, and cannot be obviously mitigated, it is spoofing. If senior managers can’t explain why their model is producing that result, they’re going to be held accountable as if they intended that result all along. Because if something looks like a duck, and quacks like a duck …
Regulators will be looking for answers to the question ‘what do you do when AI goes wrong?’ and will expect firms to have that answer ready. While regulators can implement strategies like the FCA’s digital sandbox, they need a solid understanding of where existing regulation goes far enough, and where new measures may need to be implemented. Otherwise, given the rate of technological innovation, they run the risk of becoming ‘bicycles chasing Ferraris’.
Written by: Jay Hampshire , Content Writer