Measure Twice, Cut Once: How to Implement an AML Solution Properly – Implementation

One of the most common questions I hear from my clients is about the best way to implement a new AML transaction monitoring solution. Last week, in part three of the four-part series on implementing new AML solutions, we reviewed the software selection process. This week, in part four, we’ll explore software implementation.

Once a solution has been selected, there are several phases that financial services institutions will go through to implement it:

  • Implementation planning: this phase includes allocating budget and people (internal/external) to the project (IT, project management, user testing, etc.), timeline planning, scope analysis, and authoring business requirements for any customization that is required.
  • Data Management: most financial institutions don’t have all the information required for a transaction monitoring solution in one place. This can be due to ageing core banking systems and the absence of any previous business need to consolidate it; many financial institutions now build data lakes specifically for compliance. Data acquisition can also be complicated by the need to get specific information faster than it would otherwise be available. For instance, OFAC screening on inbound/outbound wires needs to happen in real time, not a day or two later when the information might typically land in a data lake (see the screening sketch after this list). Issues like this can sometimes require re-engineering front office products or systems to support the new monitoring solution.
  • Parallel support for the legacy monitoring solution: since implementing a new transaction monitoring solution can take anywhere from nine months to two years depending on complexity, scope and available resources, it’s incredibly important to make sure that the legacy application remains available for the duration of the implementation, that enough staff are available to support both environments, and that a support plan is in place for continuing to update and make changes to the legacy system while it’s still in use.
  • Transitioning data: the legacy solution will hold a great deal of valuable data, including previous alerts and cases with their associated dispositions, notes, attachments and audit trails. It’s important for regulatory and operational purposes to decide what to do with this data: integrate it into the new solution, or make it available as a read-only archive for investigators and regulators to review upon request (see the archive sketch after this list).
  • Selection of rules, models and business logic to be applied: An organization needs a recent risk assessment in place to determine its monitoring requirements. From there, out-of-the-box rules and models can be matched to the risk assessment, and business requirements can be written for any customization that is required (see the rule-mapping sketch after this list).
  • Tuning and Surveillance Optimization: After the appropriate models have been selected, thresholds and scoring factors need to be identified. This typically involves reviewing vendor documentation, determining tunable parameters and population groups, extracting test work items and associated productivity data for analysis, producing tuning reports, building dashboards with visualization tools, performing an alert distribution analysis, meeting with investigators to gather additional context, performing a functional validation of the model, validating thresholds by reviewing productive/unproductive alerts at or near the desired threshold for each tunable parameter, performing below-the-line testing, and generating a report for audit and regulatory review (see the threshold sketch after this list).
  • Model and System Validation: Many banks find it advantageous to have a third party review their own work (or the work of another third party) and produce a report that can be used to satisfy audit or regulatory requirements that the system was implemented correctly and that the models and their associated thresholds are appropriate to meet the monitoring requirements. This is driven by written guidance on models that the Office of the Comptroller of the Currency (OCC) and the Federal Reserve have issued, which states that “all model components including input, processing, and reporting, should be subject to validation” by an independent third party.
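
To make the real-time data requirement concrete, here is a minimal Python sketch (the screening sketch referenced above) contrasting an immediate sanctions check on an outbound wire with the slower nightly feed into a compliance data lake. The function names, the hard-coded watch list and the Wire structure are all invented for illustration and are not any vendor’s actual API.

```python
# Hypothetical sketch: real-time OFAC screening on an outbound wire versus
# a nightly batch feed into a compliance data lake. Names and structures
# are illustrative only, not a specific vendor's API.
from dataclasses import dataclass
from datetime import datetime

# Illustrative, hard-coded watch list; in practice this would come from the
# OFAC SDN list and be refreshed on a schedule.
SANCTIONED_NAMES = {"ACME TRADING LLC", "JOHN DOE"}

@dataclass
class Wire:
    reference: str
    originator: str
    beneficiary: str
    amount: float
    currency: str

def screen_wire_realtime(wire: Wire) -> str:
    """Screen a wire before release; the payment is held on a potential match."""
    if wire.beneficiary.strip().upper() in SANCTIONED_NAMES:
        return "HOLD_FOR_REVIEW"   # route to a sanctions analyst immediately
    return "RELEASE"

def load_wires_to_data_lake(wires: list[Wire]) -> list[dict]:
    """Nightly batch job: by the time this runs the wires have already settled,
    which is why sanctions screening cannot wait for the data lake."""
    return [
        {"ref": w.reference, "beneficiary": w.beneficiary,
         "amount": w.amount, "loaded_at": datetime.utcnow().isoformat()}
        for w in wires
    ]

if __name__ == "__main__":
    wire = Wire("W-1001", "First Example Bank", "Acme Trading LLC", 250_000.0, "USD")
    print(screen_wire_realtime(wire))          # HOLD_FOR_REVIEW
    print(load_wires_to_data_lake([wire])[0])  # record only lands in the lake later
```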
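
For the data-transition option of a read-only archive, here is a small illustrative sketch (the archive sketch referenced above) that copies closed legacy alerts into an archive table using SQLite. The table layout and sample records are assumptions made for the example, not a prescribed schema.

```python
# Hypothetical sketch: copying closed alerts from a legacy monitoring system
# into a read-only archive that investigators and regulators can query.
# Table and column names are invented for illustration.
import sqlite3

LEGACY_ALERTS = [
    # (alert_id, rule, disposition, closed_on, investigator_notes)
    ("A-2017-0001", "Structuring", "Closed - No SAR", "2017-03-02", "Customer payroll pattern."),
    ("A-2017-0002", "High-Risk Geography", "Closed - SAR Filed", "2017-04-18", "See SAR reference in case file."),
]

def build_archive(conn: sqlite3.Connection) -> None:
    conn.execute(
        """CREATE TABLE IF NOT EXISTS legacy_alert_archive (
               alert_id TEXT PRIMARY KEY,
               rule TEXT,
               disposition TEXT,
               closed_on TEXT,
               notes TEXT)"""
    )
    conn.executemany(
        "INSERT OR IGNORE INTO legacy_alert_archive VALUES (?, ?, ?, ?, ?)",
        LEGACY_ALERTS,
    )
    conn.commit()
    # Switch the connection to read-only queries so the migrated audit trail
    # cannot be altered through this handle (enforced operationally as well).
    conn.execute("PRAGMA query_only = ON")

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    build_archive(conn)
    for row in conn.execute("SELECT alert_id, disposition FROM legacy_alert_archive"):
        print(row)
```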
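
The rule-mapping sketch referenced above shows one simple way to line up risks from a risk assessment against a vendor’s out-of-the-box rules and surface the gaps that will need custom business requirements. The risk names and rule codes are hypothetical.

```python
# Hypothetical sketch: matching risks from the enterprise risk assessment to a
# vendor's out-of-the-box rules and flagging gaps that need custom business
# requirements. Risk and rule names are invented for illustration.
RISK_ASSESSMENT = [
    "structuring",              # cash deposits kept under reporting thresholds
    "rapid movement of funds",
    "high-risk geographies",
    "trade-based laundering",   # assume no packaged rule covers this
]

OUT_OF_THE_BOX_RULES = {
    "structuring": "CASH-STRUCT-01",
    "rapid movement of funds": "VELOCITY-03",
    "high-risk geographies": "GEO-RISK-02",
}

def map_risks_to_rules(risks, rules):
    """Return (covered, gaps): which risks map to packaged rules and which
    will need custom models or business requirements."""
    covered = {r: rules[r] for r in risks if r in rules}
    gaps = [r for r in risks if r not in rules]
    return covered, gaps

if __name__ == "__main__":
    covered, gaps = map_risks_to_rules(RISK_ASSESSMENT, OUT_OF_THE_BOX_RULES)
    print("Covered by packaged rules:", covered)
    print("Needs custom business requirements:", gaps)
```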
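
Finally, the threshold sketch referenced above illustrates the spirit of threshold validation and below-the-line testing for a single tunable parameter: count how many alerts would fire at each candidate threshold, see how many of those investigators previously found productive, and sample items just below the line. The work-item data is invented purely for illustration.

```python
# Hypothetical sketch: threshold validation for one tunable parameter
# (e.g. aggregate cash amount). Given historical work items labeled productive
# or unproductive by investigators, it reports alert volume and productivity
# at candidate thresholds and samples below-the-line items. Data is invented.
import random

# (aggregate_amount, was_productive) pairs from historical investigations
WORK_ITEMS = [(9_500, True), (9_800, True), (10_200, False), (11_000, True),
              (12_500, False), (15_000, True), (7_000, False), (8_200, False),
              (20_000, True), (6_500, False), (13_400, False), (9_900, True)]

def threshold_report(items, thresholds):
    """For each candidate threshold, show how many alerts would fire and how
    many of those investigators previously found productive."""
    for t in thresholds:
        alerted = [p for p in items if p[0] >= t]
        productive = sum(1 for _, good in alerted if good)
        rate = productive / len(alerted) if alerted else 0.0
        print(f"threshold {t:>7,}: {len(alerted):2d} alerts, "
              f"{productive} productive ({rate:.0%})")

def below_the_line_sample(items, threshold, sample_size=3, seed=7):
    """Sample items just under the threshold to confirm the institution is not
    systematically missing productive activity below the line."""
    below = [p for p in items if p[0] < threshold]
    random.seed(seed)
    return random.sample(below, min(sample_size, len(below)))

if __name__ == "__main__":
    threshold_report(WORK_ITEMS, thresholds=[8_000, 10_000, 12_000])
    print("Below-the-line sample at 10,000:", below_the_line_sample(WORK_ITEMS, 10_000))
```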

Having a good understanding of the phases of implementing a new AML solution will make the implementation go more smoothly. If you’re looking to improve the software you use for transaction monitoring, please reach out to me; I’d be happy to share my experience managing system implementations and upgrades at several financial institutions and tell you more about how I can help you address your needs.
