Responsible Use of AI in Customer Service
Microsoft's generative AI is a powerful tool for customer service centres, but its regulatory landscape can be hard to navigate.

Copilot's natural language interface is accessible to end users, and its integration into an interactive voice response (IVR) system within Copilot Studio eliminates cumbersome FAQ trees. However, the data it accesses and processes to answer end users' questions can lead to privacy problems for organizations implementing the solution. Balancing customer needs and responsible data usage isn't easy.

Barry Givens, Avanade's director of Microsoft CRM product management, and Frank Vukovits, Delinea's chief security scientist, discussed the best ways to navigate data regulations in a recent MSDynamicsWorld masterclass.

Example: Improper Use of AI in Customer Service

Barry asks us to imagine an individual whose healthcare plan covers several mental health appointments annually. The individual's healthcare payer, a modern, user-friendly organization, allows its customers to interact with its Copilot implementation. The individual asks Copilot how many mental health visits they have left this year, and Copilot lets them know. The individual is happy because they receive the answer to their question quickly, easily, and intuitively.

To generate its response, Copilot needed to access the individual's private medical records, retrieve the relevant information, and compare it against the individual's medical plan.

Barry sees a problem there.

"You don't necessarily know what that language model is going to do with that customer data at the end of the day," says Berry. "In order to (generate a response) you've got to play up some sensitive data to an external service, usually. That will come straight into violation of a lot of regulations on data privacy."

Navigating Data Privacy Regulations

Frank Vukovits suggests that the best practices for responsibly using customers' data within Copilot are learning all you can about the regulations governing your region and keeping your customers informed.

"Don't just say, 'Microsoft must do all the right stuff when it comes to the governance of AI so I, as a company, don't have to do anything,'" Frank says. Instead, organizations must understand how they use both the AI tool and their customers' data.

Different regulations govern different geographical regions. For example, in the United States, statutes like the California Consumer Privacy Act and the Colorado Privacy Act vary by state. Meanwhile, the General Data Protection Regulation governs the entire European Union. By crafting a Responsible Use of AI Statement, you can let your customers know exactly what to expect.

Getting Started Safely

Focusing on knowledge search and knowledge summarization first is effective, especially for those using Dynamics 365, which has Copilot built in. By directing your Copilot to known content, such as public-facing websites, you can be sure that you know what data is feeding the model, which Barry says is a good place to start.
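That "known content" principle can be expressed as a simple allowlist check applied before a source is ever indexed. The following is a minimal sketch of the idea; the domain list and function name are hypothetical and do not correspond to Copilot Studio APIs.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: only public-facing sites the organization
# controls are eligible to feed the model's knowledge base.
ALLOWED_DOMAINS = {"www.contoso.com", "support.contoso.com"}

def is_indexable(url: str) -> bool:
    """Accept a source only if its host is on the explicit allowlist."""
    host = urlparse(url).hostname
    return host in ALLOWED_DOMAINS

sources = [
    "https://www.contoso.com/faq",
    "https://internal.contoso.com/member-records",  # private: rejected
]
approved = [u for u in sources if is_indexable(u)]
print(approved)
# → ['https://www.contoso.com/faq']
```

Because the allowlist is explicit, you always know exactly what data is feeding the model, which is the property Barry describes.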

From there, it's best to keep learning.

To learn more about implementing customer-facing AI tools responsibly, watch the complete masterclass for free.
