#12 On the ethical use of AI in B2B

AI is everywhere—it’s the AirPods of 2016 and the Bitcoin of 2018. Everyone has it, everyone craves it, and few understand how it all works. You’ve noticed that virtually every SaaS out there is now AI-powered—so fancy! Unlike AirPods, though, a lot more is going on behind the scenes when you use AI.

For app devs, the benefit of AI is tremendous: anyone can now publish an app in record time. I did that myself, as both an experiment and an exercise (which was a blast, by the way). All it takes is $99 to launch a new app on the App Store. Add a $20-or-so ChatGPT subscription and a few weeks of work, and boom—you’ve got a new competitor in the mix.

The same extends to reputation management—AI can do that for you instantly and scale up without limit. Sounds awesome, sure. But does AI hold to privacy standards? Is it ethical? Is it useful enough? Here’s what I think about it.

The ideal use of AI

Let’s start with usefulness. Say you’ve got an app with a high volume of reviews coming in every day. You can approach this in several ways:

  • Manually: for complex issues, unique scenarios, or featured reviews.
  • Automatically: for common problems or reviews obviously made by mistake (like a 1-star review that reads positive).
  • Via AI: for filtering through positive feedback, clarifying negative points, and keeping users updated on bugs or new features.

I’ll say that every approach can, and should, be used—the “when” of it depends on the use case. A human response is needed in unique situations, automation for the common things you can template, and AI for the low-hanging fruit and for improving both manual and automated approaches (personalization!).
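To make the routing idea concrete, here’s a minimal sketch in Python. Everything in it is hypothetical—the Review fields, the keyword check, the thresholds—a real pipeline would use your own data model and a proper sentiment model rather than a word list.

```python
# A minimal triage sketch (hypothetical field names and thresholds, not a
# real AppFollow API): route each incoming review to manual, automated,
# or AI handling.

from dataclasses import dataclass

@dataclass
class Review:
    rating: int        # 1-5 stars
    text: str
    is_featured: bool  # surfaced prominently on the store page

POSITIVE_WORDS = {"great", "love", "excellent", "awesome"}

def looks_positive(text: str) -> bool:
    """Crude keyword check; a real pipeline would use a sentiment model."""
    return bool(set(text.lower().split()) & POSITIVE_WORDS)

def route(review: Review) -> str:
    if review.is_featured:
        return "manual"     # unique, high-visibility cases get a human
    if review.rating <= 2 and looks_positive(review.text):
        return "automated"  # likely a mis-tapped star rating: send a template
    return "ai"             # everything else: AI drafts a personalized reply

print(route(Review(rating=1, text="love this app, works great!", is_featured=False)))
# -> automated
```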

That’s the hybrid approach. Each team you’ve got can and should exercise it in the same way.

Support team > manual for unique queries/responses, automated for feedback gathering and updates, AI for standard reports and adaptation of responses to your tone of voice.

Marketing team > manual responses to featured reviews, automated for responses and categorization, AI for promotions. The best thing here is that AI can help you post updates in bulk based on certain criteria: tags, topics, you name it.

The key here is updates: on both Google Play and the App Store, users only get notified once about a developer’s first response. Any further replies from your side? Users have to check manually—for now, at least. If users see that you respond all the time, they’ll come back for a follow-up on their own.

We’ve seen that up to 30% of reviews get updated if you keep the conversation going.
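As a sketch of what those bulk follow-ups could look like, here’s a hypothetical Python snippet that picks reviews by tag and posts a templated update. The post_reply function and the review dictionaries are stand-ins for whatever client you actually use, not a real AppFollow or store endpoint.

```python
# Hypothetical bulk follow-up: select reviews by tag and post a templated
# update so users know the issue they reported has been addressed.

def post_reply(review_id: str, text: str) -> None:
    print(f"reply to {review_id}: {text}")  # replace with a real API call

def bulk_follow_up(reviews: list[dict], tag: str, template: str) -> int:
    """Send one templated follow-up to every tagged review not yet updated."""
    sent = 0
    for r in reviews:
        if tag in r["tags"] and not r.get("followed_up"):
            post_reply(r["id"], template.format(user=r["author"]))
            r["followed_up"] = True
            sent += 1
    return sent

reviews = [
    {"id": "r1", "tags": ["crash-on-login"], "author": "appfan42"},
    {"id": "r2", "tags": ["feature-request"], "author": "poweruser"},
]
bulk_follow_up(reviews, "crash-on-login",
               "Hi {user}, the login crash you reported is fixed in the latest update!")
```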

Naturally, the support team will also use software like Zendesk and Helpshift to categorize feedback and work with tickets. That’s why a unified, AI-powered inbox (oh no, I have succumbed to the same hype train!) is a great idea.

Now, manual and automated approaches are easy to understand—all ends of the process are within your control. AI does not offer this transparency; something happens in a third-party system, and then you get results.

So how does it happen, and what measures of control do you have? Let’s talk about that.

Considerations on training, accuracy, and privacy

AI is new. Without boring you with how it all works, the fundamental thing about AI is that it’s built to predict: the model produces the most plausible answer based on patterns in its training data. That’s good, because most commercially available models have been trained on a huge share of the public internet (which is why Claude or ChatGPT have such a distinct tone of voice and response structure, by the way—academic papers are likely a big part of their training data!)

Let’s assess the use case of review management with this in mind:

Privacy

Since review data is usually public and posted under nicknames, privacy isn’t really a big deal here. I would say that unless sensitive data is involved, AI is a great helper and you don’t need to worry about anything. If it’s openly on the internet, it’s fair game for anyone, not just AI training models.

Also, in time, AI training models will be properly regulated, as all things are. For the time being, major platforms have hundreds of fail-safes in place to prevent anything unbecoming from happening; they’re the ones liable if it does.

Accuracy

This one is tricky. How good AI is at getting things right depends on the model and how it’s trained. Large models come with a base training of their own, which can then be tailored with custom instructions.

You can have AI respond based on what’s available in your knowledge base, tone-of-voice guide, and so on. Hallucinations do still happen on occasion, but they’re rare these days and generally don’t appear on simpler queries.

At AppFollow, we don’t just throw AI tools into the mix, whether they’re made in-house or come from outside. Everything’s tested rigorously. That’s why you can report inaccuracies, and soon you will be able to use custom instructions in the process as well. Unless we are sure it’s 100% safe, it’s not happening.

My advice: if you’re considering starting with AI, don’t enable auto-post just yet. Generate replies and see if the answers are up to your standards, tailor them, and then go for a 100% response rate.
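Here’s one way that draft-first workflow could look, sketched with the OpenAI Python SDK (any LLM provider works the same way). The model name and tone guide are placeholder assumptions; the point is that the draft is surfaced for a human to review instead of being auto-posted.

```python
# Draft-first sketch: generate a reply, but hold it for human approval.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name and tone guide below are illustrative placeholders.

from openai import OpenAI

client = OpenAI()

TONE_GUIDE = "Friendly, concise, no jargon. Always thank the reviewer."

def draft_reply(review_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder: use whichever model you prefer
        messages=[
            {"role": "system",
             "content": f"Draft an app-store review reply. Style guide: {TONE_GUIDE}"},
            {"role": "user", "content": review_text},
        ],
    )
    return response.choices[0].message.content

draft = draft_reply("The app keeps logging me out after the last update.")
print(draft)  # a human reads and edits this before anything is posted
```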

Ethics

Is it really okay to let a machine handle personal chats? That’s still up for debate. And how is using a scripted response any different? AI does that—it’s just quicker and on autopilot. In many ways, it’s a step up from prefabricated replies, because it can take your templates and reword them for uniqueness. The same useful data, but with a personal (which is ironic) touch.

At the same time, not every company gets to use AI or automation, depending on their own rules. It can be a simple no-go because all data must be governed internally. That’s okay, even though it is a bottleneck that could be solved. But I want to make it clear: our AI sticks to the toughest ethical standards and will not offer anything other than what’s asked of it within terms and conditions.

At the end of the day, if a customer gets a spot-on, personalized response almost right away, isn’t that killer customer service? Human responses will still be there where they’re needed.

AI at AppFollow

As much as I’d like to sell our creation a little bit, I won’t. Instead, I just want to say a few words about a case study we have with Standard Bank—one of the active users of AppFollow’s AI Replies.

They have tons of customers, and now 80% of reviews get instant, automated replies. When things get complex, their human agents are ready to step in. This mix of automated and personalized service has sparked a 646% jump in positive customer interactions.

It worked so well that Standard Bank now has a CSAT of 50%, against an industry average of just 15%. Kudos to Nkoebe Motlhajoa and their team.

That’s your perfect example of the hybrid approach I wrote about earlier: humans where it matters, automation and AI for everything else.

Outro

Things will change all the time. AI at AppFollow will change all the time. There’s no stopping progress. We can, however, figure out how to apply this new technology in the best way, always with the end user’s happiness in mind.

One more thing, in closing: AI is a tool, and should be treated as such. It should not replace people; it’s meant to help them do more, with less stress.
