“Siri, open Alexa” - a story behind a skill.

Recently, I announced an Alexa skill that checks chain restrictions on the way to Big Bear. Last weekend a new version went live: the skill now analyses 7 different routes to and from Big Bear and recommends the fastest one.

As promised, I’m sharing some thought processes behind the product.

Part 1: The problem

The problem of finding an optimal way to get to the mountain had bothered me for a while. Last year there was a day when we didn’t check the conditions, got to a closed route, drove around the mountain, and got stuck dead in traffic there. After losing an extra hour, we turned to another resort.

Another time, in January, I had a particularly long day: even though I woke up before 6 am, I got to the slopes at 11. I lost just a few minutes checking the route, and the main parking lot filled up right in front of my car. As I was driving home and thinking about the day, it struck me: this is not how things should work. This is not customer-friendly at all. So I decided to fix it.

Part 2: Working backwards

How should it work, then?

If we look at a typical skiing day, the only time one thinks about the road is inside the car. Before and after, the focus is on other things, so it’s only natural to forget to check the road conditions. Inside the car, no one wants to spend time fumbling with a phone searching for answers: the driver is eager to head out. So the solution must be a driver-friendly experience, hence hands-free. In other words, it has to be a voice assistant skill.

The skier is excited about the day ahead; they pack the car, head out, and ask a question: “What are the road conditions?” The brief answer they hear gives them all the information they need to make a decision. Once they confirm the choice, the navigation turns on.

No time wasted, no hassle, no diverted attention: a perfectly seamless, organic experience consistent with habits and human nature.

Part 3: Choosing the assistant

There are three relevant mobile assistants: Siri, Alexa, and Google. Two considerations drove the choice:

  • Accessibility and popularity: how many people can benefit from the skill
  • Development speed: it is important to move fast and gain maximum results without bloating the scope

Based on my observation, most people in Big Bear use iPhones, which makes Siri the first choice. Alexa is cross-platform and many people install it to manage Echo devices, so it comes second. Google Assistant is also cross-platform, but there are fewer incentives for iPhone users to have it.

When it comes to implementation effort, Siri falls behind: a Siri skill requires developing a full application. That means:

  • Creating and managing certificates
  • Creating UI, and a set of mandatory graphics collateral
  • Complicated build and submission process

On Alexa or Google, the skill requires:

  • Accepting developer terms and conditions
  • Uploading a couple of icons

While Siri requires significant overhead that creates no value in a driver-friendly, voice-only application, Alexa handles everything and then some. I can talk all day about how mind-blowingly good the Alexa development experience is: you can have Alexa answering your custom questions in just a few minutes, and you won’t have to leave the browser or pay a dime.
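To illustrate how little scaffolding an Alexa skill needs, here is a minimal sketch of a skill backend written as a plain AWS Lambda handler in Python. The intent name `RoadConditionsIntent` and all the response texts are hypothetical placeholders for illustration, not the skill’s actual code; only the response envelope follows the documented Alexa JSON format.

```python
# Minimal sketch of an Alexa skill backend as an AWS Lambda handler.
# The intent name and answer texts below are hypothetical placeholders.

def build_response(speech_text, end_session=True):
    """Wrap plain text into the standard Alexa response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    """Route incoming Alexa requests to a spoken answer."""
    request = event.get("request", {})
    if request.get("type") == "LaunchRequest":
        # User said "Alexa, open <skill name>" with no question yet.
        return build_response("Welcome! Ask me about the road conditions.",
                              end_session=False)
    if request.get("type") == "IntentRequest":
        intent = request.get("intent", {}).get("name")
        if intent == "RoadConditionsIntent":  # hypothetical intent name
            return build_response("Chains are required on highway 18.")
    return build_response("Sorry, I didn't get that.")
```

The whole backend is one function returning a dictionary; there are no certificates, no UI, and no build pipeline, which is exactly the gap in overhead described above.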

Google Assistant has an excellent infrastructure for skills, but their approach is a bit stricter.

It is interesting how the solutions provided by those companies reflect the corporate culture:

  • Apple leverages the user base and taxes developers.
  • Amazon fuels growth by treating developers as an important customer and removing any possible friction for them. Amazon invites more developers to create more skills, with an ultimate goal to increase the value of Alexa for the customers.
  • Google is a brilliant engineering company, so they focus on technology and use a bit more formal and disciplined approach.

Based on the weighted criteria, I went with Alexa. Unfortunately, that means the starting phrase is “Siri, open Alexa”.

Part 4: Prioritization

I’m a prophet of the lean approach: building through a series of MVPs.

It is important to understand how an MVP is different from a prototype, a milestone, or just a version:

  1. An MVP serves the goal of learning: testing assumptions in small batches
  2. An MVP is extremely focused on what matters right now
  3. An MVP, unlike a prototype, is not a bad version of the future product; it is a good version of a very small product. Quality is crucial to be able to learn what needs to be learned. A shitty MVP leads to false-negative results: it gets dismissed due to a poor experience, not because the assumption was wrong

I’ve already validated that the problem exists and observed people dealing with it in different ways. Solutions to the problem exist, but they are bad, so a streamlined experience is the key value of the voice approach, and there is no room for compromise.

The next critical risk is feasibility: can I solve the problem while providing a great customer experience?

That naturally leads to an MVP break-down:

  1. A good skill that solves the least driver-friendly part of the problem: checking the chain requirements. It requires minimal code but stays viable.
  2. A skill that greatly simplifies choosing the route: it checks the traffic situation on different routes and reports it to the customer. This task is a bit more complicated, as it requires not only getting the traffic information but also using the driver’s location. Still, it cannot be broken down further, because without the location it would be barely usable, and hence not viable.
  3. Helping the customer to make a decision in one step: integrating chain requirements and traffic information together to give a comprehensive recommendation.
  4. A skill that removes the last manual step: turning on the navigation. This remains an important UX risk because Alexa doesn’t support multi-point navigation out of the box.

So, phase 2 is live now. It was a very interesting task that required some investigation, involved some graph theory, and posed a few tricky challenges. I will probably post about it later.
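At its core, the phase-2 route comparison boils down to a shortest-path problem over a small road graph. Here is a minimal sketch of that idea using Dijkstra’s algorithm; the graph, the route names, and the travel times below are made-up illustrations, not the skill’s real data or algorithm.

```python
import heapq

def fastest_route(graph, start, goal):
    """Dijkstra's shortest path over {node: [(neighbor, minutes), ...]}.
    Returns (total_minutes, path), or (inf, []) if the goal is unreachable."""
    queue = [(0, start, [start])]  # (minutes so far, node, path taken)
    seen = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, cost in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (minutes + cost, neighbor, path + [neighbor]))
    return float("inf"), []

# Toy road graph with current travel times in minutes (made-up numbers).
roads = {
    "home": [("hwy330", 45), ("hwy38", 50)],
    "hwy330": [("big_bear", 40)],
    "hwy38": [("big_bear", 55)],
}

minutes, path = fastest_route(roads, "home", "big_bear")
# In this toy graph: 85 minutes via hwy330, versus 105 via hwy38.
```

With live traffic feeding the edge weights, re-running the search on each request is enough to pick the fastest of the candidate routes.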

Phase 3 is in progress, and I just got Alexa Auto to play with while working on phase 4.
