Thoughts on a new possibility of digital UI/UX with Generative AI
Junghoon Woo
H&A (Home Appliance and Air Solution) Data Platform Lead / CVP at LG Electronics
Mobile devices offer greater accessibility compared to PCs, but they come with the inherent limitation of a smaller screen. As a result, mobile UIs tend to be simpler, with less information displayed on a single screen, leading to deeper navigation structures. Users often have to scroll extensively or press multiple buttons, and they frequently encounter scenarios where they reach the same functionality through various entry points, rather than following a single linear path.
For example, when using the "Baemin" app to order food, users must search for a restaurant, select it, explore the menu items one by one, read reviews, and then place their order. This process involves several layers of clicks and navigation. While visual UIs are powerful in that they allow users to see various options, prices, and ratings at a glance, they often feel inefficient for execution tasks due to excessive clicking, scrolling, and drilling down. This structure not only increases the complexity of development and maintenance but also negatively impacts the user experience. The problem worsens as the app succeeds, more features are added, and user needs grow.
The complexity of mobile UI/UX for both users and developers can escalate due to the basic "IF-THEN" logic that underlies software. As an app becomes more successful and gains more users, it has to accommodate a greater variety of needs. With every feature it supports, the number of IF-THEN conditions multiplies, exponentially complicating the software's logic. This, in turn, increases development and operational costs.
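To make the branching explosion concrete, here is a minimal sketch of a traditional ordering flow. The feature names and screens are invented for illustration; the point is that each independent option doubles the number of paths the UI has to model explicitly.

```python
# Hypothetical sketch: branching logic in a traditional UI flow.
# Every new option (delivery, coupons, dietary filters, ...) multiplies
# the IF-THEN paths that must be designed, tested, and maintained.

def route_order(user: dict) -> str:
    if user["wants_delivery"]:
        if user["has_coupon"]:
            if user["dietary_filter"]:
                return "delivery + coupon + filtered menu screen"
            return "delivery + coupon screen"
        if user["dietary_filter"]:
            return "delivery + filtered menu screen"
        return "delivery screen"
    else:
        if user["has_coupon"]:
            return "pickup + coupon screen"
        return "pickup screen"

# Three boolean features already imply up to 2**3 = 8 distinct paths;
# n independent options mean up to 2**n branches.
print(route_order({"wants_delivery": True, "has_coupon": False, "dietary_filter": True}))
```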
But what if we didn’t need to model interactions using IF-THEN logic? What if buttons and dropdown menus designed to clarify the user’s intent for specific actions were no longer necessary? Imagine software capable of processing and responding to the user's vague or ambiguous requests.
This is where Generative AI opens up new possibilities. By leveraging AI that can understand and reason about a user's ambiguous needs, we can simplify the otherwise complex IF-THEN logic. Users would no longer need to navigate through intricate menu structures; they could simply express their requirements in natural language. For example, a user could say, "I want to order spicy chicken from a place with good reviews," and the AI would immediately present relevant options that meet those criteria. This minimizes user clicks and scrolling, allowing them to reach their desired outcome quickly.
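The spicy-chicken example can be sketched in a few lines. This is a toy stand-in, not a real implementation: in practice a generative model would perform the intent extraction, whereas `extract_intent` here uses simple keyword matching, and the restaurant data is invented. What matters is the shape of the flow: free text becomes a structured query, which filters options directly, with no menu drilling.

```python
# Sketch: natural-language request -> structured intent -> filtered results.
# extract_intent is a keyword-matching stand-in for an LLM call; the
# restaurant records are illustrative, not real data.

RESTAURANTS = [
    {"name": "Fire Chicken House", "tags": {"chicken", "spicy"}, "rating": 4.7},
    {"name": "Mild Wings", "tags": {"chicken"}, "rating": 4.8},
    {"name": "Spice Noodle Bar", "tags": {"noodles", "spicy"}, "rating": 4.2},
]

def extract_intent(utterance: str) -> dict:
    """Stand-in for a generative model: map free text to a structured query."""
    words = utterance.lower()
    return {
        "tags": {t for t in ("spicy", "chicken", "noodles") if t in words},
        "min_rating": 4.5 if "good reviews" in words else 0.0,
    }

def search(intent: dict) -> list:
    return [r["name"] for r in RESTAURANTS
            if intent["tags"] <= r["tags"] and r["rating"] >= intent["min_rating"]]

print(search(extract_intent(
    "I want to order spicy chicken from a place with good reviews")))
# -> ['Fire Chicken House']
```

Swapping the keyword matcher for a generative model changes only `extract_intent`; the rest of the app keeps its ordinary, testable query logic.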
However, this approach introduces another challenge. Users often don’t know what to ask or request before they see what options are available. This is one reason why chatbots or natural language interfaces aren’t always effective. Users can struggle to interact with the interface because they aren't aware of the system’s capabilities or what questions they can ask.
To address this confusion, AI can offer a guided conversation flow. It could start with simple questions or options, gently steering the user toward their desired outcome. For instance, it could ask, "What kind of food are you looking for?" to help narrow down their request. Additionally, rather than using a purely text-based interface, combining visual elements with the conversation can be beneficial. When the user makes a request in natural language, the AI could display images or brief information about the relevant options, helping the user make more informed choices.
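A guided flow like this is essentially slot filling: when the request is underspecified, the system asks a narrowing question rather than failing. A minimal sketch, with slot names and prompts invented for illustration:

```python
# Sketch of a guided conversation: ask the next narrowing question until
# all slots are filled, then act. Slot names and wording are illustrative.

QUESTIONS = {
    "cuisine": "What kind of food are you looking for?",
    "spice": "How spicy would you like it?",
}

def next_prompt(filled_slots: dict) -> str:
    """Return the next guiding question, or a confirmation once complete."""
    for slot, question in QUESTIONS.items():
        if slot not in filled_slots:
            return question
    return f"Searching for {filled_slots['spice']} {filled_slots['cuisine']} places..."

print(next_prompt({}))                                        # opens with the cuisine question
print(next_prompt({"cuisine": "chicken"}))                    # then narrows by spice level
print(next_prompt({"cuisine": "chicken", "spice": "spicy"}))  # all slots filled: execute
```

In a visual-plus-conversational UI, each prompt could be rendered alongside tappable option cards, so the user sees what is available instead of having to guess what to type.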
By integrating Generative AI, we can maintain the strengths of visual UIs while improving the efficiency of the execution phase. Instead of complex menus or buttons, conversational interfaces or voice commands that understand user intent can be adopted. Visual elements can be used to clearly showcase available options, aiding the user in navigating their choices effortlessly. This hybrid approach could revolutionize mobile UI/UX, making it more intuitive and effective.