Why 'Googling' as a Prompt Doesn’t Work – And What to Do Instead
[Note: the opinions shared here are my own]
Professionals across industries are increasingly integrating AI chatbots (like ChatGPT, Bard, etc.) into their workflows. However, many approach these tools with habits formed by decades of Google use – short keyword queries, an expectation of instant answers, and minimal iteration. This report examines the barriers professionals face in learning effective AI prompting, focusing on how ingrained search engine behaviors impede optimal use of AI. It draws on recent studies (last 9 months) and user research to highlight cognitive/behavioral factors, differences between search and AI prompting, the learning curve involved, and design/UX implications. The goal is to identify key friction points and recommend how AI products can ease users’ transition from “Googling” to crafting structured prompts.
1. Cognitive and Behavioral Factors
Ingrained Search Habits: After 20+ years of “googling,” using a search engine has become second nature for most professionals. As one report quips, “to Google is ingrained as our go-to action when we need to find something” (BrandWell). This habitual reliance on keyword searches means users tend to prompt AI in the same way – entering a terse query as if the AI were a search box. These deeply ingrained behaviors are hard to break; as long as the Google-style habit sticks, users may struggle to adopt new prompting strategies (BrandWell). In practice, many default to familiar patterns like using a few keywords or phrasing requests as they would to a search engine, expecting the AI to figure out the rest. This carryover of habits often leads to suboptimal AI outputs or misunderstandings, because a vague prompt to an AI can produce irrelevant or overly general responses (Dan Thomas).
Mental Shortcuts and Biases: Cognitive biases also play a role. For instance, the availability heuristic can cause users to stick with the query style that comes easiest to mind (often the Google-like phrasing they’ve used thousands of times before). This “anchoring” on known search strategies makes it harder to experiment with more elaborate or structured prompts. Users may also exhibit confirmation bias – phrasing prompts in ways that presuppose the answers they expect, similar to how they might cherry-pick search terms to get a desired result. Moreover, people tend to anthropomorphize AI chatbots, expecting them to “understand” naturally phrased or incomplete questions as a human would (Prompt Learning). This can lead to overconfidence in minimal prompts. A recent overview on human-AI interaction notes that users often presume an AI agent will interpret their query correctly without much clarification, which “leads to frustration if the AI misinterprets the prompt” (Prompt Learning). In other words, professionals may rely on mental shortcuts (“the AI probably knows what I mean”) rather than applying the rigor needed for precise instructions.
Instant-Result Expectations: Another behavioral factor is the expectation of immediacy. Web search has trained users to expect near-instant answers or at least a quick list of results. This preference for speed can clash with the more iterative approach effective AI prompting often requires. Crafting a good prompt sometimes means providing context, specifying format, or doing follow-up refinements – steps that take time. Many professionals, even tech-savvy ones, initially treat an AI query as a one-and-done affair. If the first answer isn’t useful, they may become impatient or conclude the AI is faulty, rather than adjusting the prompt. The culture of “fast answers” cultivates a low tolerance for slow, step-by-step clarification. In fact, a practical guide on iterative prompting emphasizes that while it can yield deeper insights, “iterative prompting can be a slow process,” requiring the user to spend time refining each query (A Guide to Learn How to Use AI Better). Busy professionals under deadline may revert to shallow, Google-like queries because they feel they don’t have time to iterate. Ironically, this often results in more time wasted on unsatisfactory outputs. There is also evidence of “metacognitive laziness” when using AI tools – i.e. users offloading thinking to the AI and not engaging in self-reflection on how to improve their query (SoLAR webinar). If an AI’s first answer seems okay, users might not probe further, reinforcing any poor prompting habit. In summary, the mental model carried over from search (quick query -> immediate answer) can inhibit the careful, conversational probing that complex AI tasks sometimes need.
2. Comparing Search vs. AI Prompting
Query Structure and Effort: Prompting an AI assistant is a qualitatively different interaction from typing into Google. Traditional search queries are often very short and keyword-driven (the average Google query is only a few words). By contrast, effective AI prompts tend to be longer, more descriptive, and written in natural language. Data bear this out: in one analysis of 80 million interactions, user prompts to ChatGPT without search were significantly longer – averaging ~23 words vs. ~4 words for typical web search queries (Investigating ChatGPT Search). In other words, 70% of ChatGPT queries did not resemble the terse patterns of traditional search (Investigating ChatGPT Search). This means that users can and often do input more detailed requests to AI. However, professionals new to AI may not realize this difference; they might start with a short, Google-like query and receive a generic answer, not realizing the model could handle (and benefit from) more context. Crafting a good AI prompt requires a bit more effort up front – specifying your intent, any necessary context, and the desired format or criteria for the answer. Google searches rarely require this level of detail from the user, since the user can refine results by clicking different links. With AI, the onus shifts to the query itself. Users now have to “front-load” the request with details that search engines would typically let them refine after seeing results. For example, asking Google “project management AI tools” will yield a list of websites to explore, but asking an AI the same thing might need clarification like “List five AI tools that project managers can use, with a brief description of each” to get a meaningful, concise answer.
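To make the front-loading concrete, here is a minimal sketch of the two styles side by side, assuming the OpenAI Python SDK (openai>=1.0); the model name and prompt wording are illustrative, not drawn from the studies cited above.

```python
from openai import OpenAI  # assumes the openai>=1.0 Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Search-style habit: a terse, keyword-like query with no context.
terse_prompt = "project management AI tools"

# Structured prompt: intent, scope, and output format are front-loaded.
structured_prompt = (
    "List five AI tools that project managers can use. For each, give one "
    "sentence on what it does and one sentence on when a small software team "
    "would choose it. Format the answer as a numbered list."
)

for prompt in (terse_prompt, structured_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice; substitute your own
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content, "\n---")
```

In practice the terse query tends to come back as a generic list, while the structured version arrives closer to ready-to-use, simply because the intent and format were stated up front.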
Information Retrieval vs. Synthesis: The fundamental behavior of a search engine versus a large language model also differs. Google (or Bing, etc.) retrieves existing information – it shows you documents or snippets that match your keywords, and you as the user do the work of selecting, reading, and synthesizing the information. An AI like ChatGPT, on the other hand, directly generates an answer by synthesizing information (based on its training data or retrieval) into a coherent response. This can be a double-edged sword. On one hand, AI models can provide a neatly packaged answer or explanation, saving the user from digging through multiple sources. On the other, this can lull users into treating the AI’s response as authoritative or comprehensive when it may not be. With search engines, users expect to scan multiple results and perhaps piece together an answer. With an AI, users often expect a single, fluent answer – and they may assume the AI has effectively “done the research for them.” This leads to higher expectations that the AI will interpret even vague queries correctly and produce something useful. As an HCI study notes, ChatGPT’s conversational interface encourages users to treat it more like an expert assistant rather than a search tool (BrandWell). Users can ask follow-up questions, get clarifications, etc., in a back-and-forth dialogue. By contrast, Google is a one-shot transaction – you enter a query, get a list of links, and if it’s not what you want, you try another query. These different paradigms mean user expectations diverge: AI prompting feels more like instructing a junior colleague, whereas search is querying a database. Indeed, interface design plays a role: Google presents a list of results that the user must navigate through, while ChatGPT presents a single answer in a conversational format (BrandWell). Many users consequently assume the AI “understands” the question’s intent in context, whereas they know a search engine will only match keywords. This explains why users often expect AI models to handle vague or incomplete prompts better than a search engine would. They personify the AI as if it has reasoning: for example, a professional might input “Explain the new tax law changes for my project” without specifying what project or context, assuming the AI will infer what’s needed – something they’d never ask of Google without more keywords. Research on the psychology of AI interactions confirms this pattern: people tend to use more natural, open-ended language with AI, expecting the system to fill in gaps intelligently (Prompt Learning). While advanced models are quite good with context, they are not truly mind-readers – a vague query can yield a vague answer. The difference is the AI will always attempt an answer (no “no results found” pages), which can give a false sense of understanding. Users might not immediately realize their prompt was ambiguous because the AI still responded with something.
User Expectations and Biases: There’s also a contrast in how errors or uncertainty are handled. With search, if the results are poor, users blame their query or the search engine and simply try new keywords. With AI, if the answer is wrong or nonsensical, users might not know whether the fault lies with the prompt, the model’s knowledge, or the model’s reasoning. Many users expect AI to be smarter – sometimes too much so. They may assume the AI knows exactly what they meant, and if the answer is off, they get frustrated (“Why didn’t it get what I was asking?”). Part of this is due to what one source calls “inappropriate expectations” about LLMs (Task Supportive and Personalized Human-Large Language Model Interaction). Because the AI speaks in fluent sentences, users may overestimate its understanding. A recent study on information-seeking with ChatGPT noted that lack of user familiarity with a topic, high task complexity, and misguided expectations about the AI often lead to misconceptions about how the LLM will respond (Task Supportive and Personalized Human-Large Language Model Interaction). For example, a user might expect the AI to always give a definitive answer like a human expert, and not realize it might need more specific instructions or could be uncertain. These expectations contrast with search engines, where users don’t expect Google to “figure out” the perfect answer on the first try – users anticipate doing some work (like refining queries or reading sources). In summary, professionals often approach AI with a mix of search-engine habits and human-like expectations, an uneasy combination. They assume the AI can interpret broad, conversational prompts (unlike Google), yet they may still phrase those prompts as if entering a search query. This mismatch can result in frustration and misunderstandings if not addressed through experience or guidance.
3. Learning Curve and Adoption Barriers
Challenges for Tech-Savvy Professionals: It’s tempting to think that digitally literate professionals would take to AI prompting easily, but evidence suggests even experienced users face a learning curve. The core difficulty is that effective prompting is both an art and a science, blending skills that professionals might not have needed before. According to an action report from late 2024, “successful prompting demands both a technical knowledge of AI’s language-processing capabilities and the creativity to effectively communicate complicated ideas.” (mbaresearch.org) In other words, users must understand a bit about how the AI “thinks” and be able to phrase queries in a precise yet imaginative way – a combination that can be non-intuitive. Many skilled workers are initially unaware that prompt crafting is a skill at all. They might assume you can ask an AI a question in plain English and get a great answer, full stop. This assumption is reinforced by how user-friendly these tools are marketed to be (e.g. “no coding required”). While it’s true no specialized degree is needed to use ChatGPT, there is still a tacit skill set involved in getting high-quality results (mbaresearch.org). For example, a project manager might ask, “Give me a project plan for initiative X,” and receive a very generic output. Without guidance, they may not realize that adding details about budget constraints, team roles, timeline, etc., would dramatically improve the response. They must learn through trial and error that the prompt is often the “program” – garbage in, garbage out. This learning curve can be steep for those who are used to tools (like search engines or software) that don’t require such nuanced input.
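As a rough illustration of the prompt-as-program point, the sketch below builds the project-plan request as a template with the constraints filled in; the helper name and fields are hypothetical, chosen only to mirror the budget, roles, and timeline details mentioned above.

```python
def build_project_plan_prompt(
    initiative: str,
    budget: str,
    team_roles: list[str],
    timeline: str,
    deliverable_format: str = "a phased plan with milestones in a table",
) -> str:
    """Assemble a project-plan prompt that front-loads the constraints a
    generic 'give me a project plan' request leaves out."""
    roles = ", ".join(team_roles)
    return (
        f"Draft a project plan for: {initiative}.\n"
        f"Budget ceiling: {budget}. Team roles available: {roles}.\n"
        f"Timeline: {timeline}.\n"
        f"Present the result as {deliverable_format}, "
        "and flag any assumptions you had to make."
    )

print(build_project_plan_prompt(
    initiative="Initiative X: migrate reporting to a self-serve dashboard",
    budget="$40k",
    team_roles=["project manager", "data engineer", "analyst"],
    timeline="one quarter",
))
```

The point of the template is not the exact wording but the habit: every field the user would otherwise leave implicit becomes an explicit part of the instruction.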
Lack of Feedback Reinforcing Poor Habits: One barrier to learning prompt skills is the nature of feedback from AI systems. When you use a search engine poorly (e.g. ill-chosen keywords), the feedback is immediate – you get irrelevant results or none at all, signaling you to try a different approach. With AI chatbots, even a poorly constructed prompt will return something (often a confidently written answer). The quality might be low, but it’s not always obvious to the user what went wrong. AI responses don’t usually come with an explanation like, “your question was too broad, therefore I gave a generic answer.” This lack of explicit feedback can reinforce bad prompting habits. Users might accept mediocre answers or blame the AI (“it just isn’t good at this topic”) rather than recognize that a tweak in their query could greatly improve the output. In essence, the AI’s willingness to attempt an answer for any prompt means users don’t get the same clear signal to refine their query as they do with search results. This was highlighted in a user study where participants had difficulty formulating good queries and often didn’t realize when their prompt lacked clarity (Task Supportive and Personalized Human-Large Language Model Interaction). The study noted that “cognitive barriers and biased perceptions” (e.g. misjudging what info the AI needs) impeded task completion (Task Supportive and Personalized Human-Large Language Model Interaction). Only with guidance did users start to improve their prompts.
Another factor is immediate gratification. If the first AI answer looks passable, users may move on without further prompting, missing an opportunity to get a better result. This can reinforce minimal-effort prompting. Over-reliance on AI can also set in: if users treat the AI as infallible, they won’t push it with critical follow-ups or experiment with re-prompts, leading to a plateau in their learning (Iterative Prompting in Research). On the flip side, if the AI produces an obviously wrong answer once, some users might falsely conclude it’s generally untrustworthy for that task and give up on using it, rather than learning how to steer it. Both scenarios – blind trust and quick abandonment – stem from not knowing how to interpret AI outputs and adjust one’s approach. Developing a good mental model of the AI (its limits and how to guide it) takes time and exposure.
Need for Training and Literacy: Given these challenges, many organizations are recognizing that prompt engineering is a teachable skill – and that professionals won’t magically acquire it without support. Industry surveys confirm a trend toward formal training in AI prompting. A Forbes-backed estimate in late 2023 suggested “60% of workers will receive training on prompt engineering” in 2024 (Is AI Prompt Engineering a Must-Have Training Need in 2024?). For many managers or employees only recently exposed to generative AI, this is a “head-spinning revelation” (Is AI Prompt Engineering a Must-Have Training Need in 2024?) – it underscores that using AI effectively isn’t as plug-and-play as they assumed. Trainers note that while writing a prompt sounds simple, there is nuance: “there really is an art to creating good prompts… A prompt needs to set appropriate guidelines and guardrails to ensure the user gets the output they expect.” (Is AI Prompt Engineering a Must-Have Training Need in 2024?) This means teaching users to include essential context (who the audience is, what format is needed, what to exclude, etc.) and to avoid overly generic asks (Is AI Prompt Engineering a Must-Have Training Need in 2024?). Professionals also may need to unlearn some habits: for example, instead of repeatedly asking an AI to “redo” a task hoping for a better outcome (as one might re-run a search), it’s more effective to refine the instructions – a concept that isn’t obvious without guidance. Demographically, some have speculated that younger users (who grew up with chat interfaces and AI in products) might adapt faster than older professionals. There is modest evidence of differences in usage patterns: one analysis found ChatGPT’s user base skews younger (more students and early-career users) compared to Google’s broad demographic (Investigating ChatGPT Search). However, struggling with prompt formulation is common across ages and expertise levels. For example, even software developers (an ostensibly tech-savvy group) have noted difficulty in phrasing queries to coding assistants to get desired outputs, which is why communities share “prompt hacks” and examples. The key barrier is less about age or job and more about mindset – being willing to experiment, iterate, and learn from failures, which is not how traditional search is used.
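The “refine the instructions rather than redo” habit can be seen in a short sketch, assuming the OpenAI Python SDK; the wording of the refinement turn (audience, length, format, exclusions) is illustrative, not taken from any of the training material cited above.

```python
from openai import OpenAI

client = OpenAI()
model = "gpt-4o-mini"  # illustrative model choice

# First attempt: a generic ask, likely to get a generic draft back.
messages = [{"role": "user", "content": "Write an update email about the Q3 rollout."}]
draft = client.chat.completions.create(model=model, messages=messages)
messages.append({"role": "assistant", "content": draft.choices[0].message.content})

# Refine instead of redo: keep the conversation and add the missing context
# (audience, format, exclusions) as guardrails for a revision.
messages.append({"role": "user", "content": (
    "Revise that email for a non-technical executive audience, keep it under "
    "150 words, use three bullet points for status, and do not mention vendor names."
)})
revised = client.chat.completions.create(model=model, messages=messages)
print(revised.choices[0].message.content)
```

The second turn carries the guardrails that a simple “regenerate” click never would, which is exactly the habit trainers are trying to instill.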
In summary, the learning curve involves overcoming initial misconceptions (e.g. “the AI will do it all for me”), gaining new skills in communicating with the AI, and adjusting one’s workflow to include iterative refinement. Without explicit training or at least deliberate practice, professionals can remain stuck at a superficial usage level, failing to unlock the AI’s full potential for their work.
4. Design and UX Implications
Interface Influence on User Behavior: The design of AI systems can either exacerbate or alleviate these prompting challenges. Early chatbot interfaces largely mimicked a messaging app – a blank chat box waiting for user input – which provides very little guidance. Users often approached that blank box as they would a search bar, since that was their closest point of reference. Recognizing this, some AI interfaces have started to introduce features to shape user behavior. For instance, Bing’s AI chat (and others like Google’s Bard) prompt the user with example questions or a conversational opening, subtly indicating that a dialogue is expected rather than a one-shot query. This kind of contextual UX cue can nudge users away from Google-like habits. If the interface demonstrates, say, a multi-turn Q&A example (“User asks X… AI responds… User follows up with Y…”), users learn by example that they too can ask follow-ups or specify details. In short, the UI/UX can signal the user to engage in a more iterative, exploratory interaction, as opposed to the transactional feel of a search engine.
Moreover, AI chat systems now often provide suggested follow-up questions after an answer. These serve a dual purpose: they add convenience, and they educate users on how to drill deeper. For example, after an AI gives a general answer, it might show prompts like “Ask for a summary” or “How does this apply to X scenario?”. Such features gently push users to refine or continue the inquiry, addressing the earlier noted reluctance to iterate. A study in late 2024 introduced an experimental ChatGPT-style interface with additional supportive functions – namely “prompt suggestions” and “conversation explanations” – and found these helped users “better refine prompts” and manage their expectations (Task Supportive and Personalized Human-Large Language Model Interaction). The prompt suggestion feature would generate potential follow-up or revised queries for the user, essentially coaching them on how to ask for what they needed. The conversation explanation feature would clarify what the AI understood from the user’s input, helping identify misunderstandings. Participants in the study who had access to these supports were more successful in their tasks, as the supports “reduced cognitive load” and increased engagement by guiding the user through the AI interaction (Task Supportive and Personalized Human-Large Language Model Interaction).
This is a clear indication that UX design can play a major role in bridging the gap for new AI users. Instead of leaving professionals to figure out prompting on their own, the system can be designed to tutor them in real-time.
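A prompt-suggestion affordance of the kind described in the study could be wired up roughly as follows; this is a sketch assuming the OpenAI Python SDK, and the function name and instruction text are my own, not the study’s implementation.

```python
from openai import OpenAI

client = OpenAI()

def suggest_follow_ups(question: str, answer: str, n: int = 3) -> list[str]:
    """Ask the model to propose follow-up prompts that would refine or
    deepen the user's last question (a 'prompt suggestion' affordance)."""
    meta_prompt = (
        f"A user asked: {question!r}\n"
        f"The assistant answered: {answer!r}\n"
        f"Suggest {n} short follow-up prompts the user could send next to get "
        "a more specific or better-sourced answer. Return one per line."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": meta_prompt}],
    )
    lines = response.choices[0].message.content.splitlines()
    return [line.lstrip("-*0123456789. ").strip() for line in lines if line.strip()]
```

Displayed as clickable chips under an answer, suggestions like these quietly demonstrate what a sharper next question looks like.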
Design Interventions to Ease Transition: There are several concrete design interventions that researchers and developers are exploring to help users transition from search-minded querying to effective prompting; several of these are described below.
One recent library and information science study went so far as to develop a full framework of user tactics for conversational search AI. The researchers identified 45 strategies adapted from traditional search to help users define needs, refine queries, and evaluate AI answers in a ChatGPT context (Marcelo Tibau). They categorized these into seven groups and framed it as a model for “searching as learning” with conversational AI (Marcelo Tibau). The implication is that users can (and should) engage in a learning-oriented process: start with a hypothesis or need, ask the AI, evaluate the answer’s reliability, possibly ask the AI to provide sources or justification (to verify), and refine the question further. Designing AI systems to facilitate this process is key. For example, after the AI answers, providing an easy way to say “verify this” or “show sources” encourages the user to not just take the answer at face value. Similarly, allowing users to highlight part of the AI’s answer and ask, “Where did this come from?” could bridge the gap between search and AI by reintroducing source-based trust. Some new AI search hybrids (like Perplexity.ai or Brave’s Summarizer) do incorporate citations in responses to help users trust and verify information.
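A “verify this” or “where did this come from?” action could translate into one extra conversational turn, roughly as sketched below; the OpenAI Python SDK is assumed, the phrasing of the verification request is illustrative, and this is not how Perplexity or Brave implement their citations.

```python
from openai import OpenAI

client = OpenAI()

def verify_claim(conversation: list[dict], highlighted_text: str) -> str:
    """Append a verification turn asking the model to justify a highlighted
    passage from its previous answer, or to say plainly if it cannot."""
    follow_up = (
        f'Regarding this part of your last answer: "{highlighted_text}"\n'
        "Where does this come from? List the sources or reasoning behind it, "
        "and state clearly if you are uncertain or cannot verify it."
    )
    messages = conversation + [{"role": "user", "content": follow_up}]
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=messages,
    )
    return response.choices[0].message.content
```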
Guiding without Burdening: A crucial consideration is making these assistive features lightweight so they help users without overwhelming or annoying them. Professionals generally want to get their task done, not play twenty questions with an AI. So the design balance is to gently steer user behavior while still delivering value quickly. The earlier-mentioned study (Wang et al. 2024) showed that users benefited from the prompt aids without feeling hampered, as it “increased user engagement” and improved task success (Task Supportive and Personalized Human-Large Language Model Interaction). This suggests that, when done right, users appreciate a bit of hand-holding from the interface. It reduces frustration and builds confidence because the system is effectively saying “we’re in this together – I (the AI) will help you ask me better questions.” Over time, as users internalize these practices, the supports could be dialed back or made optional.
Finally, it’s worth noting the feasibility of AI systems automatically enhancing prompts. Researchers are exploring automated prompt optimization, where the system might rewrite or append to a user’s prompt under the hood to improve results (with the user’s consent). For instance, if a prompt is detected as too short, the AI could add some clarifying language before querying the main model. Google’s “Prompting Essentials” course even teaches the idea of meta-prompts – asking the AI to suggest a better prompt (Learn AI Prompting with Google Prompting Essentials) (Beyond the prompt). This kind of feature could be integrated such that a user’s vague input leads the AI to generate a more detailed version and then execute it, showing the user both the original and improved prompt. Such designs blur the line between user input and system initiative, but they hold promise for easing users into effective prompting with minimal upfront learning. The system does some of the heavy lifting in prompt formulation, effectively training users by example.
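A meta-prompt flow of this kind might look roughly like the following sketch, assuming the OpenAI Python SDK; the rewrite instruction is illustrative and not taken from Google’s course.

```python
from openai import OpenAI

client = OpenAI()
model = "gpt-4o-mini"  # illustrative model choice

def expand_and_run(terse_prompt: str) -> tuple[str, str]:
    """Rewrite a short prompt into a more detailed one, then execute it.
    Returns (improved_prompt, answer) so a UI could show the user both."""
    rewrite_request = (
        "Rewrite the following request as a detailed prompt: state the likely "
        "intent, add the context a good answer would need, and specify the "
        "desired output format. Return only the rewritten prompt.\n\n"
        f"Request: {terse_prompt}"
    )
    improved = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": rewrite_request}],
    ).choices[0].message.content

    answer = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": improved}],
    ).choices[0].message.content
    return improved, answer
```

Showing the user the improved prompt alongside the answer is what turns the feature from a hidden optimization into a teaching moment.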
Recommendations: Based on the above, here are a few targeted recommendations for AI product design to help professionals transition from search-style querying to structured prompting:
1. Onboard with examples: show sample multi-turn conversations and well-structured prompts at first use, so the blank chat box is not read as a search bar.
2. Suggest refinements: surface follow-up prompt suggestions and brief explanations of what the AI understood, so users learn to iterate rather than simply re-ask.
3. Make verification easy: offer one-click “show sources” or “verify this” actions on answers to reintroduce source-based trust.
4. Assist prompt improvement: with the user’s consent, rewrite terse inputs into fuller prompts and show both versions, training users by example.
5. Keep the guidance lightweight: these aids should steer behavior without slowing down or overwhelming busy professionals.
By implementing such design considerations, AI tool providers can significantly lower the barrier to entry for effective prompting. The transition from search to AI assistant need not be jarring; with thoughtful UX, users will gradually adopt better prompting practices almost without realizing it, simply because the system encourages and rewards those behaviors. This symbiosis of user training and tool design is critical. As one industry brief put it, generative AI’s promise is huge but “the concept of a generative AI prompt sounds simple, yet requires an art to truly master” (Is AI Prompt Engineering a Must-Have Training Need in 2024?) – we should leverage design to make mastering that art as straightforward as possible for busy professionals.
Conclusion
Professionals face a multi-faceted challenge in learning to prompt AI effectively. Cognitively, they are conditioned by years of search engine use – leading to short, imprecise queries, over-reliance on the AI’s guesswork, and frustration when instant answers aren’t forthcoming. Behaviorally, they may be reluctant to iterate or provide extensive details, due to biases and expectations of immediacy. Technologically, the difference between retrieving information (search) and generating information (AI) creates a gap in mental models that users must bridge. Over the last nine months, studies have pinpointed key friction points: users often misunderstand what an AI needs to know, they default to “Googleese,” and they frequently underestimate the importance of refining prompts (Task Supportive and Personalized Human-Large Language Model Interaction). On the positive side, research and practical experiments also show ways to alleviate these issues – through better interface design, user education, and gradual habit-building. Ingrained habits don’t change overnight, but with AI becoming ubiquitous, users will naturally gain more experience and confidence in adjusting their querying style. The insights and recommendations compiled here stress that the solution is twofold: users need to adapt to the AI, but AI tools should also adapt to users. By meeting in the middle – via intuitive UX that guides users, and users making a conscious effort to apply structured, iterative prompting – the power of generative AI can be unlocked without the current frustrations. In essence, the journey from Google-style searching to effective AI prompting is a learning process, one that can be significantly accelerated with the right support. As companies and researchers implement these changes, we can expect prompting AI to become as routine and natural to professionals in the coming years as web search has been in the past two decades.
Sources:
1. Marcelo Tibau et al., “ChatGPT for Chatting and Searching: Repurposing Search Behavior,” Libr. & Info. Science Research, vol. 46, no. 4, 2024 – Identifies 45 adapted search tactics for conversational AI, highlighting need for new user strategies.
2. Wang, Liu et al. (CHIIR ’24 user study), “Task Supportive and Personalized Human-LLM Interaction,” 2024 – Found that added UI features (prompt suggestions, explanations) helped users overcome cognitive barriers, manage expectations, and refine prompts.
3. Semrush Blog (Nov 2024) – Analysis of 80M ChatGPT interactions: showed ChatGPT prompts average ~23 words vs 4.2 words for search queries; 70% of ChatGPT queries are unlike traditional search queries. Also notes that younger users and students are overrepresented among ChatGPT users.
4. Prompt Learning (2024) – “Human-AI Interaction and the Future of Prompting” – Discusses psychological aspects: users anthropomorphize AI and expect human-like understanding, which can lead to frustration with vague prompts. Emphasizes iterative feedback loops in prompting.
5. Dan Thomas, “Challenges & Risks of Prompt Engineering” (Dec 2024) – Highlights difficulties like ambiguity: vague prompts yield poor results; context must be provided by the user. Notes that the lack of standardization across AI models means users must experiment, which takes time and skill.
6. Brandwell AI Blog (Oct 2024) – “ChatGPT Search vs. Google” – Points out the entrenched habit: “Most people instinctively turn to Google… ‘to Google’ is ingrained.” Also notes trust issues: users are cautious with AI and Google enjoys high trust, affecting willingness to rely on AI outputs. Provides side-by-side comparison of ChatGPT’s conversational UI vs Google’s list-based UI.
7. Lin Grensing-Pophal, HR Daily Advisor (Jan 2024) – Discusses prompt engineering as a new training need: ~60% of workers likely to get prompt training. Stresses that crafting good prompts is an “art” with guidelines; giving proper context in prompts is essential, otherwise it’s a “waste of time” to just repeat vague requests.
8. MBA Research Action Brief (Dec 2024), “AI at Work: Prompt Engineering” – Reports that businesses seek employees skilled in prompting. Describes prompt engineering as blending technical and creative skills; “an art and science…demands technical knowledge of AI and creativity in communication". Lists advanced prompting techniques (few-shot, chaining, etc.) as emerging competencies.
9. Edureka Blog (2024), “Iterative Prompting in Research” – Notes that iterative prompting, while powerful, is time-intensive and depends on user expertise in both domain and AI tooling. Warns that users without sufficient knowledge may struggle to guide the AI, underscoring the need for skill development or tool support.
10. Fan et al., pre-print (SoLAR Webinar June 2024) – Studied generative AI in education; introduced concept of “Metacognitive laziness” where students over-rely on AI and don’t engage in deeper learning. By analogy, professionals might over-rely on AI outputs without refining prompts, highlighting the importance of maintaining critical engagement (e.g., verifying and iterating) even when using AI.
Transparency: ChatGPT-4o was used to assist with this article.