11. Find The Key to Step-Changing Your NPS

Background

Using your NPS tracking to help you “do things better” may give you incremental gains in NPS, but to make a substantial difference you need to “do better things”.

And if you are relying solely on NPS as your key measure of the affinity your brand has amongst your actual and potential buyers, then you are probably missing some significant insights. That’s because NPS is not a measure of Brand Affinity; it is a measure of Customer Satisfaction.

So using NPS as your ‘Holy Grail’ indicator of marketing and/or brand strength is very unlikely to ever give you the insight you need to step-change your future NPS performance.

Even when you use the standard follow-up question “And why did you give that rating?” the answers you get will typically only provide some tactical steps you can take to maintain or increment your existing NPS performance; they will rarely provide the breakthrough insight that enables you to leapfrog the competition and drive up your market share.

What’s worse, NPS is not even a reliable indicator of future business performance.

Consider some common technical issues.

Common Problems with NPS

1. NPS has no Direct Correlation with Company Performance

Although NPS is now used as a key metric by almost all companies interested in managing and maximising the standard of service experienced by their customers, it is not necessarily an indicator of company success. There are many potential pitfalls.

On the plus-side, it is quick and easy to collect, there are recognised industry norms for calibrating and comparing the results, and (some) feedback on ‘what’s working and what’s not?’ can be gathered via a simple follow-up question such as “And what makes you say that?”

But it was never really intended to be used as a measure of a single customer interaction. Its original purpose was to evaluate customer loyalty to a brand or company, not their satisfaction with an individual transaction / experience. Using it that way is far too tactical, even though it has now become the norm. The information gained from it is typically pretty thin.

Having a measure of customer loyalty is an effective way to determine the likelihood people will buy again, talk up the company and resist market pressure to defect to a competitor. But for many company Directors, NPS is an interesting fact to know, yet it is very hard to know what significant strategic decisions to take that will directly improve the outcome, especially when it cannot be benchmarked against competitors.

Worse still, the link between changes in NPS and company financial performance is (at best) weak and frequently quite obscure. Whilst the Board may understand and may be convinced by the principle of NPS, they find the practice of using it for decision-making far more challenging.

The concept has also attracted controversy over the years from academic and market research circles alike. Research by Keiningham, Cooil, Andreassen and Aksoy [Journal of Marketing, 2007] disputed the evidence that NPS was a useful predictor of company growth. Similarly, Hayes [LinkedIn, 2016, and numerous other articles] claimed there was no scientific evidence that the "likelihood to recommend" question was a better predictor of business growth than other customer-loyalty questions (e.g., overall satisfaction, likelihood to purchase again).

2. NPS is a Derived Metric – so very different situations can result in the same score

Whilst the concept of looking at the percentage of people who are completely satisfied with the company compared to the percentage who are dissatisfied has an inherent appeal as a simple measure of how well the company is doing its job, the NPS value itself does not immediately tell you where the problem is.

As you will probably know, but I will repeat it just in case, to calculate your NPS rating people are asked “On a scale of 0 to 10, where 0 means ‘not at all likely’ and 10 means ‘very likely’, how likely would you be to recommend this company?”

Those giving a score of 9 or 10 are classed as “Promoters”, those giving a score of 6 or less as “Detractors”, and those giving 7 or 8 as “Passives”.

The NPS formula is the difference between the percentage who are Promoters and the percentage who are Detractors, with the Passives being ignored.

Now, consider this set of figures:

Table 1: Comparing Two NPS Outcomes

         Promoters   Passives   Detractors   NPS
Case 1      20%        80%          0%        20
Case 2      55%        10%         35%        20

As you can see in Table 1, in both cases the company has achieved an NPS rating of 20 yet the reasons for doing so are wildly different.

Arguably, the company in Case 1 is probably doing everything right from a performance management perspective. It has no dissatisfied customers, and with 20% being ‘delighted’ whilst the rest are satisfied enough with the service, it is likely not to be over-spending on customer service by seeking to delight everyone.

In Case 2, by contrast, there is a serious problem as 35% of customers are unhappy.

The future business performance of the two companies would be expected to be greatly different, with the company in Case 1 prospering whilst the Company in Case 2 wilts away.

Exactly the same NPS but very different situations – one of which should be sounding alarms.
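The two cases can be verified with a minimal sketch of the standard NPS calculation. This is pure Python; the rating lists are constructed simply to mirror the Case 1 and Case 2 mixes:

```python
def nps(ratings):
    """Net Promoter Score for a list of 0-10 ratings.

    Promoters score 9-10, Detractors 0-6; Passives (7-8) count
    towards the base but are otherwise ignored.
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Case 1: 20% Promoters, 80% Passives, 0% Detractors.
case_1 = [10] * 20 + [7] * 80
# Case 2: 55% Promoters, 10% Passives, 35% Detractors.
case_2 = [9] * 55 + [8] * 10 + [3] * 35

print(nps(case_1), nps(case_2))  # 20.0 20.0 - same score, very different mix
```

The derived score collapses three percentages into one number, which is exactly why two such different customer bases can look identical.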

Now think: are you comparing your NPS ratings across brands, stores, segments, regions, countries or any other business splits? If so, is what you are seeing really helpful?

It’s no wonder that Keiningham et al, Hayes, and many others have struggled to find a direct link between NPS and company performance – any relationship is complex and hard to see.

3. NPS is a Myopic Metric – By itself it tells you nothing about causality

Cases 1 and 2 above are a perfect example – the situations driving each outcome cannot be seen just from the NPS rating alone. Context and detail are always needed. Uncovering the reasons behind the outcomes requires additional questions. Certainly at least the “And why did you give us that score?” question, and ideally also something that looks at the various aspects of people’s general experience with the brand as well.

Limiting yourself to just looking at your NPS rating is, in fact, very limiting indeed.

More concerning though is that the scores you see may not reflect reality: NPS ratings can sometimes be manipulated or even “bought”.

There are car companies who, when they service your vehicle, tell you that you might be contacted to give a rating on how well the service was done, and the service agent asks you not to rate them at less than a 9 or 10 or they’ll face an enquiry. They may even give you a bonus for awarding them a 9 or 10 rating. A classic case of the metric being managed rather than the actual performance, aka “Campbell’s Law”: https://en.wikipedia.org/wiki/Campbell%27s_law.

And another issue is that the cause may not even be inside your company or brand.

You Need the Full Context to Understand the Outcomes

External events like economic trends (‘the feel-good factor’), competitor actions (your service may not have changed but suddenly the competitor has upped their game and now you’re lagging behind), and business discontinuities (think “pandemic”, etc.) can easily knock your scores for six without your being able to do anything about it.

If you want to manage your NPS, a knowledge of causality is key and NPS does not provide it.

4. NPS is an Over-Used Metric – it’s often used in the wrong places

These days one sees NPS ratings being quoted on every possible occasion, with little regard for whether it’s a meaningful use of the concept. It’s often just a badge.

Consider, for example, whole-company or whole-industry benchmark NPS metrics. If you are a brand owner, like Unilever or P&G, you will have many products for which you could measure an NPS rating. From a technical perspective it is possible to calculate “Unilever’s overall NPS” by averaging the scores across all its brands in a market. But loyalty to a brand is not loyalty to its owner: such a score measures the strength of the brands Unilever happens to own, not loyalty to Unilever itself. And even if it were a loyalty measure, the problems discussed above explain why that score is not a reliable guide to Unilever’s likely future business growth.

People buy specific products or services (i.e. solutions) rather than thinking about the parent company, and they generally recommend specific products or services, not the companies providing them. When was the last time you recommended “Unilever” as the best provider of washing powder?

People do recommend parent companies, though, in situations where those companies are providing solutions to a range of differing but related needs – like in retail for example.

But comparing a derived NPS for Unilever with an NPS for Sainsbury’s would not be a valid comparison. The causal drivers are so different that any comparison is meaningless.

NPS is not omnipotent.

5. NPS is an Inadequate Metric – Having just one supporting question is never enough

To manage your NPS outcome you must have an effective way to steer the direction in which it is heading. But the previous list of problems highlights another important issue: the trajectory of the measure itself is difficult both to predict and to control.

For example, from figures cited on Net-Promoter.com, in the US telecommunications industry an NPS of 11% (AT&T) counts as a good score. But is AT&T the best because of its technology, its service, its coverage, or all of these and more? We can’t say without knowing a lot more about the brand and the company.

And if you have a bad NPS, then what do you do?

In essence, to make decisions about how to manage your future NPS outcomes you always need more information than the NPS rating itself provides.

Please Sir, Can I Have Some More?

As mentioned earlier, it is normal to accompany your NPS question with a simple follow-up question like “And why did you give us that score?”. Assuming the answer is something other than “Because you asked me to give you a rating”, the verbatim responses obtained will be helpful in understanding some of the tactical, day-to-day service issues people have experienced.

The issue comes when you try to extract significant strategic meaning from those responses. Certainly AI (text mining) tools can be used to extract themes that are often displayed in “word clouds” like the one shown below.

Word Clouds: Extracting Themes from Responses to “And why did you give us that rating?”
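As a rough illustration of the theme extraction that feeds such a word cloud, a simple term-frequency count over the verbatims is often the first step. The comments and stopword list below are invented for illustration, not real survey data:

```python
from collections import Counter
import re

# Illustrative verbatims - not real survey responses.
verbatims = [
    "Delivery was late and nobody answered the phone",
    "Great prices but the delivery slot was missed again",
    "Easy website, quick delivery, fair prices",
]

STOPWORDS = {"was", "and", "the", "but", "a", "again"}

# Tally the remaining words; the counts drive the word sizes in the cloud.
term_counts = Counter(
    word
    for text in verbatims
    for word in re.findall(r"[a-z]+", text.lower())
    if word not in STOPWORDS
)

print(term_counts.most_common(2))  # [('delivery', 3), ('prices', 2)]
```

Real text-mining tools add stemming, phrase detection and sentiment, but the output is the same kind of ranked theme list.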

However, whilst text mining is undoubtedly helpful, it may only lead to tactical responses – like running extra promotions or changing an aspect of how customers are served which appears to be affecting a large enough number of customers to make it worthwhile.

To significantly step-change your NPS performance you need to do something that is more than just a random walk along the ramparts of individual customer experiences. You need a vision of where to go and a route map of how to get there. NPS alone will not give you those. The simple follow-up question is notoriously bad at picking up on systemic issues, unmet needs and competitor actions, and at revealing a clear relationship between NPS and business performance.

Where’s the issue?

As a consequence, those managing NPS have often found Senior Management lose interest in NPS performance as the ‘insights’ being presented to them are not, in their view, strategic enough. Moreover, fixing the problems that do emerge may involve many departments within an organisation having to work together, and not all of them will buy in to the importance of making the changes necessary. They have their own agendas and/or pressing issues.

The Strategic Management of NPS Performance

To know the strategic steps you should take to improve your NPS performance you need to know what data to look for, how to analyse that data, and how to supplement it with other information that gives you all the Whys and Wherefores.

Figure 1 below gives an example from 2014. It charts the changes in the market shares of the leading UK supermarkets between 2011 and 2014 against their NPS scores at the start of that period. As you can see, there is quite a strong relationship across stores.

Tesco’s NPS rating was actually negative in 2011 and had been trending down throughout that year due to declining opinions of its pricing relative to other retailers – see Figure 2.

Tesco’s problems with its NPS were not picked up on and addressed though. In 2011 they were riding high market share-wise and that was all that mattered. It was easy to dismiss the declining NPS rating as insignificant, especially when viewed in terms of its relative performance against other brands, as most brands had been trending down that year. And Morrison’s, for example, showed a gain in NPS in 2011 but its market share had still declined by 2014.

So, for Tesco, the warning light had come on but to know if the warning was significant you had to know what was driving the downward trend in NPS. That required a deeper level of analysis than you get just from the NPS rating alone, and also for senior management to be concerned enough about its performance to devote the time to study it.

Figure 1: UK Supermarket Market Share Changes 2011 – 2014 vs NPS Ratings in 2011

Obtaining that extra insight is not easy though. One of the key issues highlighted earlier is that NPS is not just driven by a company’s own customer service performance. The potential causes of the movements in NPS go well beyond that.

As Frederick Reichheld, the originator of NPS, highlighted in his second book, “The Ultimate Question 2.0”, “Willingness to Recommend” is driven by a range of factors, some emotional and some functional. The supplementary verbatim question may get answers which are too detailed and/or have been asked of too few customers to pick up on the bigger, most significant, trends. To overcome this, a more strategic way to obtain, structure and probe the responses to the supplementary question is needed.

The reason Tesco’s fall was more significant than it may have first appeared was that it was being driven by perceptions of its Value for Money – which is a strong driver of overall supermarket satisfaction. As Figure 2 shows, Tesco’s rating on that score fell almost continuously throughout 2011 and fell by more than all of its competitors. So its relative position on that metric versus all of its competitors had worsened quite significantly by year-end. You could not see the extent of that fall just by looking at its top-line NPS rating.

Figure 2: Tesco “Value for Money” Rating on YouGov Brand Index Tracker Throughout 2011

Separating the Wood from the Trees – Using RAAVE to Move Beyond NPS

If you want to really take control of your NPS and be in a position to step-change its performance, galvanise the whole business to assist you in achieving that objective, and ultimately have a noticeable and directly attributable impact on the financial performance of your business, then you need to up your game in terms of how you use your NPS tracking data.

One highly effective way to do this is to use RedRoute’s RAAVE model of the strategic drivers of Brand Affinity, because it correlates strongly with “Willingness to Recommend”.

Key question – does it work? Yes. We have many case studies and bags of data that show it. And it works because it is based on understanding consumer behaviour.

For the background on RAAVE, please see some of the earlier articles in this Newsletter series. And keep an eye out for more articles to be published in the future.

The RAAVE model summarises the 5 key motivational drivers that determine someone’s likelihood to use a brand, to continue to use it, and to recommend others to use it. These five drivers are:

- Relevancy: Does this do what I need?

- Association: Is this a brand I would be proud to say I use?

- Accessibility: How easy is it to find and use this brand?

- Value: Is the benefit I’ll get worth the price I’ll have to pay?

- Expectation: What’s the likelihood I’ll be completely satisfied?

You can keep your existing “And why did you give that score?” question but supplement it with the appropriate wording for each of the above drivers for your own brand, company, product or service. For some more thoughts on how to create that set of statements, see Newsletter 3 in this series.

The background to how these 5 key drivers came to be identified stems originally from a need to accurately predict individual customer behaviour by modelling loyalty card data for Sainsbury’s and Homebase, and then later for Carphone Warehouse, T-Mobile, Carrefour, Viking-Office Depot, and several financial services companies. These learnings were then applied for numerous other companies as well.

What we found from all that modelling was that, to describe actual choice behaviour at the individual transactional level, we needed to bring together data on the values and attitudes of the person with their circumstances at the time they are making the purchase decision.

Take selecting food for a barbecue: we weigh our personal preference on whether to eat healthily or not against our level of income, what’s available in-store at the time we get there, and so on. This coming together of what our options are and what’s important to us leads us to the decision. It is what underpins much of what has been described as “behavioural economics”.

In other words, reconciling What’s Needed? with What’s Possible? to decide on What’s Best?

Measuring your scores on each of the 5 key drivers and then taking the average gives a summary metric that can be correlated with key behavioural statistics like Share of Wallet and Willingness to Recommend. We call this overall measure the RAAVE Rating.
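As a sketch, the averaging step can be expressed in a few lines of Python. The 0–10 scale, the field names and the single-respondent dict are illustrative assumptions, not a prescribed implementation:

```python
# Driver names come from the RAAVE model described above; the scale
# and data layout here are assumed for illustration.

DRIVERS = ("relevancy", "association", "accessibility", "value", "expectation")

def raave_rating(scores):
    """Simple average of the five driver scores for one respondent."""
    return sum(scores[d] for d in DRIVERS) / len(DRIVERS)

respondent = {"relevancy": 8, "association": 6, "accessibility": 9,
              "value": 7, "expectation": 7}

print(raave_rating(respondent))  # 7.4
```

Averaging per respondent and then aggregating keeps the metric comparable with respondent-level behaviour such as Share of Wallet.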

An example of the correlation with “Willingness to Recommend” (WTR), taken from our work for a leading energy company, is shown in Figure 3 below:

Figure 3: Willingness to Recommend vs RAAVE Rating – Example from European Energy Market

Using the 5 RAAVE Drivers to Understand How to Improve Willingness to Recommend

Once you have enhanced your NPS Tracker to gather the additional data required to create a chart like the one shown in Figure 3, the next step is to use the 5 RAAVE drivers to see which dimensions you need to improve upon in order to increase your average WTR score.

A great way to begin that process is to create a web chart like the one shown in Figure 4.

Figure 4: Using the 5 RAAVE Drivers to Diagnose What Needs to be Improved

In Figure 4, Brand 1 is in the weakest position, lagging behind both of its competitors in terms of its overall RAAVE Rating (see the first spoke of the web at the top of the chart). Brand 2 is the Challenger Brand, looking to surpass the market leader, Brand 3, but it is being held back by poor perceptions of Value for Money and Expectations about its ability to deliver a satisfactory experience. Brand 3, the market leader, dominates on all of the 5 dimensions aside from Accessibility.

The strategic implications for the motivational drivers each of these brands needs to improve upon to strengthen their competitiveness are clear from the diagram.

Brand 1 needs to focus on improving Relevancy, Association and Accessibility, aiming to exceed Brand 2 on those dimensions. By contrast, Brand 2 needs to focus on improving perceptions of Value and Expectation, targeting to equal or exceed the market leader on those dimensions. Brand 3, meanwhile, needs to understand what issue is leading it to score lower on Accessibility than Brand 2. There could be an availability issue, but more likely it is a usability issue – Brand 2 has probably managed to create an innovation that makes it meet consumer needs better (for example, when Tetley introduced the round tea bag the impact on its market share was profound even though nothing else had changed!).

Moreover, the strong relationship between the RAAVE Rating and Willingness to Recommend means that you know how much you need to improve your RAAVE Rating to achieve a given increase in Willingness to Recommend. And once you know that, you can derive the implications for the change in NPS by doing the standard NPS calculation using the WTR data.

However, knowing exactly what to address to achieve these changes does still need more information than is available in the RAAVE chart, but it does at least tell you where to look. That is a major step forward compared to NPS alone.

Seeking Additional Depth

Knowing what to change begins by re-analysing the data you collected via the “And why did you give us that score?” question, looking at it this time through the 5-driver RAAVE lens.

The example below is taken from our work for a major online retailer. By looking at the verbatim comments given by those scoring 6 or less (i.e. those we describe as Detractors), we can use AI to analyse their comments and separate out the themes that arise under the 5 driver headings. These are summarised in the table.
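A hedged sketch of that bucketing step: simple keyword rules assign each detractor comment to one or more driver headings. All the keywords and comments below are invented for illustration; a real project would use a trained text classifier rather than substring matching:

```python
# Illustrative keyword rules mapping comment text to RAAVE driver themes.
DRIVER_KEYWORDS = {
    "Relevancy":     ("range", "stock", "choice"),
    "Association":   ("brand", "reputation", "advert"),
    "Accessibility": ("website", "app", "find", "navigate"),
    "Value":         ("price", "expensive", "cheaper"),
    "Expectation":   ("late", "broken", "wrong", "complaint"),
}

def classify(verbatim):
    """Return every driver whose keywords appear in the comment."""
    text = verbatim.lower()
    hits = [driver for driver, keywords in DRIVER_KEYWORDS.items()
            if any(k in text for k in keywords)]
    return hits or ["Other"]

print(classify("Order arrived late and the item was broken"))  # ['Expectation']
print(classify("Too expensive compared with other sites"))     # ['Value']
print(classify("Couldn't find what I wanted on the website"))  # ['Accessibility']
```

Tallying the hits per driver across all detractor comments produces exactly the kind of summary table the example describes.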

Note that in this real-life example, no comments were made which were classified under the heading of “Association”, but that is normal. It is less common for people to say the main reason they would not recommend a brand is that they do not like the company’s marketing or persona. They usually attribute their decision to other, more rational, factors. Emotional factors do come into the equation of course, but if there is a clear problem with the brand’s Association score then that can be probed by speaking to those who gave a low score on that dimension.

Figure 5: Re-Analysing the “Why did you give that score?” data using the 5-Driver RAAVE Lens

The table shows immediately where the main issues are and probing the specific verbatims under each of those topics reveals the practical issues people are experiencing.

Leveraging your NPS data in this way provides a strategy for improvement that links directly back to improved NPS performance by focussing on what will make a difference.

And there is no need to stop there.

Social Media Tracking data can be analysed in a similar way. The example below, from our work for Heathrow airport, shows the change in mix of social media comments that occurred when a new “Airporters” initiative was launched. The Airporters were there to assist with handling luggage from car parks and drop-off points to terminals at no additional cost to travellers. Spontaneous recommendations and comments ensued which were tracked through a standard social listening service. Those verbatims were then analysed using text mining and categorised into the 5-driver headings (plus ‘other’). The campaign impact can be seen in the resulting tracking chart shown in Figure 6.

Figure 6: Analysing Social Media Comments Using the 5-Driver RAAVE Lens

(Note: in this chart “Association” is labelled as “Brand ID” and “Expectation” is labelled as “Confidence”)

Remember, at this point we have not made any changes to the NPS tracking approach other than to suggest the inclusion of one (maximum two) additional agreement questions for each of the 5 drivers. The other information was probably already being collected anyway. Yet we now have the ability to form a well-structured strategy for managing our NPS outcomes.

Getting the Rest of the Company On-Board

Another way of thinking about the 5 drivers is to look at them from the company perspective – namely to ensure your product or service will be viewed by potential buyers as offering them the:

- Right Solution (Relevancy)

- Right Brand (Association)

- Right Effort (Accessibility)

- Right Price (Value)

- Right Way (Expectation)

When you do this, you suddenly find that it’s easier to get other departments from across the business on board and engaged in helping to drive up the NPS ratings. You are able to demonstrate clear relationships between Willingness to Recommend and Business Performance and get all departments aligned on wanting to deliver the “Right Result” for the customer. The graphic below helps pull it all together:

UK Client Study: Large International Office Supplies Company

Figure 7 below shows the Brand Affinity (RAAVE Rating) scores being achieved by the brand at the start of the 3-year campaign period, amongst its differing customer segments. As can be seen, there was a major problem with Brand Association – there was no inherent emotional attachment. This situation had arisen because for many years all marketing had been about pricing and discounts, encouraging buyers to treat all companies in the sector as totally substitutable – the only differentiation was on the level of customer service. Fortunately, the brand excelled on the other dimensions and so its overall RAAVE Rating was impacted less than it might otherwise have been. Nevertheless, the RAAVE analysis showed that there was an opportunity for the company to bolster its market position by addressing the weakness in Brand Association. Doing so could produce a major, 5-percentage-point, step-change in the RAAVE Rating, which would equate to a potentially similar increase in NPS and brand loyalty.

Figure 7: RAAVE Ratings for a Major UK Office Supplies Company, 2011

The relationship between NPS and RAAVE Rating had already been plotted using tracking data we had gathered between 2009 and 2011. This relationship is shown in Figure 8. The data showed that a 5-point increase in RAAVE Rating would correspond with a 5.7-point increase in NPS, equivalent to roughly 9% growth on the average NPS ratings seen in the past.
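Treated as a simple linear relationship, that projection can be sketched as follows. The slope comes from the figures above; the historical average NPS of 63 is an assumption chosen only so that a 5.7-point gain corresponds to roughly 9% relative growth, and is not a client figure:

```python
SLOPE = 5.7 / 5.0   # NPS points gained per point of RAAVE Rating (from Figure 8)
AVG_NPS = 63.0      # assumed historical average NPS, for illustration only

def projected_nps_gain(raave_gain):
    """Linear projection of the NPS lift from a RAAVE Rating lift."""
    return SLOPE * raave_gain

gain = projected_nps_gain(5.0)
print(round(gain, 2))                  # 5.7 points
print(round(100 * gain / AVG_NPS, 1))  # 9.0 -> roughly 9% relative growth
```

The value of fitting the relationship first is that any target NPS lift can then be translated back into a required RAAVE Rating lift.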

Figure 8: Relationship between NPS and RAAVE Rating for a Major Office Supplies Company

On a similar basis, using customer-level transactional data for over 25,000 customers, we had established that there was also a strong relationship between RAAVE Rating and Share of Wallet. This relationship is shown in Figure 9 below:

Figure 9: Share of Wallet vs RAAVE Rating, Respondent-Level Data; Office Supplies Company

There was a strong correlation between RAAVE Rating and past-12-months Share of Wallet. This relationship suggested that if the client’s RAAVE Rating could be increased by an average of 1%, then Share of Wallet and sales could be increased by c. 0.7%, which equated to £2.1m per annum and hence over £10m over 5 years. So raising the RAAVE Rating became a priority.

One interesting feature about the relationship though, was indicated by the shaded area. Our client offered Next Day delivery to any address in mainland UK, and those living in rural areas therefore tended to rely on them for their business supplies needs. However, their rural locations meant that the service they received was not always as prompt or reliable as the client had promised, leading to a higher level of dissatisfied customers in those areas.

Moreover, as this was at a time prior to Amazon entering the business supplies market in a big way, our client was effectively the only choice of vendor that they had. So, for those people the share of wallet the client obtained was much higher than it would have been if those customers had had an alternative provider they could have easily switched to. By being able to identify these vulnerable customers before they found a way to defect, it was possible to provide additional tailored incentives to compensate for the less reliable delivery times they were experiencing. Not ideal, but it did help to level the playing field to some extent.

Another key benefit obtained by having the RAAVE Rating was that it also reliably predicted each customer’s future share of wallet. It was not just a reflection of their recent experience.

Figure 10 shows the percentage of people who shopped again in the 12 months following the date on which they had given us their rating and, as can be seen, the correlation between their RAAVE Rating and whether they shopped again was extremely high.

Figure 10: Relationship between RAAVE Rating and Subsequent 12m Purchasing Behaviour

Those customers who were dissatisfied but had no alternative suppliers can be described as “Hostages”. They have no choice but to use the supplier even though they would really prefer to switch if they could. Similar behaviours have been seen at “hub airports” in the USA where customers are ‘forced’ to use the airline whose hub is at their local airport - even if they find the airline’s service is poor - because no other airline companies are able to offer the same (wide) range of destinations and departure times from that location. We know, however, that given a suitable alternative these customers would immediately switch supplier.

Addressing the Brand Association Issue

The client decided that they needed to raise their profile and generate increased emotional motivation to use their brand. It was decided that a national multi-channel media campaign would be run, including TV ads.

The client’s target audience were mostly people working in or owning small and medium-sized companies, and the campaign creative was designed to show situations they would readily identify with: the numerous occasions on which obtaining great service from our client helped its small business customers to achieve a successful business outcome (like winning a new business account or convincing the “Dragons” to support their venture). As a result, the campaign was well received by the target audience. A still from the campaign is shown below.

Impact of the Media Campaign

The slide below shows the impact of the sequence of campaign waves on the total RAAVE Rating and the specific impact on the Brand Association driver. As can be seen, the waves of activity had a noticeable impact on Brand Association and a corresponding proportional increase in the overall RAAVE Rating.

There were only minor changes in the other 4 RAAVE drivers, but all were slightly strengthened. The change in the “Association” perception was extremely noticeable, however, as shown in the slide below:

The key question, of course, is what effect this had on Willingness to Recommend and hence upon the client’s NPS rating. The impact in terms of year-on-year growth in both metrics is shown in Figure 11 below. Over the 2.5 years as a whole the correlation was c. 69%, but it was much higher at the start of the campaign. Compared to the pre-campaign baseline, NPS grew by 16 points, from a pre-campaign average of c. 54% to a post-campaign average of c. 70%. And there was also a noticeable impact on sales performance.

Figure 11: NPS Responded to the Growth in the RAAVE Rating Generated by the Campaign

The slide below shows the impact the campaign had on the underlying trend in sales. The growth in the RAAVE Rating, and the consequent growth in WTR and NPS, produced a turnaround in order volumes and sales. The new levels of RAAVE and NPS were worth an additional £18m per annum in turnover, which was c. 6% sales growth.

The Strategic Management of NPS

If NPS is your key measure of the affinity that exists for your brand and your indicator of future brand performance, then do not leave it to chance as to whether you stumble across a significant piece of feedback from the “And why did you give that score?” question.

Instead, extend the probing of your questioning by adding five simple questions so you know why people have given you that rating in a way that means you can do something to improve it for everybody.

Many years ago a former Ipsos colleague, Todd Kirk, summarised this in a very neat way: Don’t just do things better, do better things.

Next Time

In Newsletter 12 we'll cover the importance of having both a “Great Idea” and “Great Execution”.

Having a Great Campaign Idea comes to nothing if the media and marketing strategy does not get the message across. But at the same time, if you have a mediocre (or bad!) creative idea then the best media strategy in the world is not going to change that. A dud is a dud.

We’ll show how, by using RAAVE Tracking and Analytics, you can do both with aplomb!
