Confirming the Perceived Website Clutter Questionnaire (PWCQ)
Poor layout, irrelevant ads, and overwhelming videos: websites can be cluttered. Clutter can lead to a poor user experience, and poor experiences repel users. So how do you measure clutter?
Earlier, we did a deep dive into the literature to see how clutter has been defined and measured. We found that the everyday concept of clutter has two components: a disorganized collection of relevant objects and/or the presence of unnecessary or irrelevant objects.
But most measures of clutter were objective, based on attributes such as grouping and layout complexity. The only questionnaire we found measured the clutter of airplane cockpit displays, which didn't seem relevant to website clutter.
So, we began the process of building our own questionnaire for measuring website clutter.
In this article, we briefly review the exploratory research we conducted and then analyze new data to validate what we found using a statistical technique called confirmatory factor analysis.
Read the full article on MeasuringU's Blog
Summary and Discussion
Confirmatory factor analysis of over 1,000 ratings of the perceived clutter of 57 websites found:
The five-item version of the PWCQ had excellent fit. Confirmatory factor analysis (CFA) of the five-item version had excellent fit statistics (CFI = 0.997, RMSEA = 0.047, BIC = 96), better than a similar two-factor CFA of the 16-item version (CFI = 0.92, RMSEA = 0.11, BIC = 2,144).
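For readers who want to try a similar analysis on their own ratings data, here is a minimal sketch of a two-factor CFA in Python using the open-source semopy package. The factor structure mirrors the Content Clutter and Design Clutter constructs named in this summary, but the item names (cc1–cc3, dc1–dc2), the item-to-factor split, and the simulated data are hypothetical stand-ins, not the actual PWCQ items or dataset.

```python
import numpy as np
import pandas as pd
from semopy import Model, calc_stats

# Simulated stand-in for per-respondent item ratings (the real PWCQ
# dataset is not reproduced here); two correlated latent factors.
rng = np.random.default_rng(1)
n = 1000
content = rng.normal(size=n)                  # latent Content Clutter
design = 0.5 * content + rng.normal(size=n)   # correlated latent Design Clutter
data = pd.DataFrame({
    "cc1": content + rng.normal(scale=0.5, size=n),
    "cc2": content + rng.normal(scale=0.5, size=n),
    "cc3": content + rng.normal(scale=0.5, size=n),
    "dc1": design + rng.normal(scale=0.5, size=n),
    "dc2": design + rng.normal(scale=0.5, size=n),
})

# Two-factor measurement model in lavaan-style syntax:
# "=~" defines a factor's indicators, "~~" a factor covariance.
desc = """
ContentClutter =~ cc1 + cc2 + cc3
DesignClutter  =~ dc1 + dc2
ContentClutter ~~ DesignClutter
"""
model = Model(desc)
model.fit(data)
print(calc_stats(model).T)  # fit indices, including CFI, RMSEA, and BIC
```

The calc_stats output includes the CFI, RMSEA, and BIC values used above to compare candidate models, so the same comparison between a five-item and a 16-item specification can be run by swapping in a different model description.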
Clutter questionnaire scores varied across websites, but with possible range restriction. Sensitivity analyses of Content Clutter, Design Clutter, and Overall Clutter showed significant variation in the means of these metrics by website. However, after rescaling values to 0–100-point scales, no website had a clutter score greater than 65, covering about half the possible range for Content Clutter and Design Clutter and about 40% of the range for Overall Clutter.
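As a small illustration of the rescaling step, linear interpolation maps a raw mean score onto a 0–100-point scale. This sketch assumes a 7-point response format (an illustrative assumption; the PWCQ's actual response scale is documented in the full article).

```python
# Minimal sketch: rescale a raw scale score to 0-100 points.
# Assumes a 7-point response format (1 = low, 7 = high); this is an
# illustrative assumption, not the PWCQ's documented response scale.
def to_0_100(score: float, lo: float = 1.0, hi: float = 7.0) -> float:
    """Linearly interpolate a raw score onto a 0-100-point scale."""
    return (score - lo) / (hi - lo) * 100.0

print(to_0_100(4.0))  # scale midpoint -> 50.0
print(to_0_100(4.9))  # -> 65.0 on the 0-100-point scale
```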
Bottom line: We expect UX researchers and practitioners to be able to use this version of the clutter questionnaire when the research context is similar to the websites we studied in our consumer surveys. We don't anticipate serious barriers to using it in other contexts, including task-based studies, mobile apps, and very cluttered web/mobile UIs, but because that research has not yet been conducted, UX researchers and practitioners should exercise due caution.
MUiQ: Discover UX Insights
MUiQ is a remote UX research and analysis tool that helps companies gather deep insights into user behavior and preferences.
MUiQ has everything you need for essential UX research. With real-time reporting, advanced behavioral analytics, and powerful visualizations, MUiQ supports your team in delivering high-quality quantitative and qualitative research to help make informed decisions.
Accessible on desktop and mobile, MUiQ is fully customizable and comes with live onboarding, training sessions, and technical support.
Reach out today to learn how your team can use MUiQ!
Founder and President of Mauro Usability Science / Neuroscience-based Design Research / IP Expert
Our research team has been interested in following the development of the PWCQ; such a tool would be helpful. However, the question of clutter is irrelevant without a counterbalance or objective measure of the user's experience with a given data visualization task flow and problem space. Clutter can only be an accurate measure of cognitive complexity based on how the user engages with a given screen or interface based on prior experience. This is another way of saying clutter is context dependent. Take, for example, the design of the famous Bloomberg Terminal screens, which by your methodology would be highly cluttered, but when users with high expertise with the Bloomberg Terminal rate the displays, they find them simple and easy to interpret. My point is that counting the elements is likely not a useful measure of actual screen complexity or clutter. The real point is how your rubric impacts the user's actual cognitive complexity. This is just my POV.