Can we rely on the National Student Survey?

The results for the 2019 National Student Survey will be released at 9:30 next Wednesday (3rd July).

As the name suggests, it is the UK's national student survey – the most prominent in the sector, having been introduced in 2005 and now completed by roughly 300,000 students each year.

The results are very important for university leaders and their governing bodies – yet the NSS is just one survey. How can it be so important? And how should it fit into a wider effort to improve the student experience?

Why do we conduct surveys with students?

Surveys, implemented locally within departments, institutionally, nationally and globally, are the most common method of evaluating the student experience.

As the number of students in higher education grows, surveys are an increasingly useful tool for universities to gather feedback from large populations.

Ultimately, universities need to evaluate the student experience in order to enhance it. Beyond this, surveys can also be useful for collecting benchmarks (internal and external), fulfilling external quality requirements and producing great stats for marketing materials.

Many universities will also have some form of student representation system and may even evaluate the student experience through the use of learning analytics. However, these methods are normally used in conjunction with student surveys rather than instead of them.

What are the biggest issues with surveys?

Useful as they are, surveys attract a number of principled critiques around their reliability.

They fail to capture the very different sets of needs and expectations that exist within student populations. Surveys take the views of many and average them together – creating results that may miss many individual stories. As Alison Johns, Chief Executive of Advance HE, noted when publishing the results of their Student Academic Experience Survey: “Clearly, ‘what works’ for one group of students doesn’t necessarily work for another.”

Surveys are also limited by only being able to gather student input at a single point in time, whereas student views can change across their lifecycle.

Not only do individual student perspectives change over time, but so can collective experiences. There is much discussion around how students’ expectations of ‘value for money’ have changed in the UK in the era of higher fees. This can make it unreliable to compare survey results over the years.

Not all surveys are created equally

There are also specific issues around the reliability of student surveys focused on satisfaction. Results of satisfaction surveys depend heavily on student expectations, which are often poorly set when students start university. Increased competition between universities in their marketing efforts will only make it harder to set expectations appropriately once students arrive, because they will already have been promised so much.

Satisfaction surveys are also limited in the parts of the student experience that they can reliably evaluate. Assessment processes and the relevance of course content to the jobs market have objective criteria that are best evaluated by academics – with expertise in pedagogy – and employers – who understand the skills needed within their organisations. Can a student really be expected to know what ‘good assessment’ is in this regard?

Meanwhile, students are only able to reflect on these aspects of the student experience from their own perspectives and might conflate comfort with satisfaction, resulting in better scores being given to the teachers and modules that challenge them least. However, there is a balance to be achieved, as students who are satisfied are more likely to engage further in their learning, especially when they perceive teaching and assessment practices to be fair.

The alternative is to focus student surveys around engagement and learner-reflection questions. Engagement surveys ask whether students perceive that they have learned, and therefore measure more reliably the areas of the student experience most directly related to learning and teaching.

This has led many student surveys globally to focus on student engagement rather than satisfaction, such as the National Survey of Student Engagement (NSSE), the Australian Survey of Student Engagement (AUSSE) and the South African Survey of Student Engagement (SASSE). In the UK, a survey that bridges this satisfaction / engagement divide is the aforementioned Student Academic Experience Survey, which I wrote about earlier this month.

So what does this mean for NSS?

Alex Buckley, in his great report for the Higher Education Academy looking at the NSS, said: “most institutions feel strongly that the NSS has increased the visibility and impact of the ‘student voice’.” Despite this, he also found that “The NSS can suffer from being viewed as a managerial and bureaucratic exercise in box-ticking”.

Even the Office for Students acknowledged last year that there have been reports of universities “exerting undue influence on what students say when they answer the survey.”

If NSS results weren’t made public, they would probably be a lot more reliable. However, universities probably wouldn’t take them nearly as seriously and prospective students wouldn’t be able to use the results in deciding which university is right for them.

There are clearly helpful aspects of the NSS, but we have to recognise its limitations and build an approach to evaluating the student experience that counterbalances its weaknesses.

How can universities improve their approach to student surveys?

  1. Student survey results should be segmented by student demographics and reviewed accordingly
  2. Multiple surveys administered to student cohorts over time should be analysed together, to evaluate how opinions have changed or remained consistent (a minimal illustration of points 1 and 2 follows this list)
  3. Institutions need to consider how they can better set expectations when recruiting and inducting students, to improve the reliability of satisfaction surveys
  4. Self-reflection from students in engagement surveys should be complemented with some form of external assessment – e.g. from employers – to verify the skills gained
  5. Universities should avoid any efforts to ‘game’ surveys such as the NSS, as this will reduce the reliability of the results
  6. Institutions should balance satisfaction and engagement questions within surveys, learning from examples such as HEPI’s Student Academic Experience Survey.
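
To make points 1 and 2 concrete, here is a minimal sketch of how segmented and year-on-year analysis might look in practice. It assumes survey responses sit in a simple table; the column names, demographic groups and scores are hypothetical illustrations, not drawn from the actual NSS dataset.

```python
# Illustrative sketch only: "year", "demographic_group" and
# "agreement_score" are hypothetical column names, not NSS fields.
import pandas as pd

# A toy set of survey responses (1-5 agreement scale).
responses = pd.DataFrame({
    "year": [2018, 2018, 2018, 2019, 2019, 2019],
    "demographic_group": ["commuter", "mature", "international",
                          "commuter", "mature", "international"],
    "agreement_score": [3.9, 4.2, 3.6, 4.0, 4.1, 3.8],
})

# Point 1: segment results by demographic group rather than
# relying on a single institution-wide average.
by_group = responses.groupby("demographic_group")["agreement_score"].mean()
print(by_group)

# Point 2: compare the same segments across survey years to see
# whether opinions have shifted or stayed consistent.
by_group_and_year = responses.pivot_table(
    index="demographic_group",
    columns="year",
    values="agreement_score",
    aggfunc="mean",
)
print(by_group_and_year)
```

A real analysis would of course need to handle benchmarking, weighting and small cohort sizes, but the basic shape – segment first, then track each segment over time – stays the same.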


This article was adapted from an assignment I wrote for my MBA in Higher Education Management with UCL. The views expressed are entirely my own.
