What are we counting? – quality metrics

Roger Tomlinson, What are we counting?, Arts Professional, June 2017

Roger Tomlinson has more questions than answers about the quality metrics system that Arts Council England’s larger NPOs will soon be required to use.

People keep asking me what I think about quality metrics, the audience research system that Arts Council England (ACE) will shortly require its largest National Portfolio Organisations (NPOs) to use.

When I try to answer this complex question, many immediately tell me they were asking confidentially and don’t want their own views known. I hear a lot of reservations and many worries, but everyone seems reluctant to say anything during the current NPO application process.

Whilst understandable, this is not helpful. It is surely essential to embark on a proper discussion of whether this will deliver reliable results for NPOs and ACE, and to address people’s concerns.

Uneasy questions

I have been a champion of audience data for a long time. I conducted my first year-long audience survey at the Vic in Stoke-on-Trent in 1969, supervised by Keele University. I have been commissioning research surveys for over 40 years, and the Arts Council published my book ‘Boxing Clever’, on turning data into audiences, in 1993. I have also collaborated with them on many audience initiatives, including the drive to place socio-economic profiling tools at their NPOs’ fingertips.

So, I ought to be welcoming the concept of quality metrics and what Culture Counts proposes to deliver for Arts Council England. I can see why Marcus Romer (read his blog from 27 September) would welcome the voice of the audience, as end-recipient of the art, into ACE thinking. But I am left with a lot of uneasy questions, mostly methodological.

Unreliable research

Most people with any knowledge of research methodology are asking the same questions, because this type of research is inherently unreliable, yet a lot of reliance is being placed on the findings.

The Arts Council’s own former Senior Marketing Officer, Peter Verwey, constantly reminded arts marketers of the inherent unreliability of audience surveys, unless there were controls to manage the sample. Even then, reliability depends on respondents understanding the questions. If you ask a question and the respondent can’t ask for clarification on what the question means, then the answers can’t be relied upon. But if explanations are given, then bias creeps in, depending on what is said to them.

At the Arts Council of Wales, we used Beaufort Research to check respondents’ understanding of some simple questions about the arts, including: “When did you last attend an opera?” Sadly for Welsh National Opera, the majority who said when and where they had seen an opera turned out not to have attended an opera at all. The public have a very different understanding of the words we use to discuss the arts, and this can have a significant impact on how survey questions are answered.

This is an inevitable drawback of quantitative research. Researchers have to decide in advance what precise questions to ask and have to constrain answers to a fixed choice. Qualitative write-in answers can’t produce reliable, comparable results, even though narrative answers can provide the richest source of our understanding of what a specific audience member thought.

Biased responses

Audience surveys have other equally large flaws. Peter Verwey’s joke was that the survey samples usually comprised “anyone who had a working pen/pencil when the survey was handed out”, though that has presumably changed to whether people have an email address and bother to open survey emails.

Surveys conducted in foyers after performances are inherently biased, capturing only those with the time to answer. And even “there is an app for that” only suits the tech-savvy.

Analysis over the years shows that completion is biased in favour of the most supportive members of the audience and those keen to make their views known, sometimes complainants. You can overcome some of this by ruthless random sampling – only looking at the feet of the people to be selected to answer the questionnaire, for example – and similar techniques can be applied online, as sketched below. But the bias of who actually responds when invited remains.
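For an online survey drawn from a ticketing database, the equivalent of the “look only at their feet” rule is to choose who is invited at random, before anyone can self-select. A minimal sketch, assuming a hypothetical list of booker records:

```python
import random

def draw_invitees(bookers, sample_size, seed=None):
    """Select survey invitees at random from the full booker list,
    so that who is asked does not depend on who is keenest to answer.
    `bookers` is a hypothetical list of booker records."""
    rng = random.Random(seed)
    # Sample without replacement; every booker has an equal chance of selection.
    return rng.sample(bookers, min(sample_size, len(bookers)))
```

Random invitation only controls who is asked; as noted above, the bias of who actually responds remains, and can only be assessed afterwards.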

These days, when we can create a socio-economic profile of the attenders who book tickets, we ought, as a minimum, to expect the quality metrics methodology to include a check on the representativeness of the sample.
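Such a check could be as simple as comparing the socio-economic mix of survey respondents with the known profile of all bookers for the same performances, for instance with a chi-square goodness-of-fit test. A sketch under assumed figures – the segment names and numbers here are invented for illustration only:

```python
from scipy.stats import chisquare

# Hypothetical figures: survey respondents per socio-economic segment,
# and each segment's share of all bookers for the same performances.
respondents = {"Segment A": 180, "Segment B": 95, "Segment C": 25}
booker_profile = {"Segment A": 0.45, "Segment B": 0.35, "Segment C": 0.20}

total = sum(respondents.values())
observed = [respondents[s] for s in booker_profile]
expected = [booker_profile[s] * total for s in booker_profile]

# A small p-value suggests respondents do not mirror the booker profile,
# so the findings should be weighted or read with caution.
stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p_value:.4f}")
```

In practice the profile would come from the box office data and profiling tools already mentioned; the point is simply that representativeness can, and should, be tested rather than assumed.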
