Most Read...
John McAuliffe, Bill Manhire in Conversation with John McAuliffe (PN Review 259)
Patricia Craig, Val Warner: A Reminiscence (PN Review 259)
Eavan Boland, A Lyric Voice at Bay (PN Review 121)
Joshua Weiner, An Exchange with Daniel Tiffany/Fall 2020 (PN Review 259)
Vahni Capildeo, On Judging Prizes, & Reading More than Six Really Good Books (PN Review 237)
Christopher Middleton, Notes on a Viking Prow (PN Review 10)

This item is taken from PN Review 232, Volume 43 Number 2, November - December 2016.

Editorial
WHEN I WAS A TEACHER, a colleague was taken ill and for a semester I covered his Literature classes. He gave me his lecture notes. Having provided the students with his take on the subject, I added my own, suggesting that a variety of differently informed points of view might exist. On his return he was not pleased. For him, the task of examining entailed students knowing and expressing clearly the correct answers to questions he had carefully anticipated in his pre-examination coaching. He came back in the nick of time to prevent them from making the error of deviation. He had established effective metrics, year on year, for marking, and achieved the ‘transparency’ committees love, whatever it cost in fairness to the individual.

Recently I was asked to lecture on Larkin at a local college. I suggested that his poems might be read without continual reference back to his biography, of which the class had been given a sketchy account. At the end of the lecture my host thanked me, then reminded the students that they were expected to incorporate biographical information into their exams. Otherwise they would be marked down. Here again were metrics for their intellectual performance, their approach prescribed and standardised.

In one sense it is useful for administrators, teachers, and students to have ‘right answers’, as in the elementary sciences or elementary history. In another sense it betrays the values that inhere in literary arts and the Humanities at large.

The notion of metrics is spreading, not only in the examining of Humanities students but in the valuation of the work of artists and arts organisations. In Australia, the United States, and now in England, Arts Councils are developing an expectation of metrics and are supporting research into them. In the new funding round for Arts Council ‘portfolio organisations’, which will determine the support they receive from 2018 to 2022, larger clients, those receiving £250,000 or more, are required ‘to adopt Quality Metrics’, the capital Q and M underlining the talismanic importance accorded to this new ‘science’. Reassuringly in bold, the PowerPoint slide declares: ‘We will not make decisions based solely on Quality Metrics scoring.’ Not solely.

Because QM is about scoring, not of effective business structures, professional administration and the like, but of artistic output, misgivings abound. The key ‘Research & Development Report’ from 2015, available from Arts Council England, is entitled Home: Quality Metrics. What might at first surprise the reader is that this initiative is ‘sector-led’, i.e. the impetus came from Arts Council clients and not from ACE itself. ‘Therefore from the outset this has been a sector-led project that has sought to create a standardized and aggregatable metric system measuring what the cultural sector believes are the key dimensions of quality.’ It uses ‘self, peer and public assessment to capture the quality of art and cultural work’.

These QMs ‘aim to help organisations understand what people value about their work, as well as allowing them to benchmark against similar organisations’. With ‘what people value’ we are entering a curious pseudo-democratic, pseudo-scientific zone in which the measurement of value is conducted by means of a statement with a line beneath it: at one end of the line is written ‘strongly disagree’, at the other ‘strongly agree’, with ‘Neutral’ in the middle. Individuals put an x on the line at whatever point they like. Responses are then ‘aggregated’ into the Big Data which is the QM Point Score. At no stage are respondents required to deploy language. But so much of the language used in presenting QMs is Newspeak that it cries out for an Orwell.
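
For illustration only: if each mark on the line is read as a number between 0 (‘strongly disagree’) and 1 (‘strongly agree’), and the marks are simply averaged, the whole apparatus reduces to a few lines of Python. What follows is a minimal sketch of what such ‘aggregation’ might amount to; the report does not publish its actual method, and the figures here are invented.

# A minimal sketch of Quality Metrics 'aggregation', assuming each mark
# on the line is recorded as a number from 0.0 ('strongly disagree') to
# 1.0 ('strongly agree') and that the marks are simply averaged. The
# actual Culture Counts method is not public; averaging is an assumption.
from statistics import mean

# Hypothetical responses: each respondent places an x on the line for
# each dimension (values invented for illustration).
responses = {
    "Concept":     [0.8, 0.6, 0.9, 0.5],
    "Captivation": [0.7, 0.7, 0.4, 0.6],
    "Rigour":      [0.9, 0.5, 0.6, 0.8],
}

# The 'QM Point Score': marks averaged into a single number per dimension,
# with no language required from respondents at any stage.
scores = {dimension: mean(marks) for dimension, marks in responses.items()}

for dimension, score in scores.items():
    print(f"{dimension}: {score:.2f}")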

The ‘core’ QMs ‘are open source and free for anyone to use’. Here they are, ready for ‘Self, peer and public’ to strongly agree, strongly disagree, or neither:

Concept: it was an interesting idea
Presentation: it was well produced and presented
Distinctiveness: it was different from things I’ve experienced before
Challenge: it was thought-provoking
Captivation: it was absorbing and held my attention
Enthusiasm: I would come to something like this again
Local impact: it is important that it’s happening here
Relevance: it has something to say about the world in which we live
Rigour: it was well thought through and put together

SELF AND PEER ONLY:
Originality: it was ground-breaking
Risk: the artists/curators really challenged themselves
Excellence: it is one of the best examples of its type that I have seen

Two pilot schemes led to these twelve QMs. In the Executive Summary of Home: Quality Metrics we learn that the R&D report was undertaken in Manchester at the initiative of HOME, our new arts precinct, and led by an organisation called ‘Culture Counts’, joined by academics specialising in Arts Management and Cultural Policy (not in the arts themselves) and in Museology, and a university Museum director. Their objective was to create ‘a credible and robust measurement framework for the quality of cultural experiences’. What about the quality of the report’s use of English? The style is at once abstract and opaque: ‘The novel principle of collaboration and co-production to produce more meaningful data which contributes to a feedback loop for self-evaluation and decision-making is fully adopted by the group of cultural partners in this project, and ostensibly by the wider sector’; and ‘there is policy value to arts funding bodies such as Arts Council England, in trialling [sic] quantitative metrics for quality assessment which would be more cost-effective than current peer assessment processes, and which would also avoid the messiness and perceived problems of subjectivity associated with qualitative data’.

The possibility of informed individual judgement (peer review, even) and expertise has gone out the window. Both are expensive. That judgement might be something more than, or other than, subjective is no longer entertained. Expertise is a matter of the analysis of post facto data: it is what you do, not with the work being judged, but with the averaging out of consumers’ immediate responses to it. ‘This sector “buy-in” benefits from the scrutiny of a fully tested, robust framework which originated from a reputable consortium of cross-artform partners, working with highly experienced senior arts consultants with industry backgrounds and long track-records, and a technology team […]’ The word ‘robust’ does a lot of heavy lifting. The artist is replaced by the senior arts consultant with an industry background. The differences that mark books and magazines as cultural products with readers are not considered.

The data collected might allow ‘Data Driven Decisions (DDD) to positively shape their cultural and commercial practices’. Here is a vision of the future in which no infinitive is left unsplit: imagine that the ‘arts and cultural organisations were measuring the quality of their work using a standardised set of metrics that they themselves had shaped’ (my italics). Imagine them responding to the instant responses of audiences and end users as recorded on mobile devices, all this occurring ‘in real time’ through ‘an integrated big data platform’. ‘The possible prize is to help build a new type of data culture across the cultural sector’. Soon ‘the quality of their work’ becomes interchangeable with ‘the cultural experiences of their audiences and participants’. Facebook and Twitter, instant arbiters, gather mass and force to become the new Maecenases. Another triumph of democracy.
