University productivity – quality as well as quantity?

by The Australian
17 January 2022

Key Takeaways:

  1. The Research and Education Frontier (REEF) Index is HERG’s central methodology for measuring university productivity, using inputs like the number of students educated and research publications per million dollars of expenditure.
  2. REEF incorporates “fitness for purpose” and compliance with standards as base level quality measures, drawing from universities’ independent choice of purpose and TEQSA standards.
  3. Different usages of “quality” in higher education include difficulty, impact, expert assessment, and student feedback.
  4. REEF can be augmented to include other quality measures like world rankings, staff-student ratio, and client-specific data.
  5. The post discusses the relationship between quality and productivity, highlighting that higher quality might entail higher costs but can be multifaceted and not always directly linked to productivity changes in the short run.
  6. REEF can analyze universities’ overall research productivity comprehensively, considering “hotspots” of high-quality research without weighting them more heavily in the calibration.

Full Article:

HERG’s central methodology for measuring university productivity is the Research and Education Frontier (REEF) Index, which is described more fully in another Explainer. The REEF method can use different inputs, but commonly we use the number of students educated and the number of research publications per million dollars of expenditure.
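
As a simple illustration of these per-dollar measures, the arithmetic is just outputs divided by expenditure. The figures and institution labels below are invented for illustration, not HERG data.

```python
# Per-million-dollar output measures, as described above.
# All names and figures are invented for illustration only.

universities = {
    # name: (students educated, research publications, expenditure in $m)
    "Uni A": (30_000, 4_500, 1_200),
    "Uni B": (18_000, 3_900, 950),
}

for name, (students, publications, spend_m) in universities.items():
    print(f"{name}: {students / spend_m:.1f} students per $m, "
          f"{publications / spend_m:.2f} publications per $m")
```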

This may look like a purely quantitative measure not concerned with quality, but that isn’t correct. Below, we look at quality assurance already “baked” into REEF, before describing how we can augment REEF with other approaches to quality assessment. As we go along, various usages of the word “quality” are explained.

Base level quality built into REEF

Two common ideas about quality are “fitness for purpose” and “compliance with standards”. To some extent, we always draw on these in REEF.

“Fitness for purpose” is about how well the organisation is delivering what it wants to deliver. Thus, a well-known burger chain may be delivering “high quality” when each burger is exactly how they want it to be, without deviation and repeatedly, although a nutritionist might see things differently.

The fitness for purpose approach might sound unsuited to the higher education context, but the predecessor body to TEQSA, the Australian Universities Quality Agency (AUQA), used it as a main organising theme. In its periodic audits, it asked: what is the university trying to achieve, how well is it doing so, and how does it know?

The REEF method draws on a fitness for purpose approach in the sense that we are agnostic about a university’s choice whether to emphasise research or teaching, or to achieve a balance. Given each university’s independent choice of purpose, the REEF Index shows how close it is to the efficiency frontier spanning both research and education.
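
HERG does not publish REEF’s precise formulation here, but data envelopment analysis (DEA) is one standard way to measure distance to an efficiency frontier, and a minimal sketch along those lines is shown below. All figures are invented, and the model (output-oriented, single expenditure input, two outputs) is our illustrative assumption rather than the REEF method itself.

```python
# A DEA-style sketch of "distance to the efficiency frontier" with one
# input (expenditure, already normalised into the per-$m outputs) and two
# outputs (education and research). Illustrative only; not REEF itself.
import numpy as np
from scipy.optimize import linprog

# (students per $m, publications per $m) for three invented universities
outputs = np.array([
    [31.0, 2.1],   # teaching-leaning, on the frontier
    [25.0, 3.8],   # research-leaning, on the frontier
    [20.0, 2.5],   # inside the frontier
])

def frontier_score(k: int, Y: np.ndarray) -> float:
    """Fraction of the frontier attained by university k (1.0 = on it)."""
    n, m = Y.shape
    # variables: [phi, lambda_1..lambda_n]; maximise phi => minimise -phi
    c = np.r_[-1.0, np.zeros(n)]
    # phi * Y[k] <= sum_j lambda_j * Y[j]  (one row per output dimension)
    A_out = np.c_[Y[k], -Y.T]
    # sum_j lambda_j <= 1  (each university uses one normalised unit of input)
    A_in = np.r_[0.0, np.ones(n)][None, :]
    res = linprog(c, A_ub=np.vstack([A_out, A_in]),
                  b_ub=np.r_[np.zeros(m), 1.0])  # phi, lambda >= 0 by default
    return 1.0 / res.x[0]

for k in range(len(outputs)):
    print(f"university {k}: {frontier_score(k, outputs):.2f} of frontier")
```

On this construction a university scores close to 1.0 whether its strength is teaching, research or a balance of the two, which mirrors REEF’s agnosticism about each institution’s chosen purpose.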

It is unlikely these days that any public body would be left entirely to decide its own purpose and whether it was fulfilling it. Even in the AUQA period, there were standards requirements applying to universities, in legislation and protocols, and the threshold stage of an AUQA audit was to check for compliance.

AUQA was replaced by TEQSA, whose central focus is compliance with what are now the Higher Education Standards, although, again, this is not an exclusive focus, and compliance itself is monitored on a proportionate risk basis.

We build compliance with the Higher Education Standards into our base, because we only include within REEF those institutions which are registered with TEQSA, which means they are re-assessed at least every seven years. TEQSA can also intervene at any point, so we can be confident that a general problem of standards compliance within an institution would not go unaddressed until the next registration application.

In effect, REEF’s base level results can be thought of as embodying a ‘fair average’ research and education quality sufficient to be registered for the provider category ‘Australian University’, as detailed in what is now the Higher Education Standards Framework (Threshold Standards) 2021.

But what about “higher” quality?

A university might look relatively unproductive but counter that this is because it is focusing on “quality”, such as fewer articles published but in more prestigious journals, or a better student experience by having fewer students in a class.

There are ways of using REEF to cross-check some of these claims. For example, when we compared the productivity trajectory of the University of Technology Sydney (UTS) with the University of New South Wales (UNSW), UNSW responded that it was focusing on quality over quantity. In fact, we showed that the quantity of UNSW publications per academic was still rising; its productivity was declining because, we believe, money was being spent on things other than research and education. “Quality” was not an adequate explanation for the productivity change in that instance.

It is, however, an understandable reaction, and on other facts may well be true, as we explain later under the heading “Does higher quality research and education cost more?” Before we get to that, we need to examine different usages of “quality” in higher education and look at how REEF can be augmented to cater for them.

One alternative usage boils down to quality as difficulty. For example, if an article is published in a journal that is really difficult to get into, that might be taken as an indicator of the article’s quality. Other metrics may then buttress that notion, for example the journal might be listed as a highly cited one, and university rankings might also give extra weight to publications there.

A related indicator is “impact”, and there are metrics about the number of times a particular article has been cited within a particular period, based on the assumption that if other scholars are citing it then they think it is good not bad!

This might be thought of as an example of quality as usefulness. The Australian Research Council now has an exercise called the Engagement and Impact Assessment which assesses how well researchers are engaging with end-users of research, and how well universities are translating their research into economic, social, environmental, cultural and other impacts.

Some regard quality as in the eye of the beholder, and so in an education context this focuses on those with a practised eye: expert assessment or peer review. Australian research is assessed every few years by the Australian Research Council in an exercise called Excellence in Research for Australia, or ERA. This is a national research evaluation framework based on Fields of Research, at a high level (2 digit) or medium level (4 digit), and the outcomes for each university which conducts sufficient research in those fields are expressed in relation to “world standard”, with 5 being “Well above world standard” and 1 being “Well below world standard”. In 2018, the most recent assessment, the work of 76,261 researchers in 42 institutions was evaluated, which entailed 506,294 unique research outputs.

Some disciplines are assessed by weighting heavily the views of experts. Others are reliant on more quantitative indicators which are presumed to have qualitative judgements at the back end.

Sometimes quality is strongly linked with positive feedback. Some believe that high quality teaching will be reflected in strongly positive student feedback on the experience. There are contrary views, but when teaching is assessed by peers it seems there is a high correlation between student feedback and peer assessment.

Feedback from current students, recent graduates and employers is now systematically collected by the Government under the name Quality Indicators for Learning and Teaching, or QILT. Prospective students and others can compare the results of institutions using a tool called ComparED.

It is interesting to note that universities that we regard as at or near the efficiency frontier can also do well in QILT. The University of Wollongong is a good example of this, so there is no necessary conflict between positive feedback and high productivity.

How we can add further quality measures into REEF

This can be done in a number of different ways within the REEF methodology: weighting quality outputs, filtering outputs, and/or creating multi-dimensional REEF productivity measures.

Quality in either education or research can be incorporated into the REEF methodology as a weight, with higher quality outputs given linear or non-linear weights. These weights are determined in consultation with the institution.

A second method is to filter the relevant data, for education, research or both. For example, lower quality results, such as publications in less impactful or less prestigious journals, can be excluded. Other options to suit institutional needs can be incorporated in assessing education and research productivity.
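
A minimal sketch of both approaches, using invented publication records; the journal quartile labels and the weights are placeholders of the kind that, as noted above, would in practice be agreed with the institution.

```python
# Quality as a weight, and quality as a filter, over invented records.
publications = [
    {"title": "Paper 1", "journal_quartile": 1},  # most prestigious tier
    {"title": "Paper 2", "journal_quartile": 2},
    {"title": "Paper 3", "journal_quartile": 4},  # least prestigious tier
]

# weighting: count higher quality outputs more heavily (placeholder weights)
quartile_weight = {1: 2.0, 2: 1.5, 3: 1.0, 4: 0.5}
weighted = sum(quartile_weight[p["journal_quartile"]] for p in publications)

# filtering: exclude outputs below a quality threshold altogether
filtered = sum(1 for p in publications if p["journal_quartile"] <= 2)

print(f"raw count: {len(publications)}, weighted: {weighted}, filtered: {filtered}")
```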

Additionally, where this is useful to meet client needs, a third dimension can be added to the REEF Index to reflect a single or aggregate measure of quality. This is a way of including quality in the REEF methodology without disturbing the comparability of the two-dimensional results, and it allows direct comparison with other institutions as well as direct assessment of intertemporal change in quality.

Alternatively, this third dimension can be used to add a further characteristic not directly relating to teaching or research quality: a single or composite measure of other factors, such as diversity and equity, disability, innovation, stakeholder engagement, or impact on local, regional, national or international communities or the economy more broadly.
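
A sketch of what a three-dimensional REEF-style record might look like; the field names and figures are invented, and the third axis here carries an aggregate quality index, though it could equally carry one of the other composite measures just described.

```python
# Two productivity axes plus a third dimension, kept separate so the
# two-dimensional comparison is undisturbed. Invented names and figures.
from dataclasses import dataclass

@dataclass
class ReefPoint:
    students_per_m: float      # education productivity axis
    publications_per_m: float  # research productivity axis
    third_dim: float           # e.g. aggregate quality, equity or impact index

uni_2016 = ReefPoint(28.0, 2.4, 71.0)
uni_2019 = ReefPoint(26.5, 2.2, 75.5)

# intertemporal change on the third dimension, read alongside the usual axes
print(f"third-dimension change 2016-2019: "
      f"{uni_2019.third_dim - uni_2016.third_dim:+.1f}")
```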

Over half of Australia’s universities appear in one or more of the main world rankings of institutions, in particular the Academic Ranking of World Universities by Shanghai Jiao Tong University, World University Rankings published by the Times Higher Education and the QS World University Rankings.

Although some or all of these exercises have their detractors, they are seen by many as proxies for quality. They differ in the extent to which, if at all, they claim to measure both research and education quality, but in any event they do not fit neatly onto our standard axes.

We are able to add a world rankings score as a third dimension or use specific components of each world ranking to weight or filter the education and research axes of REEF separately.

Depending on a client’s exact wishes, we may need internal data to supplement publicly available data.

For example, where an individual university has tailored student evaluation surveys and data in addition to QILT scores, and where these are consistent in content and administered across the entire institution, this client-specific data can be used as a valid institution-wide measure of teaching quality. In such circumstances a more nuanced and detailed education quality assessment can be incorporated in the REEF methodology.

Another common metric is the staff-student ratio. Many believe that more staff improves the quality of a student experience. Some staff-student ratio data exists publicly, but a more fine-grained analysis might require access to institutional data.

Does higher quality research and education cost more?

A key reason why the issue of quality is important to the validation of the REEF methodology is the assumed increased cost of quality, and therefore the decline in output per dollar spent (and in output per unit of academic staff time, the alternative input measure used in REEF).

We say “assumed increased cost of quality” because it depends on one’s definition of quality and on the specific facts. For example, a university which wishes to improve its impact as picked up in the Engagement and Impact Assessment may focus less on publishing in a discipline’s top journals and so be less concerned with ERA results. There is no reason to assume that one form of quality is more expensive than another, although it might be on the facts.

It is also possible that some universities have better staff development than others, and can produce higher quality outputs than a comparator but for the same cost. And so on.

However, all other things being equal, it is reasonable to begin with an assumption that higher quality entails higher cost, and there is some empirical research to support this. Recent work by a Principal of HERG, with others, has quantified the differential cost for one discipline, finding that publications categorised as ‘elite’ cost approximately four times as much, in dollar expenditure, as ‘regular’ scholarly publications. Similar significant differences in the consumption of academic staff time were also found.

Does quality impact on productivity?

If quality tends to cost more, then changes in the overall balance between higher and lower quality of research or education are likely to reveal themselves in changes to productivity results because the cost inputs may go up and the outputs may go down.

Such change in either direction, however, is rarely quick and we can control for it in REEF with access to internal data. We also need to keep in mind the multiple usages of “quality” in play.

REEF is also capable of falsifying a particular claim that productivity went down or was static due to a deliberate focus on quality, by examining expenditure.

Where there is a decline in productivity using expenditure as the input, but not where academic staff numbers are the input, we have a situation where the trajectories are in opposing directions. In this circumstance, it is highly unlikely that quality changes explain a decline in productivity as measured by expenditure.
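
A minimal sketch of that cross-check, with invented figures: compute the same output under both input bases and flag when the two trajectories point in opposite directions.

```python
# Divergence check: publications per $m of expenditure versus publications
# per academic staff member. All figures are invented for illustration.
pubs    = {2016: 9_000, 2019: 9_600}   # research publications
spend_m = {2016: 1_800, 2019: 2_250}   # expenditure, $m
staff   = {2016: 3_000, 2019: 3_050}   # academic staff (FTE)

per_dollar = {y: pubs[y] / spend_m[y] for y in pubs}
per_staff  = {y: pubs[y] / staff[y] for y in pubs}

dollar_up = per_dollar[2019] > per_dollar[2016]
staff_up  = per_staff[2019] > per_staff[2016]
if dollar_up != staff_up:
    print("trajectories diverge: quality alone is an unlikely explanation")
```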

An example of this is the productivity performance of the University of New South Wales in the chart below. There is a clear net improvement in both teaching and research productivity between 2001, the first year for which HERG has data, and 2016.

Then, when measured with expenditure as the input, there is a net decline from 2016 to 2019. Some might take the view that this is plausibly explained by improved research and teaching quality, as both research and, to a lesser extent, teaching productivity declined in this period. Indeed, the University made such a point when commenting in the media.

However, when publications and students are compared with academic staff numbers rather than expenditure, research productivity continues to rise in 2017 and 2018. It does come down in 2019, but there is still a net increase over 2016.

In numbers, the year-by-year changes in productivity as measured by expenditure were: 2016 to 2017, -4.1%; 2017 to 2018, -4.3%; and 2018 to 2019, -5.3%. That is to say, the rate of decline increased slightly over time, culminating in a net decline over the period of more than 10%. Over the same three-year period, the improvement of UNSW as measured by academic staff productivity was around 10%. The divergence of these productivity trajectories is not consistent with the improving quality explanation.
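
For readers who want to check the compounding, the three annual declines multiply out to a net fall of roughly 13%, consistent with the "more than 10%" figure above:

```python
# Compounding three successive annual declines in expenditure-based productivity.
annual_changes = [-0.041, -0.043, -0.053]   # 2016->17, 2017->18, 2018->19

net = 1.0
for change in annual_changes:
    net *= 1.0 + change

print(f"net change 2016-2019: {net - 1.0:+.1%}")   # about -13.1%
```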

One plausible explanation in this situation is that expenditures were directed more at the non-academic aspects of the university which had not – or at least not yet – resulted in productivity gains. Such expenditures include, for example, consultancy fees, restructuring costs, enlarged administrative overheads and the like. When targeted and successful, these activities and expenditures may yield productivity gains in later years. Where they are not, they lock in productivity losses. The jury, in effect, may still be out.

This is not to say there is any deliberate attempt to mislead. A university may be able to point to multiple anecdotal examples of research quality improvement, but that does not mean that research quality improvement is necessarily observable across the institution as a whole using the REEF methodology.

Easily observable high-quality research ‘hotspots’, often in a visible field of research (presenting as a research centre, a program, or even an individual researcher or team of researchers), can understandably become more prominent.

Such hotspots can be highly important and a deliberate part of a university’s strategy to “play to its strengths”. They can lead to a general external perception of a university going up in quality, but there may nevertheless be no statistically significant difference in quality across the institution as a whole.

At whole of institution level, the REEF methodology is intentionally designed to measure research productivity comprehensively; that is to say, hotspots are not weighted more heavily in the calibration of overall institutional research productivity. Thus, across a medium to large comprehensive university, a significant, measurable shift in institution-wide research quality or productivity is rarely achievable in the short run.

In the circumstance where internal client data (based on either academic units or fields of research/fields of education) are available, one can more readily observe the effects of quality change in ‘hotspots’ and apply the type of forensic analysis described above.

Complicated?

Yes and no. The REEF method starts with publicly available data and a whole of institution lens. The data will have been externally verified. For example, an Auditor-General will have audited the financial statements showing expenditure. Statistical reporting to the Department of Education, Skills and Employment will include staff and student numbers. There are also various well-accepted ways of counting research outputs.

We can, however, use REEF to go deeper into the institution. We can weight or filter the research and education axes, or add a third dimension. It will depend on what a client university is looking for, access to data, and the type of quality that is considered important.
