Department for Business, Innovation & Skills
A university response | July 2016
There are many aspects of these criteria which we would support. However, we believe that some modifications of the criteria are necessary. Firstly, there are some additional criteria we would suggest:
The criteria say nothing about engagement with students as co-creators of their experience. They assume that students are passive recipients of teaching, which is not (and, we argue, should not be) the case. There are great examples across the sector of successful engagement with students to enhance the teaching experience.
We would welcome a criterion on the teaching ability of staff, reflecting for example teaching qualifications, engagement with ongoing training and professional development, or HEA status.
We would also welcome a criterion on research-led teaching (there may be impact case studies from the 2014 REF which illustrate good practice here).
Secondly, there are areas of concern with the criteria, which include:
We are concerned about duplication with the Higher Education Review (HER) process. The consultation (and the White Paper) specifically states that the intention is not to duplicate the HER process. However, the Learning Environment criteria in particular are very similar to those measured in the HER. One way to deal with this, which we believe would also have wider benefits (see the response to Q7 below), would be for one of the inputs to the TEF process to be the last Higher Education Review (or other QAA review) report. This would allow the TEF assessors to use the evidence and judgement about the learning environment which already exists.
Learning gain – we are not against this conceptually but note that there is currently no accepted definition and a range of pilot activities are still ongoing. It is difficult to endorse the introduction of a criterion at some point in the future without confidence that a common measure can be found.
Despite the assurances in Annex D, we remain concerned about the possible change in student behaviour when it comes to the NSS, if it becomes clear that a good NSS contributes to a good TEF which then leads to higher fees. We also note that NSS performance is already incredibly important to the reputation of universities and their recruitment of new students, so a change in student behaviour would have wider implications for universities beyond the possible implications for the TEF. Whilst use of the NSS in the TEF seems inevitable, there appears to be an over-reliance on it, and we would support a smaller element of the TEF’s criteria and weighting falling on the NSS.
The most important element to securing a highly skilled employment metric for the TEF is to develop new and effective metrics as part of the consultation on the future of DLHE. Both the Government and the sector are aware of the shortcomings of DLHE and it is vital that this work results in a credible metric in which the sector can have confidence.
We realise that this will not be ready for Year 2 of the TEF, so for that, existing data must be used, but the Government and the TEF reviewers must use it with due caution, aware of its limitations.
In particular, we agree with the tension that is set out at the end of Paragraph 72. We would go further and state that the variation in the perceived value of the same degree from different institutions across the country is very large, and that this has a direct effect on the employment prospects of students. That perceived value is driven as much by entry standards as by quality of teaching. A low tariff university with excellent teaching may score badly on this measure; a higher tariff institution with only average teaching may score considerably better.
Whilst this measure actually benefits high tariff institutions such as our own, we do not believe that it provides a meaningful proxy for teaching quality.
NOTE: We hope that the review of DLHE will produce more robust metrics, so whilst we agree with this for TEF Year 2, we think that, for this criterion in particular, it is important not to “lock in” what is used in TEF Year 2 for future years of the TEF.
We note from paragraph 81 that the proposal is to include only UK domiciled graduates rather than all graduates. We are happy with this as the data for non-UK domiciled graduates are not robust. For UK domiciled students, we are content to include all graduates provided that the benchmarks described later in the consultation are in place. Without those benchmarks, there could be distortions, as the proportion of students taking up highly skilled employment varies not only by institution but also by subject. Without good benchmarks, a provider’s TEF performance could therefore be driven in part by the choice of subjects that it teaches. There could be an unintended incentive for universities to change the subjects they teach, as reflected in Annex D.
It is absolutely vital that benchmarks take into consideration the subject mix.
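To make the point concrete, the following is a minimal, purely illustrative sketch (in Python) of how a subject-mix-adjusted benchmark for a highly skilled employment metric might be constructed: the sector-wide rate for each subject is weighted by the provider’s own cohort sizes, so the provider is compared against what its own subject mix would predict rather than against a flat sector average. The subject names, rates and cohort sizes below are invented for illustration and are not drawn from the consultation or from any real methodology.

```python
# Purely illustrative sketch: a hypothetical subject-mix-adjusted benchmark for a
# highly skilled employment metric. All subjects, rates and cohort sizes are invented.

# Sector-wide proportion of graduates entering highly skilled employment, by subject
sector_rates = {"Nursing": 0.92, "Engineering": 0.85, "Modern Languages": 0.70}

# One provider's graduating cohort sizes by subject (its subject mix)
provider_cohorts = {"Nursing": 150, "Engineering": 600, "Modern Languages": 50}

def subject_mix_benchmark(cohorts, rates):
    """Weight sector subject-level rates by the provider's own subject mix."""
    total = sum(cohorts.values())
    return sum(rates[subject] * n for subject, n in cohorts.items()) / total

benchmark = subject_mix_benchmark(provider_cohorts, sector_rates)
observed = 0.84  # the provider's actual rate (also invented)

# Judging the provider against its own benchmark, rather than a flat sector average,
# removes the distortion caused purely by the subjects it happens to teach.
print(f"benchmark = {benchmark:.3f}, observed = {observed:.3f}, "
      f"difference = {observed - benchmark:+.3f}")
```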
NOTE: In addition to the metrics stated, to fulfil the ambitions of the White Paper we believe that data should be collected on non-continuation rates relating to ethnicity, sex and disability as well as employment destinations relating to disability. However, we recognise that this may not be possible for TEF Year 2.
NOTE: The key issue is not how things are flagged but how the assessors use the information. It is not clear how much of their judgement will be based on the metrics and how much on the contextual information.
This seems the right compromise to allow for variations.
We are happy with the split but we note that the split in the metrics outlined in paragraph 88 is different to the contextual information used to aid interpretation of the metrics (Table 1, paragraph 95, Part A). We are not sure what use contextual information on gender will be, for example, if the metrics are not actually split out by gender. Over time, would it make sense for these to be the same list?
The Government should also recognise and plan for the challenges of splitting the data. Some splits would leave very small numbers of students, particularly in small institutions. It might be that thresholds are needed for including data. Other splits may generate a grouping not intended by the split itself (e.g. when nearly all the students for a particular subject are on one side of a particular split, that subject will be over-represented in that group and under-represented on the other side). TEF assessors need to be specifically aware of these data issues and the care needed to interpret them.
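As an illustration of the small-numbers problem, the short sketch below shows how splits falling beneath a minimum group size might simply be suppressed rather than reported. The threshold of 25 students and the example counts and scores are invented purely to illustrate the point; they are not taken from the consultation or any real dataset.

```python
# Purely illustrative sketch: suppressing metric splits below a minimum group size.
# The threshold and the example counts/scores are invented, not taken from the consultation.

MIN_GROUP_SIZE = 25  # hypothetical reporting threshold

# A metric (e.g. an NSS-style satisfaction score) split by one student characteristic
split_counts = {"disabled": 12, "not disabled": 310}
split_scores = {"disabled": 0.78, "not disabled": 0.86}

# Report only those groups large enough to give a meaningful figure
reportable = {group: score for group, score in split_scores.items()
              if split_counts[group] >= MIN_GROUP_SIZE}
suppressed = [group for group in split_counts
              if split_counts[group] < MIN_GROUP_SIZE]

print("Reportable splits:", reportable)
print("Suppressed (below threshold):", suppressed)
```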
Part A - Data: We are happy with the list of contextual information but as noted in Q5 above, we note the difference between this information (Table 1 paragraph 95 Part A) and the split in metrics outlined in paragraph 88.
Part B – Data Maps: We note the use of the term in Table 1, Paragraph 95, Part B, “where students who study at the provider grew up”. We assume that this means “the address at which the student was living when s/he applied to study at the provider”, as people can grow up in more than one place. We are not sure what additional data this provides beyond the POLAR quintile data in Part A. Whilst maps as suggested in Part B might be of interest to the institution, we are not convinced that they will provide additional information of use to the reviewers which is not captured by the information in Part A.
The proposed methodology will work well for institutions with small numbers of students providing degrees in relatively few different subjects. The larger the number of students and breadth of degree courses, the more difficult it will be to meet the guidelines. It will also be important to establish a clear definition of what constitutes a subject.
For example, Paragraph 101 emphasises that the reviewers are looking for examples of excellence across the entire provision, not just ones affecting a small number of students. The needs of students learning different subjects are quite different (nursing vs engineering vs modern languages, for example). Large comprehensive universities with tens of thousands of students and more than 100 degree programmes might well have fantastic examples covering most of their students, but these will not be concentrated in one or two initiatives; there will be many.
This also means that the 15 page limit will be significantly more problematic for larger, comprehensive HEIs than smaller, specialist ones.
How might these problems be addressed? Our suggestions include:
Larger institutions might be allowed longer submissions (e.g. 15 pages for institutions with fewer than 10,000 students, 30 pages for those with more than 10,000).
As suggested in our response to Q1 above, the TEF reviewers might have the final report from the HEI’s latest Higher Education Review as a piece of evidence. This would include material which the HEI would not then have to reproduce.
The TEF reviewers might also be provided with the Self Evaluation Document used for the HEI’s latest Higher Education Review as a piece of evidence.
We also note that, even with the level of detail available here, it is possible that TEF reviewers will interpret things differently from each other, and significant effort in training will be needed to ensure a consistent interpretation by reviewers. It will also be important to establish transparent, published criteria which show how reviewers will make their judgements. In particular, the weighting or importance attached to the metrics compared with the commentary should be clarified.
We feel that, as things stand, there are too many things set out in this list. This could have two negative consequences. The first is that the more things on the list, the more scope there is for reviewers to give different judgements to different universities on similar types of evidence. This is made more likely because the rapid timing for the introduction of TEF Year 2 means that there is insufficient time to bring in a really robust training programme for reviewers and a robust system of moderation. Although TEF Year 2 is still in the development phase, it will have immediate reputational and financial consequences for universities, and BIS should anticipate appeals and judicial reviews from universities not given the highest rating. BIS could ameliorate the problem by not using this list in TEF Year 2 and instead adopting a much simpler system, for example one in which universities are merely allowed to comment on the metrics. This would allow a more comprehensive system to be introduced in Year 3 once robust training and moderation were in place, and would reduce the likely number of appeals and judicial reviews.
The second problem of having so many areas listed is that HEIs will inevitably attempt to cover all or most of them in their submissions, which will be incredibly challenging in 15 pages. A worrying possibility is that they do not cover some important areas for which they have good material, and are subsequently penalised. This could be mitigated in Year 2 by reducing the commentary as described in the paragraph above. In future years of the TEF, it could be mitigated either by reducing the number of criteria, or by the same suggestions set out in our response to Q7 above (increasing the page limit for larger institutions and making the institution’s last HER report available to the reviewers).
Although our preference would be to reduce the number of examples of additional evidence, there are other pieces of evidence which BIS might like to consider:
For all the aspects:
Commendations and features of good practice from an HEI’s latest Higher Education Review report. [We note that the TEF takes a successful HER as a baseline, but is looking for performance above this level – commendations in the HER report show performance above the baseline].
For the Teaching Quality aspect:
How good practice is shared and a culture of teaching excellence is promoted
How the institution engages with new and different modes of delivery
Ongoing professional development of teaching staff
One area of concern is in the Student Outcomes and Learning Gain aspect, where one of the examples is “learning gain and distance travelled by students”. We are concerned that, with no agreed mechanism to measure this, assessors will not be able to make a judgement on a fair basis. We would therefore suggest that this example is removed and only reintroduced once an agreed system of assessing it is in place.
We believe that it will take at least two or three cycles of the TEF before the methodology is sufficiently robust. However, TEF outcomes will have an immediate reputational effect on HEIs from Year 2 of operation. This opens up the possibility of all sorts of challenge, and commendations will be even harder to defend robustly than the overall judgements. We believe that commendations should only be considered once the metrics and methodology have been tested over a few cycles of the TEF. Although we are not in favour of them, if they do proceed, it would be helpful to clarify whether the Commendations – like the TEF award – also last for three years.
The whole timeframe is tight. Given that guidance will only come out in October 2016, we would prefer that providers are given until the end of January 2017 to make their submissions, but we realise that this squeezes the available time for the assessment.
Whilst the process looks achievable, it would be helpful in the response to this consultation to give some further information about:
The moderation process, to ensure that all assessors are consistent in the levels they are assigning
When and how much of the evidence will be sampled should also be clarified. It will be important that reviewers do not make judgements that are based solely on assertions in the institution's commentary.
This seems the best compromise. It allows those with less of a track record to participate.
NOTE: Although BIS have not asked a specific question about this, we note that as a TEF Year 2 award will stand for three years, HEIs which achieve an “outstanding” rating in Year 2 will have no incentive to take part in Year 3 of the TEF – and many with an “excellent” rating may not choose to do so either. As the TEF is still developing in its early years, it might not be in the interests of BIS to have some of the UK’s best teaching universities not taking part in Year 3. It might consider whether there could be incentives for good universities to take part (e.g. if your Year 3 rating were below your Year 2 one, you could still use your Year 2 rating).
We agree with much of what is in the descriptions but have the following suggestions:
The name “meets expectations” should be changed to “good”. This will still allow differentiation between levels but the term “good” will be less harmful to the reputation of UK universities overseas.
All three of the TEF descriptions should make clear that they are referring only to undergraduate students, until such time in future TEF rounds when postgraduates are included.
The last bullet of “Meets Expectations” refers to the accessibility and reliability of the information it makes available. We are not against this criterion per se but don’t see where in the TEF process it will be assessed. Without clear guidance it could be open to different interpretation by different assessors.
The “Excellent” and “Outstanding” descriptions are the same except for swapping those two words, which does not really explain how an outstanding institution is better than an excellent one.