Tag Archives: Evaluation

Evaluating effectiveness of teaching

During the 24th February seminar, we responded to the question: “What do we need to know in order to teach well?” The answers relate to the Core Knowledge section (K1-K6) of the UKPSF 2011.

What do we need to know in order to teach well?

When comparing our answers with K5, “Methods for evaluating the effectiveness of teaching”, we had mentioned “making success criteria visible to ourselves and each other” but did not explore the point further.

In a blog post, I compared educational evaluation to marketing effectiveness; interestingly, the two fields present similar challenges.

Hummelbrunner and Reynolds (2013) overlaid systems thinking with learning loops. Their framework for rigour in evaluation demonstrates that progression towards triple loop learning is determined by “applying progressively wider measures of value”. However, “Often only one specific level might be feasible or can be appropriately attained.”

A systems-based framework for rigor in evaluation (Hummelbrunner and Reynolds, 2013)

Interestingly, UAL’s former marking criteria for Subject Knowledge denoted the following as an ‘A’ grade: “Contributes to the subject debate by assimilating knowledge into a personal hypothesis (or elements of/ the beginning of one)”. This suggests reaching a level of criticality akin to triple loop learning (Hummelbrunner and Reynolds, 2013). However, the current Assessment Criteria do not include this, stating only: “Excellent evidence of: Critical analysis of a range of practical, theoretical and/or technical knowledge(s).” (UAL, 2019)

Given that the Assessment Criteria no longer indicate this, we must consider how we evaluate learning loops and values. Perhaps the Learning Outcomes should fill this gap. With the current preference for succinct Assessment Briefs rather than extensive Unit Handbooks, we are required to produce additional explanatory documents to clarify grade levels to students – and to tutors. (I previously wrote a blog post about the need to streamline the Assessment Brief.) I don’t have a specific solution, but this exploration encourages me to work towards a more rigorous method of evaluating the effectiveness of teaching.

References

Hummelbrunner, R. and Reynolds, M. (2013) Systems thinking, learning and values in evaluation. Evaluation Connections: The European Evaluation Society Newsletter, June 2013, pp.9-10.

The UK Professional Standards Framework (2011) Available at: https://s3.eu-west-2.amazonaws.com/assets.creode.advancehe-document-manager/documents/hea/private/resources/ukpsf_2011_english_1568036916.pdf (Accessed: 17 March 2023).

UAL (2019) New Assessment Criteria. Available at: https://www.arts.ac.uk/students/stories/new-assessment-criteria3 (Accessed: 17 March 2023).

Evaluating Effectiveness

During the onsite sessions on Friday 24th February, we were tasked with reviewing the UK Professional Standards Framework 2011. As a group, we mapped out our responses to the question: “What do we need to know in order to teach well?” These answers are connected to the Core Knowledge section (K1-K6) of the UKPSF 2011.

Mind map in response to Core Knowledge

Most of the ideas generated concerned our interaction with students and understanding their needs and context. There were also ideas about communication and how we scaffold information to take different learning styles into account. When we compared our ideas against the Core Knowledge points, K4–K6 were not fully addressed. K5 is of particular interest to me. It states: “Methods for evaluating the effectiveness of teaching”. While we had included a point in our mind map that said, “Making success criteria visible to ourselves and each other”, we had not considered what success criteria might mean. While I use evaluation methods in my teaching, they tend to cover broader topics, encompassing a student’s entire experience on a unit or at UAL as a whole, through the CSS and NSS.

Effectiveness is a term used in the marketing industry (the subject area I teach); it even has its own awards – the Effie Awards – to identify best practice. However, there is much debate about how marketing effectiveness is measured and attributed. The renowned marketing effectiveness experts Les Binet and Peter Field wrote The Long and the Short of It, a publication which argues that effectiveness must be measured in terms of short-term marketing activation as well as long-term growth through brand building. Marketers are advised to apply the 60/40 rule, where 60% of budget should be allocated to long-term brand building and only 40% to short-term activations.
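As an illustration only, the 60/40 rule amounts to simple arithmetic. The sketch below is mine, not Binet and Field’s; the budget figure and function name are hypothetical:

```python
def split_budget(total, brand_share=0.60):
    """Divide a budget between long-term brand building and
    short-term activation, per the 60/40 rule of thumb."""
    brand = total * brand_share
    activation = total - brand
    return brand, activation

# A hypothetical £100,000 marketing budget under the 60/40 rule.
brand, activation = split_budget(100_000)
print(f"Brand building: £{brand:,.0f}, Activation: £{activation:,.0f}")
```

The point of the rule is not the exact figures but the discipline of deliberately weighting investment towards the long term – the discipline I go on to question in a teaching context below.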

If we were to apply this concept to the higher education context, our methods of evaluating effective teaching would look quite different. Currently, our main method of evaluating the effectiveness of teaching is assessment, where students demonstrate how well they have learned. However, this is not the same as evaluating teaching, as there is no guarantee that students have learned well because of good teaching; they may have prior experience of completing successful assignments, or they may have read widely to understand the subject area. We have no direct method of evaluating effective teaching. Evaluating effectiveness is problematic in teaching in the same way that it is problematic in marketing: what can be measured and what should be measured are not the same thing, and lines of attribution are opaque.

We use unit evaluation surveys to assess how students felt about the unit as a whole. The results are used to identify successes and failures and to improve the unit for the next iteration. In my experience, the successes and failures of a unit are already known before students complete the survey: points about lack of communication or confusing lectures tend to be raised during the unit itself, and the survey does not include questions specifically related to effective teaching.

It would be interesting to develop survey questions that could assess teaching effectiveness directly.

Additionally, Binet and Field’s 60/40 rule could be considered as a model for long-term success. If we were to view teaching as the 40%, i.e. short-term activation, lectures, seminars and tutorials would be the moments where we interact with students and capture their attention. We could evaluate the effectiveness of this portion through updated surveys, as suggested above. However, there is little provision for examining long-term effectiveness, as we do not examine trends over time. One could argue that the FMP (Final Major Project) is the culmination of knowledge, and that the effectiveness of all the teaching over the previous three years could be evaluated at this point. However, this concept is flawed because, once again, it would equate effective teaching with effective learning. Another easy measure would be the number of ‘good degrees’ awarded; once again, a laughable concept given the way degrees are calculated and widespread grade inflation practices. We are reminded here that what can be measured does not equate to what should be measured.

A new way to evaluate effectiveness over time might be to create an extensive research methodology, undertaken shortly after students graduate. Students could identify what information resonates from their whole degree programme, whose lectures they remember, and what will stick with them for the rest of their lives. A comprehensive research design would be required, including qualitative and quantitative methods. The results could be linked to performance reviews, and it could become a way for lecturers to stay motivated over the long term, rather than focusing only on the short-term rewards of unit completion.