
Rethinking CPD Evaluation

Updated: Sep 27, 2022



We use professional development to bring about educational improvement, and it’s important that we evaluate whether what we’re doing is working. We have limited resources, both time and money, so it matters that we know our ‘best bets’ are doing what we think they will, and that we understand how they impact pupil outcomes.


I’ve long been frustrated that evaluation of PD is so ‘surface’ – reduced to a Likert-scale questionnaire that asks you, two minutes after you’ve put your notebook away, to rate the impact of a workshop on your teaching. Effective PD evaluation is something I’ve thought about on and off for years now, gathering new bits and adding ideas together as I work on different things. Last weekend I went for it and presented where I’m at to a room packed with people at researchED Birmingham who didn’t argue back at me, so perhaps there’s something in this. I said I’d write about it, so here’s an abridged version.


What are we evaluating?


How can we take what we know about effective CPD and the way we learn to inform how we evaluate and evaluate better?

We are gaining an ever clearer picture of what makes for effective PD. The DfE’s 2016 Standard for Teachers’ Professional Development draws the evidence together into five areas:

  • Focus on pupil outcomes

  • Underpinned by evidence

  • Collaboration and challenge

  • Sustained over time

  • Prioritised by leadership

And the EEF’s 2021 ‘Effective Professional Development’ guidance report presents a series of mechanisms essential for effective PD:

  • Building knowledge

  • Motivating teachers

  • Developing teaching techniques

  • Embedding practice

Of course, one of the challenges of implementation is that the ‘ideal’ never fits all situations. Rob Coe and Stuart Kime wrote in their editorial for Impact in 2021, “…there is very little in education that can be implemented according to a recipe or manual – and remain effective.” and “…implementing complex programmes that require adaptation (i.e. most educational interventions) is unlikely to be successful without effective, real-time evaluation.”


We have an ‘ideal’ process of PD-to-impact, but could the process of PD-to-impact look different for different people? Of course, when something’s different for different people it becomes a step harder to evaluate.


PD looks different for different people


I’ve written before about individual ‘depth of practice’ and routes through the learning process, drawing on (my favourite) table from David Weston and Bridget Clay’s ‘Unleashing Great Teaching’ (2018), and Argyris’s Ladder of Inference. One of my mental rabbit holes has led me to explore different models of learning, and I have been repeatedly drawn to this non-linear, non-sequential model from Clarke and Hollingsworth (2002), which allows space for different routes through processes of reflection and enactment in four domains – external, personal, practice and consequence – and offers an explanation of how different people process the same learning events.


Interconnected model of teacher professional growth (Clarke and Hollingsworth, 2002)

Traditionally, lots of people follow Guskey’s (2000) linear model of evaluation:

  • Participants’ Reaction

  • Participants’ Learning

  • Organisational Support and Change

  • Participants’ Use of New Knowledge and Skills

  • Student Learning Outcomes

I like the detail of this in Guskey’s book, but something has never sat quite right with me about it, and points made by Coldwell and Simkins (2011) get close to addressing my niggles. The model assumes that each successive level is more informative than, caused by, and correlated with the previous one, and that failure comes from inside the PD event. They also point out that ‘organisational support and change’ is more a condition for change than a consequence of it.


Having considered individual, non-linear models of learning, I’m wondering whether PD evaluation itself should be non-linear too.


My cycle of CPD evaluation


Taking a pack of post-its and a table-top, I have developed my (conceptual) non-linear model of PD evaluation. The features of Guskey’s model are still there, but I’ve changed ‘Organisational Support and Change’ to ‘Support and Barriers’ and placed it at the centre of the process. The cycle draws on Clarke and Hollingsworth’s non-sequential pathways of learning and offers an opportunity to take an individual journey of reflection and enactment between domains. My prediction is that this makes evaluation part of the whole cycle, from planning through implementation, and ensures organisational support from the start. It makes it possible to add additional learning, interlink multiple PD threads and adapt programmes to individual needs, and it turns a tick-box process into something more like a checklist. It’s still possible to track Guskey’s traditional model through, but where evaluation shows someone needs to go back or repeat a step, they can. And where someone comes into a PD programme half-way through implementation, their own process has a path.


During my rEDBrum talk I went through each element and offered an explanation of how this looks in practice, but for space reasons I’m just going to focus on my latest area of (obsessive) investigation here.


What is a barrier?


Until recently, I accepted that ‘barriers and support’ were pretty obvious. Now I’ve been thinking about it in more depth, and I’ve started to consider potential types of barrier. Individual, organisational, programme content, external? Is a barrier what happens when ‘what works’ is missing? Are there things we can address in advance?


I found this (amazing) paper by McChesney & Aldridge (2019) where they introduce their ‘conceptual model of the professional development-to-impact trajectory’. They spoke to teachers in Abu Dhabi about their experiences of PD, and what prevents them from enacting what they have learnt, and they demonstrate the relationship between intended, received, accepted and applied PD, with structural, acceptance, implementation and student barriers.


Conceptual model of the professional development-to-impact trajectory (McChesney and Aldridge, 2019)


I see connections with the Ladder of Inference, the routes different people take through the same PD activity, and the influence of internal and external factors, and I have started to map where the PD mechanisms outlined in the EEF’s guidance report are present or missing.


My next step was to think that if we can identify what these barriers are made up of, we can place them onto the evaluation cycle and narrow down what we want to look out for, and when. I’ve started to go through a range of sources to find evidence for different barriers:


Effective CPD

  • The effects of high-quality professional development on teachers and students (Zuccollo and Fletcher-Wood, 2020)

  • Effective teacher professional development: new theory and a meta-analytic test (Sims et al., 2022)

Culture

  • A culture of improvement: reviewing the research on teacher working conditions (Weston et al, 2021)

  • Why do some schools struggle to retain staff? Development and validation of the Teachers’ Working Environment Scale (TWES) (Sims, 2021)

CPD Evaluation

  • Evaluating Professional Development (Guskey, 2000)

  • Level models of continuing professional development evaluation: a grounded review and critique (Coldwell & Simkins, 2011)

  • What gets in the way? A new conceptual model for the trajectory from teacher professional development to impact (McChesney & Aldridge, 2019)

SEND

  • Special Educational Needs in Mainstream Schools guidance report (EEF, 2020)

I’m in my early stages of this process but already have a table of potential barriers that I’ve been able to map onto McChesney & Aldridge’s structural, acceptance, implementation and student barriers.



My CPD evaluation cycle with stages of PD and barriers identified by McChesney and Aldridge (2019)


What it looks like in practice


Here’s where I’m at with the process for now:

  • In advance of the PD event – set the culture/environment and include the PD mechanisms.

  • During the PD event – position evaluation to check against known barriers in the order they’ll arise, and act on what it shows.

  • After the PD event – check against goals.

Not rocket science, but I’m hoping I’m onto something.


There’s plenty more for me to think about. I want to look for other barrier categories – particularly pupil barriers as McChesney and Aldridge only theorised this and I’ve only had a surface look at barriers related to SEND. I want to think about how to support evaluations at scale (can I make pro-formas to make it easier?), and I want to think about ‘support’ in the same way I’ve looked at ‘barriers’. I’m also interested in how to add mechanisms yourself if they aren’t ‘in’ CPD events (and if that’s possible).


Clearly I’ve not got space here to go into as much detail as I did in my rED presentation, and I have a lot more thoughts on this, but I hope this sets out a rationale for my ideas and how my various chains of thought have come together. The benefit of not being able to go into everything in blog form is that hopefully I can present on it again, maybe having thought on some of my next steps…


References (from my rEDBrum presentation, not all cited in this post)

Argyris, C., Putnam, R. and Smith, D.M., 1985. Action Science: Concepts, Methods, and Skills for Research and Intervention. San Francisco: Jossey-Bass.

Clarke, D. and Hollingsworth, H., 2002. Elaborating a model of teacher professional growth. Teaching and Teacher Education, 18(8), pp.947-967.

Coe, R. and Kime, S., 2021. An evidence-based approach to CPD. Journal of the Chartered College of Teaching, 13.

Coldwell, M. and Simkins, T., 2011. Level models of continuing professional development evaluation: a grounded review and critique. Professional Development in Education, 37(1), pp.143-157.

EEF, 2021. Effective Professional Development guidance report. Education Endowment Foundation.

Fishman, B.J., Marx, R.W., Best, S. & Tal, R.T., 2003. Linking teacher and student learning to improve professional development in systemic reform. Teaching and Teacher Education, 19, 643-658.

Guskey, T.R., 2000. Evaluating professional development. Corwin Press.

Guskey, T.R., 2002. Professional development and teacher change. Teachers and Teaching: Theory and Practice, 8, 381-391.

McChesney, K. and Aldridge, J.M., 2019. What gets in the way? A new conceptual model for the trajectory from teacher professional development to impact. Professional Development in Education, 47(5), pp.834-852.

Sims, S., 2021. Why do some schools struggle to retain staff? Development and validation of the Teachers’ Working Environment Scale (TWES). Review of Education, 9(3), p.e3304.

Sims, S., Fletcher-Wood, H., O’Mara-Eves, A., Cottingham, S., Stansfield, C., Goodrich, J., Van Herwegen, J. and Anders, J., 2022. Effective teacher professional development: new theory and a meta-analytic test (No. 22-02). UCL Centre for Education Policy and Equalising Opportunities.

EEF, 2020. Special Educational Needs in Mainstream Schools guidance report. Education Endowment Foundation.

Weston, D., Hindley, B., & Cunningham, M. (2021). A culture of improvement: reviewing the research on teacher working conditions. Working paper version 1.1, February 2021. Teacher Development Trust.

Zuccollo, J. and Fletcher-Wood, H., 2020. The effects of high-quality professional development on teachers and students. Education Policy Institute.


Beth is the Learning and Development Lead for the Raleigh Learning Trust and writes a half termly L&D Bulletin. She is a Teacher Development Trust Associate in CPD Leadership and an ELE with Derby Research School having led the 2019/20 Evidence Champions Programme.

This article has been reproduced with the kind permission of Beth. You can follow Beth on Twitter @bethgg or you can follow her blog here: https://impressionthatiget.wordpress.com/about/

