Sarah Hamilton, Senior Evaluation Manager at EngineeringUK, shares her insights on evaluation ahead of the upcoming Tomorrow's Engineers Live 2023 conference.
If you want to know how someone feels, you have to ask the right question. The questions we ask lead people in different directions. We ask people to think about their lives and their opinions in particular ways, perhaps in ways they haven’t considered before. We assume that they understand what we mean by our question, and that we know what they mean by their answer.
This is even more true when you restrict their answers to a simple Likert scale.
Evaluation frequently relies on asking people questions and restricting their answers so that they can be scored, analysed and compared. When we evaluate STEM outreach activities designed to change young people’s attitudes, beliefs and aspirations, these answers are the best evidence we have that we’ve had any impact in the short term.
So we need to be asking the right questions, in the right ways. Which is easier said than done.
There are many existing instruments that cover a wide range of outcomes that might be of interest to STEM engagement activities – interest in science, engineering career aspiration, knowledge about engineering, engineering-related self-efficacy. Matching the right construct, or constructs, to your activity can be challenging.
Some survey instruments that ask about students’ experiences and circumstances may not be useful as outcome measures, which are designed to quantify change as a result of an intervention. Not every aspect of a young person’s life is open to influence from outside. An activity may be dismissed as ineffective simply because the wrong type of measure was used.
Think about your audience
Even where an outcome measure exists that captures the right construct, you still need to assess whether the questions are right for your participants. Questions developed for students aged 16+ might not be the right questions for students aged 9 to 11. Questions developed in the United States might not make sense to students in the UK, where education systems and cultural norms differ.
Perhaps most importantly, any selected measure needs to be practical for use in your context. Questionnaires measuring complex constructs often include many questions. Alongside demographic questions and programme specific feedback, surveys can easily become unreasonably long, especially if the activity you are evaluating is short. Even if you could persuade your participants to complete it, the chances that they’re still paying attention by the end are pretty slim.
At Tomorrow’s Engineers Live, we’re hoping to find out more about how other organisations are approaching this tricky challenge, and whether there is an opportunity to collaborate on finding just the right tool for the job.