Part 5: Research in practice
- What is program evaluation? (5 minute read time)
- Planning your program evaluation (20 minute read time, including video)
- Process evaluations and implementation science (7 minute read time)
- Outcome and impact evaluations (5 minute read time)
- Ethics and culture in program evaluation (10 minute read time)
Content warning: discussions of BMI/weight/obesity, genocide, and residential schools for Indigenous children.
Imagine you are working for a nonprofit focused on children’s health and wellness in school. One of the grants you received this year funds a full-time position at a local elementary school for a teacher who will be integrating kinesthetic learning into their lesson plans for math classes for third graders. Kinesthetic learning is learning that occurs when the students do something physical to help learn and reinforce information, instead of listening to a lecture or other verbal teaching activity. You have read research suggesting that students retain information better using kinesthetic teaching methods and that it can reduce student behavior issues. You want to know if it might benefit your community.
When you applied for the grant, you had to come up with some outcome measures that would tell the foundation if your program was worth continuing to fund – if it’s having an effect on your target population (the kids at the school). You told the foundation you would look at three outcomes:
- How did using kinesthetic learning affect student behavior in classes?
- How did using kinesthetic learning affect student scores on end-of-year standardized tests?
- How did the students feel about kinesthetic teaching methods?
But, you say, this sounds like research! To understand the difference, we have to look at the purpose, origins, effect, and execution of the project, which we do in section 23.1 of this chapter. Those four domains are where we can find the similarities and differences between program evaluation and research.
Realistically, as a practitioner, you’re far more likely to engage in program evaluation than you are in research. So, you might ask why you are learning research methods and not program evaluation methods, and the answer is that you will use research methods in evaluating programs. Program evaluation tends to focus less on generalizability, experimental design, and replicability, and instead focuses on the practical application of research methods to a specific context in practice.
23.1 What is program evaluation?
Learners will be able to…
- Define program evaluation
- Discuss similarities and differences between program evaluation and research
- Determine situations in which program evaluation is more appropriate than research
Program evaluation can be defined as the systematic process by which we determine if social programs are meeting their goals, how well the program runs, whether the program had the desired effect, and whether the program has merit (including in terms of its monetary costs and benefits). It’s important to know what we mean when we say “evaluation.” Pruett (2000) provides a useful definition: “Evaluation is the systematic application of scientific methods to assess the design, implementation, improvement or outcomes of a program” (para. 1). That nod to scientific methods is what ties program evaluation back to research, as we discussed above. Program evaluation is action-oriented, which makes it fit well into social work research (as we discussed in Chapter 1).
Often, program evaluation will consist of mixed methods because its focus is so heavily on the effect of the program in your specific context. Not that research doesn’t care about the effects of programs – of course it does! But with program evaluation, we seek to ensure that the way we are applying our program works in our agency, with our communities and clients. Thinking back to the example at the beginning of the chapter, consider the following: Does kinesthetic learning make sense for your school? What if your classroom spaces are too small? Are the activities appropriate for children with differing physical abilities who attend your school? What if school administrators are on board, but some parents are skeptical?
The project we talked about in the introduction – a real project, by the way – was funded by a grant from a foundation. The reality of the grant funding environment is that funders want to see that their money is not only being used wisely, but is having a material effect on the target population. This is a good thing, because we want to know our programs have a positive effect on clients and communities. We don’t want to just keep running a program because it’s what we’ve always done. (Consider the ethical implications of continuing to run an ineffective program.) It also forces us as practitioners to plan grant-funded programs with an eye toward evaluation. It’s much easier to evaluate your program when you can gather data at the beginning of the program than when you have to work backwards at the middle or end of the program.
How do program evaluation and research relate to each other?
As we talked about above, program evaluation and research are similar, particularly in that they both rely on scientific methods. Both use quantitative and qualitative methods, like data analysis and interviews. Effective program evaluation necessarily involves the research methods we’ve talked about in this book. Without understanding research methods, your program evaluation won’t be very rigorous and probably won’t give you much useful information.
However, there are some key differences between the two that render them distinct activities that are appropriate in different circumstances. Research is often exploratory and not evaluative at all, and instead looks for relationships between variables to build knowledge on a subject. It’s important to note at the outset that what we’re discussing below is not universally true of all projects. Instead, the framework we’re providing is a broad way to think about the differences between program evaluation and research. Scholars and practitioners disagree on whether program evaluation is a subset of research or something else entirely (and everything in between). The important thing to know about that debate is that it’s not settled, and what we’re presenting below is just one way to think about the relationship between the two.
According to Mathison (2008), the differences between program evaluation and research have to do with the domains of purpose, origins, effect and execution.
| Domain | Program evaluation | Research |
|---|---|---|
| Purpose | Judges merit or worth of the program | Produces generalizable knowledge and evidence |
| Origins | Stems from policy and program priorities of stakeholders | Stems from scientific inquiry based on intellectual curiosity |
| Effect | Provides information for decision-making on specific program | Advances broad knowledge and theory |
| Execution | Conducted within a setting of changing actors, priorities, resources and timelines | Usually happens in a controlled setting |
Let’s think back to our example from the start of the chapter – kinesthetic teaching methods for 3rd grade math – to talk more about these four domains.
Purpose
To understand this domain, we have to ask a few questions: why do we want to research or evaluate this program? What do we hope to gain? This is the why of our project (Mathison, 2008). Another way to think about it is as the aim of your research, which is a concept you hopefully remember from Chapter 2.
Through the lens of program evaluation, we’re evaluating this program because we want to know its effects, but also because our funder probably only wants to give money to programs that do what they’re supposed to do. We want to gather information to determine if it’s worth it for our funder – or for us – to invest resources in the program.
If this were a research project instead, our purpose would be related, but different. We would be seeking to add to the body of knowledge and evidence about kinesthetic learning, most likely hoping to provide information that can be generalized beyond 3rd grade math students. We’re trying to inform further development of the body of knowledge around kinesthetic learning and children. We’d also like to know if and how we can apply this program in contexts other than one specific school’s 3rd grade math classes. These are not the only research considerations, but just a few examples.
Origins
Purpose and origins can feel very similar and be a little hard to distinguish. The main difference is that origins are about the who, whereas purpose is about the why (Mathison, 2008). So, to understand this domain, we have to ask about the source of our project – who wanted to get the project started? What do they hope this project will contribute?
For a program evaluation, the project usually arises from the priorities of funders, agencies, practitioners and (hopefully) consumers of our services. They are the ones who define the purpose we discussed above and the questions we will ask.
In research, the project arises from a researcher’s intellectual curiosity and desire to add to a body of knowledge around something they think is important and interesting. Researchers define the purpose and the questions asked in the project.
Effect
The effect of program evaluation and research is essentially what we’re going to use our results for. For program evaluation, we will use them to decide whether a program is worth continuing, what changes we might make to the program in the future, or how we might change the resources we devote going forward. The results are often also used by our funders to decide whether they want to keep funding our program. (Outcome evaluations aren’t the only thing funders will look at – they sometimes also want to know whether our processes in the program were faithful to what we described when we requested funding. We’ll discuss process evaluations in section 23.3 and outcome evaluations in section 23.4.)
The effect of research – again, what we’re going to use our results for – is typically to add to the knowledge and evidence base surrounding our topic. Research can certainly be used for decision-making about programs, especially to decide which program to implement in the first place. But that’s not what results are primarily used for, especially by other researchers.
Execution
Execution is fundamentally the how of our project. What are the circumstances under which we’re running the project?
The program evaluation projects most of us will work on are based in nonprofit or government agencies. Context is extremely important in program evaluation (and program implementation). As most of us know, these are environments with lots of moving parts. As a result, running controlled experiments is usually not possible, and we sometimes have to be more flexible with our evaluations to work with the resources we actually have and the unique challenges and needs of our agencies. This doesn’t mean that program evaluations can’t be rigorous or use strong research methods. We just have to be realistic about our environments and plan for that when we’re planning our evaluation.
Research is typically a lot more controlled. We do everything we can to minimize outside influences on our variables of interest, which is expected of rigorous research. Some research is extremely controlled, especially experimental research and randomized controlled trials. This all ties back to the purpose, origins, and effects of research versus those of program evaluation – we’re primarily building knowledge and evidence.
In the end, it’s important to remember that these are guidelines, and you will no doubt encounter program evaluation projects that cross the lines of research, and vice versa. Understanding how the two differ will help you decide how to move forward when you encounter the need to assess the effect of a program in practice.
Key Takeaways
- Program evaluation is a systematic process that uses scientific research methods to determine the effects of social programs.
- Program evaluation and research are similar, but they differ in purpose, origins, effect and execution.
- The purpose of program evaluation is to judge the merit or worth of a program, whereas the purpose of research is primarily to contribute to the body of knowledge around a topic.
- The origins of program evaluation are usually funders and people working in agencies, whereas research originates primarily with scholars and their scientific interests.
- Program evaluations are typically used to make decisions about programs, whereas research is used to add to the knowledge and evidence base around a topic.
- Executing a program evaluation project requires a strong understanding of your setting and context in order to adapt your evaluation to meet your goals in a realistic way. The execution of research is much more controlled and seeks to minimize the influence of context.
Exercises
- If you were conducting a research project on the kinesthetic teaching methods that we talked about in this chapter, what is one research question you could study that aligns with the purpose, origins, and effects of research?
- Consider the research project you’ve been building throughout this book. What is one program evaluation question you could study that aligns with the purpose, origins, and effects of program evaluation? How might its execution look different than what you’ve envisioned so far?
23.2 Planning your program evaluation
Learners will be able to…
- Discuss how planning a program evaluation is similar and different from planning a research project
- Identify program stakeholders
- Identify the basics of logic models and how they inform evaluation
- Produce evaluation questions based on a logic model
Planning a program evaluation project requires just as much care and thought as planning a research project. But as we discussed in section 23.1, there are some significant differences between program evaluation and research that mean your planning process is also going to look a little different. You have to involve program stakeholders more deeply than in most types of research, which will sometimes focus your program evaluation project on areas you wouldn’t necessarily have chosen (for better or worse). Your program evaluation questions are far less likely to be exploratory; they are typically evaluative and sometimes explanatory.
For instance, I worked on a project designed to increase physical activity for elementary school students at recess. The school had noticed a lot of kids would just sit around at recess instead of playing. As an intervention, the organization I was working with hired recess coaches to engage the kids with new games and activities to get them moving. Our plan to measure the effect of recess coaching was to give the kids pedometers at a couple of different points during the year and see if there was any change in their activity level, as measured by the number of steps they took during recess. However, the school was also concerned with the rate of obesity among students, and asked us to also measure the height and weight of the students to calculate BMI at the beginning and end of the year. I balked at this because kids are still growing, BMI isn’t a great measure to use with kids, and some kids were uncomfortable with us weighing them, even though no other kids would be in the room. However, the school was insistent that we take those measurements, and so we did so for all kids whose parents consented and who themselves assented to having their weight measured. We didn’t think BMI was an important measure, but the school did, so this changed an element of our evaluation.
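For reference, BMI itself is a simple calculation of weight relative to height. Part of why it’s a weak measure for children is that the raw number isn’t interpreted directly for kids the way it is for adults; instead, it has to be compared against age- and sex-specific growth percentiles, and children’s numbers shift as they grow. The calculation looks like this (the example numbers are hypothetical, just to illustrate the arithmetic):

```latex
% Body mass index: weight in kilograms divided by height in meters, squared
\mathrm{BMI} = \frac{\text{weight (kg)}}{\bigl(\text{height (m)}\bigr)^{2}}

% Hypothetical example: a child weighing 30 kg who is 1.3 m tall
\mathrm{BMI} = \frac{30}{(1.3)^{2}} = \frac{30}{1.69} \approx 17.8
```

For an adult, 17.8 would read as "underweight," but for a child that same number could be perfectly typical depending on age and sex – which illustrates the measurement problem the school’s request created.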
In an ideal world, your program evaluation is going to be part of your overall program plan. This very often doesn’t happen in practice, but for the purposes of this section, we’re going to assume you’re starting from scratch with a program and have really internalized the first sentence of this paragraph. (It’s important to note that no one intentionally leaves evaluation out of their program planning; it’s just not something many people running programs think about. They’re too busy… well, running programs. That’s why this chapter is so important!)
In this section, we’re going to learn about how to plan your program evaluation, including the importance of logic models. You may have heard people groan about logic models (or you may have groaned when you read those words), and the truth is, they’re a lot of work and a little complicated. Teaching you how to make one from start to finish is a little bit outside the scope of this section, but what I am going to try to do is teach you how to interpret them and build some evaluation questions from them. (Pro-tip: logic models are a heck of a lot easier to make in Excel than Word.)
Planning your evaluation has three primary steps: engaging stakeholders, describing the program, and focusing the evaluation.
Step 1: Engage stakeholders
Stakeholders are the people and organizations that have some interest in, or will be impacted by, our program. Including as many stakeholders as possible when you plan your evaluation will help make it useful to as many people as possible. The key to this step is to listen. However, a note of caution: sometimes stakeholders have competing priorities, and as the program evaluator, you’re going to have to help navigate that. For example, in our kinesthetic learning program, the teachers at your school might be interested in decreasing classroom disruptions or enhancing subject matter learning, while the administration is solely focused on test scores. Here is where it’s a great idea to use your social work ethics and research knowledge to guide conversations and planning. Improved test scores are great, but how much do they actually benefit the students?
Step 2: Describe the program
Once you’ve got stakeholder input on evaluation priorities, it’s time to describe what’s going into the program and what you hope your participants and stakeholders will get out of it. Here is where a logic model becomes an essential piece of program evaluation. A logic model “is a graphic depiction (road map) that presents the shared relationships among the resources, activities, outputs, outcomes, and impact for your program” (Centers for Disease Control, 2018, para. 1). Basically, it’s a way to show how what you’re doing is going to lead to an intended outcome and/or impact. (We’ll discuss the difference between outcomes and impacts in section 23.4.)
Logic models have several key components, which I describe in the list below (CDC, 2018). The components are numbered because of where they come in the “logic” of your program – basically, where they come in time order.
1. Inputs: resources (e.g., people and material resources) that you have to execute your program.
2. Activities: what you’re actually doing with your program resources.
3. Outputs: the direct products and results of your program.
4. Outcomes: the changes that happen because of your program inputs and activities.
5. Impacts: the long-term effects of your program.
The CDC also talks about moderators – what they call “contextual factors” – that affect the execution of your program evaluation. This is an important component of the execution of your project, which we talked about in 23.1. Context will also become important when we talk about implementation science in section 23.3.
Let’s think about our kinesthetic learning project. While you obviously don’t have full information about what the project looks like, you’ve got a good enough idea to sketch out each of these components for yourself.
Step 3: Focus the evaluation
So now you know what your stakeholder priorities are and you have described your program. It’s time to figure out what questions you want to ask that will reflect stakeholder priorities and are actually possible to answer given your program inputs, activities, and outputs.
Why do inputs, activities and outputs matter for your question?
- Inputs are your resources for the evaluation – do you have to do it with existing staff, or can you hire an expert consultant? Realistically, what you ask is going to be affected by the resources you can dedicate to your evaluation project, just like in a research project.
- Activities are what you can actually evaluate – for instance, what effect does using hopscotch to teach multiplication have?
- And finally, outputs are most likely your indicators of change – student referrals to administrators for behavioral issues or end-of-grade math test scores, for example.
Key Takeaways
- Program evaluation planning should be as rigorous as research planning, but will most likely focus more on stakeholder input and evaluative questions.
- The three primary steps in planning a program evaluation project are engaging stakeholders, describing your program, and focusing your evaluation.
- Logic models are a key piece of information in planning program evaluation because they describe how a program is designed to work and what you are investing in it, which are important factors in formulating evaluation questions.
Exercises
- Imagine your research project is a program evaluation project.
- Who would the key stakeholders be? What is each stakeholder’s interest in the project?
- What are the activities (the action(s) you’re evaluating) and outputs (data/indicators) for your program? Can you turn them into an evaluation question?
23.3 Process evaluation and implementation science
Learners will be able to…
- Define process evaluation
- Explain why process evaluation is important for programs
- Distinguish between process and outcome measures
- Explain the purpose of implementation science and how it relates to program evaluation
Something we often don’t have time for in practice is evaluating how things are going internally with our programs. How’s it going with all the documentation our agency asks us to complete? Is the space we’re using for our group sessions facilitating client engagement? Is the way we communicate with volunteers effective? All of these things can be evaluated using a process evaluation, which is an analysis of how well your program ended up running, and sometimes how well it’s going in real time. If you have the resources and ability to complete one of these analyses, I highly recommend it – even if it stretches your staff, it will often result in a greater degree of efficiency in the long run. (Evaluation should, at least in part, be about the long game.)
From a research perspective, process evaluations can also help you find irregularities in how you collect data that might be affecting your outcome or impact evaluations. Like other evaluations, ideally, you’re going to plan your process evaluation before you start the project. Take an iterative approach, though, because sometimes you’re going to run into problems you need to analyze in real time.
The RAND Corporation is an excellent resource for guidance on program evaluation, and they describe process evaluations this way: “Process evaluations typically track attendance of participants, program adherence, and how well you followed your work plan. They may also involve asking about satisfaction of program participants or about staff’s perception of how well the program was delivered. A process evaluation should be planned before the program begins and should continue while the program is running” (RAND Corporation, 2019, para. 1).
There are several key data sources for process evaluations (RAND Corporation, 2019), some of which are listed below.
- Participant data: can help you determine if you are actually reaching the people you intend to.
- Focus groups: how did people experience the program? How could you improve it from the participant perspective?
- Satisfaction surveys: did participants get what they wanted from the program?
- Staff perception data: How did the program go for staff? Were expectations realistic? What did they see in terms of qualitative changes for participants?
- Program adherence monitoring: how well did you follow your program plans?
Using these data sources, you can learn lessons about your program and make any necessary adjustments if you run the program again. It can also give you insights about your staff’s needs (like training, for instance) and enable you to identify gaps in your programs or services.
Implementation science: The basics
A further development of process evaluations, implementation science is “the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice, and, hence, to improve the quality and effectiveness of health services” (Bauer, Damschroder, Hagedorn, Smith & Kilbourne, 2015).
Put more plainly, implementation science studies how we put evidence-based interventions (EBIs) into practice – essentially a form of process evaluation at a more macro level. It’s a relatively new field of study, and it’s important because it helps us analyze, on a macro level, the factors that might affect our ability to implement a program. Implementation science focuses on the context of program implementation, which has significant implications for program evaluation.
A useful framework for implementation science is the EPIS (Exploration, Preparation, Implementation and Sustainment) framework. It’s not the only one out there, but I like it because to me, it sort of mirrors the linear nature of a logic model.
The EPIS framework was developed by Aarons, Hurlburt and Horwitz (first published 2011). (The linked article is behind a paywall, but the abstract is still pretty useful, and if you’re affiliated with a college or university, you can probably get access through your library.) This framework emphasizes the importance of the context in which your program is being implemented – the inner (organizational) context and the outer (political, public policy, and social) context. What’s happening in your organization and in the larger political and social sphere that might affect how your program gets implemented?
There are a few key questions in each phase, according to Aarons, Hurlburt and Horwitz (2011):
- Exploration: what is the problem or issue we want to address? What are our options for programs and interventions? What is the best way to put them into practice? What is the organizational and societal context that we need to consider when choosing our option?
- Preparation: which option do we want to adopt? What resources will we need to put that option into practice? What are our organizational or sociopolitical assets and challenges in putting this option into practice?
- Implementation: what is actually happening now that we’re putting our option into practice? How is the course of things being affected by contexts?
- Sustainment: what can we do to ensure our option remains viable, given competing priorities with funding and public attention?
Implementation science is a new and rapidly advancing field, and realistically, it’s beyond what a lot of us are going to be able to evaluate in our agencies at this point. But even taking pieces of it – especially the pieces about the importance of context for our programs and evaluations – is useful. Even if you don’t use it as an evaluative framework, the questions outlined above are good ones to ask when you’re planning your program in the first place.
Key Takeaways
- A process evaluation is an analysis of how your program actually ran, and sometimes how it’s running in real time.
- Process evaluations are useful because they can help programs run more efficiently and effectively and reveal agency and program needs.
- The EPIS model is a way to analyze the implementation of a program that emphasizes distinct phases of implementation and the context in which the phases happen.
- The EPIS model is also useful in program planning, as it mirrors the linear process of a logic model.
Exercises
- Consider your research project or, if you have been able to adapt it, your program evaluation project. What are some inner/organizational context factors that might affect how the program gets implemented and what you can evaluate?
- What are some things you would want to evaluate about your program’s process? What would you gain from that information?
23.4 Outcome and impact evaluations
Learners will be able to…
- Define outcome
- Explain the principles of conducting an outcome evaluation
- Define impact
- Explain the principles of conducting an impact evaluation
- Explain the difference between outcomes and impacts
A lot of us will use “outcome” and “impact” interchangeably, but the truth is, they are different. An outcome is the final condition that occurs at the end of an intervention or program. It is the short-term effect – for our kinesthetic learning example, perhaps an improvement over last year’s end-of-grade math test scores. An impact is the long-term condition that occurs at the end of a defined time period after an intervention. It is the longer-term effect – for our kinesthetic learning example, perhaps better retention of math skills as students advance through school. Because of this distinction, outcome and impact evaluations are going to look a little different.
But first, let’s talk about how these types of evaluations are the same. Outcome and impact evaluations are all about change. As a result, we have to know what circumstance, characteristic or condition we are hoping will change because of our program. We also need to figure out what we think the causal link between our intervention or program and the change is, especially if we are using a new type of intervention that doesn’t yet have a strong evidence base.
For both of these types of evaluations, you have to consider what type of research design you can actually use in your circumstances – are you coming in when a program is already in progress, so you have no baseline data? Or can you collect baseline data to compare to a post-test? For impact evaluations, how are you going to track participants over time?
The main difference between outcome and impact evaluation is the timing and, consequently, the difficulty and level of investment. You can pretty easily collect outcome data from program participants at the end of the program. But tracking people over time, especially for the populations social workers serve, can be extremely difficult. It can also be difficult or impossible to control for whatever happened in your participants’ lives between the end of the program and the end of your long-term measurement period.
Impact evaluations require careful planning to determine how your follow-up is going to happen. It’s a good practice to try to keep intermittent contact with participants, even if you aren’t taking a measurement at that time, so that you’re less likely to lose track of them.
Key Takeaways
- Outcomes are short-term effects that can be measured at the end of a program.
- Outcome evaluations apply research methods to the analysis of change during a program and try to establish a logical link between program participation and the short-term change.
- Impacts are long-term effects that are measured after a period of time has passed since the end of a program.
- Impact evaluations apply research methods to the analysis of change after a defined period of time has passed after the end of a program and try to establish a logical link between program participation and long-term change.
Exercises
- Is each of the following examples an outcome or an impact? Choose the correct answer.
23.5 Ethics and culture in program evaluation
Learners will be able to…
- Discuss cultural and ethical issues to consider when planning and conducting program evaluation
- Explain the importance of stakeholder and participant involvement to address these issues
In a now decades-old paper, Stake and Mabry (1998) point out, “The theory and practice of evaluation are of little value unless we can count on vigorous ethical behavior by evaluators” (p. 99). I know we always say to use the most recent scholarship available, but this point is as relevant now as it was over 20 years ago. One thing they point out that rings particularly true for me as an experienced program evaluator is the idea that we evaluators are also supposed to be “program advocates” (p. 99). We have to work through political and ideological differences with our stakeholders, especially funders – differences that, while sometimes present in research, are especially salient for program evaluation given its origins.
There is no rote answer to these ethical questions, just as there is none for the practice-based ethical dilemmas your instructors hammer home in your classes. You need to draw on both research ethics and social work ethics to work through them. Ultimately, do your best to maintain rigor while meeting stakeholder needs.
One of the most important ethical issues in program evaluation is the implication of not evaluating your program. Providing an ineffective intervention to people can be extremely harmful. And what happens if our intervention actually causes harm? It’s our duty as social workers to explore these issues and not just keep doing what we’ve always done because it’s expedient or guarantees continued funding. I’ve evaluated programs that turned out to be ineffective but were required by state law to be delivered to a certain population. Continuing an ineffective program is not just potentially harmful to clients; it also wastes precious resources that could be devoted to other, more effective programs.
We’ve talked throughout this book about ethical issues in research. All of that is applicable to program evaluation too. Federal law governing IRB practice does not require IRB review for program evaluation that is not intended to produce generalizable knowledge, so IRB approval isn’t a given for these projects. As a result, you’re even more responsible for ensuring that your project is ethical.
Ultimately, social workers should start from a place of humility in the face of cultures or groups of which we are not a part. Cultural considerations in program evaluation look similar to those in research. Something to consider about program evaluation, though: is it your duty to point out potential cultural humility issues as part of your evaluation, even if you’re not asked to? I’d argue that it is.
It is also important that we make sure our definition of success is not oppressive. For example, the Australian government undertook a program to remove Aboriginal children from their families and assimilate them into white culture. The program was viewed as successful at the time, but its measures of success were based on oppressive beliefs and stereotypes. This is why stakeholder input is essential: especially if you’re not a member of the group you’re evaluating, stakeholders are the ones who will tell you when you need to reconsider what “success” means.
Unrau, Gabor, and Grinnell (2007) identified several important factors to consider when designing and executing a culturally sensitive program evaluation. First, evaluators need “a clear understanding of the impact of culture on human and social processes generally and on evaluation processes specifically and… skills in cross-cultural communications to ensure that they can effectively interact with people from diverse backgrounds” (p. 419). These are also essential skills in social work practice that you are hopefully learning in your other classes! We should strive to learn as much as possible about the cultures of our clients when they differ from ours.
The authors also point out that evaluators need to be culturally aware and make sure the way they plan and execute their evaluations isn’t centered on their own ethnic experience and that they aren’t basing their plans on stereotypes about other cultures. In addition, when executing our evaluations, we have to be mindful of how our cultural background affects our communication and behavior, because we may need to adjust these to communicate (both verbally and non-verbally) with our participants in a culturally sensitive and appropriate way.
Consider also that the type of information on which you place the most value may not match that of people from other cultures. Unrau, Gabor, and Grinnell (2007) point out that mainstream North American cultures place a lot of value on hard data and rigorous processes like clinical trials. (You might notice that we spend a lot of time on this type of information in this textbook.) According to the authors, though, cultures from other parts of the world value relationships and storytelling as evidence and important information. This kind of information is as important and valid as what we are teaching you to collect and analyze in most of this book.
Being the squeaky wheel about evaluating programs can be uncomfortable. But as you go into practice (or grow in your current practice), I strongly believe it’s your ethical obligation to push for evaluation. It honors the dignity and worth of our clients. My hope is that this chapter has given you the tools to talk about it and, ultimately, execute it in practice.
- Ethical considerations in program evaluation are very similar to those in research.
- Culturally sensitive program evaluation requires evaluators to learn as much as they can about cultures different from their own and develop as much cultural awareness as possible.
- Stakeholder input is always important, but it’s essential when planning evaluations for programs serving people from diverse backgrounds.
- Consider the research project you’ve been working on throughout this book. Are there cultural considerations in your planning that you need to think about?
- If you adapted your research project into a program evaluation, what might some ethical considerations be? What ethical dilemmas could you encounter?
- “School children” by Prato is licensed under CC BY-NC-ND 4.0
- “IMG_6705” by swayinglights is licensed under CC BY-ND 4.0 © Michael R. Shaughnessy
- “Two colleagues, a transgender woman and a non-binary person, laughing in a meeting at work.” by Zackary Drucker is licensed under CC BY-NC-ND 4.0
- “Winding Road” by dirk1812 is licensed under CC BY-NC-SA 4.0
- “Native american dancer” by Alan Berning is licensed under CC BY-NC-SA 4.0
- Pruett, R. (2000). Program evaluation 101. Retrieved from https://mainweb-v.musc.edu/vawprevention/research/programeval.shtml ↵
- Mathison, S. (2007). What is the difference between research and evaluation—and why do we care? In N. L. Smith & P. R. Brandon (Eds.), Fundamental issues in evaluation (pp. 183-196). New York: Guilford. ↵
- RAND Corporation. (2020). Step 07: Process evaluation. Retrieved from https://www.rand.org/pubs/tools/TL259/step-07.html. ↵
- RAND Corporation. (2020). Step 07: Process evaluation. Retrieved from https://www.rand.org/pubs/tools/TL259/step-07.html. ↵
- Bauer, M., Damschroder, L., Hagedorn, H., Smith, J. & Kilbourne, A. (2015). An introduction to implementation science for the non-specialist. BMC Psychology, 3(32). ↵
- Aarons, G., Hurlburt, M. & Horwitz, S. (2011). Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health and Mental Health Services Research, 38(1), pp. 4-23. ↵
- Stake, R. & Mabry, L. (1998). Ethics in program evaluation. Scandinavian Journal of Social Welfare, 7(2). ↵
- Unrau, Y., Gabor, P. & Grinnell, R. (2007). Evaluation in social work: The art and science of practice. New York, New York: Oxford University Press. ↵
- Unrau, Y., Gabor, P. & Grinnell, R. (2007). Evaluation in social work: The art and science of practice. New York, New York: Oxford University Press. ↵
The systematic process by which we determine if social programs are meeting their goals, how well the program runs, whether the program had the desired effect, and whether the program has merit according to stakeholders (including in terms of the monetary costs and benefits)
individuals or groups who have an interest in the outcome of the study you conduct
the people or organizations who control access to the population you want to study
The people and organizations that have some interest in or will be affected by our program.
A graphic depiction (road map) that presents the shared relationships among the resources, activities, outputs, outcomes, and impact for your program
An analysis of how well your program ended up running, and sometimes how well it's going in real time.
The scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice, and, hence, to improve the quality and effectiveness of health services.
The final condition that occurs at the end of an intervention or program.
The long-term condition that occurs at the end of a defined time period after an intervention.