>> Welcome back to our video series
on evaluating health adaptation for a changing climate.
In this video, we use our real world scenarios
to illustrate best practices for focusing evaluation designs.
Focusing our evaluation design is the third step in the CDC evaluation framework.
This is where we determine exactly what parts of our adaptation action to evaluate and how.
In step three, stakeholders use a participatory process to identify evaluation questions
and determine how they will be answered,
always keeping in mind how they will use these answers to improve
or enhance their adaptation actions.
Working together, stakeholders decide on the evaluation study design,
appropriate indicators, and analysis techniques.
They also agree on dissemination and action plans.
Focusing an evaluation design for health adaptations can be complicated.
For example, over time, climate change may shift our baselines.
Additionally, it may be hard
to develop indicators appropriate for the scale of our adaptation.
We must also account for complicated relationships
or influences in our study designs.
We can overcome challenges in focusing evaluation for health adaptations
by following three primary practices: map to your model, mix methods, and be flexible.
We use logic models so we can consider the full breadth and depth of our adaptation,
and thus which components could be evaluated.
This process involves convening stakeholders to directly reference specific pieces
of their logic model as they answer key focus questions, such as what do we want to learn,
and how will we use this information.
This is often the first step in designing an evaluation
and helps us balance our stakeholders' diverse interests.
Given the issues of time, scale and complexity, it is often a good idea to include a mix
of both qualitative and quantitative data collection techniques,
as well as a variety of data sources.
Doing so can maximize the quality of our evaluation data.
Being open to change as we develop our evaluation design
with stakeholders can mean returning to previous steps in the evaluation framework.
This can entail identifying additional stakeholders,
or making changes to the logic model.
While this may set back the evaluation timeline, it will lead to better use of resources,
including time and stakeholder investment, and ultimately produce a more meaningful evaluation.
Let's now check in with our practitioners
to see how they are using these practices to complete step three.
Cassandra and her stakeholders begin focusing their evaluation using their logic model.
After talking about what they want to learn and what's feasible given the scale
of the greening initiative, as well as the timing and complexity of their outcomes,
they decide to build evaluation into a pilot project.
The evaluation will focus on construction, maintenance, and the effectiveness
of the pilot sites in diverting stormwater.
Cassandra and her stakeholders develop an evaluation plan to examine the construction
and maintenance of the sites, using administrative records and on site observations.
To keep health as the focus of her evaluation, Cassandra looks to the literature
and finds an indicator shown to be associated with a reduction in waterborne illness.
She and her stakeholders decide to use this to evaluate the effectiveness of the adaptation.
Cassandra and her stakeholders continue focusing their evaluation design by identifying all
of the indicators they plan to collect and when they will collect them.
For example, they will measure runoff both before
and after the rain garden adaptation to calculate the percent reduction.
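As a rough illustration only, that before-and-after indicator might be computed as in the following sketch; the runoff volumes are placeholder values, not measurements from the scenario.

# Sketch: computing the percent reduction in stormwater runoff at a pilot site.
# The runoff volumes below are hypothetical placeholders, not real measurements.
runoff_before_liters = 12000.0  # runoff measured before the rain garden was installed
runoff_after_liters = 7800.0    # runoff measured after installation

percent_reduction = (runoff_before_liters - runoff_after_liters) / runoff_before_liters * 100
print(f"Percent reduction in runoff: {percent_reduction:.1f}%")  # prints 35.0%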
Elaine, our state epidemiologist working on wildfire preparedness,
begins to focus her evaluation, taking
into consideration her stakeholders' diverse interests.
During early discussions between state leadership and county partners,
they came to a consensus on the usefulness of qualitative data, such as interview content
and spoken feedback during trainings, for their respective needs and concerns.
Given the challenge of face-to-face meetings, Elaine plans a series of virtual meetings
to start a participatory process for identifying the mix of methods she
and her county partners could use to improve the volume of qualitative data collected.
During their virtual meetings, Elaine and her county partners use their logic model
to map how far they've gotten with the different interventions.
This helps them understand what evaluation questions would be most feasible and relevant.
Considering the process and outcome data her county stakeholders could collect,
Elaine lists some methods that could be used to ensure the qualitative data are of high quality.
She sends this list to county partners along with a description
of what is involved with each method.
She also distributes a brief survey asking counties
to rank the feasibility of using each method.
After she tallies the results from her survey, Elaine begins to sketch
out the evaluation design she envisions, alongside the selected methods.
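Purely as an illustration, a tally like that could be sketched as follows; the county names, methods, and rankings here are hypothetical, not from the scenario.

# Sketch: tallying county feasibility rankings for candidate qualitative methods.
# Lower rank means more feasible; the method names and responses are hypothetical.
county_rankings = {
    "County A": {"interviews": 1, "focus groups": 2, "training feedback forms": 3},
    "County B": {"interviews": 2, "focus groups": 3, "training feedback forms": 1},
    "County C": {"interviews": 1, "focus groups": 3, "training feedback forms": 2},
}

totals = {}
for ranks in county_rankings.values():
    for method, rank in ranks.items():
        totals[method] = totals.get(method, 0) + rank

# List methods from most to least feasible (lowest total rank first).
for method, total in sorted(totals.items(), key=lambda item: item[1]):
    print(f"{method}: total rank {total}")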
Her next step is to convene a meeting with her state leaders and county partners
to collectively develop an evaluation plan that looks at the effectiveness of the outreach,
training, and resource assistance wildfire interventions.
Let's check in on Jackson's progress planning an evaluation
of home health aides' training to protect the elderly from heat.
After discussing several options, Jackson and his stakeholder group home
in on the logic model paths they would like their evaluation to focus on.
While discussing the study design, one of his community stakeholders mentions
that the training module they added is already part
of an existing cohort study they could possibly join.
In order to join, however, they would need a strong evaluation design.
Jackson and his group discuss the advantages of a cohort design for their evaluation,
given the seasonal nature of their heat exposure and the swift predictable onset
of heat illness during the heat season.
Additionally, because the broader cohort study is already established,
they could expect to avoid some of the initial cost and logistics of starting a new study.
Getting to work on their evaluation plan,
Jackson and his stakeholders begin adding components to their logic model
to illustrate the path their control group would take.
Jackson and his working group follow these paths and map the indicators accordingly.
They also reflect on what past experience tells them about the data availability
and validity of those indicators.
Thinking about their ultimate outcome, reduced heat illness,
they are reminded of some data quality concerns.
A recent quality check of their county surveillance system revealed
that the hospital data was less complete than they had previously believed.
To address this, Jackson and his group decide to use two different sources
for their health outcome indicators during this evaluation cycle:
self-report data and observations.
They make sure to document potential limitations with these new sources in the evaluation plan.
In the meantime, Jackson begins plans to engage his hospital partners
to improve the completeness of hospital data for the next round of evaluation.
As we can see, focusing the design can be an iterative process of considering
and reconsidering evaluation questions, indicators, and methods.
We hope these videos and their study guides help you in the planning phase
of evaluating your climate and health work.
In addition, the CDC provides many useful resources on focusing your evaluation design.
For more information, please visit our climate and health evaluation webpage.