R&E Represents…Nationally!

Cathy, Marsha, and I are avid readers of the AEA365 (the tip-of-the-day from the American Evaluation Association). Yesterday I was surprised to learn that a post I’d submitted weeks ago was selected for publication! Here’s a re-blogging:

My name is Megan Olshavsky and I’ve been an evaluator of PreK-12 educational programs for about a year and a half now. Before starting my work in a public school district, I was researching learning and memory processes in rats, earning my Ph.D. in Psychology – Behavioral Neuroscience. My experiments were very controlled: the rats exhibited the behavior or they did not, neurons were active or they were not, results were statistically significant or they were not.

Moving from that environment to the “real world” of a school district, which employs and serves humans in all their messiness, caused some growing pains. How was I supposed to decide whether an educational intervention led to academic improvement without proper control and experimental conditions?! One of the first projects I worked on was a developmental evaluation of a technology initiative. Developmental Evaluation made me feel even more flaky – “Hey everyone! Let’s monitor things as they unfold. What are we looking for? Not sure, but we’ll know it when we see it.”

As I’ve transitioned from researcher to evaluator, three things have helped me feel more legit.

Lesson Learned 1: Trust yourself. You may not be an expert in the area you are evaluating, but you do have expertise looking at data with a critical eye, asking probing questions, and synthesizing information from a variety of sources.

Lesson Learned 2: Collaborate with a team that has diverse expertise. Our developmental evaluation team engaged teachers, instructional technology specialists, information systems staff, and evaluators. When everyone on that team can come to the same conclusion, I feel confident we’re making the right decision.

Lesson Learned 3: Embrace capacity building as part of your work. No one would recommend training up stakeholders to do their own inferential statistics. You can, however, influence the people around you to be critical about their work. Framing is critical. “Evaluation” is a scary word, but “proving the project/program/intervention is effective” is a win for everyone. Building relationships and modeling the expertise we talked about in Lesson #1 leads to a gradual institutional shift toward evaluative thinking.

Rad Resource: Notorious RBG: The Life and Times of Ruth Bader Ginsburg.

Let RBG be your guide as you gather and synthesize the relevant information, discuss with your diverse team, and advocate for slow institutional change.

Megan