Friday, September 12, 2014

What if, mid-career as a researcher, you become interested in Evaluation?

An old classmate, who took the market research route after completing her Research Psychology Master's degree, asked me for a couple of references to check out if she wanted to develop her evaluation knowledge and skills. What came to mind were the following professional development resources. I'm sure there are many more easily accessible ones, but this is a good start for a list!

 

Tuesday, August 12, 2014

Further Resources and Links for those who attended the Bridge M&E Colloquium on 12 August 2014


Today I got the opportunity to present to the Bridge M&E Colloquium on the work I'm doing with the CSIR Meraka Institute on the ICT4RED project. My first presentation gave some background on the project.




I referred to the Teacher Professional Development course, which is available under a Creative Commons licence here. This resource also includes a full description of the micro-accreditation, or badging, system.

What seemed to get the participants in the meeting really excited was the 12-component model of the project, which suggests that one has to pay attention to much more than just the technology when implementing a project of this nature. My colleagues published a paper on this topic here.

Participants also resonated with the "Earn as you Learn" model that the project follows: if teachers demonstrate that they meet certain assessment criteria, they earn technology and peripherals for themselves and for their schools. A paper on the gamification philosophy that underlies the course is available here. The model was also documented in a learning brief here.


And then I was able to speak a little more about the evaluation design of the project. The paper that underlies this work is available here, and the presentation is accessible below:



I think what sets our project evaluation apart from many others being conducted in South Africa is that it truly uses "Developmental Evaluation" as the evaluation approach. For more information about this (and for a very provocative evaluation read in general), make sure you get your hands on Michael Patton's book. A short description of the approach and a list of other resources can also be found here.

People really liked the idea of using Learning Briefs to document learning for and from team members, and to share it with a wider community. This idea was inspired by the DG Murray Trust. I blogged about the process and template we used before. An example of the learning brief that the M&E team developed for the previous round is available here. More learning briefs are available on the ICT4RED blog.

I also explained that we use the Impact Story Tool for capturing and verifying an array of anticipated and unanticipated impacts. I've explained the use and analysis of the tool in more detail in another blog post. There was immediate interest in this simple little tool.

A neat trick that also got some people excited is how we use SurveyMonkey. To make sure that our data is available quickly to all potential users on the team, we capture our data (even data collected on paper) in SurveyMonkey, and then share the results with our project partners via SurveyMonkey's sharing interface - even before we've really been able to analyse the data. The SurveyMonkey site explains this in a little more detail with examples.
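For readers who prefer pulling the captured responses out programmatically rather than using the sharing interface, SurveyMonkey also exposes an API. The following is only a minimal sketch of that idea, not how we actually move our data: it assumes you have a v3 API access token and a survey ID (both placeholders below) and uses SurveyMonkey's bulk-responses endpoint.

```python
# Minimal sketch: pull raw responses from a SurveyMonkey survey so they can be
# passed on to project partners quickly, before any real analysis is done.
# ACCESS_TOKEN and SURVEY_ID are hypothetical placeholders, not project values.
import requests

ACCESS_TOKEN = "your-api-token"   # hypothetical token
SURVEY_ID = "123456789"           # hypothetical survey ID

headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
url = f"https://api.surveymonkey.com/v3/surveys/{SURVEY_ID}/responses/bulk"

resp = requests.get(url, headers=headers, params={"per_page": 100})
resp.raise_for_status()

for response in resp.json().get("data", []):
    # Each item holds one respondent's submission; print an identifier and date
    # so partners can see at a glance what has come in.
    print(response["id"], response.get("date_created"))
```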

The idea of using non-traditional electronic means to help with data collection also got some participants excited. I explained that we have a WhatsApp group for facilitators, and that we monitor it, together with our more traditional post-training feedback forms, to ascertain whether there are problems that need solving. In an upcoming blog post, I'll share a little bit about exactly how we used the WhatsApp data and what we were able to learn from it.
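While that post is still to come, here is a rough illustration of the kind of light-weight processing such a group makes possible. WhatsApp lets you export a chat as a plain-text file; the sketch below scans such an export for messages that mention problem-related keywords so they can be followed up. The file name, keyword list and timestamp format are assumptions for illustration (the export format differs by phone and locale), not details of our actual analysis.

```python
# Rough illustration: scan an exported WhatsApp chat (plain-text export) for
# messages that hint at problems needing follow-up. The regex matches the
# common "12/08/2014, 14:33 - Name: message" layout, which is an assumption.
import re

EXPORT_FILE = "facilitators_chat.txt"   # hypothetical export file
PROBLEM_WORDS = ("problem", "broken", "not working", "help", "stuck")

line_pattern = re.compile(
    r"^(?P<date>[\d/]+), (?P<time>[\d:]+) - (?P<sender>[^:]+): (?P<text>.*)$"
)

with open(EXPORT_FILE, encoding="utf-8") as f:
    for line in f:
        match = line_pattern.match(line.strip())
        if not match:
            continue  # skip continuation lines of multi-line messages
        text = match.group("text").lower()
        if any(word in text for word in PROBLEM_WORDS):
            # Flag the message for follow-up by the project team.
            print(f"{match.group('date')} {match.group('sender')}: {match.group('text')}")
```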

Tuesday, June 24, 2014

Exciting Learning from people involved in South African ICT in Education

I've been fortunate to be invited to a small gathering of people working with Coza Cares in the ICT space in South Africa. The luxury of sitting down for two days and listening to people talk about what they are passionate about is something to truly savour.

I did a presentation on some ideas I have for defining and measuring learners' 21st Century Skills in the context of the ICT4RED project. I currently have more questions than answers, but I'm sure we will get there soon. Here is a link to a summary table comparing different definitions of 21st Century Skills.

Other presentations I really enjoyed were:
* Barbara Dale Jones on the role of Bridge and learning communities and knowledge management
* Fiona Wallace on the CoZaCares model of ICT intervention
* John Thole on Edunova's programme to train and deploy youth to support ICT implementation in schools
* Siobhan Thatcher from Edunova on Edunova's model for deploying Learning Centres in support of schools
* Brett Simpson from Breadbin Interactive on the learning they've done on the deployment of their content repository
* Ben Bredenkamp from Pendula ICT on their model for ICT integration and their experience of the One Laptop per Child project in South Africa
* Dylan Busa from Mindset on the relaunch of their website content
* Merryl Ford and Maggie Verster on the ICT4RED project




Wednesday, May 28, 2014

Impact Evaluation Guidance for Non-profits

InterAction has this lovely guidance note and webinar series on impact evaluation available on its website.

Impact Evaluation Guidance Note and Webinar Series

With financial support from the Rockefeller Foundation, InterAction developed a four-part series of guidance notes and webinars on impact evaluation. The purpose of the series is to build the capacity of NGOs (and others) to demonstrate effectiveness by increasing their understanding of and ability to conduct high quality impact evaluation.
The four guidance notes in the series are:
  1. Introduction to Impact Evaluation, by Patricia Rogers, Professor in Public Sector Evaluation, RMIT University
  2. Linking Monitoring & Evaluation to Impact Evaluation,  by Burt Perrin, Independent Consultant
  3. Introduction to Mixed Methods in Impact Evaluation, by Michael Bamberger, Independent Consultant
  4. Use of Impact Evaluation Results, by David Bonbright, Chief Executive, Keystone Accountability
Each guidance note is accompanied by two webinars. In the first webinar, the authors present an overview of their note. In the second webinar, two organizations - typically NGOs - present on their experiences with different aspects of impact evaluation. In addition, each guidance note has been translated into several languages, including Spanish and French. Webinar recordings, presentation slides and the translated versions of each note are provided on the website.

Wednesday, May 21, 2014

Resources on Impact Evaluation

This post consolidates a list of impact evaluation resources that I usually refer to when I am asked about impact evaluations. 

This cute video explains, in two minutes, the factors that distinguish impact evaluation from other kinds of evaluation. Of course, randomization isn't the only way of credibly attributing causes and effects - and this is a particularly hot evaluation methodology debate. For an example of why this is sometimes an irrelevant debate, see this write-up on parachutes and Chris Lysy's cartoons on the topic.

Literature on the Impact Evaluation Debate

The impact evaluation debate flared up after this report, titled "When Will We Ever Learn?", was released in 2006. In the United States there was also a prominent funding mechanism (from about 2003 or so) that required programmes to include experimental evaluation methods in their design or forgo funding.

The bone of contention was that Randomized Controlled Trials (RCTs) and experimental methods (and to some extent quasi-experimental designs) were held up as the "gold standard" in evaluation - which, in my opinion, is nonsense. So the debate about what counts as evidence started again. The World Bank and big corporate donors were perceived to push for experimental methods; evaluation associations (with members committed to mixed methods) pushed back, saying that methods can't be determined without knowing what the questions are; and others pushed back saying that RCTs are probably applicable in only about 5% of the cases in which evaluation is necessary.

The methods debate in evaluation is really an old debate. Some prominent evaluators decided to leave the AEA over a position that they equated with the "flat earth movement" in geography. Here is a nice overview article (The 2004 Claremont Debate: Lipsey vs. Scriven. Determining Causality in Program Evaluation and Applied Research: Should Experimental Evidence Be the Gold Standard?) that summarises some of it.
The Network of Networks in Impact Evaluation then sought to write a guidance document, but even after this was released, there was a feeling that not enough was said to counter the "gold standard" mentality. This document, titled "Designing Impact Evaluations: Different Perspectives", provides a bit more information on the other views.

Literature on Impact Evaluation Methods
If you are interested in literature on evaluation methods, look at BetterEvaluation to get a quick overview.

I like Cook, Campbell and Shadish to understand experimental and quasi-experimental methods, but this online knowledge base resource is good too.
 
For resources on other, more mixed-methods approaches to impact evaluation, look at Realist Synthesis, the General Elimination Method, Theory-Based Evaluation, and something that I think has potential: the Collaborative Outcomes Reporting approach.


The South African Department of Performance Monitoring and Evaluation's guideline on impact evaluation is also relevant if you are interested in work in the South African context.