Using our Expertise and Networks to Provide Training for Local Nonprofits

A great way for Rotary clubs to serve their community is to draw on their members’ expertise and networks to provide training for local nonprofits in areas where they need support. As part of my club’s pro bono initiative, in February 2017 we organized two half-day training events for local nonprofits on (1) monitoring and evaluation and (2) communications. This post explains what we did and why it worked.


In September 2016, we applied to the Capitol Hill Community Foundation for a grant to help us organize training events for local nonprofits. We received the grant in November and held the training events in February. The events focused on (1) essentials of monitoring, evaluation, and cost-benefit analysis for nonprofits; and (2) essentials of communications, from websites to social media and PowerPoint presentations. The workshops were held at the main community center for our neighborhood in Washington, DC. We chose to focus on monitoring, evaluation, and cost-benefit analysis as well as communications because, in our interactions with local nonprofits, demand for support in those areas appeared to be strong.

To organize the training events, we relied on the expertise of members of our club as well as friends and colleagues from organizations based in Washington, DC. Instructors for the two workshops included staff from the Center for Nonprofit Advancement, the Communication Center, Tanzen, the Urban Alliance, and the World Bank. Between the two events (one workshop in the morning, the other in the afternoon of the same day), we provided lunch to participants of both workshops, with a keynote address from the CEO of Grameen Foundation, a well-known organization providing micro-credit globally.

To promote the training events, we designed posters and fliers and shared them widely with potential participants through a variety of networks. For example, we contacted local foundations so that they could share the information with their grantees. Registration was brisk, and we had to close it when we reached 150 participants. On the day itself, about 90 people attended, which suited us well given that our room had a capacity of 90. When training events are free, some people who register do not come – and we had factored this in. We also had competition from a gorgeous, sunny day. Many participants attended for the whole day, but some came for just one of the two training events.

Because we had great speakers who knew their subject matter well and were engaging and practical in their presentations, participants’ evaluations of the two events were very encouraging. As shown in the table below, on a scale from 1 (strongly disagree) to 5 (strongly agree), participants on average rated all dimensions of the training highly. Overall, this initiative was a great success for our club, both in providing a valuable service to the community and in gaining visibility. I encourage you to consider organizing similar events in your community.

Evaluation of the two training events by participants – scale from 1 to 5

Statement                                                M&E    Comms
The training was well organized.                         4.71   4.79
The topics covered were relevant.                        4.65   4.68
Participation/interaction were encouraged.               4.44   4.58
The content was easy to follow.                          4.50   4.89
The trainers were knowledgeable about the topics.        4.79   4.89
The trainers were well prepared.                         4.74   4.89
The time allotted was sufficient for what was covered.   4.65   4.79
The lunch as well as the facilities were adequate.       4.56   4.68
This training experience will be useful to me.           4.68   4.84
I will come again if another training is organized.      4.62   4.79
I will recommend this type of training to others.        4.68   4.84


Improving Teaching and Learning in Nepal (Partnerships Series No. 8)

Many developing countries have made substantial progress toward improving educational attainment (the level of education students reach) over the last two decades. At the same time, the instruction that teachers provide often remains of limited quality. This results in less-than-stellar educational achievement (how much students actually learn). While students may do well enough on portions of examinations that rely mostly on memorization, they tend to do less well when asked to think creatively or solve complex problems. This post, which is part of a series on partnerships, innovation, and evaluation in Rotary, tells the story of an innovative teacher training program in Nepal that has the potential to improve student learning substantially.


Importance of Teacher Training

Outstanding in-service teacher training programs can make a major difference in how teachers teach and how much students learn, especially among disadvantaged groups. Many factors influence student achievement, including factors beyond the control of schools, such as a student’s socio-economic context. But teachers are the most important factor under the control of education systems for improving learning. Teachers also account for the bulk of public spending on education in developed and developing countries alike. For these reasons, there is increasing interest in finding ways to attract, retain, develop, and motivate great teachers.

The tasks of attracting, retaining, and motivating teachers fall squarely within the mission of Ministries of Education. Developing teachers is also a key responsibility and priority for the Ministries, but in this area there is scope for nonprofits and organizations such as Rotary to play a role by helping to create great in-service training programs for teachers. The importance of in-service training and professional development for improving instruction is recognized by practitioners and policy makers alike. Three lessons emerge from the literature.

First, opportunities for teacher training and professional development should be made available. But not all programs achieve the same results. When in-service programs focus on changing pedagogy, the evidence suggests that they can improve teaching and, as a result, student achievement. By contrast, programs that merely provide additional teaching materials for teachers do not generate substantial gains.

Second, the content of training programs aiming to change pedagogy matters as well. In-service training programs that expose teachers to best practices in instruction and actually show teachers how to implement these practices are more likely to generate positive change. Promoting collaboration between teachers, for example through teacher networks where they can exchange ideas, is useful. Mentoring programs whereby junior teachers benefit from the guidance of experienced teachers also tend to be effective. Other approaches tend to be less successful.

Third, it is important that in-service training and professional development programs prioritize the teachers who need help the most. Teachers who are struggling may benefit more from such programs than already great teachers. Similarly, students from disadvantaged backgrounds or living in poor areas tend to benefit more from higher quality instruction than better-off students who have more help from their families at home. Identifying priority pockets of need is therefore essential when designing and implementing teacher training programs.

Innovative Program in Nepal

Traditional instruction in Nepal relies on lecturing by teachers and memorization by students. Together with the Nepali NGO PHASE, NTTI (Nepal Teacher Training Innovations) has implemented innovative teacher training programs in Nepal for several years. NTTI aims to train public school teachers to make the classroom more interactive by coaching them on how to lead classroom discussions, facilitate group work, and ask students questions that encourage individual thought. Instead of relying on punishment and at times shaming to control student behavior, teachers are trained to use dynamic, inquiry-based instruction methods and provide positive encouragement to motivate students to learn. As the classroom becomes more participatory, students engage in their own learning.

The PHASE-NTTI model does not rely on one-off training. Instead it involves a cumulative cycle of trainings and intensive follow-up support to individual teachers. The aim is to help teachers move from an awareness of effective teaching practices to actual implementation of the practices in their own classrooms. The training model includes a series of teacher development courses: Introduction to Best Teaching Practices; Girls’ Sensitivity Training; and a Training of Trainers for those selected as Mentor Teachers.

The model includes pre- and post-training classroom observations, individual feedback to teachers from Master Trainers, and follow-up individual support from Mentor Teachers. Overall, the program is implemented over a two-year period in each school.

While no impact evaluation is yet available to measure the impact of the program, quantitative data obtained through pre- and post-training classroom observation are encouraging. In contrast to teacher-driven and student-silent classrooms, classrooms with trained teachers seem to be closer to functioning as hubs of learning.

Instead of only lecturing, trained teachers lead classroom discussions, facilitate group work, and ask questions to encourage individual thought. Students learn how to make their own novel connections and think critically about what they hear and read. Qualitative data suggest that the program is appreciated by teachers and students alike.

Remaining Challenges and Conclusion

There have been challenges to which the program has had to adapt. The program did not work as well in secondary schools, so it now focuses on primary schools. Support from principals for teachers changing their pedagogical approach is needed, but not guaranteed. Distances to schools in rural areas make it hard to maintain regular contact after initial trainings. Teachers’ lack of time to prepare lessons as advocated by the program is also a constraint. The structure of classroom time may limit creativity and inquiry-based teaching. And the persistence of traditions harmful to girls in parts of the country makes it a major challenge to keep girls in school.

The PHASE-NTTI program does not have all the answers to these challenges, but it does have the key features that tend to be associated with successful in-service training programs. The program is also a great example of partnership (with the Ministry of Education and public schools), innovation (in teacher training), and evaluation (at least through monitoring of teacher pedagogy). A Rotary global grant proposal has been submitted to help develop the PHASE-NTTI program further and implement it in additional areas.

Impact Evaluations, Part 3: What Are Their Limits?

by Quentin Wodon

In the first post of this series, I argued that impact evaluations can be highly valuable for organizations such as Rotary to assess the impact of innovative interventions that have the potential to be replicated and scaled up by others if successful. In the second post, I suggested that a range of techniques is available to implement impact evaluations. In this third and last post in the series, I discuss four limits of impact evaluations: (1) limits as to what can be randomized or quasi-randomized; (2) limits in terms of external validity; (3) limits in terms of explanation as opposed to attribution; and (4) limits in terms of short-term versus long-term effects.

Can Everything Be Randomized?

The gold standard for impact evaluations is the randomized controlled trial (RCT), as discussed in the second post in this series. When it is not feasible to randomize the beneficiaries of an intervention, statistical and econometric techniques can sometimes be used to assess impact through “quasi-randomization”. But not all types of interventions can be randomized or quasi-randomized. If one wants to assess the impact on households of a major policy change in a country, this may be hard to randomize.

One example would be the privatization of a large public company with a monopoly in the delivery of a specific good. The company can be privatized, but typically it is difficult to privatize only part of it, so assessing the impact of privatization on households may be hard because of the absence of a good counterfactual. Another example would be a major change in the way public school teachers are evaluated or compensated nationally. Even with such reforms, it may at times be feasible to sequence the new policy, for example by covering some geographic areas first and not others, which can provide data and ways to assess impacts. But in many cases the choice is “all or nothing”. Under such circumstances, the techniques used for impact evaluations may not work. Some have argued that for many of the most important policies that affect development outcomes, the ability to randomize is the exception rather than the rule.

For the types of projects that most Rotary clubs implement, I would be skeptical of arguments that randomization is not feasible, at least at some level. This does not mean that all or even most of our projects should be evaluated. But we should recognize that most of our projects are small and local, which makes it easier to randomize (some of) them when appropriate for evaluation. For larger programs or policy changes, however, one must be aware that randomization or quasi-randomization is not always feasible.

Internal Versus External Validity

When RCTs or quasi-randomization are used to assess the impact of interventions, the evaluators often pay special attention to the internal validity of the evaluation. For example, are the control and treatment groups truly comparable, so that inferences about impact are legitimate? Careful evaluation design and research help in achieving internal validity.

But while good evaluations can be trusted in terms of their internal validity, do the results also have external validity? Do they apply beyond the design of the specific evaluation that has been carried out? Consider the case of an NGO doing great work in an area of health through an innovative pilot program. If the innovative model of that NGO is found to be successful and scaled up by a Ministry of Health, will the same results be observed nationally? Or is there a risk that with the scale-up, some of the benefits observed in the pilot will vanish, perhaps because the staff of the Ministry of Health are not as well trained or dedicated as the staff of the NGO? There have been cases in which the original promise of successful pilots did not materialize at scale.

Attribution Versus Explanation

Consider again the example of the dictionary project mentioned in the previous post. An impact evaluation could lead to the conclusion that the project improves some learning outcomes for children, or that it does not. Impact evaluations are great for attributing impacts and establishing cause and effect. But they do not necessarily tell us why an impact is observed or not. For that, an understanding of the context of the intervention is needed. Such context is often provided by so-called process evaluations, as opposed to impact evaluations. There is always a risk that an impact evaluation will be like a black box – impacts can be attributed, but the reasons for success or lack thereof may not be clear. This in turn can be problematic when scaling up programs that were successful as pilots: scaling up often requires altering some parameters of the intervention that was evaluated, and without rich context, the potential consequences of those alterations may not be known.

Short Versus Long-term Effects

Another issue with impact evaluations is the time horizon to which they refer. Some interventions may have positive short-term impacts but no long-term gains. An evaluation carried out one or two years after an intervention may suggest positive impacts, but those could very well vanish after a few years. Conversely, other interventions may show no clear impact in the short term, but positive impacts later on. Ideally, one would like information on both short-term and long-term impacts, but this may not be feasible. Most evaluations, by design, tend to look at short-term rather than long-term impacts.

Implications of this Discussion

The above remarks should make it clear that impact evaluations are no panacea. They can be very useful – and I believe that Rotary should invest more in them for innovative projects that could be scaled up by others if successful – but they are not appropriate for all projects, and they should be designed with care.

I hope that this three-part series has helped some of you to understand better why impact evaluations have become so popular in development and service work, but also why they require hard work to set up well. Again, if you are considering impact evaluations in your service work, please let me know, and feel free to comment and share your own experience on this topic.

Note: This post is part of a series of three on impact evaluations. The three posts are available here: Part 1, Part 2, and Part 3.


Impact Evaluations, Part 2: How Are They Done?

by Quentin Wodon

Having argued in the first post in this series of three that we need more impact evaluations in Rotary, the next question is: how are such evaluations to be done? One must first choose the evaluation question, and then use an appropriate technique to answer it. The purpose of this post is to briefly describe these two steps. A useful resource for those interested in knowing more is an open access book entitled Impact Evaluation in Practice, published by the World Bank a few years ago. The book is thorough, yet not technical (or at least not mathematical), and thereby accessible to a large audience.

As mentioned in the first post in this series, impact evaluations seek to answer cause-and-effect questions such as: what is the impact of a specific program or intervention on a specific outcome? Not every project requires an impact evaluation – but it makes sense to evaluate the impact of selected projects that are especially innovative and relatively untested, replicable at larger scale, strategically relevant for the aims of the organization implementing them, and potentially influential if successful. It is also a good practice to combine impact evaluations with a cost-effectiveness analysis, but this will not be discussed here.

Evaluation Question

An impact evaluation starts with a specific project and a question to be asked about that project. Consider the dictionary project, whereby hundreds if not thousands of Rotary clubs distribute free dictionaries to primary school students, mostly in the United States. This project has been going on for many years in many clubs. In Washington, DC, where I work, local Rotary clubs – and especially the Rotary Club of Washington DC – distribute close to 5,000 dictionaries every year to third graders. Some 50,000 dictionaries have been distributed in the last ten years. This is the investment made in just one city. My guess is that millions of dictionaries have been distributed by Rotarians in schools throughout the US.

The dictionary project is a fun, feel-good activity for Rotarians, which also helps bring members of a club together because it is easy for many members to participate. I have distributed dictionaries in schools several times, the last time with my daughters and two other Interactors. Everybody was happy, especially the students, who received the dictionary with big smiles. Who could argue against providing free dictionaries in public schools to children, many of whom are from underprivileged backgrounds?

I am not going to argue here against the dictionary project. But for this project, as for many others, I would like to know whether it works to improve the prospects and lives of beneficiaries – in this case the children who receive the dictionaries. It could perhaps be enough to justify the project that the children are happy to receive their own dictionary and that a few use it at home. But the project does have a cost, not only the direct cost of purchasing the dictionaries, but also the opportunity cost for Rotarians of going to the schools to distribute them. Rotary clubs could decide to continue the project even if it were shown to have limited or no medium-term impact on various measures of learning for the children. But having information on impact, as well as on potential ways to increase impact, would be useful in deciding whether to continue this type of service project. It would not matter much if dictionaries were distributed only by a few clubs in a few schools – but this is a rather large project for clubs in the US.

An impact evaluation question for the project would be of the form: “What is the impact of the distribution of free dictionaries on X?” X could be – among many other possibilities – children’s success rates on an English exam, the propensity of children to read more at home, a measure of new vocabulary gained, or an assessment of the quality of spelling in the children’s writing. One could come up with other potential outcomes that the project could affect. To assess impact, one would need to compare students in schools where children received dictionaries to students in schools where children did not, some time after the dictionaries have been distributed.

About two years ago I tried to find out whether any impact evaluation of the dictionary project had been done. I could not find any. Maybe I missed something (let me know if I did), but it seems that this project, which requires quite a bit of funding from clubs as well as a lot of time from thousands of Rotarians every year, has not been evaluated properly. It would be nice to know whether the project actually achieves results. This is precisely what impact evaluations are designed to do.

Evaluation Techniques

In order to estimate project impacts, data collection is required. Impact evaluations typically rely on quantitative data. For the dictionary project, one could have children take a vocabulary test before receiving the dictionary and again one year after receiving it. One would then compare a “treatment” group (those who received the dictionary) to a “control” group (those who did not). This could be done using data specifically collected for the evaluation, or using other information – such as standardized tests administered by schools, which would reduce the cost of an impact evaluation substantially, but would also limit the outcomes considered to those on which schools test their students.
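
As an illustration, here is a minimal sketch of the pre/post, treatment/control comparison just described, using a simple difference in average gains. The data and column names are hypothetical – none of this comes from an actual evaluation:

```python
# Minimal sketch of a pre/post, treatment/control comparison.
# The data are hypothetical: each row is one student, with vocabulary
# scores before and after the distribution, and a flag for whether the
# student's school received dictionaries.
import pandas as pd

df = pd.DataFrame({
    "treated":    [1, 1, 1, 0, 0, 0],
    "pre_score":  [52, 48, 60, 50, 55, 47],
    "post_score": [61, 55, 70, 53, 58, 50],
})

# Gain for each student between the two tests.
df["gain"] = df["post_score"] - df["pre_score"]

# Average gain in treatment schools minus average gain in control
# schools (a simple difference-in-differences style estimate).
impact = (df.loc[df["treated"] == 1, "gain"].mean()
          - df.loc[df["treated"] == 0, "gain"].mean())
print(f"Estimated impact on vocabulary scores: {impact:.2f} points")
```

A real evaluation would of course use many more students, standard errors, and controls, but the logic of the comparison is the same.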

The gold standard for establishing the treatment and control groups is the randomized controlled trial (RCT). Under this design, a number of schools would be randomly selected to receive dictionaries, while other schools would not. Under most circumstances, comparisons of outcomes (say, reading proficiency) between students in schools with and without dictionaries would then yield (unbiased) estimates of impacts. In many interventions, the randomization is applied to direct beneficiaries – here, the students. But for the dictionary project that would probably not work: it would seem too unfair to give dictionaries to some students in a given school and not others, and the impact on some students could affect the other students, making the impact evaluation less clean than it should be (even if there may be ways to control for that). This issue of fairness in choosing beneficiaries in an RCT is very important, and the design of RCT evaluations typically has to be vetted ethically by institutional review boards (IRBs).
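
School-level (rather than student-level) random assignment is simple to sketch. The snippet below is only an illustration with made-up school names; an actual evaluation would draw from the real list of participating schools and would often stratify the assignment (for example by school size or neighborhood):

```python
# Minimal sketch of school-level randomization: whole schools, not
# individual students, are assigned to treatment or control, avoiding
# the fairness problem of treating some students in a school but not others.
import random

schools = [f"School {i}" for i in range(1, 21)]  # 20 hypothetical schools

random.seed(42)           # fixed seed so the assignment can be reproduced
random.shuffle(schools)
treatment = schools[:10]  # these schools receive dictionaries
control = schools[10:]    # these schools serve as the comparison group

print("Treatment schools:", sorted(treatment))
print("Control schools:  ", sorted(control))
```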

A number of other statistical and econometric techniques can be used to evaluate impacts when an RCT is not feasible or appropriate. These include (among others) regression discontinuity design, difference-in-differences estimation, and matching estimation. I will not discuss these techniques here because this would be too technical, but the open access Impact Evaluation in Practice book mentioned earlier covers them very well.

Finally, apart from measuring the impact of programs through evaluations, it is also useful to better understand the factors that lead to impact or lack thereof – what is often referred to as the “theory of change” for how an intervention achieves impact. The question here is not whether a project is having the desired impact, but why it does or does not. This can be investigated in different ways, using both qualitative and quantitative data. For example, for the dictionary project, a few basic questions could be asked, such as: (1) did the child already have access to another dictionary at home when s/he received the dictionary provided by Rotary? (2) how many times has the child looked at the dictionary over the last month? (3) did the dictionary provided by Rotary have unique features that led the child to learn new things? Having answers to this type of question helps in interpreting the results of impact evaluations.

Conclusion

Only so much can be discussed in one post, and the question of how to implement impact evaluations is complex. Still, I hope that this post gave you a few ideas and a basic understanding of how impact evaluations are done and why they can be useful. If you are considering an impact evaluation, please let me know; if I can help, I will be happy to do so. In the next and final post in this series, I will discuss some of the limits of impact evaluations.

Note: This post is part of a series of three on impact evaluations. The three posts are available here: Part 1, Part 2, and Part 3.

Impact Evaluations, Part 1: Do We Need Them in Rotary?

by Quentin Wodon

Monitoring and evaluation have become essential in development and service work. When providing funding, most foundations and donors now require some type of monitoring and evaluation either ex ante to allocate funding, or ex post to assess whether projects that have been funded have proved to be successful or not. The same is true for government agencies – when deciding whether to scale up a pilot intervention, most agencies now typically rely on the results of at least some type of evaluation.

Different types of evaluations can be conducted, and not all are equal in terms of what we can learn from them. In this series of three posts, I will focus on impact evaluations (as opposed to process evaluations). Specifically, I will discuss (1) whether we need impact evaluations in Rotary; (2) how impact evaluations can be implemented in practice; and (3) some of the limits of impact evaluations that practitioners and policy makers should be aware of. But before doing so, it is probably useful to briefly explain what an impact evaluation entails.

What Is An Impact Evaluation?

Impact evaluations aim to measure the impact of specific projects, policies, or interventions on specific outcomes. The question they ask is not simply whether a given outcome has been achieved among a target population. They look instead at the specific contribution or impact of a well-defined intervention on a well-defined outcome. Said differently, they try to measure a counterfactual: what would the outcome have been without the intervention? By comparing an assessment of what the outcome would have been without the intervention to the outcome with the intervention, impact evaluations inform us about the success (or lack thereof) of specific interventions in improving specific outcomes. When data on the cost of alternative interventions and their benefits are available, impact evaluations are especially helpful in deciding whether some interventions should be maintained or scaled up, while others should be abandoned.
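
In the standard potential-outcomes notation (a common textbook formalization, not specific to this post), the quantity an impact evaluation targets can be written as:

```latex
% Y_i(1): outcome for unit i with the intervention
% Y_i(0): outcome for the same unit without it (the counterfactual)
\text{ATE} = \mathbb{E}\big[\, Y_i(1) - Y_i(0) \,\big]
```

The difficulty is that for any given unit only one of the two potential outcomes is ever observed, which is why a credible comparison group is needed.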

Estimating the counterfactual in an impact evaluation is no simple matter, in part because most interventions are not distributed randomly in a target population. Consider children going to private or charter schools. In most countries, those children perform better than children attending public schools. Does that in itself tell us anything about the comparative performance of public versus private or charter schools? It does not.

Children attending private or charter schools may, for example, come from wealthier families with well-educated parents who are better able to help their children at home than parents from less privileged backgrounds. The higher performance of students in private or charter schools on tests aiming to measure learning may not be related to the school in which they are enrolled per se, but instead to their socio-economic and other characteristics. The proper counterfactual question would be: how well would students enrolled today in private or charter schools perform if they had enrolled instead in public schools? Data to answer such questions are often not readily available, so the counterfactual is often not known without further data or analysis. Special techniques – ranging from randomized controlled trials (the gold standard) to various statistical and econometric approaches – are often needed in order to assess the impact of specific interventions on specific outcomes. I will discuss these techniques in the second post in this series.
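
To make the problem concrete, a standard textbook decomposition (again, not from the original post) shows why the naive comparison is misleading. With D = 1 for children enrolled in private or charter schools, and Y(0), Y(1) the potential outcomes:

```latex
\underbrace{\mathbb{E}[Y \mid D=1] - \mathbb{E}[Y \mid D=0]}_{\text{observed test score gap}}
= \underbrace{\mathbb{E}[Y(1) - Y(0) \mid D=1]}_{\text{true effect on enrolled children}}
+ \underbrace{\mathbb{E}[Y(0) \mid D=1] - \mathbb{E}[Y(0) \mid D=0]}_{\text{selection bias}}
```

If children in private or charter schools would have performed better even in public schools (positive selection bias), the observed gap overstates the true effect of the schools.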

Do We Need Impact Evaluations in Rotary?

If as Rotarians we “do good work” in the world, why would we need impact evaluations? Isn’t it enough to serve the less fortunate? I would argue that Rotary needs impact evaluations for at least three reasons.

  1. Doing good work is not enough – we need to do the best we can. We live in a world with scarce resources. What Rotarians can contribute – whether in terms of money or time and expertise – is finite. Our resources should be devoted to the projects and initiatives that have the largest positive impact on those we aim to serve. And we will be able to assess what these projects or interventions are, or should be, only by evaluating our work (and by relying on the evidence generated by other organizations that are implementing robust impact evaluations).
  2. Who says we are doing good work? We may believe we are doing good work, but do we know whether some of our projects and interventions may have unintended consequences that could be detrimental to those we are trying to serve? To consider the example of education again, if we support one dynamic school in a poor community that selects its students on the basis of their academic potential, could this have a negative effect on another school in the area, which might be left only with children who tend to perform less well for whatever reason? Given that peer effects are important drivers of learning, supporting one school may have negative consequences for another school. This is of course not necessarily the case, and it does not imply that we should never support specific schools. But we need to be aware of the potential ripple effects of our projects, and impact evaluation can be useful for that.
  3. Rotary has an important role to play in piloting innovative interventions. Much of what Rotary clubs do on an ongoing basis is not innovative, and there is nothing wrong with that. There are a lot of needs out there that we can help meet without being innovative. Being innovative is not a requirement. But at the same time, we should promote some level of innovation in Rotary. Rotarians have technical expertise in many areas, and they know (or should know) their community. They are in a way uniquely positioned to propose new ways of tackling old problems, and to assess whether such new ideas actually work. Furthermore, Rotary has a relatively large foundation, but in comparison to other actors, especially in the field of development, we are small. If we could pilot more innovative interventions, evaluate them properly, and let others with more resources scale up the most promising ones, we could achieve more for those we aim to serve.

For these reasons I believe that we need impact evaluations in Rotary. Not all projects require an evaluation, especially since evaluations take some time and cost money. But we can probably do more in this area than we are doing today. In the next post in this series, I will discuss the “how to” of impact evaluations.

Note: This post is part of a series of three on impact evaluations. The three posts are available here: Part 1, Part 2, and Part 3.

Rotarian Economist Call for Briefs and Papers

by Quentin Wodon

The Rotarian Economist blog was launched on World Polio Day in October 2014. The blog discusses challenges and opportunities encountered by Interact, Rotaract, and Rotary clubs, as well as other service clubs. It also features stories about service work and analysis of sometimes complex issues related to poverty reduction and development. This includes discussions about priority areas for Rotary International such as promoting peace, fighting disease, providing clean water, saving mothers and children, supporting education, growing local economies, and (of course) eradicating polio. The hope is that the blog and the resources posted on this website will be useful to Rotarians worldwide, as well as to others interested in service work and development.

A briefs and working papers series will soon be launched on the Rotarian Economist blog and website. This may be an opportunity for readers of the blog to feature their project, initiative, or analysis. Briefs and working papers may be submitted by Interactors, Rotaractors, and Rotarians, as well as by others interested in nonprofit service and development work. For example, great projects by NGOs could be featured even if they have not received any support from Rotary.

This initiative will not duplicate tools such as Rotary Showcase, where Rotary projects can be listed with a brief description (typically a paragraph) and basic project and contact information. The idea is rather to provide a space for more in-depth analysis of service projects and development issues through briefs (about 4 single-spaced pages) and working papers (typically 12-30 single-spaced pages; please use 12-point Times New Roman for both briefs and papers).

The series will welcome briefs and working papers on service projects as well as thematic issues – especially in the areas of focus of The Rotary Foundation. For service projects, authors should first explain the focus area of the project, typically with a few links to the literature on that area (these links are more important for working papers than for briefs). The following sections of the brief or working paper should describe the project, not only generally but also with a focus on what makes it especially innovative or interesting. If quantitative or qualitative data on a project’s impact are available, these should be included. The brief or working paper should also have a conclusion and a list of references.

For work on thematic issues, the briefs or working papers should provide insights or analysis about a specific issue related to service or development work, as academic or professional papers and knowledge briefs would do. This could be an issue related to the management of service clubs, their growth, and the challenges they face. It could also be an issue related to development programs and policies, again ideally with a focus on the areas of intervention of The Rotary Foundation.

The series will be indexed with content aggregators, and many of the briefs and papers will be announced on the Rotarian Economist blog with a post summarizing the key findings from the work. For briefs and papers on specific service projects, it is a good idea to provide one or more photos.

If you would like to submit a brief or working paper for this initiative, please send me an email through the Contact Me page. Thank you!