This is a three-part series called Principles of Scaling Up - Simple. Support. Sustain. If you haven't read the first part, click here to do so.
Now, let's talk about the second principle - Support. This isn't as straightforward as it seems in our context of the social sector. I have heard and read, on many occasions, people complaining about their brilliant pilots (which have rigorous RCTs, impact evaluations and glowing personal testimonials & case stories backing them!) working so well at a school, block or district level, then falling flat or failing miserably when scaled up, and "not sustaining" or "not sticking" even in the same school, block or district after a few months, and certainly after a few years. They wonder in utter consternation, disappointment and even pity that the 'system' couldn't do it, or the 'system' doesn't have the capacity, or the 'system-actors' lack motivation. While all or some of that might be true, we miss two important things here.
- It was your pilot! Not the 'system's'. It was not done with the budgets, staff and resources of the 'system'. You might have worked with the system or alongside the system, but it wasn't done by the system on its own. The motivation of your staff is higher because it is the only job they do and their livelihood depends on it; in contrast, for the system-actor, it is one of many jobs they do, and while their livelihood depends on it, there are no incentives or disincentives to do anything different. The capacity of your staff is much better because they have done an expensive public policy course at an Ivy League college, have done specialised courses, are trained in multiple methodologies and are managed, coached and supported better. Finally, since it's your project for the 'solution' that you wanted to implement, clearly you had the resources for it, not the system. How easily we say "..90% of the government budget is for salaries.." as if spending money on salaries were a bad thing, like gambling! Some might argue, "..this is not us; we don't work in isolation, we don't run our own programmes, we don't work outside the system..". To them, I would say: yes, we do 'train the trainers' and use them to 'develop the materials' and so on, but that's often because we don't have the capacity and resources to do all of it ourselves. So, we choose which parts we want to do and which parts we 'leverage the system or government' for.
- This is the second problem. Often, we do the complex parts ourselves, blindly and safely assuming that the more complicated, technical and difficult work can't be done by the system-actors because they don't have the skills or motivation, or they will be slow, etc. Some even take it further: "..we do all the programme design and planning, the system has to 'just' implement it". You get the drift. Then, when we scale up, we expect the system-actors to be able to do all the complex and complicated work themselves! This is important to note - most of the problems we are attempting to solve, such as learning or good diagnosis & curative healthcare, are both complex and complicated. Some people call these 'wicked hard' problems, where the solution is transaction-intensive, there is no known technology or technique which works for all, and there is both discretion and obligation (PDIA, Lant Pritchett, Matt Andrews et al). To illustrate, let me bring back the vaccination example from my last blog: a vaccine is hugely complicated to deliver, but all the complexity of development, testing and formulation is handled once, at the factory or production stage. In comparison, delivering quality healthcare - properly diagnosing the patient's symptoms, suggesting further tests to accurately understand the root cause of the disease, prescribing the right medicines & measures and checking after a while to ensure that the disease is cured - is a hugely complex task. At the heart of it is a good, qualified medical practitioner - a doctor or paramedical staff, depending on the severity and stage of the treatment. Similarly, ensuring good quality teaching which, when effective, translates to learning, is hugely complex. Each child's background, her parental income, parental education levels, learning style and learning levels are very likely to be different from others in her own class. Some children might struggle to achieve the competencies that the rest of the class achieves easily, and vice versa. At the heart of all of this is again a teacher, who has to deliver quality learning by transacting the appropriate curriculum at the level of the child, get support from the system in terms of teaching-learning resources, get feedback and mentoring support from the academic cadre, and get the right service conditions (salary, facilities) and recognition to stay motivated in one of the most stressful jobs/professions. Let me remind you once again: only the teacher or the doctor can deliver the quality service at the last mile, to the end-stakeholder - to a child or to a patient! No one else can!
Now, instead of going further into the complications of these problems, let me use this background to talk about the role of support by the system in any effective scale-up, through an example that I know rather well. We have recently supported many state governments to undertake annual FLN survey assessments at the state and district level. This is hugely complex and complicated work, starting from developing the right items or questions, piloting the tools, administering the survey to the right sample of students, collecting and storing the data, and then analysing it to produce reports. Then there is the angle of the reliability of the data, as well as the use of the data for planning and other purposes such as goal-setting.
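To make one of these steps concrete - drawing "the right sample of students" - here is a minimal sketch of stratified random sampling of schools by district. It is only an illustration under assumed inputs (the district names, school IDs and sample sizes are made up), not the actual sampling design used in these surveys.

```python
import random

# Hypothetical illustration: draw a stratified random sample of schools per
# district, so that every district contributes to the state-level estimates.
# District names, school IDs and sample sizes below are invented.

def sample_schools(schools_by_district, schools_per_district, seed=42):
    """schools_by_district: dict mapping district name -> list of school IDs."""
    rng = random.Random(seed)  # fixed seed so the draw can be audited and reproduced
    sample = {}
    for district, schools in schools_by_district.items():
        k = min(schools_per_district, len(schools))  # small districts contribute all their schools
        sample[district] = rng.sample(schools, k)
    return sample

if __name__ == "__main__":
    schools_by_district = {
        "District A": [f"A-{i:03d}" for i in range(1, 201)],
        "District B": [f"B-{i:03d}" for i in range(1, 121)],
    }
    picked = sample_schools(schools_by_district, schools_per_district=30)
    for district, schools in picked.items():
        print(district, len(schools), schools[:3])
```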
We started this exercise in Bihar in March 2022, where we managed to successfully conduct the survey and assess 42,000 children (I know this itself might seem like a huge scale to you, but we are India! so...), and then scaled it up across many states, ultimately reaching close to 200,000 children across 7 states and their districts.
How did we start and what was the role of support in scaling it up?
- Support in diagnosing the problem: The problem has to be a shared problem, i.e. one the system feels the need for and wants to solve. A few system-actors may be struggling with the symptoms of the problem and facing the challenges & complications, but not necessarily be able to define the problem well or have a framework to articulate it. Can we help them to do so? In this case, we understood that despite many achievement surveys and studies, there wasn't a shared sense of what exactly the learning levels in the state were. Everyone 'feels' that they are low, one survey is not consistent with another, but "..what is the current situation (of FLN learning levels) and what can we do about it..". We leveraged this to articulate the problem & need better: the states need a reliable source of evidence that is their own and that reports data in enough detail to be used for target-setting and planning.
- Support in working out the complex design elements: Contrary to what most external organisations working with the system (government or private sector) believe, the existing system has a lot of capability and is delivering what it is currently designed for. Whether the system is delivering its original, articulated goal or something else is a function of many things (in particular, how the goals and priorities have been understood across different principal-agent relationships and how the system levers are used) and is a matter of serious study. The system-actors can and should be involved in the design of the critical elements of the proposed interventions. More often than not, they have invaluable insights into the workings, strengths and weaknesses of the system at different levels. Their inputs are usually based on their own experience of seeing such interventions in the past and a deep understanding of the capacity of the system. Again, in our case, in the first critical phase of our work, i.e. designing the assessment framework and planning the entire assessment process, we have always taken inputs from the system and found them to be extremely useful. The capacity of the system to develop high-quality items for multiple subjects (e.g. Urdu, Telugu & English for languages in Telangana; Maths etc.) and multiple languages (not just Hindi but five tribal languages in another state) has left us awestruck, both in the time taken by the state (2-4 day workshops) and in the appropriateness & suitability of the items (after all, they teach the very grades we want to assess). Yes, they needed some support in terms of the structure of the workshop, and some orientation on what competencies we are assessing, what a good item is, etc., but the state did the rest.
- Support in implementation at all levels and for all: This is important; we are talking about scale, or super scale, here. Scale brings variability - particularly if it is about service delivery. One can deliver the same quality and kind of burger (McDonald's), coffee (Starbucks) or car (think of the Model T & Mr. Ford) everywhere and to every customer, but one can't do that for a service. Yes, you can try to standardise processes and codify all of that in books or apps, but there will be huge variations at scale. And we have discussed and established earlier in this blog that providing quality learning or healthcare is a service! Yes, we use some products and tools - Teaching Learning Materials (textbooks, reading materials, kits etc.) for teachers, or medicines and equipment for doctors - but the real deal is how effectively these tools are used in different contexts (or situations). The critical differentiator is the system-actor who is delivering the service - the teacher or doctor. They are human. Hence, there is bound to be variability in terms of capacity, prior knowledge, past experiences, mental models, prejudices, and learning & un-learning abilities. Also, variability is not necessarily just lower or higher, poorer or better; variability is diversity. We need to treat the variable capacity of people (the system-actors) and contexts (of locations, institutions) as diversity - not a constraint on applying the same thing everywhere, but something that demands more of us in terms of diverse design options and approaches for the same results. So, there needs to be support at all levels and for all. To illustrate, let me use an education and FLN example. Some teachers will easily be able to use the materials (sent by the state or created by themselves), change the way they have been teaching from syllabus completion to competency-based teaching, and achieve better outcomes, but most others will need support. Some might need support even to get motivated to enrol (in the sense of being genuinely interested in the programme), some might need support to understand the purpose and use of the materials, some might need help to be able to assess children, and most will need help in terms of time and approaches to support struggling learners in the class. Please see the infographic at the beginning of this article which illustrates this point. To come back to the assessment case we have been discussing: in terms of speed, reach and capability of execution, I don't think any external organisation can even think of pulling off anything close to the large surveys the state does - the recent NAS, FLS etc. being among the largest sample surveys conducted in education. It was important to start with a humble recognition of this fact. This helped us anchor the most difficult and scale-intensive part of our work - having the right data collectors (Field Investigators, enumerators or assessors) for conducting these surveys. We decided to leverage the District Institutes of Education & Training (DIET) students, i.e. the student-teachers who are enrolled in these colleges to complete their pre-service education (usually diploma courses for teaching). Assessing children 1-1 on all the FLN competencies, including timed tasks to measure reading fluency and comprehension, is not easy, and not all typical data collectors can do it.
This is a skilled task that requires some understanding of what is being done (assessment of learning & competencies), why it is being done (to get information for academic planning), how it is being done (each item is to be tested differently, and why some are timed and some untimed) and with whom (young children are being assessed here, which demands that kind of sensitivity). It's important to note that the method of assessing was new (1-1), some competencies were new (such as non-words), and the use of a technology tool (Tangerine) was new. A sketch of how a timed reading-fluency item might be scored appears after this list. Our training programmes were conducted in the good old cascade model (Master Trainers at the state level who further trained the DIET & B.Ed college students at the district level), and yes, we have reasons to believe that with careful design, trainer selection, demonstration and use of technology, we managed to limit the cascade loss to a large extent. Again, the master trainers were not left to do the trainings on their own. They were supported in conducting the trainings, and detailed documentation along with simple videos was provided to the data collectors.
- Support to sustain: No matter how simple, complicated or complex the intervention is, there will be mistakes made, difficult contexts that could not be served, and, even despite the efforts of the system, we might fall short of the intended outcomes we want, or there may be some unintended outcomes which we don't want. Hence, it's important for us to look at evidence, diagnose the gaps and learn, so that we can iterate and improve. This needs support as well. This is the final part of the 3S framework, and hence I will cover it in the last part of this three-part series. We will end by once again returning to the example or case study of assessments that we discussed. It was not enough to do the assessment surveys once. We worked with the states to ensure that they are planning and preparing to do the assessment surveys next year with improvements in all aspects of the work: a) much better tools and items, developed by the same people who did it last year; they reflected on the 1-2 items they did not develop well and revised them with detailed instructions; b) changing the sampling approach to be even more representative and considering the heterogeneities of the population; c) planning for a larger pool of data collectors to tackle drop-outs, so that the sample size we end up with is closer to the planned sample size; d) tighter planning of the training and better capacity building of the master trainers and data collectors; e) better approaches for data cleaning and collation, with two sets of teams working independently on the calculations and analysis; f) a three-way approach to data reliability - observation during the assessment by external observers, post-test data forensics and, finally, re-testing a smaller sample of students by a third-party agency (a simple sketch of such a re-test check follows below). We are still learning, but as a team and organisation we are codifying all of it and not just trying to "replicate or copy-paste" but iterating and improving all the time.
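As referenced above, here is a minimal sketch of how a timed reading-fluency item might be scored as words correct per minute (WCPM). The function name, inputs and the 60-second cap are my own illustrative assumptions; they do not describe the actual Tangerine-based tool or its scoring logic.

```python
# Hypothetical sketch: score a timed oral reading task as words correct per
# minute (WCPM). The 60-second cap and the inputs are assumptions for
# illustration only, not the actual assessment tool's logic.

def words_correct_per_minute(words_attempted, words_incorrect, seconds_taken, time_limit=60):
    seconds = min(seconds_taken, time_limit)       # the task is stopped at the time limit
    if seconds <= 0:
        return 0.0
    words_correct = max(words_attempted - words_incorrect, 0)
    return round(words_correct * 60.0 / seconds, 1)

# Example: a child attempts 48 words in 60 seconds with 6 errors -> 42.0 WCPM
print(words_correct_per_minute(48, 6, 60))
```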
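And here is a minimal sketch of the re-test element of the reliability approach: comparing original scores with third-party re-test scores on a small subsample and reporting the share of exact matches. The student IDs and scores are invented, and the real data forensics and reliability analysis would of course go well beyond this.

```python
# Hypothetical sketch of one reliability check: compare original scores with
# third-party re-test scores for the re-tested subsample and report the share
# of exact matches. Student IDs and scores are made up.

def retest_agreement(original_scores, retest_scores):
    """Both arguments: dict mapping student ID -> score on the same item set."""
    common = set(original_scores) & set(retest_scores)  # only students assessed twice
    if not common:
        return None
    matches = sum(1 for sid in common if original_scores[sid] == retest_scores[sid])
    return matches / len(common)

original = {"S001": 7, "S002": 5, "S003": 9, "S004": 4}
retest = {"S001": 7, "S002": 6, "S003": 9}
print(f"Exact agreement on re-tested sample: {retest_agreement(original, retest):.0%}")
```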