Rethink District Budgeting – Part II
In Rethink District Budgeting – Part I, we described three budgetary forces that drive and shape how we have been approaching budgeting: 1) the needs-based framework, 2) how district finance is managed, and 3) our human nature. In this second installment, we introduce three new "budget dance" moves that allow leaders to better leverage the budgeting process to continuously improve the coherence and quality of programs and services and, in turn, increase student achievement.
New Moves
The good news is that we do not need to create a whole new system of budgeting. We can build upon our current “budget dance” by adding some new moves: 1) differentiating between operation and investment expenditures, 2) tracking expenses on alignment, outcome, and improvement, and 3) re-orienting how improvement is practiced.
Operation vs. Investment Expenditures
As noted above, needs play a prominent role in how districts prioritize new spending, but present challenges when it comes to making changes to existing spending. A new lens is needed to empower districts to routinely conduct systematic examination and realignment of their spending in a fair and transparent manner. One way to do that starts with differentiating between operation and investment expenditures.
Of the hundreds of thousands of district and school expenses, most are incurred out of necessity. For example, school buildings’ utilities and maintenance costs need to be paid for teaching and learning to take place. Under state and federal laws, schools are mandated to fund certain positions to provide services to students with special needs. At the same time, each school system likely has its own unique programs or services that the local community considers indispensable. For example, a town that prides itself on its high school football team is unlikely to cut the program’s funding, even when the district faces a budget crunch (Starr, 2000). Dictated by legal, cultural, and political considerations, these must-spend expenses usually are not tied to any specific achievement goals and can be classified as operation expenditures.
On the other hand, school systems spend a smaller but considerable portion of their resources to improve the quality of the services they provide. For example, a district might decide to invest in a campaign to boost school attendance, or provide merit pay to teachers in the hope of improving student outcomes through economic incentives. With an aim in sight, these discretionary expenses are chosen by district and school leaders to produce returns in targeted areas. These expenses are classified as investment expenditures.
Compared with the compulsory nature of operation expenditures, investment expenditures are choices districts make from among many options. If a selected investment does not yield a return, the district has the flexibility to replace it with another investment option. For example, districts can redirect the money spent on an unsuccessful attendance campaign to students themselves as economic incentives for good attendance and grades (Fryer, 2011). Likewise, if the invested area is no longer an improvement priority, district leaders can reinvest those dollars to meet other improvement needs; if attendance, say, is no longer a priority to address, the campaign’s funds can be redirected elsewhere. Table 1 below summarizes the main differences between operation and investment expenditures.
Table 1 Contrast between Operation and Investment Expenditures
Differentiating between operation and investment expenditures allows districts to manage funds differently for different improvement goals. With operation expenditures, the primary improvement goal is to increase efficiency without jeopardizing or undermining services. Systematic pursuit of that goal rests upon periodic benchmark analysis, both horizontal and vertical. Horizontally, a district can compare itself every few years against similar districts on an array of operational key performance indicators (KPIs), such as those published annually by the Council of the Great City Schools (2022). Under-performing KPIs point to areas where improvements can and should be made by learning from benchmark districts with higher efficiencies.
Vertically, a district can compare schools, departments, or divisions against efficiency metrics over time. For example, leaders can compare the operation expense per student by school every few years. If the comparison reveals significant increases in one school while the operation expense per student in other schools remains flat, analysis should be conducted to identify the reasons for the increase and any measures needed to address it. By tracking and comparing efficiency metrics both horizontally and vertically, districts can make informed, intentional decisions that maintain the same level of operation expenses while providing more services, provide the same level of services at a lower level of operation expenses, or curtail rapid increases in operation expenses.
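To make the vertical comparison concrete, here is a minimal sketch in Python. The school names, dollar figures, benchmark years, and the 10% flag threshold are purely illustrative assumptions, not district data or an actual district tool.

```python
# A minimal sketch of the vertical benchmark described above.
# All values and the 10% threshold are illustrative assumptions.
operation_expense = {           # total operation expense by school and fiscal year
    "School A": {2020: 1_200_000, 2023: 1_260_000},
    "School B": {2020: 1_100_000, 2023: 1_430_000},
}
enrollment = {                  # student enrollment by school and fiscal year
    "School A": {2020: 400, 2023: 410},
    "School B": {2020: 380, 2023: 375},
}

def per_student_expense(school: str, year: int) -> float:
    """Operation expense per student for one school in one fiscal year."""
    return operation_expense[school][year] / enrollment[school][year]

# Flag schools whose per-student operation expense grew faster than the threshold.
THRESHOLD = 0.10  # 10% growth between benchmark years (an assumed cut-off)
for school in operation_expense:
    base = per_student_expense(school, 2020)
    latest = per_student_expense(school, 2023)
    growth = (latest - base) / base
    if growth > THRESHOLD:
        print(f"{school}: per-student expense rose {growth:.0%}; review drivers of the increase")
```

In this illustration, School B would be flagged for follow-up analysis while School A would not, mirroring the comparison described above.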
With investment expenditures, the improvement goal is twofold. Fiscally responsible leadership will make sure that 1) both current and future investments are aligned with improvement priorities and 2) investments are producing high returns at the lowest possible cost. If a target area is no longer a priority, districts should reallocate the resources to other areas of higher priority for improvement. If an investment is still in alignment but falls short of producing the expected return, leaders should demand changes to program implementation and consider postponing new investments in this area of need until success with the existing investment is achieved. When success remains unattainable after repeated efforts, districts have the responsibility to invest these existing resources in another promising option.
A similar cycle-based approach can be taken to achieve the two improvement goals for investment expenditures. When a new investment is launched, the expected return should be clearly defined, spelling out the target population, outcome measures, and baseline and goal metrics. This information provides the basis for alignment and return-on-investment review. Most importantly, an investment cycle (such as every three or four years) for achieving the intended goal needs to be established to set a review schedule. Altogether, this process serves to develop a shared understanding of what needs to be accomplished by when and the potential consequences if success is not attained.
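As one way of picturing what defining an expected return might look like in practice, the sketch below records the target population, success metric, baseline, goal, and investment cycle for a hypothetical attendance campaign and derives the year its end-of-cycle review would come due. The field names and values are assumptions for illustration, not the district's actual tracking schema.

```python
from dataclasses import dataclass

@dataclass
class Investment:
    """One investment expenditure and its expected return, defined at launch.
    Field names are illustrative, not an actual district schema."""
    name: str
    target_population: str     # who is expected to improve
    success_metric: str        # outcome measure used to gauge success
    baseline: float            # current performance on the success metric
    goal: float                # expected performance at the end of the cycle
    launch_year: int
    cycle_years: int           # investment cycle length, e.g., 3 or 4 years

    def eoc_review_year(self) -> int:
        """Fiscal year when the end-of-cycle (EOC) review is due."""
        return self.launch_year + self.cycle_years

# Hypothetical example: an attendance campaign launched in 2024 on a 3-year cycle.
attendance = Investment(
    name="Attendance campaign",
    target_population="Students with 10+ absences in the prior year",
    success_metric="Average daily attendance rate",
    baseline=0.88,
    goal=0.93,
    launch_year=2024,
    cycle_years=3,
)
print(attendance.eoc_review_year())  # 2027
```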
Continuous improvement is the primary objective of this cycle-based approach. If an investment remains aligned with improvement priorities but falls short of meeting its goal at the end of the first investment cycle, it can be put on another investment cycle with the necessary adjustments made. The adjustments can include, but are not limited to, setting a new goal, focusing on a smaller but more targeted group of students, tightening program implementation, or even developing new strategies. For an investment that has gone through several investment cycles, however, lack of return after multiple efforts makes it increasingly difficult to justify its continuation.
Track Expenses on Alignment, Outcome, and Improvement
A strategy to systematize this cycle-based approach to managing operation and investment expenditures is to build a system that tracks expenses on alignment, outcome, and improvement. Such a system should have the following components.
The first component classifies expenses as either operation or investment. The second component documents how the expenses are aligned with the strategic plan. Next, an investment cycle can be assigned to each investment to set the schedule for alignment and return-on-investment review. For operation expenditures, the review cycle can be set either by operational area (e.g., accounts payable, procurement, transportation) or by specific cost center to examine efficiency based on horizontal or vertical benchmarking.
For expenses classified as investments, two additional sets of components are needed. One set defines the expected return from an investment and tracks who is being targeted for improvement (target population), what outcome measures will be used to gauge success (success metrics), and the target population's current performance (baseline) and expected future outcome (goal) on the success metrics.
The other set defines an investment’s theory of change by identifying the root cause of the problem and creating a logic model that “links outcomes (both short- and long-term) with program activities/processes and the theoretical assumptions/principles of the program” (W.K. Kellogg Foundation, 2004). Table 2 provides some examples of these components for investment and operation expenditures, respectively.
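One possible way to capture this second set of components is a simple logic-model record that links the root cause and program activities to the short- and long-term outcomes they are expected to produce, loosely following the Kellogg logic-model elements quoted above. The structure and example values below are hypothetical and attach to the illustrative investment record sketched earlier.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Theory of change for one investment: root cause, program activities,
    and the outcomes those activities are expected to produce.
    Field names and values are illustrative assumptions."""
    root_cause: str
    activities: list[str]
    short_term_outcomes: list[str]
    long_term_outcomes: list[str]
    assumptions: list[str] = field(default_factory=list)

# Hypothetical example for the attendance campaign sketched above.
attendance_logic = LogicModel(
    root_cause="Families are unaware of how quickly absences accumulate",
    activities=["Weekly attendance letters to families", "Attendance mentors at each school"],
    short_term_outcomes=["Fewer unexcused absences per month"],
    long_term_outcomes=["Higher average daily attendance", "Improved course grades"],
    assumptions=["Family contact information on file is current"],
)
```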
Generating and executing these new components clearly requires collaboration among multiple departments, including the finance, program, and accountability teams. As a result, financial management is no longer a siloed responsibility of the finance team focused mainly on accounting for compliance. Rather, effective and efficient use of limited resources for increasing student achievement becomes a responsibility shared by all parties involved in the process, which is how it needs to be.
In JCPS, we developed an online investment tracking system (ITS) for this purpose. The ITS has tracked district investments for the past eight years. After several iterations, it is now linked to MUNIS, the district's financial management system. By connecting financial data with outcome data, the ITS brings the finance, program, and accountability teams together to conduct periodic reviews of district expenses for continuous improvement, which is discussed next.
Table 2 Examples of Tracking Alignment, Outcome, and Improvement
Re-orient How Improvement Is Practiced
Simply put, we have been trying to improve through innovation without systematically analyzing the efficacy of the adopted programs and their impact on the other strategies implemented by the district. While innovations are undoubtedly important, problems arise when we pile them up without making sure they add up. The inadvertent accumulation of innovations often begets three problems hampering our improvement efforts.
First, as noted earlier, programs brought in by different leaders tend to reflect the hot-button issues and solutions of their respective times, and thus vary in their focus of change, improvement strategy, and even identification of root causes. Without careful coordination and deliberate calibration, their co-existence leads to inconsistency and confusion (DuFour, 2003). Second, with new innovative programs taking the spotlight, existing innovative programs quickly lose their luster. The lack of sustained institutional attention and support hurts their chances of succeeding, which may already be diminished by inadequate implementation. Third, continuing addition without subtraction overwhelms schools, drains people's attention and energy (also finite resources), and causes innovation fatigue and indifference (Reeves, 2020).
With the new moves introduced above, leaders are afforded a different approach to improvement that emphasizes coherence and puts continuous improvement at the center (Bryk et al., 2015; Lewis, 2015). For each new spending proposal, the new approach starts with a decision on whether the proposed spending should be treated as an operation or investment expenditure. If classified as an investment expenditure, all information listed in Table 2 should be specified in the budget proposal, which is critical for not only the subsequent adoption decision but also continuous improvement decisions down the road.
Employing information from both the new proposal and the existing investments being tracked, leaders' deliberation can expand beyond alignment with priorities, cost, and evidence of impact to include three additional areas (Yan & Hollands, 2018). The first area looks at whether the district has already invested in programs that share the same or similar target population and target outcomes as the newly proposed program. The second area is concerned with how many active investments the department proposing the new spending is currently implementing, how well those investments are being implemented, and the department's capacity to take on a new program. The third area asks what adjustments to which programs are required for full implementation of the new program and how those adjustments could affect the performance and impact of either program.
Conclusions from deliberation in the above six areas (alignment with priorities, cost, evidence of impact, overlap/redundancy with existing programs, implementation capacity, and coherence with existing programs) have direct implications for whether the new program should be adopted and/or an existing program should be discontinued. This investigative process also informs the timing of adoption and/or discontinuation, which team will be put in charge of implementation, and what coordination and adjustments are needed from related programs and departments. All these decisions are important for the success of not only the new program but also the existing programs that will be impacted by its implementation (reference to education as a complex/ecological system is needed), and they require the participation of all stakeholders and continual recalibration between programs.
Following an adoption decision, it is important to ensure that the newly approved investment receives sustained attention and support to succeed and, equally important, that action will be taken if it does not produce the expected return. This is achieved through the review that takes place when the program reaches the end of the investment cycle (EOC) specified during the approval process. Just as all new technologies eventually become appliances (Bertram & Hogan, 1998), all innovations eventually become just another program and run the risk of turning into routines with an outdated mission or motions without a clear purpose. The EOC review moves an existing program from the rear-view mirror back to center stage, under the spotlight it both deserves and needs to succeed.
EOC review asks four big questions. First, is the investment aligned with the district’s current improvement priorities? Second, what is the actual return on investment (i.e., how many students from the intended population have been served and what are their outcomes)? Third, how has the program been implemented (i.e., how much of the budgeted money has been spent and which planned activities have taken place)? Last, if the program does not meet its goal, what would it take to help the investment succeed (e.g., increasing intentionality by focusing on a smaller student population, better collaboration and coordination from other departments, lessening schools’ burden by reducing the number of new initiatives they are asked to implement)?
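The decision logic implied by these questions, together with the three possible outcomes described at the end of this article, could be sketched roughly as follows. In practice the EOC review is a cross-divisional deliberation rather than a formula, and every function name, flag, and threshold in this sketch is an assumption added for illustration.

```python
def eoc_review(aligned_with_priorities: bool,
               goal_met: bool,
               prior_cycles_completed: int) -> str:
    """A simplified, hypothetical sketch of EOC review outcomes;
    the real review weighs evidence across divisions."""
    if not aligned_with_priorities:
        return "Reinvest the funds in a higher-priority area"
    if goal_met:
        return "Recognize the investment; consider replicating or expanding it"
    if prior_cycles_completed >= 2:
        return "Continuation is hard to justify; consider a more promising option"
    return "Grant another cycle with adjustments (new goal, tighter implementation, smaller target group)"

# Hypothetical example: aligned with priorities, goal not met, first cycle just ended.
print(eoc_review(aligned_with_priorities=True, goal_met=False, prior_cycles_completed=1))
```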
Notably, the EOC review differs from the end-of-year (EOY) program review conducted annually by program staff. The latter usually takes place within a department, involving the program director and staff, and focuses on programmatic changes inside the department to improve implementation. Applying a systemic lens, the EOC review requires participation of senior leaders across divisions, focuses on alignment and return on investment, and can result in changes to scope, scale, funding, and implementation for not only the reviewed program but potentially other programs as well. Table 3 shows the differences between the EOC review and the EOY review.
Table 3 Differences between EOC Review and EOY Review
Ideally, three things can be achieved through the EOC review process: 1) funds tied to investments that are ineffective after multiple improvement efforts, or that are no longer aligned with priorities, are reinvested elsewhere; 2) investments that have not yet delivered returns but are worth more time or need another opportunity are put on a path to success, with criteria and a timeframe set for the next review; and 3) investments with high returns are recognized and potentially replicated or expanded to benefit more students.