Arguably the most prevalent topic of special education research over the last 20 years is the evaluation of what works, for whom, and under what conditions (Education Sciences Reform Act, 2002; Kiuhara et al., 2017). Such studies aim to identify interventions, practices, systems, and policies that support optimal educational outcomes for children and students with disabilities. The impetus for this corpus of research is often linked to the Individuals with Disabilities Education Act of 2004 and the provision that teachers were required to use “research-based intervention, curriculum, and practices” (p. 2787), therefore establishing a need to identify intervention, curriculum, and practices that meet a threshold to be deemed research-based. In responding to policies pushing for students to access research-based practices, researchers also recognized that a workforce of educators would need to be trained to effectively implement these practices. As such, traditionally utilized workshop-based models for teacher professional development (PD) were viewed as insufficient. This is exemplified within the Every Student Succeeds Act of 2015 as the newly adopted definition of PD stressed that training be “job-embedded, data driven, and classroom focused” (p. 295). Thus, alongside researchers’ efforts to identify what works for teaching students with disabilities, researchers were also trying to understand what works for training special education teachers.
As researchers, districts, and state departments of education attempted to ensure that research-based PD resulted in research-based instruction, concerns about the feasibility of such PD models began to emerge. These concerns centered on the type and dosage of PD reported throughout the research literature for effectively training teachers to implement new instructional strategies and improve teaching quality (Shepley & Grisham-Brown, 2019). This type of PD required ongoing observations by a trainer, continuous assessment of the teacher’s performance by the trainer, and multiple one-on-one meetings between the trainer and teacher (Brock et al., 2017). Such PD is particularly onerous to provide, especially when it must be sustained over the course of a school year (Schachter et al., 2024). These challenges are exacerbated for districts serving historically disinvested communities or those situated in rural regions needing to manage geographic obstacles (Lang et al., 2024). As might be expected, real-world PD efforts have not aligned with the dosages reported in research studies (McLeod et al., 2019).
To support the development of PD models that are both effective and feasible under real-world conditions, there have been mounting efforts for education researchers to adopt analytical frameworks that support economic evaluations (Detrich, 2020; Levin et al., 2017; Schneider, 2018). In layman’s terms: Education researchers should not only answer questions about what works, for whom, and under what conditions, but also provide answers regarding how much it costs to obtain the reported impacts. Unfortunately, such a task is more complicated than simply finding the cost to register a teacher for a training seminar on a publisher’s website. To answer questions about how much a specific PD model costs to obtain a specific impact, the first step is to identify and price the ingredients (resources) required to implement the model, which is known as a cost analysis (CA). This allows consumers to determine if the PD model in question can be adopted given an available budget. The next step is to evaluate the impact of the model relative to an appropriate counterfactual, and then compare the relative costs and impacts of the model and the counterfactual; this is known as a cost-effectiveness analysis (CEA). These evaluations help consumers understand the potential return on their investment. With regard to PD models, such evaluations can support district leaders in identifying both the total cost and cost per teacher to obtain a desired level of instructional proficiency within their district.
Understanding the cost of a PD for teachers, or the cost of an educational program for students, can have benefits beyond deciding whether a particular PD or educational program is worth pursuing and whether it is feasible to do so given resource and budget constraints. Among other benefits, ascertaining costs and, if applicable, weighing those costs against expected outcomes can (a) clarify how a PD or educational program’s theory of change translates into implementation in concrete resource terms by specifying the inputs, amounts, and values needed to replicate an intervention; (b) better specify treatment contrast by comparing intervention costs to the business-as-usual costs that would have been incurred in the counterfactual; and (c) illuminate induced costs and mediating causal pathways by identifying indirect costs not related to the intervention itself but incurred as a result of the intervention (e.g., a high school dropout prevention program inducing more students to attend college, a positive outcome but one that also entails additional costs) (Belfield & Bowden, 2019). Given the utility that economic evaluations afford policymakers and district leaders, it may be assumed that there are ample examples of rigorous CEAs throughout the school-based literature. However, a recent systematic review identified only seven such studies over multiple decades (Barrett et al., 2024), and the CEA data in those studies pertained solely to learner outcomes and not teacher outcomes.
The purpose of the present study is to present a CA and CEA of a PD model that was developed to support the progress monitoring efforts of teachers serving young children with disabilities and individualized needs. The PD model utilized an interactive, asynchronous, online module to deliver training to teachers. The training can be accessed simultaneously by multiple teachers, and it can be completed at a teacher’s preferred pace. Teachers can also revisit the online module at any time using an internet-connected device. Additional information about the training is detailed in the next section of the manuscript. The impact of the online training module was evaluated within the context of a pre-registered randomized controlled trial (RCT), from which we derive the effects of the training to calculate cost-effectiveness (Shepley et al., 2025).
The PD model was developed in response to usability and feasibility issues with past PD efforts to improve teachers’ progress monitoring abilities. Most notably, the U.S. Department of Health and Human Services previously funded a multi-year project to develop a measure of teacher quality focused on progress monitoring in early childhood classrooms (Akers et al., 2014). The long-term goal of the project was to establish a measure that could be used to identify a teacher’s strengths and weaknesses with regard to progress monitoring, and subsequently, lead to targeted PD for improving aspects of the teachers’ progress monitoring. The resultant measure required an average of 6 hours for a trained administrator to complete on a single teacher. This amount of time did not account for the burden on the teacher who needed to gather documents for review, participate in interviews, and prepare video recordings of themselves engaged in progress monitoring activities. In total, 3 hours of a teacher’s time was also required to complete the measure (Monahan et al., 2015). In the end, the measure was not made commercially or publicly available, nor has it been used to support PD efforts for teachers.
In contrast, the PD model evaluated in this manuscript was designed to be resource-sensitive. To establish evidence of this claim, we conducted a CA and CEA of the model. In addition, to support future researchers in conducting economic evaluations of PD models, all data and syntax from our analyses are publicly available for use and review (https://osf.io/krbzc/). We have made the data and syntax available in multiple software formats (Excel, Stata) to facilitate access. To further support the adoption of economic evaluations within school-based research, we include information on nuanced aspects of CAs and CEAs, including (a) collecting and reporting induced costs, (b) handling missing data, (c) considerations surrounding data collection methods, and (d) utilizing a Monte Carlo simulation to understand the robustness of findings. Research questions guiding this study were:
What was the cost to implement the online training module for PD purposes?
How do the implementation costs of the online training module compare with the effects of the module on teachers’ progress monitoring implementation, ability, and confidence?
Method
This cost-effectiveness analysis was preregistered through Open Science Framework (https://osf.io/vkb6u). We conducted sensitivity analyses to account for missing data and multiple data collection methods. These sensitivity analyses were not included in the preregistration. We have attempted to clearly describe these analyses throughout the manuscript to ensure transparency in our methods and reporting. No other deviations from the preregistration protocol were made.
Professional Development Model
The online training module was called Progress Monitoring for Preschool Teachers (https://ProgressMonitoringForPreschoolTeachers.org/). The content of the module was specific to monitoring the progress of children with individualized needs, such as those with or at risk for disability or developmental delay. The module provided information on how to identify meaningful skills, design data sheets, collect data, analyze data, and make data-based decisions to inform teaching. The module consisted of six sections that users accessed in a sequential format using an internet-connected computer or tablet device. Within each section, the module presented a video to convey target content, a series of follow-up questions, and dynamic feedback in accordance with a user’s answers to follow-up questions. The feedback was presented such that if two users responded differently to the same question, then the users would receive different feedback aligned with their responses. No additional in-person training, follow-up, or coaching was provided alongside the online module.
Participants
A total of 28 teachers participated in the RCT to evaluate the impact of Progress Monitoring for Preschool Teachers, with 14 teachers receiving access to the module (treatment group) and 14 teachers receiving only their district-provided PD (business-as-usual group). All teachers held state-issued teaching certifications in Interdisciplinary Early Childhood Education, which allowed them to serve preschool-aged children receiving special education services within publicly funded, school-district-based classrooms. Teachers came from five districts located in a U.S. Southeastern state, with half the teachers working in rural communities and half in urban cities based on regional classifications from the National Center for Education Statistics (2021).
Costs
We used the ingredients method to determine the cost of the PD model (Levin et al., 2017). Under this method, all resources (known as ingredients) required to implement a PD model or replicate a measured effect are included and valued according to their opportunity cost, or the value of the next best alternative use. To maximize generalizability, we adopted a societal perspective, meaning we included all costs regardless of who bears them, and we used national average market prices to value the ingredients. Within this method, Progress Monitoring for Preschool Teachers functioned as a supplementary program, while school-district-provided PD functioned as the business-as-usual program (American Institutes for Research, 2021).
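To make the mechanics of the ingredients method concrete, the sketch below organizes ingredients as quantity and price pairs and values them from a societal perspective by summing quantity multiplied by a national average price across all cost bearers. The sketch is written in Python purely for illustration (our publicly available materials are in Excel and Stata), and the ingredient names and values are placeholders rather than the study’s estimates.

```python
from dataclasses import dataclass

@dataclass
class Ingredient:
    """One resource required to deliver the PD, valued at its opportunity cost."""
    name: str
    quantity_per_teacher: float  # e.g., hours of teacher time, pages printed
    unit_price: float            # national average market price per unit

def total_cost(ingredients: list[Ingredient], n_teachers: int) -> float:
    """Societal perspective: sum all ingredient costs regardless of who bears them."""
    return sum(i.quantity_per_teacher * i.unit_price for i in ingredients) * n_teachers

# Placeholder ingredient list for illustration only (not the study's estimates)
pd_ingredients = [
    Ingredient("Teacher time on module (hr)", 4.0, 71.00),
    Ingredient("Classroom space (hr)", 4.0, 7.00),
    Ingredient("Laptop use (hr)", 4.0, 0.07),
    Ingredient("Pages printed", 6.0, 0.20),
]
print(f"Total cost for 14 teachers: ${total_cost(pd_ingredients, n_teachers=14):,.2f}")
```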
Data Sources
Ingredients and Quantities
Data on the types of ingredients used and their quantities came from three sources. First, the program theory of change informed the overall anticipated types of ingredients. Second, data on teacher completion of and engagement with the module were collected automatically through the backend of the software platform that hosted the module. This provided data on teacher and laptop time use for the direct costs of the training module itself. We also assumed that each teacher utilized their classroom to engage with the module and thus estimated facilities space costs based on the time teachers engaged with the module multiplied by the average classroom size and cost per square foot. Finally, using a brief survey, we gathered data on induced costs or savings resulting from changes in how teachers spent their time collecting and reviewing student assessment data as a result of the intervention. Following teachers’ completion of the module, we gathered data on the amount of time that teachers reviewed and collected progress monitoring data for focal children in their classroom over an approximate four-week timespan. These data were collected through teacher report at the end of the four weeks, by emailing a two-item questionnaire that asked the teachers to estimate the amount of time they spent (a) collecting and (b) reviewing child data. The research team’s assumption was that there would be no difference in the amount of time that teachers spent collecting and reviewing data, regardless of whether a teacher completed the online module.
Ingredient Prices and Adjustments
Based on data from the U.S. Bureau of Labor Statistics Occupational Outlook Handbook (2023), we used a median K–5 teacher salary of $63,670 for our analysis. We then adjusted this value for inflation to 2024 nominal dollars using the Consumer Price Index and applied fringe benefits based on the March 2024 Employer Costs for Employee Compensation from the Bureau of Labor Statistics to calculate the hourly wage of a teacher in our study ($71.36). Using the Cost of Facilities Calculator from the Cost Analysis in Practice Project (CAP Project, 2024), which amortizes new construction prices per square foot over the useful life of a classroom based on national average classroom size and age, we estimated the cost per hour to use a classroom at $5.96 in 2020 dollars. We adjusted this value for inflation to 2024 to obtain an updated cost of $7.21 per hour to use a small classroom.
To access Progress Monitoring for Preschool Teachers, teachers used an internet-connected computer or tablet device. There was no additional cost required to access the module, as it was offered free to the public. The majority of teachers reported using an HP laptop to complete the module, with others reporting use of a Dell laptop, an Acer Chromebook, or an Apple iPad tablet. For our analysis, we used the price of a middle-of-the-road HP laptop from BestBuy.com valued at $529.99. We amortized this value over the useful life of the laptop (6 years) and then applied the share of available use time that teachers engaged with the module (236 min), conservatively assuming the laptop was available for 1,440 hours per year (8 hours per day × 180 school days). Teachers reported printing an average of 6 pages of content from the module; using reasonable costs provided by the U.S. Internal Revenue Service, we valued each printed page at $0.20, totaling $1.20 per teacher.
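The arithmetic behind these price adjustments can be summarized in a short script. The inflation factor and fringe-benefit markup below are illustrative assumptions standing in for the exact Consumer Price Index and Employer Costs for Employee Compensation adjustments we applied (documented in the supplemental materials), and the 3.5% interest rate used to annualize the laptop purchase is likewise an assumed value; the structure of the calculation, rather than the specific factors, is what the sketch is meant to convey.

```python
# Hedged sketch of the price adjustments; values marked ASSUMED are illustrative
# stand-ins, not the exact factors used in the analysis (see supplemental materials).

base_salary = 63_670    # median K-5 teacher salary, BLS Occupational Outlook Handbook (2023)
cpi_factor = 1.03       # ASSUMED inflation adjustment to 2024 nominal dollars
fringe_markup = 0.57    # ASSUMED fringe-benefit markup based on the March 2024 ECEC
annual_hours = 1_440    # 8 hours per day x 180 school days

hourly_wage = base_salary * cpi_factor * (1 + fringe_markup) / annual_hours
print(f"Loaded hourly teacher wage: ${hourly_wage:.2f}")  # approximately $71 per hour

# Laptop: annualize the purchase price over its useful life, then charge only the
# hours of module use (236 minutes) out of the hours available per year.
laptop_price = 529.99
useful_life = 6         # years
rate = 0.035            # ASSUMED interest rate for annualization
annualized = laptop_price * rate / (1 - (1 + rate) ** -useful_life)
per_hour = annualized / annual_hours
module_hours = 236 / 60
print(f"Laptop: ${per_hour:.2f} per hour; ${per_hour * module_hours:.2f} per teacher")
```

With these assumed factors, the output lands near the $71.36 hourly wage used throughout the analysis and the $0.07 per-hour laptop value shown in Table 1.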
Effect Sizes
Effect sizes for the CEA were derived from the outcomes of the RCT (Shepley et al., 2025). Four outcomes from the trial achieved statistical significance when applying a Benjamini-Hochberg correction. The first two outcomes come from the Preschool Progress Monitoring Measure (Shepley et al., 2022; Shepley et al., 2024), which assessed the quality of a teacher’s implementation of progress monitoring practices. These practices pertained to how teachers collected, analyzed, and used progress monitoring data to guide the instruction received by children with individualized needs. The measure contained 25 indicators of quality, which were scored by a trained observer as being present or not present through (a) watching a teacher collect progress monitoring data on a child’s target skill, (b) reviewing the teacher’s data sheet, and (c) appraising the teacher’s responses to questions about their analysis of the collected data and what to change about their teaching as a result (e.g., embed more opportunities for the child to practice the skill during centers). Findings from a pilot study to inform the development of indicators for the measure suggest adequate inter-rater reliability when observers use the measure across different teachers and child skills (mean agreement 93.79%; Shepley et al., 2022). Additional information regarding the development and content validity of the assessment is detailed by Shepley et al. (2024). For the CEA, one outcome from the Preschool Progress Monitoring Measure was specific to teachers when working with children on literacy skills and one outcome from the Preschool Progress Monitoring Measure was specific to teachers when working with children on math skills.
The third outcome assessed a teacher’s progress monitoring ability and was measured using the Brief Preschool Progress Monitoring Measure (Shepley et al., 2024; Shepley, Setari, et al., 2025). Through the completion of a computer-based test, this measure assessed a teacher’s abilities in collecting data, analyzing data, and making data-based decisions. A validation study utilizing Rasch modeling suggested adequate item reliability (0.94), unidimensionality (1.67), and appropriate fit statistics (range: −2.0 to 2.0). Item-level difficulties fell between −0.89 and 0.92, suggesting some challenges in identifying teachers at the extremes of ability levels.
The fourth outcome assessed a teacher’s perceived confidence when engaging in progress monitoring and was measured using the Accommodating Individual Differences subsection from the Teachers’ Efficacy Beliefs System-Self (Dellinger et al., 2008). The measure consists of a series of statements for which a teacher rates their level of confidence on a scale of 1 to 4. Studies with varying sample sizes and teacher profiles have consistently demonstrated the measure to have strong reliability (α = 0.85–0.87; Dellinger et al., 2008).
For all measures, Hedges’s g was calculated using differences between the groups on post-treatment scores, resulting in an effect size of 3.39 for teacher implementation quality when working with children on literacy skills, 2.51 for teacher implementation quality when working with children on math skills, 1.77 for teacher progress monitoring ability, and 1.20 for a teacher’s perceived confidence engaging in progress monitoring.
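For readers less familiar with the metric, the sketch below shows one common way of computing Hedges’s g from post-treatment scores: the between-group mean difference divided by the pooled standard deviation, multiplied by a small-sample correction factor. The input values are hypothetical and are not the trial’s data; full outcome statistics are reported in Shepley et al. (2025).

```python
import math

def hedges_g(mean_t: float, mean_c: float, sd_t: float, sd_c: float,
             n_t: int, n_c: int) -> float:
    """Standardized mean difference with Hedges's small-sample correction."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / pooled_sd
    correction = 1 - 3 / (4 * (n_t + n_c - 2) - 1)  # approximate small-sample correction
    return d * correction

# Hypothetical post-test summary statistics (not the trial's data)
print(round(hedges_g(mean_t=20.0, mean_c=9.0, sd_t=3.0, sd_c=3.5, n_t=14, n_c=14), 2))
```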
Analysis
For the CA, we calculated the total cost of the PD model when provided to the 14 teachers in the treatment group, as well as the average cost per teacher. We also disaggregated the costs by ingredient to determine which expenditures may be the most and least prohibitive for adopting the PD model. For the CEA, we calculated cost-effectiveness ratios (CERs) by dividing the average cost per teacher by each outcome’s effect size, such that a separate CER was calculated for each outcome. These CERs can be interpreted as the cost per unit of change in the outcome, and thus can be compared with the CERs of similar PD models to determine which is most cost-effective, or compared with a benchmark reflecting stakeholders’ willingness to pay for the outcomes of interest.
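As a worked illustration of the CER calculation, the snippet below divides a per-teacher cost by each outcome’s effect size. Using the rounded per-teacher cost and the effect sizes reported in this manuscript, it reproduces ratios close to those in Table 2; small discrepancies reflect rounding of the inputs.

```python
# Cost-effectiveness ratios: average cost per teacher divided by each outcome's effect size
cost_per_teacher = 276.0  # rounded per-teacher cost from the main analysis
effect_sizes = {
    "Implementation: Literacy": 3.39,
    "Implementation: Math": 2.51,
    "Ability": 1.77,
    "Confidence": 1.20,
}
for outcome, es in effect_sizes.items():
    print(f"{outcome}: 1 ES : ${cost_per_teacher / es:.0f}")
```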
Sensitivity
When calculating the costs of each ingredient, we encountered missing (or unreliable) data for three teachers regarding the amount of time they spent initially completing the module. The issue was likely due to a teacher completing the module over multiple sittings or days, but never exiting out of the internet browser they used to access the module. This resulted in back-end data reporting that a teacher took 100 or more hours to complete the module; however, a more accurate representation of the data would be that it reflects the amount of time a teacher’s internet browser remained on the module website. For the analysis described previously, we imputed the average duration of the available data for the three missing values. As a sensitivity analysis to examine the upper bound of the CA and CEA, we also imputed values that were 2 SD above the mean. To illustrate the ramifications of failing to impute any values for missing data, we also conducted a sensitivity analysis with values of 0 imputed for the missing data.
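The sketch below illustrates the three imputation rules applied to the module-completion times (the observed mean for the main analysis, the mean plus 2 SDs as an upper bound, and zero to show the effect of ignoring the missing time). The duration vector is hypothetical and simply stands in for the back-end completion data described above.

```python
import statistics

# Hypothetical module-completion times in minutes; None marks the three unreliable values
durations = [115, 130, 142, 98, 160, 125, 110, 150, 137, 120, 105, None, None, None]
observed = [d for d in durations if d is not None]
mean, sd = statistics.mean(observed), statistics.stdev(observed)

def impute(values, fill):
    """Replace missing durations with a chosen fill value."""
    return [fill if v is None else v for v in values]

main_analysis = impute(durations, mean)         # primary analysis: impute the observed mean
upper_bound = impute(durations, mean + 2 * sd)  # sensitivity: 2 SDs above the mean
no_adjustment = impute(durations, 0)            # sensitivity: ignore the missing time
print(sum(main_analysis), sum(upper_bound), sum(no_adjustment))
```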
When calculating induced costs, the research team utilized an additional data source available to understand the amount of time that teachers spent collecting progress monitoring data. As a component of an outcome measured during the RCT, teachers recorded a video of themselves collecting progress monitoring data with two focal children in their classroom. The durations of these videos provided an observational measure of how long teachers spent collecting progress monitoring data. To examine differences between measurement systems for calculating induced costs (i.e., teacher self-report versus observational), we conducted a sensitivity analysis for the CA and CEA using the video recordings of teachers collecting progress monitoring data. On average, teachers in the treatment group collected progress monitoring data in five minutes based on the video recordings, whereas teachers in the business-as-usual group collected progress monitoring data in six minutes. It should be noted that we do not have data on the number of times that teachers engaged in data collection over the four-week timespan that these data represent. For the CA, we assumed that teachers in both groups collected data three times based on survey data from Shepley et al. (2023).
Monte Carlo Simulation
To understand the robustness of our findings, we conducted a Monte Carlo simulation. In cost analysis, a Monte Carlo simulation estimates the potential range of costs by simulating many scenarios in which the values of input variables are drawn from probability distributions. This allows decision-makers to assess the probability of different cost outcomes and to understand uncertainty about costs, providing a more comprehensive picture of risk than a single point estimate (Boardman et al., 2017; Shand & Bowden, 2021). For this analysis, we ran a simulation of 500 iterations with ingredient quantities (pages printed, teacher time spent on the PD and data collection and review, and associated computer and facilities use time) drawn from normal distributions based on the observed means and standard deviations from survey and administrative data, and we applied national average prices to those simulated ingredient quantities to arrive at a distribution of possible costs. We conducted this analysis using both Excel for accessibility and Stata for replicability.
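A minimal version of the simulation is sketched below in Python (the syntax we release publicly is in Excel and Stata). Per-teacher ingredient quantities are drawn from normal distributions, priced at the national averages described above, and summed across 500 iterations. The distribution means follow the point estimates reported in this manuscript, whereas the standard deviations shown here are assumed values for illustration rather than the study’s observed spreads.

```python
import numpy as np

rng = np.random.default_rng(seed=2024)
N_SIM = 500

# Per-teacher ingredient quantities drawn from normal distributions; standard
# deviations below are ASSUMED for illustration, not the study's observed values.
module_hours = rng.normal(3.9, 0.8, N_SIM)     # teacher time on the module (hr)
induced_hours = rng.normal(-0.48, 0.3, N_SIM)  # change in data collection/review time (hr)
pages_printed = rng.normal(6, 2, N_SIM)

# National average prices applied to each simulated quantity
WAGE, FACILITY, LAPTOP, PAGE = 71.36, 7.21, 0.07, 0.20

per_teacher_cost = (
    module_hours * (WAGE + FACILITY + LAPTOP)  # time-driven ingredients
    + pages_printed * PAGE                     # printed materials
    + induced_hours * WAGE                     # induced cost (negative values are savings)
)
print(f"Mean: ${per_teacher_cost.mean():.0f}; "
      f"5th to 95th percentile: ${np.percentile(per_teacher_cost, 5):.0f} to "
      f"${np.percentile(per_teacher_cost, 95):.0f}")
```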
Results
Ingredients and Costs
The values of the ingredients required to implement the PD model are detailed in the following sections (see Table 1). In addition, we report on induced costs, which for this study functioned as costs that were incurred as a result of changes in teacher behavior following teachers’ receipt of the PD model.
Table 1: Costs by Ingredient to Implement Progress Monitoring for Preschool Teachers.
| Ingredient Category | Ingredient Description | Quantity^a Per Teacher | Cost^b |
|---|---|---|---|
| Personnel | Teacher time engaging with the online module | 3 hr and 56 min | $71 per hr |
| Facilities | Classroom space for a teacher to engage with the module | 3 hr and 56 min | $7 per hr |
| Materials | Laptop for a teacher to access the module | 3 hr and 56 min | $0.07 per hr |
| | Printed pages of resources from the module | 6 pages | $0.20 per page |
| Induced costs | Changes in a teacher’s time engaging in progress monitoring following completion of the online module | 29 min | $71 per hr |
Note. ^a refers to the average quantity per teacher; ^b additional information on cost assumptions, pricing sources, and formulas for deriving final costs is available online as supplemental materials.
Training
A teacher’s initial completion of Progress Monitoring for Preschool Teachers required an average of 128 minutes. Throughout the remainder of the study, each teacher accessed the module for an additional 108 minutes on average. In total, each teacher engaged with the online module for an average of 236 minutes or 3.93 hours. Combining these estimates with the adjusted hourly wage results in a valuation of $280.70 for a teacher’s time engaging with the module.
Facilities
The average cost for the facilities required for a teacher to engage with the module throughout the study was $28.36.
Materials
The final amortized cost for using a laptop to engage with the module was $0.27 per teacher. When engaging with the module, teachers reported printing an average of six pages of content, or 84 pages total across the 14 teachers who accessed the module, for a cost of $16.80.
Induced Costs
A teacher who completed the online module reported on average that they reviewed focal child data for eight fewer minutes than a teacher who did not complete the module, and that they spent an average of 21 fewer minutes collecting progress monitoring data relative to a teacher who did not complete the module. Treating these durations as a value-added component of the PD model, we multiplied the amount of time saved (0.48 hrs) by the previously detailed hourly wage of a teacher in our study ($71.36) and valued this savings at $34.49 per teacher. This value of $34.49 is subtracted from the final CA estimate, given that it is actually an induced savings rather than an induced cost.
The total cost to utilize the investigated PD model with 14 teachers was $3,864, and the per-teacher cost was $276 (see Table 2, row labeled Main). This valuation includes an induced savings of $34 per teacher, due to the reported reduction in a teacher’s time spent collecting and reviewing data following their completion of the online module. The largest cost of the PD model was attributed to paying for teachers’ time, which was 89.19% ($3,447) of the total cost. The utilization of classroom space for teachers to engage with the module was the second largest cost, comprising 10.27% ($397) of the total cost. Materials (i.e., devices to access the module and pages printed) were the smallest cost, comprising 0.53% ($21) of the total cost.
Table 2: Results Across Analyses.
| Analyses | Total Cost | Cost Per Teacher | CER^a (Implementation: Literacy^b) | CER^a (Implementation: Math^c) | CER^a (Ability^d) | CER^a (Confidence^e) |
|---|---|---|---|---|---|---|
| Main | $3,864 | $276 | 1 ES : $82 | 1 ES : $110 | 1 ES : $156 | 1 ES : $231 |
| Missing Data: 2 SDs above mean | $4,152 | $297 | 1 ES : $88 | 1 ES : $118 | 1 ES : $168 | 1 ES : $248 |
| Missing Data: Imputing 0s | $3,555 | $253 | 1 ES : $75 | 1 ES : $101 | 1 ES : $143 | 1 ES : $213 |
| Observational data collection methods | $4,264 | $305 | 1 ES : $90 | 1 ES : $121 | 1 ES : $172 | 1 ES : $254 |
Note. ^a refers to the cost required to improve a teacher’s outcome by 1 effect size unit (Hedges’s g); ^b refers to a teacher’s implementation of progress monitoring practices when working with a child on literacy skills as measured by the Preschool Progress Monitoring Measure; ^c refers to a teacher’s implementation of progress monitoring practices when working with a child on math skills as measured by the Preschool Progress Monitoring Measure; ^d refers to a teacher’s progress monitoring ability as measured by the Brief Preschool Progress Monitoring Measure; ^e refers to a teacher’s perceived confidence in their progress monitoring abilities as measured by the Teachers’ Efficacy Beliefs System-Self; CER = cost-effectiveness ratio; ES = effect size; SD = standard deviation.
Cost-Effectiveness Analysis
Results from the CEA indicated that, on average, improving a teacher’s implementation quality by 1 standardized unit (i.e., Hedges’s g) would cost $82. That is, the CER for the Preschool Progress Monitoring Measure when assessing a teacher’s implementation while working with a child on literacy skills was 1 to $82. When a teacher worked with children on math skills, the average cost to improve the teacher’s implementation quality by 1 standardized unit was $110; the CER was 1 to $110. To improve, on average, a teacher’s progress monitoring ability by 1 standardized unit would cost $156. The CER for the Brief Preschool Progress Monitoring Measure was 1 to $156. To improve, on average, a teacher’s perceived confidence in their progress monitoring ability by 1 standardized unit would cost $231, and the CER for the Teachers’ Efficacy Beliefs System-Self was 1 to $231.
Sensitivity Analyses
When adjusting for missing data by imputing values that were 2 SDs above the mean, the total cost increased to $4,153 ($297 per teacher). This reflected a 7.47% ($289) increase above the originally estimated amount of $3,864. This also increased the CERs, with the Preschool Progress Monitoring Measure ratio at 1 to $88 for literacy skills and 1 to $118 for math skills, the Brief Preschool Progress Monitoring Measure ratio at 1 to $168, and the Teachers’ Efficacy Beliefs System-Self ratio at 1 to $248.
When no adjustment was made for the missing data and values of 0 were imputed instead, the total cost decreased to $3,555 ($254 per teacher). CERs also decreased, to ratios of 1 to $75 for the Preschool Progress Monitoring Measure when assessing literacy skills, 1 to $101 for the Preschool Progress Monitoring Measure when assessing math skills, 1 to $144 for the Brief Preschool Progress Monitoring Measure, and 1 to $213 for the Teachers’ Efficacy Beliefs System-Self.
When utilizing an observational measure to quantify the amount of time teachers spent collecting progress monitoring data, the total cost increased from the originally estimated amount to $4,264, reflecting a 10.03% increase. This is because the observational measure suggested an average time savings of three minutes for teachers who completed the module relative to teachers who did not, whereas the self-report measure suggested an average time savings of 21 minutes. With a smaller time savings under the observational measure, the induced savings associated with the PD model were smaller than under the self-report measure, yielding a higher total cost.
Monte Carlo Simulation
Figure 1 shows the results of the Monte Carlo simulation, with the range of average per-teacher costs represented on the x-axis and the frequency of cost results occurring in each bin on the y-axis. The results are roughly normally distributed around the original mean per-teacher cost of about $275, with the bulk of per-teacher costs ranging from about $150 to $400, which provides some reassurance that the costs will fall within a reasonable range. The distribution also reflects uncertainty around the costs, including a few instances of negative costs, in which the comparison group costs exceeded those of the treatment group (likely due to high savings from the induced costs in the treatment group), and some instances of high per-teacher costs in excess of $600.
Discussion
This study provides information on the cost and cost-effectiveness of a PD model for preparing teachers to engage in progress monitoring. In addition, the study highlights methods, tools, and examples of how to utilize a variety of contemporary techniques for conducting a CA and CEA. In the following paragraphs we highlight the utility of these techniques and overview the implications of using certain methods rather than others when planning for and conducting a CA and CEA.
To collect data on induced costs (i.e., the amount of time teachers spent reviewing and collecting progress monitoring data after completing the online module), the research team prospectively developed methods to gather these data at an appropriate point in time while the RCT was ongoing. By planning to gather these data at the front end of the study, the research team avoided having to collect data retrospectively after the study ended. Had the data been collected after the RCT ended, teachers may have misremembered and misreported the amount of time they spent engaged in data collection and review, or gathering the data might not have been possible because video recordings would not have been made while the RCT was ongoing. As such, the design of a CA and CEA should begin at the same time an evaluation study is being designed.
Regarding how data are collected to inform costs, our findings indicate that self-report and observational methods yielded different results. Although both measures were designed to capture the amount of time teachers collected progress monitoring data, it is likely that the measures represent different aspects of a teacher’s data collection. For example, when teachers reported the amount of time they perceived to have been engaged in data collection, they may have included time spent planning for data collection, such as gathering materials and designing data sheets. However, the observational measure only captured the time that teachers were actively engaged with a child and recording data on the child’s abilities. Considerations about how data are gathered to inform a CA can be critical to ensuring that the appropriate aspects of a cost are adequately captured for valuation.
With any CA and CEA, issues are likely to arise. For example, missing data can occur through instrumentation or human error. When this happens, sensitivity analyses are critical to examining the extent of the impact of the missing data. For this study, taking a conservative approach and imputing values that were 2 SDs above the mean resulted in a 7.47% increase to our original cost estimate. We perceive this as a relatively modest increase; however, we also had relatively minimal missing data. Had more data been missing, it is logical to assume that the impact on the total cost may have been substantially larger. In addition, had we done nothing to account for the missing data, we would have observed an 8.00% decrease in the total cost of the PD model. Thus, how missing data are, or are not, handled can impact the conclusions derived from a CA and CEA.
The Monte Carlo results suggest some small degree of uncertainty about costs associated with the intervention, likely driven by the relatively small sample. Further research on the costs of the intervention may be in order, especially in the areas with the greatest uncertainty, such as how completion of the module affected the induced costs associated with the time teachers spent collecting and reviewing data after the intervention.
Implications for Practice
Regarding the adoption of the online training module by districts or school administrators, the primary obstacle that may prevent them from doing so is likely to be teacher time. Approximately 90% of the cost of the module was associated with the time teachers spent initially completing the module and then returning to the module for further review. While at face value the average of 236 minutes that a teacher spent engaged with the module may be less than what is typically reported in the early childhood PD literature (Ramey et al., 2011; Shepley & Grisham-Brown, 2019), this time should still be considered by administrators. For example, if administrators expect teachers to complete the module during their planning time at school, this may not be feasible in states lacking legislation to mandate that teachers actually have planning time (Levitan, 2023). In contrast, if the module is included as a PD activity during an in-service day, it is logical to expect that teachers will have sufficient time to complete the module and review sections as desired. With this in mind, the training implemented in this study will likely differ from how it would be implemented in practice. For this study, the training was an additional PD activity that teachers had to complete on top of what was already required by their districts. We perceive that outside of research contexts, school districts would adopt the training as part of their planned PD for the school year. This has implications for the costs to implement the training, as such a CA would require comparing the costs of the online module provided in this study with the costs of an alternative PD activity that a district may be considering (American Institutes for Research, 2021).
Limitations
Although we took steps to mitigate common concerns about CAs and CEAs in education, including using multiple measures of costs combining self-reports and outside reports, gathering data on both treatment and comparison group teachers to estimate treatment contrast, and gathering data on both direct and induced costs, some common limitations in cost research in education may still apply. For instance, CAs are often conducted retrospectively and may suffer from faulty memory or social desirability bias among participants responding to time-use surveys; moreover, components of the theory of action with resource implications that researchers did not think to ask about and participants did not think to bring up would be omitted from the analysis. These might include ingredients whose costs are trivially low but that might be important to consider for program replication, or costs that were not incremental in our context because they were already in place and the intervention did not generate additional costs beyond business as usual (e.g., Internet access), but that may not generalize to other settings or that may change with rapid scale-up (e.g., requiring additional bandwidth). These limitations are mitigated by the relatively simple and straightforward nature of the PD and the multiple sources of data used in the analysis, but exacerbated by the relatively small sample of participating teachers, so further research on how this PD might scale up and generalize to other contexts would be especially useful. Likewise, the lack of a child outcome warrants future research in relation to the investigated PD model.
Conclusion
This study provides data on the costs and cost-effectiveness of training teachers using the online PD module, Progress Monitoring for Preschool Teachers. Sensitivity analyses suggest modest changes in the total cost dependent upon how missing data are handled and the types of data sources used to inform ingredient costs. Materials have been made publicly available in multiple software formats to support consumers in replicating the methods and results reported throughout this manuscript.
Data Availability
All data, syntax, and supplemental information referenced throughout the manuscript may be obtained for public review and secondary research purposes at https://osf.io/krbzc/.
Preregistration
The study was preregistered through the Open Science Framework; the protocol may be reviewed at https://osf.io/vkb6u.
Funding
The research reported here was supported by the Institute of Education Sciences, U.S. Department of Education, through Grant R324B210002 to the University of Kentucky. The opinions expressed are those of the authors and do not represent views of the Institute or the U.S. Department of Education.
Acknowledgement
We would like to thank Amanda Duncan and Emily Webb for their assistance with data collection.
Competing Interests
Collin Shepley receives monetary compensation for the licensing of the Brief Preschool Progress Monitoring Measure which was used as an outcome measure in this study. Robert Shand does not declare any conflicts of interest.
Author Contribution
Collin Shepley: Conceptualization, Data Curation, Formal Analysis, Funding Acquisition, Investigation, Methodology, Project Administration, Resources, Visualization, Writing Original Draft, Writing Review & Editing; Robert Shand: Formal Analysis, Methodology, Visualization, Resources, Writing Original Draft, Writing Review & Editing.
References
Akers, L., Del Grosso, P., Atkins-Burnett, S., Boller, K., Carta, J., & Wasik, B. A. (2014). Tailored teaching: Teachers’ use of ongoing child assessment to individualize instruction (Volume II). U.S. Department of Health and Human Services.
American Institutes for Research. (2021). Standards for the economic evaluation of educational and social programs: Cost analysis standards project. https://www.air.org/sites/default/files/Standards-for-the-Economic-Evaluation-of-Educational-and-Social-Programs-CASP-May-2021.pdf
Barrett, C. A., Spear, S. E., Clinkscales, A., Wood, L. L., & Maki, K. E. (2024). What interventions are cost-effective? A systematic review of cost-effectiveness analyses of school-based programs from 2000 to 2020. School Psychology, 39, 658–671. http://doi.org/10.1037/spq0000590
Belfield, C. R., & Bowden, A. B. (2019). Using resource and cost considerations to support educational evaluation: Six domains. Educational Researcher, 48(2), 120–127. http://doi.org/10.3102/0013189X18814447
Boardman, A. E., Greenberg, D. H., Vining, A. R., & Weimer, D. L. (2017). Cost-benefit analysis: Concepts and practice. Cambridge University Press.
Brock, M. E., Cannella-Malone, H. I., Seaman, R. L., Andzik, N. R., Schaefer, J. M., Page, E. J., Barczak, M. A., & Dueker, S. A. (2017). Findings across practitioner training studies in special education: A comprehensive review and meta-analysis. Exceptional Children, 84, 7–26. http://doi.org/10.1177/0014402917698008
CAP Project. (2024). Cost Analysis in Practice Project. https://capproject.org/
Dellinger, A. B., Bobbett, J. J., Olivier, D. F., & Ellett, C. D. (2008). Measuring teachers’ self-efficacy beliefs: Development and use of the TEBS-Self. Teaching and Teacher Education, 24, 751–766. http://doi.org/10.1016/j.tate.2007.02.010
Detrich, R. (2020). Cost-effectiveness analysis: A component of evidence-based education. School Psychology Review, 49, 423–430. http://doi.org/10.1080/2372966X.2020.1827864
Education Sciences Reform Act of 2002, Pub. L. No. 107–279, 116 Stat. 1940 (2002).
Every Student Succeeds Act (2015). 20 U.S.C. § 6301.
Individuals With Disabilities Education Act. (2004). 20 U.S.C. § 1400.
Kiuhara, S. A., Kratochwill, T. R., & Pullen, P. C. (2017). Designing robust single-case design experimental studies. In J. M. Kauffman, D. P. Hallahan, & P. C. Pullen (Eds.), Handbook of special education (pp. 116–136). Routledge.
Lang, S. N., Tebben, E., Odean, R., Wells, M. B., & Huang, H. (2024). Inequities in coaching interventions: A systematic review of who receives and provides coaching within early care and education. Child & Youth Care Forum, 53, 141–171. http://doi.org/10.1007/s10566-023-09748-7
Levin, H. M., McEwan, P. J., Belfield, C., Bowden, A. B., & Shand, R. (2017). Economic evaluation in education: Cost-effectiveness and benefit-cost analysis. SAGE Publications.
Levitan, S. (2023). Planning time may help mitigate teacher burnout-but how much planning time do teachers get? National Council on Teacher Quality. https://www.nctq.org/research-insights/planning-time-may-help-mitigate-teacher-burnout-but-how-much-planning-time-do-teachers-get/
McLeod, R. H., Hardy, J. K., & Grifenhagen, J. F. (2019). Coaching quality in pre-kindergarten classrooms: Perspectives from a statewide study. Early Childhood Education Journal, 47, 175–186. http://doi.org/10.1007/s10643-018-0899-5
Monahan, S., Atkins-Burnett, S., Wasik, B. A., Akers, L., Hurwitz, F., & Carta, J. (2015). Developing a tool to examine teachers’ use of ongoing child assessment to individualize instruction. U.S. Department of Health and Human Services.
National Center for Education Statistics. (2021). Education demographic and geographic estimates. https://nces.ed.gov/programs/edge/Geographic/LocaleBoundaries. Accessed September 1, 2023.
Ramey, S. L., Crowell, N. A., Ramey, C. T., Grace, C., Timraz, N., & Davis, L. E. (2011). The dosage of professional development for early childhood professionals: How the amount and density of professional development may influence effectiveness. In J. A. Sutterby (Ed.), The early childhood educator professional development grant: Research and Practice. ProQuest.
Schachter, R. E., Knoche, L. L., Goldberg, M. J., & Lu, J. (2024). What is the empirical research base of early childhood coaching? A mapping review. Review of Educational Research, 94, 627–659. http://doi.org/10.3102/00346543231195836
Schneider, M. (2018). Message from IES Director: Changes are coming to research competitions. https://ies.ed.gov/director/remarks/researchcomp2018.asp.
Shand, R., & Bowden, A. B. (2021). Empirical support for establishing common assumptions in cost research in education. Journal of Research on Educational Effectiveness, 15(1), 103–129. http://doi.org/10.1080/19345747.2021.1938315
Shepley, C., Duncan, A. L., & Setari, A. P. (2024). Toward developing and validating a measure to appraise progress monitoring ability. Journal of Early Intervention, 42, 148–164. http://doi.org/10.1177/10538151241235557
Shepley, C., Duncan, A. L., & Webb, E. (2025). Effects of an online module to improve preschool teachers’ progress monitoring implementation and abilities. Research in Special Education, 2, 1–23. http://doi.org/10.25894/rise.2766
Shepley, C., & Grisham-Brown, J. (2019). Multi-tiered systems of support for preschool-aged children: A review and meta-analysis. Early Childhood Research Quarterly, 47, 296–308. http://doi.org/10.1016/j.ecresq.2019.01.004
Shepley, C., Grisham-Brown, J., Lane, J. D., & Ault, M. J. (2022). Training teachers in inclusive classrooms to collect data on individualized child goals. Topics in Early Childhood Special Education, 41(4), 253–266. http://doi.org/10.1177/0271121420915770
Shepley, C., Lane, J. D., & Graley, D. (2023). Progress monitoring data for learners with disabilities: Professional perceptions and visual analysis of effects. Remedial and Special Education, 44, 283–293. http://doi.org/10.1177/07419325221128907
Shepley, C., Setari, A., Duncan, A., & Webb, E. (2025). Validity of an online assessment to appraise teacher progress monitoring ability. Assessment for Effective Intervention, 51(1), 3–9. http://doi.org/10.1177/15345084251366206
