Diminishing Marginal Returns to Computer‐Assisted Learning
The previous expansion of EdTech as a substitute for traditional learning around the world, the recent full‐scale substitution due to COVID‐19, and potential future shifts to blended approaches suggest that it is imperative to understand input substitutability between in‐person and online learning. We explore input substitutability in education by employing a novel randomized controlled trial that varies dosage of computer‐assisted learning (CAL) as a substitute for traditional learning through homework. Moving from zero to a low level of CAL, we find positive substitutability of CAL for traditional learning. Moving from a lower to a higher level of CAL, substitutability changes and is either neutral or even negative. The estimates suggest that a blended approach of CAL and traditional learning is optimal. The findings have direct implications for the rapidly expanding use of educational technology worldwide prior to, during, and after the pandemic.
<sbt id="AN0162877501-2">INTRODUCTION</sbt>
Numerous educational interventions have been used to improve academic achievement and increase human capital among schoolchildren in developing countries. Among these interventions, technology‐based interventions have shown promise relative to other popular interventions such as teacher training, smaller classes, and performance incentives (McEwan, [39]). It has been argued that educational technology (EdTech), such as computer‐assisted learning (CAL), can offset deficiencies that commonly plague schools, such as low teacher quality, high rates of teacher and student absenteeism, low levels of student motivation, and many students being below grade level (Livingston, [34]; The Economist, [56]; World Bank Group, [14]). These arguments are consistent with the rapid substitution of EdTech for traditional teaching methods and the explosion of expenditures on EdTech throughout the world, both of which were happening even before the pandemic. Furthermore, COVID‐19 greatly accelerated these trends, resulting, at least in the short run, in a wholesale substitution from traditional learning to EdTech and in a shift toward relying on technology, especially for home‐ and after‐school work, that is likely to persist long after schools return fully to in‐class instruction.
The previous findings on the effectiveness of CAL, however, are mixed, ranging from null effects to extremely large positive effects (Abbey et al., [1]; Bulman & Fairlie, [13]; Escueta et al., [20]; Rodriguez‐Segura, [49]). To gain insight into this heterogeneity and add a new dimension of analysis, we design and implement a randomized controlled trial (RCT) involving approximately 6,000 grade 3 students in 343 classes (one per school) from two regions in Russia. The RCT includes three treatment arms: 1) CAL for 45 minutes per week, 2) a "double dosage" CAL for 90 minutes per week, and 3) a control that receives no CAL. Estimates of the two treatment effects allow us to explore input substitutability in the use of CAL for the first time in the literature. Importantly, CAL use was directly substituted for traditional learning, avoiding problems associated with identifying separate technology versus increased learning time effects (Ma et al., [35]).
Although extant evidence is from field experiments, heterogeneity in results may stem from variation in the substitutability between CAL and traditional learning. Because the previous literature focuses on estimating the average productivity of CAL for a fixed amount of time on CAL, it provides only limited evidence on how this substitutability might change, and it does not speak to important questions regarding input substitutability. In fact, surprisingly, there is little evidence in the previous literature on the substitutability of any input in the educational production function. Another problem is that evaluating only one level of treatment intensity could be misleading if the level chosen for the experiment is too low or too high relative to other substitutable inputs (i.e., educational production might be suboptimal). Unfortunately, as with many other inputs in educational production, theory provides only limited guidance on optimal levels of substitution.
This study is the first to discern how the effects of CAL change exogenously with respect to usage levels within the same educational setting. Our study is also one of the only studies that evaluates CAL as a direct substitute for traditional learning instead of as a supplemental after‐school program, a distinction that likely influences impact estimates. Examining the role of CAL as a direct substitute for traditional learning is also important as countries increasingly mandate limitations on the time children spend in after‐school programs and on homework. In addition, our use of CAL substitutes for homework rather than for in‐class instruction, providing new evidence on the use of CAL for homework. Finally, and perhaps most importantly, direct substitution between the two inputs in the field experiment ensures that any changes in educational production are due to input substitution and not to increased total inputs. Our study is one of the first to use an experiment to provide evidence on the substitutability of any input in educational production.
We find positive effects of CAL on math test scores at the base dosage level. Doubling the amount of CAL input, we find effect sizes similar to those at the base level relative to the control. This evidence is consistent with a concave relationship between CAL and educational production. Moving from zero to the base level of CAL, CAL is a positive substitute for traditional learning; but moving from the base level to the higher level, CAL is only a neutral substitute, with no additional gain in math test scores.
For impacts on language achievement, we find positive effects of CAL at the base level, but stronger evidence consistent with concavity. We find that CAL is a positive substitute when moving from zero to the base level of CAL, but a negative substitute when moving from the base level of CAL to the higher level. The findings for math and language in CAL do not differ when we shift the focus from mean impacts to impacts throughout the distribution (i.e., quantile treatment effects). We find no evidence of differential treatment effects by gender for either dosage level. For math and language, we do not find clear evidence of differential treatment effects for high‐ability students relative to low‐ability students.
Our findings contribute to a large literature on the effectiveness of CAL, which provides a wide range of estimates from null effects to extremely large positive effects. Generally, evaluations of supplemental CAL programs find large positive effects on academic outcomes (e.g., Blimpo et al., [11]; Böhmer, 2014; Ito et al., [29]; Lai et al., [[32], [31]]; Mo et al., [42]; Muralidharan et al., [43]). For the less common use of CAL as a direct substitute for regular teacher instruction in the classroom or traditional learning after school, the evidence often shows null effects (Barrow et al., [7]; Campuzano et al., [15]; Carillo et al., 2011; Dynarski et al., [19]; Linden, [33]; Ma et al., [35]; Naik et al., [44]; Schling & Winters, [52]; Taylor, [55]). Related to these studies of CAL, the less structured provision of computers and laptops for home and/or school use among schoolchildren tends to show null or mixed effects. The findings from our experiment suggest that some of the wide range of estimates on the effectiveness of CAL might be due to chosen dosage levels, in addition to heterogeneity across studies in the development level of the country, substitution versus supplemental programs, and features of the software.
The evidence from this analysis helps inform decisions about optimal investment in CAL relative to traditional learning. Identifying optimal levels of investment in CAL is especially important as governments, schools, and families are currently investing heavily in EdTech and likely to increase expenditures in the future. This is especially true for the rapidly growing use of new technologies and their substitution for traditional learning methods in educating schoolchildren in developing countries, which was happening prior to COVID and likely has been accelerated because of COVID.
RESEARCH DESIGN
Field Experiment
To explore CAL and traditional learning substitutability, we design and implement an RCT involving approximately 6,000 third grade schoolchildren in 343 classes/schools in two provinces of Russia. The RCT includes three treatment arms: an "X" dosage CAL arm where students receive 10 items per subject using the software, which (as communicated to the treatment group) is approximately 20 to 25 minutes per week of math CAL and 20 to 25 minutes of (Russian) language CAL; a "2X" dosage CAL arm in which (as communicated to the treatment group) students receive 20 items per subject which is approximately 40 to 50 minutes of math CAL and 40 to 50 minutes of language CAL; and a control arm. With this design, we can explore input substitutability in educational production across different levels of CAL.
The field experiment is conducted among primary schools in Russia. Specifically, 343 schools from two regions were sampled to participate in the experiment. In each school, one third grade class was sampled, with an average of 18.3 students per class. For each third grade class there is one teacher who teaches both math and language. Altogether, 6,253 students and their 343 teachers were sampled and surveyed.
In the second half of October 2018 (near the start of the school year), we conducted a baseline survey of the sampled students, their teachers, and their principals. After the baseline survey, we randomized classes to treatment conditions. Students participated in the treatment from December 2018 until mid‐May 2019. In mid‐May 2019, the end of the Russian school year, we administered a follow‐up survey with students, teachers, and principals.
CAL
The provider of the CAL software is the largest technology company in Russia (hereafter "the provider"). The provider's platform has more than 10,000 items across various math and language sub‐content areas for grades 2 to 4. The items and associated content areas align closely with national educational standards and curricula for primary schools, and thus the problems are similar to those in traditional assignments. As such, the platform was intended to be used throughout the country. After our evaluation, it was widely adopted by schools in many regions.
The CAL software is of high quality and similar to that used in previous studies. It has a graphics‐based and attractive user interface and dynamic, engaging tasks (Appendix D). It allows multiple tries per question and provides scaffolded feedback after each student response. The software also allows teachers to track and compare the performance of individual students both overall and at a granular level in subject‐specific content and sub‐content areas. Appendix A presents example screenshots of these different aspects of the CAL software.
Students in the treatment group use CAL at home as a partial or full replacement for traditional pencil and paper homework. Traditionally, teachers give students a certain number of homework exercises in class, ask students to complete the assigned exercises at home (using pencil and paper), and then turn in the completed exercises in class. For the treated students, some or all of the traditional pencil and paper exercises are replaced by time on CAL. Homework, whether traditional or replaced by CAL, reviews concepts and allows students to practice and solidify their knowledge of what was learned in class.
Baseline Survey
We administered three baseline surveys: to students, teachers, and principals. The student survey collected basic background information such as student gender and time spent on math and language homework. As part of the baseline survey, we administered proctored exams in four areas: math, language, reading, and vocabulary (math and language achievement were our pre‐determined main academic outcomes). As noted in Appendix B, the exams have good psychometric properties. The teacher survey further collected information on the degree to which teachers use information and computing technology (ICT) at home and their self‐efficacy with ICT.
Randomized Design and Statistical Power
To maximize statistical power, we created the sample strata, or blocks, by grouping the six classes with the closest mean grade 3 math scores in a region into a stratum. Adjusting for strata, the resulting intraclass correlation coefficients were extremely low for our two main outcomes: 0.000 in math achievement and 0.053 in language achievement. Classes were then randomly allocated within strata (conducting randomization once) to one of three different treatment conditions (T1 = CAL Dosage 1X, T2 = CAL Dosage 2X, or C = Control or No CAL), as shown in Table 1.
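A minimal sketch of this stratified assignment, on simulated data (all variable names are illustrative, and 342 classes are used so that strata of six divide evenly; the actual experiment had 343):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Illustrative class-level data: region and baseline mean grade 3 math score.
classes = pd.DataFrame({
    "class_id": range(342),
    "region": [0] * 180 + [1] * 162,   # both region sizes divisible by 6
    "mean_math": rng.normal(0, 1, 342),
})

# Strata: groups of six classes with the closest mean scores within a region.
classes = classes.sort_values(["region", "mean_math"]).reset_index(drop=True)
classes["stratum"] = classes.index // 6

# Randomly allocate two classes per stratum to each arm (T1 = 1X, T2 = 2X, C).
classes["arm"] = ""
for _, idx in classes.groupby("stratum").groups.items():
    classes.loc[idx, "arm"] = rng.permutation(["T1", "T1", "T2", "T2", "C", "C"])
```

Each stratum ends up with two classes per arm, so treatment assignment is balanced on baseline achievement by construction.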
Table 1 Assignment of classes (schools) to treatment arms
The large number of schools per treatment arm, extremely low ICCs, and rich set of baseline controls provide substantial statistical power with which to measure effects. Even without controlling for baseline test scores, minimum detectable effect sizes (MDES) are approximately 0.09 SDs (for math) and 0.12 SDs (for language) for pairwise treatment comparisons.
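These MDES figures can be roughly reproduced from the standard formula for a pairwise two‐arm comparison in a cluster‐randomized trial, using the design parameters reported in the text (about 114 classes per arm, 18.3 students per class, and the two ICCs) and assuming 80 percent power with a 5 percent two‐sided test; the 2.8 multiplier below is z_0.975 + z_0.80:

```python
import math

def mdes(icc, clusters_per_arm=114, students_per_cluster=18.3, multiplier=2.8):
    """Minimum detectable effect size (in SDs) for a pairwise comparison of
    two arms in a cluster-randomized trial; 2.8 ~ z_{0.975} + z_{0.80}."""
    design_effect = icc + (1 - icc) / students_per_cluster
    return multiplier * math.sqrt(2 * design_effect / clusters_per_arm)

print(round(mdes(0.000), 2))  # 0.09 (math ICC)
print(round(mdes(0.053), 2))  # 0.12 (language ICC)
```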
Balance Checks
Appendix Table A1 presents summary statistics for the baseline variables as well as tests for balance on baseline observables across the treatment arms. The exam scores are standardized as z‐scores and thus have a mean of 0 and standard deviation of 1. The percentage of students that are female is 52 percent and the average class size is 21.6 students. The table also shows the results from a total of 24 tests comparing average variable values among the treatment and control arms. These tests were conducted by regressing each baseline variable on a treatment group indicator and controlling for strata. For tests of student‐level variables, standard errors are clustered at the school level.
Out of the 24 tests, only one coefficient was statistically significantly different from zero at the 10 percent level and none were significant at the 5 percent or 1 percent levels. The results from Appendix Table A1 indicate that balance was achieved across the three arms, especially as a small number of significant differences are to be expected by random chance. A joint test of all baseline covariates simultaneously shows no significant difference between T1 and C.
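One such balance regression can be sketched on simulated data as follows (the variable names are ours; the call mirrors the description above: baseline variable on a treatment indicator plus strata fixed effects, with standard errors clustered at the school level):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated student-level data: 120 schools of 18 students each.
n_schools, n_per = 120, 18
df = pd.DataFrame({
    "school": np.repeat(np.arange(n_schools), n_per),
    "stratum": np.repeat(np.arange(n_schools) // 6, n_per),
    "t1": np.repeat(rng.integers(0, 2, n_schools), n_per),  # school-level arm
    "baseline_score": rng.normal(0, 1, n_schools * n_per),
})

# Balance test: regress the baseline variable on the treatment indicator,
# controlling for strata and clustering standard errors at the school level.
fit = smf.ols("baseline_score ~ t1 + C(stratum)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["school"]})
print(fit.params["t1"], fit.pvalues["t1"])
```

Because assignment is unrelated to the baseline variable here, the coefficient should be close to zero, as in a well-balanced trial.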
Program (Treatment) Administration
In both the CAL Dosage 1X and CAL Dosage 2X treatment arms, the provider asked teachers to assign CAL items to their classes through their registered accounts. Teachers were given instructions to use assigned CAL items during homework but were also allowed to use them in class. The vast majority of teachers reported using CAL for homework (more than 95 percent).
One reason that increasing the dosage of CAL could result in increased effectiveness is that it might have increased total time spent on homework rather than purely substituting for traditional homework. Table 2 examines this possibility using student‐reported homework minutes.
Table 2 Effects of CAL Dosage 1X and Dosage 2X on student‐reported minutes per week of math and language homework
Notes: Dosage 1X is 10 items (approximately 20 to 25 minutes) of CAL per subject per week; Dosage 2X is 20 items (approximately 40 to 50 minutes) of CAL per subject per week.
The dosages of CAL are in line with those used in recent studies. For example, Lai et al. ([32], 2015) and Mo et al. ([42]) find large positive effects of supplemental CAL programs for Chinese schoolchildren (0.12 to 0.18σ in math) from 40 minutes of instruction, two times a week. Some studies use larger dosages. Böhmer et al. (2014) find large positive effects from an after‐school program providing CAL and student coaches in South Africa (0.25σ in math) from 90 minutes twice a week, although part of the program includes student coaches. Banerjee et al. ([6]) find that 120 minutes per week of CAL improves grade 4 math test scores by 0.35 SDs after one year. Muralidharan et al. ([43]) find large positive effects of after‐school Mindspark Center programs in India, which include software use and instructional support (0.59σ in math and 0.36σ in Hindi), from 90 minutes per session, six sessions a week. However, requiring schoolchildren to use CAL in addition to pre‐existing homework at these much higher levels is simply not feasible in most countries. As noted above, many countries—including China (Ministry of Education (MOE), 2018), France (MNE, 2019), and Russia ([51])—mandate limitations on the time children spend in after‐school programs and on homework. In the United States, many school districts have already implemented or are considering homework restrictions (Tawnell, [54]).
Endline Survey and Primary Outcomes
We conducted the follow‐up survey with students and teachers in mid‐May 2019, at the end of the school year. As in the baseline, we administered to students a 2‐hour proctored exam that covers math, language, reading, and vocabulary. Proctors were independent from the schools. They were recruited from regional universities and educational policy organizations, such as regional centers of educational assessment. School workers were not allowed to be proctors. We also asked students about their homework time on different subjects, and we asked teachers about their preparation time for teaching different subjects.
The primary outcome variables for the trial are student math and language achievement at the end of the school year (as measured by the exam). In the analyses, we convert the math and language endline exam scores (percent correct) into z‐scores, standardized to have a mean of 0 and a standard deviation of 1.
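The conversion is a standard z-score transformation. Whether scores are standardized against the full sample or the control group is not stated here; the sketch below standardizes against the full sample, matching the mean-0, SD-1 description of the baseline scores:

```python
import numpy as np

def to_z(percent_correct):
    """Convert percent-correct exam scores into z-scores (mean 0, SD 1)."""
    x = np.asarray(percent_correct, dtype=float)
    return (x - x.mean()) / x.std()

z = to_z([55.0, 60.0, 70.0, 45.0, 80.0])
```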
EMPIRICAL METHODS AND HYPOTHESIS TESTS
Our general approach for estimating treatment effects is to regress math and language outcomes on indicator variables for treatment assignment, baseline controls, and strata fixed effects using the following model:

y<subs>is</subs> = γ<subs>1</subs>T1<subs>s</subs> + γ<subs>2</subs>T2<subs>s</subs> + X<subs>is</subs>β + μ<subs>s</subs> + ε<subs>is</subs>, (1)

where y<subs>is</subs> is the endline test score of student i in class s, T1<subs>s</subs> and T2<subs>s</subs> indicate assignment to the Dosage 1X and Dosage 2X arms, X<subs>is</subs> is a vector of baseline controls, μ<subs>s</subs> are strata fixed effects, and ε<subs>is</subs> is an error term.
The key parameters of interest in Equation (1) are γ<subs>1</subs> and γ<subs>2</subs>. These estimates shed light on whether the production function is concave in CAL. For example, the finding of a positive estimate of γ<subs>1</subs> and an estimate of γ<subs>2</subs> that is less than 2γ<subs>1</subs> is consistent with a concave relationship. Estimates of γ<subs>1</subs> and γ<subs>2</subs> also allow one to determine whether substitution between the CAL and traditional learning inputs increases academic achievement. We can specifically examine whether substitutability diminishes with higher levels of CAL. Having three treatment arms of different dosage (including the control arm where dosage is zero) in the RCT allows us to explore these questions.
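The estimation and the concavity comparison can be illustrated on simulated data. The sketch below omits strata fixed effects for brevity and builds in equal true effects of 0.10σ for both arms (the concave case found for math); the linear restriction γ2 − 2γ1 = 0 implements the concavity test:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Simulated students: equal true effects of 0.10 SD for both dosage arms.
n = 6000
arm = rng.choice(["C", "T1", "T2"], size=n)
df = pd.DataFrame({
    "t1": (arm == "T1").astype(int),
    "t2": (arm == "T2").astype(int),
    "baseline": rng.normal(0, 1, n),
})
df["score"] = (0.10 * (df["t1"] + df["t2"]) + 0.6 * df["baseline"]
               + rng.normal(0, 0.8, n))

# Equation (1), without strata fixed effects for brevity.
fit = smf.ols("score ~ t1 + t2 + baseline", data=df).fit()

# Concavity test: is gamma2 below twice gamma1?
print(fit.params[["t1", "t2"]])
print(fit.t_test("t2 - 2*t1 = 0"))
```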
We chose the Dosage 1X and Dosage 2X levels of CAL use because, as noted above, they fall within the range of what teachers believe are reasonable amounts, are within policy regulations, and line up well with levels implemented in the previous literature. Another important point of the experimental design is that we are increasing CAL by substituting away from traditional learning which is different than adding a supplemental CAL program. This allows us to isolate changes in educational production resulting from input substitution instead of productivity changes due to changing input levels. This is an important distinction because schools and students face restrictions on in‐school and after‐school time commitments.
RESULTS
Math Scores
Table 4 reports estimates of treatment effects on math test scores. Both specifications with only baseline score controls and with baseline score plus additional controls are reported. For Dosage 1X, we find positive and statistically significant effects on math test scores (0.10 to 0.11σ). Using CAL at the base level of time increased test scores, with effect sizes roughly comparable to estimates reported in previous studies at similar dosage levels. For example, Lai et al. ([32], 2015) and Mo et al. ([42]) find 0.12 to 0.18σ effects in math from CAL programs for Chinese schoolchildren from 80 minutes per week.
Table 4 Effects of CAL Dosage 1X and Dosage 2X on math and language test scores
After doubling the dosage level, we also find positive and statistically significant treatment effects on math test scores. More importantly, however, we find point estimates that are similar to the first dosage level. Increasing the dosage level thus resulted in no additional increase in effects on math test scores. To our knowledge, these estimates are the first showing no additional effect of a higher dosage of CAL beyond the base level.
One question of interest is whether production in CAL is concave. In this case, the positive substitutability of CAL for traditional learning diminishes as CAL is expanded beyond roughly equal levels. As CAL use is expanded, one possibility is that each additional unit becomes less productive because students become less interested or engaged in the graphics‐ and video‐based learning with more use. Another possibility is that higher levels of CAL use increase the likelihood that students become distracted with other software, apps, and entertainment on the computer. On the other hand, production in CAL might not be concave. An example of this case might be that CAL and traditional learning are perfect substitutes for each other across all levels.
Having three treatment arms of different dosage in the RCT allows us to explore this question empirically for the first time. We first examine whether the estimates are consistent with concavity by comparing the impact of the 2X dosage to twice the impact of the 1X dosage (where both impacts are relative to the control). Table 4 reports the results of the test. We find some limited evidence that is consistent with concavity in educational production in CAL.
Turning to the implications for the substitutability between CAL and traditional learning, the estimates of the two treatment effects suggest different substitutability depending on the base level of CAL. We find that moving from zero to the lower level of CAL, the substitutability of CAL for traditional learning is greater than one (i.e., traditional learning can be reduced by more than one unit when CAL is increased by one unit), but moving from the lower level of CAL to the higher level of CAL the substitutability is equal to one (i.e., CAL and traditional learning are perfectly substitutable across this range). These findings also provide some suggestive evidence on the general forms of the educational production function as discussed in Bettinger et al. ([8]).
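This substitutability logic can be written out explicitly. The notation below (achievement f, traditional-learning time t, CAL time c) is ours, not the paper's, and is only a sketch of the argument:

```latex
% Achievement A = f(c, t); each dosage step replaces \Delta t of traditional
% homework with \Delta c of CAL, starting from (0, t_0).
\begin{align*}
\gamma_1 &= f(\Delta c,\; t_0 - \Delta t) - f(0,\; t_0) > 0
  && \text{CAL more than offsets the forgone } \Delta t, \\
\gamma_2 - \gamma_1 &= f(2\Delta c,\; t_0 - 2\Delta t) - f(\Delta c,\; t_0 - \Delta t) \approx 0
  && \text{the second substitution is one-for-one.}
\end{align*}
```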
The test of two different levels of CAL is also useful beyond testing for concavity or examining input substitutability. For example, testing for the positive effect of each CAL dosage is of immediate interest to the CAL provider (the largest technology company in Russia) as well as to local and national policymakers in Russia (since, to the best of our knowledge, this is the first randomized evaluation of EdTech in Russia). Evaluating only one level of treatment intensity could be misleading for identifying whether CAL is effective if the level chosen for the experiment is too low or too high. We find positive and statistically significant effects for both treatment levels, suggesting that different choices of levels of CAL can improve math test scores.
Language Scores
We also examine treatment effects on language test scores. The previous literature focuses more on math test scores than on language test scores. Languages differ in each country, making it difficult to choose base levels and compare estimates across studies. Additionally, we might expect that educational production in CAL differs between math and language. Although math learning is mostly accomplished through school and homework, language learning is broader because reading for pleasure and family interactions also play key roles in learning.
Table 4 reports estimates for language test scores. Both specifications with only baseline score controls and with baseline score plus additional controls are reported. For Dosage 1X, we find some evidence of positive and statistically significant effects (at the 0.10 level) on language test scores. After doubling the dosage level, the treatment effect estimates become close to zero.
Table 4 also reports the estimates that provide suggestive evidence on the concavity test. For impacts on language achievement, we find positive effects of CAL at the base level, but stronger evidence that is consistent with concavity in the production function. We find a positive substitutability of CAL for traditional learning moving from zero to the lower level of CAL, but a negative substitutability moving from the lower level of CAL to the higher level. If the experiment had only estimated the treatment effect at the higher dosage level in CAL, the positive effects at the lower level, curvature, and changing substitutability would have been missed.
The findings clearly indicate that there is an optimal amount of CAL use for language that represents a relatively balanced approach instead of one with very high levels of usage (or no usage). Additionally, if the experiment had only provided the higher dosage of CAL, it would have concluded with a clear null effect on language test scores. This represents a more general concern in tests of the effectiveness of CAL that rely on only one input level.
Interest in Studying Math and Language
A common argument for how CAL, or EdTech more generally, works is that it increases students' interest in engaging with the subject material. If students enjoy learning math through CAL, for example, that enjoyment could spill over to learning math more generally. Thus, one reason that substituting CAL for traditional learning at the base level might increase math achievement is that CAL's graphics and gamified nature engage kids and encourage them to study math. Additionally, any curvature in isoquants could be partly due to diminishing engagement in math as CAL is increased relative to traditional learning. Diminishing engagement could be due, for example, either to limited attention spans (that benefit from a mix of traditional and computer‐based homework) or to greater fatigue (because of the more intense, interactive nature of the CAL exercises).
Table 5 reports estimates of Equation (1) for whether students are interested in studying math and language. The questions underlying the measure do not refer to CAL and are focused more generally on interest in math or language. At the base dosage level, the math interest of the treatment group is 0.09σ higher than that of the control group. Moving to the higher dosage level in CAL, the point estimates become smaller and are no longer statistically distinguishable from the control, but they are not statistically different from the Dosage 1X estimates. Although these results are only suggestive, they are consistent with the lower use of CAL increasing interest in math more generally and thus resulting in higher math test scores. But when using CAL more extensively, and traditional learning consequently less, students might have become less interested and motivated in math and thus experienced no resulting increase in math test scores. These patterns are consistent with a concave educational production function in CAL and related curvature in isoquants.
Table 5 Effects of CAL Dosage 1X and Dosage 2X on student interest in math and language
The patterns are also strong for interest in studying language. We find large positive estimates from the lower dosage of CAL: interest in studying language increases by 0.08 to 0.09σ relative to the control. Doubling the dosage of CAL results in a small negative to no change in interest relative to the control. These estimates are consistent with the findings for language test scores and with more concavity in CAL and curvature in isoquants for language relative to math.
Distributional Effects
The results from the treatment regressions provide evidence of CAL effects at the mean. Turning the focus to other parts of the distribution, we estimate quantile treatment effects regressions to test for differential treatment effects across the post‐treatment outcome distribution. Appendix Figures C1 and C2 display estimates and 95 percent confidence intervals for each percentile for the Dosage 1X and Dosage 2X effects for math and language test scores, respectively. For math test scores we find some evidence that treatment effects are larger in the middle and top of the distribution than the bottom of the distribution. For most of the distribution we find positive and similar‐sized estimates of Dosage 1X and Dosage 2X effects (except possibly at the very top of the distribution, where there is more noise).
For language scores, the patterns are consistent with the findings for mean treatment effects. Dosage 1X has positive effects throughout the distribution, whereas Dosage 2X has no effects. Although the quantile treatment estimates are not as precisely measured, they do not change the conclusion from the mean impacts reported in Table 4. Mean impact estimates do not appear to be concealing differential effects at different parts of the distribution.
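Quantile treatment effects of this kind are estimated by quantile regression of the endline score on the treatment indicator, percentile by percentile. A minimal sketch on simulated data with a uniform 0.1σ shift (variable names are illustrative):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Simulated endline scores: a uniform 0.1-SD treatment shift.
n = 4000
t1 = rng.integers(0, 2, n)
df = pd.DataFrame({"t1": t1, "score": 0.1 * t1 + rng.normal(0, 1, n)})

# Quantile treatment effects at selected percentiles.
qte = {q: smf.quantreg("score ~ t1", df).fit(q=q).params["t1"]
       for q in (0.1, 0.25, 0.5, 0.75, 0.9)}
print(qte)
```

With a uniform shift, every quantile estimate should hover around 0.1; heterogeneous effects would instead show up as a slope across quantiles.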
Heterogeneous Effects
We next examine heterogeneous effects by two important subgroups. We focus on differences based on gender and baseline ability (above and below the median). Treatment effects might differ by gender because boys and girls use computers differently, with much higher levels of video game use among boys (Algan & Fortin, [2]; Fairlie, [21]; Rideout et al., [47]). Exploring heterogeneity by baseline ability might be important because, for example, lower‐ability students might have more room to make gains in test scores than high‐ability students from using CAL, or lower‐ability students might benefit more from engaging video‐based and gamified instruction. Differences might not reveal themselves when focusing on one treatment level (i.e., average productivity at that point) and instead might manifest in degrees of concavity.
Appendix Tables A3 and A4 report estimates of interactions by gender on achievement and interest in subject, respectively. As expected, we find evidence that girls have higher language test scores than boys, but similar levels of test scores in math (see e.g., Peña‐López, [45]). However, even with the difference in language scores, we do not find evidence of differential treatment effects by gender at either Dosage 1X or Dosage 2X for either math or language. The estimates for interest in math and language also show higher interest in language among girls than boys, but no differences in math interest or dosage effects by gender.
We next examine differences by baseline ability level. Appendix Tables A5 and A6 report estimates of interactions between the Dosage 1X and Dosage 2X treatments and above‐median baseline ability for test scores and interest in the subject, respectively. For math, the main treatment effects are positive and significant at both dosage levels. The point estimates for the difference in Dosage 1X vs. control treatment effects between the bottom and top half of students are not statistically significant. The point estimates for the difference in Dosage 2X vs. control effects between the bottom and top half of students show some evidence of marginal significance. For language, we find little statistically significant evidence of positive or negative main effects. We find only limited evidence of a positive differential Dosage 2X vs. control effect between students in the bottom and top half of the baseline language ability distribution. When it comes to liking subjects, the estimates are noisier but generally line up with the test score results. Overall, we do not find clear evidence of differential treatment effects for high‐ability students relative to low‐ability students on either test score.
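Heterogeneity analyses of this form amount to a regression with treatment‐by‐subgroup interaction terms. The sketch below is a minimal illustration with simulated data and hypothetical variable names (not the study's dataset or code); the interaction coefficients capture the differential dosage effect for above‐median students.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data (illustrative, not the study's dataset).
rng = np.random.default_rng(1)
n = 3000
df = pd.DataFrame({
    "dosage1x": rng.integers(0, 2, n),
    "dosage2x": 0,
    "baseline": rng.normal(0, 1, n),
})
# Make the arms mutually exclusive: non-Dosage-1X students may be Dosage 2X.
mask = df["dosage1x"] == 0
df.loc[mask, "dosage2x"] = rng.integers(0, 2, mask.sum())
df["above_median"] = (df["baseline"] > df["baseline"].median()).astype(int)
df["score"] = (0.5 * df["baseline"] + 0.2 * df["dosage1x"]
               + 0.2 * df["dosage2x"] + rng.normal(0, 1, n))

# OLS with treatment-by-ability interactions and robust standard errors.
fit = smf.ols(
    "score ~ dosage1x*above_median + dosage2x*above_median + baseline",
    data=df).fit(cov_type="HC1")
# The interaction terms give the differential effect for above-median students.
print(fit.params[["dosage1x:above_median", "dosage2x:above_median"]])
```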
CONCLUSION
Billions of dollars are spent on computer‐based learning in schools in developing countries each year, and substantially more has been spent as a result of the accelerated shift to technology for remote learning during the pandemic. But what are the effects of this massive shift towards EdTech? Unfortunately, there is limited theoretical guidance on what optimal levels of CAL should be, and the newness of EdTech in developing countries does not provide a long enough track record to determine what works, what does not, and the impacts of the continued substitution of CAL for traditional learning. The empirical evidence, even from RCTs, is decidedly mixed and focuses exclusively on a single CAL dosage level. To remedy this deficiency in the literature, we study for the first time the effectiveness of CAL on the educational outcomes of school children at different levels of treatment intensity, which sheds light on the substitutability of CAL for traditional learning. Our field experiment, involving approximately six thousand Russian schoolchildren and three treatment arms with varying CAL dosage levels, generates exogenous variation in CAL use. In the experiment, CAL is substituted directly for traditional learning through homework. The experiment provides novel evidence on the substitutability of inputs, not only for CAL but for any input in the educational production function.
Estimates from the field experiment indicate that CAL increases math test scores at both the base and higher dosage levels. As CAL is further substituted for traditional learning, from the base level to the higher level, however, we find similar effect sizes. Taken together, this suggests that the substitutability of CAL for traditional learning in math is positive when moving from zero to the base level of CAL, but is neutral when moving from the base level of CAL to the higher level of CAL. These estimates are consistent with CAL having a positive return in educational production. Turning to language achievement, which has been studied less in the previous literature, we find stronger evidence of diminishing substitutability of CAL for traditional learning. The experimental estimates of the returns to CAL for language depend on the level chosen. Importantly, if the experiment had only estimated the treatment effect at the higher dosage level, the positive effects at the lower level would have been missed. Better knowledge about substitutability is especially important because the widespread substitution of EdTech that happened around the world due to COVID is not likely to go away entirely, but will perhaps be succeeded by more blended approaches in the future.
A novel and important finding is that educational production does not appear to fit a situation in which teachers and students can simply substitute between CAL and traditional learning at any level to achieve the same result. For both math and language achievement, we find evidence of a diminishing marginal rate of technical substitution of CAL for traditional learning. The marginal costs of shifting from a lower level to a higher level of CAL are very low because students already have computers, the software is online, and it can be replicated at essentially no cost. Although there are fixed costs associated with developing the software and keeping it up‐to‐date, the provider has made it free of charge to all schools and teachers in the country. Although we do not analyze detailed measures of costs, we therefore do not expect costs to shift the optimal levels much beyond what we find. The primary constraint in this setting is total homework time mandated by the government.
Why do we find evidence of diminishing substitutability between CAL and traditional learning? One possibility that is at least consistent with our experimental findings is based on changes in interest and engagement in the subject matter. We find that for both math and language, the base level of CAL resulted in the highest levels of interest. When the dosage level of CAL was doubled, students reported lower levels of interest. The finding of diminishing substitutability might be due to these effects on interest and engagement in subject material. Another possibility is that at base level dosages of CAL, students gain from being more engaged in learning the material through the technology, but at higher dosages they lose out on the positive effects of traditional learning. In the end, a blended approach might be the optimal solution for schools and students. The blended approach might keep students engaged and at the same time expose students to more beneficial methods of learning or just keep students switching the focus of their attention. The full‐scale switch from in‐person instruction to online instruction due to COVID is a good example of potential negative impacts on engagement.
More research is needed on these important underlying questions regarding how students learn using technology and, more broadly, on the substitutability of other educational inputs. Findings from future research along these lines will build on the novel findings presented here and help further identify optimal levels of investment in CAL. This is imperative because governments, schools, and families around the world were increasing investments in EdTech and substituting EdTech for traditional learning methods even before the larger movement towards EdTech in response to COVID. Moreover, the reliance on technology, especially for home‐ and after‐school work, is not likely to return to pre‐pandemic levels, but will instead rise further even after schools return to in‐class instruction.
ACKNOWLEDGMENTS
We would like to thank Yandex Inc. for data and support for the study. We thank Natalia Lazzati, Jesse Li, and Jon Robinson, and seminar participants at UC Berkeley for comments and suggestions. The study was pre‐registered at the AEA RCT registry prior to endline data collection. The article was prepared within the framework of the HSE University Basic Research Program and funded by the Russian Academic Excellence Project '5‐100'. Approval for the study was obtained from the National Research University Higher School of Economics IRB and Stanford University (IRB #50207).
Appendix Table A1. Balance Check among Treatment Arms (Dosage 2X, Dosage 1X, and No Dosage) and Summary Statistics
Appendix Table A2. Balance Check among Treatment Arms, Non‐Missing Students
Appendix Table A3. Heterogeneous Effects of CAL Dosage 1X and Dosage 2X on Math and Language Test Scores, by Student Gender
Appendix Table A4. Effects of CAL Dosage 1X and Dosage 2X on Interest in Math and Language, by Gender
Appendix Table A5. Heterogeneous Effects of CAL Dosage 1X and Dosage 2X on Math and Language Test Scores, by Student Ability (Above and Below Median Baseline Score)
Appendix Table A6. Effects of CAL Dosage 1X and Dosage 2X on Interest in Math and Language, by Student Ability (Above and Below Baseline Median Score)
Appendix B. Psychometric Properties of the Exams
Figure C1(a‐b). Quantile Effects of Dosage 1X and Dosage 2X (each versus Control) on Math Test Scores
Figure C2(a‐b). Quantile Effects of Dosage 1X and Dosage 2X (each versus Control) on Language Test Scores
Footnotes
1 For example, the one‐to‐one laptop or home computer programs that have been previously studied do not structure or exogenously determine time use, which is needed to study marginal productivity or input substitutability (e.g. Beuermann et al., [9]; Cristia et al., [17]; Fairlie & Robinson, [23]; Hull & Duch, [28]).
2 Hypothetically, a meta‐analysis of estimates from previous studies could be used to provide evidence on the characteristics of education production, but the CAL programs used in these studies differ by more than usage time (e.g. substitution vs. supplemental program, country, student preparation, grade level, and the presence of additional instructional support).
3 Policies to reduce time on homework exist, for example, in China (Ministry of Education of the People's Republic of China [MOE], 2018), France (Ministère de l'Éducation Nationale et de la Jeunesse [MNE], 2019), and Russia (SanPiN, 2010).
4 See, for example, Angrist and Lavy ([4]), Banerjee et al. ([6]), Barrow et al. ([7]), Blimpo et al. ([11]), Campuzano et al. ([15]), Carrillo et al. (2011), Dynarski et al. ([19]), Falck et al. ([24]), Ito et al. ([29]), Linden ([33]), Lai et al. ([32], 2015), Mo et al. ([42]), Ma et al. ([35]), Muralidharan et al. ([43]), Rockoff ([48]), Rouse and Krueger ([50]), and Taylor ([55]). Also see Abbey et al. ([1]), Bulman and Fairlie ([13]), Escueta et al. ([20]), Glewwe et al. (2013), and Rodriguez‐Segura ([49]) for recent reviews of the literature.
5 Conducting a meta‐analysis of the large number of studies conducted in China, Abbey et al. ([1]) find that the pooled effect size of the 18 included studies indicates a small, positive effect on student learning (0.13 SD, 95 percent CI [0.10, 0.17]), and the strongest evidence exists for the effectiveness of CAL that is used as a supplement to existing learning inputs.
6 See, for example, Beuermann et al. ([9]), Cristia et al. ([17]), de Melo et al. ([18]), Fairlie and London ([22]), Fairlie and Robinson ([23]), Fiorini ([25]), Fuchs and Woessmann ([26]), Hull & Duch ([28]), Machin et al. ([36]), Malamud and Pop‐Eleches ([38]), Malamud et al. ([37]), Schmitt and Wadsworth ([53]), and Yanguas ([59]).
7 As specified in our pre‐analysis plan, we focus on math and language outcomes. Our primary outcomes are math and language achievement as measured by standardized test scores. Course grades for students were not available from all schools.
8 We address missing values for the baseline controls by creating a missing value dummy variable and including it in the regression.
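The dummy‐variable adjustment described in this footnote can be sketched as follows. This is an illustrative fragment with a hypothetical column name, not the study's code: each control with missing values gets a companion indicator, and the missing entries are filled with a constant so the observation is retained in the regression.

```python
import numpy as np
import pandas as pd

# Illustrative data with a hypothetical baseline control column.
df = pd.DataFrame({"baseline_math": [0.3, np.nan, -1.1, np.nan, 0.8]})

# Missing-value dummy: 1 where the baseline control is missing.
df["baseline_math_missing"] = df["baseline_math"].isna().astype(int)
# Impute a constant (e.g., zero) so the regression keeps the observation;
# the missing-value dummy absorbs the imputation.
df["baseline_math"] = df["baseline_math"].fillna(0.0)
print(df)
```

Both the imputed column and the missing‐value dummy would then enter the regression as controls.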
9 The COVID pandemic, however, does not provide a good natural experiment for examining the effects of substituting towards EdTech because too many other factors changed at the same time (Bacher‐Hicks & Goodman, [5]). For examples of research examining the broad impacts of COVID on educational outcomes see, for example, Altindag et al. ([3]), Bird et al. ([10]), Bulman and Fairlie ([14]), and Kofoed et al. ([30]).
In some ways, Russia's educational system resembles the educational systems of other OECD countries. The enrollment rate in primary and secondary education is close to 100 percent. The average class size (21.6 students per teacher in our sample; see Appendix Table A1) is also roughly the same as the OECD average for primary schools of 21 students per teacher (Peña‐López, [45]). In other ways, however, Russia's educational system is closer to that of other middle‐income countries. Its educational expenditures per primary and secondary school student were low at 4,247 US dollars in 2016 (adjusted for purchasing power parity; OECD, 2019). According to OECD (2019), this is less than half the OECD average (9,357 US dollars) and below Chile (5,324 US dollars) and Turkey (4,505 US dollars), but above Mexico (3,062 US dollars). Russia's GDP per capita (10,743 current US dollars in 2017) is just below Costa Rica (11,677 US dollars) and Maldives (11,151 US dollars), and just above Brazil (9,821 US dollars), China (8,827 US dollars), and Mexico (8,910 US dollars) (World Bank Group, 2019). The two regions where the experiment is conducted, Altai Krai (93 schools) and Novosibirsk (250 schools), have GDP per capita below the national average (OECD, 2019).
Unfortunately, the company was unable to provide complete data on CAL usage across the Dosage 1X and Dosage 2X groups (which was a goal for data collection stated in our pre‐analysis plan). Interviews with teachers revealed that they generally complied with instructions on use, which is consistent with bi‐weekly follow‐ups by the provider on usage of the software.
All appendices are available at the end of this article as it appears in JPAM online. Go to the publisher's website and use the search engine to locate the article at http://onlinelibrary.wiley.com
Details of the baseline data collection (and proposed analyses) were described in a pre‐analysis plan written and filed with the American Economic Association registry before endline data were available for analysis (https://www.socialscienceregistry.org/trials). Due to minor technical difficulties in the baseline survey (before randomization), not all 6,253 students took all four tests. Rather, 6,052 students in the baseline took math and vocabulary tests, while 5,838 students took language and reading tests. We deal with missing values for these and other baseline controls by including missing value dummies (as detailed in the pre‐analysis plan).
Because the number of schools in each region was not divisible by 6, we placed nine schools (with the closest mean grade 3 math scores) in the first region in one stratum and 10 schools (with the closest mean grade 3 math scores) in the second region in one stratum.
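The stratified assignment described here can be sketched for the idealized case where the school count is divisible by six. This is an illustrative reconstruction with simulated school data, not the study's randomization code: sort schools by mean grade‐3 math score, form strata of six adjacent schools, and randomize two schools per arm within each stratum.

```python
import numpy as np
import pandas as pd

# Simulated school list (illustrative, not the study's data).
rng = np.random.default_rng(2)
schools = pd.DataFrame({"school_id": range(36),
                        "mean_math": rng.normal(0, 1, 36)})

# Strata of six schools with adjacent mean grade-3 math scores.
schools = schools.sort_values("mean_math").reset_index(drop=True)
schools["stratum"] = schools.index // 6

# Within each stratum, randomly assign two schools to each arm.
arms = ["control"] * 2 + ["dosage_1x"] * 2 + ["dosage_2x"] * 2
for stratum, idx in schools.groupby("stratum").groups.items():
    schools.loc[idx, "arm"] = rng.permutation(arms)

print(schools.groupby(["stratum", "arm"]).size())
```

The odd‐sized strata of nine and ten schools noted in the footnote would be handled analogously, with three schools per arm (and one extra assignment) within those strata.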
Based on a previous longitudinal study in primary schools in Russia using the same test instruments, the estimated R‐squared between the baseline and follow‐up scores is approximately 0.50. Other parameters for the power calculation include: 18 students per class/school, an alpha of 0.05, and power = 0.8.
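A simplified version of this power calculation can be sketched as follows. The sketch deliberately ignores clustering at the school level (which would inflate the minimum detectable effect by a design‐effect factor), and the per‐arm school count is a hypothetical value, not taken from the study; it illustrates how controlling for a baseline score with R² of about 0.50 shrinks the minimum detectable effect by a factor of √(1 − R²).

```python
import numpy as np
from statsmodels.stats.power import TTestIndPower

# Hypothetical sample size: 18 students per class/school times an assumed
# ~114 schools per arm. Clustering is ignored here, so the MDE below is
# an optimistic lower bound.
n_per_arm = 18 * 114
r_squared = 0.50   # baseline-to-endline R-squared from the prior study

analysis = TTestIndPower()
# Solve for the detectable effect size given n, alpha = 0.05, power = 0.8.
mde_raw = analysis.solve_power(nobs1=n_per_arm, alpha=0.05, power=0.8,
                               ratio=1.0, alternative="two-sided")
# Baseline control removes half the outcome variance.
mde_adjusted = mde_raw * np.sqrt(1 - r_squared)
print(f"MDE without baseline control: {mde_raw:.3f} SD")
print(f"MDE with baseline control:    {mde_adjusted:.3f} SD")
```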
The dosages were chosen based on numerous pilot interviews that the provider conducted with teachers outside of the study sample and prior to the experiment. In the experimental intervention, the provider introduced the online educational platform and dosages through separate training webinars with the Dosage 1X and Dosage 2X teachers.
Interviews with teachers revealed that class use was minimal relative to use for homework.
Distributions of total homework time align almost perfectly for the control, Dosage 1X, and Dosage 2X groups.
When asked directly about whether they assigned more homework to their class as a result of the intervention, the vast majority of interviewed teachers said no. It was also clear from pilot interviews that teachers were highly sensitive to assigning additional homework to students because the law sets limits on the total amount of homework time that can be assigned to students (1.5 hours per day in all subjects; SanPiN, 2010).
We unfortunately do not have data about teaching styles and are thus unable to examine whether the interventions changed teaching styles.
Out of the baseline sample of 6,052 students that took the math test in the baseline, 5,552 students (92 percent) took the math test in the endline; an additional 165 students took math in the endline but not in the baseline. Out of the baseline sample of 5,838 students that took the language test in the baseline, 5,205 students (89 percent) took the language test in the endline; an additional 360 students took language in the endline but not in the baseline. The rates of missing data for the math and language analytical samples were 8 percent and 11 percent, respectively. Balance in baseline covariates across pairwise treatment comparisons was maintained among the non‐missing students. Out of 24 tests, only two were statistically significant (different from zero) at the 10 percent level and none were significant at the 5 percent or 1 percent levels (Appendix Table A2), as would be expected by chance.
A standard Cobb‐Douglas production function with equal factor returns, for example, implies concavity because of the curvature in isoquants.
A linear production function in which both inputs have similar returns, for example, implies non‐concavity.
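The contrast drawn in these two footnotes can be made explicit. Writing educational output as a function of CAL time C and traditional homework time T (symbols chosen here for illustration), the Cobb‐Douglas marginal rate of technical substitution falls as C is substituted for T along an isoquant, whereas a linear technology has a constant MRTS:

```latex
Y = A\,C^{\alpha}T^{\beta}
\;\Rightarrow\;
\mathrm{MRTS}_{C,T}
= \frac{\partial Y/\partial C}{\partial Y/\partial T}
= \frac{\alpha}{\beta}\,\frac{T}{C}
\quad\text{(declines as $C$ rises along an isoquant)},

Y = aC + bT
\;\Rightarrow\;
\mathrm{MRTS}_{C,T} = \frac{a}{b}
\quad\text{(constant at every input mix).}
```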
REFERENCES
Abbey, C., Ma, Y., Akhtar, M., Emmers, D., Fairlie, R., Fu, N., Johnstone, H., Loyalka, P., Rozelle, S., Xue, H., & Zhang, X. (2022). EdTech innovations and K‐12 student learning outcomes in China: A systematic review and meta‐analysis [Working paper]. Rural Education Action Program, Stanford Center on China's Economy and Institutions.
Algan, Y., & Fortin, N. M. (2018). Computer gaming and the gender math gap: Cross‐country evidence among teenagers. In S. W. Polachek & K. Tatsiramos (Eds.), Transitions through the labor market (Vol. 46, pp. 183 – 228). Emerald Publishing Limited. https://doi.org/10.1108/S0147‐912120180000046006
Altindag, D. T., Filiz, E. S., & Tekin, E. (2021). Is online education working? [Working Paper No. w29113]. National Bureau of Economic Research. https://doi.org/10.3386/w29113
Angrist, J., & Lavy, V. (2002). New evidence on classroom computers and pupil learning. The Economic Journal, 112 (482), 735 – 765. https://doi.org/10.1111/1468‐0297.00068
Bacher‐Hicks, A., & Goodman, J. (2021). The Covid‐19 pandemic is a lousy natural experiment for studying the effects of online learning: Focus, instead, on measuring the overall effects of the pandemic itself. Education Next, 21 (4), 38 – 43.
Banerjee, A. V., Cole, S., Duflo, E., & Linden, L. (2007). Remedying education: Evidence from two randomized experiments in India. The Quarterly Journal of Economics, 122 (3), 1235 – 1264. https://doi.org/10.1162/qjec.122.3.1235
Barrow, L., Markman, L., & Rouse, C. E. (2009). Technology's edge: The educational benefits of computer‐aided instruction. American Economic Journal: Economic Policy, 1 (1), 52 – 74. https://doi.org/10.1257/pol.1.1.52
Bettinger, E., Fairlie, R. W., Kapuza, A., Kardanova, E., Loyalka, P., & Zakharov, A. (2020). Does EdTech substitute for traditional learning? Experimental estimates of the educational production function [Working Paper No. w26967]. National Bureau of Economic Research. https://doi.org/10.3386/w26967
Beuermann, D. W., Cristia, J., Cueto, S., Malamud, O., & Cruz‐Aguayo, Y. (2015). One laptop per child at home: Short‐term impacts from a randomized experiment in Peru. American Economic Journal: Applied Economics, 7 (2), 53 – 80. https://doi.org/10.1257/app.20130267
Bird, K. A., Castleman, B. L., & Lohner, G. (2022). Negative impacts from the shift to online learning during the COVID‐19 crisis: Evidence from a statewide community college system. AERA Open, 8, 23328584221081220. https://doi.org/10.1177/23328584221081220
Blimpo, M. P., Gajigo, O., Owusu, S., Tomita, R., & Xu, Y. (2020). Technology in the classroom and learning in secondary schools [Working Paper]. World Bank. https://openknowledge.worldbank.org/handle/10986/33983
Böhmer, B. (2014). Testing numeric: Evidence from a randomized controlled trial of a computer based mathematics intervention in Cape Town high schools [Unpublished master's thesis]. University of Cape Town.
Bulman, G., & Fairlie, R. W. (2016). Technology and education: Computers, software, and the internet. In Handbook of the economics of education (Vol. 5, pp. 239 – 280). Elsevier. https://doi.org/10.1016/B978‐0‐444‐63459‐7.00005‐1
Bulman, G., & Fairlie, R. W. (2021). The impact of COVID‐19 on community college enrollment and student success: Evidence from California administrative data [Working Paper w28715]. National Bureau of Economic Research. https://doi.org/10.3386/w28715
Campuzano, L., Dynarski, M., Agodini, R., & Rall, K. (2009). Effectiveness of reading and mathematics software products: Findings from two student cohorts. National Center for Education Evaluation and Regional Assistance.
Carrillo, P. E., Onofa, M., & Ponce, J. (2011). Information technology and student achievement: Evidence from a randomized experiment in Ecuador. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.1818756
Cristia, J., Ibarrarán, P., Cueto, S., Santiago, A., & Severín, E. (2017). Technology and child development: Evidence from the one laptop per child program. American Economic Journal: Applied Economics, 9 (3), 295 – 320. https://doi.org/10.1257/app.20150385
De Melo, G., Machado, A., & Miranda, A. (2014). The impact of a one laptop per child program on learning: Evidence from Uruguay [Discussion Paper No. 8489]. IZA Institute of Labor Economics. https://doi.org/10.2139/ssrn.2505351
Dynarski, M., Agodini, R., Heaviside, S., Novak, T., Carey, N., Campuzano, L., Means, B., Murphy, R., Penuel, W., Javitz, H., Emergy, D., & Sussex, W. (2007). Effectiveness of reading and mathematics software products: Findings from the first student cohort [Report to Congress]. National Center for Education Evaluation and Regional Assistance. https://files.eric.ed.gov/fulltext/ED496015.pdf
Escueta, M., Quan, V., Nickow, A. J., & Oreopoulos, P. (2017). Education technology: An evidence‐based review [Working Paper 23744]. National Bureau of Economic Research. https://doi.org/10.3386/w23744
Fairlie, R. W. (2015). Do boys and girls use computers differently, and does it contribute to why boys do worse in school than girls? The BE Journal of Economic Analysis & Policy, 16 (1), 59 – 96. https://doi.org/10.1515/bejeap‐2015‐0094
Fairlie, R. W., & London, R. A. (2012). The effects of home computers on educational outcomes: Evidence from a field experiment with community college students. The Economic Journal, 122 (561), 727 – 753. https://doi.org/10.1111/j.1468‐0297.2011.02484.x
Fairlie, R. W., & Robinson, J. (2013). Experimental evidence on the effects of home computers on academic achievement among schoolchildren. American Economic Journal: Applied Economics, 5 (3), 211 – 40. https://doi.org/10.1257/app.5.3.211
Falck, O., Mang, C., & Woessmann, L. (2018). Virtually no effect? Different uses of classroom computers and their effect on student achievement. Oxford Bulletin of Economics and Statistics, 80 (1), 1 – 38. https://doi.org/10.1111/obes.12192
Fiorini, M. (2010). The effect of home computer use on children's cognitive and non‐cognitive skills. Economics of Education Review, 29 (1), 55 – 72. https://doi.org/10.1016/j.econedurev.2009.06.006
Fuchs, T., & Woessmann, L. (2004). Computers and student learning: Bivariate and multivariate evidence on the availability and use of computers at home and at school [Working Paper No. 1321]. CESIFO. http://hdl.handle.net/10419/18686
Glewwe, P., Hanushek, E. A., Humpage, S., & Ravina, R. (2013). School resources and educational outcomes in developing countries: A review of the literature from 1990 to 2010. Education Policy in Developing Countries, 4 (14,972), 13 – 64. https://doi.org/10.4324/9780429202520
Hull, M., & Duch, K. (2019). One‐to‐one technology and student outcomes: Evidence from Mooresville's digital conversion initiative. Educational Evaluation and Policy Analysis, 41 (1), 79 – 97. https://doi.org/10.3102/0162373718799969
Ito, H., Kasai, K., & Nakamuro, M. (2019). Does computer‐aided instruction improve children's cognitive and non‐cognitive skills?: Evidence from Cambodia. RIETI. https://EconPapers.repec.org/RePEc:eti:dpaper:19040
Kofoed, M., Gebhart, L., Gilmore, D., & Moschitto, R. (2021). Zooming to class?: Experimental evidence on college students' online learning during COVID‐19 [Discussion Paper 14356]. IZA Institute of Labor Economics.
Lai, F., Luo, R., Zhang, L., Huang, X., & Rozelle, S. (2015). Does computer‐assisted learning improve learning outcomes? Evidence from a randomized experiment in migrant schools in Beijing. Economics of Education Review, 47, 34 – 48. https://doi.org/10.1016/j.econedurev.2015.03.005
Lai, F., Zhang, L., Hu, X., Qu, Q., Shi, Y., Qiao, Y., Boswell, M., & Rozelle, S. (2013). Computer assisted learning as extracurricular tutor? Evidence from a randomised experiment in rural boarding schools in Shaanxi. Journal of Development Effectiveness, 5 (2), 208 – 231. https://doi.org/10.1080/19439342.2013.780089
Linden, L. L. (2008). Complement or substitute ? : The effect of technology on student achievement in India [Working paper]. Columbia University: InfoDev.
Livingston, S. (2016). Classroom technologies narrow education gap in developing countries. Brookings. https://www.brookings.edu/blog/techtank/2016/08/23/classroom-technologies-narrow-education-gap-in-developing-countries
Ma, Y., Fairlie, R. W., Loyalka, P., & Rozelle, S. (2020). Isolating the "tech" from EdTech: experimental evidence on computer assisted learning in China [Working Paper w26953]. National Bureau of Economic Research. https://doi.org/10.3386/w26953
Machin, S., McNally, S., & Silva, O. (2007). New technology in schools: Is there a payoff? The Economic Journal, 117 (522), 1145 – 1167. https://doi.org/10.1111/j.1468‐0297.2007.02070.x
Malamud, O., Cueto, S., Cristia, J., & Beuermann, D. W. (2019). Do children benefit from internet access? Experimental evidence from Peru. Journal of Development Economics, 138, 41 – 45. https://doi.org/10.1016/j.jdeveco.2018.11.005
Malamud, O., & Pop‐Eleches, C. (2011). Home computer use and the development of human capital. The Quarterly Journal of Economics, 126 (2), 987 – 1027. https://doi.org/10.1093/qje/qjr008
McEwan, P. J. (2015). Improving learning in primary schools of developing countries: A meta‐analysis of randomized experiments. Review of Educational Research, 85 (3), 353 – 394. https://doi.org/10.3102/0034654314553127
Ministère de l'Éducation Nationale et de la Jeunesse. (2019). Encouraging student success: homework done. http://www.education.gouv.fr/cid131710/encouraging‐student‐success‐homework‐done.html
Ministry of Education of the People's Republic of China. (2018). Notice of the Ministry of Education and nine other departments on the issuance of burden reduction measures for primary and secondary school students. http://www.moe.gov.cn/srcsite/A06/s3321/201812/t20181229%5f365360.html
Mo, D., Zhang, L., Luo, R., Qu, Q., Huang, W., Wang, J., Qiao, Y., Boswell, M., & Rozelle, S. (2014). Integrating computer‐assisted learning into a regular curriculum: Evidence from a randomised experiment in rural schools in Shaanxi. Journal of Development Effectiveness, 6 (3), 300 – 323. https://doi.org/10.1080/19439342.2014.911770
Muralidharan, K., Singh, A., & Ganimian, A. J. (2019). Disrupting education? Experimental evidence on technology‐aided instruction in India. American Economic Review, 109 (4), 1426 – 60. https://doi.org/10.1257/aer.20171112
Naik, G., Chitre, C., Bhalla, M., & Rajan, J. (2020). Impact of use of technology on student learning outcomes: Evidence from a large‐scale experiment in India. World Development, 127, 104736. https://doi.org/10.1016/j.worlddev.2019.104736
Peña‐López, I. (2019). PISA 2018 results (Volume 1): What students know and can do. OECD.
OECD. (2019). Education at a glance 2019: OECD Indicators. https://doi.org/10.1787/f8d7880d‐en
Rideout, V. J., Foehr, U. G., & Roberts, D. F. (2010). Generation M2: Media in the lives of 8‐to 18‐year‐olds. Henry J. Kaiser Family Foundation.
Rockoff, J. E. (2015). Evaluation report on the School of One i3 expansion [Unpublished manuscript]. Columbia University.
Rodriguez‐Segura, D. (2022). EdTech in developing countries: A review of the evidence. The World Bank Research Observer, 37 (2), 171 – 203. https://doi.org/10.1093/wbro/lkab011
Rouse, C. E., & Krueger, A. B. (2004). Putting computerized instruction to the test: A randomized evaluation of a "scientifically based" reading program. Economics of Education Review, 23 (4), 323 – 338. https://doi.org/10.1016/j.econedurev.2003.10.005
SanPiN 2.4.2.2821‐10. Sanitary and epidemiological requirements for conditions and organization of educational process in schools. (2010). Decree of the chief state sanitary doctor of the Russian Federation.
Schling, M., & Winters, P. (2018). Computer‐assisted instruction for child development: Evidence from an educational programme in rural Zambia. The Journal of Development Studies, 54 (7), 1121 – 1136. https://doi.org/10.1080/00220388.2017.1366454
Schmitt, J., & Wadsworth, J. (2006). Is there an impact of household computer ownership on children's educational attainment in Britain? Economics of Education Review, 25 (6), 659 – 673. https://doi.org/10.1016/j.econedurev.2005.06.001
Hobbs, T. D. (2018). Down with homework, say U.S. school districts. The Wall Street Journal. https://www.wsj.com/articles/no‐homework‐its‐the‐new‐thing‐in‐u‐s‐schools‐11544610600
Taylor, E. S. (2018). New technology and teacher productivity [Working paper]. Harvard University.
The Economist. (2018, November 17). In poor countries technology can make big improvements to education. https://www.economist.com/international/2018/11/17/in‐poor‐countries‐technology‐can‐make‐big‐improvements‐to‐education.
World Bank Group. (2018). World Bank education overview: New technologies. http://documents.worldbank.org/curated/en/731401541081357776/World‐Bank‐Education‐Overview‐New‐Technologies
World Bank Group. (2019). Open Data: World Development Indicators. https://data.worldbank.org/
Yanguas, M. L. (2020). Technology and educational choices: Evidence from a one‐laptop‐per‐child program. Economics of Education Review, 76, 101984. https://doi.org/10.1016/j.econedurev.2020.101984
By Eric Bettinger; Robert Fairlie; Anastasia Kapuza; Elena Kardanova; Prashant Loyalka and Andrey Zakharov
ERIC BETTINGER is a Professor of Education at Stanford University and a Research Associate at the National Bureau of Economic Research (NBER), CERAS Room 522, 520 Galvez Mall, Stanford, CA 94305 (email: ebettinger@stanford.edu).
ROBERT FAIRLIE is a Professor of Economics at the University of California at Santa Cruz and a Research Associate at NBER, Engineering 2 Building, Santa Cruz, CA 95064 (email: rfairlie@ucsc.edu).
ANASTASIYA KAPUZA is a Research Fellow at the Institute of Education, National Research University Higher School of Economics, 16 Potapovsky Pereulok, Building 10, Moscow, Russia (email: nas669@yandex.ru).
ELENA KARDANOVA is an Associate Professor at the Institute of Education, National Research University Higher School of Economics, 16 Potapovsky Pereulok, Moscow, Russia (email: e_kardanova@mail.ru).
PRASHANT LOYALKA is an Associate Professor at Stanford University and a Senior Fellow at the Freeman Spogli Institute for International Studies at Stanford University, E411 Encina Hall, 616 Serra St., Stanford, CA 95305 (email: loyalka@stanford.edu).
ANDREY ZAKHAROV is an Associate Professor at the Institute of Education, National Research University Higher School of Economics, 16 Potapovsky Pereulok, Moscow, Russia (email: ab.zakharov@gmail.com).