Volume 52, Issue 5 p. 1935-1964
REVIEW
Open Access

The effectiveness of technology-supported personalised learning in low- and middle-income countries: A meta-analysis

Louis Major (Corresponding Author)
Faculty of Education, University of Cambridge, Cambridge, UK

Correspondence: Louis Major, Faculty of Education, University of Cambridge, 184 Hills Road, Cambridge, CB2 8PQ, UK. Email: [email protected]

Gill A. Francis
Department of Education, University of York, York, UK

Maria Tsapali
Faculty of Education, University of Cambridge, Cambridge, UK

First published: 24 May 2021

Abstract

Digital technology offers the potential to address educational challenges in resource-poor settings. This meta-analysis examines the impact of students' use of technology that personalises and adapts to learning level in low- and middle-income countries. Following a systematic search for research between 2007 and 2020, 16 randomised controlled trials were identified in five countries. Studies involved 53,029 learners aged 6–15 years. Coding examined learning domain (mathematics and literacy); personalisation level and delivery; technology use; and intervention duration and intensity. Overall, technology-supported personalised learning was found to have a statistically significant—if moderate—positive effect size of 0.18 on learning (p = 0.001). Meta-regression reveals how more personalised approaches which adapt or adjust to learners' level led to significantly greater impact (an effect size of 0.35) than those only linking to learners' interests or providing personalised feedback, support, and/or assessment. Avenues for future research include investigating cost implications, optimum programme length, and teachers' role in making personalised learning with technology effective.

Practitioner notes

What is already known about this topic?

  • Promoting personalised learning is an established aim of educators.
  • Using technology to support personalised learning in low- and middle-income countries (LMICs) could play an important role in ensuring more inclusive and equitable access to education, particularly in the aftermath of COVID-19.
  • There is currently no rigorous overview of evidence on the effectiveness of using technology to enable personalised learning in LMICs.

What this paper adds?

  • The meta-analysis is the first to evaluate the effectiveness of technology-supported personalised learning in improving learning outcomes for school-aged children in LMICs.
  • Technology-supported personalised learning has a statistically significant, positive effect on learning outcomes.
  • Interventions are similarly effective for mathematics and literacy, and regardless of whether teachers also have an active role in the personalisation.
  • Personalised approaches that adapt or adjust to the learner led to significantly greater impact, although whether these warrant the additional investment likely necessary for implementation at scale needs to be investigated.
  • Personalised technology implementation of moderate duration and intensity had similar positive effects to that of stronger duration and intensity, although further research is needed to confirm this.

Implications for practice and/or policy:

  • The inclusion of more adaptive personalisation features in technology-assisted learning environments can lead to greater learning gains.
  • Personalised technology approaches featuring moderate personalisation may also yield learning rewards.
  • While it is not known whether personalised technology can be scaled in a cost-effective and contextually appropriate way, there are indications that this is possible.
  • The appropriateness of teachers integrating personalised approaches in their practice should be explored given ‘supplementary’ uses of personalised technology (ie, additional sessions involving technology outside of regular instruction) are common.

INTRODUCTION

Personalising education by adapting learning opportunities and instruction to individual capabilities and dispositions is an established aim of educators (Natriello, 2017). Everyday practice in schools around the world typically involves some personalisation. For example, when walking around a classroom, teachers usually personalise their teaching by giving extra support to those who are struggling, while challenging further those who are making good progress (Holmes et al., 2018). The idea of personalised learning is, therefore, not new. There are, however, considerable variations in how personalisation happens in practice.

Antecedents of personalised learning can be seen in the progressive education philosophy of John Dewey, William Kilpatrick and others in the early 20th century (Redding, 2016). Research on the role of technology in enabling personalised learning can similarly be traced back many years (Holmes et al., 2018). More recently, the adaptive and personalisable affordances of educational technology (‘EdTech’) have been suggested as offering the potential to adjust the learning experience based on age, attainment level, prior knowledge and personal relevance (FitzGerald, Jones, et al., 2018). Personalised technology may, for instance, modify the pace of learning in a way that empowers learners to choose how and when they learn (Ogan et al., 2012). It can also facilitate different kinds of content (to reflect learners' preferences and cultural context; Kucirkova, 2018) and automatically capture and respond to students' learning patterns (du Boulay et al., 2018).

In low- and middle-income countries (LMICs), EdTech has been recognised as offering a promising means of addressing educational challenges (Bianchi et al., 2020). In particular, personalised and adaptive learning systems offer the potential to support self-led learning as well as other forms of learning (making these more accessible, impactful and engaging).1 Using technology to support personalised learning has been proposed as a way to increase learner access to education both in and out of school, enable teaching at the ‘right’ (ie, the learner's current) level and reduce the negative effects of high teacher–learner ratios (Kishore & Shah, 2019; Zualkernan, 2016). Such affordances could play an important role in tackling the greatest disruption to education in our time: the response to COVID-19, which saw 1.6 billion learners lose access to their classrooms and which continues to cause disruption (UNESCO, 2020).

Even before the pandemic, personalised learning was enjoying a resurgence in popularity (FitzGerald, Jones, et al., 2018). As the global education community aims to rebuild, personalised learning systems, adaptive curricula and data-driven instruction are candidates to form a key part of the future educational landscape (Selwyn & Jandrić, 2020). At a time when governments and other stakeholders have turned to technology to support the immediate education response to COVID-19 as well as long-term system recovery (EdTech Hub, 2020), robust evaluations of existing evidence are needed to inform decision making about the potential of using technology to support personalised learning. This is particularly the case in LMICs where such technology may help to prevent marginalised learners from falling further behind (Azevedo et al., 2020), for instance, through enabling remediation that adapts instruction to children's learning levels on a continued basis (Kaffenberger, 2020).

This work builds on a Rapid Evidence Review (RER) that established the potential of using personalised technology to improve educational outcomes for children in LMICs (Major & Francis, 2020). Importantly, the RER revealed how a growing body of randomised controlled trials (RCTs) explored personalised learning in the context of research on computer-assisted learning and computer-aided instruction. Undertaking a meta-analysis of such research allows a rigorous and accurate synthesis of the findings of existing studies, thus providing more information about the current state-of-the-art in this area (Vogel et al., 2006). While previous systematic reviews have explored developments in technology-enhanced personalised learning in mainly high-income contexts (eg, Xie et al., 2019; Zhang et al., 2020), none have investigated the effectiveness of technology-supported personalised learning in LMICs through meta-analysis. This study is therefore the first to ask: What is the effectiveness of technology-supported personalised learning in improving learning outcomes (mathematics and literacy) for school-aged children in LMICs? In addition to contributing to improving the precision of the estimated effects of technology-enabled personalised learning (Haidich, 2010), meta-analysis can answer research questions not posed by individual studies (as considered in Section 4) and inform the generation of new hypotheses (as discussed in Sections 5 and 6). Findings will inform education decision makers and researchers about the potential effectiveness of technology-supported personalised learning, both in response to COVID-19 and beyond.

BACKGROUND

Personalised learning

As with many concepts in education, there is no universal definition of personalised learning. Cuban (2018) describes personalised learning as ‘like a chameleon it appears in different forms’, suggesting these forms can be conceptualised as a ‘continuum’ of approaches: from teacher-led to student-centred classrooms, with ‘hybrid’ approaches in between. Robinson and Sebba (2010) similarly suggest personalised learning should not be equated with ‘individual’ or ‘individualised’ learning (although it may include it): that is to say students can experience personalised learning while working individually, in small groups or in the whole class.

Although definitions of personalised learning vary, there is broad agreement that it is learner-centred and flexible, and responsive to individual learners' needs (Gro, 2017). Advocates argue that students—including those who are marginalised—can achieve higher levels of learning if they receive personalised instruction tailored to their unique needs and strengths (Jones & Casey, 2015; Zhang et al., 2020). This involves more than an individual engaging with content; it may feature addressing social needs and developing collective understanding through productive interactions with others (Holmes et al., 2018). The promise of personalisation thus lies in its ability to address a ‘one-size-fits-all’ approach to education that may disadvantage learners (FitzGerald, Jones, et al., 2018).

Research suggests that personalisation can contribute to improving learning outcomes through enhancing motivation and attitudes (Jones et al., 2013) and supporting the development of metacognitive skills and self-reflection (Arroyo et al., 2014; Kim, Olfman, et al., 2014). Higher levels of personalisation have been associated with better academic achievement, improved school culture and greater student engagement (McClure et al., 2010). Compared with their peers, students who started out behind have also been shown to catch up to perform at or above national averages in schools that implement personalised learning (Pane et al., 2015). However, while the premise of personalised learning is to provide more equitable outcomes for all learners, associated research is in its infancy and questions remain about how to scale effectively (Zhang et al., 2020).

Defining technology-supported personalised learning

Digital technology has been argued to offer a potentially impactful way of supporting personalised learning. For instance, technology can facilitate learning driven by student interests, optimise learning based on learner needs (eg, through providing differentiated feedback) and adaptively adjust learning (eg, the pace of instruction) (Office of Educational Technology, 2017). Furthermore, it may enable educators to take a more personalised approach in their teaching and inform data-driven decision making (Maseleno et al., 2018; Pane et al., 2015). This includes promoting socially interactive learning through game-like activities (Hirsh-Pasek et al., 2015; Pardo et al., 2019).

In the context of research in LMICs, terms including computer-assisted learning, computer-aided learning, computer-aided instruction and intelligent/cognitive tutoring systems have been used interchangeably to describe interventions that may personalise learning (Major & Francis, 2020). Bulger's (2016) distinction between ‘responsive’ and ‘adaptive’ personalised learning systems is, therefore, helpful when considering technology-enabled personalised learning in LMICs. Responsive systems are those that may enable learners to personalise the learning interface, choose their own tailored path through instructional material or provide some degree of personalised support or feedback. Examples are computerised game-like drills or exercises that provide learners with limited personalised feedback indicating whether their responses are correct or incorrect. Adaptive systems, on the other hand, actively scaffold learning by adapting content delivery depending on the user behaviour or performance. Such interventions may adaptively provide content that matches the level of the learner or modify the pace of instruction. Examples include computer-assisted software that adjusts the delivery of exercises to the level of the learner and intelligent tutoring systems that proactively guide learning through using high-tech data-driven features (eg, facial recognition software)2 (Bulger, 2016).

In this paper, we examine the role of technology-supported personalised learning in improving academic outcomes for school-aged learners in LMICs. Influenced by existing research (FitzGerald, Jones, et al., 2018; FitzGerald, Kucirkova, et al., 2018), we define this broadly as ‘the ways in which technology enables or supports learning based upon particular characteristics of relevance or importance to learners’. This definition encompasses both responsive and adaptive approaches to technology-enabled personalisation. Details of inductive analyses to identify the detailed personalisation affordances of interventions included in the meta-analysis are outlined in Section 3.4 and Supporting Information File 1.

Using digital technology to support personalised learning in low- and middle-income countries

Research has consistently found that digital technology is associated with learning gains for students in high-income countries, although there is variation in impact (Education Endowment Foundation, 2019). In LMICs, less is known about the effectiveness of using digital technology educationally. While there is a consensus that technology can contribute to (the facilitation of) learning, many initiatives are designed without taking existing evidence, or the local context, into consideration (Tauson & Stannard, 2018).

A seminal study by Banerjee et al. (2007)3 reported a randomised evaluation of a computer-assisted programme involving over 11,000 children. One feature was that content and tasks were personalised to each child's current level of achievement, thereby enabling them to be individually and appropriately stimulated (Banerjee et al., 2007). In addition to allowing for variation in academic content presented, this enabled different entry points and differentiated instruction (including preserving the age-cohort-based social grouping of students; Muralidharan et al., 2019). Such adaptation to learners' needs to teach at the ‘right’ (ie, the learner's current) level has been an increasing focus of research in LMICs over the past decade, both with (Rajendran & Muralidharan, 2013) and without technology (Innovations for Poverty Action, 2015; Sawada et al., 2020).

Provided complex issues relating to implementation and sustainability can be overcome (see Section 5), technology-enhanced approaches to personalised learning may offer a solution to challenges that have faced other EdTech initiatives in LMICs (Zualkernan, 2016). Complementary to enabling ‘teaching at the right level’, it has been argued that this could include helping to address teacher shortages (Ito et al., 2019); closing educational gaps through adaptive remedial instruction (Ogan et al., 2012); and performing routine tasks to free up teachers to spend more time on aspects of education where they have comparative advantages over technology (Perera & Aboal, 2017). Many of these potential benefits resonate with the UN's Sustainable Development Goal 4 to ensure inclusive and equitable quality education for all.4 However, no meta-analysis to-date has investigated the effectiveness of technology-supported personalised learning in improving learning outcomes for school-aged children in LMICs.

Related reviews

While this meta-analysis is the first to consider the effectiveness of technology-supported personalised learning in LMICs, other reviews have explored the role of educational technology more broadly. Rodriguez-Segura (2020) summarised 81 (quasi-)experimental studies undertaken in LMICs. The author found that interventions that improve the quality of instruction—or are centred around student-led learning—are the most effective for raising learning outcomes. Expanding access to technology alone was also identified to be insufficient for improving learning (although it may be a necessary first step).

Escueta et al., (2017) similarly synthesised experimental evidence, reporting that computer-assisted learning (CAL) may be more effective in LMICs given tight capacity constraints. They concluded that evidence on using CAL in LMICs is positive, suggesting that the way this adapts to learner needs may play a central role in addressing the unevenness of levels that challenges many schools. Infrastructure limitations and challenges that can impede implementation are noted.

Other reviews on personalised learning more broadly include work by Xie et al. (2019) who analysed global developments in technology-enhanced personalised learning between 2007 and 2017. Findings included that research on personalised learning typically involves traditional computers with few studies conducted on wearable devices, smartphones and tablets. Also with a focus on technology-enhanced learning, the synthesis by FitzGerald, Jones, et al., (2018) considered the representation of personalisation in the literature since 2000. Finally, a review of personalised learning by Zhang et al. (2020) found that a majority of 71 studies reported personalised learning—especially that supported by technology—to be associated with positive findings in terms of academic outcomes, engagement, attitude towards learning and meta-cognitive skills.

Research questions

While research into educational technology in developed countries may be more advanced, Kaye and Ehren (2021) argue that such work must be considered separately from that undertaken in LMICs. This is because the deployment of educational technology in LMICs faces a unique and different set of context-related infrastructural and other challenges, rendering transfer of messages from research in high-income countries often inappropriate. Recognising this issue, the present meta-analysis complements and extends aforementioned research by considering the following research questions:

  1. Does technology-supported personalised learning improve learning outcomes for school-aged children more effectively than teachers' standard educational practice (without technology) in low- and middle-income countries?
  2. To what extent do features of technology-supported personalised learning contribute to the effectiveness of interventions? Specifically, do learning outcomes vary by:
    • learning domain (mathematics and literacy),
    • personalisation level,
    • personalisation delivery type (technology only or teacher and technology) and
    • intervention intensity and duration?

METHODOLOGY

Undertaking a meta-analysis offers a transparent, objective and replicable means of investigating a field and identifying new research opportunities. Their ability to synthesise evidence on the effects of interventions means meta-analyses are well suited to informing evidence-based policy and practice (Borenstein et al., 2009). Beyond the academic community, meta-analytic techniques have also been influential in enabling rigorous recommendations to be made to other educational stakeholders, particularly with regard to ‘what works’ in education (Ahn et al., 2012; Slavin, 2008).

Search process

The RER (Major & Francis, 2020) can be viewed as the first stage in the study search. This involved developing and refining search terms (see Appendix A) and undertaking automated searches during May 2020 using Google Scholar and the Searchable Publication Database (SPuD: a database of 3+ million records indexing ProQuest, Web of Science, Scopus, the Directory of Open Access Journals and the Education Resources Information Center up until 2019; Adam & Haßler, 2020). ‘Grey literature’ was accepted if relevant. Independent double screening of titles and abstracts was undertaken by authors LM and GF with any disagreements discussed. Importantly, the RER identified the potential for undertaking a meta-analysis as it revealed how 12 experiments with quantified outcomes explored aspects of personalised learning in the context of computer-assisted/-aided learning. It also informed the development of a more specific meta-analysis protocol outlining detailed inclusion criteria, additional study search and selection processes, critical appraisal procedures and data coding/analysis methods.

Having identified potentially relevant studies and established the feasibility of undertaking a meta-analysis (exploring impact on mathematics and literacy outcomes specifically), additional automated searches of Scopus, the Education Resources Information Center and Web of Science were undertaken in July–August 2020 to cover any new literature published in 2019–August 2020. The search terms in Appendix A were again applied and grey literature was accepted. Studies identified during the RER were also reappraised ensuring that all data assessed for the meta-analysis followed a common screening process. After title and abstract screening, studies were read in full (by both LM and GF) and inclusion criteria were applied (Appendix B). After full-text screening, forward and backward citation snowballing was carried out (by GF). This involved examining the reference lists of included studies. Authors of included studies were also contacted for their recommendations of research to include. To verify the identification of all relevant studies, the included study lists of systematic reviews reported in Section 2.4 were compared with the search results to determine if any studies were missing.

Eligibility criteria

The full eligibility criteria for inclusion in the meta-analysis are outlined in Appendix B. Briefly, for inclusion, studies must be published between 2007 and 2020; involve learners aged 5–18 years in LMICs; feature a technology-supported personalised learning intervention (that enables or supports learning based upon particular characteristics of relevance or importance to learners); feature comparison with a control group in a RCT; consider academic performance (mathematics or literacy) as a learning outcome. Details of studies excluded after full-text screening are available in Supporting Information File 1.

Research critical appraisal

Studies were assessed using a framework aligned with the Building Evidence in Education (2015) guidance on assessing research. This features six categories (see Supporting Information): (a) conceptual framing; (b) contextual detail; (c) research design; (d) validity, reliability and limitations; (e) cultural sensitivity and ethics; and (f) interpretation and conclusions. With a possible aggregate score of 21, a rating of low (1 pt), medium (2 pts) and high (3 pts) is awarded for each category (with the exception of Category 3—research design—which integrates the Mixed Methods Appraisal Tool to assess RCT designs and is double weighted out of 6 pts; Hong, Fàbregues, et al., 2018). The assessment was led by MT. To test the validity of the critical appraisal procedure, a second rater (LM) randomly appraised six included studies according to the same criteria.
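
To illustrate how the category weighting produces the 21-point aggregate, the minimal sketch below scores a set of hypothetical ratings; the category names and example values are assumptions for demonstration only, not data from the appraisal itself.

```python
# Minimal sketch of the critical appraisal scoring described above.
# Five categories are rated low (1), medium (2) or high (3); research design
# is double weighted (out of 6), giving a possible aggregate score of 21.
# Category names and the example ratings are hypothetical.

RATING_POINTS = {"low": 1, "medium": 2, "high": 3}

def appraisal_score(ratings):
    """Aggregate a study's quality score from its six category ratings."""
    total = 0
    for category, rating in ratings.items():
        points = RATING_POINTS[rating]
        if category == "research_design":  # double-weighted category
            points *= 2
        total += points
    return total

example = {
    "conceptual_framing": "medium",
    "contextual_detail": "medium",
    "research_design": "high",
    "validity_reliability_limitations": "medium",
    "cultural_sensitivity_ethics": "medium",
    "interpretation_conclusions": "medium",
}
print(appraisal_score(example))  # 16 out of a possible 21
```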

Determining personalisation affordances

To demonstrate the valid inclusion of studies following the study search, inductive analyses were undertaken (led by MT) to identify and thematically categorise the detailed personalisation affordances of reported interventions. Performed using NVivo (2020), this involved five steps:

  1. Extracting verbatim text describing the personalisation affordances of interventions, before entering this into NVivo.
  2. Performing initial inductive coding to examine personalisation affordances, noting potential descriptive themes.
  3. Iteratively revisiting extracts searching for further candidate themes.
  4. Refining and merging themes.
  5. Re-coding extracted data if appropriate.

Following collaborative review and discussion amongst the research team, three final personalisation themes were identified: (a) engaging learners through matching their interests and/or experience; (b) providing personalised feedback, support and/or assessment; and (c) adapting or adjusting to learners' level (eg, through differentiated pace, learning objectives and content or tools). Returning to Bulger's (2016) typology discussed in Section 2.2, Categories (a) and (b) can be considered to represent ‘responsive’ personalised learning systems and Category (c) those ‘adaptive’.

Detailed rationales for the inclusion of each study in the meta-analysis (in addition to examples of extracted data and codes established) are available in Supporting Information File 1. As a further validation measure, authors of included studies were contacted to validate this coding of personalisation affordances and to provide any other information about the personalisation features of the technology used during their study (see Section 4.2).

Study coding and analysis

Coding for the meta-analysis initially involved mapping study characteristics including country/region; technology type and origin; learning domain; learner stage/age; experimental design and comparators; population characteristics; and sample size. At a second stage, moderator variables (variables predicting the overall effect size and selected based on existing research and the findings of the RER) were coded as follows:

  • Academic outcomes. Mathematics and literacy5 outcomes assessed through written forms (traditional or digital).
  • Personalisation level. Following the process outlined in Section 3.4, interventions were coded according to whether they (a) engage learners through matching their interests and/or experience (eg, to facilitate student engagement); (b) provide personalised feedback, support and/or assessment (eg, immediate task feedback and/or continuous or final assessment); and (c) adapt or adjust to learners' level (eg, delivering content and activities adapted to students' difficulty level and/or learning pace). For each of these, a code of 0 (no) or 1 (yes) was assigned. If a study was coded as adapting or adjusting to learners' level ([c]), it was coded as featuring a ‘HIGH’ level of personalisation, as this factor represents a key distinction between ‘responsive’ and ‘adaptive’ personalised learning systems (Bulger, 2016). Otherwise studies were coded as ‘MEDIUM’.
  • Personalisation delivery type. This variable has two aspects referring to ‘who’ delivered the personalisation: (a) technology only or (b) teacher and technology. In the former, the role of the teacher or supervisor was limited to providing technical support when supervising the implementation of a programme. In the latter, the teacher had an active role by choosing the content or activities from possible options provided by the software to meet the learning goals, and/or by providing academic support and feedback.
  • Technology use. This variable identifies whether interventions were implemented in a supplementary, integrative or substitute way. Supplementary approaches offer students the opportunity to practice instructional content outside regular classroom instruction (eg, through additional remedial support). Integrative approaches utilise technology during regular instruction and the teacher has an active role. Substitute approaches use technology as a replacement of the regular classroom instruction (instruction delivered only by technology).
  • Intervention intensity and duration. To code for intervention intensity, Cheung and Slavin's (2012) intensity criteria were followed, using a cut-off of 75 min per week. For the combined intensity × duration variable, a cut-off of 4.5 months was used for duration, as this typically represents half of a school academic year. Interventions were coded as ‘STRONG’ when delivered for more than 4.5 months with an intensity of greater than 75 min a week. Otherwise they were categorised as ‘MODERATE’.

Studies were coded by one author (MT) with other authors independently reviewing data extracted. Codes were assigned based on what was explicitly stated in the text. Study authors were invited to feedback on coding undertaken.
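
As an illustration of the coding rules above, the following minimal sketch (with hypothetical field names and an invented example study) shows how the personalisation level and intensity × duration categories could be derived:

```python
# Illustrative sketch of the moderator coding rules (field names hypothetical).
from dataclasses import dataclass

@dataclass
class StudyCoding:
    matches_interests: bool       # personalisation feature (a)
    personalised_feedback: bool   # personalisation feature (b)
    adapts_to_level: bool         # personalisation feature (c)
    minutes_per_week: float
    duration_months: float

def personalisation_level(study):
    # 'HIGH' is reserved for adaptive systems that adjust to the learner's level.
    return "HIGH" if study.adapts_to_level else "MEDIUM"

def intensity_duration(study):
    # 'STRONG' = delivered for more than 4.5 months at more than 75 min per week.
    if study.duration_months > 4.5 and study.minutes_per_week > 75:
        return "STRONG"
    return "MODERATE"

study = StudyCoding(True, True, False, minutes_per_week=80, duration_months=3)
print(personalisation_level(study), intensity_duration(study))  # MEDIUM MODERATE
```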

Effect size calculations and statistical analysis

The overall effects of interventions are determined from estimates of the standardised mean difference, or effect size, for each study. Where studies report treatment effects for unadjusted and adjusted ordinary least squares regressions and account for baseline outcome measures as covariates, the effect size estimates extracted were the beta coefficients and standard errors reported in data tables. According to Higgins et al. (2020), these give the most precise and least biased estimates of intervention effects. For other studies, standardised mean differences were calculated from post-intervention scores (means and standard deviations) using Lipsey and Wilson's (2001) online Practical Meta-analysis Calculator. Higgins (2020) notes that different standardised effect size estimates can be combined in one meta-analytic calculation.
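
As a concrete illustration of the second approach, the sketch below computes a standardised mean difference and its approximate standard error from post-intervention means and standard deviations, in the style of the standard Lipsey and Wilson formulas; the input values are hypothetical.

```python
import math

def standardised_mean_difference(m_t, sd_t, n_t, m_c, sd_c, n_c):
    """Standardised mean difference (Cohen's d style) from post-intervention
    means and SDs, with its approximate standard error."""
    sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (m_t - m_c) / sd_pooled
    se = math.sqrt((n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c)))
    return d, se

# Hypothetical post-test scores for a treatment and a control group.
d, se = standardised_mean_difference(52.0, 10.0, 300, 50.0, 10.5, 310)
print(round(d, 3), round(se, 3))
```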

Following Borenstein (2009), where studies report multiple effect sizes for different groups (including multiple treatment arms, outcomes and independent groups), these were combined into composite effect size estimates in order to calculate summary effects of the intervention. In cases where the data were dependent (ie, multiple treatments or outcomes), average effects were computed to yield a single effect estimate. For multiple independent groups, weighted mean effects and standard errors were calculated to obtain a combined effect. Where applicable, individual effects are used in separate meta-analyses. Only the primary outcome of interventions is reported. Reports of spillover effects or follow-up effects were excluded.
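
The following simplified sketch illustrates the two combination rules: averaging dependent effect sizes into a composite, and inverse-variance weighting of independent subgroup effects. The numbers are hypothetical, and the dependent-case average omits the covariance adjustments discussed by Borenstein.

```python
import math

def average_dependent_effects(effects):
    """Composite estimate for dependent effects (eg, multiple outcomes
    measured on the same sample); a simple mean, ignoring covariance."""
    return sum(effects) / len(effects)

def combine_independent_effects(effects, ses):
    """Inverse-variance weighted mean effect and SE for independent subgroups."""
    weights = [1 / se**2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return pooled, se

print(average_dependent_effects([0.12, 0.20]))                  # two outcomes, one sample
print(combine_independent_effects([0.10, 0.30], [0.05, 0.08]))  # two independent groups
```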

Data were analysed in Stata using the generic inverse variance method as it produces a random effects meta-analytic calculation6. Given studies were sampled from diverse countries, a random effects model was appropriate as this assumes studies will differ such that there may be different but related effect sizes (Borenstein, 2009). Missing data were not problematic with the exception of one study for which the authors were contacted but communication could not be established [S16]. Meta-regression determined the impact of moderators on overall study effects. There is no universally accepted minimum number of studies required for a meta-regression and such a number may be arbitrary in any case (Fu et al., 2011). Nonetheless, recommended lower bounds for the number of studies required in a meta-analysis (10 studies; Deeks et al., 2020), and for meta-regression involving categorical subgroup variables (eg, 4 studies; Fu et al., 2011), have been met. The average effect size and variation across studies are reported based on the identified a priori features of personalisation. Heterogeneity7 was assessed using the Q test (Hedges, 1982), tau (T2) and I2 (Higgins & Thompson, 2002) to give an indication of dispersion in the study effect sizes. Publication bias was assessed using the funnel plot method, which is used as a visual aid for detecting bias stemming mainly from negative results not being published or systematic heterogeneity (Bartolucci & Hillegass, 2010). Study limitations are considered in Section 5.4.
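
For readers unfamiliar with the generic inverse variance method, the sketch below implements basic DerSimonian-Laird random-effects pooling together with the Q, tau-squared and I-squared heterogeneity statistics. It is an illustration only, with hypothetical inputs, and will not exactly reproduce the Stata output reported here, which depends on the estimator used.

```python
import math

def random_effects_meta(effects, ses):
    """Generic inverse-variance random-effects pooling (DerSimonian-Laird),
    returning the pooled effect, 95% CI, Q, tau^2 and I^2."""
    k = len(effects)
    w = [1 / se**2 for se in ses]
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))
    df = k - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    w_star = [1 / (se**2 + tau2) for se in ses]          # random-effects weights
    pooled = sum(wi * ei for wi, ei in zip(w_star, effects)) / sum(w_star)
    se_pooled = math.sqrt(1 / sum(w_star))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci, q, tau2, i2

# Hypothetical study-level effects and standard errors.
print(random_effects_meta([0.39, 0.05, 0.25, 0.07, 0.21], [0.07, 0.04, 0.14, 0.04, 0.13]))
```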

RESULTS

Search, screening and selection

Search results, screening outcomes and selection decisions are presented in Figure 1.

Figure 1. Flow chart of the study selection process (following adapted PRISMA guidelines; Moher et al., 2009)

The initial automated searches returned 38,335 results, with 198 potential studies identified after title and abstract screening. The additional automated searches returned 1218 results with 8 potential studies identified after screening. Following all automated and snowballing searches (with author recommendation leading to the identification of one potential study and citation snowballing identifying four further potentially relevant studies), 54 full-text studies were assessed for eligibility.

In total, this systematic combination of automated, manual and snowballing searches led to 16 studies meeting the inclusion criteria (although 15 studies are included in the meta-analysis). No further studies were identified after comparing search results to the included study lists of related systematic reviews (indeed, the meta-analysis includes additional studies not identified by this previous work). Reasons for the exclusion of studies based on the eligibility criteria are available in Supporting Information File 1.

Most studies reported treatment effects (n = 12) for unadjusted and adjusted ordinary least squares regressions (OLS) and accounted for baseline outcome measures as covariates. For the remaining studies (n = 3), standardised mean differences were calculated. Some studies reported multiple effect sizes for different groups including multiple treatment arms (n = 3), outcomes (n = 7) and independent groups (n = 1). Of the 15 studies included in the statistical analysis, authors of 12 studies confirmed that they agreed with the coding undertaken with regard to the personalisation affordances of included interventions. Communication could not be established with the authors of the three remaining studies. Collaborative review amongst the research team—and the process of consultation with study authors—led to consensus on the features of personalisation established for each intervention (Supporting Information File 1). To assess potential bias from statistical dependency, arising because a number of studies come from the Rural Education Action Program (REAP) at Stanford University,8 a sensitivity analysis was undertaken.

Research critical appraisal

Following a discussion between the two raters, there was no disagreement in regard to the critical appraisal process. All studies were considered to be of an appropriate standard for inclusion given the average quality score of 16.4/21. Importantly, all studies had medium or high scores for RCT design (Category 3) suggesting limited chances of bias arising due to this. The overall quality scores for each study can be seen in Table 1.

Descriptive findings

In total, 16 independent studies were identified. These were conducted9 in China (n = 9), India (n = 3), Malawi (n = 2), the Russian Federation (n = 1) and El Salvador (n = 1). Populations were typically of low socio-economic status from rural areas (eg, poor ethnic minority areas; [S7]) with the exception of three studies that included urban populations ([S12] [S13] [S14]). Most featured students aged 8–12 years (n = 14), with one study focusing on learners aged 6–8 ([S14]) and one learners aged 10–15 ([S12]).

Studies focused on mathematics (n = 6), literacy (n = 5) and both mathematics and literacy (n = 5). Outcomes for literacy included: English as an additional language; Russian; Mandarin; Hindi; and reading in Chichewa (the language of instruction in Malawian primary schools). Learning outcomes were assessed in written form, ranging from in-app quizzes (eg, [S14]) to standardised tests (eg, [S6]) and researcher-designed tests (eg, [S12]). All interventions delivered supplementary instruction (n = 16) with one study including a second computer-assisted treatment group that integrated technology into the teaching of English ([S2]).

Most studies report CAL interventions (n = 14). Two report a tablet intervention ([S13] [S14]). Specific software included: CAL software developed by the Rural Education Action Program10 (n = 8), an online adaptive version of the same software11 (n = 1), the One Billion Interactive App (n = 2), Mindspark (n = 1), Khan Academy (n = 1), software developed by an established technology organisation ([S4]), bespoke personalised software developed by a research team ([S5]) and a combination of internally and professionally developed software (n = 1). Interventions were mostly delivered during the school day (n = 10) with others delivered after school (n = 2) and either during lunchtime at school or after school with supervision (n = 4). Most studies reported a ‘STRONG’ intensity and duration level (n = 10) with others ‘MODERATE’ (n = 5). One incorporated two groups with both levels ([S4]).

Regarding the personalisation features of reported interventions (see Supporting Information File 1), most studies featured technology delivering personalisation (n = 12) with others the teacher and the software providing personalisation (n = 4). Six studies featured ‘HIGH’ personalisation and others ‘MEDIUM’ (n = 10). Personalisation features were as follows: engaging learners through matching their interests or experience (n = 15); providing personalised feedback, support and/or assessment (n = 14); and adapting or adjusting to learners' level (n = 6). Included study characteristics, main effect sizes and ID codes (eg, [S10] referring to Study Ten—Mo et al., 2014) are presented in Table 1.

TABLE 1. Study characteristics13
Study Code Study Country Population Characteristics Total Sample size Age Subject Type of technology Comparator Delivery time Type of Technology use Intensity × Duration Personalisation Delivery Type Personalisation Level Experimental Design Quality assessment Effect Size SE
S1 Banerjee et al. (2007) India Urban areas in Vadodara 11,890 9–10 Mathematics CAL No intervention During and after school Supplementary Strong Technology H RCT 16 0.39 0.07
S2 Bai et al. (2016) China Rural (poor minority area in Qinghai Province) 5917 10–11 Language (English as an OL) CAI & CAL No intervention During school CAI: Integrative; CAL: Supplementary Strong Technology M RCT 16 0.05 0.04
S3 Bai et al., (2018) China Rural China (poor minority area in Qinghai Province) 1342 10–11 Language (English as an OL) Online CAL (OCAL) No intervention During school Supplementary Moderate Teacher + Technology H RCT 18 0.25 0.14
S4 Bettinger et al. (2020) Russian Federation 2 × regions with GDP below the national average 6253 8–9 Mathematics & Language (Russian) CAL Traditional homework After school Supplementary CAL single dose: Moderate; CAL double dose: Strong Teacher + Technology M RCT 13 0.07 0.04
S5 Kumar and Mehra (2018) India Low SES background from India 232 11–12 Mathematics CAL Traditional homework During school Supplementary Strong Teacher + Technology H RCT 15 0.21 0.13
S6 Lai et al. (2015) China Migrant children in Beijing (typically of low SES background) 1717 9–10 Mathematics &Language (Chinese) CAL No intervention During lunch or after school supervised Supplementary Strong Technology M RCT 18 0.08 0.04
S7 Lai et al. (2016) China Poor ethnic minority areas in China's Qinghai Province 3164 9–10 Mathematics & Language (Mandarin) CAL No intervention During lunch or after school supervised Supplementary Strong Technology M RCT 15 0.12 0.05
S8 Lai et al., (2012) China Poor minority rural areas in Qinghai Province 1717 9–10 Language (Chinese) CAL No intervention During lunch or after school supervised Supplementary Moderate Technology M RCT 19 0.19 0.06
S9 Mo et al. (2020) China Poor minority areas of Qinghai Province 5253 10–11 Language (English as an OL) CAL No intervention During school Supplementary Strong Technology M RCT 18 0.05 0.07
S10 Mo et al. (2014) China Poor rural areas in Shaanxi (boarders and non-boarders) 4757 9–10 & 11–12 Mathematics CAL No intervention During school Supplementary Strong Technology M RCT 21 0.16 0.06
S11 Mo et al (Phase 2 only) (2015) China Shaanxi Province 2426 10–11 & 12–13 Mathematics CAL No intervention During school Supplementary Strong Technology M RCT 18 0.26 0.04
S12 Muralidharan et al. (2019) India Low-income neighbourhoods in Delhi 619 10–15 Mathematics & Language (Hindi) CAL No intervention After school Supplementary Strong Technology H RCT 18 0.29 0.29
S13 Pitchford (2015) Malawi Urban area of the capital city Malawi 283 8–10 Mathematics Digital tablet Intervention Non-Maths tablet control + No intervention During school Supplementary Moderate Technology H RCT 15 0.22 0.09
S14 Pitchford et al. (2019) Experiment 3 Malawi Seven school districts in Malawi 320 6–8 Reading in Chichewa Digital tablet Intervention No intervention During school Supplementary Moderate Technology H RCT 14 0.39 0.03
S15 Yang et al. (2013) China Migrant communities outside of Beijing 6487 8–11 Mathematics (Beijing & Shaanxi) and Language (Mandarin) (Qinghai) CAL No intervention During lunch or after school supervised Supplementary Moderate Technology M RCT 16 0.14 0.02
S16 Buchel et al. (2020) El Salvador Rural district 3528 9–12 Mathematics CAL Additional math lessons instructed by a teacher During school Supplementary Strong Teacher + Technology M RCT 12

Meta-analysis results

While 16 studies met the inclusion criteria, the meta-analysis itself is based on 15 studies. This is because [S16]12 could not be included in the analysis due to missing statistical information. The total number of participants involved was 53,029 (25,850 intervention and 27,179 control group), with a minimum of 232 and a maximum of 11,890 students. The mean sample size was 3535 (1723 intervention and 1811 control group). The effect sizes for the 15 studies ranged from 0.05 to 0.39. When multiple outcomes (multiple subjects) and comparators (multiple treatments) used in subgroup analyses are factored in, there are a total of 30 effect sizes ranging from 0.01 to 0.39.

RQ1. Does technology-supported personalised learning improve learning outcomes for school-aged children more effectively than teachers' standard educational practice (without technology) in low- and middle-income countries?

Overall, technology-supported personalised learning interventions had a significant positive effect of 0.18 on students' learning (95% CI [0.12, 0.24], p < 0.001). The forest plot showing the distribution of individual studies, summary effects and confidence intervals is presented in Figure 2. Blue squares indicate the size of the intervention effect; each square's size is proportional to the weight of the study. The 95% confidence interval is indicated by blue lines. The green diamond displays the weighted average overall effect size and its confidence interval, with its midpoint indicating the magnitude of the effect size. The vertical line running from zero is the line of null effect, that is, the point where there is no association between the intervention and control. The overall effect size is statistically significant, as indicated by the diamond not crossing the zero line.

Figure 2. Forest plot: overall effect of technology-supported personalised learning interventions is 0.18 (95% CI [0.12, 0.24], p = 0.001)

A significant summary effect indicates that students using technology-supported personalised learning approaches have significantly higher learning outcomes than their peers who did not use technology. Heterogeneity between individual studies was observed (Q(14) = 95.95, p = 0.001; I2 = 83.59%), suggesting that variation in effect sizes across the studies might be due to characteristics of the different studies (or to the features of personalisation which have been hypothesised). The results from meta-regression analysis are subsequently used to explore potential reasons for variability across studies.

Publication bias

The funnel plot in Figure 3 shows that the points (each representing study effects) are fairly evenly scattered around the reference line at the top of the graph. The gap near the middle and bottom left of the graph is indicative of likely missing data due to publication bias, and the single study at the bottom of the graph of small-study effects. A follow-up statistical test, the trim-and-fill method, was conducted to identify and correct for funnel plot asymmetry arising from publication bias by providing an estimate of the number of missing studies and an adjusted intervention effect from including the filled studies (Duval & Tweedie, 2000; Shi & Lin, 2019). However, results from the trim-and-fill analysis recommended no imputations to achieve symmetry, which suggests that the results of the meta-analysis are not systematically affected by unpublished work.
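
A funnel plot of this kind can be reproduced with standard plotting tools. The sketch below (using matplotlib, with hypothetical effect sizes and standard errors) draws study effects against their standard errors around an assumed pooled effect; the trim-and-fill adjustment itself is more involved and is typically run in dedicated meta-analysis software.

```python
import matplotlib.pyplot as plt

def funnel_plot(effects, ses, pooled, path="funnel.png"):
    """Plot study effects against standard error (inverted axis) with
    pseudo 95% confidence limits around the pooled effect."""
    fig, ax = plt.subplots()
    ax.scatter(effects, ses)
    se_max = max(ses) * 1.1
    ax.plot([pooled, pooled], [0, se_max], color="grey")            # reference line
    ax.plot([pooled - 1.96 * se_max, pooled, pooled + 1.96 * se_max],
            [se_max, 0, se_max], linestyle="--", color="grey")      # pseudo CI funnel
    ax.set_xlabel("Effect size")
    ax.set_ylabel("Standard error")
    ax.invert_yaxis()  # more precise (larger) studies appear towards the top
    fig.savefig(path)

# Hypothetical effects/SEs; the pooled value of 0.18 mirrors the overall result.
funnel_plot([0.39, 0.05, 0.25, 0.07, 0.21, 0.14],
            [0.07, 0.04, 0.14, 0.04, 0.13, 0.02], pooled=0.18)
```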

Figure 3. Funnel plot of summary effects

Sensitivity analysis

The sensitivity analysis compared overall effects for studies using the same software developed by REAP to studies coming from other research labs (Figure 4). This is because REAP studies accounted for a larger proportion of those in the sample (n = 9). Results indicate that interventions in both groups yielded positive, statistically significant results, although studies from the other research labs had a higher overall effect size of 0.26 (95% CI [0.13, 0.39], p = 0.001) and were more heterogeneous (Q(5) = 44.21, p = 0.001, I2 = 82.64%). This compares to studies in the REAP group, with an effect size of 0.14 (95% CI [0.09, 0.19], p = 0.01), which showed less heterogeneity (Q(8) = 19.48, p = 0.01, I2 = 62.68%). The test of group differences confirmed that the group-specific overall effect sizes were not statistically different (Qb = 3.08, p = 0.08). This supports the decision to include all studies in the meta-analysis even though several of them came from the same research lab. However, a noticeable difference is the smaller overall effect estimate for REAP studies. One possible explanation is that the software used by these studies has ‘MEDIUM’ personalisation features relative to the software used in other research. The effects of this level of personalisation as a characteristic feature of studies are investigated as a moderator in the meta-regression analysis.
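
The test of group differences reported above can be approximated from the two subgroup summary effects and their standard errors. The sketch below shows a simple inverse-variance version of the Q-between statistic; the inputs are back-calculated from the reported confidence intervals and are illustrative only.

```python
from scipy.stats import chi2

def q_between(subgroup_effects, subgroup_ses):
    """Simple inverse-variance test of group differences (Q_b) across
    subgroup pooled effects, with k - 1 degrees of freedom."""
    w = [1 / se**2 for se in subgroup_ses]
    grand = sum(wi * ei for wi, ei in zip(w, subgroup_effects)) / sum(w)
    qb = sum(wi * (ei - grand) ** 2 for wi, ei in zip(w, subgroup_effects))
    df = len(subgroup_effects) - 1
    return qb, 1 - chi2.cdf(qb, df)

# Subgroup summaries (REAP vs. other labs), SEs back-calculated from the 95% CIs.
reap, others = 0.14, 0.26
se_reap, se_others = (0.19 - 0.09) / (2 * 1.96), (0.39 - 0.13) / (2 * 1.96)
print(q_between([reap, others], [se_reap, se_others]))
```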

Figure 4. Sensitivity analyses for sub-group analysis

RQ2. To what extent do features of technology-supported personalised learning contribute to the effectiveness of interventions?

Features of technology-supported personalised learning (academic outcomes, personalisation levels, personalisation delivery type, intervention intensity and duration) are predicted to influence summary intervention effects. These categorical moderators are explored in four separate meta-regression analyses (see Appendix C). Graphical representations of the relationship between categories and summary effects are presented in Figure 5. For each regression model, the regression coefficient estimates indicate how the intervention effect in each subgroup differs on a nominated category and whether this difference is significant.
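
As an illustration of how such a meta-regression can be specified, the sketch below fits an inverse-variance weighted least squares model with a binary moderator (eg, a HIGH personalisation dummy). This is a simplified, fixed-effect style approximation with hypothetical data, not the random-effects meta-regression run in Stata for the paper.

```python
import numpy as np
import statsmodels.api as sm

def meta_regression(effects, ses, moderator):
    """Inverse-variance weighted least squares regression of study effects on a
    binary moderator (a simplified, fixed-effect style meta-regression)."""
    x = sm.add_constant(np.asarray(moderator, dtype=float))
    weights = 1.0 / np.asarray(ses) ** 2
    return sm.WLS(np.asarray(effects), x, weights=weights).fit()

# Hypothetical data: 1 = 'HIGH' personalisation, 0 = 'MEDIUM'.
effects = [0.39, 0.05, 0.25, 0.07, 0.21, 0.08]
ses = [0.07, 0.04, 0.14, 0.04, 0.13, 0.04]
high_personalisation = [1, 0, 1, 0, 1, 0]
result = meta_regression(effects, ses, high_personalisation)
print(result.params)  # intercept and moderator coefficient
```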

Figure 5. Effect sizes and 95% confidence intervals for selected moderator variables. Significant differences between groups were reported only for Personalisation Level (p < 0.001)

Academic outcome categories refer to studies which assessed learning in mathematics (n = 12) and literacy (n = 10). There was no difference (p = 0.80, I2 = 79.85%) in study effects whether interventions addressed mathematics, with an effect size of 0.17 (95% CI [0.11, 0.23]), or literacy, with one of 0.16 (95% CI [0.08, 0.25]). This suggests that technology-supported personalised learning approaches are effective across both subject areas.

Interventions differed in the types of software used and the degree of personalisation affordances provided. The six studies with ‘HIGH’ personalisation features had statistically significantly higher effect sizes (p = 0.01, I2 = 56.76%) compared to the nine studies with ‘MEDIUM’ personalisation features. Effect sizes for studies with ‘HIGH’ personalisation ranged from 0.22 to 0.39, with an overall effect size of 0.35 (95% CI [0.26, 0.42]), whereas for studies with ‘MEDIUM’ personalisation features effect sizes ranged from 0.05 to 0.26, with an overall effect of 0.13 (95% CI [0.08, 0.17]).14 This suggests that interventions using more highly personalised approaches that adapt or adjust to learners' level have a greater impact on learning.

Technology-supported personalised learning interventions may employ different personalisation delivery types. For instance, this could involve allowing students to work through remedial activities on software without pedagogical input from the teacher (technology only condition), or settings where a teacher supports students' learning through assignment of content or feedback as they use the software (teacher and technology condition). The condition for delivering the intervention, ‘technology only’ (n = 12) or ‘teacher and technology’ (n = 3), does not significantly affect reported effectiveness (p = 0.64, I2 = 83.79%). It appears that interventions included in this meta-analysis are similarly impactful whether the personalisation delivery type is via ‘technology only’ with an effect size of 0.19 (95% CI [0.12, 0.26]) or through ‘teacher and technology’ with one of 0.12 (95% CI [0.00, 0.24]). Results for ‘teacher and technology’ need to be treated with caution given the lower bound CI of zero and the very few ‘teacher and technology’ studies in comparison. However, these findings can possibly be taken as preliminary evidence that suggests personalised technology may leverage positive benefits whether or not teachers also have an active role in the personalisation.

Interventions may vary by the intensity and duration of programmes, such that they are delivered for more than 75 min per week and for longer than 4.5 months (‘STRONG’, n = 10), or less (‘MODERATE’, n = 6). Studies grouped as strong for the dimension of intensity and duration had an overall effect estimate of 0.15 (95% CI [0.07, 0.22]), whereas studies categorised as moderate had one of 0.21 (95% CI [0.11, 0.31]). The meta-regression reveals that there is no statistical difference between studies categorised based on the intensity and duration of the intervention (p = 0.31, I2 = 83.23%). This suggests that technology implementation for more than 4.5 months with an intensity of greater than 75 min a week may be similarly effective to that of a more moderate duration and intensity (between 2 and 4.5 months and 45–75 min a week), although further research is needed to confirm this (as discussed in the following sections).

A related, unexplored hypothesis is whether the type of technology use (ie, technology designed to supplement instruction, substitute for instruction or integrate with instruction) determined the effectiveness of the intervention. This hypothesis could not be tested in the meta-regression due to a lack of variability, as all studies report on ‘supplementary’ instruction only (n = 15).

DISCUSSION

The effectiveness of technology-supported personalised learning

This meta-analysis indicates that technology-supported personalised learning has a statistically significant positive effect of 0.18 on learning (p = 0.001). So how important are this and other reported effects? The US Department of Education (2020) considers effect sizes of 0.25 standard deviations or larger to be ‘substantively important’ for education. The Education Endowment Foundation15 in the UK meanwhile suggests that effect sizes of 0.18 and 0.19 translate to 2 or 3 months' additional educational progress. While an effect size of 0.18 can be characterised as small according to benchmarks provided by Cohen (1988; 0.2 is ‘small’, around 0.5 is ‘medium’ and above 0.8 is ‘large’) and others (eg, Acock, 2014), there is no universal guideline for assessing the practical importance of standardised effect size estimates for educational interventions (Bakker et al., 2019). Instead, there is consensus that effect sizes should reflect the nature of the intervention being evaluated, its target population and the outcome measure(s) used (Hill et al., 2008; Pigott & Polanin, 2020). It is also important to note that smaller effect sizes have increasingly been accepted in education over time (Bakker et al., 2019).

In a meta-analysis of 77 RCTs undertaken in primary education, McEwan (2015) found that technology interventions yielded the highest average effect size (0.15) of all educational interventions in developing countries. This further reinforces the educational importance of the present meta-analysis, in which overall moderator effect sizes range from 0.12 to 0.35. Investigation of study heterogeneity points to the level of personalisation features as the influential moderator. Specifically, findings highlight the potential significance of interventions that adapt or adjust to learners' level (effect size of 0.35) in contrast to personalised technologies that do not (effect size of 0.13).

In light of previous research, we consider reported effects to be moderate but potentially educationally significant. We also concur with Mo et al. (2014) that an overall effect size of around 0.18 is sufficiently large to attract the interest of policymakers, particularly as studies that employ adaptive instruction have been shown to be effective in LMICs (Conn, 2014). Furthermore, results indicate how ‘moderate’ use of personalised technology (eg, of between 2 and 4.5 months) was found to be similarly effective to ‘stronger’ use (eg, for longer than 4.5 months). This might corroborate research that identified a diminishing marginal rate of substitution for traditional learning from doubling the amount of technology use (Bettinger et al., 2020).

While the limitations of the meta-analysis are outlined fully in Section 5.4, the ‘supplementary’ nature of interventions should be considered when interpreting reported effects. The use of technology typically led to an increase in learning time compared to students in the control group. As most studies use passive controls or no interventions, this raises the possibility that learning gains may not solely be attributable to the use of personalised technology. In already resource-constrained environments, providing access to digital devices to administer a placebo treatment and/or developing non-technology approaches that are comparable to technology interventions is practically and ethically challenging. Despite this, the meta-analysis indicates that studies which included an active control group still report significantly greater gains in academic performance (eg, an effect size of 0.22 when comparing to a technology placebo group and a standard educational practice control; Pitchford, 2015), potentially in a way that may outperform traditional instruction (eg, where students increased their math scores by 0.21–0.24 standard deviations; Buchel et al., 2020). Additional research is strongly recommended to investigate whether the ‘added value’ of technology-supported approaches will be maintained when further RCTs with active controls, and alternative approaches to supplementary personalised learning (eg, integrative or substitute approaches), are implemented.

Cost implications

In addition to considering effect sizes, whether a programme should be implemented also depends on its potential to scale at reasonable cost (Angrist et al., 2020; Bakker et al., 2019; Harris, 2009). Educational technology interventions may not always lead to higher learning gains compared to low- or non-technology initiatives once the effect of the technology use is isolated (Evans & Acosta, 2020; Ma et al., 2020). As such, the question should not be whether a technological approach could address a problem in the educational system, but rather whether it is the most effective and cost-effective way to do so (Rodriguez-Segura, 2020). The meta-analysis did not set out to investigate cost-effectiveness given the RER revealed how synthesisable data required were likely to be limited. Nonetheless, several studies offer relevant information.

Costs associated with technology-supported personalised learning include fixed costs (eg, initial and on-going software development; Muralidharan et al., 2019) and variable costs of implementation (eg, hardware costs of computers; Kumar & Mehra, 2018). Other costs potentially include teacher support and social costs (Bai et al., 2018). Impact on teacher and learner time is an additional factor (Kumar & Mehra, 2018). Despite indications that technology-supported personalised learning approaches need not be prohibitively expensive (see Appendix D for an overview), significantly more research is required. This is particularly the case as other research suggests CAL interventions are amongst the least cost-effective in LMICs (McEwan, 2015). In settings without sufficient infrastructure, implementation costs are likely to be high (at least initially). Non-technology approaches may also offer comparable gains in learning at a lower cost (eg, Banerjee et al., 2007). Using existing hardware may help in reducing costs and increasing access (Global Education Evidence Advisory Panel, 2020). Considering the cost challenges experienced by countries with limited resources, a promising observation is that personalised software featuring moderate personalisation affordances—typically developed in close alignment with the curriculum—can still yield learning gains. Such approaches might provide a more immediate entry point in some contexts, given that higher-tech alternatives may remain unaffordable for some years to come.

Role of teachers and other considerations

While personalised technology appears to show benefits whether or not teachers also have an active role in the personalisation, relatively few studies have examined teachers' role in making personalised technology effective as part of their everyday practice. This is because research often reports on supplementary uses of personalised technology which enable students to practise with instructional content outside of regular classroom instruction. Integrative approaches that utilise technology during regular instruction are uncommon. Potentially, technology may also be used to empower teachers to implement personalised learning approaches that do not feature learners using technology (eg, ‘Teaching at the Right Level’). In both contexts, teachers would need to be equipped—through appropriate professional development—with the knowledge to integrate personalised learning, including diagnostic and formative assessment, with other teaching activities. Absence of teachers in the implementation of personalised technology interventions also does not negate potential teacher involvement in the planning stages (eg, aligning supplementary uses of personalised technology to the curriculum and instruction).

Several studies that did not meet the inclusion criteria must also be considered. Chong et al. (2020) evaluated a 6-month personalised internet-based sexual education course in high schools in 21 Colombian cities, reporting significant improvements in students' knowledge, attitudes and likelihood of redeeming vouchers for condoms. Gambari et al. (2015, 2016) examined the effects of computer-assisted instruction on Nigerian secondary school students' achievement and motivation outcomes in physics and chemistry. Results revealed that students taught with personalised technology approaches in cooperative settings achieved better learning outcomes than their counterparts taught using individualised computer instruction (Gambari, 2015). Finally, Ito et al. (2019) examined the effects of an app that incorporates adaptive learning on Cambodian elementary students' cognitive and non-cognitive skills, reporting positive outcomes for learning productivity and for students' subjective expectation of attending college in the future. These studies demonstrate the potential of technology-supported personalised learning to be effective in domains other than mathematics and literacy, as well as in improving cognitive and affective skills. In addition to improving learning outcomes, there are also indications that the impact of such approaches may increase as learner socio-economic level decreases (Perera & Aboal, 2019), including when used at home (Tang et al., 2020).

Study limitations

The focus on studies in LMICs was motivated by the need to identify evidence in this specific context (particularly due to the immediate and long-term challenges caused by COVID-19; Kaffenberger, 2020). While expanding the search to include high-income countries would have increased the number of included studies, such action would have risked overlooking contextual factors specific to LMICs (Tauson & Stannard, 2018). It would also be contrary to suggestions that the challenges facing the use of educational technology in LMICs warrant independent consideration from research undertaken in high-income countries (Kaye & Ehren, 2021).

While a synthesis of two studies is sufficient for a meta-analysis—provided these can be meaningfully pooled and their results are sufficiently ‘similar’ (Ryan, 2016)—the inclusion of only 16 studies from five countries (nine of them from China) must be considered. Findings may also not be generalisable to other LMIC contexts (particularly to low-income countries with extremely limited resources). These considerations, and the relatively small number of studies included in the meta-regression, mean care must be taken when interpreting findings. As outlined in Section 6, more research is now needed to investigate the complex factors involved in the use of personalised technology in LMICs (particularly with regard to the implications for policy and practice).

Other limitations include the restriction of the search to English-language research published from 2007 onwards. The keywords used or omitted, and the selection and nature of the digital libraries searched, may also have had an impact on reported findings. Studies did not always refer to personalised learning directly, with several examining this in the context of ‘computer-assisted learning’ more broadly. Further, the features of reported interventions may not always be comprehensively described. There is, therefore, a risk that aspects of personalisation may have been incorrectly inferred, although the rigorous inductive approach to identifying personalisation affordances and the fact that all study authors were invited to give feedback on the coding (with 75% responding) help to minimise this. All responding authors agreed with the coding undertaken.

Studies typically adopted an RCT design, clustered at the school level, and assessed learning outcomes in diverse ways. The limitations of RCTs must be acknowledged, including a potential lack of external validity and limited scope to account for the ways that interventions are implemented under different circumstances by different people (Deaton, 2020; Koutsouris & Norwich, 2018). While some studies examined non-academic outcomes (eg, self-efficacy, self-confidence, school enjoyment and meta-cognition), these were omitted because of their heterogeneity and because most interventions were not designed to target them. It is also arguable that additional lessons conducted by a teacher might have produced similar or even better results than those involving technology (Buchel et al., 2020).

Sensitivity analysis mitigates the potential limitation of several studies being conducted with the same software, as well as the potential conflict of interest associated with researcher-developed software. Other mitigating actions included undertaking pilot searches and taking steps to reduce subjectivity through inter-rater coding. In terms of reported interventions, some older technology is considered alongside newer technology. This is not considered problematic, given that coding focused on identifying affordances for personalisation rather than technical features. It is also noted that sophisticated intelligent and cognitive tutoring systems did not feature in the analysis, despite several studies exploring such technology being identified during the search; this research did not meet the eligibility criteria for inclusion (ie, it typically involved neither an experimental approach nor a focus on academic outcomes—see Supporting Information File 1). While the findings of the meta-analysis are inherently limited by the quality of evidence available, the critical appraisal of studies minimises the risk of low-quality research adversely impacting findings.

CONCLUSION AND FUTURE RESEARCH

The meta-analysis reveals that technology-supported personalised learning has a statistically significant—if moderate—positive effect on learning outcomes in low- and middle-income contexts. Such interventions are similarly effective for mathematics and literacy learning, and whether or not teachers also have an active role in the personalisation. One potentially important implication for both policy and practice is that personalised approaches that adapt or adjust to the learner (eg, their level and/or pace) led to significantly greater learning gains. Whether the inclusion of more adaptive personalisation features in technology-assisted learning environments warrants the additional investment likely necessary for their implementation, however, needs to be further investigated, given that their development and use are anticipated to be more complex. Another outcome with potential implications for cost and resource decisions is that personalised technology implementations of moderate duration and intensity had similar positive effects to those of stronger duration and intensity, although further research is needed to confirm this. Also potentially important for policy and practice, personalised technology approaches featuring moderate personalisation affordances can still yield learning gains.

Findings open up a range of other possibilities for future quantitative and qualitative research. Critically, it is not yet known whether personalised technology can be scaled in a cost-effective and contextually appropriate way. Most existing research reports on ‘supplementary’ uses of personalised technology outside of regular classroom instruction. Additional research into the viability and comparative effectiveness of teachers in LMICs integrating personalised learning approaches, whether featuring learners using technology in class or otherwise, would therefore make a strong contribution to informing policy and practice. There is also scope to determine the optimum duration for implementing such interventions and their longer-term effects on academic achievement and other outcomes (see Bianchi et al., 2020 for a related discussion).

Other valuable future work would include considering the differential role (positive or negative) of personalised technology in terms of different learning domains, location (rural versus urban), gender, disability and baseline achievement level. Assumptions that underpin the use of personalised technologies also warrant consideration. This includes whether there is a risk of perpetuating a narrow idea of what it means to ‘succeed’ academically (eg, due to an emphasis on ‘drill and testing’ that may be a feature of some personalised technologies); whether personalised learning risks promoting individualistic learning aspirations (as it often involves students working alone despite personalised learning not necessarily being restricted to individualised learning); and ethical and privacy considerations (particularly if new approaches integrate AI; UNESCO, 2019).

Following COVID-19, education stands at a time of unprecedented challenge. Of particular concern is that recent progress in closing the attainment gap for the most disadvantaged risks being reversed in our ‘new normal’. While the pandemic poses significant challenges, it also presents opportunities as the global education community looks to rebuild. In particular, there is a chance to revisit and question basic assumptions about the purpose and nature of education, including approaches that may previously have been considered impossible or impractical at scale. This meta-analysis provides promising evidence for the effectiveness of technology-supported personalised learning in improving learning outcomes for learners in LMICs.

ACKNOWLEDGEMENTS

The authors thank all colleagues who have in some way supported this work, in particular those based in the EdTech Hub and Professor Carole Torgerson and Dr Christopher Marshall who acted as critical friends prior to submission. The authors acknowledge the support of the FCDO-funded EdTech Hub (https://edtechhub.org/). Thanks also to Ioannis Kamzolas for assisting with the figure design and to the BJET reviewers for their constructive comments.

    CONFLICT OF INTEREST

    The authors declare no conflict of interest or ethical concerns.

    ETHICS STATEMENT

    This research was undertaken in accordance with the BERA Ethical Guidelines for Educational Research (BERA, 2018).

    ENDNOTES

    • 1 Wilichowski and Cobo (2021). Considering an adaptive learning system? A roadmap for policymakers. World Bank Blogs. https://blogs.worldbank.org/education/considering-adaptive-learning-system-roadmap-policymakers (Accessed 05/02/21).
    • 2 Note, Bulger (2016) observes how more sophisticated technology-enabled personalisation approaches—such as genuinely ‘intelligent’ tutoring systems—remain mostly aspirational at present.
    • 3 Study [S1] in the meta-analysis.
    • 4 https://sdgs.un.org/goals/goal4.
    • 5 The ability to read, write, speak and listen in a way that enables effective communication and sense of the world https://literacytrust.org.uk/information/what-is-literacy/ (accessed 18/12/20).
    • 6 Using the DerSimonian and Laird method (a standard formulation is sketched after these endnotes).
    • 7 Which can be interpreted using suggested thresholds of 25% for low, 50% for medium and 75% for high heterogeneity (Borenstein, 2009).
    • 8 https://sccei.fsi.stanford.edu/reap/ (accessed 05/02/21)—The Rural Education Action Program (REAP) at Stanford University is an international research organisation that aims to help poor students in rural China overcome the barriers many face in gaining a proper education.
    • 9 The five countries represented are all identified by the World Bank as LMICs (https://data.worldbank.org/country/XO): Malawi (low-income); El Salvador and India (lower-middle-income); China and Russia (upper-middle-income—although note participants were typically from disadvantaged communities within this context).
    • 10 http://intro.taolionline.cn/ (accessed 18/12/20)—a game-based platform providing free remedial resources accompanied by individualised feedback to increase academic performance and interest in learning.
    • 11 https://reap.fsi.stanford.edu/research/technology/ocal (accessed 18/12/20)—a game-based online platform that also features an adaptive learning component (exercise difficulty level automatically adjusts to match individual student's learning progress).
    • 12 [S16] was excluded due to missing data that meant effect sizes could not be estimated. The study examined the effectiveness of a computer-assisted learning intervention in mathematics over traditional teaching in primary schools in El Salvador. Assignment to additional technology-supported lessons significantly increased math scores by 0.21σ when overseen by a supervisor and by 0.24σ when instructed by teachers.
    • 13 Corrections made on 4 June 2021, after first online publication: Table 1 has been updated in this version.
    • 14 Correction added on 4 June 2021, after first online publication: ‘overall effect size of 0.34’ has been corrected to ‘overall effect size of 0.35’, in this version.
    • 15 https://educationendowmentfoundation.org.uk/evidence-summaries/about-the-toolkits/attainment/ (accessed 18/12/20).
    • 16 https://data.worldbank.org/income-level/low-and-middle-income (accessed 18/12/20).
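For reference, the DerSimonian and Laird estimator and the I² statistic mentioned in endnotes 6 and 7 take the following standard textbook forms (reproduced here as a sketch rather than re-derived from the included studies), where $y_i$ is the effect size of study $i$ of $k$, $v_i$ its variance and $w_i = 1/v_i$ its fixed-effect weight:

$$
Q = \sum_{i=1}^{k} w_i \left( y_i - \frac{\sum_j w_j y_j}{\sum_j w_j} \right)^{2}, \qquad
\hat{\tau}^{2} = \max\!\left( 0,\; \frac{Q - (k-1)}{\sum_i w_i - \sum_i w_i^{2} / \sum_i w_i} \right), \qquad
I^{2} = \max\!\left( 0,\; \frac{Q - (k-1)}{Q} \right) \times 100\%
$$

The random-effects pooled estimate then weights each study by $w_i^{*} = 1/(v_i + \hat{\tau}^{2})$.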

    APPENDIX A: SEARCH TERMS

    GOOGLE SCHOLAR AND SCOPUS, EDUCATION RESOURCES INFORMATION CENTER (ERIC) AND WEB OF SCIENCE

    “Personalised Adaptive Learning”; "Personalized Adaptive Learning"; “Personalised technology-enhanced learning”; “Personalized technology-enhanced learning”; “Technology-enhanced personalised learning”; “Technology-enhanced personalized learning”; “Personalised TEL”; “Personalized TEL”; “Personalised learning environment”; “Personalized learning environment”; “Teaching at the right level”; "Combined Activities for Maximized Learning"

    The search string—

    AND “Personalised education” AND (“Edtech” OR “Education technology” OR “digital learning” OR "eLearning" OR school) AND ("africa" OR “LMIC" OR "developing world” OR “developing country*” OR “ICT4D” OR “global south”);

    also followed searches for:

    “Personalized education”; “Personalised learning”; “Personalized learning”; “adaptive learning”; “adapting learning”; “Differentiated learning”; “Computer-assisted instruction”; “Computer-assisted learning”; “Computer-aided learning”; “Intelligent tutoring system”; “Exploratory learning environments”; “Adaptive Educational Hypermedia”; “Adaptive hypermedia”; “Personalised Adaptive Learning”; "Personalized Adaptive Learning".

    SEARCHABLE PUBLICATION DATABASE (SPUD)

    “Teaching at the Right Level”; “TaRL”; “personalized”; “adaptive learning”; “intelligent tutoring system”; “computer assisted learning”

    APPENDIX B: STUDY INCLUSION CRITERIA

    POPULATION
    Inclusion criteria:
    • Involving elementary and/or secondary school-aged learners (from 5 to 18 years old)
    • Empirical research taking place in countries defined as low- or middle-income by the World Bank16
    Exclusion criteria:
    • Involving learners in higher education or 19 years+
    • Empirical research taking place in countries defined as high-income by the World Bank
    INTERVENTION
    Inclusion criteria:
    • Involved technology-supported personalisation (ie, technology enabling or supporting learning based upon particular characteristics of relevance or importance to learners)
    • An intervention duration/intensity of at least once a week for 6 weeks or more
    • Taking place inside or outside school (eg, non-formal education)
    Exclusion criteria:
    • Not including at least one element of technology-supported personalisation (ie, focusing on access to technology with little consideration for how this is personalised to the needs of learners, or personalised learning with no use of technology)
    • An intervention duration/intensity of less than 6 weeks
    COMPARATOR
    Inclusion criteria:
    • Learners using non-personalised learning software or learning in traditional (or supplementary) settings with no technology
    Exclusion criteria:
    • Comparisons to an unmatched group not part of the intervention, or no control group
    OUTCOMES
    Inclusion criteria:
    • Reporting effects on academic performance measured by grades or performance on tests (including those developed by researchers)
    Exclusion criteria:
    • Reporting non-academic outcomes such as engagement or motivation without considering academic performance
    STUDY DESIGN
    Inclusion criteria:
    • Describing a randomised experimental design with an independent comparison group
    Exclusion criteria:
    • Reviews and meta-analyses, or studies providing a ‘lessons learned’ account without presenting any empirical evidence
    LIMITS
    Inclusion criteria:
    • Published 2007–2020: corresponding with the introduction of major mobile operating systems in 2007 (iPhone) and 2008 (Android phones), as well as 2009 (Android tablet) and 2010 (iPad)
    • English language only
    Exclusion criteria:
    • Studies published before 2007

    APPENDIX C: META-REGRESSION ANALYSIS RESULTS

    Model  Regression component        Coefficient   SE      df   p value   95% CI             R² (%)
    1      Academic Outcomes            0.013        0.052   20   0.801     −0.089 to 0.116     0.00
           Constant                     0.162        0.039   20   0.000      0.086 to 0.238
    2      Personalisation Level***     0.209        0.048   13   0.000      0.115 to 0.303    72.07
           Constant***                  0.125        0.023   13   0.000      0.075 to 0.172
    3      Personalisation Delivery    −0.042        0.091   13   0.641     −0.220 to 0.135     0.00
           Constant*                    0.229        0.110   13   0.037      0.014 to 0.444
    4      Intensity × Duration        −0.064        0.063   14   0.313     −0.186 to 0.059     2.05
           Constant                     0.083        0.093   14   0.372     −0.113 to 0.306

    Note: Figures are rounded to three decimal places. Statistical significance: *p < 0.05, **p < 0.01, ***p < 0.001. Predictor variable codes: Learning Outcome: 1 = Maths, 0 = Literacy; Personalisation Level: 1 = Strong, 0 = Medium; Personalisation Delivery: 1 = Technology, 0 = Technology + Teacher; Intensity × Duration: 1 = Strong, 0 = Moderate.
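Read as a simple illustration (derived directly from the coefficients above under the dummy coding given in the note, rather than reported separately by the meta-regression), each model implies a mean effect size of $\hat{g} = \beta_0 + \beta_1 X$ for a given moderator level. For Model 2, for example:

$$
\hat{g}_{\text{medium}} = 0.125, \qquad \hat{g}_{\text{strong}} = 0.125 + 0.209 = 0.334
$$

which is consistent with the substantially larger pooled effect reported for strongly personalised interventions.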

    APPENDIX D: COST-EFFECTIVENESS CONSIDERATIONS REPORTED BY STUDIES INCLUDED IN THE META-ANALYSIS

    Muralidharan et al. (2019) report that, in terms of total costs, delivery of the Mindspark programme had an unsubsidised cost of INR 1000 per student (USD 15) per month (even when implemented with high fixed costs, without economies of scale and based on 58% attendance). The authors conclude that costs at policy-relevant scales are likely to be lower since the (high) fixed costs of product development have already been incurred. If implemented at even a modest scale (50 government schools), they estimate that per-student costs reduce to USD 4 per month (including hardware). For more than 1000 schools, per-student marginal costs (software maintenance and technical support) are estimated at USD 2 annually. Because the fixed costs of developing personalised learning software can be amortised over a large number of students, such approaches are considered potentially cost-effective at scale (Muralidharan et al., 2019).
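As an illustration of the amortisation logic described above, the sketch below shows how a fixed development cost spreads over student-months. The figures used are hypothetical placeholders, not the Mindspark costs reported by Muralidharan et al. (2019).

```python
# Hypothetical sketch: amortising fixed software-development costs over students.
# All input figures are illustrative placeholders, not values from the cited studies.

def per_student_monthly_cost(fixed_cost_usd: float, n_students: int,
                             months: int, marginal_monthly_usd: float) -> float:
    """Fixed costs spread across all student-months, plus the per-student marginal cost."""
    amortised_fixed = fixed_cost_usd / (n_students * months)
    return amortised_fixed + marginal_monthly_usd

# The same fixed cost contributes far less per student as the number of students grows.
small = per_student_monthly_cost(500_000, n_students=5_000, months=12, marginal_monthly_usd=2.0)
large = per_student_monthly_cost(500_000, n_students=500_000, months=12, marginal_monthly_usd=2.0)
print(f"Per student per month: {small:.2f} USD at small scale vs {large:.2f} USD at large scale")
```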

    Other research draws similar conclusions, suggesting that the per-learner cost may be as low as USD 1 if implemented for several thousand students (Kumar & Mehra, 2018). It is also noted that the marginal costs of shifting from a lower to a higher level of personalised software may be low because learners already have access to the equipment required (Bettinger et al., 2020).

    Finally, it is reported that online personalised learning programmes have the potential to be more cost-effective than offline ones (Bettinger et al., 2020). Bai et al. (2018) highlight how the online cost per standard deviation raised is expected to be 129 RMB (USD 20) per student, whereas that of similar offline programmes is 214 RMB (USD 33) per student.
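Cost-per-standard-deviation figures of the kind quoted above are typically obtained by dividing the per-student cost of delivering a programme by the effect size it produces. A minimal sketch follows; the inputs are hypothetical and are not the values underlying Bai et al.'s (2018) estimates.

```python
# Hypothetical sketch: cost-effectiveness as cost per standard deviation of learning gained.
# Inputs are illustrative placeholders, not figures from the cited studies.

def cost_per_sd(cost_per_student_usd: float, effect_size_sd: float) -> float:
    """Cost of raising one student's achievement by one standard deviation."""
    return cost_per_student_usd / effect_size_sd

programme_a = cost_per_sd(cost_per_student_usd=6.0, effect_size_sd=0.25)   # hypothetical
programme_b = cost_per_sd(cost_per_student_usd=12.0, effect_size_sd=0.30)  # hypothetical
print(f"Programme A: {programme_a:.0f} USD per SD; Programme B: {programme_b:.0f} USD per SD")
```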

    DATA AVAILABILITY STATEMENT

    Additional information (eg, underpinning data) can be obtained by sending a request email to the corresponding author.