Students from the lowest-income families are the most likely to attend schools that do not systematically vet their education technology, putting those students’ data privacy at greater risk, a new analysis shows.
The apps these schools use are also the most likely to contain ads, according to an analysis by Internet Safety Labs, a nonprofit group that researches tech product safety and privacy.
The same problems appeared at schools with majority-American Indian/Alaskan Native student bodies.
This was despite the fact that these schools recommended or required that students use fewer apps, on average, than wealthier schools.
Schools serving the lowest-income students were three times more likely to recommend or require apps with behavioral ads than schools serving students from families earning $150,000 or more per year.
“Even though they were recommending less tech, they were getting the riskiest-behaving tech, and that was super disappointing to see that in the data,” said Lisa LeVasseur, the executive director of Internet Safety Labs.
The analysis also found that schools with majority-Black student bodies were most likely to have ads and trackers on their websites.
Together, this adds up to a potentially worrisome level of data collection on students from minority and low-income families, said LeVasseur.
An ad in an education app collecting data on a single student may seem innocuous, said LeVasseur. The bigger concern is the aggregate: all the different technologies students use, in school and in their personal lives, collecting data and sending it to third-party data brokers.
Data brokers compile detailed profiles on everyone who uses technology and then sell those profiles to third parties. People have no control over who buys their data or how it’s used, said LeVasseur.
Those profiles can contain sensitive information, she said, such as users’ religion and gender, as well as details on their location and movements, and physical and mental health.
The makers of software programs and apps that schools use will often point out that the user agreements for their technologies are clear about how they collect or share data. They encourage people to read those agreements carefully.
Even so, that personal data can be used in ways that are difficult to predict, LeVasseur said.
Children, especially those under 8, struggle to distinguish advertisements from the media in which they are embedded, say child development experts.
Even if technology products marketed to children claim that they are not selling children’s data to third parties, they are most likely still making money from that data, according to a 2023 analysis by Common Sense Media, a nonprofit research and advocacy organization.
The lowest-income schools use the riskiest apps, but there is a simple fix
Internet Safety Labs conducted an extensive audit in 2022 of the education apps that a sample of school districts recommended for students or required them to use. The sample included 663 districts from all 50 states and the District of Columbia. That original analysis found that schools are using many apps that do not protect students’ data privacy, or that Internet Safety Labs rates “very high risk.”
This latest report is an analysis of that original audit to find differences among schools based on their student populations. (One caveat: the organization said it wished its sample of the lowest-income districts—18 in all—had been larger and closer in size to the samples of the other district groups. Even so, it is confident in its overall findings.)
Internet Safety Labs found that, overall, schools whose apps were vetted either at the school or district level recommended or required apps with fewer contextual ads (which serve people ads related to the content on the website or app they’re using) and behavioral ads (which use data collected on users to target highly personalized ads to them).
It also found that:
- None of the lowest-income schools in the sample, those serving predominantly students from families that earn between $20,000 and $39,000 annually, had any systematic vetting of the technology they recommend for students or require them to use;
- Schools serving students predominantly from families earning $100,000 or more a year were more likely to vet the technology their students are using than lower-income schools;
- At schools that served the lowest-income students, 9.8 percent of the recommended or required apps included contextual ads and 9.5 percent included behavioral ads, compared with 2.7 percent in the highest-income schools;
- Schools serving the lowest-income students were less likely than higher-income schools to provide families with a technology notice that clearly lists all the technology products students must use;
- Schools serving a majority-Black student population had the biggest data privacy problems with their school websites.
This was the case even though the analysis was generous in what it counted as vetting, said LeVasseur.
But even basic vetting appears to work: “If you’re doing something, it seems like it’s somehow filtering out apps with ads and behavioral ads,” she said.
That should be encouraging news for schools with fewer resources. For them, LeVasseur recommends, at a minimum, checking whether apps have a COPPA Safe Harbor Seal before recommending or requiring that students use them. Under the Children’s Online Privacy Protection Act, the Federal Trade Commission has approved certain groups to develop COPPA Safe Harbor certifications based on guidelines laid out in the privacy law.