Summary: In this article we look at how privacy concerns within education are impeding progress towards customised education in the classroom, and consider a promising solution to the problem.
As commercial and public sector organisations explore new ways to mine data, there are increasing concerns around data privacy. In the past few weeks alone, concerns have been raised over data privacy in education: inBloom, the non-profit organisation backed by the Bill & Melinda Gates Foundation, is winding down, and the ed-tech startup Knewton was described as knowing more about your children than the NSA in the article "Data Mining Your Children". The Family Educational Rights and Privacy Act (FERPA) is held partially responsible for the problem, because it provides only limited protection and is silent on the subject of data ownership. Machine learning has huge potential benefits in education, but they cannot come at the expense of privacy.
One of the reasons organisations are looking to machine learning to customise education for the individual is the push for education reform. One of the leading figures in education reform is Ken Robinson. In 2010, during a presentation at the Royal Society for the encouragement of Arts, Manufactures and Commerce (RSA), he made the point that education was modelled on the interests of industrialism and in its image. He argues that some children learn better in smaller groups, in different subjects, and at different times of the day, while current teaching methods are about conformity; as a result, there has been a big increase in standardised testing. The real goal should be to understand how, where and when a student learns most effectively, and to customise their education to maximise that learning. While the benefits of improving a child's education are clear, there are also significant direct financial benefits to making teaching more efficient, which in the US alone are estimated to be over $300 billion a year.
The potential benefits of customised education are too great to simply give up on this initiative, so what will it take to overcome the current privacy concerns? What if it were possible to encrypt sensitive data and perform machine learning over it without ever accessing the raw unencrypted values? Public-key cryptosystems such as Paillier encryption potentially hold the solution to this problem. Paillier is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of their plaintexts, and raising a ciphertext to a constant yields an encryption of the plaintext scaled by that constant. That is enough to compute sums, averages and the weighted sums at the heart of many linear models directly on encrypted data. It would allow organisations such as inBloom to "share" data with trusted partners in encrypted form while still permitting machine learning over it, with the underlying data protected throughout. Public-key cryptosystems such as Paillier may not be sufficient to satisfy all privacy requirements, but used correctly they have the potential to significantly reduce privacy concerns when performing machine learning over sensitive data.
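To make the homomorphic property concrete, here is a minimal sketch of Paillier encryption in Python (3.9+). It is purely illustrative: the hard-coded primes are hopelessly insecure toy values, and the `encrypt` and `decrypt` functions are our own naming for this sketch, not the API of any particular library.

```python
import math
import random

# Toy sketch of the Paillier cryptosystem. The primes below are
# insecure demo values; real keys use primes of 1024 bits or more.
p, q = 17, 19
n = p * q                      # public modulus
n_sq = n * n
g = n + 1                      # standard simplified choice of generator
lam = math.lcm(p - 1, q - 1)   # private key: Carmichael's lambda(n)
mu = pow(lam, -1, n)           # private key: lambda^-1 mod n

def encrypt(m):
    """Encrypt m in Z_n as c = g^m * r^n mod n^2, with random r in Z_n*."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    """Decrypt c as m = L(c^lambda mod n^2) * mu mod n, where L(u) = (u-1)/n."""
    return ((pow(c, lam, n_sq) - 1) // n) * mu % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts...
c1, c2 = encrypt(42), encrypt(100)
assert decrypt(c1 * c2 % n_sq) == 142

# ...and exponentiating a ciphertext scales the plaintext by a constant.
assert decrypt(pow(c1, 3, n_sq)) == 126
```

The assertions pass purely because of the homomorphism: a partner holding only the public values (n, g) could, for example, multiply together a class's encrypted test scores to obtain an encryption of their total, and return it to the key holder for decryption, without ever seeing an individual score.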