Saturday, 20 February 2021

In edtech, AI is only the answer if you know the question

There's much talk these days about how Artificial Intelligence (AI) and Machine Learning (ML) can transform education. Century Tech have just announced Century APIs, which moves them from being a learning platform to also being a form of middleware, providing partners with access to their AI technology as a service. Sparx have been busy building a personalised learning experience for schools (though they prefer to describe their approach as "Intelligence Augmentation" rather than full AI). And Quizlet (an edtech "unicorn") have a service called Quizlet Learn, which uses ML to provide an adaptive learning experience.

Now, I'm not an expert in the area, so this blog isn't going to be a rigorous analysis of what AI/ML is, or who's doing it best. But I am an edtech entrepreneur, and I hear lots of other edtech entrepreneurs considering whether to use AI/ML, so the purpose of this blog is to propose a way of answering that question.

Or, to put it another way, I'm asking: "what problems should AI be trying to address?". I've seen too many business plans which can be summarised as: "we're a bit like other things, but with AI, so better"; and that might be true... but why? 

Let's examine the question by using the following wildly simplistic anatomisation of the learning process:

  1. Content (knowledge, facts, ideas, skills etc) is created in the form of lesson plans, videos, worksheets, quizzes and such.
  2. That content is shared somehow with a student.
  3. The student attempts to absorb the content using the materials provided.
  4. A teacher (or software) marks the student's work.
  5. (Sometimes) the student revises or revisits that content.
  6. (Sometimes) there is a final test to assess whether the content was correctly learned.

So, how could AI help with each of those steps?

Starting with point 1, I don't think many people are using AI to create content, so a pretty key point is: did you create good content for the AI to have fun with in the first place? And this is my first anxiety: 

If you're an AI-powered product, and you don't have a good rationale for why your content is better than someone else's, then why should I believe your AI is going to change that? 

AI can't (yet) turn a bad module into a good one: it's not rerecording videos with better analogies, or more succinct summaries, or whatever. Some might argue that AI can help pick the preferred content type for the specific student, but that sounds a lot to me like learning styles, and I've read enough Daisy Christodoulou to be highly sceptical as to whether that's a thing.

Moving to point 2, I guess you could make a case for AI somehow informing the sharing process, but I don't see that happening anywhere in practice. Rather, people build software, and they make tonnes of choices that are then hardwired into the learning experience, which may end up mattering more than anything their AI does. Stuff like: 

  • how you log in
  • what the user interface looks like
  • how quickly pages load
  • how many clicks are needed to move between pages

Why does that matter? Well, in a recent conversation about Carousel (the quizzing platform I've co-founded), an esteemed academic made the excellent point to me that the biggest difference we can make is to get someone who previously wouldn't have attempted a task to attempt it. Until then, I hadn't spent enough time thinking about how taking a student from "didn't bother" to "learned a thing imperfectly" is perhaps a more profound change than going from "learned a thing imperfectly" to "learned a thing well". And maybe the best way to do that is to make it so easy to use that even easily distracted (or low-resilience) learners give it a crack.

So, the point is:

If you're an AI-powered product, by all means spend money on the AI, but don't forget to make the product good in all the non-AI ways that may end up impacting the learning experience too.

OK, time for points 3 and 5. The reason why I'm lumping these together is because it's actually pretty important - and not always clear - as to whether an AI-powered product is intended to be a full curriculum product, or a revision / homework / supplementary learning tool. Now, just from a sales perspective, I think it's much easier to sell a revision/homework edtech tool than a full curriculum product, at least for the time being. But also, from a design perspective, your focus here makes a big difference to what you'd ask your AI to do. A full curriculum product is likely to be significantly more complicated, with more types of content and tasks, each of which needs deep and careful thinking about how AI might help. On the other hand, if you're just a revision / homework tool, maybe your main "thing" is just questions (or videos, or pods, or whatever).

Either way, at this step the only things I can think of that the AI can do are:

  • sequence content differently
  • select different types of content (Learning styles!)

Now sure, I can see a value in AI for sequencing... but is it transformationally better than a traditional, well-designed curriculum? I'm not convinced, yet. I guess the complexity and popularity of a course are factors, too: there are some pretty great and well-thought-through primary maths curricula out there, so maybe AI can't add a tonne by suggesting a different sequencing. On the other hand, perhaps there's more fertile ground with more complex and less-universal areas such as A-Level Physics, for example. In any case, my point is:

If your AI is focused on sequencing content or selecting different types of materials, you need to be able to explain why this makes you more effective than a decent teacher.

Finally, let's think about points 4 and 6. Here's where I see the most potential for AI/ML. So much edtech revolves around multiple choice questions, because they're easy for a machine to mark, so I can see AI playing a major role in expanding the range of assessment types that computers can handle. Startups like Progressay (essay marking) and Lexplore (reading dyslexia assessments using eye tracking) interest me: the appeal is that I recognise the problem, and can understand - and therefore believe in - the role of AI. 

Another angle that AI/ML can help with is spaced repetition (i.e. how frequently, and at what interval, questions are asked and re-asked to help embed knowledge). This is something we've been thinking about a lot at Carousel, and we expect to introduce innovations in this area over the next year or so. Spacing really isn't intuitive to teachers; and even if it were, it's hard to have the discipline to remember to re-quiz students on a subject when you have so much new ground to cover. So I can see AI/ML playing a really useful role in the revision and embedding process. But, at the risk of repeating myself, this will only work if the content is well-designed, and your product is designed in a way that students actually want to interact with.
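To make the spaced repetition idea concrete, here's a minimal sketch of an expanding-interval scheduler in the spirit of well-known algorithms like SM-2. The `CardState` fields and the specific constants are illustrative assumptions on my part, not how Carousel (or anyone else) actually does it:

```python
from dataclasses import dataclass

@dataclass
class CardState:
    """Scheduling state for one quiz question (field names are illustrative)."""
    interval_days: int = 1   # days until the question is re-asked
    ease: float = 2.5        # multiplier grown/shrunk by performance

def review(state: CardState, correct: bool) -> CardState:
    """Update the schedule after a student answers.

    Correct answers stretch the gap before the next re-ask;
    incorrect ones reset it, so shaky knowledge resurfaces sooner.
    """
    if correct:
        new_ease = min(state.ease + 0.1, 3.0)
        new_interval = max(1, round(state.interval_days * state.ease))
    else:
        new_ease = max(state.ease - 0.2, 1.3)
        new_interval = 1  # re-ask tomorrow
    return CardState(interval_days=new_interval, ease=new_ease)

# A student who keeps answering correctly sees the question
# at ever-widening intervals; a wrong answer collapses the gap.
state = CardState()
for _ in range(3):
    state = review(state, correct=True)
```

The point of automating this is exactly the discipline problem above: the software remembers to re-ask on the right day so the teacher doesn't have to.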

Finally, it's worth remembering that many of the Big Technological Leaps Forward we need to make in edtech can be achieved without AI/ML. I'm also co-founder of a MAT assessment platform called Smartgrade, built with input from the legends at Evidence Based Education. Smartgrade uses a bunch of algorithms to standardise common assessments, and also to feed back on how well designed they are. That's not AI/ML; but it is clever use of technology to automate and improve assessment accuracy. AI isn't the only way.
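The post doesn't spell out Smartgrade's algorithms, but the core idea of standardising common assessments - rescaling raw marks onto a shared scale so results from different cohorts or papers can be compared like-for-like - can be sketched without any ML at all. The function name and the 100/15 target scale here are illustrative assumptions, not Smartgrade's actual method:

```python
from statistics import mean, stdev

def standardise(raw_scores: list[float], target_mean: float = 100.0,
                target_sd: float = 15.0) -> list[float]:
    """Rescale raw marks so the cohort has a fixed mean and spread,
    making scores from different assessments directly comparable."""
    mu, sigma = mean(raw_scores), stdev(raw_scores)
    return [target_mean + target_sd * (x - mu) / sigma for x in raw_scores]

# The same raw mark of 28 lands differently depending on how the
# rest of the cohort performed.
class_a = [12, 18, 22, 28, 35]
standardised_a = standardise(class_a)
```

Plain statistics, no AI/ML - which is rather the point.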

So in summary, I'm not saying AI can't help with edtech. Rather, I'm saying:

Your theory of AI will be most compelling if you can articulate which part of the learning process it tackles, and why that bit needs AI in the first place.