I was lucky enough to take part in the Innovative Tutoring Actions in Digital Neighborhoods workshop on Tuesday, October 15, 2024 at Université Paris Sciences Lettres (PSL) for a day dedicated to exploring the innovative practices shaping digital education.
Thanks to Vincent Beillevaire, Marie Zocli, Solange Faria Pereira, Amandine Rannou and Joel Oudinet for welcoming me onto their panel!
Here's a short report on what we've been up to with our AI module integrated into ChallengeMe.
Why we decided to use AI
The day was the perfect opportunity to explain why we chose to add AI to our platform.
Our decision to integrate AI into our platform is motivated by several factors:
- Personalized learning: AI enables us to offer a personalized learning experience, tailored to the needs and pace of each student.
- Improved feedback: Thanks to AI, we can provide more detailed, relevant and constructive feedback, promoting faster and more effective student progress.
- Optimizing assessment: AI helps us to automate certain aspects of assessment, enabling teachers to concentrate on higher value-added tasks, such as individual student support.
- Data analysis: AI enables us to analyze large data sets to identify trends, challenges and opportunities for improving our platform and the learning experience.
Finally, our decision to integrate AI is part of our commitment to preparing students for a future where AI will be ubiquitous. By exposing students to AI tools in an educational context, we help them develop the skills they need to interact with these technologies critically and productively. When we spoke at PSL, we emphasized that our approach to AI remains human-centric. AI is a powerful tool, but it does not replace human judgment. It is there to augment the capabilities of teachers and students, not to supplant them.
How we use AI
Our philosophy is clear: we use AI in a targeted and encapsulated way, integrating it into specific functionalities of our platform with clearly defined objectives. This "encapsulated" approach enables us to get the most out of AI while retaining control over its use. We do not seek to create an omniscient AI that takes charge of all aspects of learning. Instead, we develop precise AI tools designed to meet specific needs identified by our users and academic partners.
I stressed that in all these use cases, AI supports the users (students, teachers, tutors) rather than replacing them. It is there to facilitate, optimize and enrich human interaction, which remains at the heart of the ChallengeMe learning experience.
The different types of tutor
The flexibility of our platform in terms of tutor types is a feature particularly appreciated by the schools using ChallengeMe. We distinguish between several profiles:
- Peer-to-peer: Students evaluate each other's work, encouraging commitment and critical thinking.
- Teachers: Teachers contribute their disciplinary and pedagogical expertise.
- External third party: Professionals in the field offer a real-world perspective.
- More advanced students: Vertical tutoring creates an intergenerational learning dynamic.
This diversity of profiles makes for a rich, multi-dimensional assessment, combining the benefits of peer review with those of expert assessment.
I stressed that the AI supports tutors in their various tasks, providing them with tools to optimize their work, but never taking their place.
Peer-to-peer issues: giving and receiving feedback
I highlighted the two major challenges of peer review: giving relevant feedback, and receiving and applying that feedback.
Providing constructive and useful feedback is a complex skill for many students. The main pitfalls are:
- The difficulty of structuring comments in a clear and balanced way
- The tendency to be too vague or, conversely, too critical
- Lack of concrete suggestions to help the other person progress
Yet this skill is essential, not only in academic careers, but also in future professional life.
Just as crucial and delicate is the art of receiving and making effective use of feedback. Students may encounter several obstacles:
- Difficulty accepting criticism constructively
- The feeling of being overwhelmed by a large volume of comments
- Difficulty in prioritizing areas for improvement and translating them into action
These two challenges lie at the heart of peer assessment, and meeting them requires not only appropriate pedagogical support, but also well-designed technological tools that facilitate the process without distorting human interaction.
AI as a feedback assistant
At OMNES Education, we have deployed an AI feature that analyzes and synthesizes the feedback received by a student. This innovation aims to optimize the peer review process by making feedback more accessible and actionable.
How the AI assistant works
After a student has handed in an assignment and received peer reviews, our AI springs into action:
- It analyzes all the feedback received
- It identifies recurring themes and key points
- It generates a clear, structured summary
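To make the three steps concrete, here is a minimal, self-contained sketch of the synthesis logic. It is purely illustrative and not ChallengeMe's actual implementation: the production feature is AI-driven, while this toy version matches a small set of invented theme keywords.

```python
from collections import Counter

# Illustrative sketch only. The real feature uses AI to find themes; here we
# mimic the three steps (analyze, identify recurring themes, summarize) with
# simple keyword matching. The themes and keywords below are invented.
THEMES = {
    "structure": ["structure", "organization", "outline"],
    "clarity": ["clear", "confusing", "vague"],
    "evidence": ["source", "reference", "example", "data"],
}

def synthesize_feedback(comments):
    """Group peer comments by recurring theme and rank themes by frequency."""
    theme_hits = Counter()
    by_theme = {theme: [] for theme in THEMES}
    for comment in comments:
        lowered = comment.lower()
        for theme, keywords in THEMES.items():
            if any(kw in lowered for kw in keywords):
                theme_hits[theme] += 1
                by_theme[theme].append(comment)
    # Most frequently mentioned themes first, so the student sees priorities.
    return [
        {"theme": theme, "mentions": count, "comments": by_theme[theme]}
        for theme, count in theme_hits.most_common()
    ]

comments = [
    "The structure of your argument is solid.",
    "Some paragraphs are confusing and could be clearer.",
    "Add a reference or an example to back up your second claim.",
    "The outline helps, but section 3 is unclear.",
]
for item in synthesize_feedback(comments):
    print(f"{item['theme']}: mentioned {item['mentions']} time(s)")
```

Ranking themes by how often they recur is what turns a pile of comments into a prioritized summary, which is the core idea of the synthesis step.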
Benefits for students
This AI synthesis offers several advantages:
- Clarity: Students get a quick, easy-to-understand overview of the feedback they receive.
- Prioritization: Areas for improvement are ranked in order of importance.
- Actionability: Concrete recommendations for progress are provided.
- Motivation: Strengths are emphasized, encouraging students in their learning process.
This AI synthesis enables the student to quickly and clearly understand the key messages of the feedback received, without being overwhelmed by a large volume of information. It also provides concrete avenues for progress, encouraging a continuous improvement approach.
AI as a feedback-writing assistant
We have deployed an innovative AI assistant that guides students through the feedback-writing process. This tool is based on evaluation criteria established by UC Leuven and on extensive research in the field of peer assessment (M. Gielen & B. De Wever (2015), "Structuring the peer assessment process: a multilevel approach for the impact of product improvement and peer feedback quality", Journal of Computer Assisted Learning, 31(5), 435–449).
The AI assistant analyzes the feedback provided by the student, as well as the marks awarded to their peer. It then compares these elements with predefined quality criteria and, on this basis, generates personalized suggestions for improving the feedback. For example, if the student has given a low mark without providing a detailed justification, the assistant might suggest: "Your assessment seems harsh. Could you elaborate further on your observations to help your peer understand the reasons for this mark?" Similarly, if the feedback lacks concrete suggestions for improvement, the AI might suggest: "Try to include specific recommendations for each weak point identified."

This approach not only improves the quality of the feedback provided, but also develops students' critical and communicative skills. By guiding them towards more constructive and thoughtful assessment practices, we prepare students to become professionals capable of giving and receiving feedback effectively in their future careers.
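The kind of checks just described can be sketched with two simple rules. This is not the deployed assistant: the real tool is AI-driven and applies the UC Leuven quality criteria, whereas the thresholds and cue words below are invented for illustration.

```python
# Illustrative rule-based sketch of a feedback-quality check. All constants
# are hypothetical; the production assistant uses AI, not fixed thresholds.
LOW_MARK_THRESHOLD = 10       # marks below this (out of 20) count as "low"
MIN_JUSTIFICATION_WORDS = 25  # shorter feedback counts as "not detailed"
SUGGESTION_CUES = ("try", "consider", "you could", "recommend", "suggest")

def review_feedback(mark, feedback_text):
    """Return improvement suggestions for a piece of peer feedback."""
    suggestions = []
    # Rule 1: a low mark should come with a detailed justification.
    if mark < LOW_MARK_THRESHOLD and len(feedback_text.split()) < MIN_JUSTIFICATION_WORDS:
        suggestions.append(
            "Your assessment seems harsh. Could you elaborate further on your "
            "observations to help your peer understand the reasons for this mark?"
        )
    # Rule 2: feedback should contain at least one concrete suggestion.
    lowered = feedback_text.lower()
    if not any(cue in lowered for cue in SUGGESTION_CUES):
        suggestions.append(
            "Try to include specific recommendations for each weak point identified."
        )
    return suggestions

for tip in review_feedback(6, "Weak work overall."):
    print(tip)
```

In practice an AI model can apply far subtler criteria than these two rules, but the shape of the interaction is the same: the assistant inspects the mark and the text together, then nudges the student with a concrete prompt.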
AI to help students improve their work
At the University of Montpellier, we have deployed an innovative AI assistant that analyzes the work produced by the student and suggests areas for improvement based on the grading grid provided by the teacher. The aim is to enable students to understand precisely what is expected of them and to improve autonomously.

Our AI assistant intervenes after the student has submitted their work. It compares the submission with the evaluation criteria defined by the teacher. For each criterion not fully satisfied, the AI generates concrete suggestions for improvement, based on best practices in the field. It can also recommend relevant teaching resources (articles, videos, exercises) to help the student make progress on these specific points.

Initial feedback from the University of Montpellier has been very encouraging. Students appreciate this personalized help, which enables them to better understand teachers' expectations and how to improve their work in a targeted way. Teachers have noted a better match between the work submitted and the assessment criteria, and faster progress in students' skills.
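The rubric-comparison step can be sketched as follows. The rubric entries, keywords and resources are invented for the example; the deployed assistant uses AI to judge each criterion rather than keyword matching.

```python
# Illustrative sketch, not the University of Montpellier deployment.
# A rubric pairs each criterion with evidence to look for and a resource
# to recommend when the criterion is not met. All entries are invented.
RUBRIC = [
    {"criterion": "Cites at least one source",
     "keywords": ["source", "reference"],
     "resource": "Library guide on citation"},
    {"criterion": "States a clear conclusion",
     "keywords": ["conclusion", "in summary"],
     "resource": "Video: structuring a conclusion"},
]

def suggest_improvements(submission):
    """For each unmet rubric criterion, suggest an improvement and a resource."""
    lowered = submission.lower()
    advice = []
    for item in RUBRIC:
        if not any(kw in lowered for kw in item["keywords"]):
            advice.append({
                "criterion": item["criterion"],
                "suggestion": f"Not yet satisfied: {item['criterion']}.",
                "resource": item["resource"],
            })
    return advice

essay = "In summary, our survey of three references shows the market is growing."
print(suggest_improvements(essay))  # both criteria met -> []
```

Because each suggestion is tied to one rubric criterion, the student sees exactly which expectation is unmet and where to look to fix it, which is the behavior the teachers at Montpellier reported appreciating.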
Next steps in AI
Building on these initial successes, we are delighted to announce our strengthened collaboration with the University of Montpellier and their research laboratory to develop new AI functionalities. This collaboration aims to explore new ways of using AI to improve learning and assessment, always putting the student at the heart of the process. Together, we will be working on different pedagogical scenarios where AI can bring added value, both for students and teachers.
For example, we'll be exploring how AI can help teachers to analyze student activities, assess their work more effectively and objectively, or quickly identify students in difficulty. One of the most interesting aspects of this collaboration is the involvement of a research laboratory that will monitor the various experiments. Researchers will analyze the impact and effectiveness of these AI assistants in a real teaching context. Their feedback will enable us to continually refine our tools to maximize their relevance and usefulness.

This collaboration is in line with our long-term vision of AI at the service of pedagogy, used ethically and responsibly to enrich the learning experience of every student. We are convinced that AI, when developed and deployed in close collaboration with educational stakeholders, has the potential to revolutionize the way we learn and teach.
Ludovic Charbonnel
Co-founder, ChallengeMe