I had planned to release the next episode of Education Research Rounds this week, featuring a review of the research paper titled, “Beware of Metacognitive Laziness: Effects of Generative Artificial Intelligence on Learning Motivation, Processes, and Performance.” However, I’ve decided to delay it for a couple of weeks.
I have been using Generative AI to develop the podcasts, and I didn’t want to fall prey to metacognitive laziness myself while reviewing a paper on metacognitive laziness. I also want to take the time to deeply understand what I consider one of the best education research papers I’ve read recently.
Additionally, I thought I would lay bare how I use Generative AI and other technology tools in creating the podcast. My process continues to evolve with each episode, and I’ve recently begun working with multiple AI models—not just ChatGPT. A key part of this approach is to explore and evaluate the unique strengths and weaknesses of each model, as well as to gain insight into how each model “thinks.”
Some of the models I will be covering are:
OpenAI’s ChatGPT (GPT-4o)
Anthropic’s Claude AI (Claude 3.5 Sonnet)
Google’s NotebookLM (Gemini)
Open-source models run locally on my computer
I’m grading these models as research assistants, and the results are surprising—they challenge some assumptions behind AI performance metrics, which are developed almost entirely by technologists. Stay tuned.