
Ethan Mollick | The Wharton School - Transcript

[00:00:00] 

Jeremy Singer: I'm Jeremy Singer, president of the College Board, and this is the Education Equation. I've spent my career grappling with what truly drives student success. On this podcast, I'll talk with people who are researching, building and scaling solutions that matter. Every episode will go beyond the hype and focus on data and evidence to see what's actually working.

Let's stop guessing and let's figure out what works.

My guest today is Ethan Mollick, one of the leading voices helping us understand what AI actually means for how we learn and how we work. He's a professor at the University of Pennsylvania's Wharton School, where he studies how AI is reshaping everything from classrooms to careers. His book Co-Intelligence has quickly become a defining guide to [00:01:00] engaging with AI, and his work has earned him recognition as one of Time's most influential people in artificial intelligence.

Ethan is not just observing this shift. He's actually testing it through his work with leading generative AI labs at Wharton. He's exploring how these tools actually perform in real classrooms and real work settings. He also writes One Useful Thing, one of the most widely read newsletters on AI today.

And if you don't subscribe, you should. Ethan Mollick, welcome to the Education Equation. 

Ethan Mollick: Thanks for having me. 

Jeremy Singer: So first things first, how do I confirm you are actually Ethan Mollick and not just an AI bot that he has created? 

Ethan Mollick: So the good thing is, I'm answering quickly enough and speaking quickly enough that I think we're okay, but it's getting harder to tell.

I don't think that's something you can promise, that you can easily detect it in the long term.

Jeremy Singer: Okay, good, because I planned a whole biometric screening, but I will save our listeners time. It doesn't make great podcast content. So I want us to warm up talking broadly about [00:02:00] AI, not specific to education, but then we'll spend the majority of our conversation focused on education.

You've been studying AI, as I noted, very closely for a number of years. I've heard a lot of people try to analogize AI to past innovations, whether it's the microchip or the internet, but it feels, at least to me, that the pace of AI development, at least in the past couple years, is just so much faster.

So how has your thinking evolved as the technology has improved so rapidly? 

Ethan Mollick: So I think that a lot of us who are following the technology closely have had some sense of where this is going, or maybe not where it's going, but where it's gonna go next. I don't think anyone knows where all of this is heading.

I warned about the homework apocalypse right after ChatGPT came out, and it has unfortunately come to pass. What I actually find really interesting is how other people's thinking has finally evolved. I think this isn't a one-person problem, it's a many-person problem. But as the capabilities of these systems keep growing, and we don't see signs of the slowdown that was always possible, we have to start preparing for a world where if it's good at math, it's gonna be very good at math.

If it's good at coding, [00:03:00] it's gonna be very good at coding in a pretty rapid order. And I think that the capacity to change quickly, to change perspectives, to react quickly is becoming more important than ever. 

Jeremy Singer: And I even saw, I think yesterday, I read your latest One Useful Thing newsletter. You said, hey, with the newest release, the pace keeps moving very quickly, which is in some ways a relief.

But is there any belief that you had a couple years ago that has dramatically changed? 'Cause you deserve some credit; I think you were onto this very early, which is a big reason we have you here today. But is there anything that has shifted that you've been surprised by?

Ethan Mollick: This is gonna sound very Pollyannaish, but I think a lot of the sort of big negative effects people expected haven't hit. I thought deepfakes would have a bigger impact than they have had so far, and I don't know if that's just because our information environment is so bad already. Also, I think I had a little bit too positive a feeling that education would adjust more quickly, not systemically, but at the individual level, around the idea that, okay, we've lost a category of assignments that were useful.

There's a new form of work that [00:04:00] is useful now that wasn't useful before. I think the level of delay, and the level of denial that the tools are as good as they are, has also been surprising to me. So on the positive side, I think the negative effects in some areas have been less than I thought.

And on the other side, I thought some people would be reacting faster. I'm surprised; in the education sector there are very diverse responses, but there's still a debate about whether AI is real or not, which is a frustrating one, I think.

Jeremy Singer: Yeah, no, and I've seen, and we'll talk more about it, people hoping they can just bottle it up or exclude it and everything can be as it was pre-AI, which I think we probably agree is faulty.

I've heard you say many times, and I love it and have now incorporated it, that today's AI is the worst AI we'll ever use. So if someone hears that, what does that imply for how people should approach engaging with AI?

Ethan Mollick: I think it's easy to fixate on where things are today, and that's what we do.

I don't like the COVID analogies, 'cause it was a very different experience, but in the same way, you can remember when [00:05:00] something was coming and it was still hard to extrapolate where things might go. And I think in the same kind of way, it's hard for us to say.

What happens if this keeps getting better? 'Cause we don't really have a concept of that. So I see a lot of solutions planned for today's systems. It's interesting: the very first thing I went viral for in education was, a week after ChatGPT came out, I put up my new syllabus for my class, which said, you can use AI in class, right?

But you're responsible for the output. And that was great with the original ChatGPT, GPT-3.5, 'cause it made lots of mistakes. It was not very smart, but my students were better. It was obviously bad: it made stuff up, the writing was weird. And that was great for four months. Then GPT-4 came out, and now it is better than a lot of my students at writing, and even that is now an obsolete system. So there was this kind of trailing; staying ahead of this is hard that way.

Jeremy Singer: Yeah, no, a hundred percent. And humans aren't very good at exponential forecasting. At College Board, we have an academic assembly with a lot of [00:06:00] different instructors from colleges and high schools, and they were meeting in our office yesterday.

So I spoke to them and talked a little bit about what we're doing with AI. You know, what I see a lot when I talk to people in education is a fear: we haven't figured it out, we don't use it. One person, at a university I won't name, said they got everybody licenses, but only 5% of people were actually engaging with AI.

My take is there's a missed opportunity, but what I tried to say is it's just gonna get more accessible every day. It's gonna get easier to use. I built this very simple app a year ago and it took a lot of work; today I can just describe it to Claude and it builds it. If someone feels late to the game, is it okay, or what?

What would you tell them, or what would you have told this person?

Ethan Mollick: Yeah, I mean, there's a few things. As time goes on, not every piece of advice in the book will be as strong as it was, but one of the pieces of advice I give is just try it for everything. Invite it to the table for everything you legally and ethically can. I think people are putting off using AI [00:07:00] until some future point where they have a weekend or a week free to do training.

It really is a hands-on thing. If you're good at working with people, which instructors are, you will be good at working with AI. And I think getting started is a big deal. I do worry about the sort of social desirability bias issue, the "AI is evil" framing. There are a lot of bad things about AI.

There are a lot of good things in education. It will have strong negative effects and strong positive effects, but it's not going anywhere. We can wish that it goes away, but there is this worry I have about "I haven't used it." It's not that hard to use. All of the tips and tricks, as you were alluding to, don't matter anymore.

You don't have to do anything fancy. If you're good at describing what you want, which is what we do as instructors, and if you're good at correcting errors and getting an intuitive sense of the errors you're facing, which we're good at in education, then you're probably gonna be pretty good at using these systems.

Jeremy Singer: I agree. Once you get into it, it's pretty intuitive and pretty easy, and it's just getting easier and easier. I wanna turn the lens to [00:08:00] education. The stakes there are obviously different, and I'm gonna break our discussion into three big parts. The first is around how AI is changing what students need to learn and what we should be teaching.

The second's gonna be: okay, now that we've established that teaching should evolve and change, teachers and students have access to AI. And then finally, the third is how we should rethink how we assess, knowing the ubiquitous access to AI. So I'm gonna start with what we teach, what students should learn.

And here, I would argue strongly that even prior to AI, in the system, K-12 and higher ed, there were things being taught that probably were not super useful for students to learn. And there are some critical topics that we're missing, things like statistics, personal finance, many others.

But regardless, with AI here, it's gonna be impossible not to reconsider what is in the corpus [00:09:00] of K-12 and higher ed. And so in an AI world, what are things, in your view, that used to be covered and taught that have either become obsolete or just less important for students to learn?

Ethan Mollick: So my controversial opinion on this is there are actually very few things that have become less important, right?

I feel like part of how you keep up with AI is you need to have a basic understanding and mastery on your own. Schooling's gonna become even more important in a lot of ways, because we need people to have the basics. And the only way we can enforce, monitor, assess, and help people is in a controlled classroom environment. It's gonna be harder to do in the real world than it was before, 'cause people use AI and you won't know when they're using it or not. I think it shifts what we do. We have to decide the lines we wanna draw, right?

And math education in the seventies, with calculators, is the canonical example. There was chaos for a little while, and then some decisions were made that made math education much stronger: here's what you need to do by hand, and then once you have those basics, we can now go [00:10:00] far beyond that, because we have better tools and you can learn more advanced concepts.

We're gonna have to do the same sort of stuff in composition. We want people to learn to write and write well, and we're gonna have to enforce that the kind of way we did in the calculator world. People will be doing a lot more writing in class. There'll be more monitoring.

Nobody wants it, but that's what's gonna end up happening. But then those writing skills hopefully will go much further: developing a sense of taste, reading more widely, learning different writing styles, because you now have tools that can help you do it. So I don't think there's a basic shift in the things we want a well-rounded human being to know when they enter society. I think it gives us a chance to consciously pick what we wanna do and what we don't do. We dropped cursive writing; we're gonna drop other pieces and evolve them. But I don't think this changes things fundamentally. Just 'cause Wikipedia knows history doesn't mean we don't wanna teach people history.

Yeah. There's the usual fight for space in the curriculum: how much computer science gets taught, how much financial literacy gets taught. Those fights are still real. And in a coherent response, which we won't have, but if we had a coherent [00:11:00] response to this, part of it would be rebalancing some of that education portfolio, and in some ways, the things that we teach less of.

Appreciating poetry might end up being more important in a world where AI writing is ubiquitous but a sense of taste is not.

Jeremy Singer: You used the calculator example, and there are a lot of lessons to be learned there in miniature. One is, there were things, like long division, that aren't taught the same way they were in the seventies. So there are some things where they said, hey, do you really need to know this? But there are some fundamental pieces, as you argue, like the multiplication table. You better know that rote. You can't be using working memory to figure out what six times five is; you need to know that because it enables all these other things.

I still will push. I guess I agree with you that maybe pre-K through six, or whatever, that stuff is all fundamental. But I do wonder whether there are things in, say, high school chemistry. I mean, we are looking at it with our AP classes; we haven't made changes, but it's a question of, hey, all [00:12:00] this content, and it's more advanced content:

How important is it for students to learn? 

Ethan Mollick: So it's a really complicated question, and I think there's a real opportunity in AI to make changes, right? We've been teaching the same thing the same way for a long time, and it lets us reassess, including changes we might have made because the internet made things easy, right?

Yeah. One way or another. Yeah. We give up these skills all the time. My grandfather worked on some of the launch pads at Cape Canaveral using a slide rule, and his master's thesis was a single matrix multiplication effort. We don't do that anymore. We don't need to. I don't know how to use a slide rule.

We gave that up. That's okay. So part of this is crisis as opportunity, right? It gives us a chance to reassess curricula, I agree with that, and make decisions. That's what I like about the math example: there was conscious choice about what should be rote, what you need to be familiar with, what you need to be expert at.

And I think that is very useful, as crises go. And then the danger, I think, is a little bit chasing AI. Because when I talk to people about AI education, they say, oh, we need to teach [00:13:00] people things AI can't do, like judgment. AI can do judgment; it's a pretty good judge, at least for now.

Is it the same kind of judgment as humans? Should we be using the same term? Do we actually mean we want humans in control? Those are valid questions, but I worry that AI literacy becomes the target people aim for. Like, we'll add a class in AI literacy the way we added internet literacy. This is very different.

There's a fundamental reshaping of how people interact with the world around them. This is a general-purpose tool, in a way that calculators were general purpose only for a particular kind of math problem; now it's for everything. So to go back to your main question: yes. Part of the problem, though, is I can't anticipate for you what that world looks like.

So I can tell you that if you understand basic chemistry, you'll have some sense of checking the AI, but the AI is gonna be better at chemistry than you. It's better at math than all of us right now. Statistically, maybe 0.001% of this audience is better at math than the best AI systems right now.

So I agree that we have to rethink these things. I think the question is more, how do we use it to bootstrap ourselves to more advanced topics where you [00:14:00] can thrive or learn or understand more? But it's always been the hard problem in education, right? You need the basics, which are rote and annoying to learn, to reach the next level, to scaffold that next moment.

And I don't know how you change that fundamentally; that's a real challenge.

Jeremy Singer: And I think about, even with GPS, how that's changed people learning some fundamentals. There are still fundamentals you need to understand about directions and so forth, but there's a lot where you now turn it over to Waze or what have you.

But, so let me shift and 

Ethan Mollick: for better or for worse, right? Yeah. 

Jeremy Singer: Exactly, for better or for worse. So you touched on a couple things, but are there new skills, or existing skills, that become more important? I'll raise the durable skills piece, and I'd love your view here. I think most of our listeners will have heard of these; they're things like critical thinking, communication, problem solving, ethical decision making. And 30-plus US states, even before AI, had created these portraits of a graduate, saying, hey, these things are important. So these are things that have been threaded through education, whether explicitly or [00:15:00] implicitly, forever. But then the question is, in the weighting of what students need to learn to succeed, do you see these as becoming of greater import?

Ethan Mollick: So again, it's hard, right? Because education serves many purposes, right? Is it preparing people for a career of work? Is it preparing people to be independent thinkers and citizens in society? There's a lot of open questions. There is a really fundamental existential challenge coming from AI, which is that it does change how we place ourselves and what value we place on different kinds of intelligences and different kinds of approaches.

Ethan Mollick: And that's a bigger issue to grapple with, and it's hard for me right now to say to you, this is what it'll look like. Those durable skills still seem durable, but they're also outsourceable to a degree they never were before. Judgment and critical thinking were never outsourceable. But now there are enough studies that say the AI gives pretty good critical thinking.

Like, it gives you pretty good answers on ethical situations that ethicists would agree with. It gives you pretty good judgment calls, [00:16:00] and that will get better as it gets better over time. Do we wanna outsource that? Probably not, but yeah,

we never had to ask the question before. I would add to those durable skills, though:

I think taste is a thing that we actually want people to develop more of, and I've talked to some writing teachers about this. I think it's interesting that having your unique style ends up being more interesting now, in a world where if you go online, everything reads like Claude, right?

Everything is "let me sit with this for a little bit," or "it's not just this, it's that." Having a sense of style actually matters; when you're curating, it matters in these cases. I'm a professor of entrepreneurship, right? I've been building entrepreneurial teaching tools for a very long time.

Even pre-AI, this has been a topic I care about, and something we've always struggled with teaching is that sort of entrepreneurial agency. How do you take action, experiment, and learn while you do things? That's another hard problem that I think will become more important. So agency and taste become important.

So I do agree that these critical life skills [00:17:00] become ever more important, but now we have to make choices about them in ways we didn't before. Like, if you're bad at judgment and you make bad decisions regularly, would it be better to have an AI help you make those decisions?

How do you weigh an AI's decisions? If you can now get decisions from four or five different organizations or people, how do you integrate that into your life, or not? These are questions we haven't had to ask before.

Jeremy Singer: Yeah, yeah. It's funny, this sort of idea that taste, or personality, becomes more important, which is not just important at dinner parties, but potentially for a lot of things moving forward.

So within this, you've written about how AI, whatever the task is, raises the floor. So it's a good thing for people performing below that level, though at least in the past it has struggled to outperform the experts. Can you share some of that, and is that changing? You just talked about the math example.

Ethan Mollick: So there's basically four ways that AI can affect skills. It can act as a leveler. And [00:18:00] that was the first result we found in our experiments: you're a bad performer, AI makes you a good performer. The reason it does that is it just does the work for you.

So from an education perspective, a leveler is an important thing to think about in the outside world, right? If somebody has a skill gap in one area or multiple areas but is brilliant in some area, that was not a good educational outcome. But maybe with a leveler, it's okay: you can't do math, but you're really good at putting together an incredible argument, better than any AI.

Okay, the AI does the math for you. Suddenly it's a leveler. What do we do with that information is a question. The second option is that it's an elevator: it raises everybody up, right? Everybody gets better. There's some evidence for that as well. That doesn't change much about how we view the world; everybody just performs better than they did before.

Then there's some evidence emerging that this is kind of a kingmaker: the people who are very good get better, right? If you already have good judgment and you're already good at using these systems, you're now a hundred times more productive, not 10 times.

We haven't been able to measure that really well, because [00:19:00] it's a small, small percentage of people and we're only in the early days of doing that. But that implies a new form of inequality that we should start to worry about a lot, which is that a few people get all the benefits of AI use.

And then there's a possibility that's even weirder, which is a really interesting one for education: some people are just good at AI. Just like being good at coding, where there were huge returns in the 2010s, being good at AI, whatever that means,

Jeremy Singer: Right?

Ethan Mollick: Gives you huge returns. Now the problem is, "good at AI" doesn't seem to be a skill set, right? It's not that I could teach you 17 prompting skills and then you're good at AI. There's some other aspect we don't fully get yet from early studies, what some people call theory of mind of the AI: that you understand how to work with these systems. But we don't really know.

So those skill effects are gonna be fairly profound. We already know the leveling effect is real, right? And you know this 'cause there are no longer bad essays when I grade students, the way there were before.

Jeremy Singer: Yeah. 

Ethan Mollick: There's a lot of essays that feel similar to each other, to varying degrees, but there are no longer bad essays.

That leveling has [00:20:00] happened. How we feel about it, that's a bigger question.

Jeremy Singer: Yeah, yeah. There's so much here. I could go so many directions. It is interesting, and I'd love to talk more about it: we're all trying to discover whether there are specific skills or approaches or natures that lead someone to be better with AI.

But it's interesting 'cause I've seen people who are very detailed, controlled people, and they approach AI very differently than people who are willing to just iterate and make mistakes, and both can succeed. So at least it's not one persona so far. But let me go back to writing. You mentioned it earlier, with the homework apocalypse years ago. What we're seeing in K-12 and higher ed is that AI does a good enough job, and is getting better, that a lot of students are outsourcing their writing assignments

to AI. And there's a big problem with that, which is the way you build writing skills, and not just writing skills; if you think about critical thinking or communication, there's so much else that rides [00:21:00] on writing, as you know. You need to write to build these other sets of skills, not just writing. And if you outsource it, you never do it.

We've already seen this decline in many young people. What's your thought on how we ensure students engage in actual writing and don't outsource it, for the good of them and, frankly, the good of society?

Ethan Mollick: So that's a change, right? In some ways we have to face this like math teachers did, right?

Jeremy Singer: But math was easier. You said, hey, you can't bring a calculator in, just

Ethan Mollick: To be better. And there's some tech; maybe there's some degree of, okay, we're going to need disconnected, dedicated devices. I'm hearing some pretty crazy stories, which I'm sure you're hearing too, about trying to proctor things in a world where people's phones, which are ubiquitous, are good enough to answer all the questions for them in the bathroom, in a

six-second break, right? Let alone all the elaborate cheating mechanisms, let alone the glasses with things built into them.

Jeremy Singer: We administered 12 million high-stakes exams last year, and we see shit that scares the [00:22:00] hell out of me. It would scare everybody if they saw the wave of it.

Ethan Mollick: Faraday cage testing, maybe? Things with scanners?

I have no idea what the future holds, but we'll get there when we get to the assessment side. But go ahead.

In classrooms, we have some sense; as a teacher, you have a sense of who knows stuff and who doesn't know stuff, beyond the testing. It's not a surprise to you.

It shouldn't be a surprise to you most of the time how things are going. But I would say that it requires changes, and some of them are uncomfortable. A lot of things we do in education worked well enough and are unexamined. When you look at the empirical basis for how we teach essay writing, there isn't a lot there, right?

In an undergraduate class, we might assign people a hundred-page essay and hope something magical happens while they're writing the essay, right? And every professor who teaches has had the magical thing happen, which is, oh yeah,

I suddenly understood, I synthesized it, right there in the writing. So we keep hoping that magic happens. But we haven't really thought in a [00:23:00] deep way about some of these aspects of what we want writing assignments to do. And I think we're gonna have to start thinking more about how to break those into pieces: what we wanna do in a controlled classroom setting, and what we wanna do outside of class, where people might use AI, but in ways that make the AI use uncomfortable or hard.

I use AI in all my class assignments. There are cases where the AI interviews them with questions, or they have to correct the AI's writing about a topic. So there are ways of being clever, but there are gains and losses. I don't think anyone asked for our writing education system to be thrown into the air and rebuilt from scratch, but that's where we are.

Jeremy Singer: I agree. I think the science of reading has finally become more settled on how you teach reading, thankfully, but the science of writing is less settled, or at least less shared. If there's a student listening, make the case to them. Say, hey, you could use AI; it's gonna give you more time to hang out with friends or do sports or whatever.

But you shouldn't, when you're doing a writing [00:24:00] assignment, for your own good. How are you gonna convince them?

Ethan Mollick: I will say I have never been perfect at convincing people to do this, even in a class where I'm the AI person, who everyone knows will use AI and can read AI writing. And I think we've all been in that kind of environment, right?

It ends up sounding like the worst adult conversation in the world, which is "you're only cheating yourself." And it's true, and that's the worst part. The real problem is, I spent 10 years building games for teaching, right? And the end result of all of that is, we can make things 80% fun.

Like, I can make learning 80% as fun as doing something that you actually enjoy, right? For the topics you love, great, you'll read the books. But for a topic you don't enjoy, there is not a way to get around the idea that education is effortful. And difficulties might be desirable for us as educators.

They're not desirable for the people who are living through them. 

Jeremy Singer: Yeah. 

Ethan Mollick: And, it's hard because then you make a case of if you don't learn to write, you'll never get a job. But oh no, the ar write for me. In a world without intrinsic motivation to do [00:25:00] this. Some people have intrinsic motivation, and I can appeal to their intrinsic motivation.

To you, as a student listening: you should learn to write. It's a differentiator; it has made my life amazing, right? It's a differentiator for me. My students who can write well get better jobs, even in a world of AI. I can make that case to you. But the other case is, hey, we're gonna grade you, and if you don't learn how to write, you're not gonna do well on assignments.

That is the other way to handle this.

Jeremy Singer: Yeah, yeah. I interviewed Daniel Willingham last podcast, and it was interesting 'cause we talked about productive struggle, and how you need friction to really learn. And yet AI has given a shortcut.

A shortcut that has never been there, so simple, so easy, across all these things: learning, practicing, assessing. And so I think it's hard to tell students just don't use it. But

Ethan Mollick: So, like, we were deluding ourselves a little bit about how much students were engaged in productive struggle. Fair enough?

I think the best stat I saw is that 40,000 people in Kenya were employed full-time writing essays for people [00:26:00] in the UK alone, according to the latest numbers I saw from before AI. People were shortcutting work before. This is not new.

Jeremy Singer: Fair enough. And I think people were cheating before, on tests; it's not new. But it has made it so you don't have to pay someone in Kenya.

Now I wanna move on to the second topic, but I want one more thing here. I'm just trying to put myself in the shoes of an instructional leader at a school district listening to our conversation. And we haven't even talked about your jagged frontier, which is that it's odd that AI's super good at this but not so good at that.

But that isn't static; that stuff keeps changing. If the jagged frontier were static, you could plan around it, but it's not. So what are you gonna tell me? I'm running curriculum for a decent-sized district. What should I be doing? Should I be changing anything?

Should I be waiting? Help me out.

Ethan Mollick: It's a hard question, right? [00:27:00] There's the magic wand kind of thing. There is not a single person who cares about education policy who doesn't have a magic wand list of things they'd wave it for: we should rebalance the classroom this way.

You're saying chemistry and biology should be combined into a new biochem class, or we should take 80% of this class and turn it into financial literacy. We all have these things, anyone who's been in education for a long time. And I can't give you a magic wand, because you're running a curriculum. The tests your organization offers, people are gonna take those tests.

They have to know what they're doing. There are requirements. I do think the issue is that we've spent all this time talking about the negatives, and it's weird to have spent so much time on the negatives, because the positives are also quite large. This is as close to the universal tutor as we could imagine.

There are issues with AI as a universal tutor, but they are vanishingly small compared to every other attempt we've made at electronic or remote tutoring. There are always access issues, there are always equity issues, but the gaps are smaller than we've ever seen in any of these kinds of [00:28:00] tools before.

And the early evidence bears that out, right? And everyone's already adopting this. We're pretending this is a choice. It's not a choice. Your teachers are using this, your students are using this. So part of this is: let's give people models of ways to use this positively, right? And that doesn't have to be an EdTech solution.

That could be: look, let's share some of the cases that are working really well. Let's build out a few AI educational efforts. Let's communicate to parents about how to use AI for tutoring by using learn modes. I feel like it's been a very negative podcast on this so far.

It's worth saying that.

Jeremy Singer: Yeah, sorry. I don't mean to be negative. I was just trying to get at—

Ethan Mollick: No, you're not. I'm worried about the same things you're worried about. It's just there's a gains and losses problem happening here, right? And we've been talking about the losses, which are very real losses. But let me turn, 'cause I wanna shift. We've talked about what needs to be taught and whether it changes or not.

Jeremy Singer: And you got into the tutoring, so now, how we teach. And I'll say there's a million things. I talked to Jenny [00:29:00] Mara from Google, and they have all this great research that teachers are already finding a lot of new time for instruction because of lesson planning and all these things they've used AI for, which is not only more effective but frees up all this time from tasks they didn't like.

So there's a lot of examples of goodness here. Let's shift, and I'll give you a softball. For decades in EdTech, we've all been chasing this idea of personalization, and I would argue, and I've heard you talk about this, largely failing at it. The idea is simple: students are always gonna be at different points in what they know and what they're ready to learn.

And so if you could deliver instruction specific to them, in the moment they're at, that is no question better. But very few things have worked. So talk about how you see AI playing a positive role here.

Ethan Mollick: Yeah, I'll start with the history. I was at MIT during the trailing days of One Laptop per Child, where we thought the internet would change everything.

And we learned a lesson that way, just like Wikipedia didn't change everything. Actually, it turns out education's complicated, and the [00:30:00] constructivist view, where we airdrop laptops into villages and everyone learns to use them and becomes amazing coders, it's complex. There's societal issues, there's teachers, there's parents, and not every learner's the same.

And EdTech has been this series of disappointments in a lot of ways; it doesn't deliver massive changes. It turns out there isn't really a two sigma gain that's easy to get anywhere, and everything does little things and not huge things, which is a huge disappointment, right?

It would've been nice to have a dial that we could have turned to make this happen. I think in some ways that has left us, as instructors and as people who care about education, cynical and gun shy: yet another thing that's going to come in and save education and doesn't, and in the end there are thousands of these sitting in classrooms.

I hate to say this, but I think in some ways AI is different. That sounds incredibly pollyannish and naive of me, but you can get a multimodal tutor that is infinitely patient, that can explain concepts to people at an individual level. And by the way, the early research is showing very [00:31:00] large scale improvements in pretty well constructed studies, not standard education policy studies; there's some really good econometrically rigorous work. Studies being done in Kenya, and a really cool one that just came outta Taiwan by my colleagues at Wharton, are showing very large scale effects for education, right? A third of a standard deviation, which is a big improvement in educational outcomes. And that's without putting a lot of effort into these tools.

So I think there's just a huge amount of upside in thinking about how we use this for tutoring and how we use this to close gaps. To ignore that because we've had to deal with the downside risk feels like we're turning our back a little bit; we're looking at the downside but not the upside.

And I have been surprised at the lack of effort to build the universal tutor. By the way, at Wharton, we've been building and releasing prompts for tutoring; we keep releasing tutor-like AI tools that are all open source, all Creative Commons. And I'm surprised at how few organizations seem to be taking up the torch of: how do we use this to positively transform education?

And the pieces seem to be there to [00:32:00] make that happen.

Jeremy Singer: I agree. You referenced Bloom's two sigma problem, an older study that showed that with human tutors you can get a two sigma improvement in performance in the same instructional time. There's a lot of debate about whether that replicated, but whatever. There is no question that tutoring works. I interviewed Liz Cohen, who wrote a whole book about tutoring. Tutoring is one of the few things in education, and this is my whole purpose in the podcast, finding what actually works, that has been demonstrated to work.

Now, not every tutoring model works, not every tutor works, but in general, it's one of the few things that has been replicated and scaled. And AI can potentially, or not potentially, is and will take it to a whole nother level if we embrace it in the right way. Let me shift to another place, and I'll just play my cards.

I'm bullish here too, which is the concept of the flipped classroom. For our listeners, this is where the typical structure of a classroom is inverted: a student engages with the content through video or [00:33:00] online material outside of the classroom, when they would typically do homework, and classroom time is used for interactive learning, such as discussions and problem solving.

Like personalized learning, it's not new. It's been around, but it hasn't taken off. And I will say, my son, who's turning 25 next week, had AP Calculus as a senior in high school many years ago, and the teacher literally assigned a Khan Academy video each night that they had to watch, and then the class day was spent working problems, and the class did pretty well.

But that's a more sporadic, isolated example. So talk about how you see AI really enabling this thing, which again, I think most people would say is a good idea, but hasn't manifested pre-AI.

Ethan Mollick: So this is another one of those cases where, if you ask where the data's good: tutoring, right?

And active learning, with all the weird definitions of active learning, of which flipped classrooms are one real way of getting at this, right? You do your review outside of class; inside of class, you do your activities. Working problems is one example, but this could be case studies, it could be debates, it could [00:34:00] be building projects.

It could be the non-AI content we were talking about here. You make the classroom the non-AI space, or the controlled AI space, and outside of class is everything else. I think this is the most powerful tool for that. I absolutely believe that. We've known for a while that sage-on-the-stage-style lecturing in the middle of a class is not the right way to go.

So the problem has always been: how do I get enough content? My kids also were assigned Khan Academy videos outside of class, and it was great that Sal Khan did this stuff. But it's weird, again, that the best state of the art we had was this narrow view.

Now, again, we have this ability to do a multimodal tutor that can talk to you, look at your problems, give you feedback. The current models have weaknesses for this because they're trained to be helpful assistants. So if students just use them in an unsupervised setting, the AI gives them answers and they don't learn anything.

But the study and learn modes seem to be much better. All of the AI models have a study and learn mode that makes the AI less willing to give you answers. And we could build, and we have built, and we're not the [00:35:00] only ones, we release a bunch of free stuff too; we're a research lab. It's very easy to build tutors that are more resistant to giving you the answer.
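In the spirit of the study-and-learn modes described here, the scaffolding for such a tutor is essentially a system prompt wrapped around any chat-style model. The prompt text and the `build_tutor_messages` helper below are illustrative assumptions, not the actual prompts Wharton has released; the message shape follows the general chat-completions style the major providers accept.

```python
# Minimal sketch of a tutor that resists giving direct answers.
# The system prompt wording is a hypothetical example, not a released prompt.

TUTOR_SYSTEM_PROMPT = (
    "You are a patient Socratic tutor. Never state the final answer outright. "
    "Ask one guiding question at a time, check the student's reasoning, offer "
    "a hint only after two unsuccessful attempts, and end each reply by asking "
    "the student to explain the next step in their own words."
)

def build_tutor_messages(student_question, history=None):
    """Assemble the message list for a chat-completions-style API call.

    `history` holds prior turns as {"role": ..., "content": ...} dicts,
    which is what keeps the back-and-forth tutoring session going.
    """
    messages = [{"role": "system", "content": TUTOR_SYSTEM_PROMPT}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": student_question})
    return messages

# First turn of a session; in real use you would send `msgs` to your
# provider's chat endpoint and append its reply to the history.
msgs = build_tutor_messages("Why does dividing by a fraction flip it?")
```

The key design choice is that the refusal-to-answer behavior lives entirely in the prompt text, which is part of why such tutors are easy to build, share, and release as plain Creative Commons documents.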

So given that we can do that, it makes obvious sense that, long term, your teaching is tracked and taught outside of class. And by the way, the AI's really good at giving teachers reports about what somebody needs to do. It's actually really good at saying, hey, based on where the class is, here's how I would adjust your activity for next time.

Or: let me build the activity for you. So I think of a teacher working in collaboration with an AI, where students also have their own AI they're working with. And in that kind of controlled setting, flipped classrooms become the natural possibility. The teacher gets to do the active, engaging learning stuff that they wanna do inside the classroom.

The gaps are filled in by AI outside of it, and that feels like a very natural equilibrium to reach that we're not at yet. 

Jeremy Singer: No, I love it. Back in the early aughts, I was at McGraw Hill running a digital division for higher ed, and we were building a homework manager [00:36:00] product. We were trying to distill, based on all the students' homework, the big takeaways for a teacher, so the first five or ten minutes of the class could reflect on the homework.

It's just so funny. It was not easy to do with static assignments, and now with AI, I just, wow, I wish I had that then. So this is great. I wanna move to the last topic in a sec, but I'm gonna throw out a couple things and let you cook for a minute. A couple things I've read of yours: AI's good at

task delegation by humans, particularly tasks that can be automated. And you've written about how the shift we need to make is from AI being a tool to a teammate. So I want you to relate those concepts to what you would tell a teacher, even using your examples of teaching at Wharton. Like, how do you do that?

And what's a lesson learned?

Ethan Mollick: The most important thing we have early evidence on is that you want to use AI interactively, right? So teachers who [00:37:00] go back and forth with the AI do better than those who just ask for output. So like part of what's great about this is, all teachers want more support, right?

Hey, I'm having trouble getting this concept across. How do we do this? Gimme 12 ways to do this. I like number two, but I don't think three would fly in my classroom for this reason. Could you gimme some other examples? Okay, that's good, but I feel like this... It's that interaction.

First of all, you feel better 'cause you've got a kind of intellectual partner to work with on your ideas. But second, teachers who use it that way, according to small qualitative studies, see much bigger impacts on the quality of their teaching, how they feel about it, their free time. So I'd encourage that. It's tough, because there's lots of things to criticize about AI, and I wanna acknowledge those things, but it's also here, and most teachers are using this, and if you use it well, you'll have a better outcome.

Very simple things. Unfortunately, unless your district pays for it, you're gonna have to pay 20 bucks a month; there is not really a substitute for using one of the paid models. And you're gonna wanna [00:38:00] pick, like, there's all kinds of weird providers out there, but you're really gonna wanna pick ChatGPT or Anthropic's Claude or maybe Google's Gemini.

And that's it. There might be other providers, but those are the ones. You're gonna wanna pick the thinking models for each of those, which sometimes require you to click a couple buttons, and then they're smart, and they're capable of drawing diagrams and having discussions with you.

So it is that interaction. I think the more you can just talk to it like a person, the better off you are with the initial output. You're a teacher; you have a good theory of mind for people. You understand what they get and what they don't get. You understand when they're faking it or not. All of that will be very useful to you in those kinds of interactions.

And then push it, try different things. Hey, how do I reconstruct this assignment? This doesn't feel like it's gonna connect with people. The Eagles just won the playoffs; how do I make this connect in some useful way? I'm having issues with this kind of student. We find that seems to create value. The Walton Family Foundation survey seems to suggest that teachers who use this a [00:39:00] lot are getting back six or seven hours a week at the maximum end. We only have early data, but that's what I'd do: engage with a good model, and engage in the back and forth.

Jeremy Singer: Yeah. And I think for listeners, just broadly, good advice for using AI generally is: don't just take the first set of output; engaging with it is a useful strategy as well. Alright, so we talked about what should be taught and whether it should change, and how teaching should change. And the last piece,

in this world, and maybe the hardest question, is how do we assess learning? We are seeing that AI enables students to produce high quality work without actually understanding the material. We talked about it a bit earlier, but I'd like to go back. You've talked about the homework apocalypse for lower stakes things.

What are you advising? What do you do in your class? What else can we do?

Ethan Mollick: I have to do a lot more low stakes in-class assessment. [00:40:00] We talked about two things that always work, right? Active learning works and tutoring works. The third pillar that works is low stakes testing.

Low stakes testing is good for everything, right? It has educational benefits. People don't like it, right? But that is one of the cleanest pieces of research that we have. And that's how you assess people: you need more low stakes tests without a phone or computer nearby.

They can be micro quizzes. I like little micro assignments of different kinds: here's a micro case study to solve, classroom discussion kinds of interactions. But the assessment has to be in a controlled classroom environment. If you're lucky enough to have a small class size and can make discussion something you grade, actually AI can help with that.

That can be useful, but I think we have to realize that the take-home assignment, without very elaborate controls, is going to be AI influenced. And the only people you're catching are the people who are not good at using AI. The people who are good at using AI do not read like AI writing. They can fool almost all the AI detectors on a regular basis, or they're just getting AI influence one way or another.

So [00:41:00] we have to go back to more in-class assessment. I don't see a way around that.

Jeremy Singer: At College Board, we're working on a bunch of stuff, and I won't go through it all, but it is interesting. I do wanna double down on the claims by some that there are reliable AI detectors. I just think it's a joke.

It's easier and easier to evade. You could tell AI to create something in your voice, load some samples, and then launder it through two other LLMs. So I think that's a false promise.

Ethan Mollick: And it's a race you're gonna lose in the long run. Of the detectors out there, Pangram at least has low false positive rates, but it also has very high false negative rates. As you said, people can launder this stuff. And it's a running battle, right? It's not that hard to fool the detectors, so you can't rely on detectors.

And oh, by the way, the number three use people put AI to in classroom surveys is asking the AI whether something was written by AI. The AI will never know. In fact, in the early days of GPT-4, and we've measured more recent models too, 95% of the time [00:42:00] it said that something was AI written that wasn't, because it wants to make you happy.

So that doesn't work either. 

Jeremy Singer: I'm with you on that. And back to the positive piece here of AI. For College Board, we've traditionally had two instruments, two tools in our assessment inventory: free response questions and multiple choice. What AI has provided now is a couple of things I think you know well. One is a Socratic bot that can be interactive with the student.

When I was interviewing for management consulting, I was given case studies, and it wasn't like there was a right answer; it was how you problem solve. We can do that with students and try to assess things you couldn't assess in multiple choice or free response. And we've also built simulations where a student's interacting with multiple other, quote unquote, students.

And seeing how they interact with them and so forth. So I'm very excited about these ways of expanding our ability to assess new things. Again, it has to be done securely and so forth. [00:43:00] How do you see these, and what else do you see from an AI assessment standpoint opening new opportunities?

Ethan Mollick: I've been building simulations for years. We have a whole bunch of techniques that we knew would be good for assessment and teaching: discussions and simulations, case experiences, creative projects, stuff that was just hard to grade, hard to scale. This is now an exciting thing that we can start to do. So part of this is realizing, wow, a whole bunch of stuff that wasn't possible is possible. I know professors of history who built simulations of the Black Plague, and then students went through and wrote about their experience.

We could do cool stuff now. If we just view this as a retreat from the kinds of assessment we knew, we're in bad shape, but there's some exciting new stuff we can do as well, at scale. And by the way, the evidence is these models are pretty good at grading with a little bit of help and scaffolding.

They give good human-like grading. They give better feedback than people in most studies that we have. [00:44:00] And I know there's a lot of people listening who are like, that is not true, they make stuff up all the time. I will again go to what I said before, which is: they used to.

It's not like they're error free right now, but I promise you that if you use a more recent model, you're not gonna be finding these kinds of problems anymore, the making up citations, if you're using the thinking models. So I think this—

Jeremy Singer: And the thinking models, and I'd say custom instructions. People need to know that you can put in custom instructions like, hey, verify, et cetera.

I've had bad experiences, but once I did that, it's been much better.

Ethan Mollick: And a lot of that gap has closed as the models themselves just incorporate those things on their own. But what I'm saying is, this is really an exciting opportunity. Simulations are a really interesting way to do assessment.

You live through it, solve the problem. We've been playing with things like: advise Hamlet on what to do. And yeah, that's great. It's interesting because that's different in a psych course than it is in an English literature course, and you start to see, oh, wouldn't it be cool to start having projects that actually cross boundaries across parts of the teaching curriculum?

So I'd love more instructors to think about this as [00:45:00] opening up the frontier of what we can do. Losses come with gains, right? But both things are true.

Jeremy Singer: Well said, and I agree a hundred percent. I love the Hamlet reference, but I wanna respect your time, so I have two last things. One is, I ask all my guests a series of rapid fire questions to get to know them better.

There's no right answer. Four questions. First is, what's one education buzzword you wish we could retire? 

Ethan Mollick: By the way, there's nothing more ominous than the person who does all the testing saying there's no right answer. Here's multiple choice questions. I don't believe you for a second. 

Jeremy Singer: We will judge. I didn't say we wouldn't judge, but 

Ethan Mollick: I worry a lot about, and it's only emerging now, AI literacy being a single topic. Because I think AI integrates into many things that we do, I worry that putting it into a box is gonna end up resulting in pretty bad AI education, especially

'cause what AI literacy is keeps changing.

Jeremy Singer: Yeah. It changes so rapidly. It's a hard thing. What's your favorite book about education, or one that shaped your thinking?

Ethan Mollick: [00:46:00] That's a tough one. I've read a lot. Okay, so I am a person who builds games for learning and tools for learning, and I would be remiss... It's the most cheesy possible answer, right?

And it's a terrible educational tool, but a lot of my career has been shaped by The Diamond Age and the Young Lady's Illustrated Primer, right? Everything's wrong with that kind of approach of the custom tutor that tutors you, but it's also clearly what the future should be, even though every part of it was pedagogically wrong and it was actually social commentary. But that idea of a universal tutor that turns things into stories and games and reaches you at your level and gives you empathy, how can that not be every teacher's goal?

Jeremy Singer: All right. I love it. And a lot of people are probably gonna look this up right away. One thing that makes you bullish for future learners, and we hit a lot, but just gimme one. 

Ethan Mollick: I think there has never been a better time for learners to pursue the things that interest them and get answers to questions. It's funny, we talk about how the internet didn't move the needle as much as you'd think it would for education.

I think [00:47:00] we thought in the late nineties that you give access to every piece of information ever written and everyone will become educated. It turns out it's a harder problem than we thought. Go figure. But I am bullish now about having something that can explain things to you at your level. We're already seeing, for example, that AI bots online can moderate political extremism, because they have a kind of moderating view. There are bad parts of that too, but the idea that I could go ask questions and get reasonably good answers in a world of information pollution is very exciting to me.

Jeremy Singer: Yeah, agreed. Last rapid question: one class you wish all students had to take.

Ethan Mollick: So I actually think that—

Jeremy Singer: And I know it's not AI literacy.

Ethan Mollick: And by the way, I teach entrepreneurship, so everyone has their bugbear of, we should be teaching entrepreneurship more, and I think we should. But I actually really think a real literature class, where you're reading bits of different kinds of literature. I think there's value in developing a sense of taste.

So any of the humanities taste classes. It can be a poetry class, it can be an art [00:48:00] history class, it can be an architecture class, but something where you're forced to learn how to be a critic and assess whether something is good or bad based on subjective criteria. I think that's a really important thing to do in this day and age.

Jeremy Singer: You could have done me a solid and listed AP Lit as an example, but it's okay.

Ethan Mollick: I'm not picking a number one. Sorry, you asked.

Jeremy Singer: I'm just kidding. Alright, last question. Ultimately, all this is about how we improve educational outcomes. Let's imagine we're back on this three years from now, reflecting on positive change in education as it relates to AI.

What would make you most excited? In the dream scenario, what would've happened?

Ethan Mollick: What would've happened is some larger scale education-focused organizations, and I know you're doing this at College Board, but also state governments, national governments around the world, [00:49:00] nonprofits, invest seriously

in trying to build the thing that will actually help transform education, make it available widely, and deal with some of the issues of diverse access and everything else, right? I'd like to point to a thing and say: yes, every kid who is struggling uses this thing, and we have evidence that it helps them. I don't think it's an unrealistic thing to believe.

I would love to hear that the AI labs themselves continue to take education seriously, and that advanced learning modes continue to exist in their products as they move forward. I would love us to be able to share dozens of use cases of teachers, maybe semi well known for this, who have developed really cool teaching approaches that other people have copied using these AI systems.

So a thousand flowers blooming, even as we see these sort of large scale efforts underway. I'd like us to continue to be critical about AI and its influence on how we think as people. And we as educators, our job now is to help make decisions about what people should know, in a world where we are no longer the only [00:50:00] educating tool out there; that job has to be us.

And the changing role of education is something we're acknowledging, and it is happening. So I'd like the debate to be more vibrant in three years than it is today.

Jeremy Singer: I love all of that. I think we're on a big ride, and it's really important that we stay engaged and figure out how we maximize the positive impact and how we mitigate the risks it introduces.

This conversation's been great, Ethan. Again, you don't need more promotion, but I love Co-Intelligence. I read it more recently and it didn't feel dated, even with the pace of change; so much resonates with what we discussed. Also, One Useful Thing is my favorite email I get when you put it out. Thank you so much for spending the time, thank you for all the work you are doing, and I appreciate it.

Ethan Mollick: Thank you so much. This was a really good conversation. 

Jeremy Singer: Thanks for tuning in today. Join the conversation by following the education equation wherever you listen to podcasts. [00:51:00]