Episode Transcript
[00:00:00] Speaker A: Luis, thank you so much for joining, man. I appreciate you being here.
[00:00:03] Speaker B: No, thanks for inviting me, Quinton. It's a pleasure to talk with you, as always.
[00:00:08] Speaker A: Yes, absolutely. The pleasure is all mine. So I want to open up with something that I found in your research paper that I found really interesting. And then I'm going to ask you a question, okay?
[00:00:18] Speaker B: Yeah, sure.
[00:00:19] Speaker A: All right.
Despite the promising benefits of generative AI in education, a central concern is the risk of over-reliance, where students lean too heavily on automated outputs at the expense of engaging with the underlying reasoning.
While access to AI can boost short-term performance, it often weakens problem-transfer skills, reduces retention, and erodes metacognitive awareness.
The convenience of outsourcing intellectual effort leads to long-term cognitive atrophy, diminished creativity, and a weakened sense of authorship.
With that in mind, do you think that there is a place for AI in education today?
[00:01:10] Speaker B: Right, that's a really good question. And you're starting right at the hot spot.
Sure. Okay, so let's start with what we have seen based on research so far. So there was this massive MIT study, like 250 pages, you know, a behemoth of a paper. And what they found out is that we tend to fall into a trap: we see AI doing this amazing stuff for us and we believe that it was actually our creation.
So we have this sense of, you know, fluency on a topic, just because we have this amazing tool. But when we are tested outside the scope where we had AI, then we run into the problem that, oh, actually we have no idea how to do this. This is not something new. Online education has had this problem before. When you look at the online education area, you have what they call tutorial purgatory: you are watching the video, you feel that you're learning, but in fact you are not. Okay, so this is not something new for education. But with AI, it has become a little bit more problematic. It has become more problematic because it feels really comfortable to outsource our tasks to AI. And if we do this constantly, we might get to a place in which we really don't know what we are doing. And if you strip away AI, then you have somebody that is completely incompetent.
Okay.
Another aspect to take into consideration is that you and me, we are using AI, but we had the benefit of being educated in a system that did not have AI, so we were able to develop some critical thinking, some ability to question what the model is giving us. The new generations, by contrast, are going to be born into a world in which AI is already there.
It's impossible to erase it. So they are not going to know what it is to do all of this without AI.
So going back to your question, does AI have a role to play in education?
Of course it does, but it needs to be compensated with more classes, more lessons, more education about real critical thinking. Okay? It has never been more important, because the ability to question this tool has become fundamental. You don't want zombies, you know, regurgitating what AI is giving to them without a clear sense of: is this correct? Does it even make sense?
So what is the role of AI in education?
AI is going to actually free the professor or the instructor from these menial, low-value tasks that are repetitive. It doesn't make sense to waste a human brain doing stuff that can be done automatically.
But it comes at the cost that now the instructor has a bigger burden of promoting problem solving skills, critical thinking, you know, like these more internal aspects of education. Now we really, really need to focus on this.
I don't know if I was able to answer your question.
[00:04:44] Speaker A: No, no, that was perfect.
Yeah, no, that was, that was great. I think one thing that I'm hearing here is now that we're introducing AI to a new generation that, to your point earlier, isn't going to know anything different, it's almost incumbent upon the teacher or the professor to be much more proactive in getting their students to be, you know, focused on how to think critically, how to ask questions of AI, how to decipher whether or not AI is generating a bias based on cultural norms, social norms, whatever the case is. So it almost seems like the burden is now on the, on the professor or the teacher.
[00:05:27] Speaker B: It is totally on the teacher. Now the teacher needs to be knowledgeable about what the tools can do, what the current state of the technology is, and what they cannot do. I mean, now I have the obligation to actually give my students an introduction to what is under the hood of AI. Even if I don't go into the mathematics, okay, I'm not going to open TensorFlow and explain the training algorithm to them, but I have to let them know the limits of this technology and where things might go really wrong.
We now need to teach them what can be outsourced to AI, what is safe to outsource given the state of the art, and what is definitely a no-go.
And there are areas where, look, even if we get to the point at which, yeah, we are at Skynet level, okay, it's superintelligent, there is a debate: do we really, as humankind, want to outsource these decisions to AI? Is it ethical to do so?
I don't want to go into these touchy areas, but there are areas in which, at the end of the day, it's going to be a human being taking the decision. Because our society is not prepared to fully outsource all of these decisions to AI. So as professors, we now have the responsibility to be knowledgeable about AI and its limitations, and to create the guardrails that will avoid a disaster when students use AI. Because if you let a student go all in on AI hallucinations, and they don't have the means to actually discern if something is real or not, then this might be really problematic. Yeah, really problematic.
[00:07:13] Speaker A: Yeah, no, I couldn't agree more with that.
So with that in mind, how do you think we need to start educating the educators?
Right? Because we're at kind of an important time in history where we have to start thinking about how we want to educate teachers and professors. And I know you're based in France. In the US, schools get funding, but let's be honest, the funding is pretty minimal at best.
My sister is a teacher and so obviously she's limited to resources and so on and so forth.
These people are already adults. They've already gone through the schooling system, they've already been educated, they've already been trained as professors and teachers. How do we get them to now take a step back and understand that they're going to be responsible for building and kind of, you know, architecting the guardrails that you just mentioned.
[00:08:15] Speaker B: Right. Okay. So this is a complicated area where the burden of going through this transition is shared between the governments of the different countries and the companies developing this technology. They are the ones that have to responsibly promote this transformation in society.
I'm not going to lie to you. This whole transition period is going to be painful for some jobs, some industries, and some people. We are all going to be required to retrain. And this is something natural. This is not the first time that humankind has been through these sorts of transformations: the introduction of cell phones, telecommunications, the Internet, and we can go even further back to the machines of the Industrial Revolution.
Yes, there are people who are going to suffer in the transition period.
But we do this because we expect benefits in the long term and we all need to adapt to this.
[00:09:18] Speaker A: Yeah.
[00:09:18] Speaker B: In this specific case, how do you train these people? Well, you're going to have to co-create these sorts of specialized applications, and the training for them, with the technological companies.
There are companies that are already doing this. For example, you go and look at OpenAI or Google. Both platforms have added new features to their products that are a kind of instructor mode.
They give you a special feature in which the model does not directly answer your questions with the answer, but kind of guides you via questioning, via promoting reasoning.
So that already gives me some optimism that hey, they are willing to collaborate, they are willing to promote the responsible use of their products.
Now what we need is full coordination with the governments, to let them know what it is that educators need in order to adapt to this new world. And you need the intervention of these companies because, let's face it, our representatives, our governments, they're not technology fluent.
You have seen what happens when you invite a tech CEO to one of these congressional or Senate hearings, and it's the same all around the world: it's embarrassing.
It's embarrassing. We actually need these companies collaborating and telling our governments: hey, this is what is important, this is what is not, this is what we should do. But at the same time, hey, somebody else has to watch them, because we don't want them to abuse the system.
[00:11:02] Speaker A: No, it's funny that you bring up the government thing.
I don't know if you pay attention to what happens in the U.S. but when Elon was working with Trump, I think one of the more significant findings that he presented to the public was the fact that the government is using old technology and nothing communicates with itself. And, and so there is like this, you know, siloed process, you know, within government and then. Yeah, but, yeah, look, I, I agree. I, I tend to err on the side of not bringing in government whenever possible. I'm. That's just my own personal thought. However, when it comes to something like AI that is going to be so disruptive and is moving so fast, and it's almost like what other choice do we have? Because as a collective, we can't come to a conscious agreement without somebody stepping in, in an authoritative way to implement rules and regulations, it's going to be.
[00:11:58] Speaker B: Really difficult to implement.
Like, it would be awesome if we could, you know, as a society, coordinate all of this effort without the need of a central government doing it. But I mean, when you look at what we require in terms of infrastructure, in terms of training, in terms of new policy, because this needs to be regulated at some point for certain applications, it's almost impossible.
[00:12:21] Speaker A: Yeah.
[00:12:22] Speaker B: To get away with it without a government intervention.
[00:12:25] Speaker A: I couldn't agree more.
[00:12:27] Speaker B: Yeah, I mean, those are the ones that actually need training first, so it can percolate down to the other layers of society.
[00:12:33] Speaker A: Yeah, no, I couldn't agree more, man. I don't believe in large government, but I believe in a strong one. And this is a sector that needs strong oversight. I think my biggest concern is who is overseeing it. Who's going to be the one to ultimately decide, you know, what models can be built, how they're built, what information is being put into these models. The knowledge transfer really kind of scares me in a lot of ways, because look, let's be honest, people are at times corrupt, and the people who are in power are typically the most corrupt. And if they're the ones overseeing how these models get built, then they're ultimately the ones that get to decide what information is being pushed out there.
So that's my concern personally.
[00:13:23] Speaker B: Yeah, and you're totally right. The only thing, and I don't want to go into conspiracy theory here, but if you are afraid about your data, you know, public data, leaking into these models, it has most likely already happened. So it's already too late. What we need to push for as a society is open source models.
Right now you can see that there are two broad types of approaches to AI. You have the fully corporate, capitalist approach: hey, we're going to develop this privately, we're going to try to profit as much as possible, and this is going to be a closed environment. And you have the other approach, which is: no, we're going to make these models public, so that people can just download the models, start using them, retrain them if they want. And we're going to disclose everything that we are doing here, via the science and via the model itself.
I think the second approach is easier to regulate than the first one, because it has an additional layer of transparency that gives more reassurance to the public. But it is also the one that is actually going to lead us to more technological development. Maybe we can get away with developing the technology without violating so many individual liberties in the process.
It brings more collaboration, more innovation as a society. What can we do? Because you are not going to be able to stop AI. Like, I'm sorry, that's not going to happen. You can put a law in place in the US, and what are these companies going to do? Because these companies, you think they are from the US, but they are from all around the world. They will just move to another country that doesn't have this regulation, and there's always going to be a country willing to collaborate. So you cannot stop it. But what you can do, as a customer, is give a clear signal to these corporations that you value open source models. That, no, I want to be able to download Llama 4 and have it on my own server. I want to be able to dissect what you are doing there. So we need to send this signal clearly: no, we want transparency. And it's kind of nice to see that certain companies that you would put on the evil end of the spectrum are actually taking this direction, like Meta. Okay, they receive a lot of shit for a lot of stuff, but they are pursuing this open source avenue, and so far, I mean, they have been very consistent in this regard.
[00:15:56] Speaker A: Yeah, no, I, I think you are spot on with that assessment. My whole thing is, to your point, we're not going to stop AI and the idea of regulating it, it doesn't make sense, particularly when we're talking about like national defense.
Because if you want to put regulations on companies here in the US, then China, no disrespect to the Chinese, they're just going to do whatever they want, and they're going to build the models however they want. And if they're going to use it to go after their adversaries, then we have some serious problems. But your solution, I think, is absolutely spot on. The idea of having a centralized power that can control AI doesn't make any sense. But if we have multiple different decentralized private companies that are doing things in a certain way, first of all, you decentralize power, right? That's the first and foremost thing. Secondly, we need to understand which company is doing it the right way. And then we as consumers, we as people, get to use our money or our compute power to tell these companies: hey, we like the approach that company is taking, we like the direction of this company. It gives power back to the people. So I think that that is a spot-on answer, and I couldn't agree more.
[00:17:12] Speaker B: Yeah, it does. Look, we have recent events to actually feel optimistic about from the point of view of consumers. So we had at some point a hegemony of OpenAI. We were scared a few months ago that, okay, this is the only company in town, nobody can compete. And then all of a sudden you have DeepSeek.
This company, out of nowhere in China, then boom, releases an open source model whose performance was at that point kind of on par with OpenAI, at a substantially reduced cost compared to what OpenAI was doing.
And okay, yeah, at that moment everybody was scared, but this was beneficial to everybody else. It actually opened up new ways of thinking about how we're going to train these models, inefficiencies that the current architecture had. So you see this ability to compete, to innovate via competition, and the open source model, I think, is going to be beneficial for everybody in this area. Everybody. You can always rely on the greediness of the market to innovate, but at the same time that competition will bring some control over these companies.
[00:18:25] Speaker A: Absolutely. And luckily, I think that's the direction I see it heading as it stands right now. You know, you have Claude, you've got OpenAI, you've got Grok. And there's probably a handful of other LLMs out there being launched, and gen AI companies being launched.
[00:18:42] Speaker B: Yeah, don't forget the Chinese.
I know Alibaba's models are really good, and DeepSeek too. I mean, I have used it.
[00:18:50] Speaker A: Yeah.
[00:18:51] Speaker B: Okay, maybe there are concerns about my data privacy. Sure. But I mean, for correcting my email, it's totally fine.
[00:18:59] Speaker A: Yeah. Look, I don't know, how do you protect your data at this point? It feels like, you know, outside of a VPN or something, that's going to.
[00:19:10] Speaker B: Well, now with these models, remember that if it's a commercial one, do not share sensitive stuff there. Because all of that, I cannot prove it, but I'm certain that it's going back to their own servers.
How can you use these models without concerns to your data?
Okay, so if you have an open source model, that means that you can download it and use it privately on your own server. That means it doesn't share the information with a parent company, because there is no such thing as a parent company. But only a few individuals, a few companies, can do this.
Okay. So right now I don't think there is too much that you can do to protect your stuff. Just don't put it there in the chat.
[00:19:59] Speaker A: Yeah, no, absolutely. You can't feed the models your bank information and address. Right?
[00:20:06] Speaker B: Don't do that.
Please don't.
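The "just don't put it there" advice can even be partially automated on the client side. Here is a minimal sketch, in Python, of a scrubber that masks a few obvious sensitive patterns before a prompt ever leaves your machine; the regexes and placeholder labels are illustrative assumptions, not a complete PII detector.

```python
import re

# Illustrative patterns only; real PII detection needs far more care.
PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),    # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,19}\b"), "[CARD/ACCOUNT]"),  # long digit runs (cards, accounts)
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),            # US SSN-shaped strings
]

def scrub(prompt: str) -> str:
    """Mask obvious sensitive patterns before sending a prompt to a hosted model."""
    for pattern, placeholder in PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

masked = scrub("Fix this email to jane.doe@example.com about card 4111 1111 1111 1111")
```

A scrubber like this only catches pattern-shaped secrets; free-text secrets (names, addresses, internal project details) still require the human judgment Luis is describing.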
[00:20:08] Speaker A: I think that's as common sense, or at least that should be taught as common sense moving forward.
We could go on and on and on and about, you know, where, where AI is going to go. But I'm. I want to bring this back to education. I. In your research paper, you talked about how AI can help speed up the development of schools that are under resourced or areas that are under resourced. I really, really love this. I want to touch on this subject.
How does it, how does it bridge the gap? How does it help these underdeveloped schools and these underdeveloped countries bridge the gap between education in a higher developed area?
[00:20:51] Speaker B: Right. So let's start facing a sad reality.
I have all the respect in the world for everybody working in education.
But when you consider not university education, but teachers in schools and high schools, sometimes you have some people there who truly feel passion for what they are doing and are motivated to do their best. And sometimes you have people there who really should not be teaching, people who chose that career path because maybe it was the only available option.
And sometimes even worse, you don't even have enough professors to cover classrooms.
Something similar also happens in the university spectrum. Like only the best institutions with the biggest budgets can attract the best professors.
So what you end up is with a system that has massive disparities.
The difference in experience between an MIT student and, I don't know, somebody at a really far-away university back in my home country, Venezuela, is impossible to compare. I mean, we are orders of magnitude apart.
But now, all of a sudden, you have the possibility to interact with a really, really experienced individual that has a knowledge base that is, I will say, amazing in general, larger than the knowledge base of the majority of professors out there. And the only thing that you require is access to the Internet. Because let's face it, I mean, OpenAI's ChatGPT accounts, the free ones, are available to anybody. So then all of a sudden you can ask about any topic. And especially the topics that we cover in high school and the first year of university are so general that we can pretty much rely on the answers being correct.
So now you're leveling the field, because it's like having one of the most knowledgeable professors there for you 24/7, on demand, whenever you want it. Okay? So if now you have these students that have access to ChatGPT, and somehow you complement the other aspects that I already stressed,
critical thinking, some of the soft skills. You combine these two, then at least for the basic stuff, first and second year, why do you need this super-expensive, super-difficult-to-source professor?
You see, it's leveling the field because now it opens the possibility of having a high-quality instructor for every single student. And the beauty is that each interaction in the chat is individualized. All students can ask different stuff, because they will struggle with the content in different ways, but the chat will answer them. Yeah. So there was a recent study made in Nigeria, with kids.
And this study was financed by the World Bank. Okay. So we have a large institution trying to see if we can get a benefit. And what they found out is that they could produce the same educational output in six months compared to one year and a half, if I remember correctly.
So if it's done correctly. I mean, these studies need to be confirmed. Okay, so we have only one data point.
It's early on to claim victory.
But if that's true, if that's true, like imagine, imagine what we can do.
Like right now, in the rural areas of the world, in extreme poverty, there are zero chances of education. But all of a sudden you are telling me: look, if we manage to get some computers out there and manage to get some access to the Internet, we can bring high-quality education to these people. Yeah, it's incredible what we could potentially do.
[00:25:08] Speaker A: Absolutely. So two things that really stood out to me when I was reading the paper again. My sister is a teacher and she loves her job, but she hates the meaningless tasks that go along with the job. I think most teachers, most people who are in academia, they do it for the right reason.
They want to make a difference in somebody else's life. I really don't think that a lot of people get into it because they think, nah, it's just a paycheck because it's a hard job. Right.
It's a demanding job, emotionally. So I think most people get into it for the right reasons.
With AI, I think that it allows them to show up with more passion and enthusiasm, because they don't have to do so much of the backend work; we can automate those tasks via AI. Right?
[00:25:58] Speaker B: Yeah.
[00:25:58] Speaker A: So that's, that's number one.
Number two, one thing that you talked about in here was general localization.
Basically, a use case in the US is going to be much different than a use case in Venezuela, right? Depending on the subject. And so you can really personalize and customize the material that the students are learning, which is obviously going to resonate with them better. It's going to help them learn. I mean, it's endless, right? In terms of just making sure that the content they're
learning is unique to their environment, to their culture, and to their specifications. So I think that that's absolutely brilliant in that sense.
[00:26:40] Speaker B: Yep.
[00:26:43] Speaker A: The one thing that I have a question for you on is: if we are personalizing the material based on the student's level of understanding, how do we maintain collaboration within the classroom?
[00:27:02] Speaker B: Oh, okay. Yes. Okay. So let's be clear. We're still talking about a scenario in which we have in-person classes and we complement these classes with AI, right?
[00:27:15] Speaker A: Yeah.
[00:27:15] Speaker B: This is not fully online.
[00:27:17] Speaker A: Okay.
[00:27:18] Speaker B: Okay. So if that's the context, now the usage of AI is that you outsource what, you know, doesn't make sense to discuss in class. Let's imagine that I'm teaching you a basic course about optimization. There is an area in mathematical optimization where you need to remember a certain number of rules: how to linearize, how to define variables.
That's really repetitive. That's already in the book.
But since we didn't have AI before, we had to cover it in class.
So now what we are going to do is, I'm going to give you a series of recordings, videos, material, and AI. So you, Quinton, you will learn that at home, whenever you want. And when we come to class, now we're going to fully focus on the real application of this, which requires collaboration, interaction with others. Okay? So you are going to increase the level of expectation and the size of the problems that we're going to tackle in class. Because now I am assuming that you have this private tutor with you at all times. And for these really challenging problems that we're going to cover in class, there is no way that you're going to be able to tackle them without collaboration, or without considering other aspects of the topic.
Like, let me be more specific.
When you look at these kinds of problems, they are so challenging that you have to consider aspects of accounting, finance, marketing, not only operations. But in class, because we need to finish during the time slot of the class, we used to only consider the operational part, because we had to cover the subject. Now the subject is already covered. Like, you arrive at class knowing all this stuff. So now what we are going to consider is the holistic system, in which, no, now you have to talk with the marketing people, with the finance people, with the accountants. And the problem that you're going to solve is one that is kind of emulating the real-life stuff.
[00:29:28] Speaker A: Sure.
[00:29:29] Speaker B: So it's like AI allows us to increase the volume.
[00:29:35] Speaker A: Absolutely. And I also feel like it forces you to interact in more real world experiences. Right.
The idea of memorizing material, in my opinion, is going to be a thing of the past. And now you have to learn how to apply the material that you've learned.
Traditionally, I've always sat in the classroom, been told how to think, what to think, and then regurgitated what I learned, as opposed to having hands-on experience with whatever it is. Right. Obviously, you know, mathematics and the sciences.
Yeah, there are equations, you've got to study them, you've got to memorize them, that's totally fine. And I don't mean to discount that by any means. But there's very little actual application. And I think that AI is going to free up the individual from having to memorize and sit there and study all day, and push them into the real world to actually apply the material.
[00:30:38] Speaker B: But remember that in the hierarchy of difficulty in pedagogy, applying is way more challenging than just memorizing. Because to be able to know where to apply, I mean, that means that you are context-aware and you need to be able to connect real dots. Memorizing? Ask yourself why the educational system has been based on memorization. Well, first of all, we had the assumption that we could not have instant access to a big enough knowledge base, so we needed to recall this knowledge ourselves. Second, it's easier to evaluate. We are obsessed with quantification, and I mean grading exams and assigning you a grade based on how many of these topics you were able to memorize perfectly. It's easy, it's comfortable for education.
So we shaped the entire educational system to basically promote exam-takers and memorization, without focusing on the application. Because we were on the assumption that, okay, these kids will never have the ability to search instantaneously and have access to this knowledge, so it's better that they record it. But now we're in a world in which, yes, I can ask anything, it's super easy to access.
So we need to progress to the next stage. And the next stage is, okay, I assume that you can get the knowledge, apply it. I need to show you how to apply it, how to provide value from the knowledge and not in the memorization process.
[00:32:11] Speaker A: Yeah, I honestly think that that's one of the more exciting aspects of AI in the classroom, because I don't remember anything that I was ever told to memorize. Does that make sense? Like, I don't remember any of it. I memorized it for an exam, and then I forgot it, and then it went out the window. But when you apply it, to your point earlier, you have to think critically, you have to connect the dots, and that's how you actually learn. And another thing is, you learn by failing.
You learn by failing in an application. That is a far more painful process than failing to memorize a particular equation. That's just, oh damn, I've got to try again. But when you're in the real world, when you're in an environment where you have to apply your knowledge, that is, you know, put up or shut up.
[00:33:01] Speaker B: Yeah.
[00:33:01] Speaker A: And I think that that's a much better way to learn.
[00:33:04] Speaker B: You don't forget those kinds of experiences. At the end of the day, learning works with something called a neural path. Imagine that you're crossing a field of grass.
If you cross it with high frequency, eventually you're going to create a path, and within this path you can move more easily. Our brain is the same. If you have to recall this subject and use it and use it again, then you learn it. Memorization for one exam, and then never using it because you don't see any application, is doomed to be forgotten. Yeah, but solving a problem, and re-encountering the problem and solving it again because you have that knowledge base, that creates real learning. There are tons of studies supporting active learning, active recall, all of these concepts. I think that if we instructors raise the bar in our classrooms, if we move towards real problem solving and the application of knowledge, then we can encourage real learning. But this only comes if we have the support of this technology.
If not, it's impossible, it's impossible to cover it in the time slot that.
[00:34:24] Speaker A: We are given. Totally. Now, in terms of interfacing with AI itself, I think it's really easy, like we talked about earlier, to just rely on the LLM to give us a response based on our prompt. But that doesn't help at all in terms of building cognition or getting us to think critically. You mentioned two things in your research paper that I really, really loved. The first one was Socratic-style teaching, right? Gently pushing the user towards a specific reasoning or answer, not giving it to them directly. And then AI boxing.
Can you explain the difference between the Socratic style and the AI boxing a little bit?
[00:35:10] Speaker B: Right, right, right. So the Socratic style is the one on which it's basically emulating what the, what OpenAI and Gemini are doing with the instructor mode.
Instead of you asking a question directly to AI and receiving an answer, you're going to ask the question, but AI is not going to provide you an answer. It's going to provide you with a follow-up question in a very coaching way, telling you, hey, have you considered this topic? Or, what do you think we could use to solve this? It's going to present you options, it's going to give you clues, hints, but it's never going to provide you the answer. So this back and forth of questioning is going to help you get to the correct answer yourself. At the end of the day, it's you, with the help of AI, who is going to uncover the answer. It's not AI giving you the answer. And we have scientific evidence that this works.
There is a study published in Science, a really respected journal, and they found that if you give students a question and they work in pairs, the act of discussing the question actually helps them uncover the correct answer, even if neither of them has a clue what the correct answer is.
Just this question and answer between them, both completely ignorant of the subject, just discussing it, helps to create knowledge, helps to uncover the answer. So here what we are trying to do is something similar, but with a digital version of this opposite student, perhaps one that knows the answer but is giving you, you know, breadcrumbs of how to get there. That's the AI Socratic tutor. The boxer?
We actually take this concept to the next level. Instead of giving you hints, no, it's going to attack your argument. It's going to go ahead and say, look, what you're saying doesn't make sense because of this, this, this. It's going to try to find the flaws in your argument, and this requires you to fight back. Like, no, I think this is correct, this is correct, this is correct.
The only way you are going to be able to beat this AI boxer is to be knowledgeable, you know, to actually think of the counterarguments and say, no, I think AI is wrong because of this argument, this argument, and this argument. That is actually becoming a master of the subject. When you are at the level where you can reason, where you can argue why you think you are correct, you are almost at the level of mastery. If I could upgrade this even more, it would be, instead of discussing with AI, to actually teach AI. Consider AI as a student and then teach it the subject. But that one is at the last level of mastery; I would do it with more advanced students. But already with the boxer, you will not be able to fake it. AI knows a lot, especially if you augment the knowledge of these AI models with your own content, which is what I do. I provide the correct answers, all of the scientific papers, the material that it needs. So the AI really knows.
[00:38:43] Speaker A: Yeah.
[00:38:44] Speaker B: About the subject.
[00:38:45] Speaker A: I absolutely love the idea of an AI boxer, because I don't believe in nurturing people toward a particular solution. If we're talking about gaining knowledge, I think that you need to flush out your weaknesses. I know that sounds harsh, but I think it's really important for people to be able to articulate themselves in a very logical and decisive way, as opposed to just, you know, having somebody pat them on the back, like, yeah, you were close enough. I don't believe in second-place trophies, right? And I don't mean that to be insensitive. I'm just saying, if we want to be the best version of ourselves, we really need to test ourselves. And the idea of an adversarial conversation with AI, I don't know how you prevent somebody from smashing their computer, but I love the idea of it, because I really do think that it gets people to think very critically and very methodically, and they have to approach the conversation with intention.
[00:39:48] Speaker B: Yeah, yeah. For sure it can be frustrating. Those scenarios, not smashing the keyboard, but getting really angry, they do happen. Because first of all, I mean, everybody that has discussed with AI knows that it can be cringy as hell. Like, it can be really, really weird.
[00:40:07] Speaker A: Yeah.
[00:40:08] Speaker B: So this debate, with a human being you will have these debates and they can get heated, but they remain human. With AI, you're going to have a bunch of emojis, you know, weird, sometimes condescending, sometimes overly friendly text. So yeah, it gets weird. But I think that we are more tolerant with AI. We allow AI to get away with this stuff. Second, we are in a safe place.
[00:40:32] Speaker A: Yeah, totally.
[00:40:33] Speaker B: Student. I feel more Comfortable telling whatever to ChatGPT because ChatGPT is not going to judge you. Or well, at least the current versions of ChatGPT, they will not remember the specific of you, they will not judge you. They are there just to, you know, like encourage you to discover the final answer.
Doing this in the classroom, while everybody's looking at you, with the professor there, doesn't feel as comfortable. So that's why I think this is powerful.
Because in the classroom, the chances that this will happen, it might, but not that often, especially when you consider that not all cultures are as open as, for example, Americans. Here in France, it's really difficult to have these debates openly in the classroom. But with my chat, in my own privacy, of course, I can do it all day.
[00:41:25] Speaker A: Yeah, no, absolutely. And I think that you brought up a really good point. Like when you're having a conversation with AI, it's not going to get emotional.
And as you know, the current climate in the US right now, debate culture is at an all time high, for better or for worse.
And a lot of people get very emotional, and I think a lot of that emotion comes from them not being able to articulate their argument. And so they just resort to, you know, defamatory words or slander, or they come after your personal credibility.
You can't do that with AI. It's not going to play that game. It's not going to react in a way that is going to give you that satisfaction. You just have to show up with the logic, and either you're right or you're wrong.
So I really, really like that. My question for you in terms of the AI boxer is: at what point do you win? Because I could imagine that AI is just going to continue. I mean, it obviously depends on the subject, because if we're talking about philosophy, philosophy is a very open-ended way of thinking, isn't it? There's no right answer.
You can either have this idea or that. Same thing with theology, the whole nine. So it's almost like the AI boxer is great for certain use cases but can also be detrimental in others, if that makes sense.
[00:42:50] Speaker B: Yeah, no, you're totally right. I am not trying to promote this as a, I don't know if you use this word in English, panacea, you know, like a silver bullet. This is not going to work in all cases. In my area, definitely, I have the benefit of teaching something really quantitative in which the answer is really clear.
I can really differentiate between a wrong answer and a clear, correct one. And I provide this answer to AI. So in the AI boxer debate, it ends when the student arrives at an answer that I consider sufficient, one with enough quality, which I provide in advance to AI. So me, as the instructor, Luis, I provide the stopping point. But if we are discussing something more open, I don't know, I don't want to bring any hot topic into the equation, but, you know, these more societal, philosophical, ethical topics, where there is really no clear right or wrong answer.
I just feel that, hey, with these ones in particular, you need to be more careful. Because, you know, the debate can disintegrate really fast, and you can either drift into AI's hallucination territory, which is also not beneficial, or, if you as an instructor provide a clear answer when in fact you know that the topic doesn't have a right or wrong answer, then you are going to bias your students towards your own opinion, your own cognitive biases, and that's unfair to them.
Okay, so I will not use it for everything.
[00:44:38] Speaker A: Yeah, it's interesting. I mean, as you're talking, I'm thinking about different subjects. You have to consider culture, you have to consider political environments. There are just so many little things. And that's the whole question: how do we adapt to it? How do we train these models to be objective and subjective simultaneously? It's just so nuanced, if you know what I mean. It's a very, very complex world to navigate. And so, again, kind of coming back to the academic space.
I think the common theme that I continue to go back to is like, it really is incumbent on the professor and the teacher to direct the students on how to use AI efficiently, but also ethically.
And that's tricky, man. That's a really tricky question. And again, I really think that education, particularly in the States, is really underappreciated. I don't think that we take our teachers as seriously as we should, because they're teaching the next generation of individuals. So I guess the question I would have for you is: if AI can be a tutor, if it can come up with lectures, do you think that teaching and professors are in danger? Do you think that occupation at some point might become obsolete?
[00:46:08] Speaker B: I can only speak based on the current state of technology, based on what we currently have as AI. I cannot speculate about the future, because in the future, if we have real general artificial intelligence, you know, we are talking Skynet level, something that can really simulate a human being without any humans, then that will be different. But right now, with what we have, the answer is: it has never been more important to actually have educators out there. It's just that we have to redefine what it means to educate, what we are going to promote in the educational system.
If what you were doing in class was asking students to memorize pieces of text, or, like parrots, to repeat some exercises without promoting anything beyond that, then yes, I'm sorry to tell you, but your job is in danger, because it's really easy to replace.
But if what you were doing in class was tackling challenging problems to promote critical thinking, then no. Now this technology opens the door for you to become a better version of yourself.
Now you can safely assume that AI is going to help you cover the fundamentals, and you can go into class to touch directly on the fun part, the complicated things, the ones that really promote value and change in society.
So to make my answer short: no, I don't think so, Quinton. I think instructors are relevant for all of the use cases that I talk about in my paper, which are kind of simple to implement. If the instructor is not there, then everything will be a disaster. Asking the students to do this by themselves, without any guidance, is arguably even worse than the current system that we have right now.
[00:48:11] Speaker A: Yeah, I would agree with you in certain subjects.
I think that when it comes to, like, soft-skill subjects, there's no replacing a human being in the classroom. There's no way, right? But when we're talking about, like, mathematical equations, do you think the algebra teacher is in trouble? Do you see what I'm saying? And this is just me thinking out loud, and I want to hear your opinion, obviously. It just feels like there are certain subjects where it doesn't make sense to learn it if we have a calculator.
Right.
And I don't mean that disrespectfully by any means. I'm just, you know, pointing out what I think is almost obvious.
There are certain subjects where, why would I want to learn this equation when I can just ask AI? Why learn, and this is kind of a hard one to talk about, but, like, why learn how to format an essay?
You know what I'm saying? Like, these little things. Now, when we're talking about social skills, when we're talking about being able to collaborate with people, to have an open dialogue, to put your hands on something, that's different.
How could AI replace that?
So I'm just thinking out loud, that's all.
[00:49:28] Speaker B: No, but I mean, you have a fair point. It's just that for some of these subjects that we conceive of as menial, repetitive, and that now seem easy to replace, we have forgotten where they come from.
Let me give you the example of calculus. Calculus is one of the things that, hey, we say is going to be replaced, specifically Calculus 1, okay?
Yes, it's a bunch of rules for derivatives, limits, and integrals. So with AI, this should be easy to replace in the current form in which we teach it. Yes, but we forget where humankind developed this topic from. It was a model to predict the behavior of natural phenomena. When Kepler, Galileo, Newton, Leibniz, Descartes, Fermat were developing this theory, what they were looking for were tools to predict the movement of planets, of falling objects, of how light travels.
So calculus was a tool to look at nature and frame it into a model that you could understand.
A lot of the problems in the books from which we teach calculus, specifically the last ones, the ones nobody does in the homework and teachers do not include in the exams, are actually framed with that objective in mind: observing a natural phenomenon, translating it from words into a problem, into a model, and then applying all of those rules to solve it. We never do those in the classroom because we don't have the time to develop enough skills. But now, if I have AI, I will go directly to those. I will actually show you: hey, Quinton, this is the real application of calculus. This is what you should be learning. How I can see a natural phenomenon and immediately look at it and say, you know what, I think a derivative might help me here. Let me try to write it down.
You see, and we can do this with a lot of the subjects you mentioned. Writing? Formatting an essay, for sure, that doesn't have value anymore. But creative writing, how we actually convey emotions in writing?
That is still relevant. AI cannot do it, at least not at the levels of quality that we expect.
[00:51:57] Speaker A: Right. Okay, a couple of things here to unpack. First of all, brilliant answer. Absolutely brilliant, because I think a lot of times we forget where mathematics comes from, right? It's basically just humans trying to understand the universe, or the environment around us. And so we come up with math.
This is just a story: I never realized how intertwined philosophy and mathematics really are. When I first got into college, long before we got into the MBA program, I took a course called Language, Logic and Persuasion. It's basically just a study of how to formulate a logical argument, how to find fallacies in your argument, so on and so forth. It was so mathematically driven that I was blown away, and I had to take that course twice because I failed the first time.
So I want to call that out, because I think a lot of times we do separate what we think of as very hard knowledge skills from softer, more outside-the-box types of thinking. I never even really put that together. So, very brilliantly put.
The other thing that I want to say is, in terms of writing and being creative, I see a lot of AI models generating songs, and it's beautiful. It's beautiful what AI can do. I personally don't think that there's ever going to be a point where people appreciate art coming from AI as much as art coming from a human being. And I associate that with everything: with painting and drawing, sculpting, music, you know, writing a book. It's almost like the same thing with cubic zirconia and diamonds.
Yes, we can produce a diamond in a lab. No one gives a sh** about that diamond. You know, they're basically worthless. But if you get it from a mine, then everyone's like, oh, this is amazing. So I just wanted to call that out, because I think that's an interesting concept. But my question is, how is the next generation going to look at it? That's just how I look at it, and that thought process is a product of how I was raised, a product of my experience. And look, I can't draw for diddly squat. So when I see somebody else do it, I'm like, wow, that person's brilliant, they're so talented. Will the next generation, or the generation after that, appreciate it the same way that I do? Because they haven't had the same experiences that I have.
[00:54:38] Speaker B: Right, right, right.
Quinton, I like this conversation, because you're touching on, you know, the really important stuff. I had this debate, like, one week ago.
Okay, so is the next generation going to perceive art and the value of art in the same way that we perceive it? I will argue that no, because you are valuing the technique, the hyperrealism, the skill of looking and translating that into any medium. Okay, but if that's already a given, because you have these AI tools that can do it perfectly, then we have to reevaluate: okay, what is the value of art? And I will argue that, at the end of the day, for all forms of artistic expression, the final objective is to provoke an emotional reaction. If you look back at Greek theater, the objective was to promote catharsis. Like, hey, I want to see a drama and cry; I feel emotionally attached to this art. Music, writing, I would say video games, movies, they all have, at the end of the day, the same purpose. So for the next generation, it's like, okay, I value human art because it allows me to reach that state of catharsis, of emotional attachment, faster than a hyperrealistic drawing of something that might look beautiful from the technique perspective but is actually soulless.
But this has happened before.
Let's go back, for example, to the period in which we were moving from neoclassical art to realism and then Impressionism.
We reached a point in which technique was highly, highly developed. You could basically produce almost a photograph with a hand painting.
And then we had this bunch of artists who said, look, this is nice, but this is boring. The subjects are always the same, religious or Greek, repetitive.
I want to portray real people suffering the vicissitudes of life. And that's what they did. And then they came up with: I want to do the same, but I want to make a painting that is so intertwined with the viewer that everybody will have a different impression when they look at it. And then you have other artists who took this a step beyond, and they said, how much can I strip away the methodology, the technique, and still convey the final meaning of the drawing? And that's how you reach modern art, in which, hey, somebody might draw a triangle and a few squares, but I am still able to see what the artist intended, you know. So it's very subjective.
[00:57:45] Speaker A: Yeah.
[00:57:46] Speaker B: And I am certain that we are going to reevaluate what we value in art, especially art produced by a human. But having said that, one final point here is that with AI we have opened the floodgates for more art. And I will tell you why. You said that you cannot draw, but now you can speak to one of these models and materialize an idea that you have here in your head. Now it's going to be real. And from that starting point, maybe you can modify it and shape it into the vision that only existed in your head. Before, because you lacked technique, you lacked the fundamentals, it was always going to remain there.
Now you can share it with the world, Quinton. And that's something that I find amazing.
[00:58:34] Speaker A: I love it. I love it. What a wonderful, wonderful way to put that together, my man. Yeah, look, I think we're going to lose something with AI, something very, very human. But there's always going to be another door that opens.
And it just takes optimism to be able to see that door. Right.
I don't think that there's ever going to be anything that can replace live music.
I really don't. I mean, you can listen to a classical concert in your headphones, and it sounds beautiful. Go to a concert, and it's a completely different experience in and of itself. You can say that about rock and roll. Hip hop sucks when it's live, so don't go to live hip hop. But apart from that, I don't think any other form of music is going anywhere.
But I love what AI now gives the people who don't have those fine motor skills but who want to be creative. It gives us the ability to be creative just based on what's living in our minds, if we can prompt AI properly.
So my last question for you is: by 2030, is the definition of being educated going to be different than what it is today?
[01:00:02] Speaker B: Okay, let's differentiate between the original intention of our educational system and the status quo that we have right now because something was corrupted along the way.
Our current education system was built to provide means of production for the different states. That's the reality after the Industrial Revolution.
We educate to create problem solvers who go into the job market and keep pushing the boundaries of the economy. That's the reality.
The original intention at the end of the day was to transfer skills so that these people can increase production and solve problems. Okay? Innovate and do this stuff.
Along the way, we have created a system that actually values memorization and exam taking.
So is the definition of education going to change? Well, the corrupted one, the one that should never have existed, yes, for sure it's going to change. Of course it's going to change. The other one?
I don't think so. I think it's still going to be relevant. At the end of the day, my responsibility as an educator is still going to be to provide skills for my students to adapt correctly to the job market, to provide value, to foster innovation when they should, and to be adaptable to what is going to come. That's the critical thinking and problem-solving skills, combined with whatever skills I can transfer at the moment. Okay? That's still going to be the same thing in 2030. It's just that the tools and the skills that we teach are going to change. That original definition, no, it remains unchanged.
The one about memorization and exam taking? That one was already useless a long time ago. Now it's even more useless.
[01:02:01] Speaker A: Okay, yeah. No, I couldn't agree more, man.
Look, I know everything that we talked about today is absolutely theoretical. We don't know what's going to happen.
I think it's going to be really interesting if and when humans can master quantum computing.
Here's my theory about that, really quick. I think once we master quantum computing, we are just going to become a singularity, ultimately a black hole. In fact, I think every black hole in the universe is just the remnant of a society that figured out quantum computing and now lives in a completely different reality. But I don't know. That's just my theory.
[01:02:45] Speaker B: I have a more optimistic perspective, because once you get into these topics, you discover, okay, it's like a stone that you turn upside down, and you discover that, oh, there are even more layers and more plot holes. The more you know, the more you know that you don't know. So mastering quantum? I don't think that's ever going to happen. We can push the boundaries of knowledge a lot, but there's always going to be something that we don't master. I don't know if you believe in creation or if you are a believer, but if someone or something created this universe, it really didn't want humankind to have all the power. It will always have left something, you know, hidden, to remain hidden from us.
[01:03:33] Speaker A: I'm going to be totally honest, that makes me feel so good that you say that. I don't know the first thing about quantum computing. I mean, I do, to a very small degree, but it scares the crap out of me. So I'm glad that somebody as brilliant as you isn't fully bought into our ability to master it at this point.
[01:03:54] Speaker B: No, no, man. Well, thanks for saying that, but you need to, we all need to be critical of what we hear, especially about these technological leaps, because the majority of the stuff that we hear is from companies that are basically trying to sell this stuff. It's their objective, their responsibility, to create hype so that they can come up with funding and perpetuate the company. But we as consumers, as members of this society, have to challenge back and say: is this really going to be the generational leap that they are calling it? Quantum computing is going to play a really important role in technological development, especially for stuff that is really difficult to compute.
But it's not going to give us these automatic superpowers. I mean, it's not going to be like, ah, all of a sudden we are this higher, all-powerful entity. No, it's not going to happen. And it's kind of the same with AI right now. It's a massive hype.
When things settle down, we're going to look at it and say, okay, we can provide value here, but it's not superpowers. It's still not Skynet. Terminator is still far away. So let's remain calm.
[01:05:11] Speaker A: Awesome.
Luis, I just want to say thank you so much for jumping on this podcast today. I've always had a tremendous amount of respect for you as a person, but also for your relentless pursuit of, you know, intellect. You've always led with curiosity. You were one of the most brilliant people in our class, and to have you on is a true honor. So thank you so much, brother. I appreciate it.
[01:05:37] Speaker B: Thanks for inviting me, Quinton. I really enjoyed this conversation. Cool.
[01:05:41] Speaker A: Awesome. Well, we'll run it back, my man. Before we go, anything that you want to plug, anything you want to talk about, any final thoughts?
[01:05:49] Speaker B: So, one of your questions was about the job market, about jobs that are going to disappear. But I think right now it perhaps doesn't fit with the final conversation that we had.
[01:06:03] Speaker A: Okay, awesome.
[01:06:04] Speaker B: We'll just leave it out.
[01:06:05] Speaker A: All right, brother. Well, thank you so much again, Luis. You're the man. I appreciate you, brother.
[01:06:09] Speaker B: Thanks, bro.
Thanks again for inviting me.
[01:06:12] Speaker A: My pleasure.
[01:06:13] Speaker B: By the way, if you come to France, please let me know. Like, I will travel to Paris to see you if you are there.
[01:06:19] Speaker A: Absolutely, my man. Absolutely.
[01:06:22] Speaker B: Like, it would be really, really nice to share a beer.
[01:06:26] Speaker A: We'll talk about it. I'll study quantum computing, and we'll debate what happens if we're able to master it.
[01:06:33] Speaker B: Okay? Perfect, bro.
[01:06:34] Speaker A: Awesome, man.