I got in a huge (for me) argument the other day with a fella who was hell-bent on using AI to teach kids to read. I don't think I got through to him at all. It was all the more frustrating because he turned out to be a teacher!
To me it seems obvious that we let the robots do the drudgery to free up our time to do the important things in life, like teaching our children.
Ever since I learned of Bloom’s 2 sigma problem, I have wondered how far we can advance humans with the aid of computer tutors. I would never suggest that AI should be a replacement for human teachers, but it could definitely supplement them. It seems to me that cultures that do this will maximize human capability better than cultures that do not.
The challenge of AI (and technology in general) is to answer "What are people for?" or, in other words, "What is good?"
(Not to get ahead of the discussion, but I would suggest that this is a question no computer tutor can help answer, except perhaps in the negative?)
------
edit to add: "Bloom’s 2 sigma problem" I just looked that up on the wikipedia. The guy I was arguing with was a follower of Bloom. AFAICT I couldn't even begin to rattle him on the idea that Bloom's whole thing is goofy from first principles.
I don't want to get into another argument on the subject of Bloom.
There are many humans not getting as much enjoyment out of life as they could because there was not enough human teacher labor to help them.
I, at 51 years old, am making use of GPT-4 as a tutor on topics I do not understand well enough.
I have zero doubt that computer tutors will help many people flourish.
edited to add:
It sounds like you don't want to discuss this further, but I would love to understand what makes you think Bloom's findings are "goofy". I had to read his paper to form an opinion about it.
> It is not clear to me what you are trying to say.
Two things, really.
The first is an economic argument: We've got these machines that are [about to be] able to do anything a human can do (at least in terms of economic activity) and we have to figure out how to live with them, and I posit that they should be made to do the drudgery to free us up to do the good, human things, like teaching kids how to be human.
The second point is epistemological. I don't think the purpose of education is to secure a place in the economic hierarchy ("Go to college to get a good job," however common, is wrong.) The purpose of education is to teach humans to be human, and that can only be done by interaction with other humans. We start out as talking apes, a kind of meat-robot, and we become self-aware beings, or at least we have the opportunity to so become, if we have good teachers around us.
A machine can help you accumulate facts, but it can't educate you (in the sense that I'm talking about.)
(And that's why Bloom is goofy to me: you can't measure "mastery" in any meaningful sense.)
> There are many humans not getting as much as enjoyment out of life as they could because there was not enough human teacher labor to help them.
I'd say that the problem is to supply more "human teacher labor", not replace it with ersatz.
> I, at 51 years old, am making use of GPT-4 as a tutor on topics I do not understand well enough.
I imagine at your age you've acquired the depth of character and education to make good use of its capabilities. Cheers!
> I have zero doubt that computer tutors will help many people flourish.
If the machines turn out to be inexpensive inexhaustible excellent teachers (and dare we hope, psychologists!?) I'll be delighted to be proved wrong. I still think it's preferable to have human tutors, and I'm excited by the possibility of machines relieving people of drudgery to free them up to spend more time with their families, and so on...
> A machine can help you accumulate facts, but it can't educate you (in the sense that I'm talking about.)
We are clearly on the precipice of having machines that can model the human mind and help us figure out where we are blocked on learning a topic.
> you can't measure "mastery" in any meaningful sense
This seems a ridiculous assertion to me. Bloom's "Learning for Mastery" paper explains what he means by mastery, which is quantified in tests.
I don't see how you can reconcile your belief that "We've got these machines that are [about to be] able to do anything a human can do" with your belief that "A machine can help you accumulate facts, but it can't educate you".
I never thought or wrote that "the purpose of education is to secure a place in the economic hierarchy". I specifically wrote that we can use computer tutors to augment human tutors.
The best we can do with human tutors is 1:1 tutoring. With computer tutors, we will have a lot more capacity available.
I meet many people who have been conditioned to think that they are "not good at math" or that they "don't have a brain for math or science" or that they are not "artistic" enough to paint or play an instrument. I think, in most cases, this is simply wrong: a rationalization, or the product of conditioning. Most people just did not have enough personal tutoring to get them past whatever obstacle was next in the way of learning.
Computer tutoring is going to unlock potential, not just for economic success, but to give humans more options to explore whatever they want. Computer tutors will be able to model the human brain and, like human tutors, know how to rephrase something to help a student understand.
When I was a boy learning algebra and calculus, I had to contend with my classmates for time from the teacher to get explanations on certain questions. When I wanted to learn an instrument as a boy, there were simply not enough tutors available. Computer tutors will greatly amplify teaching capacity.
So would you agree with Weizenbaum that it’s immoral to have a computer teach a child to read? If the child doesn’t have a teacher available I assume you’d find that immoral as well but then who should suffer for that immorality?
> So would you agree with Weizenbaum that it’s immoral to have a computer teach a child to read?
I'm not a moralist, so technically, no. As I pointed out in a sibling comment, you have to have a theory of "good" before you can have a moral structure to guide you to it, eh?
That's the challenge I see AI (et al.) raising for humanity: We have effectively defeated the prior order of things with science and technology, now what? What is good? What are people for? What IS the ultimate question of Life, the Universe, and Everything?
> If the child doesn’t have a teacher available...
Then that's the problem to solve, eh?
Let the robots do the work and we can hang out with our families, eh?