I assume you disagree that there is such a thing as "responsible AI use", because apart from that I agree with everything you write; your "spot wrong answers, but feel becoming lazy" matches my own experience.
So I suppose you think that becoming lazy is always irresponsible?
It seems to me, then, that either the Amish are right, or there is a gray zone.
As a CS teacher, my use of "responsible AI use" probably comes from a place of need: if I can say there is such a thing as responsible AI use, maybe I can pull the brake a little for learners. LLMs, in all their versatility, seem like a great disservice to students. I'm not convinced they're entirely bad, but they are overwhelmingly bad for weak learners.