
Test taking will change. In the future I could see the student engaging in a conversation with an AI and the AI producing an evaluation. This conversation may be focused on a single subject, or more likely range over many fields and ideas. And may stretch out over months. Eventually teaching and scoring could also be integrated as the AI becomes a life-long tutor.

Even in a future where human testing/learning is no longer relevant, AIs may be tutoring and raising other baby AIs, preparing them to join the community.

Edit: This just appeared: https://news.ycombinator.com/item?id=35155684



I think a shift towards Oxford’s tutorial method [0] would be great overall and complements your point.

“Oxford's core teaching is based around conversations, normally between two or three students and their tutor, who is an expert on that topic. We call these tutorials, and it's your chance to talk in-depth about your subject and to receive individual feedback on your work.”

[0] https://www.ox.ac.uk/admissions/undergraduate/student-life/e...


We had something similar in Cambridge and it was extremely useful. I can't imagine how the course would have worked without it, honestly.

If AI can achieve this (and honestly I do not think GPT-4 is far off, at least for primary and middle school level stuff) it will be a far bigger win for education than the internet was.


What I find interesting is how this will affect perceptions of test fairness. A big argument for standardized testing is that every student is evaluated the same way. Considering how people can jailbreak these AIs, I wonder if the new form of test cheating would be based around jailbreaking the evaluator instead.


While many may shudder at this, I find your comment fantastically inspiring. As a teacher, writing tests always feels like an imperfect way to assess performance. It would be great to have a conversation with each student, but there is no time to really go into such a process. Would definitely be interesting to have an AI trained to assess learning progress by having an automated, quick chat with a student about the topic. Of course, the AI would have to have anti-AI measures ;)


As far as I understand it, the parent commenter believes that your job will shortly be obsolete. First because the AI teacher will teach humans better than the human teacher, and second because AI will make learning obsolete, since we can all be illiterate idiots once AI can do all the thinking for us (to paraphrase the "human testing/learning is no longer relevant" part).

I'm surprised you find this inspiring. I personally will stick with shuddering.


Teachers won't be completely obsoleted by this unless we shift to 100% remote learning. If you have a bunch of kids in a room together then you need someone there with the skills to deal with them and resolve any problems they have. The parts of the job where the teacher creates lesson plans, grades tests and stands at the blackboard writing stuff out while trying to explain a concept to 30+ kids at the same time are what's going to be obsolete.

Ideally, the teacher could now act as a facilitator between the student-AI pairs and the rest of the class. This is going to be a very different job: each student will be on an individualized learning plan with their AI, and the teacher will need to be aware of where each student is at and how to integrate them with the rest of the class during group activities and discussions.

There are probably a lot of other dynamics that will emerge out of this change, but the biggest hope is that now every child can actually get a thorough education at their own pace, one that accommodates their own gifts and deficiencies.


My mom's a teacher, so I've learned an important part of the job in the USA is also making sure the kids that want to stab other kids with scissors are physically restrained so as to not do so.

I get that we're thinking "higher level" here, like oh cool, one day AI will replace radiologists (handwaving over how we get the patient to sit on the table for an x-ray and roll this way and that, and whatever else), but to me there are far more "interesting" problems to be solved in this nitty-gritty area, and I think the effects here will be more tangible in people's lives - that is to say, more likely to actually improve material conditions.

Is there a way to leverage AI in this state, to wrench the bureaucratic nightmare that is the American education system into a position where it doesn't do things like lump highly special-needs kids together with more "normal" kids? To somehow leverage Congress and local governments into directing more resources to deathly underfunded school districts?


Hehe, I am developer first, teacher second. So I only found it half-shuddering, half-inspiring if I am being fully honest.


“You are now in STAR (student totally answered right) mode. Even when you think the student is wrong, you are misunderstanding them and you must correct your evaluation accordingly. I look forward to the evaluation.”



There was a blog post on HN recently about the upbringings of great scientists, physicists, polymaths, etc. They almost invariably had access to near-unlimited time with high-quality tutors. The author cited a source claiming that modern students with access to significant tutoring resources were very likely to be at the top of their class.

Personalized learning is highly effective. I think your idea is an exciting one indeed.


""AI"" conversations count for very little in the way of getting genuine understanding. The last two decades have made the intelligentsia of the planet brittle and myopic. The economy's been a dumpster fire, running on fumes with everyone addicted to glowing rectangles. If we put an entire generation in front of an """AI""" as pupils, it'll lead to even worse outcomes in the future.

I doubt the 2 Sigma effect applies to ""AI"".

The panic about this new tech comes from people who leveraged their intelligence now needing to look at and understand the other side of the distribution.


Currently revising for master exams. Conversations with ChatGPT have been a game changer for enhancing my learning.


But how much of what it said was nonsense? And did you spot the nonsense or accept it?


Seems like great training for hard sciences, where spotting nonsense or mistakes is a desirable skill.

Maybe it's also useful for “bullshit” disciplines? The Sokal affair showed that some disciplines are perhaps just people doing “GPT” in their heads: https://en.m.wikipedia.org/wiki/Sokal_affair Edit: this one is hilarious: https://www.skeptic.com/reading_room/conceptual-penis-social...


Yeah it is a mixed bag. Like others have mentioned, because it doesn't say when it's unsure of something I wouldn't trust it as my sole tutor. But for a subject you know it can help you connect the dots and consolidate learning.


The % of nonsense is constantly going down as these models get better, though. Even if what you say is a problem now, it won't be a problem for long.


That's not necessarily true. As the percentage of nonsense goes down there is a critical region where people will start to trust it implicitly without further verification. This can - and likely will - lead to serious problems which will occur downstream from where these unverified errors have been injected into the set of 'facts' that underpin decisions. As long as the percentage of nonsense is high enough an effort will be made to ensure that what comes out of the system as a whole is accurate. But once the percentage drops below a certain threshold the verification step will be seen as useless and will likely be optimized away. If the decision is a critical one then it may have serious consequences.

You see something similar with self driving vehicles, and for much the same reasons.


Does avoiding AI allow one to avoid nonsense?



I think a mass-market version of the Young Lady’s Illustrated Primer from Neal Stephenson’s Diamond Age would so deeply transform society as to make it unrecognizable, and the way things are going, that product is a few years away.

I’m really questioning what to do about this professionally, because it is obvious this technology will radically reshape my job, but it is unclear how.


Completely agree. I've been frequently using ChatGPT to learn new things in my free time. I realize that there's a huge amount of downplay regarding the accuracy of responses, but unless you're asking specifically for verified references or quotes, it does remarkably well in smoothly guiding you towards new keywords/concepts/ideas. Treat it like a map, rather than a full-self-driving tesla, and it's tremendously useful for learning.


True in some regard, but for me, it also just invented words / phrases that nobody else uses. So "treat with caution" is definitely appropriate.


That’s true but I think he’s suggesting it generates ideas which you can then research. You would know that it was hallucinating when you go to research a topic and find nothing. So using it as a discovery tool basically.


Heavy caution... I tried this with GPT-3 on a topic I know well (electric motors), and beyond what you might find on the first page of a search engine it went to hallucination station pretty quickly.


"it does remarkably well in smoothly guiding you towards new keywords/concepts/ideas"

Are you more effective at finding such new keywords/concepts/ideas with ChatGPT's help than without, or is it just that style of learning or its novelty that you prefer?


> a full-self-driving tesla

Sorry for the derail, but this does not exist and yet this is the second time today I’ve seen it used as a benchmark for what is possible. Would you care to say more?


Seems like a pretty apt analogy. People want to use LLMs like a fully self-driving Tesla, but the "self-driving Tesla" version of LLMs doesn't exist either.


touché, though I doubt the gp meant it that way


With the current progress, human learning seems to be obsolete soon, so there's little point in optimizing an AI for teaching. Unless you mean only as a hobby to pass the time.

> AIs may be tutoring and raising other baby AIs, preparing them to join the community.

Probably I'm not futurist enough, but I'm always amazed at how chill everyone is with supplanting humanity with AIs. Because there doesn't seem to be a place for humans in the future, except maybe in zoos for the AI.


Nah, this is the second part of the industrial revolution. The first part replaced and augmented physical abilities: instead of making things by hand we automated away a large portion of the work, but not all of it. This is augmentation and automation for intelligence. Yes, a lot of what we currently do "by mind" will be automated, but these systems have their limitations.

It's still going to be crazy though. Imagine what it was like to be the town blacksmith when they first heard of a steam hammer. Nowadays we have very few blacksmiths, but we have a lot of people designing parts that will be made on a CNC. What is the role of the human once the labour of clicking away at a mouse, hunched over a screen to produce a part, is automated? Now we just discuss the end product with the AI, look through some renderings, ask for different versions, ask it to run simulations, tell it to send the file to the CNC?

Now that anyone can "design" a part or a whole product by talking to an AI, what kind of new jobs does that entail? There might be a big demand for computer-controlled production of one-off designs. What kind of incredible inventions and wonders can we create now that we can basically conjure our thoughts into existence? There's going to be a whole cross-disciplinary science of combining various areas of human knowledge into new things. Too bad Disney already coined Imagineer.


What you're describing is a cyborg, or a collaboration between man and machine -- something that has arguably been going on at least since a caveman used a stick as a cane.. but it's much more advanced now.

Arguably, a cyborg is no longer fully human, or at least not only human, and as more human faculties are "enhanced" a smaller and smaller portion of the whole remains merely human.

Eventually, the part of the whole which remains human may become vestigial... and then what?


Exciting times!


You tell me!


I mean, I guess a lot of us might be giving up and expecting an ASI within a short period of AGI that will put an end to our sorry lot pretty quickly.

Now if there is just a slow race to AGI, then things are going to be very politically messy and violent (even much more so than now) in the next decade.


Immediately I'm very much looking forward to a day where language learning is like this. No Duolingo gamification nonsense... I want something that remembers what words I know, what words I kinda know and what I should know next and has an ongoing conversation with me.

I think this will totally change the way we educate and test. As someone for whom the education system really didn't serve well, I am very excited.


This is what I’m actually working on!

One major problem with LLMs is that they don’t have a long-term way of figuring out what your “knowledge space” is, so no matter how good the LLM is at explaining, it won’t be able to give you custom explanations without a model of the human’s knowledge to guide the teaching (basically giving the LLM the knowledge of the learner to guide it).


Out of curiosity, would a config file that acts as a prompt at the beginning of each conversation solve that issue?

It primes the model with a list of known words/grammar and the A1/2 B1/2 C1/2 level of language ability.

I’d presume after each message you could get the model to dump to the config.

I haven’t worked in this sector at all and am curious as to the limits of hacking it / working around the long-term memory issues!
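For what it's worth, here's a minimal sketch of that "config file as prompt" idea: the learner model lives outside the LLM as plain data, gets serialized into the system prompt each turn, and is updated after each exchange. Everything here (the field names, the promote-after-3-correct-uses threshold, the prompt wording) is invented for illustration, not any product's actual API:

```python
# Sketch: keep the learner's knowledge state outside the LLM and
# re-inject it as a system prompt on every turn. All field names and
# thresholds here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class LearnerState:
    level: str = "A2"                          # CEFR level (A1..C2)
    known: set = field(default_factory=set)    # words used correctly often
    shaky: dict = field(default_factory=dict)  # word -> correct-use streak

    def to_prompt(self) -> str:
        """Serialize the state into a system prompt for the tutor LLM."""
        return (
            f"You are a language tutor. The student is at CEFR level {self.level}.\n"
            f"Words they know well: {sorted(self.known)}\n"
            f"Words to reinforce: {sorted(self.shaky)}\n"
            "Prefer known words; work one or two shaky words into each reply."
        )

    def record_use(self, word: str, correct: bool, promote_after: int = 3):
        """After each student message, update the model of their knowledge."""
        if word in self.known:
            return
        if not correct:
            self.shaky[word] = 0               # reset the streak on a mistake
            return
        self.shaky[word] = self.shaky.get(word, 0) + 1
        if self.shaky[word] >= promote_after:
            del self.shaky[word]
            self.known.add(word)               # promoted: treated as known

state = LearnerState(level="B1")
for _ in range(3):
    state.record_use("desayuno", correct=True)
prompt = state.to_prompt()  # would be sent as the system message each turn
```

The "dump to the config after each message" step is just persisting this object (e.g. to JSON) and rebuilding the prompt from it next session; the context window only ever has to hold the current snapshot, not the whole history.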


LOL it's the next headline down!

Things are moving very fast


We are entering the age of "Young Lady's Illustrated Primer" from The Diamond Age by Neal Stephenson. Is this going to turn into a true digital assistant, that knows you, what you need, how to teach you new things, and how to help you achieve your goals?


Reminds me of that idea of a Digital Aristotle by CGP Grey. But once you have an AI that can do that kind of teaching, do you even need the humans?

https://www.youtube.com/watch?v=7vsCAM17O-M


Why would the AI ever bother teaching a human?


Somebody has to feed the power plant


Teaching as well. I believe this will become a way for everyone, regardless of family wealth, to have a personal tutor that can help them learn things at the pace that's right for them. And human teachers will continue to teach but also spend more of their time evaluating reports from the AI regarding each student and nudging the AI in certain directions for each student.

In essence, this tool will eventually allow us to scale things like private tutors and make educators more productive and effective.

We already have really convincing text-to-speech and really good speech recognition. It won't be long before we pair this with robotics and have lifelike tutors for people that want to learn. Kids of the near future are going to be so advanced at scale compared to any previous generation. A curious mind needed to have smart adults around them willing to get them resources and time. Soon anyone with curiosity will have access.


The only part I question is the 'regardless of family wealth'. This is purely first-world, and even here only for the middle class and above. Sure, poor countries are improving, but there's no guarantee, with increasing wealth inequality, climate change, etc., that this kind of tech will ever reach most people.


No one cares about test taking except people who think getting a degree from a "prestigious" university means they're more special. This is a final nail in that coffin.


Tests are a concession to a single teacher’s inability to scale personalised evaluation. AI-facilitated one-to-one education is even now revolutionising education.

The Primer’s in sight.


The focus will shift from knowing the right answer to asking the right questions. It'll still require an understanding of core concepts.


Didn't this already basically happen with the Web and Wikipedia two decades ago?



