
I'm all for moving to higher-level skills. When I was an adjunct professor, I usually didn't put a huge emphasis on syntax unless I thought it was important for explaining a concept, and I very rarely took off points for syntax.

That said, I don't know that I would allow AI in a class where people are learning to code. I do not consider "prompt engineering" to be a substitute for programming, and it's very easy to get AI to write some code, copypaste it without understanding anything it said, and be done.

Last year, when I was administering a test, I said that they were allowed to use any IDE they wanted, but they could not use any AI assistance. I am pretty convinced one student just copypasted my test questions into ChatGPT or BingAI and then copied the answers. I didn't have a way to prove it so I had to give him the benefit of the doubt and grade it assuming he did it honestly.

Before someone says "YOU SHOULD TEACH THEM TO USE THE AI TOOLS LOL THAT'S THE FUTURE!!!", stop. That's not true, not yet at least. Just because I can call an Uber doesn't mean I get to say I know how to drive. Just because I can go to McDonalds doesn't mean I get to say I know how to cook. I was teaching them how to write Java and Python, the goal of the class was for them to learn Java and Python, it wasn't learning how to copypaste into OpenAI.



You can't learn to play the guitar by reading books about it, or watching YouTube videos, or reading sheet music, or getting "personalized AI tips".

You can only learn to play the guitar by picking it up and spending a lot of time mucking about with it. There's a bit more to it, but this is really the core: you need to play and get that muscle memory and "feeling" for it. You need to "rewire your brain".

Coding, or any other skill for that matter, is no different. The only way to learn to code is to actually write code. Even if you fully understand anything ChatGPT gives you (which most students probably don't), that's no substitute for actually writing the code yourself.

I don't see how it's not hugely harmful for the development of students and junior programmers to use AI. Even if these tools were perfect (which they're not), you need to develop these basic skills of learning to read the code. You need to make mistakes and end up with crummy unmaintainable code, and realize what you did wrong. Etc. etc.

Even for senior programmers I'm not so convinced AI tools are actually all that beneficial in the long run.

That will only change when AI systems can fully understand entire systems and their full context and can completely replace human programmers. I'd estimate that's at least 50 years off, if not (significantly) longer. Until then, you need someone who fully understands the code and context, in depth.


Having a vague, conceptual understanding of a topic is often confused with expertise. Also known as knowing enough to be dangerous.

What worries me is that AI code is getting complicated and harder to correct. I recently noticed that after my IDE generated some code I felt a palpable sense of dread.

I often have to go over it in detail because it makes subtle, hard-to-catch errors. It pulls me out of my flow. I am starting to dislike that effect, and I am certain huge swaths of the (new) coding population will lack the skill and motivation to correct these things (as various earlier comments here show).

Power tools don’t make you a master carpenter.


This same issue existed with Stack Overflow, but AI further lowers the barrier to be dangerous by throwing a brick on the gas pedal.

I’ve also found it makes otherwise knowledgeable and experienced people think they can do stuff without learning it. I’ve had several people on my team tell me they think Copilot will help them get up to speed and help out with some of the stuff I’m working on. So far none of them have done anything, and I’m not sure how Copilot explaining a block of code is going to do anything for them. The syntax is already very easy to read, and they all already have coding experience in other languages. It’s a tool that lets them think helping will be easy, so they volunteer, then do nothing because that isn’t the reality of the situation.


> Coding, or any other skill for that matter, is no different.

Back in the early days of computing people wrote software by working it out on paper, encoding it on to punch cards, and then giving that program (deck of cards) to an operator who loaded them and ran the code. If it didn't work properly you would get back a print out of the 'debug' which amounted to a memory dump. You'd then patch your punch cards based on working out where you'd screwed up, and try again.

That was the computer software industry for about a decade before time-sharing, VDUs, etc.

People absolutely did learn to code by reading books and nothing much else. Access to computers was so restricted (because time to use a computer was shared between lots of people) there weren't any other options.

Heck, I learned a lot of early web stuff like TCP, HTML, etc reading books at my parents house when I was at home from uni and didn't have my computer with me, and that was the late 90s. Of course you can learn coding by studying the theory without practicing. It's just a lot less fun.


That is actively engaging your brain, just at a very slow pace with a very slow feedback loop.

Dijkstra's "I had everything worked out before I wrote the code, because the computer didn't exist yet" is the same. It's like working mathematics out: that's not "just reading", it's actively engaging. Actively writing. Completely different from the passive consumption of AI output.

> Of course you can learn coding by studying the theory without practising. It's just a lot less fun.

No you can't. When you first opened an editor or IDE you wrote some vague code-shaped junk that probably wasn't even syntactically correct, and you didn't learn programming until you practised. Of course you need to learn some theory from a book or instructor, but that's not the same as actually learning the skill.


Yep, Knuth learned by reading an IBM manual with source code while sitting on a beach during summer vacation. Decades later, systems hackers learned by reading illicit copies of Lions' Commentary on Unix w/Source.


Exactly, it's the same with autonomous driving or any other activity (even art!) that has always required a human. While it can "look cool" and like it "understands the context" on the surface, when diving deeper it will _always_ misunderstand the nuance, because it doesn't have all of the lived human experience and reasoning.

Being even 80% (or 90 or even 95) there isn't enough - something will always be missed because it's only able to "reason" probabilistically within a narrow area not far away from the training data.


This is very well put. I think the next point of discussion is what fraction of "software developers" "need" to know how to code in the deep way you describe.


I don’t think the guitar analogy is a great one. You have to learn guitar by picking it up because no expert plays it using personalised AI. But there are expert programmers who use lots of AI tools, so AI arguably is an important part of learning to code.


Doesn't matter what (some) experts use; it's about learning fundamentals. You can only become fluent by writing and reading code, at times struggling to do so. This is the only way to learn any serious skill in life, from programming to music to chess to woodworking to cooking to physics to ... anything.


I'd say that especially because assistants are not good enough yet, it makes sense to allow students to use them. LLMs will produce output that is messed up, students will spend time figuring out how to fix that and learn the language grammar this way.

I recently learned Swift & Metal this way - knowing Python and other languages, I kept pasting my pseudo-code, asking GPT to convert it into Metal/Swift, and then iterating on that. Took me a week or two to get the language without reaching for any sort of manual/tutorial. Speaking from experience - it would take me 2-4x as much time to learn the same thing otherwise.

If GPT was any better, I'd have no need to fix the code, and I wouldn't learn as much.


I want to emphasize that I'm not really a luddite here. I pay for ChatGPT (well, Kagi Professional), I use it daily, and I do think it's a very valuable learning tool. I even encouraged my students to use it outside of class if they needed help with any of the theory.

I use ChatGPT all the time for learning new stuff, but a) I already know how to program well enough to teach a class on it, and b) it's a supplement to doing stuff on my own, without AI help. I don't feel I learn that much from just copying and pasting.

Totally agree with your process though; correcting and arguing with ChatGPT is an insanely good way to learn stuff. It's really helped me get better at TLA+ and Isabelle proofs.


Just FYI, Kagi's FastGPT is not remotely comparable to GPT4 in quality.


I don't use FastGPT, I just Kagi Professional with the GPT-4 model.


Ah ok. I'm on Kagi Family which doesn't have access to that, didn't realize it was an option for Pro!


You can upgrade any family member to the Kagi Ultimate plan for $15/mo and it will give you unlimited access to GPT-4, Claude 3 Opus, Gemini 1.5 Pro and other state-of-the-art models.


Yeah, they don't advertise it very well. It was actually a pretty good value add for me; I was already paying $10 for Kagi search, and $20 for OpenAI, so the $25 for both actually saved me $5/month.

I think you can upgrade to Ultimate at a per-account level for another $15/month.

EDIT: Accidentally wrote "pro" when I meant Ultimate.


Probably still not worth it for me; I use gpt4 fairly heavily via API and still my API charges only come to about $5-7 a month.


Did you write your own tool to query ChatGPT via API, or what are you using?


I use typingmind.


Do you mean Kagi Ultimate? I was under the impression that the Pro plan didn't have access to GPT4.


Yep, I mistyped...I meant ultimate. Edited the comment.


> I recently learned Swift & Metal this way - knowing Python and other languages, I kept pasting my pseudo-code, asking GPT to convert it into Metal/Swift, and then iterating on that.

Significant here is that you knew other languages already. I've had the same experience with GitHub Copilot, but I'm cautious about recommending it to new learners who don't yet know the fundamentals.

Everything I've seen indicates that it's easy for people who already know the foundations of programming to use AI tools to learn new tech, and that it can actually be the most effective way for them, but that it doesn't work nearly as well for people who don't have knowledge of other languages and frameworks to lean on.


> and that can actually be the most effective way for them

A lot of it is just a general lack of learning material for that demographic. For most languages, you get to choose between an introductory text that assumes you've never even heard of a "pointer" or an advanced text that is meant as a reference for an experienced developer of the language.

There's not really a lot of stuff out there to teach someone who already knows another language, who just wants to know the syntax and common idioms in that language. For example, a C programmer can easily pick up Python, learn some syntax, and go nuts -- but they just won't know about stuff like list comprehensions or dataclasses unless someone points it out. They'll write Python code that reads like C, not like Python.
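(To make that concrete with a small hypothetical example, not taken from any real codebase: both snippets below compute the same thing, but only the second reads like Python.)

    nums = [1, 2, 3, 4, 5]

    # C-style habits: index-based loop and manual accumulation
    squares = []
    for i in range(len(nums)):
        if nums[i] % 2 == 0:
            squares.append(nums[i] * nums[i])

    # Idiomatic Python: a list comprehension says the same thing in one line
    squares = [n * n for n in nums if n % 2 == 0]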


It's similar with maths - calculators are invaluable, but being actually good at advanced maths just NEEDS you to grind through basic calculations so you adopt those basic "pattern recognition" skills for how numbers work and how they can be broken down and swapped out.


> ... students will spend time figuring out how to fix that and learn the language grammar this way.

That is a cute thought. In practice, they submit the LLM-generated code as is, messed up or not, and still expect a full grade.


Do they really expect a good grade from messed-up code? How many of them?


Yup. I gave lab homework to an SE class of 42. After reading through at least 12 LLM-generated, similar-looking, and wrong code submissions, I cancelled the whole lab assignment. And students complained about it in class evaluations.


An alternative is to keep the lab assignment but grade them on a final exam. Then it is in their interest to do the lab assignment by themselves rather than with an LLM.


>LLMs will produce output that is messed up, students will spend time figuring out how to fix that and learn the language grammar this way.

I’ve never met anyone who prefers debugging over writing fresh code. Did LLMs just automate away the fun part and leave us with the drudgery? That sounds like a horrible way to learn. All work and no play…

It also would mainly teach how to get good at spotting those little mistakes. But without the context that comes from writing code, it seems like it would be harder to pick up on what doesn’t look right. They’d also miss out on learning how to take a problem and break it down so it can be done with code. That’s a foundational skill that takes time and effort to build, which is being farmed out to the LLM.


With super difficult algorithmic code, for me it works better to just start writing and then fix things than to get stuck not knowing how to start.

Algorithmic code aside, I’m learning CUDA now, and I just told GPT to write me some code and then debugged it (with GPT’s assistance as well). It took me an hour to produce something that I expected would take 2-3 days otherwise.

As for picking up on what doesn’t look right - if it doesn’t work, you can pinpoint the place where something is messed up using a debugger/prints, and then you learn how the code works along the way.

I remember that when I was learning to program 30 years ago it was the same - I rewrote a Pong game from a magazine, then kept changing and messing things up until I learned how it should work.


It’s like having infinite stack overflow examples (and explanations) to sample from and iterate on. It’s unlikely any one of them will solve a complex problem you’re working on but by reading and experimenting with enough of them eventually you grok it.


I think it would be quite mentally difficult to be successful at writing device drivers (any hardware device, many different fields) if a person had the slightest reservation about debugging. I've got past a lot of blockers by creative trial and error and making it work was the satisfaction driver. I actually enjoy the making things work part. I have had employees who felt the same, one of my jobs was telling management to fuck off while the person worked on the bug, for days without result if need be[1]. The "just writing code" part is how one actually gets to the interesting part where I earn my salary.

I have no idea how an LLM could help with that process.

[1] I successfully found a bug in a CFD solver (not a device driver) in the early days of templated g++ only after 2 weeks of fairly grueling half time work. Missing ampersand in an argument list. Believe it or not I was very happy and doubled down on the code. However! I once fucked up a job situation where my MPI-IO driver failed because I had a very subtle linux RAID hardware bug that I could not find after weeks of effort. I found it later, too late. That truly sucked. I really don't know how LLMs could possibly help with any of this.


LLM can't find a missing ampersand? Sad!


In 1993 or so g++ couldn't do it, but I suspect that all C++ compilers today would. So why would I need anything else? However, the point is that when dealing with proprietary hardware devices you occasionally get a situation where the incantations as "documented" should work, but they don't, and the usual software process diagnostics are silent about it. Some domain-specific, experienced creativity is required to coax out a response and begin finding the illness. Yes, you can pay for support and escalate, but small-shop management is sometimes a bit hesitant to pay for that.

I am very curious how an LLM is supposed to be trained on situations whose context does not exist on the open internet.


Perhaps only after an assignment where prompts and answers are shown, and most of the answers look plausible but have a bunch of subtle mistakes, and students need to determine which ones are wrong and why.


I speak English, not any code languages.

I recently learned javascript this way - knowing English and other languages (Thai, Mandarin), I kept pasting my pseudo-code, asking GPT to convert it into javascript, and then iterating on that. Took me a week or two to get the language without reaching for any sort of manual/tutorial. Speaking without experience - it would take me 2-4x as much time to learn the same thing otherwise.

:) My story is not entirely true, but close. My point being llms are learning language and logic (mostly English currently). Programming languages are just languages with logic (hopefully).

And if you think the ability to shorten meaning of a complex idea is exclusively the purview of code, think of a word like "tacky" or "verisimilitude"- complex ideas expressed in a shorter format, often with intended context with significant impact on the operations in the sentence around them.


How circumlocutious.


Maybe I’m dumb but I’ve never found a single textbook, manual or tutorial that taught me everything I need to know about a subject. I always have to make a circuit of multiple sources and examples before I understand fully.


> I recently learned Swift & Metal this way

I guess what you really learned is that the Apple-specific ecosystem technologies are so crappy, you always use a middleware to defeat them. Either Unity, React Native, or in this case, a 170b parameter super advanced LLM.

There's just no getting through to those Apple guys to stop making their own thing and being a huge PITA.


I don't do Swift or JS/React professionally (well, a tiny bit, but I'm not a "React professional"). However, I've made non-toy Swift/SwiftUI* (~7k loc) and React (~4k loc) projects.

Imho Swift/SwiftUI is way more of a joy to work with. In particular, SwiftUI is much easier to visually parse whereas JSX is noisy and IMO hideous. I also ended up needing to bring in way more third-party dependencies for JS/React, something I hate doing if avoidable**.

* It is my understanding that many iOS professionals prefer UIKit bc SwiftUI is still missing stuff. I only ended up needing to fall back to it once or twice.

** ClojureScript/Reagent do solve most of my issues with JS/React (by essentially replacing them). Clojure is well-designed with a great standard library (so less need for dependencies or implementing stuff that should be in a standard library). Hiccup is way preferable to JSX.


I know multiple very successful iOS developers, and they uniformly hate React Native and adore Swift.


Why is Swift crappy? I thought it's a nice language.


Both Swift and Metal are beautifully clean - and I’m saying this as someone who spent years hating on curly brackets and static typing.

Ditto Xcode.

The biggest issue is the lack of documentation for Metal.


The assistants might be good enough to produce homework level or test level answers to contrived questions. But then utterly fall over in a real world codebase.


I have yet to find an AI tool or combination of AI tools that can hold my hand and give me correct solutions for my coding problems.

I had worked exclusively with React/Next, Node, and Mongo for years until I took over a legacy code base in October.

I had to teach myself Python/Flask, Vue, and Laravel without any kind of documentation or guidance from the previous developers or anyone else in my company.

AI has been able to help me get a bit further down the road each time I run into an issue. But AI hasn't been able to hand me perfect solutions for any of my problems. It helps me better understand the existing code base, but thus far none of the answers provided have been 100% accurate.


Real-world tasks don't fit in a chat window, and using AI is barely an improvement over not using it. Just from one of my latest LLM coding adventures: it gets versions wrong, gets imports wrong, and mixes up library versions when reading documentation, resulting in many wasted hours. It's OK for filling in boilerplate, but not OK for helping you do things you couldn't do without AI.

If anything, it's like cooking with kids, not like going to McDonalds. I'd give extra points if they can solve the task with an LLM in time. The hardest parts of programming are finding subtle bugs and reviewing code written by others - two tasks that LLMs can't help us with.

If they can easily solve the tasks with LLMs, then it is a legitimate question whether you should be teaching that skill. Only common tasks can be solved that way, though. Why not give them buggy code to fix? That way LLM inspiration is not going to work, provided you check first to make sure LLMs can't fix the bug.


I don't think I agree with your last paragraph. ChatGPT is getting better and better, and I have no reason to think it won't keep incrementally improving. As such, I think allowing students to use it to answer coding questions is going to make it so they don't actually understand anything.

I said in sibling thread that I'd be fine enough with having a class like "Software Engineering Using AI" or something, but when the class is specifically about learning Object Oriented programming and Java and Python, I do not think having heavy use of ChatGPT is a good idea.

Also, not all the questions were pure coding, I had some more conceptual questions on there, and ChatGPT is really good at answering those.


> The hardest parts in programming is finding subtle bugs and reviewing code written by others - two tasks that LLMs can't help us with.

I like to imagine LLM assistance as over-enthusiastic interns, except they don't actually improve with mentoring.

The trick becomes knowing which tasks will be improved by their participation... and which tasks will become even harder.


If you throw the same question at it 15 different ways, it can eventually give you ideas for optimizations that you probably wouldn't have thought of otherwise. It knows parts of APIs that I've never used: ByteBuffer#getLong, ByteBuffer#duplicate, StringBuilder#deleteCharAt, RandomGenerator#nextLong(long).


> I do not consider "prompt engineering" to be a substitute for programming

It's also not nearly mature enough for learning it be a good ROI in a degree program. Community college or adult education class? Maybe. Bootcamp track? Sure, I guess. Novelty elective for a few easy credits and some fun? Totally.

But if I'm a student in the midst of a prolonged, expensive program spanning years, learning how to coax results out of today's new generative AI tooling is not preparing me very well at all for what I can expect when I try to enter the workforce 2 or 4 or 8 years from now. The tools and the ways to interface with them, "Prompt Engineering" or whatever else it's called, will inevitably evolve dramatically between now and then. So why am I learning it while I'm still deep in my academic bubble? And what are prospective employers getting from a degree that focused heavily on some now-defunct and dated techniques? My degree is supposed to mean that I've learned foundational material and am ready to be productive on something, but what does that mean when too much of what I've learned is outdated?


If there were a class called "Utilizing AI for software engineering" or something, even at the university level, that wouldn't bother me.

What bothers me is that people have told me that I should just allow AI for literally everything because it's the future and you should be teaching them the future or something, but I think that's kind of dumb. In those classes, the goal was for them to leave with some competence in Python, Java, and Object Oriented programming, and I firmly do not believe you can get an understanding of that just by copying and pasting from ChatGPT, and I think even Copilot might hinder the process a bit.

To be clear, I love ChatGPT, I use it every day, it's a very valuable tool that people probably should learn how to use, I just don't feel that it's a substitute for actually learning new stuff.


> What bothers me is that people have told me that I should just allow AI for literally everything because it's the future and you should be teaching them the future or something,

I wonder what "people" told you that. My personal experience is that such advice usually comes from people who understand neither AI nor what I teach. Most of them are university administrators of sorts, and AI is a problem for them more than for me.

Introductory classes teach skills that can be performed reasonably well by AI. Those skills are the foundation you need to build higher level skills. Just like kids need to know how to read to be functional in society and in their later classes, despite screen readers doing an excellent job.

When I teach a foundations class this is my focus, and I don't fool myself or my students into thinking that they will be using those skills directly, but I try to convey the idea that the skills pervade through much of what they will later learn and do.

However that means that I cannot force students to learn. They can cheat, and it's easy, and I prefer spending my efforts on helping the learners than catching the cheaters.

The university administrators, however, are in the business of selling diplomas, which are only worth what the lowest common denominator is worth. So cheaters are a big problem for them. Typically for such people, they just bury their heads in the sand and prefer to claim that teaching students to use ChatGPT (that lowest common denominator) is where the value is.


I'm not sure if you're implying that I'm making shit up by putting "people" in quotes, but here's at least a little evidence. [1] [2] [3]

Otherwise it's been with in-person conversations and I didn't record them, there's a spectrum to how completely they suggest I allow AI.

Everything else you said I more or less agree with. Obviously if someone wants to cheat they're going to do it, but I feel that until we restructure the job market to not take GPAs as seriously (which I think I'd probably be onboard with), we should at least have cursory efforts to try and minimize cheating. I'm not saying we have to have Fort Knox, just basic deterrence.

I'm not an adjunct anymore, partly because I took stuff way too personally and it was kind of depressing me, partly because it was time consuming without much pay, but largely because I realized that most universities are kind of a racket (particularly the textbook corporations are a special kind of evil).

[1] https://news.ycombinator.com/item?id=36089826

[2] https://news.ycombinator.com/item?id=36087840

[3] https://news.ycombinator.com/item?id=36087501


I wasn't implying anything. Because your use of the word "people" left the context very vague, I was just cautiously trying to not speak in your name when discussing the people who are pushing me to teach students how to use chatgpt.

With respect to deterring cheating, I totally agree that we should go for it. There are ways to mitigate the value of cheating and ways to promote the value of learning, both of which are deterrents. Personally I like having lots of small tasks that build and follow on one another. If the student is working and trying to learn, it makes sense and we get to reinforce the high-level skills that matter. If the student is cheating, it should become increasingly harder to keep a consistent story.

However, if we turn this into a cops-and-robbers game, that's what we're going to get.

As for the focus on GPA I think that the tide is turning. Employers need to find an alternative that doesn't eat their time.

And yes universities are rackets and aren't good value employers. They don't even offer job security anymore.


If it’s an introductory course, I can see how using generated code would be harmful, but I’m guessing the students can’t, since they don’t have the experience needed to distinguish good code from bad code. Perhaps that’s the thing you could teach them about LLMs: have a class where they are given an “easy” problem that you might normally assign, which an LLM can solve, and also a hard problem that it will fail to do properly, and let the students see the difference for themselves when they have it generate code for both. That may provide valuable insight.


Tell them you'll pay them in USDT. It's the future, after all :)


I agree. I attempted to use Copilot more this week for a couple of projects at work I had to get done quickly. I hadn’t been using it at all for the past few months, but when a co-worker asked for help getting it working, it reminded me to take another look. These particular things seemed better suited for Copilot than some of the other work I had.

I found that having a solid foundation was critical for knowing if something would work, and even just to write a prompt that was semi-decent.

At one point I asked for what I wanted, but it kept only doing half of it and I could tell just by glancing at the code it was wrong. I then had to get very, very specific about what I wanted. It eventually gave a correct answer, but it was the long annoying option I was trying to avoid, so it didn’t change my end result, and I don’t even think it saved me any typing due to all the prompts to get there. It just gave me some level of confirmation that there wasn’t an obvious better way that I’d be able to find quickly.


Knowing syntax is a side-effect of having written a minimum number of LoC, so grading syntax is in effect rewarding experience. Similar to how a large vocabulary is a side-effect of having read a lot.

To me, exams are taken in halls, written on paper, proctored, under deadline. Points may be deducted for syntax mistakes or unclarity as the examiner wishes.

Separately, homework is graded in ways that already make cheating pointless; usually for 1) ambition/difficulty in the chosen problem, 2) clarity in presentation and proofs, argued in person to TAs.

LLM should have no bearing on any part of education (CS or otherwise) unless the school was already a mess.


I dunno. I wasn't all that familiar with Python (I'd done a Django project a decade ago), but I picked it for a recent project writing some control code for hardware connected to a Linux box, and I have learned quite a bit about Python by doing it.

Perhaps the lesson is about how to read and evaluate code and how to test code. If students get good at how and when to spot errors and how to construct test scenarios that ensure the code is doing what it should, then perhaps that will lead to even higher-quality code than if they were learning, producing bugs, and then learning how to evaluate the code and test what they had written.
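(A minimal sketch of what "constructing a test scenario" could mean here, using a hypothetical is_leap_year function as a stand-in for generated code; the century edge cases are exactly where plausible-looking output tends to slip.)

    # Hypothetical: suppose an LLM generated is_leap_year(). Before trusting it,
    # exercise the edge cases where naive implementations usually go wrong.
    def is_leap_year(year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    def test_is_leap_year():
        assert is_leap_year(2024)        # ordinary leap year
        assert not is_leap_year(2023)    # ordinary non-leap year
        assert not is_leap_year(1900)    # divisible by 100 but not by 400
        assert is_leap_year(2000)        # divisible by 400

    test_is_leap_year()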


> If students get good at how and when to spot errors and how to construct test scenarios that ensure the code is doing what it should

They lack the skills required to determine that, to fix those basic errors. But they'll still submit the code.


Well then they should be submitting it to a compiler/interpreter first!


> it's very easy to get AI to write some code and copypaste it, not understand anything it said, and be done.

This is why I think that the old way of having tutorials instead of tests is vastly superior. When you are in a tutorial group (typically five or fewer students) and your tutor asks you to explain something to the other members of the group you can't hide behind an AI, a textbook, or even your own notes. Your lack of preparedness and understanding is made abundantly clear.


> Last year, when I was administering a test, I said that they were allowed to use any IDE they wanted, but they could not use any AI assistance. I am pretty convinced one student just copypasted my test questions into ChatGPT or BingAI and then copied the answers. I didn't have a way to prove it so I had to give him the benefit of the doubt and grade it assuming he did it honestly.

If I were a teacher, I would have asked the popular bullshit generators to generate solutions for the test questions, and if a student’s solutions were very similar, I’d have them do some 1:1 live-coding to prove their innocence.


I did actually try and get ChatGPT and BingAI and Google Bard to generate it. I didn’t know about Claude at the time.

The issue is that the thing he submitted was correct, so I was going completely off “vibes”; I never got it dead-to-rights with AI generating a one-for-one match. It got pretty similar, and ChatGPT text does have kind of a recognizable style to it, but I didn’t feel comfortable reporting a student for cheating and risking them getting expelled if I wasn’t 100% sure.

I might have tried to get him to do a one on one coding session, but this was the final exam and literally three hours after it I had to fly to the UK for an unrelated trip for three weeks. Grades were due in one week, so I didn’t really have a means of testing him.


> Just because I can call an Uber doesn't mean I get to say I know how to drive. Just because I can go to McDonalds doesn't mean I get to say I know how to cook.

I hear what you are saying but where does it end? Is Hello Fresh cooking? Is going to grocery store cooking or do you need to buy it direct from a farmer? Do you need to grow the food yourself?

Is renting a car “driving”? Is leasing a car “driving”? If you can’t fix a car and understand how it works are you really driving?

Yes, most of those questions are ridiculous but they sound the same to me as some complaints about using LLMs. Those complaints sound very similar to backlash against higher-level languages.

Python? You’re not a real developer unless you use C. C? You’re not a real developer unless you use assembly. Assembly? Must be nice, unless you’re writing 0’s and 1’s you can’t call yourself a developer. 0’s and 1’s? Let me get out my soldering gun and show you what it takes to be a real developer….


So many strawmen. Following a recipe, even with the ingredients coming from a meal kit, is still cooking. If you’re behind the wheel, you’re driving, no matter who owns the car or if you can perform any maintenance.

A C developer working in notepad.exe is a real developer, as is a Python developer working in PyCharm and using the standard IDE features to improve their productivity. Someone blindly copy-pasting output from a LLM is not a developer.


> So many strawmen

Good, I’m glad you grasped the point of my comment. I was talking about the absurdity of people “gatekeeping” programming. Those arguments are just as silly as people saying using an LLM (in any capacity) is wrong and means you aren’t programming anymore.

Yes, blindly pasting code from an LLM does not a developer make. However, that’s not what I suggested. I believe LLMs can be useful but you need to understand what it’s generating. The same way that SO is useful as long as you understand the code you are reusing (ideally modifying and reusing instead of a straight copy/paste).


> I believe LLMs can be useful but you need to understand what it’s generating

That was never in dispute and not disagreed with in what I wrote. I didn’t say “using an LLM is wrong in any capacity”, nor is that implied by anything I wrote. I use ChatGPT daily; I even told students they should use it if they needed help with understanding concepts outside of class. I didn’t “gatekeep” programming, I just said they couldn’t use AI during an exam.

> I believe LLMs can be useful but you need to understand what it’s generating.

Yeah, if only we had some way of EXAMining if the students understand what they were generating. Like, crazy idea, maybe we could have some kind of crazy test where they aren’t allowed to use ChatGPT or Copilot to make sure they understand the concepts first before we let them have a hand-holding world.


This isn’t productive.

I never said you said that, I said some people. I never said you should or shouldn’t use LLMs on exams, I have no idea why that’s being brought into this conversation.

I can only assume you’ve lost the thread and/or think you’re replying to someone else. This will be my last reply.


Your first post was in response to me claiming that going to McDonalds wasn’t cooking and that getting an Uber isn’t the same as knowing how to drive. Specifically, the thing you were responding to was an anecdote about a student cheating on an exam by using ChatGPT, at least it appeared that way to me. It’s not weird to think your response was in regards to that.

Maybe I misread your intent there, I just got a vibe that you were defending the use of LLMs during exams since that was the thing you were responding to. Apologies if I misread.


> I am pretty convinced one student just copypasted my test questions into ChatGPT or BingAI and then copied the answers. I didn't have a way to prove it so I had to give him the benefit of the doubt and grade it assuming he did it honestly.

I wonder if a more AI-resistant approach is to provide poor code examples and to ask students to improve it and specifically explain their motivations.

They can’t just rely on code output from an LLM. They need to understand what flaws exist and connect the flaw to the fix, which requires (for now, I think) higher-level comprehension.
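(A hedged sketch of what such an exercise might look like, with a made-up Python function: hand out the snippet below and ask the student to name the flaw, explain why it misbehaves, and justify the fix.)

    # "This function behaves strangely across repeated calls. What's wrong, and why?"
    def add_tag(tag, tags=[]):      # flaw: the default list is created once and shared
        tags.append(tag)
        return tags

    print(add_tag("a"))  # ['a']
    print(add_tag("b"))  # ['a', 'b'] -- surprising if you expected ['b']

    # Expected fix: default to None and build a fresh list on each call
    def add_tag_fixed(tag, tags=None):
        if tags is None:
            tags = []
        tags.append(tag)
        return tags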


I paste shit code into the openai playground and ask for it to be improved and ask for the reasoning why. Generally works.


Welp never mind. I welcome our new AI overlords.


I would honestly think it doesn't matter. Students, like professionals, should be given the world and all of its resources to generate a solution. If I can just punch the question into AI and get a valid answer, then maybe the question should be modified.


If you always defer to a computer then you never learn to disagree with the computer, you become just as much of an automaton as it.

I love LLMs for coding but only because I have the experience to instantly evaluate every suggestion and discard a double-digit percentage of them. If an inexperienced programmer uses Copilot they aren’t going to have the confidence to disagree and it will stunt their development.


Does that really parse? There are plenty of things that have definite answers a Google search away, but that we still expect people to be able to do.

Did we just stop teaching kids basic spelling because spell check was built into MS Word? No, of course not, because even though spelling has been a more-or-less solved problem for people who already know how to read for the last thirty years, having kids learn to spell helps with their actual comprehension of a subject.

Also, if they do not learn the fundamental concepts, then they will not be able to differentiate a good solution from a bad solution. There's a lot of really shitty code that does technically accomplish the goal it sets out to achieve, and until you learn the fundamentals that fact won't be clear.


Do you also think that primary school students should not be taught, say, multiplication? Because they can, of course, use a calculator. No need to ask them what 3 times 4 is.



