
A good example is the hundreds of conversation histories I have with GPT-4, where it does everything from helping me code entirely novel and original ideas to developing more abstract ones.

Every single day, I get immense use out of modern language models. Even if an output is similar to something it's already processed, that's fine! Such is the nature of synthesis.



> entirely novel and original ideas

They are not novel if there is an equivalent pattern in the training dataset. I suspect you aren't really trying anything that isn't already available in some form on GitHub or Google. If you think you are, please show an example of an "entirely novel and original idea" that GPT-4 developed for you. I've had at least four cases where ChatGPT failed to produce a correct solution (after hours of pushing it to correct itself in many ways) on an actually novel problem (solution no longer than 200 lines of code) for which there was no solution on Google or GitHub. But you can't blame a statistical model that was trained to produce the most probable outcomes based on its training data.
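The "most probable outcome" framing can be made concrete with a toy next-token model. This is a sketch only: the bigram counting, corpus, and function name here are invented for illustration, and real LLMs learn a far richer distribution over tokens rather than counting word pairs.

```python
from collections import Counter, defaultdict

# Tiny invented corpus standing in for "training data".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram transitions: how often each word follows another.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def most_probable_next(word):
    """Greedy decoding: pick the highest-count continuation seen in training."""
    return transitions[word].most_common(1)[0][0]

# "the" is followed by "cat" twice, "mat" once, "fish" once,
# so the most probable continuation is "cat".
print(most_probable_next("the"))  # -> cat
```

The point of the analogy: a model like this can only ever emit continuations whose patterns appeared in its corpus, which is the sense in which the parent comment argues the output isn't "novel."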


What you need to understand is the concept of metapatterns. GPT-4 has generalized so much that it is able to learn the "patterns of patterns" in many domains. I don't require anything fancier to drastically improve my workflow right now.

Usually, drawing from existing knowledge is the whole appeal of using GPT. Over time, you actually begin to get a sense of what the model is good and bad at, and it's good at an incredible number of things. I get it to write novel code constantly, and I think playing with it and confirming that for yourself is better than me showing you.


That's anything at scale. Emergence isn't a feature unique to NNs. NNs are to emergent behavior what crypto is to cash: hyping an enormous waste of resources with the promise of solving every problem, when any given problem has already been solved more elegantly. If you don't believe me about NNs, look at the caloric burden of the human brain, for fuck's sake.


I agree the energy cost is concerning. And we're lucky we don't have unlimited coal, unlimited power, and unlimited GPUs, because we'd hit 4 degrees of warming by Christmas with everyone trying it out.

The human brain is a salient point because we often use AI so that the human brain can do less: get the GPU to RTFM instead of the human, since human time is more valuable. All the while we're probably making the human brain less effective (compare someone who learns another language with someone who only speaks it through an AI translator).

I hold both points of view that AI is both marvelous, but also concerning in terms of energy use.

To nitpick: "NNs are to emergent behavior what crypto is to cash" applies more to large language models. Simpler NNs for easy tasks that don't consume much power wouldn't qualify (those might be more like a Visa card?).


I'm sorry, what is your argument?

Is it that this behavior is the result of any system at scale? That is undeniably preposterous.

Is it that the human brain is more efficient? At energy usage, sure, but finding an individual capable enough to assist me the way GPT does, at its speed and with its breadth and depth, would be next to impossible. If I did find one, their required compensation would be astronomical.

What are you arguing for or against? Are you aware that these systems will, like all previous computationally intensive systems, become drastically more efficient over time?



