The easier "AI" gets to use (as it is being "promised" it will), the quicker a skilled engineer is going to be able to adapt to it whenever they give in and start using it. They'll likely be ahead of any of those early adopters who just couldn't resist the allure of simply accepting whatever is spit out without thoroughly reviewing it first.
Usually these types of things never change. I understand that all code is a liability, but npm takes this way too far. Many utility functions can be left untouched for many years, if not forever.
It's not NPM. It's JS culture. I've spent a lot of time programming in TypeScript, and it never fails that in JS programmer circles they are constantly talking about updating all their packages, completely befuddled why I'd be using some multiple-year-old version of a library in production, etc.
Meanwhile Java goes the other way: twenty-year-old packages that are serious blockers to improved readability. Running on Java versions that don't even support Optional (or Maybe or whatever it's called in Java).
Java writes to a bytecode spec that has failed to keep up with reality, to its detriment. Web development keeps up with an evolving spec pushed forward by compatibility with what users are actually using. This is "culture" only in the most distant, useless sense of the word. It is instead context, which welcomes it back into the world of just fucking developing software, no matter how grey-haired HN gets with rage while the world moves on.
EDIT: Obvious from the rest of your responses in this thread that this is trolling, leaving this up for posterity only
It depends on how technical you want to get. It's certainly not a loop in the classic sense, like a for loop etc. There are no `continue` or `break` semantics and you can't return out of it (and yes, yes, of course you can still do all that stuff, just not with dedicated keywords). I call it a loop, though. Whenever I make a lil' server in Elixir (almost always for teaching) I always call the function `loop`:
defmodule Server do
  def start_link do
    spawn_link(&loop/0)
  end

  defp loop do
    receive do
      msg ->
        IO.puts(msg)
        loop()
    end
  end
end
Of course, that assumes that you write your functions so that they can be converted into iteration!
Did you know that some C/C++ compilers will do tail-call -er- optimization? I learned this quite a while ago, so I'd expect every major C/C++ compiler to do it by now, but I was pretty impressed at the time.
> Of course, that assumes that you write your functions so that they can be converted into iteration!
I was indeed assuming this. However (and here my lack of compiler knowledge shows), not explicitly writing your functions tail-recursively--at least in the classic way--doesn't necessarily mean your Erlang functions won't be optimized. See section 2.3 [0]
Though again, I don't know shit about compilers and while I can imagine what callER optimized calls could look like, I would need some examples!
> ...while I can imagine what callER optimized calls could look like...
What do you mean by "callER" optimized calls? That's a term I think I'm quite unfamiliar with.
> Though again, I don't know shit about compilers...
Oh, I also know fuckall about compilers. I'm just a bumbler who has muddled through a moderately-successful programming "career".
IME, on the topic of recursive functions, the big difference between Erlang and -say- a C++ program compiled with GCC is that the latter has a specific (but surely configurable somehow) call stack size limit which terminates the program if you exceed it. Whereas Erlang's limit seems to be "Well, how much RAM do you have, and how much space do I need to store away the arguments for each call?".
When I mentioned that some C/C++ compilers do tail-call optimization, what I intended to say was that they converted the recursive call into something more like iteration. I'm pretty sure that historically [0] the only way you could do this optimization was if the very last thing a function did was to call itself... doing anything after that call meant that the conversion to iteration was not possible. I have no idea if things have gotten quite a lot fancier between when I made this discovery and now.
If Erlang had a call stack whose size was limited by something other than the available RAM in the system, then whether or not your functions were written tail-recursively [1] would be quite a lot more important, I think.
[0] And maybe it's still the case today? Refer back to my near-zero knowledge of compilers.
[1] I hope I don't forget that term again. It's much more succinct than my jumble about "tail-call optimization"
Although in Erlang it's closer to an equation with assignment being a side effect. `=` is the pattern matching operator.
Eshell V15.2.6 (press Ctrl+G to abort, type help(). for help)
1> X = 1.
1
2> X = X.
1
3> 1 = X.
1
4> X = 2.
** exception error: no match of right hand side value 2
5>
I'm not sure. I think it's asymmetric: high upside potential, but low downside.
Because when the AI isn't cutting it, you always have the option to pull the plug and just do it manually. So the downside is bounded. In that way it's similar to the Mitch Hedberg joke: "I like an escalator, because an escalator can never break. It can only become stairs."
The absolute worst-case scenario is the situation where you think the AI is going to figure it out, so you keep prompting it, far past the point when you should've changed your approach or given up and done it manually.