
The fear may just be a result of thinking about who is making the decisions. I know I'm good, my peers know I'm good. But how far up the management chain does that knowledge go?


R6RS has syntax-case macros, which are superior to Common Lisp macros in every respect: they're hygienic, and they can be used to implement a sloppy macro system if one so wishes.
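
To make that concrete, here is a rough sketch (untested, and assuming an R6RS implementation that makes the base library available at expand time, as most with implicit phasing do) of the "sloppy" escape hatch: with syntax-case, `datum->syntax` lets a macro deliberately capture an identifier at the use site, which is the primitive you need to build a defmacro-style layer on top. The anaphoric `aif` and its `it` binding below are just a hypothetical illustration:

  (define-syntax aif
    (lambda (stx)
      (syntax-case stx ()
        ((k test then else)
         ;; break hygiene on purpose: make `it` resolve at the call site
         (with-syntax ((it (datum->syntax #'k 'it)))
           #'(let ((it test))
               (if it then else)))))))

  ;; (aif (assq 'b '((a 1) (b 2))) (cadr it) 'not-found)  => 2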


`syntax-rules` is very good and you can do a whole lot with it. However, you are limited to pattern -> template transformations, and there are plenty of macros that you cannot write this way (e.g. anything requiring a predicate on the source syntax that you can't express in the pattern language). For that, you need the full power of procedural macros.

Racket improves on Scheme: its macros are fully hygienic whilst not being limited to pattern -> template transforms. See https://docs.racket-lang.org/guide/macro-transformers.html

EDIT: syntax-case -> syntax-rules; R6RS specifies the latter—I believe the former is a Racket construct equivalent in power to `syntax-rules`.


I think the parent meant that R6RS has `syntax-case`, which has enough power to implement CL `defmacro` as well as `syntax-rules`.


My mistake: R6RS has `syntax-rules`, not `syntax-case` as far as I can tell. However, `syntax-rules` and `syntax-case` are equivalent in power. [1]

It does not have the same power as `defmacro`: you cannot define general procedural macros with `syntax-rules`, as you are limited to the pattern-matching language to compute over and construct syntax objects.

[1]: https://docs.racket-lang.org/reference/stx-patterns.html#%28...
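
For contrast, here is the kind of expansion-time computation that the pattern language can't express but a procedural macro (CL `defmacro`, or `syntax-case` in R6RS/Racket) can. This is only a sketch, untested; `static-double` is a made-up name, and it assumes the base library is available at expand time (most implementations do this for top-level programs):

  (define-syntax static-double
    (lambda (stx)
      (syntax-case stx ()
        ((k n)
         (integer? (syntax->datum #'n))   ; fender: accept only literal integers
         ;; compute at expansion time and return the result as a literal
         (datum->syntax #'k (* 2 (syntax->datum #'n)))))))

  ;; (static-double 21) expands to the literal 42 before the program runs;
  ;; (static-double x) is rejected at expansion time because the fender fails.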


I think you got your wires crossed. R5 and R7 only have `syntax-rules` macros. R6 has both (`syntax-rules` can be trivially defined as a `syntax-case` macro).

R6 having `syntax-case` macros is one of the more controversial things about it; a surprising number of implementers don't care for them.
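
To illustrate the parenthetical above: the usual demonstration is that `syntax-rules` itself is only a few lines of `syntax-case`. A rough sketch in the spirit of the TSPL presentation, with the identifier checks and error reporting omitted:

  (define-syntax syntax-rules
    (lambda (x)
      (syntax-case x ()
        ((_ (lit ...) ((keyword . pattern) template) ...)
         ;; each rule becomes one syntax-case clause in the generated transformer
         #'(lambda (x)
             (syntax-case x (lit ...)
               ((_ . pattern) #'template) ...))))))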


I found the relevant documentation and you are absolutely correct and I was mistaken. Thank you for setting me straight.

https://www.scheme.com/tspl4/syntax.html#./syntax:h3


I think we're talking past each other. I mean something like:

  (macroexpand '(when-let (foo (frob bar))
      (jib foo)))
  ;; (let ((foo (frob bar)))
  ;;   (when foo
  ;;     (jib foo)))
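
For what it's worth, that particular expansion doesn't need procedural power; a pattern -> template definition covers it. A sketch (untested) of a `when-let` in plain `syntax-rules` that produces exactly that shape:

  (define-syntax when-let
    (syntax-rules ()
      ((_ (name expr) body ...)
       (let ((name expr))
         (when name
           body ...)))))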


So where is the product? Why haven't the vibecoders built a browser or a kernel or anything remotely ambitious? They have had years at this point. With their fabled productivity increase, making a better kernel than Linux in that time should be child's play. So where is it?


Why are you conflating people who use LLMs to work more efficiently with vibe-coding shills? Real engineers only write in assembly, right? Lol. It's giving anxiety.


AI is an assistant, not a magician.


So where is the revolution then? How can it be both a revolution and not a magician at the same time?

At the same time, studies are coming out showing that experienced developers lose 19% of their productivity when using AI tools, which makes me question whether it's not a devolution. Especially considering how wildly unprofitable it is for Claude to be run at a scale where it's at least net neutral for the average dev, where is that revolution you are talking about?

Is it the same kind of revolution as NFTs or blockchain or whatever web3 was? Because I am still waiting for those.


An alternative doesn't have to match all capabilities of the current tech. It "only" has to be competitive in one niche, a la The Innovator's Dilemma. Then it can improve and scale from that beachhead, like when CMOS went from low-power applications to world domination.


Having the code, the call stack, locals, your watched variables and expressions, the threads, memory, breakpoints, and machine code and registers if needed, all available at a glance? As well as being able to dig deeper into data structures just by clicking on them. Why wouldn't you want that? A good GUI debugger is a dashboard showing the state of your program in a manner that is impossible to replicate in a CLI or a TUI interface.


I don’t disagree that a visual debugger made with a proper GUI toolkit is better than a TUI. However, nvim-dap-ui[0] does a pretty good job.

[0] https://github.com/rcarriga/nvim-dap-ui


You get all of that in the terminal debugger. That's why DWARF files exist.


All the information is there, but the presentation isn't. You have to keep querying it while you're debugging. Sure, there are TUI debuggers that are more like GUI debuggers. Except that they are worse at everything compared to a GUI debugger.


I don't know what debugger you've used, but the entire query command for the current stack frame is `f v` in lldb.


Yes, but in a GUI debugger the stack and everything else is available all the time and you don't have to enter commands to see what the state is. It even highlights changes as you step. It's just so plainly superior to any terminal solution.


Japan holds the record for the smallest rocket to reach orbit with the SS-520, which put a cubesat into orbit in 2018.

Its dimensions according to Wikipedia:

Height – 31 feet (9.54 meters)

Weight – 2.9 tons (2.6 metric tons)

Diameter – 20 inches (52 centimeters)

Payload to Low-Earth Orbit – ~9 lbs (4 kg)


I believe they can do 140 kg to 800 km, but #5 was only 4 kg to a 180 km x 1800 km orbit.


I see this thing in the EU lately too. I hate it; how about a fund to spend on the local academic talent? Like, should the way to get a decent salary as a European academic in Europe be to get a job in the US so they can be headhunted with earmarked money for "foreign talent" a couple of years later?


They already have the local academic talent, and they're not looking to move away. Poaching US academics who are left in the lurch by the current administration's policies is a once-in-a-generation opportunity for Japan to achieve research dominance in economically and geopolitically important fields like semiconductors, defense, and biotechnology.

Japan learned this lesson the hard way when they lost their semiconductor researchers to Taiwan, Korea, and the US in the early 90s. The US's misplay has given them a chance at redemption.


Isn't the problem they're trying to fix that there isn't enough local academic talent (because they chronically under-spent in the past)?

Are they trying to jump start and gain momentum?


Presumably they already spend money on local academic talent? This is an opportunistic investment because the US government is defunding and politicizing academia.


SASL (the predecessor to Miranda) isn't statically typed, but it's very small and very much feels like a language in the ML tradition.


If we see farther-away galaxies moving away at the same speed as nearby galaxies, that means all the expansion of the universe happens between us and the nearby galaxies (since the farther-away galaxies wouldn't be moving away from the nearby ones). So we would have some kind of local expansion of space around our galaxy. This is if we see a redshift in all directions; if we see a redshift in one direction and a blueshift in the opposite direction, that just means our galaxy is moving relative to the observed galaxies (such a dipole can also be seen in the microwave background).
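
For reference, the relation this reasoning leans on is Hubble's law; the value below is approximate and only meant to show the proportionality:

  v = H0 * d        (H0 ~ 70 km/s per megaparsec)

Recession velocity should grow in proportion to distance, so seeing the same velocity at every distance would mean the proportionality only holds in our immediate neighbourhood, i.e. expansion concentrated locally around us.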


I'd think LLMs would be more dependent on compatibility than humans, since they need training data in bulk. Humans can adapt with a book and a list of language changes, and a lot of grumbling about newfangled things. But an LLM isn't going to produce Python++ code without having been trained on a corpus of such code.


It should work if you feed the data yourself, or at the very least the documentation. I do this with niche languages and it seems to work more or less, but you will have to pay attention to your context length, and of course if you start a new chat, you are back to square one.


I don't know if that's a big blocker now that we have abundant synthetic data from an RL training loop, where language-specific things like syntax can be learned without any human examples. Human code may still be relevant for learning best practices, but even then it's not clear that can't happen via transfer learning from other languages, or it might even emerge naturally if the synthetic problems and rewards are designed well enough. It's still very early days (7-8 months since the o1 preview), so drawing conclusions from current difficulties over a 2-year time frame would be questionable.

Consider a language designed only FOR an LLM, and a corresponding LLM designed only FOR that language. You'd imagine there'd be dedicated single tokens for common things like "class" or "def" or "import", which would allow a more efficient representation. There's a lot to think about ...


It's just as questionable to declare victory because we had a few early wins and to assume that time will fix everything.

Lots of people predicted that we wouldn't have a single human-driven vehicle by now. But many issues turned out to be a lot more difficult to solve than previously thought!


How would you debug a programming language made for LLMs? And why not then make an LLM that can output GCC's intermediate representation directly?


You wouldn't; this would be a bet that humans won't be in the loop at all. If something needs debugging, the LLM would do the debugging.


One has to wonder, why would there be any bugs at all if the LLM could fix them? Given Kernighan's Law, does this mean the LLM can't debug the bugs it makes?

My feeling is that unless you are using a formal language, you're expressing an ambiguous program, and that makes it inherently buggy. How does the LLM infer your intended meaning otherwise? That means programmers will always be part of the loop, unless you're fine just letting the LLM guess.

  Kernighan's Law - Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.


The same applies to humans, who are capable of fixing bugs and yet still produce bugs. It's easier to detect bugs with tests and fix them than to never have introduced bugs.


But the whole idea of Kernighan’s law is to not be so clever that no one is available to debug your code.

So what happens when an LLM writes code that is too clever for it to debug? If the code weren't too clever for it to debug, it would have recognized the bug and fixed it itself.

Do we then turn to the cleverest human coder? What if they can’t debug it, because we have atrophied human debugging ability by removing them from the loop?


Yeah, I can imagine how that goes:

Oh, there's a bug in this test case, deletes test case.

Oh, now we're missing a test case, adds test case.

Lather, rinse, repeat.


Lol.

