
Most/all languages today take away options. Any OTL would just provide defaults. Details are optional but always possible. I thought I was pretty clear about that re assembly. That's barely even one of the hard parts.

I'm well aware of what it means to have a language for everyone to use. I'm thinking of everything from bootloaders to machine learning to interactive shells. The reason there isn't one yet is that it's really hard. Lots of basic theory about how to think about computation is still being sounded out. Unifying frameworks have been known to take a few decades after that. Still no reason to think it's impossible.

You're just repeating that there will always be gaps, with no evidence except gaps in languages produced by today's rushed, history-bound ecosystem. You're trying to use JS as an illustration of an OTL, which is baffling. Having a limited set of integer sizes would obviously not fly.

I'm apparently not getting the vision across. This is not even a type of thing that exists today, which is why I keep saying to use more imagination. Racket is only close due to its radical flexibility in inputs and outputs.



>Any OTL would just provide defaults.

And you will inevitably have defaults that contradict each other, which motivates another language.

Another way of saying "default" is "concepts in the programming language we don't even have to explicitly type by hand or have our eyeballs look at."

What should the OTL default be, when no explicit datatype is typed in front of the following _x_, that works for all of embedded, scientific numeric, and 4GL business programming?

  x = 3
Should the default _x_ be a 32-bit int, a 64-bit int, or a 128-bit int? Or a 64-bit double-precision float? Or an arbitrary-precision decimal (512+ bits, memory expandable) or an arbitrary-size integer (512+ bits, expandable)?
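
For concreteness, here's a minimal sketch in Python (one language picked purely for illustration) of how two ecosystems already disagree on this single default:

  # Plain Python defaults x to an arbitrary-precision integer:
  x = 3
  print(type(x))   # <class 'int'>
  print(x ** 100)  # exact 48-digit answer; overflow is impossible by default

  # NumPy, the de facto default for scientific numerics, picks fixed width:
  import numpy as np
  y = np.int64(3)
  print(np.iinfo(np.int64).max)  # 9223372036854775807 -- a hard ceiling

Neither default is wrong; each is right for its own audience, which is exactly why one default can't serve both.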

Should the default for x be const or mutable? Should the default for x have overflow checks or not? Should the default for x be stored in a register or on the stack? Should the name 'x' be allowed to shadow an 'x' defined at a higher scope? What about the following?

  x = 3/9
Should x be turned into an approximation of 0.3333... or should x preserve the underlying symbolic representation: two numbers and a divide operator (3, div, 9)?
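
Again in Python, purely as a sketch, both answers already exist and they disagree:

  # The built-in default eagerly approximates 3/9 as a binary float:
  x = 3 / 9
  print(x)           # 0.3333333333333333 -- precision already lost

  # The standard-library Fraction preserves the exact rational instead:
  from fractions import Fraction
  y = Fraction(3, 9)
  print(y)           # 1/3, stored exactly as numerator/denominator
  print(y == x)      # False: the two defaults genuinely contradict each other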

The defaults contradict each other at a fundamental level. The default for x cannot simultaneously be both a 32-bit int and a 512-bit arbitrary-precision decimal. We don't need a yet-to-be-discovered computer science breakthrough to understand that limitation today.

If we go meta and say that the default interpretation for "x = 3" is that it's invalid code and the programmer must type out a datatype in front of x to make it valid, then that choice of default will also motivate another language that doesn't require manually typing out an explicit datatype!

Therefore, we can massively simplify the problem from "One True Language" to just the "One True Datatype" -- and we can't even solve that! Why is it unsolvable? Because the OTD is just another way of saying "read the mind of the programmer and predict which syntax he doesn't want to type out explicitly for convenience in the particular domain he's working in". This is not even a well-posed question for computer science research. Mind-reading is even more intractable than the Halting Problem.

As another example, the default for the awk language -- without even manually typing an explicit loop -- is to process text line-by-line from top to bottom. This is not a reasonable default for C/Javascript/Racket/etc. But if you make the default in the proposed OTL to not have an implicit text-processing loop in the runtime, it motivates another language (such as awk) that allows for it. You can't have a runtime that has both simultaneous properties of implicit-text-loop and text-loop-must-be-manually-coded.
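
A sketch of that gap, assuming the classic one-liner awk '{ print $1 }' file, which loops over input lines implicitly. The same behavior in Python must be spelled out by hand:

  # Explicit version of the loop awk provides implicitly:
  import sys

  for line in sys.stdin:     # awk runs this loop for you, by default
      fields = line.split()  # awk splits into $1, $2, ... for you, by default
      if fields:
          print(fields[0])   # the body is all you'd write in awk: { print $1 }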

Whatever choice you make as the defaults for OTL, it will be wrong for somebody in some other use case which motivates another language that chooses a different default.

>Details are optional but always possible.

Yes, but any extra possibilities will always require extra syntax that humans don't want to type or look at. Again, it's not what's possible. It's what's easy to type and read in the specific domain that the programmer is working in.

>You're just repeating that there will always be gaps, with no evidence except gaps in languages produced by today's rushed, history-bound

Are you saying you believe that abstractions today have gaps but tomorrow's yet-to-be-invented abstractions can be created without gaps and we just haven't discovered it yet because it's really hard with our limited imagination? Is that a fair restatement of your position?

Gaps don't just exist because of myopic accidents of history. Gaps must exist because they are fundamental to creating abstractions. To create an abstraction is to create the existence of gaps at the same time. Gaps are what make the abstraction useful. A map (whether a folded paper map or online Google Maps) is an abstraction of the real underlying territory. The map must have gaps of information loss because otherwise the map would be the same size and the same atoms as the underlying territory -- and thus the map would no longer be a "map".

The mathematical concept of an "average" or "mean" is an abstraction tool: sum a set of numbers and divide by the count. The "average", as one type of statistical shorthand, adds reasoning power by letting us ignore the details, but to do so it must also have gaps: there is information loss of all the individual elements that contributed to that average. The unavoidable information loss is what makes "average" usable in speech or writing. You cannot invent a new mathematical "average" which preserves all elements with no information loss, because doing so means it's no longer an average. We can write "the average life expectancy is 78.6 in the USA". We can't write "the average life expectancy is [82,55,77,1,...300 million more elements divided by 300 million] in the USA" because that huge sentence's text would then be a gigabyte in size and incomprehensible.

You can invent a different abstraction such as "weighted average" or "median" or "mode", but those other abstractions also have information loss. You're just choosing different information to throw away. We can't just say we're not using enough imagination to envision a new type of mathematical "average" abstraction that will allow us to write an alternative sentence that preserves all information of the individual age elements without the sentence being a gigabyte in size.
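
A tiny Python sketch of that information loss (illustrative numbers only):

  # Two very different populations collapse to the identical "average":
  a = [40, 40, 40]
  b = [1, 40, 79]
  print(sum(a) / len(a))  # 40.0
  print(sum(b) / len(b))  # 40.0 -- the spread was thrown away, by design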

>JS as an illustration of an OTL, which is baffling.

No, I was using JS as one example about surjection that affects forward engineering. When I say "this means Javascript can't be the OTL for all situations", it's saying all programming languages above NAND gates will have gaps and thus you can't make an OTL.

What's baffling is why anyone would think Racket's (1) defaults + (2) DSLs + (3) macros would even be a realistic starting point for the universal OTL. The features (1,2,3) you propose as ingredients for the universal OTL are the very same undesirable things that motivate other alternative languages to exist! Inappropriate defaults motivate another language with different defaults. The ability to write DSLs motivates another language that doesn't require coding that DSL. The flexibility of coding macros motivates another language that doesn't require coding the macro.



