> The above APL takes a similar liberty, but instead of declaring the helper functions correct by fiat, we just declare by fiat that our data is sufficiently normalized.

Why would you make an assumption like this and try to compare code? If the sample code were simply subtraction then OP would've written subtraction. What use is demonstrating APL code that solves a different problem? If you're gonna do that, instead of removing part of the problem you might as well remove most of the problem and just say the solution is:

⌈/dest-source

And then write a sentence saying that's valid because you declare the data is normalized even further.
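
For concreteness (the sample values here are mine), that one-liner just max-reduces the raw differences:

    dest ← 120 45      ⍝ hypothetical target coordinates
    source ← 30 15     ⍝ hypothetical current coordinates
    ⌈/dest-source      ⍝ maximum of 90 30, i.e. 90

which is exactly why it reads as solving a different, smaller problem.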

> Here's my take on performScan, too:
> Log⍪←onSourceTime+slewTime

How is this valid if you don't actually do any work to calculate onSourceTime or slewTime? You took the last line of performScan and implemented it. You do need to also implement the other lines in the function.

Maybe I'm just completely missing something here.






Thanks for asking!

> You do need to also implement the other lines in the function.

What lines, though?

All we have in the original are three opaque functions composed in a Dovekie pattern, i.e. (f (g x) (h y)). Everything else—besides the discussed overcomplications—is just suggestive naming.
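
To make that shape concrete, here's a minimal APL sketch (the helper names are hypothetical, just mirroring the suggestive naming in the original):

    ⍝ (f (g x) (h y)) with f←⌈ (max), g←timeToRotate, h←timeToAscend
    slewTime ← (timeToRotate azDelta) ⌈ timeToAscend altDelta

Whatever timeToRotate and timeToAscend do internally is invisible at this level; the composition itself is a single max.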

I actually agree with you! The demonstrated code is trivial, and I believe you're right to complain. It's just that APL makes the triviality painfully obvious.

> If the sample code were simply subtraction then OP would've written subtraction

The reason is that functional design patterns are very different from APL design patterns.

In functional programming we manage complexity by hiding it behind carefully-designed functions and their compositional structure. APL manages the same by normalizing data and operating over homogeneous collections. These lead to very different looking code, even without the symbols.

So, where Haskell hides a bunch of calculation behind timeToRotate and timeToAscend, with APL we usually choose to pre-process the data instead. This means that where Haskell needs timeToRotate etc., APL can just use a really simple primitive.
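
Here's a minimal sketch of what I mean, assuming the data has already been normalized to one element per axis (all of these names are mine, not the author's):

    ⍝ source, dest – current and target positions in degrees
    ⍝ speeds       – slew rate of each axis in degrees per second
    slewTime ← ⌈/(|dest-source)÷speeds   ⍝ the slowest axis dominates

The ÷ and ⌈/ here are ordinary primitives applied over the whole normalized collection at once.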

While we're granting the author that these opaque functions are correctly implemented, I think it's fair to grant that the APL correctly normalizes its data. The unshown work is going to be about the same, either way.

Don't you think it's intriguing that we end up with less work and clearly trivial code?



