Hacker News | olivia-banks's comments

I wrote a pretty complicated set of GNU Makefiles for a simulation library at work, but was annoyed I had to work so hard to avoid collisions, so I'm working on a "more sanitary" build-your-own-build-system/build-system-kernel type deal.


It would be nice to see some comparison with other fonts on the GitHub page; the symbols look normal to me, at least. It looks very pretty regardless!


thanks for pointing it out. i would need to consider that idea.

the symbols are all pure ASCII and are supposed to look normal. it is not a ligature font and neither focusses on Unicode symbols. the symbols are just more evenly adjusted with the letters and with each other.


That's something I really like about this font. I'm not a huge fan of ligatures and think they're counterproductive. It does make it a bit harder to see the differences, though, so I think a comparison would be great.


i've now added a table comparing Myna's symbols with a few other popular fonts in the README.md (without naming the fonts explicitly).


That looks really helpful. Now that I can better see the difference, I'll definitely try it the next time I write Perl or Haskell!


You could make a PR to add it to https://www.programmingfonts.org so people can become aware and compare


thanks for the suggestion. i'd definitely add it once some obvious rendering bugs on Windows are found and resolved.


It's incredible they've lasted this long.


> OCaml is weird in that it’s a garbage collected language for systems, similar to Go.

I don't understand why this isn't more popular. For most areas, I'd gladly take a garbage collector over manual memory management or explicit borrow checking. I think the GC in D was one of its best features, but its downfall nonetheless, as everyone got spooked by those two letters.


I'd say that functional programming is probably one of the only domains where linked lists actually make sense. Vectors certainly have their use in more places, though.


A good compiler will make the lists disappear in many cases, with no runtime overhead. I actually love singly linked lists as a way to break down sequences of problem steps.


In Haskell, yes, because laziness permits deforestation. ML, including OCaml, is eager and consequently cannot do this.
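To make that concrete, here's a small OCaml sketch: the fusion a Haskell compiler performs automatically (eliminating the intermediate list between two maps) has to be written by hand in an eager language. The function names are made up for illustration.

```ocaml
(* In eager OCaml, composing two maps allocates an intermediate list: *)
let doubled_then_incr xs =
  List.map (fun x -> x + 1) (List.map (fun x -> x * 2) xs)

(* "Deforestation" here has to be done by hand, fusing the two passes
   into one so no intermediate list is ever built: *)
let fused xs =
  List.map (fun x -> x * 2 + 1) xs

let () =
  assert (doubled_then_incr [1; 2; 3] = fused [1; 2; 3])
```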


I believe it's possible in theory - Koka has a whole "functional but in-place" set of compiler optimisations that essentially translate functional algorithms into imperative ones. But I think that's also possible in Koka in part because of its ownership rules that track where objects are created and freed, so might also not be feasible for OCaml.


I mean, the set of valid deforestation transformations you could do to an OCaml program is not literally the empty set, but OCaml functions can do I/O, update refs, and throw exceptions, as well as failing to terminate, so you would have to be sure that none of the code you were running in the wrong order did any of those things. I don't think the garbage collection issues you mention are a problem, though maybe I don't understand them?


Part of what Koka's functional-but-in-place system relies on is the Perceus program analysis, which, as I understand it, is kind of like a limited Rust-like lifetime analysis which can determine statically the lifetime of different objects and whether they can be reused or discarded or whatever. That way, if you're, say, mapping over a linked list, rather than construct a whole new list for the new entries, the Koka compiler simply mutates the old list with the new values. You write a pure, functional algorithm, and Koka converts it to the imperative equivalent.

That said, I think this is somewhat unrelated to the idea of making linked lists disappear - Koka is still using linked lists, but optimising their allocation and deallocation, whereas Haskell can convert a linked list to an array and do a different set of optimisations there.

See: https://koka-lang.github.io/koka/doc/book.html#sec-fbip
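For illustration only (this is the observable effect of the reuse optimisation, not how Koka implements it): when the input is uniquely owned, map behaves as if it overwrote each cell in place. Expressing that in OCaml requires an explicitly mutable list type, since built-in lists are immutable.

```ocaml
(* A mutable cons list, standing in for what Koka's reuse analysis
   lets it do to an ordinary functional list behind the scenes. *)
type 'a mlist = Nil | Cons of { mutable hd : 'a; mutable tl : 'a mlist }

(* Overwrite each cell's head instead of allocating a fresh list. *)
let rec map_in_place f = function
  | Nil -> ()
  | Cons c -> c.hd <- f c.hd; map_in_place f c.tl

let rec to_list = function
  | Nil -> []
  | Cons c -> c.hd :: to_list c.tl

let () =
  let l = Cons { hd = 1; tl = Cons { hd = 2; tl = Nil } } in
  map_in_place (fun x -> x * 10) l;
  assert (to_list l = [10; 20])
```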


Why would a specific way of structuring data in memory be relevant to breaking down sequences of problem steps?

If what you mean is the ability to think in terms of "first" and "rest", that's just an interface that doesn't have to be backed by a linked list implementation.
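OCaml's Seq module is one example of exactly that: a first/rest interface that any backing structure can expose, here an array rather than a linked list.

```ocaml
(* "first" and "rest" as an interface, backed by an array, not a list. *)
let s = Array.to_seq [| 10; 20; 30 |]

let () =
  match s () with
  | Seq.Cons (first, rest) ->
      assert (first = 10);
      assert (List.of_seq rest = [20; 30])
  | Seq.Nil -> assert false
```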


I’m curious what you mean. Surely there’s the overhead of unpredictable memory access?


Not GP, but bump allocation (OCaml's GC uses a bump allocator into the young heap) mitigates this somewhat: list nodes tend to be allocated near each other. It's worse than the guaranteed contiguous access pattern of a vector, but it's not completely scattered either.


Ah yes, our old friend - the sufficiently smart compiler.


> It just genuinely felt like the Go language designers didn’t want to engage with any of the ideas coming from functional programming.

You'd be right.

"The key point here is our programmers are Googlers, they're not researchers. They're typically fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They're not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt." – Rob Pike [1]

"It must be familiar, roughly C-like. Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical." – Rob Pike [2]

Speaking as someone who wrote OCaml at work for a while, the benefits of functional programming and the type guarantees that its ilk provides cannot be overstated; you only start to reap them, however, once most developers have shifted their way of thinking rather dramatically, which is a time cost that the designers of Go did not want new Googlers to pay.


You would be surprised how many Googlers think the Go team is making bad decision after bad decision.


Not just Googlers. There is a certain pragmatism and then there is being stubborn.


How so? I’m not that familiar with Go anymore.


Search any Go codebase for err != nil, for starters.
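For contrast, here's a sketch of the Result-plus-binding-operator style that ML-family languages use for the same job, threading errors without a repeated explicit check after every call. The names (parse_port, connect) are made up for illustration.

```ocaml
(* A binding operator over Result: an Error short-circuits the rest,
   so there's no per-call "if err != nil" boilerplate. *)
let ( let* ) r f = Result.bind r f

let parse_port s =
  match int_of_string_opt s with
  | Some n when n > 0 && n < 65536 -> Ok n
  | _ -> Error ("bad port: " ^ s)

let connect host port_str =
  let* port = parse_port port_str in
  Ok (host ^ ":" ^ string_of_int port)

let () =
  assert (connect "localhost" "8080" = Ok "localhost:8080");
  assert (connect "localhost" "nope" = Error "bad port: nope")
```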


Well, sure, but this is a problem nearly as old as Go itself. I thought they were referring to larger, more recent things, like how generics work, or something to that effect.


> is a problem nearly as old as Go itself

It's as old as C. C and Go are the only two significant languages which end up constantly checking for errors like this.


I am exactly the type of novice programmer described in the quotes.

>have shifted their way of thinking rather extremely

What could I read to shift my way of thinking?

The signals & threads episode about OCaml strongly piqued my interest, and not because I have any JS delusions (they would never, lol).


Functional programming is a different way of thinking about solving problems. People say it's about thinking about "what you're trying to compute" versus "how you're computing it," but quite frankly that never made sense to me. I prefer to think about it in terms of thinking about relationships (FP) versus thinking about steps over time (procedural).

This, at least for me, brings the act of writing a specific piece of code more in line with how I think about the system as a whole. I spend less energy worrying about the current state of the world and more about composing small, predictable operations on relationships.

As for what you can read, I find it's just best to get going with something like OCaml or F# and write something that can take advantage of that paradigm in a relatively straightforward way, like a compiler or something else with a lot of graph operations. You'll learn pretty quickly what the language wants you to do.
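For a flavour of what that composition looks like in OCaml, a pipeline of small operations (the function name is made up):

```ocaml
(* Compose filter, map, and fold with |> instead of mutating
   an accumulator step by step. *)
let sum_of_odd_squares xs =
  xs
  |> List.filter (fun x -> x mod 2 = 1)
  |> List.map (fun x -> x * x)
  |> List.fold_left ( + ) 0

let () =
  (* odds [1; 3; 5] -> squares [1; 9; 25] -> sum 35 *)
  assert (sum_of_odd_squares [1; 2; 3; 4; 5] = 35)
```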


I did a small experiment in dependent typing for Python (and catching those errors before run time). It's a decently good fit, and with HM type inference some neat things just fall out of the design.

But the real challenge is deciding how much of Python's semantics to preserve. Once you start enforcing constraints that depend on values, a lot of idiomatic dynamic behavior stops making sense. Curious what ways there are to work around this, but I think that's for a person better at Python than I :)


I love how this is written. A lot of things nowadays on this site, if only vaguely, make me think it was written in part by an LLM, but this didn’t fall into that category. Great read, bravo!


Donated!! I run a small cluster of a few nodes I bought for cheap, and I’m experimenting with SSI on them. The kernel is really nice to read and modify.


Thanks a lot :)


I'm toying around with a language that's like Python but with Hindley-Milner inference and some functional stuff. It's not a superset or anything, because I can't do that, but it's interesting how well HM (plus some well-encapsulated escape hatches) maps onto the Python ecosystem with all its dynamism.
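For anyone unfamiliar with what HM buys you, a tiny OCaml sketch: the compiler assigns the most general type with no annotations at all, and the same definition is reused at different types.

```ocaml
(* Hindley-Milner infers the most general type here:
   compose : ('a -> 'b) -> ('c -> 'a) -> 'c -> 'b *)
let compose f g x = f (g x)

let () =
  (* the same compose is used at two unrelated types, resolved by inference *)
  assert (compose string_of_int (( + ) 1) 41 = "42");
  assert (compose String.length (fun s -> s ^ "!") "hi" = 3)
```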


Very interesting idea. Good luck!

