
Are you listening, Java people?!


I've come to realize the problem with Java is not the programmers. Most of the strategies they want to use are legitimate improvements in flexibility, and in other languages they do make the source code smaller. Java is simply a poor vehicle.

Java forces programmers to choose a point on the spectrum of "more flexibility" <-> "smaller program size". In languages like Lisp, by contrast, the more generic a function is, the smaller it is. When I choose to make a function less flexible (i.e., making it more specific to improve performance), it gets longer. The spectrum in Lisp is "more flexibility + smaller program size" <-> "more performance + larger program size", and that's almost always an easy choice to make. Every function and macro in core.clj, for example, is impressively concise.

When I write Java, I tend to go for simple first, and then have to rewrite everything 20 times as I discover what axes of flexibility I need. In Lisp, I tend to write a function once in the simplest possible way, and then re-use it in its original form forever. It's already at max-simplicity and max-conciseness by default. Except in the rare case where it ends up being a bottleneck, it's done.

What I want above all is "more maintainable". In Java, the two factors that drive this are on opposite ends of the spectrum. There's no ideal design, and whatever point I pick today will turn out to be the wrong choice later.


One problem is that Java lacked functions until Java 8. It still sort of lacks them internally, but at least you can write free-standing functions here and there, especially as inline lambdas.
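For instance, a minimal sketch (class and variable names are made up; the lambda still has to be typed as a functional interface):

    import java.util.function.Function;

    public class LambdaDemo {
        public static void main(String[] args) {
            // The closest thing to a free-standing function: a lambda bound
            // to a functional interface, usable and passable as a value.
            Function<String, Integer> length = s -> s.length();
            System.out.println(length.apply("hello"));  // prints 5
        }
    }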

Another problem is that Java lacks type aliases. If your data have a complicated type like List<Set<Pair<Foo, ? extends Bar>>>, you have to copy-paste this type everywhere, without a way to name it succinctly.

On top of that, Java lacks type inference, even in its weakest syntactic form. This is why you often need to write long declarations, so long that they exceed the space you may have saved by factoring out a small function. Lombok and the diamond operator somewhat alleviate that, but not completely.
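A small sketch of what those two mitigations buy you (assumes Lombok on the classpath; the names are invented):

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import lombok.val;

    public class InferenceDemo {
        public static void main(String[] args) {
            // No inference: the full type is spelled out twice.
            Map<String, List<Integer>> scores1 = new HashMap<String, List<Integer>>();

            // Diamond operator (Java 7+): the right-hand side is inferred.
            Map<String, List<Integer>> scores2 = new HashMap<>();

            // Lombok's val: the left-hand side is inferred too, but only for locals.
            val scores3 = new HashMap<String, List<Integer>>();
        }
    }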

The lack of pattern matching, or named arguments, or null vs Optional vs Result, goes without saying.

This makes Java a very verbose language, even if the logic of your code is streamlined and economical. In many standard library APIs, it is not.

(Hence Kotlin, obviously, or Scala if you can tolerate the build times.)


> If your data have a complicated type like List<Set<Pair<Foo, ? extends Bar>>>, you have to copy-paste this type everywhere, without a way to name it succinctly.

Java has had generic type inference since at least 7 [0], so all you need is <>

It's not perfect, but I believe it's been improved with each version.

[0] https://docs.oracle.com/javase/7/docs/technotes/guides/langu...


Yes, this is what I called the "diamond operator".

I'm talking about a different use case where inference via assignment does not work, e.g. a method declaration:

    public List<Set<Pair<Foo, ? extends Bar>>> combine(
      List<Set<Pair<Foo, ? extends Bar>>> a,
      List<Set<Pair<Foo, ? extends Bar>>> b
    ) {...}
It would be great to have something like

    type Quux = List<Set<Pair<Foo, ? extends Bar>>>;

    public Quux combine(Quux a, Quux b) {...}
Unfortunately, I'm not aware of any plans to introduce that.


Huh, I always thought I was doing Java wrong when I would try to write the equivalent of four Python list comprehensions in a row and end up declaring a ton of List<Set<Pair<...>>> things. Didn't think to blame the language.


Java 10 added the var keyword, so it finally has some local type inference.
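A minimal illustration (Java 10+; the types are made up):

    import java.util.ArrayList;
    import java.util.Set;

    public class VarDemo {
        public static void main(String[] args) {
            // Inferred as ArrayList<Set<String>> from the initializer;
            // var only works for local variables, not fields or method signatures.
            var sets = new ArrayList<Set<String>>();
            sets.add(Set.of("a", "b"));
            System.out.println(sets);
        }
    }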


I would argue that no matter what language you pick, adding abstraction always behaves like this:

Suppose I have a thing I need to do, called X, and I write a program to do X. Later I need to do Y, and I realize I can think of X as (A + B) and Y as (A + C). So I reorganize: one segment of code does A, and B or C happens afterwards depending on context (polymorphism, different scripts, plain flow control, whatever the branching mechanism). Now A is abstracted out into its own standalone piece, hopefully because it describes some independent process that makes sense on its own. B's and C's concerns are separated from A's, and X and Y are just compositions of these nice linkable, reusable pieces.

No matter what language you pick, if you want program X to do the same thing, the X-iness still needs to live somewhere. In Java that might look like going from two classes (X and Y) to three classes (A, X and Y). In lisp it might be three functions. I feel like in your example just now, you're comparing A from lisp to X in Java.
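A toy sketch of that accounting, with invented names and behavior, just to show where each piece ends up living:

    public class Decomposition {
        // A: the shared piece, factored out on its own.
        static String a(String input) {
            return input.trim().toLowerCase();
        }

        // X = A + B: the X-iness (here, B) still has to live somewhere.
        static String x(String input) {
            return a(input) + "!";
        }

        // Y = A + C: likewise for the Y-iness.
        static String y(String input) {
            return a(input) + "?";
        }

        public static void main(String[] args) {
            System.out.println(x("  Hello "));  // hello!
            System.out.println(y("  Hello "));  // hello?
        }
    }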

Please correct me if I'm wrong. I just feel like when you consider a program holistically, more genericness, and therefore more nuanced program description, always results in more text.

I agree that Java is more verbose than, say, lisp or python. But I think that syntactic verbosity is really limited to a per-statement or per-block scope. I disagree that the language itself is responsible for verbosity in the higher-order composition of these pieces, weighted, of course, for how dynamic each language is. I hope you won't fault Java for not having the terseness of Ruby when Ruby doesn't have the performance or rigidity of Java.

I think you only really make order-of-magnitude leaps in reducing verbosity/excess abstraction by sliding up or down the dynamicness vs performance/safety spectrum. Tit-for-tat, I think an equivalent Ruby and Python program will be about the same size, and an equivalent Java and C# program will be about the same size.

The huge asterisk to all this is, of course, the humans actually writing the programs. Obviously a sufficiently motivated developer will be able to make an abstract mess out of any language.


I'll give a concrete example where adding abstraction will make code shorter. This comes from my recent experience, with some details changed for the sake of discretion and simplicity.

Suppose you wanted to simplify the generation of contextual metadata for structured logging in an API service. The service handles requests that manipulate stored records, run logic, etc. Basic CRUD plus business logic.

The starting point is a bunch of raw calls to MDC.put() in a Java service, or the equivalent in another language [0].
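Concretely, something along these lines (SLF4J's MDC; the keys and handler are hypothetical):

    import org.slf4j.MDC;

    public class RawMdcExample {
        // Every request handler repeats this kind of boilerplate by hand.
        static void handleRequest(String userId, String accountId) {
            MDC.put("userId", userId);
            MDC.put("accountId", accountId);
            try {
                // ... actual request processing and log statements ...
            } finally {
                MDC.clear();
            }
        }
    }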

An abstraction-free approach might give you a logUser method, a logUserAndAccount method, a logAccount method, a logTransaction method, a logTransactionAndAccount method, and so on. This does at least simplify the actual request-processing code compared to the starting point and make the logging consistent, but it makes the program longer.

Alternatively, one could have a generic Loggable interface, with a function that returns metadata for the object to be logged, and a logWith method that takes a Loggable as a parameter. You can get fancy and provide a default implementation if all of your entities have common methods like id(). There are probably still ways to improve from here, but now instead of a dozen functions, you have one.
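A rough sketch of that shape, with illustrative names rather than the actual code from the service in question:

    import java.util.Map;
    import org.slf4j.MDC;

    // One generic abstraction instead of a logUser/logAccount/logTransaction zoo.
    interface Loggable {
        String id();

        // Default implementation covers entities that only need an id logged;
        // richer entities override this to contribute more keys.
        default Map<String, String> logMetadata() {
            return Map.of(getClass().getSimpleName().toLowerCase() + "Id", id());
        }
    }

    class LoggingContext {
        // Pushes an entity's metadata into the logging context in one place.
        static void logWith(Loggable entity) {
            entity.logMetadata().forEach(MDC::put);
        }
    }

    // Example entity: picks up the default metadata for free.
    class User implements Loggable {
        private final String id;
        User(String id) { this.id = id; }
        public String id() { return id; }
    }

Request handlers would then call LoggingContext.logWith(user) (or a varargs variant) instead of picking the right logX method.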

[0] Years ago I wrote a rubygem for this, but was not able to open source the bulk of it.



