
Yea, the expert abstraction thing is bad advice: too many juniors (who probably already over-abstract everything) will read this and think they need more abstraction.

The difference I think the author was trying to hit is that an expert uses abstraction to achieve separation of concerns, an important and difficult thing to do.

A junior doesn't know what separation of concerns is, but they see the experts abstracting stuff and think that if they abstract things, they'll be like the experts.



There is an easy test for wrong abstractions.

A right abstraction makes everything smaller. Most wrong abstractions make everything bigger (in the name of "flexibility", etc.).


Are you listening, Java people?!


I've come to realize the problem with Java is not the programmers. Most of the strategies they want to use are legitimate improvements in flexibility, and in other languages they do make the source code smaller. Java is simply a poor vehicle.

Java forces programmers to choose a point on the spectrum of "more flexibility" <-> "smaller program size". In languages like Lisp, by contrast, the more generic a function is, the smaller it is. When I choose to make a function less flexible (i.e., making it more specific to improve performance), it gets longer. The spectrum in Lisp is "more flexibility + smaller program size" <-> "more performance + larger program size", and that's almost always an easy choice to make. Every function and macro in core.clj, for example, is impressively concise.
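To make the tradeoff concrete, here's a minimal Java sketch (my own illustration, not code from any real codebase). The specific version is short; the generic version buys flexibility, but the signature alone nearly doubles its length:

    import java.util.List;
    import java.util.function.BiFunction;

    // (inside some utility class)

    // Specific: short and direct, but works for exactly one case.
    static int sumLengths(List<String> xs) {
        int n = 0;
        for (String s : xs) n += s.length();
        return n;
    }

    // Generic: maximally flexible, but the signature carries the cost.
    static <T, R> R fold(Iterable<? extends T> xs, R init,
                         BiFunction<? super R, ? super T, ? extends R> f) {
        R acc = init;
        for (T x : xs) acc = f.apply(acc, x);
        return acc;
    }

    // fold(List.of("a", "bb"), 0, (n, s) -> n + s.length())  ==> 3
In a Lisp, the generic version would be the short one.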

When I write Java, I tend to go for simple first, and then have to rewrite everything 20 times as I discover what axes of flexibility I need. In Lisp, I tend to write a function once in the simplest possible way, and then re-use it in its original form forever. It's already at max-simplicity and max-conciseness by default. Except in the rare case where it ends up being a bottleneck, it's done.

What I want above all is "more maintainable". In Java, the two factors that drive this are on opposite ends of the spectrum. There's no ideal design, and whatever point I pick today will turn out to be the wrong choice later.


One problem is that Java lacked functions until Java 8. It still sort of lacks them internally (a lambda is just an object implementing an interface), but at least you can write free-standing functions here and there, especially as inline lambdas.

Another problem is that Java lacks type aliases. If your data have a complicated type like List<Set<Pair<Foo, ? extends Bar>>>, you have to copy-paste this type everywhere, without a way to name it succinctly.

On top of that, Java lacks type inference, even in the weakest syntactic form. This is why you often need to write long declarations, so long that they exceed the space you may have saved by factoring out a small function. Lombok and the diamond operator somewhat alleviate this, but not completely.

The lack of pattern matching, or named arguments, or a coherent null vs. Optional vs. Result story, goes without saying.

This makes Java a very verbose language, even if the logic of your code is streamlined and economical. In many standard library APIs, it is not.

(Hence Kotlin, obviously, or Scala if you can tolerate the build times.)


> If your data have a complicated type like List<Set<Pair<Foo, ? extends Bar>>>, you have to copy-paste this type everywhere, without a way to name it succinctly.

Java has had generic type inference since at least 7 [0], so all you need is <>

It's not perfect, but I believe it's been improved with each version.
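For anyone unfamiliar, the diamond lets you drop the constructor's type arguments when they can be inferred from the declaration (a quick sketch):

    import java.util.*;

    // Before Java 7:
    Map<String, List<Integer>> m =
        new HashMap<String, List<Integer>>();

    // Java 7+: the constructor's type arguments are inferred.
    Map<String, List<Integer>> m2 = new HashMap<>();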

[0] https://docs.oracle.com/javase/7/docs/technotes/guides/langu...


Yes, this is what I called "diamond operator".

I'm talking about a different use case where inference via assignment does not work, e.g. a method declaration:

    public List<Set<Pair<Foo, ? extends Bar>>> combine(
      List<Set<Pair<Foo, ? extends Bar>>> a,
      List<Set<Pair<Foo, ? extends Bar>>> b
    ) {...}
It would be great to have something like

    type Quux = List<Set<Pair<Foo, ? extends Bar>>>;

    public Quux combine(Quux a, Quux b) {...}
Unfortunately, I'm not aware of any plans to introduce that.
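The closest workaround I know of is abusing subtyping as a pseudo-alias. It compiles (the wildcard is nested, not top-level), but it creates a distinct type rather than a transparent alias, so existing List<Set<Pair<Foo, ? extends Bar>>> values aren't Quuxes:

    // Pseudo-alias via inheritance -- a sketch, not a real substitute:
    interface Quux extends List<Set<Pair<Foo, ? extends Bar>>> {}

    public Quux combine(Quux a, Quux b) {...}
You'd have to construct your collections as Quux from the start, e.g. via a class that extends ArrayList<Set<Pair<Foo, ? extends Bar>>> and implements Quux.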


Huh, I always thought I was doing Java wrong when I would try to write the equivalent of four Python list comprehensions in a row and end up declaring a ton of List<Set<Pair<...>>> things. Didn't think to blame the language.


Java 10 added the var keyword, so it finally has some local type inference.
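Worth noting that var only works for local variables; fields, parameters, and return types still need the full spelling:

    var pairs = new ArrayList<Set<Pair<Foo, ? extends Bar>>>(); // OK: local
    // public var combine(var a, var b) {...}  // does not compile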


I would argue that no matter what language you pick, adding abstraction always behaves like this:

Suppose I have a thing I need to do, called X. I write a program to do X. Later I need to do Y, and I realize that if I think of X as (A + B) and Y as (A + C), I can re-organize so that one segment of code does A, wrapped so that B or C happens afterwards based on context (whether the branching mechanism is polymorphism, different scripts, or procedural flow control). I've then abstracted A out into its own standalone piece, hopefully because A describes some independent process that makes sense on its own. B's and C's concerns are now separated from A's, and X and Y are just compositions of these nice linkable, re-usable pieces.
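In code, that might look like this minimal sketch (all names hypothetical; the branching mechanism here is just a callback, but polymorphism works the same way):

    // A is the shared step; B and C are the varying tails.
    static void withA(Runnable tail) {
        // ... do A, the common work ...
        tail.run(); // then B or C, chosen by the caller
    }

    static void doX() { withA(() -> { /* do B */ }); }
    static void doY() { withA(() -> { /* do C */ }); }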

No matter what language you pick, if you want program X to do the same thing, the X-iness needs to still live somewhere. In Java that might look like going from two classes (X and Y) to three classes (A, X and Y). In lisp it might be three functions. I feel like in your example just now, you're comparing A from lisp to X in Java.

Please correct me if I'm wrong. I just feel like when you consider a program holistically, more genericness, and therefore more nuanced program description, always results in more text.

I agree that Java is more verbose than say, lisp or python. But I think that syntactic verbosity is really limited to a per-statement or per-block scope. I disagree that the language itself is responsible for verbosity in the higher-order composition of these pieces, weighted for, of course, how dynamic each language is. I hope you won't fault Java for not having the terseness of Ruby when Ruby doesn't have the performance or rigidity of Java.

I think you only really make order-of-magnitude leaps in reducing verbosity/excess abstraction by sliding up or down the dynamism vs. performance/safety spectrum. Tit for tat, I think an equivalent Ruby and Python program will be about the same size, and an equivalent Java and C# program will be about the same size.

The huge asterisk to all this is, of course, the humans actually writing the programs. Obviously a sufficiently motivated developer will be able to make an abstract mess out of any language.


I'll give a concrete example where adding abstraction will make code shorter. This comes from my recent experience, with some details changed for the sake of discretion and simplicity.

Suppose you wanted to simplify the generation of contextual metadata for structured logging in an API service. The service handles requests that manipulate stored records, run logic, etc. Basic CRUD plus business logic.

The starting point is a bunch of raw calls to MDC.put() in a Java service, or the equivalent in another language [0].

An abstraction-free approach might give you a logUser method, a logUserAndAccount method, a logAccount method, a logTransaction method, a logTransactionAndAccount method, etc. This does at least simplify the actual request-processing code and make the logging consistent, but it makes the program longer.

Alternatively, one could have a generic Loggable interface, with a function that returns metadata for the object to be logged, and a logWith method that takes a Loggable as a parameter. You can get fancy and provide a default implementation if all of your entities have common methods like id(). There are probably still ways to improve from here, but now instead of a dozen functions, you have one.
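A bare-bones sketch of what I mean (the names are illustrative, not my actual code):

    import org.slf4j.MDC;
    import java.util.Map;

    interface Loggable {
        String id(); // assuming all entities share an id() accessor

        // Each entity contributes its own MDC metadata; this default
        // covers the common "<type>_id" case.
        default Map<String, String> logMetadata() {
            return Map.of(getClass().getSimpleName().toLowerCase() + "_id", id());
        }
    }

    static void logWith(Loggable entity) {
        entity.logMetadata().forEach(MDC::put);
    }
Now User, Account, Transaction, etc. just implement Loggable, and every call site is logWith(user) instead of a bespoke logUserAndAccount.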

[0] Years ago I wrote a rubygem for this, but was not able to open source the bulk of it.


Most wrong abstractions are created when people anticipate future growth that never comes. I wouldn't say those abstractions are wrong in themselves, but it could be argued that you should just refactor your code when you actually need them.


     > ...too many juniors will read this and think they need more abstraction.
Yes. And that's a good thing.

The way you become "an expert" is by making ALL the mistakes yourself, seeing the results, and doing it over and over again in different contexts until you understand and can do it right (most of the time).


What's good is making those mistakes but rarely landing them in production. It boosts your progress immensely to have somebody available to critique you and guide you in the right direction. Making mistakes and only learning from them a month later, after a significant amount of time has been wasted on the wrong things, won't make you any smarter than figuring them out beforehand. And after such a disaster, you'll be a lot less confident doing anything by yourself.

So no, don't make "ALL the mistakes". Learn by observing others around you, emulating them, and then finally, at some point, understanding things clearly enough to make something completely novel.


    > ...making those mistakes but never landing most of them into production. It will boost your progress immensely if you have somebody available to critique and guide...
I mostly agree.

But what "landing a mistake in production" actually means depends a lot on the environment and the project. Does it mean the deliverable has a hiccup and skids past the deadline by a few days? Does it mean there's <gasp> a bug? Does it mean a hard-to-maintain big ball of mud that makes people miserable for years? Or does it mean a rocket blows up? All those things happen, of course, but blaming any significant number of them on uppity juniors reading articles that are too advanced for them is a bit of a stretch. There are so many ways projects can fail.

I think we can all agree that juniors need the agency to try things out (hopefully with a few guard-rails installed). Sadly, having a benevolent mentor watch over juniors is, in many places, a luxury, and they're forced to read "articles on the internet" for guidance. It's not optimal, but it's OK.


Sure, it's a vague line. It's already hard for juniors to have the confidence to make their own decisions, or to code at all; making them fear failure on top of that, with everybody fixated on not making any mistakes, is even worse. But as someone who fails constantly myself, I don't view falling on your ass all the time as so great a thing that you should carelessly let it keep happening. You have to try your best and stay focused, otherwise you'll always half-ass things without ever learning the right mindset, so to speak.


> Learn by observing others around you

Or even better, learn by making mistakes and having great coworkers who can catch them in code review.


Mistakes are always a part of the process and are one way to learn.

However, I don't think we would have had Ramanujan if he had been left to rediscover all of mathematics on his own. Having an expert validate your insights or point you in the right direction can speed up the learning process a lot.


Learning abstractions is really hard, mainly because over- or under-abstracting doesn't break the code; it just wastes work. And if you keep erring in the same direction, you won't have a reference point for how much work something should have taken.



