You are very confused... "strong" in strong typing doesn't mean you have to write a lot, or anything at all. Actually, the term doesn't mean anything precise. But, let's say, Haskell is probably at least as "strongly typed" as Java -- at least as most people understand the wording -- and you don't have to write types in Haskell at all. The code will be a nightmare (as if Haskell could be anything else, but even by the very low standards of Haskell it would still be a nightmare), but it will "work".
Until very recently, Java was just a tedious, repetitive, high-entropy language where you had to write everything multiple times.
E.g.:
VeryLong<Type> variable = new VeryLong<Type>();
In this expression, it's obvious what the most likely type of the variable is, but you still have to write it twice -- which is a worthless use of your time, makes it very easy to introduce a typo, and produces a very repetitive mess to read.
Recently, Java tried to improve this situation by allowing you to omit types when the inferred type is the one you want. So you need to write less. You also need to learn the inference rules, obviously, so it raises the cognitive load and the expectations of the programmer's skill a bit. But I think that's fair: Java transformed from a language for the brain-dead into a language that requires more effort to master. While I'm not sure it's a positive change... this decision by Java's developers kind of flies in the face of your argument:
No, it's not necessary to repeatedly spell out what you want your program to do. It's just boring and contributes nothing to the program's correctness. If anything, it only causes the programmer to lose focus and introduces mechanical errors that would've been totally avoidable, had the language been more terse.
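To be concrete about the change I mean, here's the same kind of declaration across Java versions (class and variable names are made up for illustration):

import java.util.HashMap;
import java.util.List;
import java.util.Map;

class VarDemo {
    public static void main(String[] args) {
        // Pre-Java 7: the full generic type is spelled out on both sides.
        Map<String, List<Integer>> a = new HashMap<String, List<Integer>>();

        // Java 7: the diamond operator drops the right-hand repetition.
        Map<String, List<Integer>> b = new HashMap<>();

        // Java 10: 'var' infers the whole type from the initializer.
        // (Note it infers the concrete HashMap type, not the Map interface.)
        var c = new HashMap<String, List<Integer>>();
    }
}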
Type inference can get you in trouble quickly. Consider this code:
var fireable = someMethod();
fireable.fire();
The programmer intended this code to fire an employee. Let's say this code is in a military application, and another programmer modified the someMethod() function to return a missile. As long as the missile object has a fire() method, this code will compile just fine... and do something the programmer didn't intend.
How likely are you to have employee firing and missile firing in the same program? Not very likely, but the principle is valid regardless. You need to express your intention more clearly, like this:
Employee fireable = someMethod();
Now if someMethod() is modified to return a missile you get a compilation error, and the world will be a lot safer. You don't want missiles being fired by accident!
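A minimal sketch of the difference, assuming hypothetical Employee and Missile classes that both happen to have a fire() method:

class Employee {
    void fire() { System.out.println("employee dismissed"); }
}

class Missile {
    void fire() { System.out.println("missile launched"); }
}

class Demo {
    // Originally declared to return Employee; later changed to return Missile.
    static Missile someMethod() { return new Missile(); }

    public static void main(String[] args) {
        var fireable = someMethod();   // still compiles: 'fireable' is now a Missile
        fireable.fire();               // launches a missile without complaint

        // Employee fireable = someMethod();
        // With the explicit type, the change becomes a compile-time error:
        // "incompatible types: Missile cannot be converted to Employee"
    }
}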
This is a problem of dispatch, not so much of type inference... The problem with dispatch is that if a method gets overloaded too much and too often, programmers tend to make mistaken assumptions about how the method they call will behave.
You'd have to be insane to write a program that can accidentally confuse people with missiles, but I understand that this is a stretch to amplify the point. What tends to happen in practice is that, e.g., a method which used to only read something from local storage gets a new overload that goes out onto the network, bringing with it a whole new bunch of problems the calling code wasn't prepared to handle. Or a method that used to do something simple with a string gets overloaded with a more sophisticated variant that interprets substrings of the passed string as commands to run code -- and then you get security problems.
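A sketch of that first failure mode, with made-up names. In Java, which overload runs is decided at compile time from the argument's static type, so a new overload can quietly capture new call sites:

import java.net.URI;
import java.nio.file.Path;

class Storage {
    // The original method: a cheap local read.
    static String read(Path path) {
        return "local contents of " + path;
    }

    // A later overload: same name, but it goes out over the network,
    // bringing timeouts, auth failures, etc. the callers never planned for.
    static String read(URI uri) {
        return "remote contents of " + uri;
    }

    public static void main(String[] args) {
        System.out.println(read(Path.of("config.txt")));
        System.out.println(read(URI.create("https://example.com/config.txt")));
    }
}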
Have you ever seen the IntelliJ interface when it deals with Rust code? I'm not sure whether it does the same for other languages with type inference. Anyway, the idea is that the programmer can toggle the display of inferred types to qualify every expression. So it's really trivial to make sure you aren't calling the fire() of a missile instead of an employee. And in practice this doesn't lead to problems, because dispatch essentially blurs the difference between multiple different objects, while type inference is just a kind of abbreviation. It may cause confusion, but confusion of a different type: the reader might not know what's being abbreviated, whereas with dispatch, the reader may be convinced they know what the author meant and still be wrong.
> the programmer can toggle the display of inferred types to qualify every expression
That's a terrible solution. Eyeballing is not as reliable as the compiler doing a check. The language should provide a way for the programmer to express intent, and the compiler should do the check.
You can express intent more clearly when you say:
Employee foo = someMethod();
as opposed to:
var foo = someMethod();
A real-life example: I once helped a kid with her Python program to sort numbers. I looked at the code and the algorithm seemed correctly implemented, but the output was wrong. The bug turned out to be that she hadn't called int() to convert the user input to an integer, so all of the data passed through the entire program as strings, and got sorted as strings. There was nowhere in the code where the programmer could express the intent that these values were supposed to be integers. This sort of thing would never have happened in Java. Well, unless you misuse 'var'.
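A sketch of the Java equivalent (hypothetical, mirroring that program): the declared element type forces the conversion up front, so the string-sorting bug can't even compile.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Scanner;

class SortNumbers {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        List<Integer> numbers = new ArrayList<>();
        while (in.hasNextInt()) {
            numbers.add(in.nextInt());   // the conversion to int is unavoidable
            // numbers.add(in.next());   // would not compile: String is not Integer
        }
        Collections.sort(numbers);       // numeric order, not lexicographic
        System.out.println(numbers);
    }
}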
This is kind of a ridiculous thing to worry or think about. I’m seeing a lot of type inference in modern Java, and it’s the default way of writing Kotlin.
It is dangerous to call a method on an object whose type is not being checked by the compiler. Whether that's a ridiculous thing to worry about depends on how important your program is. If you work for NASA and your code will run inside the Mars rover, it is not ridiculous to worry about such things, but if you're writing some code that will be used once and thrown away then yeah, it might be ridiculous.
It is being checked by the compiler though. For your example to work both would need to extend a common object (unless the only thing you ever do to that object is call fire(), which is nonsensical) or you would get a compile time error right off the bat.
It's a ridiculous thing to worry about in either of your cases. That incredibly specific situation simply isn't ever going to happen in practice because so many other conditions need to be met for the compiler to let you proceed.
> For your example to work both would need to extend a common object
Not necessarily at all.
> unless the only thing you ever do to that object is call fire(), which is nonsensical
In this example, when the object is returned by this particular method, only fire() is called on it, but code that obtains the object in other ways may call more of its methods.
> It's a ridiculous thing to worry about in either of your cases.
Not at all... if your program is running inside a NASA rover, or inside a robotic surgery machine, you have to worry about safety, and you want to maximize compile-time checks.
someMethod() could return an implementation of Employee that overrides the fire method to dispatch missiles.
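For illustration, a contrived sketch of what that would look like (hypothetical classes again):

class Employee {
    void fire() { System.out.println("employee dismissed"); }
}

// The static type stays Employee, so every compile-time check still passes,
// whether the caller writes 'Employee' explicitly or uses 'var'.
class SaboteurEmployee extends Employee {
    @Override
    void fire() { System.out.println("missile launched"); }
}

class Hr {
    static Employee someMethod() { return new SaboteurEmployee(); }
}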
The point being, someMethod is doing some job that you want it to do. It is incredibly unlikely to simultaneously start doing a completely different job, while also returning something of the same shape.
> an implementation of Employee that overrides the fire method to dispatch missiles
If you intentionally break it, that's on you.
> It is incredibly unlikely to simultaneously start doing a completely different job, while also returning something of the same shape.
But it doesn't need to return something of the same shape! The shape is not being checked. All the compiler checks for is the existence of a single method with the same name. You call that type checking?
F# is strongly typed as well, and I almost never have to write type annotations. Hindley-Milner type inference is pretty useful. And my IDE of choice also shows the inferred types for easy checking. Although, without that, it'd be a PITA, as you said about Haskell.