> I've been using Python for ML for the last 3 years and I've never felt this way. It might be that I'm not all about the hip new languages, but I don't really see the benefit of making Python more ML/Haskell-ish.
Julia is not ML-ish. In fact, at first blush, Julia reads very similarly to Python. You even get list comprehensions in Julia!
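For instance, comprehensions look nearly identical in the two languages; a quick sketch (the variable name is mine):

```julia
# Squares of even numbers, written much as you would in Python:
evens_squared = [x^2 for x in 1:10 if x % 2 == 0]
# Python equivalent: [x**2 for x in range(1, 11) if x % 2 == 0]
```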
> Sure, the code that you used to implement your model may improve slightly, but I don't see that code improving significantly. The fruit of your labor is usually a protobuf file (encoding the TensorFlow graph) or whatever your framework uses to encode the model you built. The code actually surrounding it is very minimal for most use cases.
It's really about breaking down the compositional boundaries here. Upthread, someone talked about how they could play around with Bayesian nets by simply mixing Flux.jl (the differentiable programming library) with Turing.jl (the Bayesian programming library). Mixing TensorFlow/PyTorch with PyStan, for example, can be a nightmare. That's also why there are so many implementations of probabilistic programming frameworks (Edward, PyMC3, PyStan, Pyro, etc.); they all sit on different underlying libraries. In Julia they all just compose together.
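To make that concrete, here is a hedged sketch of the kind of mixing the upthread comment describes: a tiny Flux network whose weights get priors through an ordinary Turing model. The network shape, names, and priors are all illustrative assumptions on my part, not something from the original discussion; this follows the general pattern of the Turing.jl Bayesian-neural-network tutorials.

```julia
using Flux, Turing, LinearAlgebra

# A small Flux network; destructure flattens its weights into one vector
# so a probabilistic model can put a prior over them.
nn = Chain(Dense(2 => 3, tanh), Dense(3 => 1, sigmoid))
ps, reconstruct = Flux.destructure(nn)

@model function bayes_nn(xs, ys)
    # One Gaussian prior over every network weight.
    θ ~ MvNormal(zeros(length(ps)), I)
    net = reconstruct(θ)            # rebuild the Flux network from sampled weights
    for i in eachindex(ys)
        ys[i] ~ Bernoulli(net(xs[:, i])[1])
    end
end

# chain = sample(bayes_nn(xs, ys), NUTS(), 500)  # sampled like any Turing model
```

The point isn't this particular model; it's that neither library needs to know about the other. Flux just computes, Turing just samples, and Julia's compiler and multiple dispatch glue them together.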
> I don't see why you would compose a TensorFlow model with a PyTorch model without incurring tons of overhead in your application around gluing these two things
I find the change illuminating. Before I started spending more time with Julia, I would often do derivations on paper for more experimental work, then implement them in TF/PyTorch from scratch, reading other people's code for help where I could. In Julia I can import a library and I'm ready to go. It feels like working with the math itself.
Julia also lets you compose long chains of operators. That helps during long REPL sessions exploring data: I can define a few functions in a file, import the file, and pipe output between functions (Julia's `|>` pipe operator is just function application) to plot data or calculate errors.
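A toy sketch of that piping style (the data here is made up):

```julia
# x |> f is just f(x); .|> broadcasts the function over a collection.
errors = [1.5, -2.0, 0.5]

total = errors .|> abs |> sum   # take abs of each element, then sum
# equivalent to sum(abs.(errors))
```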
Moreover, Julia is quite a bit faster for REPL work than Python, both in raw speed and in how quickly I can iterate. In Python I'll usually work with a subset of a dataset to get a feel for it, then run a separate script to process the entire dataset. In Julia I can often prototype and run the algorithm on the full dataset in the REPL itself.