> I prefer having a Python monoculture for AI to be able to mix and match different algorithms.
Julia is, like, the poster child for being able to mix and match and compose code and algorithms: Flux/Zygote can do auto-differentiation on the entire language without modification, and you can drop quite literally arbitrary Julia code in as a component of your network and it works.
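A minimal sketch of what that composability looks like in practice (assuming Flux.jl and Zygote.jl are installed; `f` and `clipped_sine` are made-up illustrative functions, not library APIs):

```julia
using Flux, Zygote

# An arbitrary plain-Julia function: a loop, control flow, no annotations.
function f(x)
    s = zero(x)
    for i in 1:5
        s += i * sin(x)^i
    end
    return s
end

# Zygote differentiates it directly; gradient returns a tuple, one entry per argument.
df_dx = Zygote.gradient(f, 0.3)[1]

# The same kind of plain function can be dropped into a Flux model as a layer,
# because Chain accepts any callable.
clipped_sine(x) = clamp.(sin.(x), -0.5f0, 0.5f0)
model = Chain(Dense(4 => 8, relu), clipped_sine, Dense(8 => 1))

# Gradients of the whole model, custom layer included, work out of the box.
x = rand(Float32, 4)
grads = Zygote.gradient(m -> sum(m(x)), model)
```

No `@tf.function`-style tracing, no restricted op set; the custom layer is just a function.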
> don't think that the LLVM IR that Julia uses is the best IR for the optimizations.
What makes you say this? The community has been able to do some pretty amazing things performance-wise, e.g. a pure-Julia implementation of BLAS/LAPACK routines reaching, and in some cases exceeding, performance parity with the standard optimized libraries, plus there's been plenty of work on CUDA support for arbitrary Julia code, which is impressive.
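On the CUDA point, here's a rough sketch of what "arbitrary Julia code on the GPU" means (assumes CUDA.jl and an NVIDIA GPU; `axpy_kernel!` is my own toy kernel, not a library function):

```julia
using CUDA

# A hand-written kernel in ordinary Julia; CUDA.jl compiles it to PTX.
function axpy_kernel!(y, a, x)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(y)
        @inbounds y[i] += a * x[i]
    end
    return nothing
end

x = CUDA.fill(1.0f0, 1024)
y = CUDA.fill(2.0f0, 1024)
@cuda threads=256 blocks=cld(length(y), 256) axpy_kernel!(y, 3.0f0, x)

# Broadcasts over CuArrays also fuse into single generated kernels:
z = sin.(x) .^ 2 .+ cos.(x) .^ 2
```

The same source compiles for CPU arrays or GPU arrays, which is exactly the kind of composition that's hard to get from a Python-plus-C++-kernels stack.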