Yeah, I'm not really sure what's going on here. Sonar has been using ML classifiers for decades, but afaik stream splitting with 100% confidence is currently considered magic. So what did they apply, or what advance did they make? Afaict they threw some audio into a GPT blender without taking a closer look at what's actually being done.
Edit: I found the link to the paper. It isn't stream splitting so much as it is GPT-assisted beamforming estimation. Good stuff for sure.
https://dl.acm.org/doi/10.1145/3613904.3642057