That's laughable in 2025, and together with the wimpy 153 GB/s memory bandwidth (come on, Strix Halo is 256 GB/s at a fraction of the price!) they really don't have a leg to stand on calling this AI-anything!
As pointed out elsewhere as well, a better comparison would be the upcoming Pro & Max variants.
Also, as far as I know, Strix Halo mainly uses the GPU for inference, not the little AI accelerator AMD has put on there. That one is just too limited.
I'm saying this is pretty weaksauce for AI-anything in 2025, especially considering the price tag. Sure, there will be later models with more memory and bandwidth (no doubt at eye-watering prices), but with 32 GB this model isn't it.
I'm sure it's a perfectly fine daily driver, but you have to appreciate the irony of a massive chip loaded to the gills with matrix multiplication units, marketed as an amazing AI machine, and yet so hobbled by mem capacity and bandwidth.
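For a rough sense of why those two numbers matter, here's a back-of-envelope sketch. The assumptions are mine, not from any spec sheet: single-stream LLM decode is memory-bandwidth-bound (every generated token streams the full weight set once), weights are ~4-bit quantized at roughly 0.55 bytes per parameter, and KV cache / OS overhead is only crudely accounted for by a 75% usable-RAM headroom factor.

```python
# Back-of-envelope: theoretical ceiling on single-stream decode speed
# for a bandwidth-bound LLM, plus a crude "does it fit in RAM" check.
# All constants below are assumptions for illustration, not measurements.

BANDWIDTH_GB_S = 153        # memory bandwidth under discussion
RAM_GB = 32                 # unified memory on this model
BYTES_PER_PARAM_Q4 = 0.55   # ~4-bit weights plus quantization overhead
USABLE_FRACTION = 0.75      # leave headroom for OS, KV cache, activations

for params_billion in (7, 14, 32, 70):
    weights_gb = params_billion * BYTES_PER_PARAM_Q4
    fits = weights_gb < RAM_GB * USABLE_FRACTION
    max_tok_s = BANDWIDTH_GB_S / weights_gb  # one full weight read per token
    print(f"{params_billion:>3}B @ ~Q4: ~{weights_gb:4.1f} GB weights, "
          f"{'fits' if fits else 'does NOT fit'} in {RAM_GB} GB, "
          f"ceiling ~{max_tok_s:4.1f} tok/s")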