Unless something's changed, every AI-backed language server I've tried in Helix suffers from the same limitation when it comes to completions: suggestions aren't shown until the last language server has responded or timed out, so your slowest language server determines how long you'll be waiting.
The only project I know of that recognizes this is https://github.com/SilasMarvin/lsp-ai, which pivoted away from completions to chat interactions via code actions.
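For what it's worth, Helix does let you cap how long it will wait on any individual server via the per-server `timeout` setting in `languages.toml` (a general request timeout in seconds, not a completion-specific knob), which at least bounds the wait rather than fixing the aggregation behavior. A rough sketch, assuming lsp-ai is registered alongside rust-analyzer for Rust; the command and server names are just examples for your own setup:

```toml
# languages.toml — a sketch, not a drop-in config.

[language-server.lsp-ai]
command = "lsp-ai"
# Per-server request timeout in seconds (Helix defaults to 20).
# Lowering it bounds how long a slow AI server can hold up the
# aggregated completion popup described above.
timeout = 5

[[language]]
name = "rust"
# Both servers are queried; completions still wait for the slower of the two.
language-servers = ["rust-analyzer", "lsp-ai"]
```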
I feel like an LSP falls well short of the ideal UX for AI integrations. LSP would be fine for AI autocomplete, of course, but I think we want a custom UX that we haven't quite figured out yet. E.g. what Zed offers here seems useful, and I also really like what Claude Code does.
I don't know the LSP spec well enough to say whether these sorts of complex interactions would work with it, but they seem well out of scope for it, imo.
Also, the Helix way, thus far, has been to build an LSP for all the things, so I guess you'd make a Copilot LSP (I bet there already is one).