LLMs are at the current forefront of FHE research. There are a few papers running tweaked versions of BERT at <1 minute per token, which is only ~4 orders of magnitude slower than cleartext.
This paper uses a very heavily modified version of an encoder-only BERT model. The forward pass on a single 4090 is cited there at 13 seconds after swapping softmax out for a different kernel (21 seconds with softmax). They're missing a non-FHE baseline, but judging from its size, that model has only about 35 million parameters. At FP16 you'd expect it to be roughly 100x faster than a normal BERT because it's so damn small. On a 4090, that model's forward pass probably runs at something like 100k-1M tokens per second with some batching. So 6 orders of magnitude still sounds about right.
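Back-of-the-envelope, since the cleartext throughput is my guess rather than a measured baseline:

    # Rough gap estimate; the cleartext number is an assumption, not a measurement.
    fhe_s_per_token = 13.0          # cited FHE forward pass on a single 4090
    clear_tokens_per_s = 1e5        # low end of the 100k-1M tokens/s guess for a ~35M-param model
    gap = fhe_s_per_token * clear_tokens_per_s
    print(f"~{gap:.0e}x slower")    # ~1e+06, i.e. about 6 orders of magnitude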
Given that individual LLM parameters are not easily interpreted, being naturally obfuscated by the diffuse nature of their impact, I would think leaning into that would be a more efficient route.
Obfuscating input and output formats could be very effective.
Obfuscation layers can be incorporated into training: an input (output) layer that passes information forward, but whose output (input) is optimized to have statistically flat characteristics, resistant to attempts at interpretation.
Nothing like apparent pure noise for obfuscation!
The core of the model would then be trained on, and run inference over, the obfuscated data.
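A minimal sketch of what I mean (PyTorch; the module names, dimensions, and the particular "flatness" penalty are my own illustration, not anything from the linked paper):

    import torch
    import torch.nn as nn

    class Obfuscator(nn.Module):
        # Private input layer: maps real embeddings into a lower-dimensional,
        # obfuscated space whose statistics are pushed toward white noise.
        def __init__(self, d_in=768, d_obf=256):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(d_in, d_obf), nn.GELU(),
                                     nn.Linear(d_obf, d_obf))
        def forward(self, x):
            return self.net(x)

    def flatness_loss(z):
        # Penalize whatever makes z distinguishable from isotropic unit Gaussian noise:
        # nonzero mean, non-unit variance, and correlations across dimensions.
        z = z.reshape(-1, z.shape[-1])
        mean, var = z.mean(0), z.var(0)
        cov = (z - mean).T @ (z - mean) / max(z.shape[0] - 1, 1)
        off_diag = cov - torch.diag(torch.diag(cov))
        return mean.pow(2).mean() + (var - 1).pow(2).mean() + off_diag.pow(2).mean()

    # One training step (deobfuscator, core, and task_loss are analogous hypothetical pieces):
    #   z = obfuscator(x)
    #   loss = task_loss(deobfuscator(core(z)), y) + lam * flatness_loss(z)
    #   loss.backward(); optimizer.step()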
When used, the core model would operate publicly on obfuscated data, while the obfuscation/de-obfuscation layers would be applied privately.
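Deployment would then split along that line, something like this (continuing the hypothetical modules above):

    class Client:
        # Holds the private obfuscation/de-obfuscation weights; they never leave the client.
        def __init__(self, obfuscator, deobfuscator):
            self.obf, self.deobf = obfuscator, deobfuscator
        def query(self, x, server):
            z = self.obf(x)            # obfuscated request goes over the wire
            z_out = server.run(z)      # server only ever sees obfuscated vectors
            return self.deobf(z_out)   # readable result recovered client-side

    class Server:
        # Hosts the public core model, trained entirely on obfuscated data.
        def __init__(self, core):
            self.core = core
        def run(self, z):
            return self.core(z)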
In addition to obfuscating, the pre- and post-layers could also reduce data dimensionality, naturally increasing obfuscation and reducing data transfer costs. It's a really good fit.
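With the hypothetical 768 -> 256 bottleneck above, the transfer saving is just:

    d_in, d_obf, bytes_per_fp16 = 768, 256, 2
    print(d_in * bytes_per_fp16, "->", d_obf * bytes_per_fp16, "bytes per token")  # 1536 -> 512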
Even the most elaborate obfuscation layers will be orders and orders of magnitude faster than today's homomorphic approaches.
(Given the natural level of parameter obfuscation, and the highly limited set of operations in most deep models, I wouldn't be surprised if efficient homomorphic approaches were found in the future.)
https://arxiv.org/html/2410.02486v1#S5