Hell, 30 years ago I was working for the MOD in the UK (they sponsored my PhD and then turned it into an RA), creating context-aware neural network inference engines for FLIR (Forward-Looking Infra-Red) data. We had all sorts of "fun" stuff running on a Meiko Computing Surface: parallelised network training and implementation, temporal and spatial averaging, and relaxation labelling all thrown into the mix to aid the recognition engine, using a voting system of various architectures that shared a "blackboard" where information could be posted and read. Visualisation was all on high-end (for the time) Silicon Graphics workstations.
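To give a flavour of the blackboard-plus-voting idea (and only a flavour - this is a minimal in-process sketch, not the original MOD code, which ran distributed across the Meiko; every name here, Blackboard, vote, net-A and so on, is made up for illustration):

```python
# Minimal sketch: several recognisers post hypotheses for an image region to a
# shared blackboard, and a confidence-weighted vote picks the final label.
from collections import defaultdict

class Blackboard:
    """Shared store that any recogniser can post to and read from."""
    def __init__(self):
        # region_id -> list of (label, confidence, source) tuples
        self.hypotheses = defaultdict(list)

    def post(self, region_id, label, confidence, source):
        self.hypotheses[region_id].append((label, confidence, source))

    def read(self, region_id):
        return self.hypotheses[region_id]

def vote(blackboard, region_id, weights=None):
    """Confidence-weighted vote over all posted hypotheses for a region.
    `weights` lets you trust some recognisers more than others."""
    weights = weights or {}
    tally = defaultdict(float)
    for label, confidence, source in blackboard.read(region_id):
        tally[label] += confidence * weights.get(source, 1.0)
    return max(tally, key=tally.get) if tally else None

# Hypothetical usage: three recognisers disagree about one FLIR region.
bb = Blackboard()
bb.post("region-17", "vehicle", 0.8, "net-A")
bb.post("region-17", "vehicle", 0.6, "net-B")
bb.post("region-17", "clutter", 0.9, "net-C")
print(vote(bb, "region-17", weights={"net-A": 1.5}))  # -> "vehicle"
```

In the real system the "sources" were different network architectures (plus the temporal/spatial averaging and relaxation-labelling stages), but the principle was the same: everything wrote its evidence to one shared place and the decision was made over the whole pool.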
The context (together with the features extracted) was the killer feature though, if you'll forgive the pun: everything else reduced noise, but context increased signal.
My gast remains flabbered that the sort of thing I was working on back then hasn't become commonplace in the interim. Given the computing power available today, compared to then, and the accuracy we had (I know for a fact that at least one of the designs was made into real hardware; it was called RH7, and "RH" stood for "Red Herring" - oh how we laughed), it beggars belief that it was just left to digitally rot.