Of note, this is a common pattern when programming embedded devices without an OS. You can set up scheduling using hardware timers that trigger an interrupt when they fire; the interrupt handler (ISR) then runs the code in question. This works for periodic tasks, where the timer automatically reloads once the countdown completes, or for one-shot timers that fire when a certain condition occurs.
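As a rough illustration, here is a minimal bare-metal sketch of that pattern in C, using CMSIS-style names on a Cortex-M part; the device header, SysTick as the timer, and the 48 MHz core clock are all assumptions, not tied to any particular project:

    #include "stm32f4xx.h"              /* assumed part; any CMSIS device header works */
    #include <stdint.h>

    static volatile uint32_t g_ticks;

    /* Runs once per timer period; keep the work in here short. */
    void SysTick_Handler(void)
    {
        g_ticks++;                      /* periodic bookkeeping, LED heartbeat, etc. */
    }

    int main(void)
    {
        SysTick_Config(48000000u / 1000u);  /* request an interrupt every 1 ms at 48 MHz */
        for (;;) {
            /* background work, or just idle between interrupts */
        }
    }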
As soon as your application grows to the point of using a real-time OS for multitasking, though, it's generally a good idea to minimize the use of hardware timer interrupts for running non-trivially complex code. Otherwise, you lose a lot of the benefits of preemptive real-time scheduling. For many periodic tasks, the resolution of an OS tick is good enough (using sleeps and timeouts). If you need the precision of a hardware timer, you can use a semaphore to wake a non-interrupt task from the ISR.
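For example, under FreeRTOS (used here only as a familiar stand-in for a real-time OS; the IRQ handler name is hypothetical), the ISR can stay tiny and just give a semaphore, while the more complex work runs in an ordinary prioritized task:

    #include "FreeRTOS.h"
    #include "semphr.h"
    #include "task.h"

    static SemaphoreHandle_t xTimerSem;    /* created with xSemaphoreCreateBinary() at init */

    void TIMER_IRQHandler(void)            /* hypothetical vector name */
    {
        BaseType_t xWoken = pdFALSE;
        /* clear the hardware timer's interrupt flag here (device specific) */
        xSemaphoreGiveFromISR(xTimerSem, &xWoken);
        portYIELD_FROM_ISR(xWoken);        /* switch right away if the woken task has higher priority */
    }

    static void vTimerTask(void *arg)
    {
        (void)arg;
        for (;;) {
            xSemaphoreTake(xTimerSem, portMAX_DELAY);
            /* the non-trivial work runs here: preemptible, prioritized, debuggable */
        }
    }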
One of the hairier real-time tasks I had to implement was a software-defined pulse-per-second (PPS) signal. An embedded system needed to generate a consistent PPS edge even during transient GPS outages, with the time reference coming from multiple GPS receivers. I ended up waking a task a few milliseconds before the edge was due using low-precision sleep calls, then configuring a timer peripheral within a short critical section to generate the PPS edge. This allowed for sub-microsecond accuracy and precision without any complicated bespoke interrupt handlers.
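Roughly, the shape of it was as below; this is a heavily simplified sketch and every helper name here is hypothetical, since the real code depends on the OS and the timer peripheral involved:

    #include <stdint.h>

    /* Hypothetical platform helpers. */
    extern uint64_t next_pps_edge_ns(void);               /* next edge per the GPS-derived timebase */
    extern void sleep_until_ns(uint64_t t);               /* coarse, OS-tick-resolution sleep */
    extern void enter_critical(void), exit_critical(void);
    extern void timer_arm_oneshot_output_at(uint64_t t);  /* pin toggles in hardware at time t */

    static void pps_task(void)
    {
        for (;;) {
            uint64_t edge_ns = next_pps_edge_ns();
            sleep_until_ns(edge_ns - 2000000u);        /* wake ~2 ms before the edge */
            enter_critical();                          /* short critical section */
            timer_arm_oneshot_output_at(edge_ns);      /* the peripheral generates the exact edge */
            exit_critical();
        }
    }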
You are right, as long as the interrupt handling is that simple.
However, I cannot remember how many years have passed since I last had the opportunity to write something like that.
Nowadays even the simplest microcontrollers, costing a fraction of a dollar, include DMA controllers or hardware FIFO queues, so for most I/O interfaces it does not make sense to handle one interrupt request per transferred byte.
So you have fewer interrupt requests, but with more complex handling, which should be executed outside the interrupt context.
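Concretely, with a DMA-capable UART you get one completion interrupt per buffer instead of one per byte, and that handler can simply flag the data for processing outside interrupt context. A sketch using STM32 HAL names (the part family, handle name, and buffer size are assumptions; any vendor's equivalent looks much the same):

    #include "stm32f4xx_hal.h"            /* assumed part family */
    #include <stdint.h>

    #define RX_LEN 64
    static uint8_t rx_buf[RX_LEN];
    static volatile int rx_ready;

    extern UART_HandleTypeDef huart2;     /* initialized elsewhere by the usual setup code */

    void start_rx(void)
    {
        HAL_UART_Receive_DMA(&huart2, rx_buf, RX_LEN);   /* hardware fills the whole buffer */
    }

    /* Called from interrupt context once the buffer is full. */
    void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
    {
        if (huart == &huart2)
            rx_ready = 1;                 /* just flag it; parse in the main loop or a task */
    }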
There are microcontrollers that cost a penny. Admittedly, even some of those have DMA. But it is wholly legitimate for a main program to be a two-instruction loop, "a: sleep; jump a", with all the work done in interrupt context. That is not to claim it is better; rather, in a microcontroller, whatever works, works.
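In C on a Cortex-M that loop is essentially the following (CMSIS assumed; __WFI() comes from the core header):

    int main(void)
    {
        /* ...configure clocks, peripherals and interrupts... */
        for (;;)
            __WFI();    /* "a: sleep; jump a" -- every wakeup runs an ISR, then back to sleep */
    }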
Indeed. It doesn't really matter as long as the firmware reliably does what it needs to do. A lot of MCUs have only one CPU, so interrupts are one of the features that allow multitasking in a predictable way. If you only have one thing to do, then go ahead and stick it in a main loop. If you only have two things to do, then spending a bunch of CPU cycles in an interrupt handler might not be a problem at all. If you're working on a sufficiently complex embedded system with a bunch of competing real-time goals, you learn to be very careful about what your CPU is doing at any given time.
Quite often, "inefficient" interrupt-based or bit-banged peripheral drivers are the right answer, even if the hardware supports DMA. It all depends on the complexity of the application and how much time and money you have.
I once needed to generate a good 60 Hz sine wave from Linux via a DAC. All it did was copy a sample from a table to the DAC every degree [so every 1/(60*360) seconds].
I used the PREEMPT_RT patch and a timer interrupt whose (non-threaded) handler produced the sample on the DAC. It worked great, but if it had needed to be more complicated I wouldn't have done the whole job in the handler.
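Not the in-kernel handler described above, but a user-space sketch of the same table-driven idea under PREEMPT_RT, pacing the writes with absolute-deadline sleeps; write_dac() is a stub standing in for whatever the real DAC interface is, and the 12-bit range is an assumption:

    #define _GNU_SOURCE
    #include <math.h>
    #include <stdint.h>
    #include <time.h>

    #define STEPS     360
    #define PERIOD_NS (1000000000L / (60L * STEPS))   /* 1/(60*360) s ~= 46.3 us */

    static void write_dac(uint16_t sample) { (void)sample; /* stub: real DAC write goes here */ }

    int main(void)
    {
        uint16_t table[STEPS];
        for (int i = 0; i < STEPS; i++)                /* one sample per degree, 12-bit mid-rail */
            table[i] = (uint16_t)(2048 + 2047 * sin(2 * M_PI * i / STEPS));

        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);
        for (int i = 0;; i = (i + 1) % STEPS) {
            write_dac(table[i]);
            next.tv_nsec += PERIOD_NS;
            if (next.tv_nsec >= 1000000000L) { next.tv_nsec -= 1000000000L; next.tv_sec++; }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
    }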
x86 APIC/TSC is actually very well specified. As is older stuff like HPET. Even the original IBM PC PIT had a clean datasheet and documented board integration.
RISC-V and the OS-A 22 platform differ from x86 and the IBM PC platform in that they have put effort into ensuring these things are standardized from the very start.