I suspect that the emphasis is an artifact of when the Dragon Book was written - getting a decent parser was rather more of a challenge back then, and heavily optimising compilers were a lot less common.
Not just that, but the theories that drive modern compilers, like graph coloring register allocation and static single assignment form, weren't conceived until the mid 1980s (1984 and 1986, respectively), and better implementations of those theories weren't written about until the 90s. Linear scan register allocation was first described in 1999.
You can compare two books:
Engineering a Compiler: VAX-11 Code Generation and Optimization, published in 1982, covers the design and implementation of the PL/I compiler for the DEC VAX-11.
Bob Morgan's Building an Optimizing Compiler, published in 1997, discusses techniques that are a lot closer to how LLVM works today than those in the 1982 book.
'Tis one of those things, I guess.