Do you work on systems that handle this? I'd like to know if any still exist.
In modern systems I'm familiar with, malloc only reports failure on bogus inputs (like passing -1, which wraps to SIZE_MAX) or on address space exhaustion. Your process is likely to be killed before it exhausts its address space (think iOS OOM handling, or Linux overcommit), especially on 64-bit. So checking for allocation success just isn't that useful any more.
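For what it's worth, here's a minimal sketch of the distinction I mean, assuming a 64-bit Linux/glibc system with overcommit on. A bogus request fails immediately; a merely huge one usually "succeeds" and the failure, if any, shows up later as an OOM kill when pages are touched. The sizes are just illustrative.

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main(void) {
    /* A "bogus" request: (size_t)-1 is SIZE_MAX, which no allocator can
       satisfy, so malloc returns NULL right away. */
    void *p = malloc((size_t)-1);
    printf("malloc(SIZE_MAX) -> %p\n", p);

    /* A merely huge request (1 GiB) will typically succeed on an
       overcommitting 64-bit system even if that much RAM isn't free.
       The failure, if any, arrives later as an OOM kill when the pages
       are actually written to. */
    void *q = malloc((size_t)1 << 30);
    printf("malloc(1 GiB)    -> %p\n", q);

    free(q);
    return 0;
}
```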
Another failure mode is exhausting kernel data-structure space for things like page mappings. I've seen this recently on AIX when allocating lots of memory that alternates mprotect permissions.
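A Linux-flavored sketch of the same idea (the AIX details differ): alternating protections splits one big mapping into one VMA per page, and the per-process mapping count is capped by vm.max_map_count (about 65530 by default), so mprotect eventually fails with ENOMEM even though plenty of memory is free. The page count below is arbitrary, just enough to exceed the default cap.

```c
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    long page = sysconf(_SC_PAGESIZE);
    size_t npages = 200000;   /* enough VMAs to exceed the default cap */
    char *base = mmap(NULL, npages * page, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    /* Change protection on every other page: each call splits the mapping,
       so the kernel ends up tracking ~npages separate VMAs. */
    for (size_t i = 0; i < npages; i += 2) {
        if (mprotect(base + i * page, page, PROT_READ) != 0) {
            fprintf(stderr, "mprotect failed at page %zu: %s\n",
                    i, strerror(errno));
            return 1;
        }
    }
    puts("never hit the mapping limit");
    return 0;
}
```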
I worked on an embedded system that aggressively cached images in memory dedicated to the GPU. It wasn't uncommon to hit GL texture allocation errors, and on top of that we needed to decompress images packed in various formats into whatever format the GPU supported (typically PNG to RGBA32). In low-memory situations it also wasn't uncommon to lack enough contiguous memory for that decompression, in which case malloc would fail.
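A hypothetical sketch of what that allocation check looks like; the function name and the overall shape are mine, not the actual system's, and the PNG decode step itself is elided.

```c
#include <stdint.h>
#include <stdlib.h>

/* Allocate the contiguous RGBA32 destination buffer for a decoded image,
   failing gracefully instead of crashing. Whatever PNG decoder the system
   uses would then write into the returned buffer. */
uint8_t *alloc_rgba32(uint32_t width, uint32_t height) {
    /* 4 bytes per pixel; guard against size_t overflow on small targets. */
    if (width != 0 && (size_t)height > SIZE_MAX / 4 / (size_t)width)
        return NULL;

    uint8_t *buf = malloc((size_t)width * height * 4);
    /* NULL here is exactly the "not enough contiguous memory" case above.
       The caller can evict cached textures and retry, or skip this image. */
    return buf;
}
```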
Gotta love putting forth every effort in software to keep the BOM down :)
As @jjnoakes points out, you'll get malloc()==NULL if your ulimits are set. For a long-running program you definitely want to have a ulimit that will kick in before the OOM killer does.
Even in the absence of the OOM killer (i.e. in the old days) you had to do this; otherwise the machine might swap itself unresponsive for ages before you ever got malloc()==NULL.
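In case it's useful, a minimal sketch of setting that cap from inside the process (the programmatic equivalent of `ulimit -v`), so malloc starts returning NULL well before the OOM killer or the swap death spiral gets involved. The 2 GiB cap is just an example value, and this assumes Linux semantics for RLIMIT_AS.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void) {
    /* Cap total virtual address space at 2 GiB -- the in-process
       equivalent of `ulimit -v 2097152`. Allocations beyond this fail
       with NULL instead of triggering the OOM killer or heavy swapping. */
    struct rlimit rl = { .rlim_cur = 2UL << 30, .rlim_max = 2UL << 30 };
    if (setrlimit(RLIMIT_AS, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }

    /* This 3 GiB request now exceeds the limit and fails cleanly. */
    void *p = malloc((size_t)3 << 30);
    if (p == NULL)
        puts("malloc returned NULL -- shed caches, reject the request, etc.");
    free(p);
    return 0;
}
```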