
I am not surprised that the cost of context switching due to I/O readiness can often be roughly equal between async tasks and kernel threads. Normal blocking I/O can be surprisingly efficient because of various factors, such as a reduced need for system calls.

Think about it this way: if a user-space thread wakes up due to I/O readiness, then the underlying kernel thread woke up from epoll_wait() or something similar. With blocking I/O, you call read(), and the kernel wakes up your thread when the read() completes. With non-blocking I/O, you call read(), get EAGAIN, call epoll_wait(), the kernel wakes up your thread when data is ready, and then you call read() a second time.
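
To make the comparison concrete, here is a rough C sketch of the two paths (Linux-specific, error handling trimmed; for the non-blocking path, fd is assumed to already be set O_NONBLOCK and registered with the epoll instance epfd):

    #include <errno.h>
    #include <sys/epoll.h>
    #include <unistd.h>

    /* Blocking path: one syscall per wakeup. The kernel parks the thread
       inside read() and wakes it once the data has been copied in. */
    ssize_t read_blocking(int fd, char *buf, size_t len)
    {
        return read(fd, buf, len);
    }

    /* Non-blocking path: read() fails with EAGAIN, the thread parks in
       epoll_wait() instead, and read() is issued a second time after the
       readiness wakeup. */
    ssize_t read_nonblocking(int epfd, int fd, char *buf, size_t len)
    {
        ssize_t n = read(fd, buf, len);        /* first attempt */
        if (n >= 0 || errno != EAGAIN)
            return n;

        struct epoll_event ev;
        if (epoll_wait(epfd, &ev, 1, -1) < 0)  /* blocking wait for readiness */
            return -1;

        return read(fd, buf, len);             /* second read() after wakeup */
    }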

In both scenarios, you make a blocking system call and the kernel wakes up the thread later.

Of course, there are scenarios where epoll_wait() returns multiple events at once, which reduces the number of context switches. But the general result is that it’s not always easy to beat blocking I/O and kernel threads.
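
A rough sketch of that batching effect, assuming an event loop that owns the epoll fd; handle_ready() here is a hypothetical per-fd handler, not a real API:

    #include <sys/epoll.h>

    #define MAX_EVENTS 64

    /* Hypothetical per-fd handler; stands in for application work. */
    static void handle_ready(int fd) { (void)fd; }

    /* One pass of an event loop: a single epoll_wait() wakeup can hand back
       up to MAX_EVENTS ready fds, amortizing the context switch across them. */
    void event_loop_once(int epfd)
    {
        struct epoll_event events[MAX_EVENTS];
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);

        for (int i = 0; i < n; i++)
            handle_ready(events[i].data.fd);
    }

With many fds ready per wakeup, the per-event cost drops; with only one fd ready at a time, you are back to roughly one wakeup per read, which is the blocking-I/O case.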

