Hacker News
Recovering Redis Data with GDB (bigeng.io)
22 points by webhat on May 22, 2015 | hide | past | favorite | 3 comments


Since you can call target process functions from GDB, it would be interesting to have some form of "emergency dump" function you could call, to get out the data even though the process as a whole has hung.

This would have the benefit of being implemented in the target process itself, so it could of course have full knowledge of the data structures and so on. It could potentially be way better than a raw (manual) memory-level inspection.

That would of course be quite hard to implement in a general (and large) server-type process, but it might be useful even if less than perfect.

I/O might become a problem, so perhaps just format the data in RAM, then let the debugger read it out. The RAM itself might be pre-allocated to ensure that it exists at the point of failure (when heap allocations likely won't work very well).

Has anyone done this?

EDIT: On second thought, I guess it becomes unwieldy for real-world use cases due to the volume of data. I wouldn't want to pre-allocate multiple GBs of RAM to use an export buffer for some database server, of course. Paging ... Hm.


>> Since you can call target process functions from GDB, it would be interesting to have some form of "emergency dump" function you could call, to get out the data even though the process as a whole has hung.

I have seen these before in some server products. Not necessarily printing out everything you need to know, but giving some important internal stats, etc.: functions not called from anywhere in the code, which make the process output state when invoked from a debugger.

>> This would have the benefit of being implemented in the target process itself, so it could of course have full knowledge of the data structures and so on. It could potentially be way better than a raw (manual) memory-level inspection.

So it is possible to have a custom debugger that knows everything about the process; I have worked on products that do this.

You write the program in such a way that you avoid dynamic allocation, almost everything is implemented in predefined global buffers of one sort or another, and then the debugger knows where everything is at compile time. This may seem inflexible and painful to code, but if you are in a highly resource-constrained environment in the first place then it has some advantages.

You can then use the debugger to watch and modify absolutely everything, at runtime or from a memory dump. It can be quite powerful.


As I recall,

kill -SIGQUIT process_id

will force a core dump (with the usual caveats), at which point you can use GDB to walk the various thread stacks as they were at that moment.





