In our case, we used Postgres. Our event volume was quite small, and we needed strict consistency (i.e. all participants in the auction see the same state). So we stored the log of events for an auction lot as a JSON blob. A new command was processed by taking a row-level lock (SELECT FOR UPDATE) on the lot's row, validating the command, and then persisting the events. Then we'd broadcast the new derived state to all interested parties.
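For the curious, a minimal sketch of what that transaction might look like, assuming a hypothetical `lots` table with a JSONB `events` column; the table name, `validate`, `derive_state`, and `broadcast` are all illustrative, not from our actual system:

```python
import json
import psycopg2

def handle_command(conn, lot_id, command):
    with conn:  # transaction: commit on success, roll back on error
        with conn.cursor() as cur:
            # Row-level lock serializes concurrent commands for this lot.
            cur.execute(
                "SELECT events FROM lots WHERE id = %s FOR UPDATE",
                (lot_id,),
            )
            events = cur.fetchone()[0]  # full event log, decoded from JSONB

            # Validate against current state; raises if the command is invalid.
            new_events = validate(events, command)

            cur.execute(
                "UPDATE lots SET events = %s WHERE id = %s",
                (json.dumps(events + new_events), lot_id),
            )
    # Lock is released at commit; now tell everyone watching the auction.
    broadcast(lot_id, derive_state(events + new_events))
```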
All command processing and state queries required us to read and process the whole event log. But this was fairly cheap, because we're talking maybe a couple dozen events per lot. To optimize, we might have considered serializing core aspects of the derived state, to use as a snapshot. But this wasn't necessary.
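Deriving state is just a left fold over the log, which is why a couple dozen events is nothing. Something along these lines, with hypothetical event shapes:

```python
from functools import reduce

def apply(state, event):
    # Fold one event into the derived state.
    if event["type"] == "bid_placed":
        if event["amount"] > state["high_bid"]:
            return {**state, "high_bid": event["amount"],
                    "high_bidder": event["bidder"]}
        return state
    if event["type"] == "lot_closed":
        return {**state, "open": False}
    return state  # ignore event types we don't know about

def derive_state(events):
    initial = {"high_bid": 0, "high_bidder": None, "open": True}
    return reduce(apply, events, initial)
```

A snapshot would just be a serialized intermediate `state` plus the index of the last event folded in, but at this volume it never paid for itself.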
Batch looks pretty cool! I'll keep that in mind next time I'm considering reinventing the wheel :)