If you have WAL mode enabled, then the automatic locks are table-level, so one process can be updating one table while another works on a different one. Also, you only need to enable WAL mode on the DB once (PRAGMA journal_mode=wal); it "sticks" for every subsequent connection.
In my application that uses SQLite (Snebu backup), as data comes in as a TAR-format stream, one process extracts the data and metadata, then serializes the metadata to another process that owns the DB connection. That process dumps the metadata into a temp table, then every 10 seconds "flushes" it to the various permanent tables it needs to go to. This way I can easily run multiple backups simultaneously, since each process spends a relatively small amount of time flushing metadata to the permanent tables and the greater part of its time compressing and writing backup data to the disk vault directory.
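The staging pattern itself is simple. Here's a minimal sketch with the SQLite C API ("stage" and "files" are made-up names for illustration, not Snebu's actual schema, and error handling is omitted):

```c
#include <sqlite3.h>

/* Move everything staged so far into the permanent table in one
 * short write transaction, then clear the staging area. */
static void flush_stage(sqlite3 *db)
{
    sqlite3_exec(db, "BEGIN IMMEDIATE", NULL, NULL, NULL);
    sqlite3_exec(db, "INSERT INTO files SELECT * FROM stage", NULL, NULL, NULL);
    sqlite3_exec(db, "DELETE FROM stage", NULL, NULL, NULL);
    sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
}

int main(void)
{
    sqlite3 *db;
    sqlite3_open("backup.db", &db);
    sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS files(path TEXT, size INT)",
                 NULL, NULL, NULL);
    /* The temp table lives in the connection-private "temp" schema. */
    sqlite3_exec(db, "CREATE TEMP TABLE stage(path TEXT, size INT)",
                 NULL, NULL, NULL);
    sqlite3_exec(db, "INSERT INTO stage VALUES('/etc/hosts', 221)",
                 NULL, NULL, NULL);
    flush_stage(db);   /* in my case this runs on a 10-second timer */
    sqlite3_close(db);
    return 0;
}
```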
I've been working with this for the past 8 years or so and have picked up a few tricks for keeping as much as possible batched up in transactions while also keeping the transaction times short relative to other operations. So far it seems to work out fairly well.
Note that in addition to journal_mode=wal, you need to have a busy handler defined that retries indefinitely, with a 250 ms delay between each retry.
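Roughly like this with the C API (a sketch; the 250 ms figure is just the delay I settled on, nothing in SQLite requires it):

```c
#include <sqlite3.h>
#include <stddef.h>
#include <unistd.h>

/* Returning nonzero tells SQLite to retry the locked operation;
 * returning 0 makes it give up and return SQLITE_BUSY. */
static int busy_cb(void *arg, int prior_attempts)
{
    (void)arg; (void)prior_attempts;
    usleep(250 * 1000);   /* 250 ms between retries */
    return 1;             /* never give up */
}

int main(void)
{
    sqlite3 *db;
    sqlite3_open("backup.db", &db);
    sqlite3_exec(db, "PRAGMA journal_mode=WAL", NULL, NULL, NULL);
    sqlite3_busy_handler(db, busy_cb, NULL);
    /* ... writes now block-and-retry instead of failing with SQLITE_BUSY ... */
    sqlite3_close(db);
    return 0;
}
```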
Edit: On further review of the docs, I'm not sure WAL mode enables table-level locking. It may be that temp tables are part of a separate schema (or are otherwise separate from the main DB), which makes sense, as temp tables are only visible to the connection that owns them. So a temp table can be locked in a transaction while the rest of the DB remains writable.
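You can see that separation directly: temp objects land in a per-connection "temp" schema rather than in "main". A quick check against a throwaway database:

```c
#include <sqlite3.h>
#include <stdio.h>

/* Print one column from each row of a schema-listing query. */
static int print_row(void *arg, int ncol, char **val, char **col)
{
    (void)arg; (void)ncol; (void)col;
    printf("%s\n", val[0] ? val[0] : "(null)");
    return 0;
}

int main(void)
{
    sqlite3 *db;
    sqlite3_open(":memory:", &db);
    sqlite3_exec(db, "CREATE TABLE t1(x)", NULL, NULL, NULL);
    sqlite3_exec(db, "CREATE TEMP TABLE t2(x)", NULL, NULL, NULL);
    /* The temp schema has its own catalog, separate from main's. */
    sqlite3_exec(db, "SELECT name FROM temp.sqlite_master",
                 print_row, NULL, NULL);   /* prints: t2 */
    sqlite3_close(db);
    return 0;
}
```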
It seems the locking is actually page-level in "wal" and "wal2" modes, at least with the BEGIN CONCURRENT enhancement:
> Usually, SQLite allows at most one writer to proceed concurrently. The BEGIN CONCURRENT enhancement allows multiple writers to process write transactions simultaneously if the database is in "wal" or "wal2" mode, although the system still serializes COMMIT commands.
> When a write-transaction is opened with "BEGIN CONCURRENT", actually locking the database is deferred until a COMMIT is executed. This means that any number of transactions started with BEGIN CONCURRENT may proceed concurrently. The system uses optimistic page-level-locking to prevent conflicting concurrent transactions from being committed.
> When a BEGIN CONCURRENT transaction is committed, the system checks whether or not any of the database pages that the transaction has read have been modified since the BEGIN CONCURRENT was opened. In other words - it asks if the transaction being committed operates on a different set of data than all other concurrently executing transactions. If the answer is "yes, this transaction did not read or modify any data modified by any concurrent transaction", then the transaction is committed as normal. Otherwise, if the transaction does conflict, it cannot be committed and an SQLITE_BUSY_SNAPSHOT error is returned. At this point, all the client can do is ROLLBACK the transaction.
The page also mentions:
> The key to maximizing concurrency using BEGIN CONCURRENT is to ensure that there are a large number of non-conflicting transactions. In SQLite, each table and each index is stored as a separate b-tree, each of which is distributed over a discrete set of database pages. This means that:
> Two transactions that write to different sets of tables never conflict
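So the commit/retry loop looks roughly like this (a sketch assuming an SQLite build from the begin-concurrent branch; stock SQLite does not accept BEGIN CONCURRENT, and error handling of the individual statements is omitted):

```c
#include <sqlite3.h>
#include <unistd.h>

/* Run one optimistic write transaction, retrying from scratch on
 * snapshot conflicts, per the begin-concurrent docs. */
static int write_with_retry(sqlite3 *db, const char *sql)
{
    /* SQLITE_BUSY_SNAPSHOT is an extended result code, so make
     * sure sqlite3_exec() reports extended codes. */
    sqlite3_extended_result_codes(db, 1);

    for (;;) {
        sqlite3_exec(db, "BEGIN CONCURRENT", NULL, NULL, NULL);
        sqlite3_exec(db, sql, NULL, NULL, NULL);
        int rc = sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
        if (rc == SQLITE_OK)
            return SQLITE_OK;
        /* Pages this transaction read were modified by a concurrent
         * writer: all we can do is roll back and run it again. */
        sqlite3_exec(db, "ROLLBACK", NULL, NULL, NULL);
        if (rc != SQLITE_BUSY_SNAPSHOT)
            return rc;
        usleep(1000);   /* small backoff before retrying */
    }
}
```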
+1. I was not aware of the "stickiness" of the wal pragma. Thank you for the tip; I will be defaulting to this going forward. For a quick confirmation, something along these lines shows the mode persisting across connections (a sketch of the check; "test.db" is just a scratch file):
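```c
#include <sqlite3.h>
#include <stdio.h>

/* Print the result of a journal_mode pragma query. */
static int show(void *arg, int ncol, char **val, char **col)
{
    (void)arg; (void)ncol; (void)col;
    printf("journal_mode=%s\n", val[0]);
    return 0;
}

int main(void)
{
    sqlite3 *db;

    /* First connection: switch the database file to WAL. */
    sqlite3_open("test.db", &db);
    sqlite3_exec(db, "PRAGMA journal_mode=WAL", show, NULL, NULL);  /* wal */
    sqlite3_close(db);

    /* Fresh connection: no pragma issued, but the mode stuck. */
    sqlite3_open("test.db", &db);
    sqlite3_exec(db, "PRAGMA journal_mode", show, NULL, NULL);      /* wal */
    sqlite3_close(db);
    return 0;
}
```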
The code is on github.com/derekp7/snebu if you want to take a look. The next task on my list is to write up developer / contributor documentation on how the whole thing is put together along with various tricks (and simplified demonstration code).
But specifically, look in "snebu-main.c": that is where the opendb function is (so you can see the pragma statements), and there is a busy_retry function that gets referenced (all it does is sleep for 0.1 seconds). I believe you don't need the busy_retry function if you use the built-in busy handler, but I'm not really sure and don't want to take a chance on breaking working code.
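For reference, the built-in handler is a one-liner; SQLite then sleeps and retries internally until the given total delay has elapsed, after which it still returns SQLITE_BUSY (the 30-second figure below is an arbitrary choice for illustration):

```c
#include <sqlite3.h>

/* Built-in alternative to a hand-rolled busy handler.  Note it
 * replaces, rather than wraps, any handler previously set with
 * sqlite3_busy_handler(), and it gives up after the timeout. */
void set_builtin_handler(sqlite3 *db)
{
    sqlite3_busy_timeout(db, 30 * 1000);   /* keep retrying for up to 30 s */
}
```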
For the temp tables, look in snebu-submitfiles.c -- the function at the top handles the DB operations, one toward the bottom handles the non-DB tar-file consumption, and there is a circular queue in the middle that handles buffering, so data ingestion can keep going while the data is getting flushed (these three run as separate processes). I should learn threads, as there may be more flexibility in that, but I'm not comfortable enough with thread programming yet.
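A stripped-down illustration of that middle piece, a fixed-size ring between a producer and a consumer (a generic sketch, not Snebu's actual queue, which coordinates separate processes):

```c
#include <stdio.h>

/* Fixed-size circular queue of metadata lines.  The producer
 * (tar reader) appends while the consumer (DB flusher) drains;
 * head and tail indices wrap around the buffer. */
#define QSIZE 8

struct ring {
    char slot[QSIZE][256];
    int head, tail;          /* head: next write, tail: next read */
};

static int ring_put(struct ring *r, const char *s)
{
    int next = (r->head + 1) % QSIZE;
    if (next == r->tail)
        return -1;           /* full: producer must wait for a flush */
    snprintf(r->slot[r->head], sizeof r->slot[0], "%s", s);
    r->head = next;
    return 0;
}

static int ring_get(struct ring *r, char *out, size_t n)
{
    if (r->tail == r->head)
        return -1;           /* empty: nothing staged yet */
    snprintf(out, n, "%s", r->slot[r->tail]);
    r->tail = (r->tail + 1) % QSIZE;
    return 0;
}

int main(void)
{
    struct ring r = { .head = 0, .tail = 0 };
    char line[256];

    ring_put(&r, "/etc/hosts 221");      /* ingestion keeps going... */
    ring_put(&r, "/etc/passwd 1024");
    while (ring_get(&r, line, sizeof line) == 0)
        printf("flush: %s\n", line);     /* ...while the flusher drains */
    return 0;
}
```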