hypothetically, the requirements were to receive data (at an example rate of 20,000 packets per second), execute some tasks (say, 10 tasks per packet) and report some information. however, the tasks needed access to data which had to be updated on every incoming packet. so we were looking at, hypothetically, a sustained and simultaneous load of 200,000 database reads, 200,000 writes and 200,000 deletes per second
so when benchmark tests of Py-LMDB showed a random-access write speed - bearing in mind that this is python - in excess of 250,000 records per second, and random-access reads of almost 900,000 records per second from a single process, it was pretty obvious that LMDB was the right tool for the job.
to summarise: Py-LMDB has been successfully deployed as the core back-end in quite literally the most complex programming task i have ever done to date.