view scripts/dump_dbm_sessions_db.py @ 6610:db3f0ba75b4a
Change checkpoint_data and restore_connection_on_error to subtransaction
checkpoint_data and restore_connection_on_error used to commit() and
rollback() the db connection. This causes additional I/O and load.
Changed them to use 'SAVEPOINT name' and 'ROLLBACK TO name' to get a
faster method for handling errors within a transaction.
One thing to note is that postgresql (unlike the SQL standard) doesn't
overwrite an older savepoint with the same name. It keeps all
savepoints but only rolls back to the newest one with a given name.
This could be a resource issue. I left a commented-out release
statement in case somebody runs into an issue due to too many
savepoints. I expect it to slow down the import but....
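The subtransaction pattern described above can be sketched in Python. This is not roundup's actual code: the function bodies and signatures are illustrative, and sqlite3 is used here only because it ships with Python and also supports the SAVEPOINT / ROLLBACK TO syntax.

```python
import sqlite3

# Autocommit mode so we control the transaction explicitly.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, val TEXT)")

def checkpoint_data(cursor, name="rollback_point"):
    # Instead of conn.commit(), mark a savepoint inside the open
    # transaction -- cheaper than a full commit.
    cursor.execute("SAVEPOINT %s" % name)

def restore_connection_on_error(cursor, name="rollback_point"):
    # Instead of conn.rollback(), undo only the work done since the
    # savepoint; the outer transaction stays open.
    cursor.execute("ROLLBACK TO %s" % name)
    # Optional release so reusing the same name does not accumulate
    # savepoints (see the postgresql note above):
    # cursor.execute("RELEASE %s" % name)

cur = conn.cursor()
cur.execute("BEGIN")
cur.execute("INSERT INTO items (val) VALUES ('kept')")
checkpoint_data(cur)
cur.execute("INSERT INTO items (val) VALUES ('discarded')")
restore_connection_on_error(cur)
cur.execute("COMMIT")

print([row[0] for row in conn.execute("SELECT val FROM items")])
# prints ['kept']
```

The second insert is rolled back to the savepoint while the first survives the final commit, without ever rolling back or committing the whole connection mid-transaction.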
| author | John Rouillard <rouilj@ieee.org> |
|---|---|
| date | Sat, 29 Jan 2022 11:29:36 -0500 |
| parents | 61481d7bbb07 |
| children | 1188bb423f92 |
```python
#! /usr/bin/env python3
"""Usage: dump_dbm_sessions_db.py [filename]

Simple script to dump the otks and sessions dbm databases.
Dumps sessions db in current directory if no argument is given.

Dump format:

    key: <timestamp> data

where <timestamp> is the human readable __timestamp decoded
from the data object.
"""
import dbm
import marshal
import sys
from datetime import datetime

try:
    file = sys.argv[1]
except IndexError:
    file = "sessions"

try:
    db = dbm.open(file)
except Exception:
    print("Unable to open database: %s" % file)
    exit(1)

k = db.firstkey()
while k is not None:
    d = marshal.loads(db[k])
    t = datetime.fromtimestamp(d['__timestamp'])
    print("%s: %s %s" % (k, t, d))
    k = db.nextkey(k)
```
