comparison test/db_test_base.py @ 7668:5b41018617f2
fix: out of memory error when importing under postgresql
If you try importing more than 20k items under postgresql you can run
out of memory:
psycopg2.errors.OutOfMemory: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
Tuning the server's memory settings may help, but that is unknown at
this point.
This checkin forces a commit to the postgres database after 10,000
rows have been added. This clears out the savepoints for each row and
starts a new transaction.
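A minimal sketch of the mechanism described above, with illustrative
names rather than the actual back_postgresql.py code: checkpoint_data()
is called once per imported row, counts savepoints, and forces a commit
once the limit is reached, which releases all savepoints and starts a
fresh transaction.

```python
class CheckpointingBackend:
    """Illustrative stand-in for the postgres backend; the names and
    structure here are assumptions, not the real roundup code."""

    # class-level attributes: savepoints issued so far and the
    # threshold at which a commit is forced (default 10,000)
    savepoint_count = 0
    savepoint_limit = 10000

    def __init__(self):
        self.commits = 0    # how many forced commits have happened

    def commit(self):
        # a real COMMIT releases every savepoint in the transaction
        # and starts a fresh one, freeing the server's lock table
        self.commits += 1
        self.savepoint_count = 0

    def checkpoint_data(self):
        # called once per imported row; each row adds a savepoint
        self.savepoint_count += 1
        if self.savepoint_count >= self.savepoint_limit:
            self.commit()
```

With the limit lowered to 5 (as the test below does), importing 12 rows
would trigger two commits, at rows 5 and 10.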
back_postgresql.py:
Implement a commit mechanism in checkpoint_data(). Add two class-level
attributes for tracking the number of savepoints and the limit at
which the commit should happen.
roundup_admin.py:
Implement the savepoint_limit pragma and dynamically create the config
item RDBMS_SAVEPOINT_LIMIT used by checkpoint_data().
Also fix the formatting of descriptions when using pragma list in
verbose mode.
admin_guide.txt, upgrading.txt:
Document change and use of pragma savepoint_limit in roundup-admin
for changing the default of 10,000.
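For reference, raising the limit interactively might look like the
following roundup-admin session; the prompt and exact output are
assumptions, only the pragma name comes from this change:

```
$ roundup-admin -i /path/to/tracker
roundup> pragma savepoint_limit=20000
roundup> import /path/to/export
```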
test/db_test_base.py:
add some more asserts. In the existing testAdminImportExport, set the
savepoint limit to 5 to exercise the setting method and so that the
commit code is run by existing tests. This provides coverage, but
does not actually verify that the commit is done every 5 savepoints
8-(. The every-5-savepoints behaviour was verified manually
using a pdb breakpoint just before the commit.
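The manual pdb check mentioned above could in principle be automated by
spying on commit() with unittest.mock; a sketch under the assumption
that the backend exposes commit() and checkpoint_data() as in this tiny
stand-in class (not the real roundup API):

```python
from unittest import mock

class FakeBackend:
    """Tiny stand-in, not the real backend: commits after every
    savepoint_limit calls to checkpoint_data()."""
    savepoint_limit = 5
    savepoint_count = 0

    def commit(self):
        self.savepoint_count = 0

    def checkpoint_data(self):
        self.savepoint_count += 1
        if self.savepoint_count >= self.savepoint_limit:
            self.commit()

db = FakeBackend()
# wrap commit() so the real behaviour is kept while calls are counted
with mock.patch.object(db, 'commit', wraps=db.commit) as spy:
    for _ in range(12):     # simulate importing 12 rows
        db.checkpoint_data()
assert spy.call_count == 2  # commits fired at rows 5 and 10
```

The wraps= argument keeps the original method running, so the savepoint
counter is still reset while the spy records each call.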
acknowledgements.txt:
Add a 2.4.0 section mentioning Norbert, as he has done a ton of
testing with much larger datasets than I can test with.
| author | John Rouillard <rouilj@ieee.org> |
|---|---|
| date | Thu, 19 Oct 2023 16:11:25 -0400 |
| parents | 027912a59f49 |
| children | 25a03f1a8159 |
| 7667:08e4399c3ae4 | 7668:5b41018617f2 |
|---|---|
| 3059 | 3059 |
| 3060 # This is needed, otherwise journals won't be there for anydbm | 3060 # This is needed, otherwise journals won't be there for anydbm |
| 3061 self.db.commit() | 3061 self.db.commit() |
| 3062 | 3062 |
| 3063 self.assertEqual(self.db.user.lookup("duplicate"), active_dupe_id) | 3063 self.assertEqual(self.db.user.lookup("duplicate"), active_dupe_id) |
| 3064 self.assertEqual(self.db.user.is_retired(retired_dupe_id), True) | |
| 3064 | 3065 |
| 3065 finally: | 3066 finally: |
| 3066 shutil.rmtree('_test_export') | 3067 shutil.rmtree('_test_export') |
| 3067 | 3068 |
| 3068 # compare with snapshot of the database | 3069 # compare with snapshot of the database |
| 3149 tool.db = self.db | 3150 tool.db = self.db |
| 3150 tool.verbose = False | 3151 tool.verbose = False |
| 3151 self.assertRaises(csv.Error, tool.do_import, ['_test_export']) | 3152 self.assertRaises(csv.Error, tool.do_import, ['_test_export']) |
| 3152 | 3153 |
| 3153 self.nukeAndCreate() | 3154 self.nukeAndCreate() |
| 3155 | |
| 3156 # make sure we have an empty db | |
| 3157 with self.assertRaises(IndexError) as e: | |
| 3158 # users 1 and 2 always are created on schema load. | |
| 3159 # so don't use them. | |
| 3160 self.db.user.getnode("5").values() | |
| 3161 | |
| 3154 self.db.config.CSV_FIELD_SIZE = 3200 | 3162 self.db.config.CSV_FIELD_SIZE = 3200 |
| 3155 tool = roundup.admin.AdminTool() | 3163 tool = roundup.admin.AdminTool() |
| 3156 tool.tracker_home = home | 3164 tool.tracker_home = home |
| 3157 tool.db = self.db | 3165 tool.db = self.db |
| 3166 # Force import code to commit when more than 5 | |
| 3167 # savepoints have been created. | |
| 3168 tool.settings['savepoint_limit'] = 5 | |
| 3158 tool.verbose = False | 3169 tool.verbose = False |
| 3159 tool.do_import(['_test_export']) | 3170 tool.do_import(['_test_export']) |
| 3171 | |
| 3172 # verify the data is loaded. | |
| 3173 self.db.user.getnode("5").values() | |
| 3160 finally: | 3174 finally: |
| 3161 roundup.admin.sys = sys | 3175 roundup.admin.sys = sys |
| 3162 shutil.rmtree('_test_export') | 3176 shutil.rmtree('_test_export') |
| 3163 | 3177 |
| 3164 # test props from args parsing | 3178 # test props from args parsing |
