comparison test/db_test_base.py @ 6433:c1d3fbcdbfbd

issue2551142 - Import of retired node with username after active node fails with unique constraint failure. More fixes needed for mysql and postgresql.

mysql: add a unique constraint on (keyvalue, __retired__) when creating a class in the database. On a schema change, if the class is changed, remove the unique constraint too. Upgrade the rdbms database version from 5 to 6 to add the constraint to all databases that were created at version 5 and so never got the unique constraint. Make no changes on version 5 databases upgraded from version 4; the upgrade process to 5 already added the constraint. Make no changes to the other backends (sqlite, postgres) during the upgrade from version 5 to 6.

postgres: handle the exception raised on a unique constraint violation. The exception invalidates the database connection, so the connection can't be used to recover from the exception. Added two new database methods:

  checkpoint_data - performs a db.commit under postgres; does nothing on other backends.
  restore_connection_on_error - does a db.rollback under postgres; does nothing on other backends.

With the rollback() done on the connection, the database connection can be used to fix up the import that failed on the unique constraint. This makes postgres slower, but without the commit after every imported object the rollback would delete all the entries made up to that point. Trying to figure out how to make the caller (do_import) batch and recover from this failure is beyond me. Also dismissed having the user preprocess the export csv file before importing; pushing that onto a user just seems wrong. Since import/export isn't done frequently, removing the surprise of a failing import and reducing load/frustration for the user seems worth it. The import can also be run in verbose mode, where it prints each row as it is processed, so it may take a while but the user gets feedback.

db_test_base.py: add a test for the upgrade from 5 to 6.
author John Rouillard <rouilj@ieee.org>
date Thu, 10 Jun 2021 12:52:05 -0400
parents 97a45bfa62a8
children 269f39e28d5c
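The mysql fix in this commit hinges on the composite unique constraint over (keyvalue, __retired__): active rows store __retired__ = 0 while retired rows store their own id, so a retired node can share a username with an active one but two active nodes cannot. A minimal sketch of that behaviour, using sqlite3 purely for illustration (the actual change is MySQL DDL inside Roundup's backend; table and column names here are simplified guesses):

```python
import sqlite3

# Illustrative stand-in for the (key, __retired__) composite unique
# index the commit adds. Active rows have __retired__ = 0; retired rows
# store their own id, so they never collide with each other or with the
# one permitted active row.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE _user (id INTEGER PRIMARY KEY,'
             ' _username TEXT, __retired__ INTEGER DEFAULT 0)')
conn.execute('CREATE UNIQUE INDEX _user_key_retired_idx'
             ' ON _user (_username, __retired__)')

# One active 'joe' ...
conn.execute("INSERT INTO _user VALUES (1, 'joe', 0)")
# ... coexists with a retired 'joe', because __retired__ differs.
conn.execute("INSERT INTO _user VALUES (2, 'joe', 2)")

try:
    # A second *active* 'joe' violates the constraint.
    conn.execute("INSERT INTO _user VALUES (3, 'joe', 0)")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False

print(duplicate_allowed)  # False
```

This is why importing a retired node after an active node with the same username failed before the postgres/mysql fixes: the raw insert arrives before the importer can mark the row retired, tripping the constraint.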
6432:97a45bfa62a8 6433:c1d3fbcdbfbd
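The testUpgrade_5_to_6 test in this diff exercises a version-gated upgrade performed by post_init(). A hypothetical, heavily simplified sketch of that control flow (function and index names here are illustrative, not Roundup's actual implementation):

```python
# Hypothetical sketch of the 5 -> 6 upgrade described in the commit
# message: only MySQL databases still at schema version 5 gain the
# missing (key, __retired__) unique index; sqlite and postgres just get
# the version bump, since they already acquired the constraint at
# version 5.
def upgrade_5_to_6(schema, dbtype, indexes):
    if schema['version'] == 5:
        if dbtype == 'mysql':
            # set.add() is idempotent, mirroring the "no change if the
            # index already exists" behaviour for databases upgraded
            # from version 4
            indexes.add('_user_key_retired_idx')
        schema['version'] = 6
    return schema, indexes

schema, indexes = upgrade_5_to_6({'version': 5}, 'mysql', set())
print(schema['version'], sorted(indexes))   # 6 ['_user_key_retired_idx']

schema2, indexes2 = upgrade_5_to_6({'version': 5}, 'sqlite', set())
print(schema2['version'], sorted(indexes2))  # 6 []
```

The test below checks the same thing from the outside: it drops the composite index, resets the version to 5, reruns post_init(), and asserts the index count is restored.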
@@ -239,10 +239,63 @@
     def inject_fixtures(self, caplog):
         self._caplog = caplog
 
     def testRefresh(self):
         self.db.refresh_database()
+
+    def testUpgrade_5_to_6(self):
+
+        if(self.db.dbtype in ['anydbm', 'memorydb']):
+            self.skipTest('No schema upgrade needed on non rdbms backends')
+
+        # load the database
+        self.db.issue.create(title="flebble frooz")
+        self.db.commit()
+
+        self.assertEqual(self.db.database_schema['version'], 6,
+                         "This test only runs for database version 6")
+        self.db.database_schema['version'] = 5
+        if self.db.dbtype == 'mysql':
+            # version 6 has 5 indexes
+            self.db.sql('show indexes from _user;')
+            self.assertEqual(5, len(self.db.cursor.fetchall()),
+                             "Database created with wrong number of indexes")
+
+            self.drop_key_retired_idx()
+
+            # after dropping (key.__retired__) composite index we have
+            # 3 index entries
+            self.db.sql('show indexes from _user;')
+            self.assertEqual(3, len(self.db.cursor.fetchall()))
+
+            # test upgrade adding index
+            self.db.post_init()
+
+            # they're back
+            self.db.sql('show indexes from _user;')
+            self.assertEqual(5, len(self.db.cursor.fetchall()))
+
+            # test a database already upgraded from 4 to 5
+            # so it has the index to enforce key uniqueness
+            self.db.database_schema['version'] = 5
+            self.db.post_init()
+
+            # they're still here.
+            self.db.sql('show indexes from _user;')
+            self.assertEqual(5, len(self.db.cursor.fetchall()))
+        else:
+            # this should be a no-op
+            # test upgrade
+            self.db.post_init()
+
+    def drop_key_retired_idx(self):
+        c = self.db.cursor
+        for cn, klass in self.db.classes.items():
+            if klass.key:
+                sql = '''drop index _%s_key_retired_idx on _%s''' % (cn, cn)
+                self.db.sql(sql)
 
     #
     # automatic properties (well, the two easy ones anyway)
     #
     def testCreatorProperty(self):
@@ -2899,16 +2952,28 @@
             self.db.setid(cn, str(maxid+1))
             klass.import_journals(journals[cn])
 
             if self.db.dbtype not in ['anydbm', 'memorydb']:
                 # no logs or fixup needed under anydbm
-                self.assertEqual(2, len(self._caplog.record_tuples))
+                # postgres requires commits and rollbacks
+                # as part of error recovery, so we get commit
+                # logging that we need to account for
+                if self.db.dbtype == 'postgres':
+                    log_count=24
+                    handle_msg_location=16
+                    # add two since rollback is logged
+                    success_msg_location = handle_msg_location+2
+                else:
+                    log_count=2
+                    handle_msg_location=0
+                    success_msg_location = handle_msg_location+1
+                self.assertEqual(log_count, len(self._caplog.record_tuples))
                 self.assertIn('Attempting to handle import exception for id 7:',
-                              self._caplog.record_tuples[0][2])
+                              self._caplog.record_tuples[handle_msg_location][2])
                 self.assertIn('Successfully handled import exception for id 7 '
                               'which conflicted with 6',
-                              self._caplog.record_tuples[1][2])
+                              self._caplog.record_tuples[success_msg_location][2])
 
             # This is needed, otherwise journals won't be there for anydbm
             self.db.commit()
         finally:
             shutil.rmtree('_test_export')
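The postgres recovery flow the commit message describes (commit before each imported object, so a rollback after a constraint violation discards only the failing row, then reuse the now-valid connection to fix it up) can be sketched as follows. The method names checkpoint_data and restore_connection_on_error come from the commit; everything else is a hypothetical stand-alone simulation with sqlite3 substituting for psycopg2, so only the control flow mirrors Roundup's actual importer:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE _user (id INTEGER PRIMARY KEY,'
             ' _username TEXT, __retired__ INTEGER DEFAULT 0)')
conn.execute('CREATE UNIQUE INDEX _user_key_retired_idx'
             ' ON _user (_username, __retired__)')

def checkpoint_data(db):
    # Under postgres this is a db.commit; a no-op on other backends.
    db.commit()

def restore_connection_on_error(db):
    # Under postgres this is a db.rollback, which makes the connection
    # usable again after the constraint violation; a no-op elsewhere.
    db.rollback()

# Row 3 is a retired import that conflicts with the active 'bob'.
rows = [(1, 'alice', 0), (2, 'bob', 0), (3, 'bob', 0)]
imported = []
for id_, name, retired in rows:
    # Commit what we have so far: without this, the recovery rollback
    # would also discard every previously imported row.
    checkpoint_data(conn)
    try:
        conn.execute('INSERT INTO _user VALUES (?, ?, ?)',
                     (id_, name, retired))
    except sqlite3.IntegrityError:
        restore_connection_on_error(conn)
        # Fix up the failing row on the restored connection: mark it
        # retired (its own id in __retired__) so the key no longer
        # collides, then retry.
        conn.execute('INSERT INTO _user VALUES (?, ?, ?)',
                     (id_, name, id_))
    imported.append(id_)
conn.commit()

print(imported)  # [1, 2, 3]
```

The per-object commit is the slowdown the commit message accepts: it trades import speed for the ability to recover in place instead of asking the user to preprocess the export csv.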

Roundup Issue Tracker: http://roundup-tracker.org/