diff test/db_test_base.py @ 6433:c1d3fbcdbfbd
issue2551142 - Import of retired node ... unique constraint failure.
Title: Import of retired node with username after active node fails
with unique constraint failure.
More fixes needed for mysql and postgresql.
mysql: add a unique constraint on (keyvalue, __retired__) when
creating a class in the database.
On schema change, if a class is changed, remove the unique
constraint too.
Upgrade the version of the rdbms database from 5 to 6 to add
the constraint to all version 5 databases that were created
at version 5 and so didn't get the unique constraint. Make no
changes to version 5 databases upgraded from version 4, since
the upgrade process to 5 already added the constraint. Make
no changes to other databases (sqlite, postgres) during the
upgrade from version 5 to 6.
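As a rough illustration of the MySQL piece, the composite unique index and the version-6 upgrade guard can be sketched as below. This is a hypothetical sketch only: the helper names and the literal `_key` column are illustrative, since the real backend derives the indexed column from each class's key property.

```python
# Hypothetical sketch of the MySQL index logic described above; the
# helper names and the literal "_key" column are illustrative, not
# roundup's actual backend API.

def key_retired_index_ddl(classname):
    # A composite UNIQUE index over (key column, __retired__) lets many
    # retired nodes reuse a key value (each retired node stores its own
    # id in __retired__) while at most one active node (__retired__ = 0)
    # can hold that value at a time.
    return ('CREATE UNIQUE INDEX _%s_key_retired_idx '
            'ON _%s (_key, __retired__)' % (classname, classname))

def upgrade_5_to_6_adds_index(dbtype, created_at_version):
    # Only MySQL databases created at version 5 are missing the index;
    # databases upgraded 4 -> 5 gained it during that upgrade, and the
    # other backends (sqlite, postgres) need no change for 5 -> 6.
    return dbtype == 'mysql' and created_at_version == 5
```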
postgres: Handle the exception raised on unique constraint violation.
The exception invalidates the database connection so it
can't be used to recover from the exception.
Added two new database methods:
checkpoint_data - performs a db.commit under postgres
does nothing on other backends
restore_connection_on_error - does a db.rollback on
postgres, does nothing on other
backends
With the rollback() done on the connection I can use the
database connection to fix up the import that failed on the
unique constraint. This makes postgres slower, but without a
commit after every imported object the rollback would discard
all the entries imported up to that point.
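The per-object checkpoint/rollback pattern described above can be sketched with stand-in classes. The real methods live on roundup's backend database objects; the classes, the `import_rows` loop, and the `fixup` hook here are illustrative only.

```python
# Illustrative sketch of checkpoint_data / restore_connection_on_error;
# stand-in classes, not roundup's actual backend code.

class PostgresLikeDB:
    # Under postgres a unique-constraint violation invalidates the
    # connection, so recovery needs a real commit before each object
    # and a rollback after a failure.
    def __init__(self):
        self.commits = 0
        self.rollbacks = 0
    def checkpoint_data(self):
        self.commits += 1          # db.commit() in the real backend
    def restore_connection_on_error(self):
        self.rollbacks += 1        # db.rollback() in the real backend

class OtherBackendDB:
    # sqlite/mysql connections stay usable after the error,
    # so both methods are no-ops.
    def checkpoint_data(self):
        pass
    def restore_connection_on_error(self):
        pass

def import_rows(db, rows, import_one, fixup):
    # Commit before each object so a failing row only rolls back
    # itself, restore the connection, then hand the conflicting row
    # to the fixup logic.
    for row in rows:
        db.checkpoint_data()
        try:
            import_one(row)
        except ValueError:         # stands in for the constraint error
            db.restore_connection_on_error()
            fixup(row)
```

This is why postgres imports get slower: every imported object costs a commit, but a mid-import failure no longer discards the rows already imported.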
Trying to figure out how to make the caller do_import batch
and recover from this failure is beyond me.
Also dismissed requiring the export csv file to be processed
before importing. Pushing that onto the user just seems
wrong. Since import/export isn't done frequently, avoiding
the surprise of a failing import, and the resulting user
frustration, seems worth the cost. The import can also be run
in verbose mode, where it prints each row as it is processed,
so while it may take a while the user does get feedback.
db_test_base.py: add test for upgrade from 5 to 6.
| author | John Rouillard <rouilj@ieee.org> |
|---|---|
| date | Thu, 10 Jun 2021 12:52:05 -0400 |
| parents | 97a45bfa62a8 |
| children | 269f39e28d5c |
```diff
--- a/test/db_test_base.py	Mon Jun 07 10:50:45 2021 -0400
+++ b/test/db_test_base.py	Thu Jun 10 12:52:05 2021 -0400
@@ -242,6 +242,59 @@
     def testRefresh(self):
         self.db.refresh_database()
 
+    def testUpgrade_5_to_6(self):
+
+        if(self.db.dbtype in ['anydbm', 'memorydb']):
+            self.skipTest('No schema upgrade needed on non rdbms backends')
+
+        # load the database
+        self.db.issue.create(title="flebble frooz")
+        self.db.commit()
+
+        self.assertEqual(self.db.database_schema['version'], 6,
+                         "This test only runs for database version 6")
+        self.db.database_schema['version'] = 5
+        if self.db.dbtype == 'mysql':
+            # version 6 has 5 indexes
+            self.db.sql('show indexes from _user;')
+            self.assertEqual(5, len(self.db.cursor.fetchall()),
+                             "Database created with wrong number of indexes")
+
+            self.drop_key_retired_idx()
+
+            # after dropping (key, __retired__) composite index we have
+            # 3 index entries
+            self.db.sql('show indexes from _user;')
+            self.assertEqual(3, len(self.db.cursor.fetchall()))
+
+            # test upgrade adding index
+            self.db.post_init()
+
+            # they're back
+            self.db.sql('show indexes from _user;')
+            self.assertEqual(5, len(self.db.cursor.fetchall()))
+
+            # test a database already upgraded from 4 to 5
+            # so it has the index to enforce key uniqueness
+            self.db.database_schema['version'] = 5
+            self.db.post_init()
+
+            # they're still here.
+            self.db.sql('show indexes from _user;')
+            self.assertEqual(5, len(self.db.cursor.fetchall()))
+        else:
+            # this should be a no-op
+            # test upgrade
+            self.db.post_init()
+
+    def drop_key_retired_idx(self):
+        c = self.db.cursor
+        for cn, klass in self.db.classes.items():
+            if klass.key:
+                sql = '''drop index _%s_key_retired_idx on _%s''' % (cn, cn)
+                self.db.sql(sql)
+
     #
     # automatic properties (well, the two easy ones anyway)
     #
@@ -2901,12 +2954,24 @@
         if self.db.dbtype not in ['anydbm', 'memorydb']:
             # no logs or fixup needed under anydbm
-            self.assertEqual(2, len(self._caplog.record_tuples))
+            # postgres requires commits and rollbacks
+            # as part of error recovery, so we get commit
+            # logging that we need to account for
+            if self.db.dbtype == 'postgres':
+                log_count=24
+                handle_msg_location=16
+                # add two since rollback is logged
+                success_msg_location = handle_msg_location+2
+            else:
+                log_count=2
+                handle_msg_location=0
+                success_msg_location = handle_msg_location+1
+            self.assertEqual(log_count, len(self._caplog.record_tuples))
             self.assertIn('Attempting to handle import exception for id 7:',
-                          self._caplog.record_tuples[0][2])
+                          self._caplog.record_tuples[handle_msg_location][2])
             self.assertIn('Successfully handled import exception for id 7 '
                           'which conflicted with 6',
-                          self._caplog.record_tuples[1][2])
+                          self._caplog.record_tuples[success_msg_location][2])
 
         # This is needed, otherwise journals won't be there for anydbm
         self.db.commit()
```
