view website/issues/detectors/newissuecopy.py @ 6610:db3f0ba75b4a

Change checkpoint_data and restore_connection_on_error to use subtransactions. checkpoint_data and restore_connection_on_error used to commit() and rollback() the db connection, which causes additional I/O and load. Changed them to use 'SAVEPOINT name' and 'ROLLBACK TO name' as a faster method for handling errors within a transaction. One thing to note is that PostgreSQL (unlike the SQL standard) doesn't overwrite an older savepoint with the same name. It keeps all savepoints but only rolls back to the newest one with a given name. This could be a resource issue. I left a commented-out RELEASE statement in case somebody runs into an issue due to too many savepoints. I expect it to slow down the import but....
author John Rouillard <rouilj@ieee.org>
date Sat, 29 Jan 2022 11:29:36 -0500
parents 35ea9b1efc14
children
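The savepoint technique described in the commit message can be sketched as follows. This is a minimal illustration, not the Roundup implementation itself: it uses Python's stdlib sqlite3 (which shares PostgreSQL's SAVEPOINT / ROLLBACK TO / RELEASE syntax), and the savepoint name `detector_cp` and the table are hypothetical.

```python
# Sketch: replace full commit()/rollback() with SAVEPOINT/ROLLBACK TO so an
# error can be undone without aborting the enclosing transaction.
# (Assumption: sqlite3 stands in for PostgreSQL; names are illustrative.)
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manual txn control
conn.execute("CREATE TABLE issue (id INTEGER PRIMARY KEY, title TEXT)")

conn.execute("BEGIN")
conn.execute("INSERT INTO issue (title) VALUES ('first')")

# checkpoint_data equivalent: mark a point we can return to cheaply
conn.execute("SAVEPOINT detector_cp")
try:
    conn.execute("INSERT INTO issue (title) VALUES ('second')")
    raise RuntimeError("simulated error inside the transaction")
except RuntimeError:
    # restore_connection_on_error equivalent: undo only the work done
    # since the savepoint; the outer transaction (and 'first') survive
    conn.execute("ROLLBACK TO detector_cp")
    # per the commit message, an optional RELEASE frees the savepoint,
    # avoiding a pile-up of same-named savepoints under PostgreSQL:
    # conn.execute("RELEASE detector_cp")

conn.commit()
titles = [row[0] for row in conn.execute("SELECT title FROM issue")]
print(titles)  # ['first']
```

Because ROLLBACK TO only discards work done after the savepoint, the surrounding transaction stays open, avoiding the extra I/O of a full commit/rollback cycle per error.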

from roundup import roundupdb

def newissuecopy(db, cl, nodeid, oldvalues):
    ''' Copy a message about new issues to a team address.
    '''
    # include all the messages from the create in the change note
    change_note = cl.generateCreateNote(nodeid)

    # send a copy to the nosy list
    for msgid in cl.get(nodeid, 'messages'):
        try:
            # note: last arg must be a list
            cl.send_message(nodeid, msgid, change_note,
                ['roundup-devel@lists.sourceforge.net'])
        except roundupdb.MessageSendError as message:
            raise roundupdb.DetectorError(message)

def init(db):
    db.issue.react('create', newissuecopy)

Roundup Issue Tracker: http://roundup-tracker.org/