diff roundup/backends/sessions_redis.py @ 6814:3f60a71b0812

Summary: Support selecting the session/otk data store. Add redis as a
data store. Allow the admin to select the backend data store.

Compatibility matrix:

   main\session> | anydbm | sqlite | redis | mysql | postgresql |
   --------------+--------+--------+-------+-------+------------+
   anydbm        |   D    |        |   X   |       |            |
   sqlite        |   X    |   D    |   X   |       |            |
   mysql         |        |        |       |   D   |            |
   postgresql    |        |        |       |       |     D      |
   --------------+--------+--------+-------+-------+------------+
   D - default if unconfigured, X - compatible choice

DETAILS

roundup/configuration.py: add config.ini section sessiondb with
settings: backend and redis_url.

CHANGES.txt, doc/admin_guide.txt, doc/installation.txt,
doc/upgrading.txt: document configuration of the session db and
redis. Plus some other fixes:
  admin - clarified why we do not drop the __words and __textids
    tables in the native-fts conversion. Typo fix.
  upgrading - document how you can keep using anydbm for session
    data with sqlite. Fix a duplicated sentence in an upgrading
    config.ini section.

roundup/backends/back_anydbm.py, roundup/backends/back_sqlite.py:
code to support the redis and redis/anydbm backends respectively.

roundup/backends/sessions_redis.py: new storage backend for redis.

roundup/rest.py, roundup/cgi/actions.py, roundup/cgi/templating.py:
redis calculates lifetime/timestamp differently. The dbm/rdbms
stores expire an item when its timestamp is more than 1 week old, so
code wanting a shorter lifetime stored: now - 1 week + lifetime.
Redis treats the timestamp as an absolute expiry time, so reusing
that value makes keys expire too early. Convert the code to use the
lifetime() method in BasicDatabase, which generates the right
timestamp for each backend.

test/session_common.py: added tests for more cases: get without a
default, getall on a non-existing key, etc. The timestamp test now
uses the new self.get_ts, which is overridden in other tests. Test
that data types survive storage.

test/test_redis_session.py: test the redis session store with sqlite
and anydbm primary databases.

test/test_anydbm.py, test/test_sqlite.py: add tests to make sure the
databases are properly set up.
  sqlite - add test cases where anydbm is used as the data store.
  anydbm - remove the updateTimestamp override; add get_ts().

test/test_config.py: tests on redis_url and on compatibility of the
chosen sessiondb backend.

.travis.yml: add the redis db and redis-py.
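The timestamp change described above can be sketched as follows. This is a minimal standalone illustration, not the actual roundup code; `DEFAULT_LIFETIME` mirrors `BasicDatabase.default_lifetime`, and the extra `now` parameter on `lifetime()` exists only to make the arithmetic testable:

```python
import time

DEFAULT_LIFETIME = 60 * 60 * 24 * 7  # one week, as in BasicDatabase

def lifetime(key_lifetime=None, now=None):
    """Absolute expiry timestamp for redis EXPIREAT: now + lifetime."""
    if now is None:
        now = time.time()
    return now + (key_lifetime or DEFAULT_LIFETIME)

# The dbm/rdbms stores expire a key when its stored timestamp is more
# than one week old, so code wanting a 1-hour lifetime stored
# "now - 1 week + 1 hour". Handing that same value to redis EXPIREAT
# (an absolute expiry time) would expire the key a week too early:
now = 1_000_000.0
rdbms_style = now - DEFAULT_LIFETIME + 3600  # aged out 1 hour from now
redis_style = lifetime(3600, now=now)        # expires 1 hour from now
assert redis_style - rdbms_style == DEFAULT_LIFETIME
```

Routing all callers through a per-backend `lifetime()` method lets each store produce whichever convention it needs.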
author John Rouillard <rouilj@ieee.org>
date Thu, 04 Aug 2022 14:41:58 -0400
parents
children fe0091279f50
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/roundup/backends/sessions_redis.py	Thu Aug 04 14:41:58 2022 -0400
@@ -0,0 +1,246 @@
+"""This module defines a redis based store that's used by
+the CGI interface to store session and one-time-key
+information.
+
+Yes, it's called "sessions" because originally it only
+defined a session class. It is now also used for One
+Time Key handling.
+
+It uses simple string values rather than the redis hash
+structure because hash field values are always strings,
+and we need to be able to represent the same data types
+available to the rdbms and dbm session stores.
+
+session_dbm uses marshal.dumps and marshal.loads. This seems
+4 to 18 times faster than the repr()/eval() used by
+session_rdbms. So use marshal even though the stored value
+is unreadable when inspected (e.g. using redis-cli).
+"""
+__docformat__ = 'restructuredtext'
+
+import logging, marshal, redis, time
+
+from roundup.anypy.html import html_escape as escape
+
+from roundup.i18n import _
+
+
+class BasicDatabase:
+    ''' Provide a nice encapsulation of a redis store.
+
+        Keys are id strings, values are automatically marshalled data.
+    '''
+    name = None
+    default_lifetime = 60*60*24*7  # 1 week
+
+    # FIXME: figure out how to allow admin to change this
+    # to repr/eval using interfaces.py or other method.
+    # marshalled data is not readable when debugging.
+    tostr = marshal.dumps
+    todict = marshal.loads
+
+    def __init__(self, db):
+        self.config = db.config
+        url = self.config.SESSIONDB_REDIS_URL
+
+        # Example at default port without auth.
+        #    redis://localhost:6379/0?health_check_interval=2
+        #
+        # Do not allow decode_responses=True in url, data is
+        # marshal'ed binary data that will get broken by decoding.
+        # Enforce this in configuration.
+        self.redis = redis.Redis.from_url(url=url, decode_responses=False)
+
+    def makekey(self, key):
+        '''method to namespace all keys using self.name:....'''
+        return "%s:%s" % (self.name, key)
+
+    def exists(self, infoid):
+        return self.redis.exists(self.makekey(infoid))
+
+    def clear(self):
+        '''Delete all keys from the database.'''
+        self.redis.flushdb()
+
+    _marker = []
+
+    def get(self, infoid, value, default=_marker):
+        '''get a specific value from the data associated with a key'''
+        infoid = self.makekey(infoid)
+        v = self.redis.get(infoid)
+        if not v:
+            if default is not self._marker:
+                return default
+            raise KeyError(_('Key %(key)s not found in %(name)s '
+                             'database.' % {"name": self.name,
+                                            "key": escape(infoid)}))
+        return self.todict(v)[value]
+
+    def getall(self, infoid):
+        '''return all values associated with a key'''
+        try:
+            d = self.redis.get(self.makekey(infoid))
+            if d is not None:
+                d = self.todict(d)
+            else:
+                d = {}
+            del d['__timestamp']
+            return d
+        except KeyError:
+            # It is possible for d to be malformed and missing
+            # __timestamp. If so, we get a misleading "not found"
+            # error, but anydbm does the same so....
+            raise KeyError(_('Key %(key)s not found in %(name)s '
+                             'database.' % {"name": self.name,
+                                            "key": escape(infoid)}))
+
+    ''' def set_no_transaction(self, infoid, **newvalues):
+        """ Kept for reference; this is missing a transaction and
+            may be affected by a race condition on update. It will
+            work for redis-like embedded databases that don't
+            support watch/multi/exec.
+        """
+        infoid = self.makekey(infoid)
+        timestamp=None
+        values = self.redis.get(infoid)
+        if values is not None:
+            values = self.todict(values)
+        else:
+            values={}
+        try:
+            timestamp = float(values['__timestamp'])
+        except KeyError:
+            pass  # stay at None
+
+        if '__timestamp' in newvalues:
+            try:
+                float(newvalues['__timestamp'])
+            except ValueError:
+                # keep original timestamp if present
+                newvalues['__timestamp'] = timestamp or \
+                                        (time.time() + self.default_lifetime)
+        else:
+            newvalues['__timestamp'] = time.time() + self.default_lifetime
+
+        values.update(newvalues)
+
+        self.redis.set(infoid, self.tostr(values))
+        self.redis.expireat(infoid, int(values['__timestamp']))
+        '''
+
+    def set(self, infoid, **newvalues):
+        """ Implement set using watch/multi/exec to get some
+            protection against a change committing between
+            getting the data and setting new fields and
+            saving.
+        """
+        infoid = self.makekey(infoid)
+        timestamp = None
+
+        with self.redis.pipeline() as transaction:
+            # Give up and log after three tries.
+            # Do not loop forever.
+            for _retry in [1, 2, 3]:
+                # I am ignoring transaction return values,
+                # assuming all errors will arrive via exceptions.
+                # It is not clear the return values are useful.
+                transaction.watch(infoid)
+                values = transaction.get(infoid)
+                if values is not None:
+                    values = self.todict(values)
+                else:
+                    values = {}
+
+                try:
+                    timestamp = float(values['__timestamp'])
+                except KeyError:
+                    pass  # stay at None
+
+                if '__timestamp' in newvalues:
+                    try:
+                        float(newvalues['__timestamp'])
+                    except ValueError:
+                        # keep original timestamp if present
+                        newvalues['__timestamp'] = timestamp or \
+                                (time.time() + self.default_lifetime)
+                else:
+                    newvalues['__timestamp'] = time.time() + \
+                                               self.default_lifetime
+
+                values.update(newvalues)
+
+                transaction.multi()
+                transaction.set(infoid, self.tostr(values))
+                transaction.expireat(infoid, int(values['__timestamp']))
+                try:
+                    # Assume this works or raises a WatchError
+                    # exception indicating I need to retry.
+                    # Since this is not a true transaction, an error
+                    # in one step doesn't roll back other changes,
+                    # so I again ignore the return codes as it is
+                    # not clear that I can do the rollback myself.
+                    # Format errors (e.g. expireat('d', 'd'))
+                    # raise exceptions that bubble up and result
+                    # in mail to the admin.
+                    transaction.execute()
+                    break
+                except redis.exceptions.WatchError:
+                    logging.getLogger('roundup.redis').info(
+                        _('Key %(key)s changed in %(name)s db' %
+                        {"key": escape(infoid), "name": self.name})
+                    )
+            else:
+                raise Exception(_("Redis set failed after 3 retries"))
+
+    def list(self):
+        return list(self.redis.keys(self.makekey('*')))
+
+    def destroy(self, infoid=None):
+        '''use unlink rather than delete as unlink is async and doesn't
+           wait for memory to be freed server-side
+        '''
+        self.redis.unlink(self.makekey(infoid))
+
+    def commit(self):
+        ''' no-op '''
+        pass
+
+    def lifetime(self, key_lifetime=None):
+        """Return the proper timestamp to expire a key with key_lifetime
+           specified in seconds. Default lifetime is self.default_lifetime.
+        """
+        return time.time() + (key_lifetime or self.default_lifetime)
+
+    def updateTimestamp(self, sessid):
+        ''' Other backends update the timestamp only if it would
+            change by more than 60 seconds. Doing that in redis
+            would require:
+               getting the data's __timestamp,
+               calculating whether an update is needed,
+               and setting the new timestamp if needed.
+            Why bother? Just set it unconditionally.
+        '''
+        # no need to do timestamp calculations
+        lifetime = self.lifetime()
+        # note set also updates the expireat on the key in redis
+        self.set(sessid, __timestamp=lifetime)
+
+    def clean(self):
+        ''' redis handles key expiration, so nothing to do here.
+        '''
+        pass
+
+    def close(self):
+        ''' redis uses a connection pool that self manages, so nothing
+            to do on close.'''
+        pass
+
+
+class Sessions(BasicDatabase):
+    name = 'sessions'
+
+
+class OneTimeKeys(BasicDatabase):
+    name = 'otks'
+
+# vim: set sts ts=4 sw=4 et si :
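As a self-contained illustration of the storage format described in the module docstring (a sketch, independent of the module above; `makekey` here is a free function mirroring `BasicDatabase.makekey`):

```python
import marshal

def makekey(name, key):
    # mirrors BasicDatabase.makekey: namespace every key as "name:key"
    return "%s:%s" % (name, key)

# marshal round-trips the mixed-type dicts the session stores keep,
# which is why the client must not use decode_responses=True: the
# stored value is binary marshal data, not UTF-8 text.
values = {'user': 'admin', '__timestamp': 1659638518.0, 'count': 3}
blob = marshal.dumps(values)
assert isinstance(blob, bytes)
assert marshal.loads(blob) == values
assert makekey('sessions', 'abc123') == 'sessions:abc123'
```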

Roundup Issue Tracker: http://roundup-tracker.org/