view roundup/token.py @ 5710:0b79bfcb3312

Add support for making an idempotent POST. This allows retrying a POST that was interrupted. It involves creating a post-once-exactly (POE) url /rest/data/<class>/@poe/<random_token>. This url acts the same as a POST to /rest/data/<class>; however, once the @poe url is used, it can't be used for a second POST. To make these changes:

1) Move the body of post_collection into a new post_collection_inner
   function, and have post_collection call post_collection_inner.

2) Add a handler for POST to rest/data/<class>/@poe. This returns a
   unique POE url. By default the url expires after 30 minutes. The POE
   random token is only valid for a specific user and is stored in the
   session db.

3) Add a handler for POST to rest/data/<class>/@poe/<random_token>. The
   random token generated in 2 is validated for the proper class (if the
   token is not generic) and the proper user, and must not have expired.
   If everything is valid, post_collection_inner is called to process
   the input and generate the new entry.

To make recognition of 2 stable (so it's not confused with
rest/data/<:class_name>/<:item_id>), the @ was removed from
Routing::url_to_regex. The current Routing.execute method stops on the
first regular expression that matches the URL. Since item_id doesn't
accept a POST, I was sometimes getting a 405 bad method error. My guess
is that the order of the regular expressions is not stable, so sometimes
I would get the right regexp for /data/<class>/@poe and sometimes the
one for /data/<class>/<item_id>. By removing the @ from url_to_regex,
there is no way for the item_id case to match @poe.

There are alternate fixes we may need to look at. If a regexp matches
but the method does not, return to the regexp-matching loop in execute()
looking for another match; only once every possible match has failed
should the code return a 405 method failure. Another fix is to implement
a more sophisticated mechanism so that
@Routing.route("/data/<:class_name>/<:item_id>/<:attr_name>", 'PATCH')
has different regexps for matching <:class_name>, <:item_id> and
<:attr_name>. Currently the regexp specified by url_to_regex is used for
every component.

Other fixes:

- Made failure to find any props in props_from_args return an empty
  dict rather than throwing an unhandled error.
- Made __init__ for SimulateFieldStorageFromJson handle an empty json
  doc. Useful for POSTing to rest/data/<class>/@poe with an empty
  document.

Testing: added testPostPOE to test/rest_common.py that I think covers
all the code that was added.

Documentation: Added doc to rest.txt in the "Client API" section, titled
"Safely Re-sending POST". Moved the existing section "Adding new rest
endpoints" in "Client API" to a new second-level section called
"Programming the REST API". Also a minor change to the simple rest
client, moving the header settings to continuation lines rather than
showing one long line.
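The POE token rules above (one use only, per-user, optionally bound to a class, expiring after 30 minutes) can be sketched with a small in-memory store. This is an illustrative model of the semantics, not Roundup's actual session-db implementation; the names PoeStore, issue and redeem are made up for this sketch.

```python
import secrets
import time


class PoeStore:
    """In-memory stand-in for the session db holding POE tokens."""

    LIFETIME = 30 * 60  # default: tokens expire after 30 minutes

    def __init__(self):
        # token -> (user, class_name, expiration time)
        self._tokens = {}

    def issue(self, user, class_name=None):
        """Create a one-use token for `user`.

        A token with class_name=None is 'generic' and may be redeemed
        against any class; otherwise it is bound to one class.
        """
        token = secrets.token_urlsafe(16)
        self._tokens[token] = (user, class_name, time.time() + self.LIFETIME)
        return token

    def redeem(self, token, user, class_name):
        """Validate and consume a token; a second redeem must fail."""
        try:
            # pop() consumes the token, so it can never be used twice
            owner, bound_class, expires_at = self._tokens.pop(token)
        except KeyError:
            return False        # unknown or already-used token
        if owner != user:
            return False        # tokens are only good for a specific user
        if bound_class is not None and bound_class != class_name:
            return False        # class-bound token used on the wrong class
        if time.time() > expires_at:
            return False        # token expired
        return True
```

A handler for POST to rest/data/<class>/@poe/<random_token> would then call something like redeem() before handing the request body to post_collection_inner.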
author John Rouillard <rouilj@ieee.org>
date Sun, 14 Apr 2019 21:07:11 -0400
parents 0942fe89e82e
children 7d276bb8b46d

#
# Copyright (c) 2001 Richard Jones, richard@bofh.asn.au.
# This module is free software, and you may redistribute it and/or modify
# under the same terms as Python, so long as this copyright message and
# disclaimer are retained in their original form.
#
# This module is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# 

"""This module provides the tokeniser used by roundup-admin.
"""
__docformat__ = 'restructuredtext'

def token_split(s, whitespace=' \r\n\t', quotes='\'"',
        escaped={'r':'\r', 'n':'\n', 't':'\t'}):
    r'''Split the string up into tokens. An occurrence of a ``'`` or ``"`` in
    the input will cause the splitter to ignore whitespace until a matching
    quote char is found. Embedded non-matching quote chars are also skipped.

    Whitespace and quoting characters may be escaped using a backslash.
    ``\r``, ``\n`` and ``\t`` are converted to carriage-return, newline and
    tab.  All other backslashed characters are left as-is.

    Valid examples::

           hello world      (2 tokens: hello, world)
           "hello world"    (1 token: hello world)
           "Roch'e" Compaan (2 tokens: Roch'e Compaan)
           Roch\'e Compaan  (2 tokens: Roch'e Compaan)
           address="1 2 3"  (1 token: address=1 2 3)
           \\               (1 token: \)
           \n               (1 token: a newline)
           \o               (1 token: \o)

    Invalid examples::

           "hello world     (no matching quote)
           Roch'e Compaan   (no matching quote)
    '''
    l = []
    pos = 0
    NEWTOKEN = 'newtoken'
    TOKEN = 'token'
    QUOTE = 'quote'
    ESCAPE = 'escape'
    quotechar = ''
    state = NEWTOKEN
    oldstate = ''    # one-level state stack ;)
    length = len(s)
    finish = 0
    token = ''
    while 1:
        # end of string, finish off the current token
        if pos == length:
            if state == QUOTE:
                # unmatched quote
                raise ValueError
            elif state == TOKEN:
                l.append(token)
            break
        c = s[pos]
        if state == NEWTOKEN:
            # looking for a new token
            if c in quotes:
                # quoted token
                state = QUOTE
                quotechar = c
                pos = pos + 1
                continue
            elif c in whitespace:
                # skip whitespace
                pos = pos + 1
                continue
            elif c == '\\':
                pos = pos + 1
                oldstate = TOKEN
                state = ESCAPE
                continue
            # otherwise we have a token
            state = TOKEN
        elif state == TOKEN:
            if c in whitespace:
                # have a token, and have just found a whitespace terminator
                l.append(token)
                pos = pos + 1
                state = NEWTOKEN
                token = ''
                continue
            elif c in quotes:
                # have a token, just found embedded quotes
                state = QUOTE
                quotechar = c
                pos = pos + 1
                continue
            elif c == '\\':
                pos = pos + 1
                oldstate = state
                state = ESCAPE
                continue
        elif state == QUOTE and c == quotechar:
            # in a quoted token and found a matching quote char
            pos = pos + 1
            # now we're looking for whitespace
            state = TOKEN
            continue
        elif state == ESCAPE:
            # escaped-char conversions (t, r, n)
            # TODO: octal, hexdigit
            state = oldstate
            if c in escaped:
                c = escaped[c]
        # just add this char to the token and move along
        token = token + c
        pos = pos + 1
    return l
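As a point of comparison (not part of this module), the standard library's shlex in POSIX mode honours quotes and backslash escapes in a similar way, so it reproduces most of the docstring examples above. One difference: token_split translates ``\n``, ``\r`` and ``\t`` into control characters, while shlex leaves the escaped letter as-is.

```python
import shlex

# POSIX-mode shlex also strips quotes and honours backslash escapes.
print(shlex.split('hello world'))       # ['hello', 'world']
print(shlex.split('"hello world"'))     # ['hello world']
print(shlex.split(r"Roch\'e Compaan"))  # ["Roch'e", 'Compaan']

# Unlike token_split, shlex does not convert \n into a newline:
# the backslash simply escapes the 'n'.
print(shlex.split(r"a\nb"))             # ['anb']
```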

# vim: set filetype=python ts=4 sw=4 et si

Roundup Issue Tracker: http://roundup-tracker.org/