view roundup/cgi/accept_language.py @ 6628:2bb6d7baa47d
"Comment" out the meta data - will not process under 1.7.5 sphinx
Apparently field names with : fail on 1.7.5 sphinx which is the
virtual env version on sourceforge. It works on my 1.6.7 python2
install.
Looks like I need to add sphinxext-opengraph to get this to work.
However that is python3 only so need to spin up new virtualenv etc.
Looks like no python3 on sourceforge which may be an issue.
On sourceforge in /home/project-web/roundup/src/docbuilder these
packages are used and must be scp'ed as pip has no network access
outside of sourceforge:
Babel-2.6.0-py2.py3-none-any.whl
Jinja2-2.10-py2.py3-none-any.whl
MarkupSafe-1.0.tar.gz
Pygments-2.2.0-py2.py3-none-any.whl
Sphinx-1.7.5
Sphinx-1.7.5-py2.py3-none-any.whl
Sphinx-1.7.5.tar.gz
alabaster-0.7.11-py2.py3-none-any.whl
certifi-2018.4.16-py2.py3-none-any.whl
chardet-3.0.4-py2.py3-none-any.whl
docutils-0.14-py2-none-any.whl
idna-2.7-py2.py3-none-any.whl
imagesize-1.0.0-py2.py3-none-any.whl
packaging-17.1-py2.py3-none-any.whl
pip-10.0.1
pip-10.0.1.tar.gz
pyparsing-2.2.0-py2.py3-none-any.whl
pytz-2018.5-py2.py3-none-any.whl
requests-2.19.1-py2.py3-none-any.whl
setuptools-39.2.0-py2.py3-none-any.whl
six-1.11.0-py2.py3-none-any.whl
snowballstemmer-1.2.1-py2.py3-none-any.whl
sphinxcontrib_websupport-1.1.0-py2.py3-none-any.whl
typing-3.6.4-py2-none-any.whl
urllib3-1.23-py2.py3-none-any.whl
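The wheel list above implies a fully offline install. A hedged sketch of the pip invocation that pattern usually takes (the flags are standard pip options, but the exact command used on sourceforge is an assumption, not taken from the log; the command is echoed rather than executed, since the wheelhouse directory only exists on that host):

```shell
# Hypothetical sketch: install Sphinx purely from the scp'ed wheels.
# --no-index keeps pip off PyPI; --find-links points it at the local
# wheel directory named in the note above.
WHEELHOUSE=/home/project-web/roundup/src/docbuilder
echo pip install --no-index --find-links="$WHEELHOUSE" Sphinx==1.7.5
```

With `--no-index` set, pip never contacts the network, which matches the "no network access outside of sourceforge" constraint; every dependency wheel (Jinja2, Pygments, docutils, ...) must already be present in the wheelhouse.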
| author | John Rouillard <rouilj@ieee.org> |
|---|---|
| date | Sun, 27 Mar 2022 13:57:04 -0400 |
| parents | 3b945aee0919 |
| children | 63c9680eed20 |
"""Parse the Accept-Language header as defined in RFC2616. See http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.4 for details. This module should follow the spec. Author: Hernan M. Foffani (hfoffani@gmail.com) Some use samples: >>> parse("da, en-gb;q=0.8, en;q=0.7") ['da', 'en_gb', 'en'] >>> parse("en;q=0.2, fr;q=1") ['fr', 'en'] >>> parse("zn; q = 0.2 ,pt-br;q =1") ['pt_br', 'zn'] >>> parse("es-AR") ['es_AR'] >>> parse("es-es-cat") ['es_es_cat'] >>> parse("") [] >>> parse(None) [] >>> parse(" ") [] >>> parse("en,") ['en'] """ import re import heapq # regexp for languange-range search nqlre = "([A-Za-z]+[-[A-Za-z]+]*)$" # regexp for languange-range search with quality value qlre = r"([A-Za-z]+[-[A-Za-z]+]*);q=([\d\.]+)" # both lre = re.compile(nqlre + "|" + qlre) whitespace = ' \t\n\r\v\f' try: # Python 3. remove_ws = (str.maketrans('', '', whitespace),) except AttributeError: # Python 2. remove_ws = (None, whitespace) def parse(language_header): """parse(string_with_accept_header_content) -> languages list""" if language_header is None: return [] # strip whitespaces. lh = language_header.translate(*remove_ws) # if nothing, return if lh == "": return [] # split by commas and parse the quality values. pls = [lre.findall(x) for x in lh.split(',')] # drop uncomformant qls = [x[0] for x in pls if len(x) > 0] # use a heap queue to sort by quality values. # the value of each item is 1.0 complement. pq = [] order=0 for l in qls: order +=1 if l[0] != '': heapq.heappush(pq, (0.0, order, l[0])) else: heapq.heappush(pq, (1.0-float(l[2]), order, l[1])) # get the languages ordered by quality # and replace - by _ return [ heapq.heappop(pq)[2].replace('-','_') for x in range(len(pq)) ] if __name__ == "__main__": import doctest doctest.testmod() # vim: set et sts=4 sw=4 :
