Franck Pommereau

API doc extraction

syntax: glob
doc/api/*
dist/*
dput.sh
*.pyc
*~
.gone
,*
*.class
abcd crashes on [foo>>(s), foo<<(s+s)]
[foo(x,y,z)] creates a multi-arc instead of a tuple
abcd substitutes for-loop variables:
net Foo (x) :
buffer b : int = [x for x in range(4)]
^ ^
substituted (should not be)
errors during expression evaluation are reported directly to the user; they should be wrapped to indicate that this is a model error, and they should be caught in abcd --simul/--check
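A minimal sketch of the wrapping suggested above (the names `ModelError` and `evaluate` are hypothetical, not the actual SNAKES API):

```python
# Hypothetical sketch, not the SNAKES API: re-raise evaluation errors
# as a dedicated exception so that callers (e.g. abcd --simul/--check)
# can tell a model error from an internal tool bug.
class ModelError(Exception):
    """Raised when a user-supplied model expression fails to evaluate."""

def evaluate(expr, binding):
    try:
        return eval(expr, {}, binding)
    except Exception as err:
        raise ModelError("error in model expression %r: %s" % (expr, err))
```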
recursive-include doc *
include *
all:
@echo "Commands:"
@echo " release prepare source for release"
@echo " tgz build a source tarball"
@echo " doc build API documentation"
@echo " dput build and upload Ubuntu packages"
@echo " clean delete some garbage files"
@echo " test run tests through supported Python implementations"
@echo " next-deb increments debian/VERSION"
@echo " next-ppa increments debian/PPA"
@echo " lang build generated files in snakes/lang"
@echo " emacs compile Emacs files"
committed:
hg summary|grep -q '^commit: (clean)$$'
next-deb:
echo 1 > debian/PPA
echo $$((1+$$(cat debian/VERSION))) > debian/VERSION
emacs:
emacs -batch -f batch-byte-compile utils/abcd-mode.el
next-ppa:
echo $$((1+$$(cat debian/PPA))) > debian/PPA
release: committed test doc tgz
hg tag version-$$(cat VERSION)
echo 1 > debian/PPA
echo 1 > debian/VERSION
hg commit -m "version $$(cat VERSION)"
hg push
lang:
python mklang.py
tgz: committed
hg archive snakes-$$(cat VERSION)-$$(cat debian/VERSION)
cd snakes-$$(cat VERSION)-$$(cat debian/VERSION) && make doc
tar cf snakes-$$(cat VERSION)-$$(cat debian/VERSION).tar snakes-$$(cat VERSION)-$$(cat debian/VERSION)
rm -rf snakes-$$(cat VERSION)-$$(cat debian/VERSION)
gzip -9 snakes-$$(cat VERSION)-$$(cat debian/VERSION).tar
gpg --armor --sign --detach-sig snakes-$$(cat VERSION)-$$(cat debian/VERSION).tar.gz
doc: snakes/*.py snakes/plugins/*.py snakes/utils/*.py snakes/lang/*.py
make -C doc
dput.sh: VERSION debian/*
python mkdeb.py
dput: committed dput.sh
sh dput.sh
clean:
rm -f $$(find . -name ",*")
rm -f $$(find . -name "*.pyc")
rm -f $$(find . -name "*~")
rm -f $$(find . -name "*.class")
rm -rf $$(find . -type d -name __pycache__)
test:
python2.5 test.py
python2.6 test.py
python2.7 test.py
python3 test.py
unladen test.py
pypy test.py
spypy test.py
stackless test.py
jython test.py
+ emacs mode for ABCD
! fixed nodes merge in plugin labels
! fixed nets.MultiArc.flow (thanks to Jan Ciger's report)
version 0.9.13 (Fri Jul 16 17:01:22 CEST 2010):
+ added inhibitor arcs
+ added Ubuntu Lucid (10.04) package
! fixed data.WordSet.fresh when base is used
! fixed reduce(xor, []) in some __hash__ methods
+ added PetriNet.layout method in snakes.plugins.gv
version 0.9.12 (Thu Apr 1 19:42:33 CEST 2010):
+ removed PyGraphviz dependency (layout method suppressed temporarily)
+ now compatible with PyPy (1.2), Unladen-Swallow and Jython (2.5.1)
! fixed snakes.plugins.clusters.rename_node
! fixed snakes.nets.Flush.flow
! fixed an uncaught exception in snakes.data.MultiSet.__eq__
and snakes.data.Symbol.__eq__
! hopefully fixed LALR built in ABCD compiler
! fixed hash-related issues
* moved PLY stuff to snakes.utils.abcd
* snakes.compyler has been completely replaced (PLY dependency removed)
version 0.9.11 (Thu Mar 25 19:32:31 CET 2010):
! fixed various doctests
! fixed issues with attributes locking
! fixed issues related to missing __hash__ methods
! fixed renaming a node to itself in ABCD
+ added snakes.nets.Evaluator.__contains__
+ added base argument to snakes.data.WordSet
+ added option --symbols to ABCD compiler
+ added snakes.data.Symbol
+ added net instances naming in ABCD
+ added logo
+ added let function to update bindings from expressions
+ added snakes.data.Substitution.__setitem__
version 0.9.10 (Fri Jun 19 13:32:45 CEST 2009):
! fixed inconsistent hashing on clusters
! fixed mutability of hashed multisets
! fixed snakes.nets.Tuple.mode
version 0.9.9 (Tue, 19 May 2009 15:00:00 +0100):
+ added mkdeb.py to build deb packages for multiple distributions
+ ported to Python 2.6
version 0.9.8 (lun, 23 mar 2009 17:30:34 (CET)):
+ added graph_attr option to snakes.plugins.gv.StateGraph
* plugin gv now draws partially constructed marking graphs
! fixed expression compilation in presence of net parameters in ABCD
compiler
! fixed flush arcs binding
* plugin lashdata has been removed because it is too experimental
* merged plugin decorators into a single one
version 0.9.7 (Tue, 20 Jan 2009 13:04:15 +0100):
! fixed sharing of globals between net components
! fixed loading of PNML when some plugins fail to load
+ added cpp option to abcd
+ added some docstrings and doctests
! fixed sharing of globals between net components
version 0.9.6 (Fri, 28 Nov 2008 15:22:27 +0100):
+ added doc for ABCD
! fixed False and True handling in ABCD compiler
+ added cross product types to ABCD
version 0.9.5 (Wed, 19 Nov 2008 21:42:05 +0100):
+ added distutils setup.py
* strip PNML data before decoding (work around invalid XML)
! fixed Multiset.__pnmlload__ on iterable values
version 0.9.4 (mar, 28 oct 2008 12:28:02 (UTC+0100)):
! fixed nets.Value.__eq__, __ne__ and __hash__
! fixed nets.Token.__repr__
+ added Flush arcs
! fixed nets.Tuple.bind
! fixed PNML dump/load of subclasses
! fixed nets.PetriNet.merge_* in the presence of Tuple arcs
! fixed plugins.clusters.Cluster.path
! fixed plugins.status.PetriNet.__pnmlload__
! fixed ABCD lexer
+ improved ABCD compiler with parametric nets, Python declarations
! fixed black token values in ABCD arcs and flush
+ ABCD compiler launches pdb.post_mortem when an error occurs in
+ added TCP mode to query plugin debug mode
* removed dependency to epydoc in snakes.typing
+ updated PNML doc
+ added a producer/consumer ABCD example
version 0.9.3 (lun, 29 sep 2008 08:04:55 (CEST)):
! fixed a bug in place pnml loading
! fixed wrong association of pnml tags to classes
+ improved query and associated programs, added documentation
* improved loading of PNML files with plugins
* PNML tag <snakes> can only occur once as a child of <pnml>
+ abcd compiler adds much structural information to PNML (including
+ test.py checks if snakes.version is correct (AST if asked)
+ query plugin now has two verbosity levels
+ added snakes.compyler.Tree.__pnmldump__()
version 0.9.2
+ added query plugin and demon client/server programs
! various small bugs fixed
+ improved PNML serialisation of standard objects
+ snakes.plugins.load puts new module in caller's environment
! fixed snakes.pnml.Tree.update
* Substitution.image() returns a set
+ added apix.py
! fixed PNML export of empty arcs
! fixed broken abcd.py
* updated abcd.py so it uses gv
* improved clustering efficiency
version 0.9.1
+ added plugin gv to replace graphviz
+ added plugin clusters to manage clusters of nodes
+ updated plugins ops so it builds clusters
* the plugin posops is deprecated since gv does the work much better
version 0.9
* dropped compatibility with Python 2.4
+ finished pnml
* the compyler module has been completely replaced
Changes in 0.8.4 (06.09.2007 11:30:21):
+ updated the tutorial
Changes in 0.8.3 (29.06.2007 11:57:39):
+ snakes.plugins.graphviz: added options to control the layout
+ snakes.plugins.graphviz: improved the rendering
+ updated the tutorial
Changes in 0.8.2 (22.06.2007 13:09:02):
+ snakes.plugins.ops: added a name hiding operator (PetriNet.__div__)
! fixed several copy/clone problems
Changes in 0.8.1 (20.06.2007 10:18:17):
* updated tutorial
+ snakes.plugins.pos: the position of a node can be redefined when
! snakes.plugins.pos: accept float positions when loading from pnml
Changes in 0.8 (19.06.2007 17:19:19): First public release.
SNAKES is the Net Algebra Kit for Editors and Simulators
========================================================
////////////////////////////////////////////////////////////////
This file is formatted in order to be processed by AsciiDoc
(http://www.methods.co.nz/asciidoc). It will be more comfortable
to render it or to read the HTML version available at:
http://www.univ-paris12.fr/pommereau/soft/snakes/index.html
////////////////////////////////////////////////////////////////
SNAKES is a Python library that provides everything necessary to define
and execute many sorts of Petri nets, in particular those of the PBC
and M-nets family. Its main aim is to be a general Petri net library,
able to cope with most Petri net models and providing researchers with
a tool to quickly prototype their new ideas. SNAKES should be suitable
to provide the data model for editors or simulators; in fact, any
editor that uses SNAKES may also be a simulator, as SNAKES can execute
any net.
A key feature of SNAKES is the ability to use arbitrary Python objects
as tokens and arbitrary Python expressions at many points, for
instance in transition guards or on arcs outgoing from transitions.
This is what makes SNAKES so general. It relies on the capability of
Python to run dynamically provided code (the $eval$ function). This
feature may not be efficient enough for model-checking: speed is the
price to pay for such wide generality. However, in the case of a new
model, SNAKES may happen to be the only available tool.
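The mechanism described above can be illustrated with plain Python; the guard string and binding dictionary below are illustrative, not SNAKES objects:

```python
# Illustration of evaluating a dynamically provided expression, as the
# text describes for transition guards; the names here are made up.
guard = "x > 1 and y % 2 == 0"     # guard written as Python source text
binding = {"x": 3, "y": 4}         # values carried by tokens
result = eval(guard, {}, binding)  # dynamic evaluation against the binding
print(result)  # -> True
```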
Another important feature of SNAKES is the plugin system, which allows
extending the features and working with specialised classes of Petri
nets. Currently, the following plugins are provided:
pos:: adds to nodes the capability of holding their position. Nodes
can be moved or shifted, Petri nets can be shifted globally and their
bounding box can be computed.
gv:: adds a method to draw a Petri net or a state graph using the tool
http://www.graphviz.org[GraphViz] (through the Python binding
http://networkx.lanl.gov/wiki/pygraphviz[PyGraphViz]). This module
replaces the previous plugin called _graphviz_ and provides more
flexibility and security; _graphviz_ is still provided but deprecated.
status:: extends the Petri net model by adding a status to the nodes.
This is similar to what is used in the models of the PBC or M-nets
family. Nodes can then be merged automatically according to their status.
ops:: this plugin defines the control flow operations on Petri nets
usually found in the PBC and M-nets family. Nets can be composed in
parallel, sequence, choice and iteration. These operations rely on the
places' status.
labels:: allows attaching arbitrary labels to most objects (places,
transitions, nets, ...)
posops:: combines the features of the pos and ops plugins: the control
flow operations are modified so that node positions are rearranged to
produce well-shaped nets. This plugin is deprecated because the new
_gv_ does the work much better.
synchro:: defines the label-based transition synchronisation
introduced in the M-nets model.
// export:: allows to save Petri nets objects in the format of the tools
// http://pep.sourceforge.net[PEP], http://helena.cnam.fr[Helena] and
// http://maria[Maria]. http://pnml[PNML] is also supported as it is
// built-in SNAKES.
lashdata:: allows defining data that is not held in the places of the
Petri net but stored instead in the special structures handled by the
http://www.montefiore.ulg.ac.be/~boigelot/research/lash/[library
Lash]. This allows in particular aggregating possibly infinite sets of
states into one meta-state.
clusters:: an auxiliary plugin that allows grouping the nodes of a
Petri net. This feature is used by _ops_ to record how a net is
constructed, which is exploited by _gv_ to build a nice layout of
composed nets.
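Independently of SNAKES' actual code, the general idea of a plugin that extends the base classes can be sketched in plain Python; everything below (the class and the `pos_plugin` function) is illustrative and is NOT the SNAKES API:

```python
# Illustrative sketch (not the SNAKES API): a plugin is a function that
# returns an extended subclass, here mimicking what the pos plugin
# description says about objects holding a position that can be shifted.
class PetriNet(object):
    def __init__(self, name):
        self.name = name

def pos_plugin(base):
    "Return a subclass of base whose instances hold a position."
    class PosPetriNet(base):
        def __init__(self, name, pos=(0, 0)):
            base.__init__(self, name)
            self.pos = pos
        def shift(self, dx, dy):
            x, y = self.pos
            self.pos = (x + dx, y + dy)
    return PosPetriNet

Net = pos_plugin(PetriNet)
net = Net("demo", pos=(1, 2))
net.shift(1, 1)
print(net.pos)  # -> (2, 3)
```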
Getting SNAKES and installing it
--------------------------------
Download http://www.univ-paris12.fr/lacl/pommereau/soft/snakes/snakes-{VERSION}.tar.gz[$snakes-{VERSION}.tar.gz$]
({sys:../stat snakes-{VERSION}.tar.gz})
To install SNAKES, uncompress the archive and copy the directory
snakes in a location where Python can find it (_i.e._, in a directory
listed in your $PYTHONPATH$ environment variable).
SNAKES should work with Python 2.5 or later but will _not_ work with
older versions. Optionally, you may want to install additional
software required by some plugins:
gv:: depends on http://www.graphviz.org[GraphViz] and its Python
binding http://networkx.lanl.gov/wiki/pygraphviz[PyGraphViz]. The
plugin _graphviz_ depends on GraphViz only but is now deprecated.
lashdata:: requires
http://www.montefiore.ulg.ac.be/~boigelot/research/lash[Lash] and
the
http://www.univ-paris12.fr/lacl/pommereau/soft/index.html#PyLash[Python
Lash binding].
[NOTE]
=====================
(C) 2007 Franck Pommereau <pommereau@univ-paris12.fr>
This library is free software; you can redistribute it and/or modify
it under the terms of the GNU Lesser General Public License as
published by the Free Software Foundation; either version 2.1 of the
License, or (at your option) any later version.
This library is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301
USA
=====================
Contact
-------
Please feel free to send me comments, questions, bug reports or
contributions by mailto:pommereau@univ-paris12.fr[email].
If you wish to be notified of the new releases of SNAKES, please
register at the http://freshmeat.net/projects/snakes[FreshMeat page].
You may contribute to SNAKES either by sending patches by email or by
using the https://launchpad.net/snakes[SNAKES Launchpad page].
Documentation
-------------
A good starting point may be the link:tutorial.html[tutorial]. Then,
you may find the link:api/index.html[API reference manual] useful; it
documents the whole API and gives numerous examples of how to use the
various classes and functions.
If you do not program in Python, you can learn it in a few hours thanks
to the very good http://docs.python.org/tut/tut.html[Python tutorial].
In order to know more about the PBC and M-nets family or the Petri net
compositions defined in the plugins, you may read papers from
http://www.univ-paris12.fr/lacl/pommereau/publis[my publications page]
(in particular those with _calculus_ in the title).
* add name to transitions, eg, [buff-(x) as name]
make it consistent with instance names?
* zero test/inhibitor arc
! parameter and global buffer with same names => parameter ignored
! accept net instances with too many parameters
0.9.16
#!/usr/bin/env python
import snakes.utils.abcd.main as abcdmain
abcdmain.main()
#!/usr/bin/env python
import socket, sys, readline
from snakes.pnml import dumps, loads
from snakes.plugins.query import Query
env = {}
def public (fun) :
env[fun.__name__] = fun
return fun
@public
def set (*larg, **karg) :
"""set(name, value) -> None
assign value (object) to name (str) on the server"""
return Query("set", *larg, **karg)
@public
def get (*larg, **karg) :
"""get(name) -> object
return the last value assigned to name (str)"""
return Query("get", *larg, **karg)
@public
def delete (*larg, **karg) :
"""delete(name) -> None
discard name (str)"""
return Query("del", *larg, **karg)
@public
def call (*larg, **karg) :
"""call(obj, ...) -> object
call obj (str or result from another call) with the additional arguments
return whatever the called object returns"""
return Query("call", *larg, **karg)
@public
def help (command=None) :
"""help(command) -> None
print help about command, if no command is given, list available commands"""
if command is None:
print "commands:", ", ".join(repr(cmd) for cmd in env
if not cmd.startswith("_"))
print " type 'help(cmd)' to get help about a command"
elif command in env :
print env[command].__doc__
elif hasattr(command, "__name__") and command.__name__ in env :
print command.__doc__
else :
print "unknown command %r" % command
@public
def quit () :
"""quit() -> None
terminate the client"""
print "bye"
sys.exit(0)
@public
def load (path) :
"""load(path) -> object
load a PNML file from path (str) and return the object it represents"""
return loads(open(path).read())
@public
def show (query) :
"""show(obj) -> None
show the PNML representation of obj (object), for instance of a query"""
print dumps(query)
_verbose = False
@public
def verbose (state=None) :
"""verbose(state) -> None
turn on (state=True), off (state=False) or toggle (state not
given) the printing of queries before they are sent to the
server"""
global _verbose
if state is None :
_verbose = not _verbose
else :
_verbose = state
if _verbose :
print "dump of queries enabled"
else :
print "dump of queries disabled"
try :
if sys.argv[1] in ("-t", "--tcp") :
proto = "TCP"
del sys.argv[1]
else :
proto = "UDP"
host, port = sys.argv[1:]
port = int(port)
except :
print >>sys.stderr, "Usage: snkc [--tcp] HOST PORT"
sys.exit(1)
sock = None
def sendto (data, address) :
global sock
if proto == "UDP" :
if sock is None :
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2)
sock.sendto(data, address)
else :
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(2)
sock.connect(address)
sock.send(data)
def recvfrom (size) :
global sock
if proto == "UDP" :
data, address = sock.recvfrom(size)
else :
parts = []
while True :
parts.append(sock.recv(size))
if len(parts[-1]) < size :
break
address = sock.getpeername()
sock.close()
data = "".join(parts)
return data, address
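For illustration, the UDP case of the two helpers above can be exercised end to end in a single process (Python 3 syntax here, while the client itself targets Python 2; the sockets and payloads below are made up):

```python
import socket

# Stand-in "server" socket bound to an ephemeral localhost port.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
server.settimeout(2)
host, port = server.getsockname()

# Client side, mirroring sendto()/recvfrom() above: one datagram out,
# one datagram back, with the same 2 second timeout.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2)
client.sendto(b"<query/>", (host, port))

data, address = server.recvfrom(2 ** 20)  # same buffer size as snkc uses
server.sendto(b"<answer/>", address)      # echo an answer back
reply, _ = client.recvfrom(2 ** 20)
print(reply)  # -> b'<answer/>'
server.close()
client.close()
```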
while True :
try :
data = raw_input("? ")
q = eval(data.strip(), env)
except (EOFError, KeyboardInterrupt) :
quit()
except SystemExit :
raise
except Exception, e :
print "query error:", e
continue
if q is not None :
q = dumps(q)
if _verbose :
print "# query to %s:%u" % (host, port)
print q
sendto(q, (host, port))
try :
data, address = recvfrom(2**20)
if _verbose :
print "# answer from %s:%u" % address
print data.strip()
except socket.timeout :
print "# no answer received (timeout)"
print
#!/usr/bin/env python
import sys
import snakes.plugins
snakes.plugins.load("query", "snakes.nets", "nets")
port = 1234
size = 2**20
verbose = 0
proto = "UDP"
def help () :
print "Usage: snkd [OPTION]"
print "Options:"
print " -p PORT, --port PORT listen on port number PORT"
print " -t, --tcp use TCP instead of UDP"
print " -s SIZE, --size SIZE set buffer size for inputs"
print " -v, --verbose display information about queries"
print " (use '-v' twice to dump queries/answers)"
print " -h, --help print this help and exit"
args = sys.argv[1:]
try :
while len(args) > 0 :
arg = args.pop(0)
if arg in ("-p", "--port") :
port = int(args.pop(0))
elif arg in ("-v", "--verbose") :
verbose += 1
elif arg in ("-t", "--tcp") :
proto = "TCP"
elif arg in ("-s", "--size") :
size = int(args.pop(0))
elif arg in ("-h", "--help") :
help()
sys.exit(0)
else :
print >>sys.stderr, "snkd: invalid option %r" % arg
sys.exit(1)
except SystemExit :
raise
except :
cls, val, tb = sys.exc_info()
print >>sys.stderr, "snkd: %s, %s" % (cls.__name__, val)
sys.exit(1)
if verbose :
print "# starting"
print "# listen on: %s:%u" % (proto, port)
print "# buffer size: %uKb" % (size/1024)
print "# verbosity:", verbose
try :
if proto == "UDP" :
nets.UDPServer(port, size=size, verbose=verbose).run()
else :
nets.TCPServer(port, size=size, verbose=verbose).run()
except KeyboardInterrupt :
print "# bye"
except :
cls, val, tb = sys.exc_info()
if verbose > 1 :
raise
elif verbose :
print "# fatal error"
print >>sys.stderr, "snkd: %s, %s" % (cls.__name__, val)
sys.exit(2)
lucid 10.04 LTS
hardy 8.04 LTS
oneiric 11.10
precise 12.04 LTS
python-snakes (0.9.16-1) UNRELEASED; urgency=low
* see NEWS
-- Franck Pommereau <pommereau@univ-paris12.fr> Tue, 07 Jun 2011 12:23:33 +0200
python-snakes (0.9.15-1) UNRELEASED; urgency=low
* see NEWS
-- Franck Pommereau <pommereau@univ-paris12.fr> Tue, 10 May 2011 18:22:34 +0200
python-snakes (0.9.14-1) UNRELEASED; urgency=low
* see NEWS
-- Franck Pommereau <pommereau@univ-paris12.fr> Mon, 11 Apr 2011 16:45:50 +0200
python-snakes (0.9.13-2) UNRELEASED; urgency=low
* see NEWS
-- Franck Pommereau <pommereau@univ-paris12.fr> Fri, 23 Jul 2010 20:06:53 +0200
python-snakes (0.9.13-1) UNRELEASED; urgency=low
* see NEWS
-- Franck Pommereau <pommereau@univ-paris12.fr> Fri, 16 Jul 2010 17:09:14 +0200
python-snakes (0.9.12-2) UNRELEASED; urgency=low
* see NEWS
-- Franck Pommereau <pommereau@univ-paris12.fr> Sat, 03 Apr 2010 21:06:50 +0200
python-snakes (0.9.12-1) UNRELEASED; urgency=low
* see NEWS
-- Franck Pommereau <pommereau@univ-paris12.fr> Thu, 01 Apr 2010 19:46:02 +0200
python-snakes (0.9.11-1) UNRELEASED; urgency=low
* see NEWS
-- Franck Pommereau <pommereau@univ-paris12.fr> Thu, 25 Mar 2010 19:39:47 +0100
python-snakes (0.9.10-1) UNRELEASED; urgency=low
* see NEWS
-- Franck Pommereau <pommereau@univ-paris12.fr> Fri, 19 Jun 2009 13:40:36 +0200
python-snakes (0.9.9-2) UNRELEASED; urgency=low
* see NEWS
-- Franck Pommereau <pommereau@univ-paris12.fr> Tue, 19 May 2009 09:29:46 +0200
python-snakes (0.9.8-2) UNRELEASED; urgency=low
* see NEWS
-- Franck Pommereau <pommereau@univ-paris12.fr> Tue, 14 Apr 2009 20:02:32 +0200
python-snakes (0.9.8-1) UNRELEASED; urgency=low
* see NEWS
-- Franck Pommereau <pommereau@univ-paris12.fr> Mon, 23 Mar 2009 17:34:06 +0100
python-snakes (0.9.7-1) UNRELEASED; urgency=low
* see NEWS
-- Franck Pommereau <pommereau@univ-paris12.fr> Tue, 20 Jan 2009 13:07:09 +0100
python-snakes (0.9.6-2) UNRELEASED; urgency=low
* see NEWS
-- Franck Pommereau <pommereau@univ-paris12.fr> Tue, 02 Dec 2008 17:47:43 +0100
python-snakes (0.9.6-1) UNRELEASED; urgency=low
* see NEWS
-- Franck Pommereau <pommereau@univ-paris12.fr> Fri, 28 Nov 2008 15:24:05 +0100
python-snakes (0.9.5-2) UNRELEASED; urgency=low
* Fixed doc installation path
-- Franck Pommereau <pommereau@univ-paris12.fr> Fri, 21 Nov 2008 13:20:55 +0100
python-snakes (0.9.5-1) UNRELEASED; urgency=low
* Trying to fix build on Launchpad
-- Franck Pommereau <pommereau@univ-paris12.fr> Fri, 21 Nov 2008 08:19:04 +0100
python-snakes (0.9.5) UNRELEASED; urgency=low
* See NEWS
-- Franck Pommereau <pommereau@univ-paris12.fr> Wed, 19 Nov 2008 21:47:52 +0100
python-snakes (0.9.4) UNRELEASED; urgency=low
* Initial release. (Closes: #XXXXXX)
-- Franck Pommereau <pommereau@univ-paris12.fr> Tue, 18 Nov 2008 12:09:16 +0100
Source: python-snakes
Section: python
Priority: extra
Maintainer: Franck Pommereau <pommereau@univ-paris12.fr>
Homepage: http://lacl.univ-paris12.fr/pommereau/soft/snakes
Build-Depends: python (>=2.5), cdbs (>=0.4.49), debhelper (>= 5), python-central (>=0.5.6)
XS-Python-Version: >=2.5
Standards-Version: 3.7.2
Package: python-snakes
Architecture: all
XB-Python-Version: ${python:Versions}
Depends: python (>=2.5), python-central, graphviz, python-tk
Description: SNAKES is the Net Algebra Kit for Editors and Simulators
SNAKES is a general purpose Petri net Python library that allows
defining and executing most classes of Petri nets. It features a
plugin system to allow its extension. In particular, plugins are
provided to implement the operations usually found in the PBC and
M-nets family.
This package was debianized by Franck Pommereau <pommereau@univ-paris12.fr> on
DATE
License:
This package is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2 of
the License, or (at your option) any later version.
This package is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this package; if not, see
`http://www.gnu.org/licenses/lgpl-3.0.txt'.
On Debian systems, the complete text of the GNU General
Public License can be found in `/usr/share/common-licenses/LGPL'.
The Debian packaging is (C) 2008, Franck Pommereau
<pommereau@univ-paris12.fr> and is licensed under the LGPL, see above.
#!/usr/bin/make -f
DEB_PYTHON_SYSTEM=pycentral
include /usr/share/cdbs/1/rules/debhelper.mk
include /usr/share/cdbs/1/class/python-distutils.mk
GNU LESSER GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
This version of the GNU Lesser General Public License incorporates
the terms and conditions of version 3 of the GNU General Public
License, supplemented by the additional permissions listed below.
0. Additional Definitions.
As used herein, "this License" refers to version 3 of the GNU Lesser
General Public License, and the "GNU GPL" refers to version 3 of the GNU
General Public License.
"The Library" refers to a covered work governed by this License,
other than an Application or a Combined Work as defined below.
An "Application" is any work that makes use of an interface provided
by the Library, but which is not otherwise based on the Library.
Defining a subclass of a class defined by the Library is deemed a mode
of using an interface provided by the Library.
A "Combined Work" is a work produced by combining or linking an
Application with the Library. The particular version of the Library
with which the Combined Work was made is also called the "Linked
Version".
The "Minimal Corresponding Source" for a Combined Work means the
Corresponding Source for the Combined Work, excluding any source code
for portions of the Combined Work that, considered in isolation, are
based on the Application, and not on the Linked Version.
The "Corresponding Application Code" for a Combined Work means the
object code and/or source code for the Application, including any data
and utility programs needed for reproducing the Combined Work from the
Application, but excluding the System Libraries of the Combined Work.
1. Exception to Section 3 of the GNU GPL.
You may convey a covered work under sections 3 and 4 of this License
without being bound by section 3 of the GNU GPL.
2. Conveying Modified Versions.
If you modify a copy of the Library, and, in your modifications, a
facility refers to a function or data to be supplied by an Application
that uses the facility (other than as an argument passed when the
facility is invoked), then you may convey a copy of the modified
version:
a) under this License, provided that you make a good faith effort to
ensure that, in the event an Application does not supply the
function or data, the facility still operates, and performs
whatever part of its purpose remains meaningful, or
b) under the GNU GPL, with none of the additional permissions of
this License applicable to that copy.
3. Object Code Incorporating Material from Library Header Files.
The object code form of an Application may incorporate material from
a header file that is part of the Library. You may convey such object
code under terms of your choice, provided that, if the incorporated
material is not limited to numerical parameters, data structure
layouts and accessors, or small macros, inline functions and templates
(ten or fewer lines in length), you do both of the following:
a) Give prominent notice with each copy of the object code that the
Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the object code with a copy of the GNU GPL and this license
document.
4. Combined Works.
You may convey a Combined Work under terms of your choice that,
taken together, effectively do not restrict modification of the
portions of the Library contained in the Combined Work and reverse
engineering for debugging such modifications, if you also do each of
the following:
a) Give prominent notice with each copy of the Combined Work that
the Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the Combined Work with a copy of the GNU GPL and this license
document.
c) For a Combined Work that displays copyright notices during
execution, include the copyright notice for the Library among
these notices, as well as a reference directing the user to the
copies of the GNU GPL and this license document.
d) Do one of the following:
0) Convey the Minimal Corresponding Source under the terms of this
License, and the Corresponding Application Code in a form
suitable for, and under terms that permit, the user to
recombine or relink the Application with a modified version of
the Linked Version to produce a modified Combined Work, in the
manner specified by section 6 of the GNU GPL for conveying
Corresponding Source.
1) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (a) uses at run time
a copy of the Library already present on the user's computer
system, and (b) will operate properly with a modified version
of the Library that is interface-compatible with the Linked
Version.
e) Provide Installation Information, but only if you would otherwise
be required to provide such information under section 6 of the
GNU GPL, and only to the extent that such information is
necessary to install and execute a modified version of the
Combined Work produced by recombining or relinking the
Application with a modified version of the Linked Version. (If
you use option 4d0, the Installation Information must accompany
the Minimal Corresponding Source and Corresponding Application
Code. If you use option 4d1, you must provide the Installation
Information in the manner specified by section 6 of the GNU GPL
for conveying Corresponding Source.)
5. Combined Libraries.
You may place library facilities that are a work based on the
Library side by side in a single library together with other library
facilities that are not Applications and are not covered by this
License, and convey such a combined library under terms of your
choice, if you do both of the following:
a) Accompany the combined library with a copy of the same work based
on the Library, uncombined with any other library facilities,
conveyed under the terms of this License.
b) Give prominent notice with the combined library that part of it
is a work based on the Library, and explaining where to find the
accompanying uncombined form of the same work.
6. Revised Versions of the GNU Lesser General Public License.
The Free Software Foundation may publish revised and/or new versions
of the GNU Lesser General Public License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the
Library as you received it specifies that a certain numbered version
of the GNU Lesser General Public License "or any later version"
applies to it, you have the option of following the terms and
conditions either of that published version or of any later version
published by the Free Software Foundation. If the Library as you
received it does not specify a version number of the GNU Lesser
General Public License, you may choose any version of the GNU Lesser
General Public License ever published by the Free Software Foundation.
If the Library as you received it specifies that a proxy can decide
whether future versions of the GNU Lesser General Public License shall
apply, that proxy's public statement of acceptance of any version is
permanent authorization for you to choose that version for the
Library.
all:
rm -rf api
epydoc --output api --no-frames --graph=all \
--name="SNAKES is the Net Algebra Kit for Editors and Simulators" \
--navlink='<img alt="SNAKES logo" src="snakes-logo.jpg" width="120" height="120"/>' \
--no-private ../snakes
convert ../logo/snakes-logo.png -background white -scale 120x120 ./api/snakes-logo.jpg
ABCD specification language
===========================
This document presents the ABCD language and compiler that is provided
with SNAKES.
WARNING: this documentation needs updating since the ABCD syntax has
changed slightly. A precise (but not user-friendly) description of the
syntax can be found in the source:
- snakes/lang/abcd/abcd.pgen is the concrete grammar and
- snakes/lang/abcd/abcd.asdl is the abstract syntax
Introduction
------------
ABCD (Asynchronous Box Calculus with Data) is a specification language
whose semantics is given in terms of coloured Petri nets. The formal
semantics will not be defined here, only an intuition of it. ABCD
can be seen as both a Python-based implementation and a variant of
several algebras of Petri nets:
- with respect to the versatile box calculus
(http://lacl.univ-paris12.fr/pommereau/publis/2007-dasd.html), ABCD
does not provide tasks and abort mechanism, but it allows nested
parallelism;
- with respect to the box calculus with coloured buffers or the box
calculus with high-level buffers
(http://lacl.univ-paris12.fr/pommereau/publis/2002-bcd.html and
http://lacl.univ-paris12.fr/pommereau/publis/2004-dads.html), ABCD
does not provide synchronous communication operations.
Syntax
------
The syntax of ABCD is a mix between Python and a process algebra. An
ABCD specification is structured as follows:
1. a possibly empty list of definitions, each being either
1. a Python ``def`` statement (function definition)
2. a Python ``import`` or ``from`` statement
3. an ABCD communication buffer definition
4. an ABCD sub-net definition (similar to a sub-program)
2. an ABCD process (similar to the ``main`` function of a C program)
As in Python, blocks and sub-blocks are defined through indentation,
and comments begin with ``#`` and extend to the end of the line.
Unlike Python, scoping is lexical, with name masking as usual.
Python definitions
^^^^^^^^^^^^^^^^^^
Function definitions and module imports are exactly as in Python.
Class definitions are not allowed; to use classes, one must create a
separate Python module and import its content.
The following is an example of valid ABCD definitions:
from foo import *
from bar import spam
import math
def sqrt (x) :
return int(math.sqrt(x))
Buffers definitions
^^^^^^^^^^^^^^^^^^^
An ABCD buffer is implemented in the Petri nets semantics as a
coloured place, so a buffer is:
typed
values that can be inserted in the buffer must belong to a given
type; using ``object`` allows putting anything in the buffer.
unbounded
there is no a priori limit to the number of values that can be
inserted in a buffer, not even to the number of copies of a given
value within a buffer.
unordered
the order in which values are retrieved from a buffer is
non-deterministic and absolutely not related to the order of
insertion.
In order to contain the combinatorial explosion during the analysis of
the Petri net resulting from an ABCD specification, it is recommended
to take these aspects into account. In particular, it could be good
to:
- define buffer types as small as possible, allowing just the
expected values and no more;
- implement some policy in order to limit the number of values
simultaneously stored in a buffer: if a buffer is actually
unbounded, the resulting Petri net is likely impossible to
analyse;
- implement a FIFO policy whenever possible, for instance by storing
numbered pairs ``(num, obj)`` instead of just ``obj`` and by
maintaining a counter for the next value to insert and the next to
get.
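The last point can be illustrated with a small Python sketch (plain
Python, not ABCD, and not the SNAKES implementation): the buffer is
modelled as an unordered set of ``(num, obj)`` pairs, and two counters
enforce FIFO retrieval.

```python
# Sketch: recovering FIFO order from an unordered buffer by
# storing numbered pairs (num, obj), as suggested above.
buffer = set()   # unordered buffer of (num, obj) pairs
put_count = 0    # number of the next value to insert
get_count = 0    # number of the next value to retrieve

def put(obj):
    global put_count
    buffer.add((put_count, obj))
    put_count += 1

def get():
    global get_count
    # only the pair carrying the expected number may be consumed
    for num, obj in buffer:
        if num == get_count:
            buffer.remove((num, obj))
            get_count += 1
            return obj
    raise KeyError("no value with number %d" % get_count)

for item in "abc":
    put(item)
assert [get(), get(), get()] == ["a", "b", "c"]
```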
In order to declare a buffer, one has to write:
buffer NAME : TYPE = INIT
where ``NAME`` is the name of the buffer (a Python identifier),
``TYPE`` is its type (a Python type name or a more complex type
specification, see below), and ``INIT`` is the initial content of the
buffer: ``()`` if empty, or a comma separated list of values.
For instance, an empty buffer of integers and a buffer of strings with
two values can be declared as:
buffer count : int = ()
buffer messages : str = 'hello', 'world'
Buffer types can be:
Python classes
for instance ``int``, ``str``, ``float``, ``bool``, ``object``,
etc., including user defined classes.
Enumerated types
for instance ``enum(1, 3, 'foo', True)`` allows all the values
listed between the parentheses but no other value.
Union types
for instance, ``int|float`` allows integers as well as floating
point numbers. Intersection types, using operator ``&``, are also
allowed, even though it is hard to find a real use for them.
Sets of typed values
for instance, ``{int|float}`` defines sets of numbers (integers or
floating point).
Lists of typed values
for instance, ``[str]`` defines lists of strings.
Dictionary types
for instance, ``str:int`` specifies ``dict`` objects whose keys
are strings and values are integers.
Cross product of types
for instance, ``(int, str)`` specifies tuples of length two whose
first item is an integer and second item is a string. Tuples of
length one *must* use a trailing comma; for instance, ``(int,)``
stands for integer singletons, but ``(int)`` is equivalent to just
``int`` as usual in Python.
Parentheses are allowed in order to combine complex types together, as
in ``(int|float):(str|NoneType)``.
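The checks performed by cross-product types can be understood through
a small Python sketch (hypothetical, not the SNAKES implementation):

```python
# Hypothetical sketch (not the SNAKES implementation): checking a
# value against a cross-product type such as (int, str).
def check(value, spec):
    if isinstance(spec, tuple):
        # cross product: same length, item-wise check
        return (isinstance(value, tuple)
                and len(value) == len(spec)
                and all(check(v, s) for v, s in zip(value, spec)))
    # a plain Python class: direct instance check
    return isinstance(value, spec)

assert check((1, "hello"), (int, str))
assert check((1,), (int,))
assert not check((1, 2), (int, str))
```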
Sub-nets definitions
^^^^^^^^^^^^^^^^^^^^
A sub-net is declared as follows:
net NAME (PARAMS) :
BLOCK
where ``NAME`` is the name of the sub-net (a Python identifier),
``PARAMS`` is a list of parameters (as in Python with default values
allowed but not ``*`` or ``**`` arguments) and ``BLOCK`` is an indented
block that follows the syntax of an ABCD specification (with optional
definitions and a mandatory process term).
Objects (Python functions or imports, and buffers) defined inside a
sub-net are local to it and cannot be accessed from the outside. But,
objects defined before the sub-net (unless nested in another sub-net)
are visible from within the sub-net and can be used.
Process terms
^^^^^^^^^^^^^
An ABCD process is defined as a term of a process algebra whose
operators are control-flow operators:
sequential composition
the execution of ``A ; B`` starts with the execution of ``A``,
followed by the execution of ``B``
choice composition
the execution of ``A + B`` is either the execution of ``A`` or
that of ``B``, which is chosen non-deterministically.
loop composition
the execution of ``A * B`` starts with an arbitrary number of
executions of ``A``, followed by exactly one execution of ``B``;
the choice to loop or terminate is non-deterministic. So, ``A *
B`` is equivalent to ``B + (A ; B) + (A ; A ; B) + ...``.
parallel composition
the execution of ``A | B`` is that of both ``A`` and ``B``
concurrently.
Base terms of the algebra are either atomic processes or sub-net
instantiations. An atomic process, also called an action, is described
by a term enclosed in square brackets ``[...]``. The semantics of an
action is a Petri net transition. We distinguish:
``[True]``
the silent action that can always be executed and performs no
buffer access.
``[False]``
the deadlock action that can never be executed.
complex actions
such an action involves buffer accesses and an optional condition.
If ``expr`` denotes a Python expression, ``obj`` a Python constant
and ``var`` a Python identifier, buffer accesses may be:
- ``buffer+(expr)`` evaluates ``expr`` and adds the resulting value
to the buffer; in the semantics, this yields an arc from the
transition to the buffer place, labeled by ``expr``.
- ``buffer-(obj)`` consumes the value ``obj`` from the buffer; in
the semantics, this yields an arc from the buffer place to the
transition, labeled by ``obj``.
- ``buffer-(var)`` binds the variable ``var`` to a value present in
the buffer and consumes it; in the semantics, this yields an arc
from the buffer place to the transition, labeled by ``var``.
- ``buffer?(obj)`` or ``buffer?(var)`` are similar except that they
just test the presence of a value but do not consume it; this is
semantically a read arc.
- ``buffer>>(var)`` consumes all the values in the buffer and binds
the resulting multiset to the variable ``var``; this is
semantically a flush arc.
- ``buffer<<(expr)`` evaluates the expression ``expr`` (the result
must be iterable) and adds all its values to the buffer; this is
semantically a fill arc.
For instance:
[count-(x), count+(x+1), shift?(j), buf+(j+x) if x<10]
This action can be executed if the following hold:
- buffer ``count`` must hold a value that is bound to ``x``
- buffer ``shift`` must hold a value that is bound to ``j``
- the type of buffer ``count`` must allow the value resulting from
the evaluation of ``x+1``
- the type of buffer ``buf`` must allow the value resulting from the
evaluation of ``j+x``
- expression ``x<10`` must evaluate to ``True``
If all these conditions hold, the action can be executed, which results in:
- the chosen value for ``x`` is removed from buffer ``count``
- a new value corresponding to the evaluation of ``x+1`` is added to
``count``
- a new value corresponding to the evaluation of ``j+x`` is added to
``buf``
This execution is atomic: it can be considered that all buffer
accesses and condition evaluations are performed simultaneously and
instantaneously.
If ``count`` or ``shift`` contain more than one value, only those
values that fulfil the conditions listed above are considered. Among
these valuations, one is chosen non-deterministically in order to
execute the action.
Note that the variables (like ``var``) used in an action do not need
to be declared and are local to this action. These are variables
exactly like in mathematics. Moreover, if a variable is used more than
once in an action, the execution gives it a single consistent value.
For instance, ``[count-(x), shift?(x) if x != 0]`` is executable only
if a same non-zero value can be found both in ``count`` and ``shift``.
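As a plain Python sketch (not the actual semantics engine), the
executable valuations of ``[count-(x), shift?(x) if x != 0]`` are
simply the non-zero values present in both buffers:

```python
# sketch: enumerate the valuations that make the action executable;
# the buffers are modelled as plain Python sets
count = {0, 3, 5}
shift = {0, 3, 7}
candidates = sorted(x for x in count & shift if x != 0)
assert candidates == [3]
```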
Example
-------
Let's consider a simple railroad crossing involving:
- one track where trains can arrive repeatedly;
- one road crossing the track;
- a pair of gates that prevent cars from going onto the track when
a train is approaching;
- a red light that prevents the train from crossing the road before
the gates are completely closed.
When a train is approaching, the light is turned red and the gates are
asked to go down. When they arrive down, the light is reset to green.
When the train leaves the gates, they are asked to go up.
This system can be specified in ABCD as follows. We first specify
global buffers to model the red light and a communication channel
between trains and the gates:
# light is initially 'green'
buffer light : enum('red', 'green') = 'green'
# no command is available initially
buffer command : enum('up', 'down') = ()
Then we specify the behavior of the gates, together with an internal
buffer that allows their current state to be easily observed.
net gate () :
# gates are initially 'open'
buffer state : enum('open', 'moving', 'closed') = 'open'
# a sequence of actions
# receive the command 'down' and start moving
([command-('down'), state-('open'), state+('moving')] ;
# finish closing and reset the light to 'green'
[state-('moving'), state+('closed'), light-('red'), light+('green')] ;
# receive the command 'up' and start moving
[command-('up'), state-('closed'), state+('moving')] ;
# finish opening
[state-('moving'), state+('open')])
# this sequence is infinitely repeated because the loop exit
# cannot be executed
* [False]
Then we specify the track on which trains can repeatedly arrive:
net track () :
# we also need to observe the trains' position
buffer crossing : bool = False
# here also a sequence is infinitely repeated
# a train is approaching so the light is turned red and the
# gates are asked to close
([command+('down'), light-('green'), light+('red')] ;
# the train must wait for the green light before going further and
# cross the road
[light?('green'), crossing-(False), crossing+(True)] ;
# when the train leaves, gates are asked to open
[crossing-(True), crossing+(False), command+('up')])
* [False]
The full system is specified by running in parallel one instance of
the gates and one of the track.
gate() | track()
The Petri net from this specification can be drawn and saved to PNML
by invoking:
abcd --pnml railroad.pnml --dot railroad.png railroad.abcd
This creates both ``railroad.png`` and ``railroad.pnml``: the former
can be viewed in order to inspect the Petri net semantics, and the
latter can be used to verify the system. On such a small system,
SNAKES performs quickly enough for the verification. So we can use it
to iterate over the marking graph and search for an insecure state,
i.e., one in which the gates are open while a train is crossing. The
following program does the job:
from snakes.nets import *
n = loads("railroad.pnml")
g = StateGraph(n)
for s in g :
m = g.net.get_marking()
if ("train().crossing" in m
and True in m["train().crossing"]
and "closed" not in m["gate().state"]) :
print(s, m)
print("checked", len(g), "states")
Here, no insecure marking is found. This would not be the case if we
removed the red light, since a train could arrive at the crossing
faster than the gates could close.
prod-cons.abcd
a simple producer/consumer
railroad.abcd
a (not so) simple railroad crossing system
railroad.py
checks basic property of railroad.abcd
ns/ns.abcd
Needham-Schroeder public key authentication protocol
ns/ns.py
checks mutual authentication
class Nonce (object) :
def __init__ (self, agent) :
self._agent = agent
def __eq__ (self, other) :
try :
return self._agent == other._agent
except AttributeError :
return False
def __ne__ (self, other) :
return not self.__eq__(other)
def __str__ (self) :
return self.__repr__()
def __repr__ (self) :
return "Nonce(%x)" % self._agent
def _cross (sets) :
if len(sets) == 0 :
pass
elif len(sets) == 1 :
for item in sets[0] :
yield (item,)
else :
for item in sets[0] :
for others in _cross(sets[1:]) :
yield (item,) + others
class Spy (object) :
keywords = set(["crypt", "pub", "priv", "secret", "hash"])
def __init__ (self, *types) :
"""
>>> s = Spy(str, int, (str, int, (float, object)))
>>> s
<Spy>
>>> s._subtypes == set([str, int, float, object,
... (float, object),
... (str, int, (float, object))])
True
"""
self._types = set(types)
self._subtypes = set()
todo = set(self._types)
while len(todo) > 0 :
t = todo.pop()
self._subtypes.add(t)
if isinstance(t, tuple) :
todo.update(t)
def __str__ (self) :
"""
>>> str(Spy(str, int))
'<Spy>'
"""
return "<%s>" % self.__class__.__name__
def __repr__ (self) :
"""
>>> str(Spy(str, int))
'<Spy>'
"""
return self.__str__()
def __eq__ (self, other) :
"""
>>> Spy(str, int) == Spy(int, str)
True
>>> Spy(str, int) == Spy(int, str, float)
False
"""
return self._types == other._types
def __ne__ (self, other) :
"""
>>> Spy(str, int) != Spy(int, str, float)
True
>>> Spy(str, int) != Spy(int, str)
False
"""
return not self.__eq__(other)
def __hash__ (self) :
"""
>>> hash(Spy(str, int)) == hash(Spy(int, str))
True
>>> hash(Spy(str, int)) == hash(Spy(int, str, float))
False
"""
return hash(tuple(sorted(self._types)))
@classmethod
def get_type (cls, obj) :
"""
>>> Spy().get_type(('hello', ('foo', 4), 5))
(<type 'str'>, (<type 'str'>, <type 'int'>), <type 'int'>)
"""
t = type(obj)
if t is tuple :
if len(obj) > 0 and obj[0] in cls.keywords :
return (obj[0],) + tuple(cls.get_type(o) for o in obj[1:])
else :
return tuple(cls.get_type(o) for o in obj)
else :
return t
@classmethod
def match (cls, obj, pattern) :
"""
>>> Spy.match(('hello', 42, (1, 2, 3.4)), (str, int, (1, 2, float)))
True
>>> Spy.match(('hello', 42, (1, 2, 3.4)), ('hello', int, (1, 2, float)))
True
>>> Spy.match(('hello', 42, (1, 2, 3)), (str, int, (1, 2, float)))
False
>>> Spy.match(('hello', 42, (1, 2, 3, 4)), (str, int, (1, 2, float)))
False
>>> Spy.match(('hello', 42, (1, 2, 3.4)), ('foo', int, (1, 2, float)))
False
"""
if type(obj) == tuple == type(pattern) :
if len(obj) != len(pattern) :
return False
for o, p in zip(obj, pattern) :
if not cls.match(o, p) :
return False
return True
elif type(pattern) is type :
return isinstance(obj, pattern)
else :
return obj == pattern
def message (self, obj) :
"""
>>> Spy(str, int).message('hello')
True
>>> Spy(str, int).message(42)
True
>>> Spy(str, int).message((1, 2))
False
>>> Spy(str, int).message(3.14)
False
"""
return self.get_type(obj) in self._types
def fragment (self, obj) :
"""
>>> s = Spy(str, int, (str, int, (float, list)))
>>> s.fragment('hello')
True
>>> s.fragment(3.14)
True
>>> s.fragment((3.14, []))
True
>>> s.fragment(('hello', 1, (3.14, [])))
True
>>> s.fragment((1, 2))
False
>>> s.fragment({})
False
"""
return self.get_type(obj) in self._subtypes
def can_decrypt (self, message, knowledge) :
try :
if message[0] != "crypt" :
return False
key = message[1]
if key[0] == "pub" :
return ("priv", key[1]) in knowledge
elif key[0] == "priv" :
return ("pub", key[1]) in knowledge
elif key[0] == "secret" :
return key in knowledge
except Exception :
pass
return False
def can_decompose (self, message) :
try :
if not isinstance(message, tuple) :
return False
elif message[0] not in self.keywords :
return True
except Exception :
pass
return False
def learn (self, msg, knowledge) :
"""
>>> s = Spy((int, int, str), (int, int, (str, str)))
>>> k = set()
>>> k = s.learn((1, 2, 'hello'), k)
>>> for m in sorted(k) :
... print m
1
2
hello
(1, 1, 'hello')
(1, 1, ('hash', 'hello'))
(1, 1, ('hello', 'hello'))
(1, 2, 'hello')
(1, 2, ('hash', 'hello'))
(1, 2, ('hello', 'hello'))
(2, 1, 'hello')
(2, 1, ('hash', 'hello'))
(2, 1, ('hello', 'hello'))
(2, 2, 'hello')
(2, 2, ('hash', 'hello'))
(2, 2, ('hello', 'hello'))
('hash', 'hello')
('hello', 'hello')
>>> k = s.learn((2, 3, ('hello', 'world')), k)
>>> for m in sorted(k) :
... print m
1
2
3
hello
world
(1, 1, 'hello')
(1, 1, 'world')
(1, 1, ('hash', 'hello'))
(1, 1, ('hash', 'world'))
(1, 1, ('hello', 'hello'))
(1, 1, ('hello', 'world'))
(1, 1, ('world', 'hello'))
(1, 1, ('world', 'world'))
(1, 2, 'hello')
(1, 2, 'world')
(1, 2, ('hash', 'hello'))
(1, 2, ('hash', 'world'))
(1, 2, ('hello', 'hello'))
(1, 2, ('hello', 'world'))
(1, 2, ('world', 'hello'))
(1, 2, ('world', 'world'))
(1, 3, 'hello')
(1, 3, 'world')
(1, 3, ('hash', 'hello'))
(1, 3, ('hash', 'world'))
(1, 3, ('hello', 'hello'))
(1, 3, ('hello', 'world'))
(1, 3, ('world', 'hello'))
(1, 3, ('world', 'world'))
(2, 1, 'hello')
(2, 1, 'world')
(2, 1, ('hash', 'hello'))
(2, 1, ('hash', 'world'))
(2, 1, ('hello', 'hello'))
(2, 1, ('hello', 'world'))
(2, 1, ('world', 'hello'))
(2, 1, ('world', 'world'))
(2, 2, 'hello')
(2, 2, 'world')
(2, 2, ('hash', 'hello'))
(2, 2, ('hash', 'world'))
(2, 2, ('hello', 'hello'))
(2, 2, ('hello', 'world'))
(2, 2, ('world', 'hello'))
(2, 2, ('world', 'world'))
(2, 3, 'hello')
(2, 3, 'world')
(2, 3, ('hash', 'hello'))
(2, 3, ('hash', 'world'))
(2, 3, ('hello', 'hello'))
(2, 3, ('hello', 'world'))
(2, 3, ('world', 'hello'))
(2, 3, ('world', 'world'))
(3, 1, 'hello')
(3, 1, 'world')
(3, 1, ('hash', 'hello'))
(3, 1, ('hash', 'world'))
(3, 1, ('hello', 'hello'))
(3, 1, ('hello', 'world'))
(3, 1, ('world', 'hello'))
(3, 1, ('world', 'world'))
(3, 2, 'hello')
(3, 2, 'world')
(3, 2, ('hash', 'hello'))
(3, 2, ('hash', 'world'))
(3, 2, ('hello', 'hello'))
(3, 2, ('hello', 'world'))
(3, 2, ('world', 'hello'))
(3, 2, ('world', 'world'))
(3, 3, 'hello')
(3, 3, 'world')
(3, 3, ('hash', 'hello'))
(3, 3, ('hash', 'world'))
(3, 3, ('hello', 'hello'))
(3, 3, ('hello', 'world'))
(3, 3, ('world', 'hello'))
(3, 3, ('world', 'world'))
('hash', 'hello')
('hash', 'world')
('hello', 'hello')
('hello', 'world')
('world', 'hello')
('world', 'world')
>>> s = Spy(('crypt', ('pub', int), str))
>>> pub, priv = ('pub', 1), ('priv', 1)
>>> k = set([pub])
>>> k = s.learn(('crypt', priv, 'hello'), k)
>>> 'hello' in k
True
>>> k = s.learn(('crypt', pub, 'secret message'), k)
>>> 'secret message' in k
False
>>> k.add(priv)
>>> k = s.learn(('crypt', pub, 'secret message'), k)
>>> 'secret message' in k
True
"""
k = set(knowledge)
# learn from new message
# add new message to knowledge
k.add(msg)
# hash new message if useful
h = ("hash", msg)
if self.fragment(h) :
k.add(h)
# try to decrypt new message
if self.can_decrypt(msg, k) :
for m in msg[2:] :
if m not in k :
k.update(self.learn(m, k))
# try to decompose new message
elif self.can_decompose(msg) :
for m in msg :
if m not in k :
k.update(self.learn(m, k))
self._learn_(msg, k)
# compose new messages from fragments
for sub in (s for s in sorted(self._subtypes, key=self._size)
if type(s) is tuple) :
sets = []
for t in sub :
sets.append([x for x in k|self.keywords if self.match(x, t)])
k.update(_cross(sets))
return k
@classmethod
def _size (cls, obj) :
if isinstance(obj, tuple) :
return (len(obj),) + tuple(cls._size(o) for o in obj)
else :
return 1
def _learn_ (self, m, k) :
for attr in (a for a in dir(self) if a.startswith("learn_")) :
getattr(self, attr)(m, k)
class SpyKS (Spy) :
def can_decrypt (self, message, knowledge) :
try :
if message[1][0] == "priv" :
knowledge.add(("pub", message[1][1]))
except Exception :
pass
return Spy.can_decrypt(self, message, knowledge)
if __name__ == "__main__" :
import doctest
doctest.testmod(optionflags=doctest.ELLIPSIS)
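The key-inversion rule used by ``can_decrypt`` above can be sketched
standalone (``invert`` is a hypothetical helper, not part of the
module): a ``('crypt', key, ...)`` message can be opened iff the
inverse of ``key`` belongs to the attacker's knowledge.

```python
# hypothetical helper (not part of dolev_yao): compute the key whose
# knowledge allows decrypting a message encrypted with `key`
def invert(key):
    tag, owner = key
    return {"pub": ("priv", owner),   # public-key encryption
            "priv": ("pub", owner),   # signature-like encryption
            "secret": key}[tag]       # symmetric encryption

knowledge = {("priv", 1), ("secret", 7)}
assert invert(("pub", 1)) in knowledge       # can decrypt
assert invert(("secret", 7)) in knowledge    # can decrypt
assert invert(("pub", 2)) not in knowledge   # cannot decrypt
```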
# communication network
buffer nw : object = ()
# implementation of nonces and Dolev-Yao attacker
from dolev_yao import *
net Alice (this, who: buffer) :
# protocol initiator
buffer peer : int = ()
buffer peer_nonce : Nonce = ()
[who?(B), peer+(B), nw+("crypt", ("pub", B), this, Nonce(this))]
; [nw-("crypt", ("pub", this), Na, Nb), peer_nonce+(Nb) if Na == Nonce(this)]
; [peer?(B), peer_nonce?(Nb), nw+("crypt", ("pub", B), Nb)]
net Bob (this) :
# protocol responder
buffer peer : int = ()
buffer peer_nonce : Nonce = ()
[nw-("crypt", ("pub", this), A, Na), peer+(A), peer_nonce+(Na)]
; [peer?(A), peer_nonce?(Na), nw+("crypt", ("pub", A), Na, Nonce(this))]
; [nw-("crypt", ("pub", this), Nb) if Nb == Nonce(this)]
net Mallory (this, init) :
# attacker
buffer knowledge : object = (this, Nonce(this), ("priv", this)) + init
# Dolev-Yao attacker, bound by protocol signature
buffer spy : object = Spy(("crypt", ("pub", int), int, Nonce),
("crypt", ("pub", int), Nonce, Nonce),
("crypt", ("pub", int), Nonce))
# capture one message and learn from it
([spy?(s), nw-(m), knowledge>>(k), knowledge<<(s.learn(m, k))]
# lose the message or inject another one (maybe the same)
; ([True] + [spy?(s), knowledge?(x), nw+(x) if s.message(x)]))
* [False]
# Alice will contact one of these agents
buffer agents : int = 2, 3
# main processes, with friendly names
alice::Alice(1, agents)
| bob::Bob(2)
| spy::Mallory(3, (1, ("pub", 1), 2, ("pub", 2)))
import snakes.plugins
snakes.plugins.load("status", "snakes.nets", "nets")
from nets import *
from dolev_yao import Nonce
ns = loads(",ns.pnml")
states = StateGraph(ns)
for s in states :
m = states.net.get_marking()
# skip non final markings
if "bob.x" not in m or "alice.x" not in m :
continue
# get Alice's and Bob's peers ids
bp = list(m["bob.peer"])[0]
ap = list(m["alice.peer"])[0]
# violation of mutual authentication
if bp == 1 and ap != 2 :
print(s, "A(1) <=> %s ; B(2) <=> %s" % (ap, bp))
print(m)
print(len(states), "states")
# get BlackToken
#from snakes.nets import *
buffer fork1 : BlackToken = dot
buffer fork2 : BlackToken = dot
buffer fork3 : BlackToken = dot
# buffer parameters have to be declared as such
net philo (left: buffer, right: buffer):
buffer eating : BlackToken = ()
([left-(dot), right-(dot), eating+(dot)]
; [left+(dot), right+(dot), eating-(dot)])
* [False]
philo(fork1, fork2)
| philo(fork2, fork3)
| philo(fork3, fork1)
# shared buffer between producers and consumers
buffer bag : int = ()
net prod () :
# produces 10 tokens: 0..9 in bag
buffer count : int = 0
[count-(x), count+(x+1), bag+(x) if x < 10] * [count-(x) if x == 10]
net odd () :
# consumes odd tokens in bag
[bag-(x) if (x % 2) == 1] * [False]
net even () :
# consumes even tokens in bag
[bag-(x) if (x % 2) == 0] * [False]
# main process with one instance of each net
odd() | even() | prod()
# symbols
symbol RED, GREEN, UP, DOWN, OPEN, MOVING, CLOSED
# states of the gate
typedef gatestate : enum(OPEN, MOVING, CLOSED)
# stores the light state
buffer light : enum(RED, GREEN) = GREEN
# commands sent by the track to the gate
buffer command : enum(UP, DOWN) = ()
net gate () :
# a pair of gates
buffer state : gatestate = OPEN
([command-(DOWN), state-(OPEN), state+(MOVING)] ;
[state-(MOVING), state+(CLOSED), light-(RED), light+(GREEN)] ;
[command-(UP), state-(CLOSED), state+(MOVING)] ;
[state-(MOVING), state+(OPEN)])
* [False]
net track () :
# a track with trains passing on it
buffer crossing : bool = False
([command+(DOWN), light-(GREEN), light+(RED)] ;
[light?(GREEN), crossing-(False), crossing+(True)] ;
[crossing-(True), crossing+(False), command+(UP)])
* [False]
# main process
gate() | track()
import sys
import snakes.plugins
snakes.plugins.load("gv", "snakes.nets", "snk")
from snk import *
n = loads(sys.argv[1])
g = StateGraph(n)
for s in g :
m = g.net.get_marking()
# safety property: train present => gates closed
if ("train().crossing" in m
and True in m["train().crossing"]
and "closed" not in m["gate().state"]) :
print("%s %s" % (s, m))
print("checked %s states" % len(g))
g.draw(sys.argv[1].rsplit(".", 1)[0] + "-states.png")
The plugin 'queries' introduces two new classes:
- Query allows describing and executing various kinds of queries,
which can be serialized to PNML in order to be exchanged with
another program. SNAKES uses an extension of PNML; for a complete
list of SNAKES' PNML tags, see 'snakes-pnml.txt'.
- UDPServer is a sample server over UDP that handles a limited number
of simple queries. However, these queries can be nested, in the
client program as well as in the communication with the server, so
the range of possibilities is unlimited and very complex queries
may be constructed from those simple ones.
A server program is provided in 'utils/query/snkd.py' and is general
and robust enough to be used in real cases (bearing in mind that it
only implements the limited set of queries described above). As it works in a
disconnected mode (UDP), this server does not try to make any
difference between possibly several clients. This may result in data
sharing or conflicts, which makes this server more suitable to a local
use only. Start the server with option '-h' to have details about its
command line.
A sample client is also provided in order to allow experimenting with
the server. (Notice that since the client is programmed in Python, it
could use SNAKES directly, which explains why it will never be made
more sophisticated.) It takes the form of a shell-like program that
accepts commands of the form "? command(param, ...)" where "? " is the
prompt.
First come the local commands, i.e., commands that do not send any
data to the server but are handled locally:
? help()
list available commands
? help(command)
print help about the given command
? quit()
exits the client; end-of-file or ^C may be used equivalently
? load(path)
loads a PNML file and returns the object it represents. This is
useful for instance to load a Petri net on the server as in:
"? set('net', load('mynet.pnml'))"
? show(obj)
prints the PNML representation of 'obj'; this may be useful for
instance to see how a query is translated, as in:
"? show(set('x', [1, 2, 'hello']))"
? verbose(mode)
turns on (mode=True), turns off (mode=False) or toggles (no mode
given) the printing of queries before they are sent to the server
Then come the commands that actually generate queries; there are only
four of them:
? set('name', value)
assigns value to 'name', which is equivalent to the Python statement
"name=value". 'value' may be any Python expression. It is important
to notice that 'name' is assigned on the server side, in an
environment that initially contains Python's builtins and the content
of the 'operator' and 'snakes.nets' modules (the latter being extended
by all the plugins loaded before 'query').
? get('name')
returns the value previously assigned to 'name'.
? delete('name')
Equivalent to the Python statement "del name".
? call(obj, ...)
equivalent to the Python statement "obj(...)". 'obj' may be a name
or an access to an object like 'x' or 'x.method', or even the
result from a nested query (see examples below).
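For illustration, the query that "? set('x', 1)" sends can be built
with Python's standard XML library (a sketch based on the transcripts
of this document, not the client's actual code):

```python
# sketch: build the PNML query corresponding to "? set('x', 1)"
import xml.etree.ElementTree as ET

pnml = ET.Element("pnml")
query = ET.SubElement(pnml, "query", name="set")
for typename, text in [("str", "x"), ("int", "1")]:
    argument = ET.SubElement(query, "argument")
    obj = ET.SubElement(argument, "object", type=typename)
    obj.text = text
data = ET.tostring(pnml)
```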
##
## Answers
##
In case of a success with no return value, the answer is:
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<answer status="ok"/>
</pnml>
If there is a return value, it is given as a PNML sub-tree of tag
<answer>. For instance:
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<answer status="ok">
<object type="str">hello world!</object>
</answer>
</pnml>
If an error occurs during the handling of the query, an answer with
status "error" is returned. The data in tag <answer> is the error
message, and the tag has an attribute 'error' that is the name of the
caught exception. For instance:
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<answer error="ExceptionName" status="error">Exception message</answer>
</pnml>
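A client can handle both answer shapes with a few lines of Python (a
sketch following the conventions above, not the sample client's code):

```python
# sketch: decode an <answer>, raising on status="error"
import xml.etree.ElementTree as ET

def parse_answer(text):
    answer = ET.fromstring(text).find("answer")
    if answer.get("status") == "error":
        raise RuntimeError("%s: %s" % (answer.get("error"),
                                       (answer.text or "").strip()))
    return answer

ok = parse_answer('<pnml><answer status="ok"/></pnml>')
assert ok.get("status") == "ok"
```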
##
## Queries
##
Query arguments are passed as tags <argument> nested in <query>; the
value of each argument is encoded in PNML using the tags presented in
'snakes-pnml.txt'. The following is a copy/paste from a snkc session,
interleaved with comments.
First, we turn on the verbose mode.
? verbose()
dump of queries enabled
The first query assigns to 'x' a list composed of an integer, a float
and a string. This illustrates how these types are encoded.
? set('x', [1, 3.14, 'hello'])
# query to localhost:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<query name="set">
<argument>
<object type="str">x</object>
</argument>
<argument>
<object type="list">
<object type="int">1</object>
<object type="float">3.14</object>
<object type="str">hello</object>
</object>
</argument>
</query>
</pnml>
# answer from 127.0.0.1:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<answer status="ok"/>
</pnml>
If we get 'x' back, the same encoding is used again but in the other
direction.
? get('x')
# query to localhost:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<query name="get">
<argument>
<object type="str">
x
</object>
</argument>
</query>
</pnml>
# answer from 127.0.0.1:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<answer status="ok">
<object type="list">
<object type="int">1</object>
<object type="float">3.14</object>
<object type="str">hello</object>
</object>
</answer>
</pnml>
'x' can be removed, after which trying to get it again results in an
error.
? delete('x')
# query to localhost:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<query name="del">
<argument>
<object type="str">x</object>
</argument>
</query>
</pnml>
# answer from 127.0.0.1:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<answer status="ok"/>
</pnml>
? get('x')
# query to localhost:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<query name="get">
<argument>
<object type="str">x</object>
</argument>
</query>
</pnml>
# answer from 127.0.0.1:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<answer error="AttributeError" status="error">
'module' object has no attribute 'x'
</answer>
</pnml>
We set a new name 's' that is a string in order to illustrate method
calls.
? set('s', 'hello world!')
# query to localhost:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<query name="set">
<argument>
<object type="str">s</object>
</argument>
<argument>
<object type="str">hello world!</object>
</argument>
</query>
</pnml>
# answer from 127.0.0.1:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<answer status="ok"/>
</pnml>
The following calls the method 'replace' of 's' passing it two string
arguments.
? call('s.replace', 'o', '_')
# query to localhost:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<query name="call">
<argument>
<object type="str">s.replace</object>
</argument>
<argument>
<object type="str">o</object>
</argument>
<argument>
<object type="str">_</object>
</argument>
</query>
</pnml>
# answer from 127.0.0.1:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<answer status="ok">
<object type="str">hell_ w_rld!</object>
</answer>
</pnml>
In order to cascade calls, that is, to call a method of the object
returned by the first call, we can use the function 'getattr' that
returns a named attribute of an object. Here, we get and call the
'split' method from the string returned by 's.replace'. This is
exactly the same as "s.replace('o', '_').split()" in Python.
? call(call('getattr', call('s.replace', 'o', '_'), 'split'))
# query to localhost:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<query name="call">
<argument>
<query name="call">
<argument>
<object type="str">getattr</object>
</argument>
<argument>
<query name="call">
<argument>
<object type="str">s.replace</object>
</argument>
<argument>
<object type="str">o</object>
</argument>
<argument>
<object type="str">_</object>
</argument>
</query>
</argument>
<argument>
<object type="str">split</object>
</argument>
</query>
</argument>
</query>
</pnml>
# answer from 127.0.0.1:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<answer status="ok">
<object type="list">
<object type="str">hell_</object>
<object type="str">w_rld!</object>
</object>
</answer>
</pnml>
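Writing such nested queries by hand quickly becomes tedious, so a client may prefer to generate them. The following sketch builds the query above with Python's standard xml.etree module; the helper names obj, query and pnml are ours, not part of SNAKES:

```python
import xml.etree.ElementTree as ET

def obj(value):
    """Serialize a basic Python value as an <object> element."""
    elt = ET.Element("object", type=type(value).__name__)
    elt.text = str(value)
    return elt

def query(name, *children):
    """Build a <query>, wrapping each child (an <object> or a nested
    <query>) in an <argument> element."""
    q = ET.Element("query", name=name)
    for child in children:
        ET.SubElement(q, "argument").append(child)
    return q

def pnml(element):
    """Wrap an element in <pnml> and render the query as text."""
    root = ET.Element("pnml")
    root.append(element)
    return ET.tostring(root, encoding="unicode")

# s.replace('o', '_').split() as the nested query shown above
q = query("call",
          query("call",
                obj("getattr"),
                query("call", obj("s.replace"), obj("o"), obj("_")),
                obj("split")))
print(pnml(q))
```

Because queries nest uniformly, the same three helpers are enough to build every query shown in this document.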
Let's now apply these techniques to work with Petri nets. First we
load a net from a PNML file. In a real setting, the command 'load'
from snkc is not available, but it is enough to read the PNML file,
extract its '<net>...</net>' part and paste it in the middle of the
'set' request.
? set('n', load('simple-coloured.pnml'))
# query to localhost:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<query name="set">
<argument>
<object type="str">n</object>
</argument>
<argument>
<net id="mynet">
<place id="p2">
<type domain="universal"/>
<initialMarking>
<multiset/>
</initialMarking>
</place>
<place id="p1">
<type domain="universal"/>
<initialMarking>
<multiset>
<item>
<value>
<object type="int">1</object>
</value>
<multiplicity>1</multiplicity>
</item>
<item>
<value>
<object type="int">2</object>
</value>
<multiplicity>1</multiplicity>
</item>
</multiset>
</initialMarking>
</place>
<transition id="t"/>
<arc id="p1:t" source="p1" target="t">
<inscription>
<variable>x</variable>
</inscription>
</arc>
<arc id="t:p2" source="t" target="p2">
<inscription>
<expression>x+1</expression>
</inscription>
</arc>
</net>
</argument>
</query>
</pnml>
# answer from 127.0.0.1:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<answer status="ok"/>
</pnml>
The marking of the net can then be retrieved. Only place 'p1' is
marked by the two integer-valued tokens 1 and 2.
? call('n.get_marking')
# query to localhost:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<query name="call">
<argument>
<object type="str">n.get_marking</object>
</argument>
</query>
</pnml>
# answer from 127.0.0.1:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<answer status="ok">
<marking>
<place id="p1">
<tokens>
<multiset>
<item>
<value>
<object type="int">1</object>
</value>
<multiplicity>1</multiplicity>
</item>
<item>
<value>
<object type="int">2</object>
</value>
<multiplicity>1</multiplicity>
</item>
</multiset>
</tokens>
</place>
</marking>
</answer>
</pnml>
Using the same technique as for emulating "s.replace().split()"
above, we can query the modes of transition 't' in net 'n'. We get in
return a list of two substitutions under which 't' can be fired.
? call(call('getattr', call('n.transition', 't'), 'modes'))
# query to localhost:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<query name="call">
<argument>
<query name="call">
<argument>
<object type="str">getattr</object>
</argument>
<argument>
<query name="call">
<argument>
<object type="str">n.transition</object>
</argument>
<argument>
<object type="str">t</object>
</argument>
</query>
</argument>
<argument>
<object type="str">modes</object>
</argument>
</query>
</argument>
</query>
</pnml>
# answer from 127.0.0.1:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<answer status="ok">
<object type="list">
<substitution>
<item>
<name>x</name>
<value>
<object type="int">1</object>
</value>
</item>
</substitution>
<substitution>
<item>
<name>x</name>
<value>
<object type="int">2</object>
</value>
</item>
</substitution>
</object>
</answer>
</pnml>
Instead of getting this list of modes, we could have saved it under
the name 's'. We just need to nest the above query inside a 'set'
query.
? set('s', call(call('getattr', call('n.transition', 't'), 'modes')))
# query to localhost:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<query name="set">
<argument>
<object type="str">s</object>
</argument>
<argument>
<query name="call">
<argument>
<query name="call">
<argument>
<object type="str">getattr</object>
</argument>
<argument>
<query name="call">
<argument>
<object type="str">n.transition</object>
</argument>
<argument>
<object type="str">t</object>
</argument>
</query>
</argument>
<argument>
<object type="str">modes</object>
</argument>
</query>
</argument>
</query>
</argument>
</query>
</pnml>
# answer from 127.0.0.1:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<answer status="ok"/>
</pnml>
In order to fire 't', we call its method 'fire' and pass it one of
the modes stored in 's'. Here, we use the function 'getitem' to
retrieve the first item of 's' (numbered 0). In a realistic example,
it would probably be simpler to parse and store the list of
substitutions on the client instead of storing it on the server.
? call(call('getattr', call('n.transition', 't'), 'fire'), call('getitem', get('s'), 0))
# query to localhost:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<query name="call">
<argument>
<query name="call">
<argument>
<object type="str">getattr</object>
</argument>
<argument>
<query name="call">
<argument>
<object type="str">n.transition</object>
</argument>
<argument>
<object type="str">t</object>
</argument>
</query>
</argument>
<argument>
<object type="str">fire</object>
</argument>
</query>
</argument>
<argument>
<query name="call">
<argument>
<object type="str">getitem</object>
</argument>
<argument>
<query name="get">
<argument>
<object type="str">s</object>
</argument>
</query>
</argument>
<argument>
<object type="int">0</object>
</argument>
</query>
</argument>
</query>
</pnml>
# answer from 127.0.0.1:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<answer status="ok"/>
</pnml>
As we can see now, both places are marked.
? call('n.get_marking')
# query to localhost:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<query name="call">
<argument>
<object type="str">n.get_marking</object>
</argument>
</query>
</pnml>
# answer from 127.0.0.1:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<answer status="ok">
<marking>
<place id="p2">
<tokens>
<multiset>
<item>
<value>
<object type="int">2</object>
</value>
<multiplicity>1</multiplicity>
</item>
</multiset>
</tokens>
</place>
<place id="p1">
<tokens>
<multiset>
<item>
<value>
<object type="int">2</object>
</value>
<multiplicity>1</multiplicity>
</item>
</multiset>
</tokens>
</place>
</marking>
</answer>
</pnml>
Then we can call 'fire' again, with the second available mode.
? call(call('getattr', call('n.transition', 't'), 'fire'), call('getitem', get('s'), 1))
# query to localhost:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<query name="call">
<argument>
<query name="call">
<argument>
<object type="str">getattr</object>
</argument>
<argument>
<query name="call">
<argument>
<object type="str">n.transition</object>
</argument>
<argument>
<object type="str">t</object>
</argument>
</query>
</argument>
<argument>
<object type="str">fire</object>
</argument>
</query>
</argument>
<argument>
<query name="call">
<argument>
<object type="str">getitem</object>
</argument>
<argument>
<query name="get">
<argument>
<object type="str">s</object>
</argument>
</query>
</argument>
<argument>
<object type="int">1</object>
</argument>
</query>
</argument>
</query>
</pnml>
# answer from 127.0.0.1:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<answer status="ok"/>
</pnml>
And now only 'p2' is marked.
? call('n.get_marking')
# query to localhost:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<query name="call">
<argument>
<object type="str">n.get_marking</object>
</argument>
</query>
</pnml>
# answer from 127.0.0.1:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<answer status="ok">
<marking>
<place id="p2">
<tokens>
<multiset>
<item>
<value>
<object type="int">2</object>
</value>
<multiplicity>1</multiplicity>
</item>
<item>
<value>
<object type="int">3</object>
</value>
<multiplicity>1</multiplicity>
</item>
</multiset>
</tokens>
</place>
</marking>
</answer>
</pnml>
Querying the modes of 't' now results in an empty list because there
are no tokens left in the input place of 't'.
? call(call('getattr', call('n.transition', 't'), 'modes'))
# query to localhost:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<query name="call">
<argument>
<query name="call">
<argument>
<object type="str">getattr</object>
</argument>
<argument>
<query name="call">
<argument>
<object type="str">n.transition</object>
</argument>
<argument>
<object type="str">t</object>
</argument>
</query>
</argument>
<argument>
<object type="str">modes</object>
</argument>
</query>
</argument>
</query>
</pnml>
# answer from 127.0.0.1:1234
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<answer status="ok">
<object type="list"/>
</answer>
</pnml>
These examples are intended to illustrate what _can_ be done, not
necessarily what _should_ be done. Here, we use snkc on the client
side, so we have no way to store information locally, nor to parse
the PNML we get from the server. So, when we need to record some
data, we keep it on the server side using a 'set' query. As a
consequence, we need quite complicated queries to extract the bits
of data we want to use; see for instance how complicated firing a
transition was.
In a realistic client, it is possible to store and manage some
information locally and thus avoid complex queries. It is even
possible to completely parse and interpret the PNML data received
from the server. Both extremes have pros and cons:
- Storing everything on the server simplifies processing for the
client; but it greatly increases the complexity of queries.
- Storing everything on the client requires parsing and interpreting
PNML data, and managing the stored information; but it simplifies
queries and may give more control over the amount of data exchanged.
Any intermediate position may be adopted: a client can partially
interpret PNML and store fragments of uninterpreted PNML text as
symbolic values. These fragments can then be inserted into queries
where they are required.
For instance, a client could parse the list of modes of a transition
up to the tag <substitution>, which is quite a simple task. Each mode
could then be saved locally as a fragment of PNML text
"<substitution>...</substitution>". Firing a transition would then
simply require inserting such a fragment at the right position in a
template query. For instance, let's run:
? show(call(call('getattr', call('n.transition', 't'), 'fire'), 'SUBST'))
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<query name="call">
<argument>
<query name="call">
<argument>
<object type="str">getattr</object>
</argument>
<argument>
<query name="call">
<argument>
<object type="str">n.transition</object>
</argument>
<argument>
<object type="str">t</object>
</argument>
</query>
</argument>
<argument>
<object type="str">fire</object>
</argument>
</query>
</argument>
<argument>
<object type="str">SUBST</object>
</argument>
</query>
</pnml>
Then, we just need to replace '<object type="str">SUBST</object>'
with the saved fragment '<substitution>...</substitution>' in order
to fire the transition with the chosen mode. A similar technique can
be applied in many situations. A lazy (but clever) approach would be
to prepare a series of template queries whose placeholders can be
substituted with fragments of PNML text retrieved from the server.
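Such a template-based client fits in a few lines of Python; the function fire_query and the placeholder convention below are our own illustrative choices, not part of SNAKES:

```python
# A template query to fire transition 't'; the placeholder below will
# be replaced by a saved '<substitution>...</substitution>' fragment.
PLACEHOLDER = '<object type="str">SUBST</object>'
TEMPLATE = ('<?xml version="1.0" encoding="utf-8"?>'
            '<pnml><query name="call">'
            '<argument><query name="call">'
            '<argument><object type="str">getattr</object></argument>'
            '<argument><query name="call">'
            '<argument><object type="str">n.transition</object></argument>'
            '<argument><object type="str">t</object></argument>'
            '</query></argument>'
            '<argument><object type="str">fire</object></argument>'
            '</query></argument>'
            '<argument>' + PLACEHOLDER + '</argument>'
            '</query></pnml>')

def fire_query(mode_fragment):
    """Return a ready-to-send query firing 't' with the given mode,
    passed as a '<substitution>...</substitution>' PNML fragment."""
    return TEMPLATE.replace(PLACEHOLDER, mode_fragment)

mode = ('<substitution><item><name>x</name>'
        '<value><object type="int">1</object></value></item></substitution>')
print(fire_query(mode))
```

The fragment itself never needs to be interpreted: it is copied verbatim from a previous answer into the outgoing query.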
##
## Keyword arguments
##
A <query> may also accept keyword arguments, like functions in
Python, although currently no query expects any. To pass a keyword
argument, use a tag <keyword> with an attribute 'name' that stores
the keyword name and a child tag that stores the keyword value. For
instance, a Python call "example('x', 1, foo=5, bar='hello')" would
translate to the query:
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<query name="example">
<argument>
<object type="str">
x
</object>
</argument>
<argument>
<object type="int">
1
</object>
</argument>
<keyword name="foo">
<object type="int">
5
</object>
</keyword>
<keyword name="bar">
<object type="str">
hello
</object>
</keyword>
</query>
</pnml>
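Assuming Python's standard xml.etree module, a client could generate such keyword queries as follows; the helpers obj and query are ours, and 'example' is, as said above, not a real query:

```python
import xml.etree.ElementTree as ET

def obj(value):
    """Serialize a basic Python value as an <object> element."""
    elt = ET.Element("object", type=type(value).__name__)
    elt.text = str(value)
    return elt

def query(name, *args, **keywords):
    """Build a <query> with positional <argument> children followed
    by one <keyword> child per keyword argument."""
    q = ET.Element("query", name=name)
    for value in args:
        ET.SubElement(q, "argument").append(obj(value))
    for key, value in keywords.items():
        ET.SubElement(q, "keyword", name=key).append(obj(value))
    return q

# the Python call example('x', 1, foo=5, bar='hello') as a query
print(ET.tostring(query("example", "x", 1, foo=5, bar="hello"),
                  encoding="unicode"))
```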
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<net id="mynet">
<place id="p2">
<type domain="universal"/>
<initialMarking>
<multiset/>
</initialMarking>
</place>
<place id="p1">
<type domain="universal"/>
<initialMarking>
<multiset>
<item>
<value>
<object type="int">
1
</object>
</value>
<multiplicity>
1
</multiplicity>
</item>
<item>
<value>
<object type="int">
2
</object>
</value>
<multiplicity>
1
</multiplicity>
</item>
</multiset>
</initialMarking>
</place>
<transition id="t"/>
<arc id="p1:t" source="p1" target="t">
<inscription>
<variable>
x
</variable>
</inscription>
</arc>
<arc id="t:p2" source="t" target="p2">
<inscription>
<expression>
x+1
</expression>
</inscription>
</arc>
</net>
</pnml>
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<net id="Simple P/T net">
<place id="p2">
<initialMarking>
<text>
0
</text>
</initialMarking>
</place>
<place id="p1">
<initialMarking>
<text>
1
</text>
</initialMarking>
</place>
<transition id="t2"/>
<transition id="t1"/>
<arc id="p2:t2" source="p2" target="t2">
<inscription>
<text>
1
</text>
</inscription>
</arc>
<arc id="t2:p1" source="t2" target="p1">
<inscription>
<text>
1
</text>
</inscription>
</arc>
<arc id="p1:t1" source="p1" target="t1">
<inscription>
<text>
1
</text>
</inscription>
</arc>
<arc id="t1:p2" source="t1" target="p2">
<inscription>
<text>
1
</text>
</inscription>
</arc>
</net>
</pnml>
This document describes the PNML extensions used by SNAKES. See
http://www.pnml.org for the definition of standard PNML. In the
following, we use a simplified RELAX NG Compact Syntax (see
http://relaxng.org). For each element, we give the Python class that
implements it (see the API reference for details), together with its
attributes and child elements.
##
## Petri net elements
##
element pnml {
element net { ... }*
}
# Several nets may be provided in one PNML file.
element net { # class snakes.nets.PetriNet
attribute id { text } # identity of the net
element place { ... }* # places in the net
element transition { ... }* # transitions in the net
element arc { ... }* # arcs in the net
}
element place { # class snakes.nets.Place
attribute id { text } # identity of the place
element type { ... }? # for a coloured place
element initialMarking { # marking
( element text { num:integer } # for a P/T place
| element multiset { ... } ) # for a coloured place
}?
}
# A P/T place is identified in SNAKES by the fact that it has the
# typing constraint "tBlackToken". In that case, its initial marking
# is given as the number of black tokens held by the place, which
# respects the PNML standard. Otherwise, the place is considered
# coloured: its type constraint is given by an element <type> and its
# marking by an element <multiset>.
element transition { # class snakes.nets.Transition
attribute id { text } # identity of the transition
element guard {
element expression { ... } # guard if not True
}?
}
# When the guard is True, as in P/T nets, it is not saved in the
# PNML so that the result respects the PNML standard. Otherwise, the
# guard is saved inside a tag <guard>.
element declare {
statement
}
# One <declare> tag is added to <net> for each Python statement run
# using PetriNet.declare() for this net.
element global {
attribute name { name:string } # object's name
element * { ... } # value
}
# One <global> is added to <net> for each entry in net.globals that
# is not obtained through the use of PetriNet.declare().
##
## Arcs
##
element arc {
attribute id { text } # identity of the arc
attribute source { text } # identity of source node
attribute target { text } # identity of target node
element inscription {
element text { # for a P/T net
weight:int
}?
element * { ... }? # for a coloured net
}
}
# When the net is a P/T net, a weight is given for the arcs, which
# respects the PNML standard. Otherwise, the inscription is one of the
# possible inscriptions given below.
element value { # class snakes.nets.Value
element object { ... } # value transported on an arc
}
element variable { # class snakes.nets.Variable
name:string # name of the variable
}
element expression { # class snakes.nets.Expression
expr:string # text of the expression (Python code)
}
element test { # class snakes.nets.Test
( element value { ... }
| element variable { ... }
| element expression { ... }
| element multiarc { ... }
| element tuple { ... } ) # tested annotation
}
element multiarc { # class snakes.nets.MultiArc
( element value { ... }
| element variable { ... }
| element expression { ... }
| element multiarc { ... }
| element tuple { ... } )* # list of encapsulated annotations
}
element tuple { # class snakes.nets.Tuple
( element value { ... }
| element variable { ... }
| element expression { ... }
| element multiarc { ... }
| element tuple { ... } )* # list of encapsulated annotations
}
##
## Auxiliary tags
##
element token { # class snakes.nets.BlackToken
} # a standard black token
element marking { # class snakes.nets.Marking
element place { # one for each marked place
attribute id { text } # identity of the place
element tokens {
element multiset { ... } # marking of the place
}
}*
}
element multiset { # class snakes.data.MultiSet
element item { # items in the multiset
element value {
element object { ... } # value of one item
}
element multiplicity { # number of times it is repeated
num:integer
}
}*
}
element substitution { # class snakes.data.Substitution
element item { # mapped variables
element name {
name:string # variable name
}
element value {
element object { ... } # associated value
}
}*
}
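On the client side, a <substitution> list like the one returned by the 'modes' query earlier can be decoded with a short parser; this sketch uses only Python's standard xml.etree module and handles only 'int' objects plus strings:

```python
import xml.etree.ElementTree as ET

ANSWER = """<?xml version="1.0" encoding="utf-8"?>
<pnml>
 <answer status="ok">
  <object type="list">
   <substitution>
    <item><name>x</name><value><object type="int">1</object></value></item>
   </substitution>
   <substitution>
    <item><name>x</name><value><object type="int">2</object></value></item>
   </substitution>
  </object>
 </answer>
</pnml>"""

def modes(answer_text):
    """Extract each <substitution> as a {variable: value} dict,
    converting <object type="int"> values to Python ints and keeping
    other values as text."""
    root = ET.fromstring(answer_text)
    result = []
    for sub in root.iter("substitution"):
        mode = {}
        for item in sub.iter("item"):
            name = item.findtext("name")
            value = item.find("value/object")
            mode[name] = (int(value.text) if value.get("type") == "int"
                          else value.text)
        result.append(mode)
    return result

print(modes(ANSWER))   # → [{'x': 1}, {'x': 2}]
```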
##
## Python objects
##
element object {
attribute type { ... } # type of the object
... # depends on attribute 'type'
}
element object {
attribute type { "int" } # object is an integer
value:integer
}
element object {
attribute type { "float" } # object is a float
value:float
}
element object {
attribute type { "str" } # object is a string
value:string
}
element object {
attribute type { "bool" } # object is a Boolean
( "True" | "False" )
}
element object {
attribute type { "list" } # object is a list
element object { ... }* # list items
}
element object {
attribute type { "tuple" } # object is a tuple
element object { ... }* # tuple items
}
element object {
attribute type { "set" } # object is a set
element object { ... }* # set items
}
element object {
attribute type { "method" } # object is a class method
attribute name { path:string} # path to method, including module name
}
element object {
attribute type { "function" } # object is a function
attribute name { path:string} # path to function, including module name
}
element object {
attribute type { "class" } # object is a class
attribute name { path:string} # path to class, including module name
}
element object {
attribute type { "module" } # object is a module
attribute name { path:string} # path to module
}
element object {
attribute type { "pickle" } # object that cannot be handled symbolically
data:string # pickled object
}
element object {
attribute type { "NoneType" } # object is the value None
}
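The serialization rules above for basic Python objects can be sketched as follows; this is an illustration using Python's standard xml.etree module, not SNAKES' actual implementation (which also handles methods, classes, pickled objects, etc.):

```python
import xml.etree.ElementTree as ET

def to_object(value):
    """Serialize a basic Python value as an <object> element with a
    'type' attribute, following the rules listed above."""
    elt = ET.Element("object", type=type(value).__name__)
    if value is None:
        pass                           # <object type="NoneType"/> is empty
    elif isinstance(value, (list, tuple, set)):
        for item in value:             # containers nest their items
            elt.append(to_object(item))
    else:
        elt.text = str(value)          # int, float, str, bool as text
    return elt

print(ET.tostring(to_object([1, "a", True, None]), encoding="unicode"))
```

Note that type(value).__name__ directly yields the names listed above ('int', 'str', 'bool', 'NoneType', ...), which is why the encoding is so compact.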
##
## Typing constraints
##
element type {
attribute domain { text } # kind of typing constraint
... # depending on domain
}
# Module snakes.typing defines a full algebra of types, all saved
# to an element <type>. The attribute "domain" is then the key to
# correctly decompose a type. Below is a list of the different
# domains, with the structure of the corresponding type
element type { # class snakes.typing._And
attribute domain { "intersection" } # intersection of two types
element left { # left operand type
element type { ... }
}
element right { # right operand type
element type { ... }
}
}
element type { # class snakes.typing._Or
attribute domain { "union" } # union of two types
element left { # left operand type
element type { ... }
}
element right { # right operand type
element type { ... }
}
}
element type { # class snakes.typing._Sub
attribute domain { "difference" } # difference of two types
element left { # left operand type
element type { ... }
}
element right { # right operand type
element type { ... }
}
}
element type { # class snakes.typing._Xor
attribute domain { "xor" } # exclusive union of two types
element left { # left operand type
element type { ... }
}
element right { # right operand type
element type { ... }
}
}
element type { # class snakes.typing._Invert
attribute domain { "complement" } # complement of a type
element type { ... } # complemented type
}
element type { # class snakes.typing._All
attribute domain { "universal" } # type with all possible values
}
element type { # class snakes.typing._Nothing
attribute domain { "empty" } # type with no value
}
element type { # class snakes.typing.Instance
attribute domain { "instance" } # type whose values are instances
element object { ... } # class of the instances
}
element type { # class snakes.typing.TypeCheck
attribute domain { "checker" } # type defined by a Boolean function
element checker {
element object { ... } # the Boolean function
}
element iterator {
element object { ... } # a function to enumerate the values
}
}
element type { # class snakes.typing.OneOf
attribute domain { "enum" } # enumerated type
element object { ... }* # values in the type
}
element type { # class snakes.typing.Collection
attribute domain { "collection" } # flat container type (list, set, ...)
element container {
element type { ... } # type of the container
}
element items {
element type { ... } # type of the contained items
}
element min {
element object { ... } # smallest allowed number of elements
}
element max {
element object { ... } # biggest allowed number of elements
}
}
element type { # class snakes.typing.Mapping
attribute domain { "mapping" } # dictionary-like container type
element container {
element type { ... } # type of the container
}
element keys {
element type { ... } # type of the contained keys
}
element items {
element type { ... } # type of the contained items
}
}
element type { # class snakes.typing.Range
attribute domain { "range" } # values in a range
element min {
element object { ... } # smallest allowed value
}
element max {
element object { ... } # smallest excluded value
}
element step {
element object { ... } # step between consecutive values
}
}
element type { # class snakes.typing.Greater
attribute domain { "greater" } # values bigger than a given one
element object { ... } # biggest excluded value
}
element type { # class snakes.typing.GreaterOrEqual
attribute domain { "greatereq" } # values not smaller than a given one
element object { ... } # smallest allowed value
}
element type { # class snakes.typing.Less
attribute domain { "less" } # values smaller than a given one
element object { ... } # smallest excluded value
}
element type { # class snakes.typing.LessOrEqual
attribute domain { "lesseq" } # values not bigger than a given one
element object { ... } # biggest allowed value
}
element type { # class snakes.typing.CrossProduct
attribute domain { "crossproduct" } # cross product of types
element type { ... }* # crossed types
}
##
## Additional elements from plugins
##
element snakes {
attribute version { ... } # SNAKES' version that produced this PNML
element plugins {
element object {
attribute type { "tuple" }
element object { # the base module 'snakes.nets' is listed also
attribute type { "str" }
"snakes.nets"
}
element object { # list of plugins
attribute type { "str" }
plugin:string
}*
}
}
}
# Added to another element in order to specify the plugins necessary
# to load the element properly. <snakes><plugins> tags can be added
# at any position in the PNML, but they are then used globally for
# the whole <pnml> tree.
element status { # class snakes.plugins.status.Status
element name {
name:string # status name
}
element value {
element object { ... } # attached value
}
}
# Added to <place> and <transition> when plugin 'status' is loaded.
element multiaction { # class snakes.plugins.synchro.MultiAction
element action { ... }* # actions in the multi-action
}
# Added to <transition> when plugin 'synchro' is loaded.
element action { # class snakes.plugins.synchro.Action
attribute name { name:string } # action name
attribute send { send:boolean } # send/receive action
element * { ... }* # action parameters
}
element clusters { # class snakes.plugins.clusters.Cluster
element node { id:string }* # nodes at this level
element clusters { ... }* # children clusters
}
# Added to <net> when plugin 'clusters' is loaded.
element label { # class snakes.plugins.labels
attribute name { name:string} # label name
element object { ... } # label content
}
# Added to <net>, <place> and <transition> when plugin 'labels' is
# loaded.
element graphics {
element position { # node position
attribute x { xpos:(integer|float) } # x coordinate
attribute y { ypos:(integer|float) } # y coordinate
}
}
# Added to <place> and <transition> when plugin 'pos' is loaded.
element query { # snakes.plugins.query.Query
attribute name { name:text} # name of the query
element argument { ... }* # arguments of the query
element keyword { # keyword arguments
attribute name { text } # keyword name
element * { ... } # associated value
}*
}
# See file 'queries.txt'
element answer {
attribute status { "ok" | "error" } # status of the answer
message:string? # error message if status=error
element * { ... }? # returned value if status=ok
}
# See file 'queries.txt'
##
## Abstract syntax tree
##
# The following specifies the PNML translation of abstract syntax
# trees for ABCD programs, as inserted into <net> by the ABCD
# compiler.
element ast { # class snakes.compyler.Tree
attribute name { text } # node nature
attribute lineno { num:integer }? # line number in source code
attribute * { ... }* # depending on name
element * { ... }* # depending on name
}
# <ast> trees are obtained by direct translation of
# snakes.compyler.Tree instances: the tree name is mapped to an
# attribute 'name', sub-trees are mapped to child tags, and
# attributes are mapped either to attributes or to child tags
# <attribute> when they cannot be represented as a simple string. The
# following lists the most common patterns. The attribute 'lineno' is
# omitted in the following.
element ast {
attribute name { "abcd" } # start symbol
element ast { # buffer declarations
attribute name { "buffer" }
...
}*
element ast { # net declarations
attribute name { "net" }
...
}*
element attribute { # ABCD expression
attribute name { "expr" }
...
}
}
# An ABCD program is composed of optional buffer and net declarations,
# followed by an ABCD expression.
element ast {
attribute name { "buffer" } # buffer declaration
attribute ident { text:string } # buffer's name
element attribute {
attribute name { "type" } # buffer's type
...
}
element attribute { # buffer's initial value
attribute name { "init" }
...
}
}
element ast {
attribute name { "net" } # net declaration
attribute ident { text:string } # net's name
element ast { # net's body
attribute name { "abcd" }
...
}
}
element ast {
attribute name { "expr" }
element ast {
attribute name { ( "parallel" # binary composition
| "loop"
| "sequence"
| "choice" ) }
element ast { ... } # first operand
element ast { ... } # second operand
}
}
element ast {
attribute name { "expr" }
element ast {
attribute name { "scope" } # name hiding
element ast { ... } # first operand
element object {
attribute type { "str" } # hidden name
name:string
}
}
}
element ast {
attribute name { "action" } # basic action
attribute net { ( name:string | "None" ) } # reference to a net
attribute test { ( "False" | "True" ) }? # trivial condition
element ast {
attribute name { "access" } # buffer access
attribute buffer { name:string } # buffer's name
attribute mode { ("?" | "+" | "-") } # test, put or get
element attribute {
attribute name { "param" } # access' parameter
element python { ... } # Python's AST of the parameter
}
}*
element attribute { # non-trivial condition
attribute name { "test" }
element python { ... } # Python's AST of the condition
}?
}
# A basic action can be the name of a net or the trivial action
# '[False]'. Otherwise, it is composed of a possibly empty list of
# buffer accesses and a condition. If the condition is 'True', it is
# stored as the attribute test="True", but more complex conditions
# are stored as a Python AST in a child tag <attribute name="test">.
# Each access is composed of a buffer name, an access mode and a
# parameter. The next elements are children of a tag <attribute
# name="type"> and define the type of a buffer.
element ast {
attribute name { "name" } # Python built-in type
attribute value { type:string } # name of the type
}
element ast {
attribute name { "enum" } # enumerated type
element ast {
attribute name { "values" }
element python { # Python's AST node in this case
attribute class { "Tuple" } # is a Tuple of Const
element attribute {
attribute name { "nodes" }
values # eg, "[Const('a'), Const('b')]"
}
}
}
}
element ast {
attribute name { "union" } # union of two types
element ast { ... } # first type
element ast { ... } # second type
}
element ast {
attribute name { "intersection" } # intersection of two types
element ast { ... } # first type
element ast { ... } # second type
}
element ast {
attribute name { "list" } # list type
element ast { ... } # items' type
}
element ast {
attribute name { "dict" } # dict type
element ast { ... } # keys' type
element ast { ... } # values' type
}
element ast {
attribute name { "set" } # set type
element ast { ... } # items' type
}
# A Python AST is serialized as a <python> tag, see the section 31.3.1
# (AST Nodes) of the Python Library Reference for a list of AST nodes.
element python {
attribute * { text }* # direct mapping of simple attributes
element attribute { # complex attributes are serialized
attribute name { text } # in a tag <object>
element object { ... }
}*
data? # when none of the above methods works,
# 'repr' is used to convert the AST to text
}
SNAKES: a tutorial
==================
////////////////////////////////////////////////////////////////
This file is formatted in order to be processed by AsciiDoc
(http://www.methods.co.nz/asciidoc). It will be more comfortable
to render it or to read the HTML version available at:
http://www.univ-paris12.fr/pommereau/soft/snakes/tutorial.html
////////////////////////////////////////////////////////////////
The first example is a simple coloured Petri net with a single
transition that increments an integer-valued token (so 0 is the
_value_ of the token, not a number of tokens) held by a single place;
the incrementation stops when the value reaches $5$ thanks to a guard
on the transition.
image:tutorial.png[]
To define this net, we must load SNAKES, define a Petri net (let's
call it 'First net'), add the place (called 'p'), add the transition
(called 't') and then connect them with arcs.
[python]
^^^^^^^^^^^^^^^^^^^^^^
>>> from snakes.nets import *
>>> n1 = PetriNet('First net')
>>> n1.add_place(Place('p', [0]))
>>> n1.add_transition(Transition('t', Expression('x<5')))
>>> n1.add_input('p', 't', Variable('x'))
>>> n1.add_output('p', 't', Expression('x+1'))
^^^^^^^^^^^^^^^^^^^^^^
On the third line, a place is added to the net, which could
equivalently be written as:
//skip
[python]
^^^^^^^^^^^^^^^^^^^^^^
>>> p = Place('p', [0])
>>> n1.add_place(p)
^^^^^^^^^^^^^^^^^^^^^^
However, having a variable for the place is not necessary as it can be
retrieved from $n1$ using its name with $n1.place('p')$.
The instruction $Place('p', [0])$ constructs a new instance of the
class $Place$, which expects the name of the place as its first
argument. The second argument is optional and gives the initial
marking of the place; it can be a list, set, tuple or multiset of
tokens (the class for multisets is defined in the module
$snakes.data$). A third optional argument is a constraint on the
tokens that the place can hold (also known as its type); the default
value allows any token to mark the place. The typing of places will
be detailed later on.
In order to build the transition, we create an instance of the class
$Transition$ whose constructor expects first the name of the
transition and then, optionally, its guard, which is true by default.
A guard is otherwise specified as $Expression('...')$ where $...$ is
an arbitrary Python expression, like $Expression('x<5')$ in our
example. We will detail later on how this expression is evaluated.
Arcs are added using one of the methods $add_input$ or $add_output$ of
$PetriNet$; both expect a place name, a transition name and the arc
annotation as arguments (always in this order). An input arc is
directed from a place toward a transition, an output arc goes out of a
transition; so arcs are considered from the point of view of the
transition to which they are connected. Valid arc annotations are:
values:: They are instances of the class $Value$ whose constructor
simply expects the value, which can be any Python object. For
instance, $Value(1)$ is the integer $1$.
variables:: These are names that are bound to token values when a
transition is executed. A variable is created by instantiating the
class $Variable$ whose constructor expects the name of the variable as
a Python string (valid names are those matching the Python regexp
$'[a-zA-Z]\w*'$). For instance, $Variable('x')$, $Variable('count')$
and $Variable('x_1')$ are valid but $Variable('x-1')$ and
$Variable('1x')$ are not.
expressions:: They are used to compute new values. An expression is an
instance of the class $Expression$ whose constructor expects any
Python expression as a string. How this expression is evaluated is
explained in the next section. In our example, $Expression('x+1')$ has
been used on the output arc.
tests:: The class $Test$ is used to implement a test arc: it
encapsulates another arc annotation and behaves exactly like it,
except that no token is transported by the arc when the attached
transition fires. The constructor expects another arc annotation as
its sole argument; for instance, $Test(Variable('x'))$ on an input arc
allows testing for a token in a place, its value being usable as $x$.
On an output arc, $Test(Expression('x+1'))$ may be used to test that
the value of $x+1$ is accepted by the connected place, without
actually producing it.
multi-arcs:: When an arc needs to transport several values, the class
$MultiArc$ may be used. Its constructor expects a list (or tuple) of
other annotations that are simultaneously transported on the arc. For
instance, $MultiArc([Variable('x'), Variable('y')])$ on an input arc
allows consuming two tokens, binding them to the variables $x$ and
$y$.
Executing transitions
---------------------
The first step to execute a transition is to bind the variables
labelling the arcs to actual token values. This is possible by calling
the method $modes()$ of a transition. It returns a list of
$Substitution$ instances (this class is defined in $snakes.data$). A
$Substitution$ is a $dict$-like object that maps variable names to
other variable names or to values. The method $modes$ returns the
list of substitutions that are acceptable in order to fire the
transition, _i.e._, those that respect the following usual conditions:
* each input arc, evaluated through the substitution, corresponds to
  a multiset of tokens that is less than or equal to the current
  marking of the connected place;
* each output arc, evaluated through the substitution, results in a
multiset of tokens that respects the type constraint of the
connected place;
* the guard of the transition evaluates to $True$ through the
  substitution.
For instance, with our net:
[python]
^^^^^^^^^^^^^^^^^^
>>> n1.transition('t').modes()
[Substitution(x=0)]
^^^^^^^^^^^^^^^^^^
The only way to fire the transition is to bind $x$ to $0$. Other
substitutions may have been tried, but they do not respect at least
one of the above conditions. For instance, choosing $x=1$ respects the
guard and place types but not the marking (the token $1$ is missing):
[python]
^^^^^^^^^^^^^^^^^^
>>> from snakes.data import Substitution
>>> s = Substitution(x=1)
>>> n1.transition('t').enabled(s)
False
^^^^^^^^^^^^^^^^^^
In order to fire a transition, we have to call its method $fire$ with
an enabling substitution as argument (_i.e._, one of those returned by
$modes()$). In our example, we could run:
[python]
^^^^^^^^^^^^^^^^^^
>>> n1.transition('t').fire(Substitution(x=0))
>>> n1.place('p').tokens
MultiSet([1])
^^^^^^^^^^^^^^^^^^
It is important to understand how the firing is performed: the
substitution is used as a Python environment to evaluate the
annotations on the arcs and the guard of the transition in order to
check that the conditions above are respected. For instance, the guard
$Expression('x<5')$ can be evaluated to $True$ because $x$ is bound to
$0$ through the substitution. Similarly, the output arc
$Expression('x+1')$ is evaluated to $1$. The environment used to
evaluate the guard and output arcs is built using the input arcs, this
means that all the variables used during the firing must have been
bound through one of these input arcs. For instance, we could use
$Expression('x<5 and y==x+1')$ for the guard and $Variable('y')$ for the
output arc, and $Substitution(x=0, y=1)$ for the firing:
[python]
^^^^^^^^^^^^^^^^^^
>>> n2 = PetriNet('Second net')
>>> n2.add_place(Place('p', [0]))
>>> n2.add_transition(Transition('t', Expression('x<5 and y==x+1')))
>>> n2.add_input('p', 't', Variable('x'))
>>> n2.add_output('p', 't', Variable('y'))
>>> n2.transition('t').fire(Substitution(x=0, y=1))
>>> n2.place('p').tokens
MultiSet([1])
^^^^^^^^^^^^^^^^^^
This example is correct as long as the substitution is provided by the
user. But the method $modes$ is unable to deduce the value for $y$: it
would require solving the equation $x<5 and y==x+1$ in the guard.
This is easy here but impossible in general, so the modes are
computed only with respect to the input arcs. This is why $Expression$
instances are not allowed on input arcs, neither directly nor when
encapsulated in $Test$ or $MultiArc$ instances. So, in this second
example, if we call $modes()$, we get an error since $y$ cannot be
evaluated:
[python]
^^^^^^^^^^^^^^^^^^
>>> n2.transition('t').modes()
Traceback (most recent call last):
...
NameError: name 'y' is not defined
^^^^^^^^^^^^^^^^^^
Declarations
------------
There is one more aspect of the evaluation of expressions that should
be known: it is possible to declare names that are global to a Petri
net, for instance constants or functions. To do so, we use the method
$declare$ of a $PetriNet$ instance, which expects as its argument a
Python statement in a string. This statement is executed and its
effect is remembered in order to be used as a global execution
environment when expressions are evaluated. For instance, let's
construct a Petri net that generates random tokens using the standard
Python function $random.randint$:
[python]
^^^^^^^^^^^^^^^^^^
>>> n3 = PetriNet('Third net')
>>> n3.add_place(Place('p', [0]))
>>> n3.add_transition(Transition('t'))
>>> n3.add_input('p', 't', Variable('x'))
>>> n3.add_output('p', 't', Expression('random.randint(0, 100)'))
>>> n3.transition('t').fire(Substitution(x=0))
Traceback (most recent call last):
...
NameError: name 'random' is not defined
^^^^^^^^^^^^^^^^^^
This result is not surprising as the module has not been imported.
This must be done with the following statements (the second one
initialises the random generator):
[python]
^^^^^^^^^^^^^^^^^^
>>> n3.declare('import random')
>>> n3.declare('random.seed()')
^^^^^^^^^^^^^^^^^^
Then, the transition can fire; of course, the resulting marking will
be different from one execution to another since it is random:
//hide 93
[python]
^^^^^^^^^^^^^^^^^^
>>> n3.transition('t').fire(Substitution(x=0))
>>> n3.place('p').tokens
MultiSet([93])
^^^^^^^^^^^^^^^^^^
The effect of the method $n3.declare(statement)$ is to call $exec
statement in n3.globals$, where $n3.globals$ is a dictionary shared by
all the expressions embedded in the net (in guards or arcs). So,
another way to influence the evaluation of these expressions is to
directly assign values into the dictionary $n3.globals$, for instance:
//doc >>> n3.place('p').reset([93])
//hide 18
[python]
^^^^^^^^^^^^^^^^^^
>>> n3.add_place(Place('x'))
>>> n3.add_output('x', 't', Expression('y+1'))
>>> n3.transition('t').fire(Substitution(x=93))
Traceback (most recent call last):
...
NameError: name 'y' is not defined
>>> n3.globals['y'] = 42
>>> n3.transition('t').fire(Substitution(x=93))
>>> n3.get_marking()
Marking({'p': MultiSet([18]), 'x': MultiSet([43])})
^^^^^^^^^^^^^^^^^^
The first error is expected as $y$ has not been declared and is not
bound to any token value through an input arc. After assigning it in
$n3.globals$, it becomes defined, so $Expression('y+1')$ can be
evaluated. The same effect could have been achieved using
$n3.declare('y=42')$.
The last instruction gets the marking of the net; let's now detail
what we can do with marking objects.
Markings, marking graph
-----------------------
An instance of $PetriNet$ has methods to get and set its marking,
which are respectively $get_marking$ and $set_marking$. There are also
the methods $add_marking$ and $remove_marking$ to increase or decrease
the current marking.
[NOTE]
========================
The marking of an individual place can also be directly manipulated,
either through its attribute $tokens$ (we used it above), which is a
multiset, or through the following methods:
$add(toks)$:: adds the tokens in $toks$ to the place ($toks$ can be
any collection of values: $set$, $list$, $tuple$, $MultiSet$, ...)
$remove(toks)$:: does just the contrary
$reset(toks)$:: replaces the marking with the tokens in $toks$
$empty()$:: removes all the tokens from the place
$is_empty()$:: returns a Boolean indicating whether the place is empty
or not
========================
A marking is an instance of the class $snakes.nets.Marking$ and is
basically a mapping from place names to multisets of values. It has
been chosen that a marking is independent of any particular Petri net,
so empty places are not listed in a marking, and when assigning a
marking to a net, places that are present in the marking but absent
from the net are simply ignored. Markings can be added with the $+$
operator, subtracted with $-$ or compared with the usual operators
$>$, $<=$, $==$, etc. In order to know the marking of a particular
place, we can use it as a function taking the place name as argument:
[python]
^^^^^^^^^^^^^^^^
>>> m = Marking(p1=MultiSet([1, 2, 3]), p2=MultiSet([5]))
>>> m('p1')
MultiSet([1, 2, 3])
>>> m('p')
MultiSet([])
^^^^^^^^^^^^^^^^
The last result shows that a place that is not listed in a marking is
considered empty, which is consistent with the fact that empty places
are not listed in a marking extracted from a net. If we use the
marking as a $dict$ instead of a function, we get errors on
non-existing places:
[python]
^^^^^^^^^^^^^^^^
>>> m['p1']
MultiSet([1, 2, 3])
>>> m['p']
Traceback (most recent call last):
...
KeyError: 'p'
^^^^^^^^^^^^^^^^
The marking graph of a net can be manipulated using the class
$StateGraph$. Let's take an example to see how it works:
[python]
^^^^^^^^^^^^^^^^^^^^^^
>>> n4 = PetriNet('Fourth net')
>>> n4.add_place(Place('p', [-1]))
>>> n4.add_transition(Transition('t'))
>>> n4.add_input('p', 't', Variable('x'))
>>> n4.add_output('p', 't', Expression('(x+1) % 5'))
^^^^^^^^^^^^^^^^^^^^^^
This example runs forever, incrementing modulo 5 the token in the
place $'p'$. Its markings can be computed using a simple loop:
//skip
[python]
^^^^^^^^^^^^^^^^^^^^^^
>>> while True:
... print n4.get_marking()
... modes = n4.transition('t').modes()
... if len(modes) == 0 :
... break
... n4.transition('t').fire(modes[0])
...
Marking({'p': MultiSet([-1])})
Marking({'p': MultiSet([0])})
Marking({'p': MultiSet([1])})
Marking({'p': MultiSet([2])})
Marking({'p': MultiSet([3])})
Marking({'p': MultiSet([4])})
Marking({'p': MultiSet([0])})
Marking({'p': MultiSet([1])})
^^^^^^^^^^^^^^^^^^^^^^
Unfortunately, this loop runs forever as the net has no deadlock.
Moreover, if several modes exist in a state, only the first one will
be used in the exploration. To avoid building the marking graph
ourselves, we can use instead:
[python]
^^^^^^^^^^^^^^^^^^^^^^
>>> s = StateGraph(n4)
>>> s.build()
>>> for state in s :
... print state, s.net.get_marking()
... print " =>", s.successors()
... print " <=", s.predecessors()
...
0 Marking({'p': MultiSet([-1])})
=> {1: (Transition('t', Expression('True')), Substitution(x=-1))}
<= {}
1 Marking({'p': MultiSet([0])})
=> {2: (Transition('t', Expression('True')), Substitution(x=0))}
<= {0: (Transition('t', Expression('True')), Substitution(x=-1)), 5: (Transition('t', Expression('True')), Substitution(x=4))}
2 Marking({'p': MultiSet([1])})
=> {3: (Transition('t', Expression('True')), Substitution(x=1))}
<= {1: (Transition('t', Expression('True')), Substitution(x=0))}
3 Marking({'p': MultiSet([2])})
=> {4: (Transition('t', Expression('True')), Substitution(x=2))}
<= {2: (Transition('t', Expression('True')), Substitution(x=1))}
4 Marking({'p': MultiSet([3])})
=> {5: (Transition('t', Expression('True')), Substitution(x=3))}
<= {3: (Transition('t', Expression('True')), Substitution(x=2))}
5 Marking({'p': MultiSet([4])})
=> {1: (Transition('t', Expression('True')), Substitution(x=4))}
<= {4: (Transition('t', Expression('True')), Substitution(x=3))}
^^^^^^^^^^^^^^^^^^^^^^
The second statement computes the graph; then, for each state, we
print its number, the corresponding marking, and the successor and
predecessor states, which include the state number as well as the
transition (and its mode) that changed the marking. For instance, the
state $1$ can be reached from $0$ or $5$ by firing $'t'$ with $x=-1$
and $x=4$ respectively.
A marking graph is computed (and thus iterated) in a breadth-first
search. It may be iterated before it has been built completely; in
this case, successor states are computed on the fly as each state is
yielded by the iteration. However, this results in wrong predecessor
states, as a state not yet explored may lead to one already visited.
This is the case here for the state $5$, which can lead to $1$, but
this is not yet known when we are visiting $1$. This error is
noticeable when the state $1$ is displayed while running:
[python]
^^^^^^^^^^^^^^^^^^^^^^
>>> s2 = StateGraph(n4)
>>> for state in s2 :
... print state, s2.net.get_marking()
... print " =>", s2.successors()
... print " <=", s2.predecessors()
...
0 Marking({'p': MultiSet([-1])})
=> {1: (Transition('t', Expression('True')), Substitution(x=-1))}
<= {}
1 Marking({'p': MultiSet([0])})
=> {2: (Transition('t', Expression('True')), Substitution(x=0))}
<= {0: (Transition('t', Expression('True')), Substitution(x=-1))}
2 Marking({'p': MultiSet([1])})
=> {3: (Transition('t', Expression('True')), Substitution(x=1))}
<= {1: (Transition('t', Expression('True')), Substitution(x=0))}
3 Marking({'p': MultiSet([2])})
=> {4: (Transition('t', Expression('True')), Substitution(x=2))}
<= {2: (Transition('t', Expression('True')), Substitution(x=1))}
4 Marking({'p': MultiSet([3])})
=> {5: (Transition('t', Expression('True')), Substitution(x=3))}
<= {3: (Transition('t', Expression('True')), Substitution(x=2))}
5 Marking({'p': MultiSet([4])})
=> {1: (Transition('t', Expression('True')), Substitution(x=4))}
<= {4: (Transition('t', Expression('True')), Substitution(x=3))}
^^^^^^^^^^^^^^^^^^^^^^
Finally, it is worth noting that no check is performed to ensure that
a marking graph is finite. The method $build$ may run forever until it
fills the computer memory and crashes the program. Moreover, if we use
constructs like random number generators, the state graph obtained may
be partial as the program cannot know when all the possibilities have
been explored.
Place types
-----------
The places defined until now accepted any token value in their
marking. It is possible to restrict this by providing a type to a
place when it is constructed. A typing system is defined in the module
$typing$ to provide the flexibility to define any place type. In this
context, a type is understood as a set of values, possibly infinite.
Types can be combined by the usual set operations (examples are given
below). Several basic types are already defined:
$tAll$:: allows any value; this is the type assigned to a place when
no other type is given at construction.
$tNothing$:: is the type with no value.
$tInteger$:: is the type of integer values.
$tNatural$:: is a restriction of $tInteger$ to non-negative values.
$tPositive$:: is a restriction of $tInteger$ to strictly positive
values.
$tFloat$:: is the type of floating point numbers.
$tNumber$:: is the union of $tFloat$ and $tInteger$.
$tString$:: is the type for Python $str$ instances.
$tBoolean$:: is the set holding the two values $False$ and $True$.
$tNone$:: holds the only value $None$.
$tBlackToken$:: holds the only value $dot$ that stands for the usual
Petri net black token.
$tList$:: allows for values that are instances of the Python class
$list$.
$tDict$:: allows for values that are instances of the Python class
$dict$.
$tTuple$:: allows for values that are instances of the Python class
$tuple$.
$tPair$:: is a restriction of $tTuple$ to tuples of length 2.
The module $typing$ also provides type constructors, allowing the
creation of new types. Moreover, types can be combined using various
operators; for instance, the module $typing$ makes the following
definitions:
[python]
^^^^^^^^^^^^^^^^^
tInteger = Instance(int)
# an instance of int
tNatural = tInteger & GreaterOrEqual(0)
# an instance of int and a value greater than or equal to zero
tPositive = tInteger & Greater(0)
# an instance of int and a value strictly positive
tFloat = Instance(float)
tNumber = tInteger|tFloat
# an instance of int or of float
tBoolean = OneOf(False, True)
# one of the two values False and True
tNone = OneOf(None)
# only the value None
tBlackToken = OneOf(dot)
# only the value dot
tTuple = Tuple(tAll)
# an instance of tuple holding any value
tPair = Tuple(tAll, min=2, max=2)
# an instance of tuple holding exactly two items of any type
^^^^^^^^^^^^^^^^^
A type can be called as a function, in which case it returns $True$ if
all its arguments belong to the type and $False$ otherwise, for
instance:
[python]
^^^^^^^^^^^^^^^^^
>>> from snakes.typing import *
>>> tNatural(2, 3, 4, 0)
True
>>> tNatural(-1)
False
^^^^^^^^^^^^^^^^^
When a place is constructed, a third argument can be given to define
its type. If, later on, a token that does not respect the type is
added to the place, an exception is raised.
[python]
^^^^^^^^^^^^^^^^^
>>> from snakes.typing import *
>>> tNatural(2)
True
>>> tNatural(-1)
False
>>> p = Place('p', [0, 1, 2, 3], tNatural)
>>> p.add(3)
>>> p.tokens
MultiSet([0, 1, 2, 3, 3])
>>> p.add(-1)
Traceback (most recent call last):
...
ValueError: forbidden token '-1'
^^^^^^^^^^^^^^^^^
When no type is given at construction time, $tAll$ is actually used,
which explains why a place accepts any token by default.
Plugins
-------
A system of plugins allows extending SNAKES. In order to plug the
module $foo$ into the module $snakes.nets$, one has to use:
//skip
[python]
^^^^^^^^^^^^^^^^^^^^^^^
import snakes.plugins
snakes.plugins.load('snakes.plugins.foo', 'snakes.nets', 'my_nets')
^^^^^^^^^^^^^^^^^^^^^^^
Intuitively, these (correct) statements have the effect of the
(incorrect) statement:
//skip
[python]
^^^^^^^^^^^^^^^^^^^^^^^
import 'snakes.nets extended by snakes.plugins.foo' as my_nets
^^^^^^^^^^^^^^^^^^^^^^^
So, one could now use:
//skip
[python]
^^^^^^^^^^^^^^^^^^^^^^^
from my_nets import *
^^^^^^^^^^^^^^^^^^^^^^^
Several plugins may be loaded at the same time:
//skip
[python]
^^^^^^^^^^^^^^^^^^^^^^^
snakes.plugins.load(['snakes.plugins.foo', 'snakes.plugins.bar'],
                    'snakes.nets', 'my_nets')
^^^^^^^^^^^^^^^^^^^^^^^
This has the effect of loading the plugin $foo$ on top of
$snakes.nets$, resulting in a module on top of which $bar$ is then
loaded.
=== Giving positions to the nodes
The plugin $snakes.plugins.pos$ is a very simple one that allows
giving _x_/_y_ positions to the nodes of a Petri net.
First we load the plugin:
[python]
^^^^^^^^^^^^^^^^^^^^^^
>>> import snakes.plugins
>>> snakes.plugins.load('snakes.plugins.pos', 'snakes.nets', 'nets')
<module 'nets' from '...'>
>>> from nets import PetriNet, Place, Transition
^^^^^^^^^^^^^^^^^^^^^^
A place can be added without specifying a position for it, in which
case it will be positioned at $(0,0)$. The position is stored in an
attribute $pos$ of the place, with coordinates in $pos.x$ and $pos.y$:
[python]
^^^^^^^^^^^^^^^^^^^^^^
>>> n = PetriNet('N')
>>> p = Place('p00')
>>> n.add_place(p)
>>> p.pos
Position(0, 0)
>>> p.pos.x, p.pos.y
(0, 0)
^^^^^^^^^^^^^^^^^^^^^^
The position can be defined when the node is created or when it is
added to a net.
[python]
^^^^^^^^^^^^^^^^^^^^^^
>>> t10 = Transition('t10', pos=(1, 0))
>>> n.add_transition(t10)
>>> t10.pos
Position(1, 0)
>>> t = Transition('t01')
>>> t.pos
Position(0, 0)
>>> n.add_transition(t, pos=(0, 1))
>>> t.pos
Position(0, 1)
^^^^^^^^^^^^^^^^^^^^^^
The last statement above shows that the node is copied when added to a
net: $t$ thus keeps its position $(0,0)$ but its copy in $n$ has been
positioned at $(0,1)$.
Nodes can be moved using the $shift$ or $moveto$ methods, but not by
directly assigning their attributes $pos.x$ or $pos.y$:
[python]
^^^^^^^^^^^^^^^^^^^^^^
>>> t = n.transition('t01')
>>> t.pos
Position(0, 1)
>>> t.pos.moveto(1, 2)
>>> t.pos
Position(1, 2)
>>> t.pos.shift(1, -1)
>>> t.pos
Position(2, 1)
>>> t.pos.x = 0
Traceback (most recent call last):
...
AttributeError: readonly attribute
^^^^^^^^^^^^^^^^^^^^^^
A net extended by $snakes.plugins.pos$ has a method to compute its
bounding box, which is a tuple $((xmin, ymin), (xmax, ymax))$ where
$xmin$ is the $x$ coordinate of the left-most node, and so on. A
method $shift$ also allows shifting all the nodes of the net, and a
method $transpose$ rotates a whole net by 90° (_i.e._, the top-down
direction becomes left-right).
[python]
^^^^^^^^^^^^^^^^^^^^^^
>>> n.bbox()
((0, 0), (2, 1))
>>> n.shift(10, 10)
>>> n.bbox()
((10, 10), (12, 11))
>>> t.pos
Position(12, 11)
>>> n.transpose()
>>> t.pos
Position(-11, 12)
>>> n.bbox()
((-11, 10), (-10, 12))
^^^^^^^^^^^^^^^^^^^^^^
Finally, notice that when nodes are merged, the position of the result
can be defined by giving an argument $pos$ to the $merge_transitions$
or $merge_places$ method. If no such argument is given, the position
of the new node is computed as the barycentre of the positions of the
merged nodes.
[python]
^^^^^^^^^^^^^^^^^^^^^^
>>> n.merge_transitions('t11', ['t01', 't10'], pos=(1,1))
>>> n.transition('t11').pos
Position(1, 1)
>>> n.transition('t01').pos
Position(-11, 12)
>>> n.transition('t10').pos
Position(-10, 11)
>>> n.merge_transitions('t', ['t01', 't10'])
>>> n.transition('t').pos
Position(-10.5, 11.5)
^^^^^^^^^^^^^^^^^^^^^^
=== Drawing nets and marking graphs
The plugin $graphviz$ can be used in order to produce a graphical
rendering of $PetriNet$ and $StateGraph$ objects. It adds to them a
method $draw$ that saves a picture in various formats (those supported
by http://www.graphviz.org[GraphViz]), in particular: PNG, JPEG, EPS,
DOT. For a Petri net, the positions of the nodes are fixed using the
plugin $pos$ (which is automatically loaded). For a state graph, the
nodes are automatically positioned by GraphViz.
[WARNING]
===========================
In order to produce a graphical rendering in a format other than DOT,
the plugin $graphviz$ calls the program $dot$ or $neato$. It is
possible that a specially crafted file name results in executing
arbitrary commands on the system. So, take care when you run a SNAKES
script that you did not write yourself. (In general, take care when
you run any script from an unsafe source.)
===========================
Let's consider a simple example:
[python]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>> import snakes.plugins
>>> snakes.plugins.load('snakes.plugins.graphviz', 'snakes.nets', 'nets')
<module 'nets' from '...'>
>>> from nets import *
>>> n = PetriNet('N')
>>> n.add_place(Place('p00', [0]))
>>> n.add_transition(Transition('t10', pos=(1, 0)))
>>> n.add_place(Place('p11', pos=(1, 1)))
>>> n.add_transition(Transition('t01', pos=(0, 1)))
>>> n.add_input('p00', 't10', Variable('x'))
>>> n.add_output('p11', 't10', Expression('(x+1) % 3'))
>>> n.add_input('p11', 't01', Variable('y'))
>>> n.add_output('p00', 't01', Expression('(y+2) % 4'))
>>> n.draw('graphviz-net.png')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This produces the following picture:
image:graphviz-net.png[GraphViz net]
The rendering is not very beautiful but should be useful in many
cases. Transitions are labelled with their name and guard, places with
their marking (inside) and their name and type (top-right), and arcs
are labelled with their inscription. The picture format is chosen
using the extension of the created file: $.png$, $.jpg$, $.eps$,
$.dot$...
The marking graph can then be built and drawn also:
[python]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>> s = StateGraph(n)
>>> s.draw('graphviz-graph.png', landscape=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Which produces the picture:
image:graphviz-graph.png[GraphViz state graph]
Each state is labelled with its number and the corresponding marking,
arcs are labelled by the transition and binding that produce one
marking from another.
Notice that the option $landscape$ has been used in order to ask
GraphViz to produce a vertical layout (it is horizontal by default).
Other options exist, given here with their default values:
$scale=72.0$:: a scale factor for the whole picture. The greater it
is, the larger the space between the nodes. This option applies to
both $PetriNet.draw$ and $StateGraph.draw$.
$nodesize=0.5$:: the size of the nodes (places, transitions or
states). The greater it is, the wider the nodes are. This option
applies to both $PetriNet.draw$ and $StateGraph.draw$.
$engine$:: the rendering program to use (one of $"neato"$, $"dot"$,
$"circo"$, $"twopi"$, $"fdp"$). The default for $PetriNet$ objects is
$"neato"$, the default for $StateGraph$ objects is $"dot"$.
$layout=False$:: for $PetriNet.draw$ only, controls whether the
rendering program is allowed to move nodes or not. This only works for
$"neato"$, other programs are always allowed to move nodes.
$print_state=None$:: for $StateGraph.draw$ only, defines the text
printed inside each state. If $None$, the number of the state is
printed with the corresponding marking. Otherwise, a function should
be provided, taking the state number as its first argument and the
state graph as its second argument, and returning the string to print.
$print_arc=None$:: for $StateGraph.draw$, similarly to $print_state$,
defines the text printed on the arcs. The function to provide must
take five arguments: the source state, the target state, the name of
the transition fired, its mode (_i.e._ the substitution used), and the
state graph. For $PetriNet.draw$, the function must take five
arguments: the label of the arc, the place connected to it, the
transition connected to it, a Boolean indicating whether this is an
input (if $True$) or output arc (if $False$) and the Petri net.
$print_trans=None$:: for $PetriNet.draw$ only, the function should
take as arguments the transition and the net.
$print_place=None$:: for $PetriNet.draw$ only, the function should
take as arguments the place and the net. This corresponds to the label
that is drawn outside of the place.
$print_tokens=None$:: for $PetriNet.draw$ only, the function should
take as arguments the multiset of tokens, the place and the net. This
corresponds to the label that is drawn inside the place.
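As a sketch, a custom $print_state$ function could look as follows
(the function name $short_label$ is our own; the $draw$ call is shown
commented out since it requires a $StateGraph$ $s$ as built above and
a working GraphViz installation):

```python
def short_label (state, graph) :
    # receives the state number and the state graph,
    # returns the text drawn inside the corresponding node
    return "s%u" % state

# usage sketch:
# s.draw('graphviz-graph.png', print_state=short_label)
```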
=== Merging nodes using name-based rules
The plugin $status$ extends Petri net nodes with an attribute called
$status$ that is composed of a _name_ and a _value_. The former
corresponds to a class of similar statuses (_e.g._, $entry$,
$internal$, $exit$, $buffer$ or $tick$) and the latter to a particular
subset of this class. The idea is to be able to merge nodes that have
the same status (name and value). Each status uses its own merge rule.
[NOTE]
====================
The principle of place statuses is well known in PBC and M-nets; see
for instance the paper
http://www.univ-paris12.fr/lacl/pommereau/publis/2003-fi.html[Asynchronous
Box Calculus] where they are used in order to perform compositions
of Petri nets. The plugin extends this notion to transitions.
====================
For instance, the plugin defines a function $buffer(id)$ that creates
a status $('buffer', id)$. If several places with the status $buffer$
and the same $id$ are present in the net, they can be automatically
merged. Concretely, let's define a net with three buffer places:
[python]
^^^^^^^^^^^^^^^^^^^^^^^^
>>> import snakes.plugins
>>> snakes.plugins.load('snakes.plugins.status', 'snakes.nets', 'nets')
<module 'nets' from ...>
>>> from nets import *
>>> import snakes.plugins.status as status
>>> n = PetriNet('N')
>>> n.add_place(Place('p1', [1], status=status.buffer('foo')))
>>> n.add_place(Place('p2', [2]), status=status.buffer('foo'))
>>> n.add_place(Place('p3', [3]), status=status.buffer('bar'))
>>> n.add_place(Place('p4', [4]))
^^^^^^^^^^^^^^^^^^^^^^^^
Notice that the status can be assigned when the place is created (as
for $p1$) or when it is added to the net (as for $p2$ and $p3$). No
status has been specified for $p4$, which thus receives the empty
status $(None, None)$.
It is now possible to list all the places that have the same status:
[python]
^^^^^^^^^^^^^^^^^^^^^^^^
>>> n.status(status.buffer('foo'))
('p2', 'p1')
>>> n.status(status.buffer('bar'))
('p3',)
^^^^^^^^^^^^^^^^^^^^^^^^
Moreover, it is possible to merge the places that have the same
status:
[python]
^^^^^^^^^^^^^^^^^^^^^^^^
>>> n.status.merge(status.buffer('foo'))
>>> n.place()
[Place('p3', MultiSet([3]), tAll, status=Buffer('buffer','bar')),
Place('(p1+p2)', MultiSet([1, 2]), tAll, status=Buffer('buffer','foo')),
Place('p4', MultiSet([4]), tAll)]
^^^^^^^^^^^^^^^^^^^^^^^^
The places $p1$ and $p2$ have been merged (and then removed), yielding
a place $(p1+p2)$ whose marking is the sum of the markings of $p1$ and
$p2$. This treatment of the marking is specific to the buffer status;
other statuses may use other methods.
A Petri net is also enriched with a method to change the status of a
node:
[python]
^^^^^^^^^^^^^^^^^^^^^^^^
>>> n.set_status('(p1+p2)', status.buffer(None))
>>> n.place()
[Place('p3', MultiSet([3]), tAll, status=Buffer('buffer','bar')),
Place('(p1+p2)', MultiSet([1, 2]), tAll, status=Buffer('buffer')),
Place('p4', MultiSet([4]), tAll)]
^^^^^^^^^^^^^^^^^^^^^^^^
A buffer status with the value $None$ is particular in that it will be
ignored by $PetriNet.status.merge$ and thus not merged (it is a
private buffer).
One may have noticed that buffer statuses are actually instances of
the class $Buffer$, which is itself a subclass of $Status$. In order
to create a new status, one just has to extend the class $Status$ and
redefine the method $merge$ (with its arguments as below); for
example, the $Buffer$ class is defined as:
//skip
[python]
^^^^^^^^^^^^^^^^^^^^^^^^
class Buffer (Status) :
def merge (self, net, nodes, name=None) :
# net: the net in which the merge occurs
# nodes: the nodes to merge
# name: the name of the resulting node
if self._value is None :
return # private buffers are ignored
if name is None : # create a name if none has been given
name = "(%s)" % "+".join(sorted(nodes))
net.merge_places(name, nodes, status=self) # merge the places
for src in nodes : # remove the merged places
net.remove_place(src)
^^^^^^^^^^^^^^^^^^^^^^^^
[NOTE]
==========================
Using this class $Buffer$, a helper function $buffer$ is defined as:
//skip
[python]
^^^^^^^^^^^^^^^^^^^^^^^^^^
def buffer (name) :
return Buffer('buffer', name)
^^^^^^^^^^^^^^^^^^^^^^^^^^
==========================
The plugin defines other statuses:
$variable(id)$:: is similar to $buffer$ except that when places are
merged, they must all have the same marking. So, $variable$ places are
like variables in a program that store a single (possibly structured)
value.
$tick(id)$:: is a transition status. When $tick$ transitions are
merged, their guards are and-ed.
$entry$, $internal$ and $exit$:: are the traditional place statuses
that allow defining control flow operations between Petri nets in PBC
and its successors.
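To give an idea of these merge policies, here is a minimal
plain-Python sketch (hypothetical classes, much simpler than those in
$snakes.plugins.status$):

```python
# Hypothetical, simplified model of the merge policies described above;
# the real classes live in snakes.plugins.status and act on PetriNet nodes.

class Status:
    def __init__(self, kind, value=None):
        self.kind, self.value = kind, value

class Variable(Status):
    def merge(self, markings):
        # 'variable' places may only be merged when all their markings
        # are identical (they store a single shared value)
        first = markings[0]
        if any(m != first for m in markings[1:]):
            raise ValueError("variable places must have equal markings")
        return first

class Tick(Status):
    def merge(self, guards):
        # guards of merged 'tick' transitions are and-ed together
        return "(%s)" % ") and (".join(guards)

assert Variable('variable', 'v').merge([[1], [1]]) == [1]
assert Tick('tick', 'clock').merge(['x>0', 'x<8']) == '(x>0) and (x<8)'
```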
=== PBC/M-nets control flow operations
The plugin $ops$ defines the control flow operations usually defined
for PBC and M-nets. To do so, it relies on place statuses (the plugin
$status$ is automatically loaded) and in particular on the $entry$,
$internal$ and $exit$ statuses. Indeed, it is expected that a Petri
net starts its execution with one token in each entry place (entry
marking) and evolves until it reaches a marking with one token in each
exit place (exit marking). In order to produce the expected control
flow, the operations use combinations of the entry and exit places of
the composed nets. All the details about these operations can be found
in the paper
http://www.univ-paris12.fr/lacl/pommereau/publis/2003-fi.html[Asynchronous
Box Calculus].
[NOTE]
======================
There are basically two approaches for such combinations. The PBC
approach relies on cross-products of sets of places with simple types
(low-level places). This has the advantage of being simple but may
produce a large number of places. On the other hand, the M-nets
approach relies on building fewer high-level places whose types are
cross-products of the types of the combined places (this is a
simplification). This has the advantage of producing fewer places
than in PBC, but their types are very complicated and the resulting
transition rule is hard to implement (it is necessary to match
tree-structured tokens against tree-structured annotations). Because
of this complexity, we have chosen to implement the PBC approach,
which is also what is used in the most recent models of the family.
======================
The plugin defines four control flow operations. Let $n1$ and $n2$ be
two nets:
* the _sequence_ $n1 & n2$ is obtained by combining the exit places
of $n1$ with the entry places of $n2$ so that when $n1$ reaches its
exit marking, it corresponds to the entry marking of $n2$. As a
result, $n1$ is executed first and is followed by the execution of
$n2$;
* the _choice_ $n1 + n2$ combines the entry places of $n1$ and $n2$
on the one hand, and their exit places on the other hand. As a
result, either $n1$ or $n2$ is executed and they have the same exit
marking;
* the _iteration_ $n1 * n2$ combines the entry and exit places of
$n1$ with the entry places of $n2$. As a result, $n1$ can be
executed an arbitrary number of times (including zero) and is
followed by one execution of $n2$;
* the _parallel_ composition $n1 | n2$ does not combine any place so
that $n1$ and $n2$ can evolve concurrently.
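These combinations can be sketched on nets reduced to their interface
places, following the PBC cross-product idea (hypothetical helper
names, not the actual $ops$ API, with nets represented as pairs of
entry/exit place-name sets):

```python
# PBC-style operators on interface places only; the real operators in
# snakes.plugins.ops work on full PetriNet objects.

def cross(left, right):
    # PBC combines interface places by cross-product
    return {"(%s+%s)" % (a, b) for a in sorted(left) for b in sorted(right)}

def sequence(n1, n2):
    # exit places of n1 fuse with entry places of n2 (become internal)
    (e1, x1), (e2, x2) = n1, n2
    return e1, cross(x1, e2), x2

def choice(n1, n2):
    # entries fuse together, exits fuse together
    (e1, x1), (e2, x2) = n1, n2
    return cross(e1, e2), set(), cross(x1, x2)

def iteration(n1, n2):
    # entry and exit of n1 fuse with the entry of n2: n1 may repeat,
    # then n2 runs once
    (e1, x1), (e2, x2) = n1, n2
    return cross(cross(e1, x1), e2), set(), x2

def parallel(n1, n2):
    # no fusion at all: both nets evolve concurrently
    (e1, x1), (e2, x2) = n1, n2
    return e1 | e2, set(), x1 | x2

n1, n2 = ({'e1'}, {'x1'}), ({'e2'}, {'x2'})
assert sequence(n1, n2) == ({'e1'}, {'(x1+e2)'}, {'x2'})
assert choice(n1, n2) == ({'(e1+e2)'}, set(), {'(x1+x2)'})
assert iteration(n1, n2) == ({'((e1+x1)+e2)'}, set(), {'x2'})
```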
When two nets are combined using one of these operators, their nodes
are automatically merged according to their statuses: in particular,
buffer or variable places and tick transitions are merged using the
method defined by each status.
Two operations that are not related to control flow are also defined:
* the node hiding $n1.hide(old)$ gives the empty status to all the
nodes in $n1$ that previously had the status $old$. It may be
called with a second argument in order to choose the new status.
For instance, $n1.hide(buffer('foo'), buffer(None))$ makes private
all the buffer places with the status $('buffer', 'foo')$;
* a variant of node hiding is $n1 / val$. Its right argument must
be a status value; the result is a copy of $n1$ in which all the
nodes with a status $(x,val)$ are given the status $(x,None)$.
This has the effect of disabling further merges of, _e.g._, buffer
places if the net is later combined with another one.
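The two hiding operations can be sketched on statuses represented as
$(kind, value)$ pairs (hypothetical helpers; the real methods act on
full Petri nets):

```python
# Sketch of the hiding operations on (kind, value) status pairs.

def hide(statuses, old, new=(None, None)):
    # n1.hide(old[, new]): every node carrying 'old' gets 'new'
    # (the empty status by default)
    return [new if s == old else s for s in statuses]

def div(statuses, val):
    # n1 / val: any status (x, val) becomes (x, None), which makes the
    # corresponding buffers private and blocks further merges
    return [(kind, None) if value == val else (kind, value)
            for kind, value in statuses]

places = [('buffer', 'foo'), ('buffer', 'bar'), ('entry', None)]
assert hide(places, ('buffer', 'foo')) == \
    [(None, None), ('buffer', 'bar'), ('entry', None)]
assert div(places, 'foo') == \
    [('buffer', None), ('buffer', 'bar'), ('entry', None)]
```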
The plugin $posops$ is a combination of $pos$ and $ops$ that tries to
take into account the positions of the nodes when nets are composed.
It avoids overlapping the composed nets and tries to distribute the
combined places evenly in order to obtain an acceptable result (as
far as possible). Let's see it in action:
[python]
^^^^^^^^^^^^^^^^^^^^^^
>>> import snakes.plugins
>>> snakes.plugins.load(['snakes.plugins.posops', 'snakes.plugins.graphviz'], 'snakes.nets', 'nets')
<module 'nets' from ...>
>>> from nets import *
>>> from snakes.plugins.status import entry, internal, exit, safebuffer
>>> n = PetriNet('basic')
>>> n.add_place(Place('e', status=entry, pos=(0, 2)))
>>> n.add_place(Place('x', status=exit, pos=(0, 0)))
>>> n.add_transition(Transition('t', pos=(0, 1)))
>>> n.add_input('e', 't', Value(dot))
>>> n.add_output('x', 't', Value(dot))
>>> n.add_place(Place('v', [0], status=safebuffer('var'), pos=(1,1)))
>>> n.add_input('v', 't', Variable('x'))
>>> n.add_output('v', 't', Expression('x+1'))
>>> n.draw('basic.png')
^^^^^^^^^^^^^^^^^^^^^^
This basic net is as follows:
image:basic.png[Basic net]
It can be composed with itself in order to produce a more complex net,
for instance:
[python]
^^^^^^^^^^^^^^^^^^^^^^^^^^
>>> complex = n & (n * (n + n))
>>> var = complex.status(safebuffer('var'))[0]
>>> complex.place(var).pos.moveto(-1,-1)
>>> complex.draw('complex.png', scale=60)
^^^^^^^^^^^^^^^^^^^^^^^^^^
On the second line, the new name of the variable place is retrieved
from its status. Then the place is moved in order to have a prettier
result:
image:complex.png[Complex net]
=== PBC/M-nets synchronisation
Another important operation featured by the models in the PBC and
M-nets family is synchronisation. This operation is similar to the
CCS synchronisation but operates on multi-actions (_i.e.,_ several
synchronisations may take place at the same transition). In PBC,
actions have no parameters, while they do in M-nets (which implies
the unification of these parameters). With respect to CCS, the
synchronisation is here a static operation that builds all the
possible transitions corresponding to synchronised actions. The plugin
$synchro$ implements a generalisation of the M-nets synchronisation.
It is generalised in that it does not impose a fixed arity associated
to each action name. For more information about M-nets
synchronisation, see for instance the section 4 of the paper
http://www.univ-paris12.fr/lacl/pommereau/publis/2002-mtcs.html[Petri
nets with causal time for system verification].
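The unification of action parameters can be sketched as follows (a toy
version, far simpler than what $snakes.plugins.synchro$ actually
implements):

```python
# Toy unification of action parameters: a sending action carries
# concrete values, a receiving action carries variable names.

def unify(send, recv):
    # returns the substitution binding each variable, or None on mismatch
    if len(send) != len(recv):
        return None
    subst = {}
    for value, var in zip(send, recv):
        if var in subst and subst[var] != value:
            return None  # one variable cannot take two values
        subst[var] = value
    return subst

# t1 sends a(2), t2 receives a(x): unification binds x to 2
assert unify([2], ['x']) == {'x': 2}
assert unify([2, 3], ['x']) is None
assert unify([2, 3], ['x', 'x']) is None
```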
Let's consider a simple example: three transitions have to
synchronise; doing so, one transition can receive a value from each
of the other transitions. To do this with SNAKES, one may run the
following code:
[python]
^^^^^^^^^^^^^^^^^^^^^^^^^^
>>> import snakes.plugins
>>> snakes.plugins.load(['snakes.plugins.synchro', 'snakes.plugins.graphviz'], 'snakes.nets', 'nets')
<module 'nets' from ...>
>>> from nets import *
>>> from snakes.plugins.synchro import Action
>>> n = PetriNet('N')
>>> n.add_place(Place('e1', [dot], pos=(0,2)))
>>> n.add_place(Place('e2', [dot], pos=(1,2)))
>>> n.add_place(Place('e3', [dot], pos=(2,2)))
>>> n.add_place(Place('x1', [], pos=(0,0)))
>>> n.add_place(Place('x2', [], pos=(1,0)))
>>> n.add_place(Place('x3', [], pos=(2,0)))
>>> n.add_transition(Transition('t1', pos=(0,1),
... actions=[Action('a', True, [Value(2)])]))
>>> n.add_transition(Transition('t2', pos=(1,1),
... actions=[Action('a', False, [Variable('x')]),
... Action('a', False, [Variable('y')])]))
>>> n.add_transition(Transition('t3', pos=(2,1),
... actions=[Action('a', True, [Value(3)])]))
>>> n.add_input("e1", "t1", Value(dot))
>>> n.add_input("e2", "t2", Value(dot))
>>> n.add_input("e3", "t3", Value(dot))
>>> n.add_output("x1", "t1", Value(dot))
>>> n.add_output("x2", "t2", Value(dot))
>>> n.add_output("x3", "t3", Value(dot))
>>> def pt (trans, net) :
... return "%s\\n%s" % (trans.name, str(trans.actions))
>>> n.draw('synchro-1.png', print_trans=pt)
^^^^^^^^^^^^^^^^^^^^^^^^^^
The resulting picture is the following:
image:synchro-1.png[Before synchronisation]
Applying the synchronisation is possible by calling the method
$synchronise$, which expects an action name as parameter. The code
below performs the synchronisation then draws the net with a new
layout in order to make it more readable (by default, a synchronised
transition is placed in the middle of the transitions that
participated in the synchronisation).
[python]
^^^^^^^^^^^^^^^^^^^^^^^^^^
>>> n.synchronise('a')
>>> n.draw('synchro-2.png', print_trans=pt, layout=True, engine='circo')
^^^^^^^^^^^^^^^^^^^^^^^^^^
image:synchro-2.png[After synchronisation]
The result is not really readable, but let's see what happened:
- $t1$ has two ways to synchronise with $t2$, which results in
building two transitions that still hold one receive action $a$;
- the same holds for $t3$ and $t2$, so we have created 4 new
transitions so far;
- then, both $t1$ and $t3$ can synchronise with the 4 new
transitions, which results in 8 new transitions.
So there are now 15 transitions in the net. The names of the new
transitions correspond to how they were obtained. For instance, the
one on the left of $x1$ (on the top) has the name
$(t1{x->2}+t2{x->2})[a(2)]$, which means that it was obtained from the
synchronisation of $t1$ and $t2$, for which the variable $x$ was bound
to $2$ (the two substitutions are the way to unify the synchronised
actions), and that communicated the value $2$ on the action $a$.
It may be observed that only the last 8 transitions correspond to the
full synchronisation of $t2$ with both $t1$ and $t3$ (and each of
these 8 transitions corresponds to one way of achieving the
synchronisation, some being equivalent). But if the net is executed,
it is possible to fire $t1$, $t2$, $t3$ or any of the partially
synchronised transitions. In order to force the execution of the full
synchronisation, one can call the method $restrict$ that removes all
the transitions that still hold an action $a$. Most of the time, a
restriction follows a synchronisation; so, there is also a method
$scope$ that performs both in turn.
[python]
^^^^^^^^^^^^^^^^^^^^^^^^^^
>>> n.restrict('a')
>>> n.draw('synchro-3.png', print_trans=pt, layout=True, engine='circo')
^^^^^^^^^^^^^^^^^^^^^^^^^^
This results in the following picture where only 8 transitions are
remaining:
image:synchro-3.png[After restriction]
On this last picture, one may notice that some transitions consume or
produce two tokens. This is the case, for instance, for the top-left
transition that results from the synchronisation of $t1$ with $t2$,
which was then synchronised again with $t1$. In a model of safe or
colour-safe Petri nets like PBC or M-nets, this is a dead transition
that could be removed. This can be done easily with the following
code, resulting in the picture below:
[python]
^^^^^^^^^^^^^^^^^^^^^^^
>>> for trans in n.transition() :
... for place, label in trans.pre.items() :
... if label == MultiArc([Value(dot), Value(dot)]) :
... n.remove_transition(trans.name)
... break
>>> n.draw('synchro-4.png', print_trans=pt, layout=True, engine='circo')
^^^^^^^^^^^^^^^^^^^^^^^
image:synchro-4.png[After removing the dead transitions]
It may also be interesting to remove the duplicated transitions, but
this is beyond the scope of this tutorial.
=== Representing infinite data domains
[NOTE]
=======================
This section describes the implementation that corresponds to the
paper
http://www.univ-paris12.fr/lacl/pommereau/publis/2007-infinity.html[Efficient
reachability graph representation of Petri nets with unbounded
counters]. The only difference with respect to the paper is that
SNAKES is not limited to P/T nets but happily handles high-level nets
together with Lash data. The rest of the section uses the example
developed in the paper, taking $omega=8$.
=======================
The first thing to do is to load the plugin $lashdata$ so that we are
able to store data in Lash; we also load $graphviz$ in order to draw
the resulting graphs:
[python]
^^^^^^^^^^^^^^^^^^^^^^^^^^
>>> import snakes.plugins
>>> nets = snakes.plugins.load(["snakes.plugins.graphviz",
... "snakes.plugins.lashdata"],
... "snakes.nets", "nets")
>>> from nets import *
>>> from snakes.typing import *
>>> from snakes.data import *
>>> from snakes.plugins.lashdata import Data
^^^^^^^^^^^^^^^^^^^^^^^^^^
$Data$ is a class that allows storing integer variables in Lash. This
library actually stores data as sets of integer vectors, but the
class $Data$ hides this behind the usual concept of variables. When
we build a Petri net, we must give a $lash$ argument that is an
instance of $Data$. Its constructor simply takes a list of variables
with their initial values:
[python]
^^^^^^^^^^^^^^^^^^^^^^^^^^
>>> n = PetriNet("N", lash=Data(x=0))
^^^^^^^^^^^^^^^^^^^^^^^^^^
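Internally, such a store can be pictured as a set of integer vectors
indexed by variable names; the sketch below is a hypothetical
stand-in for $Data$, using a Python set instead of Lash:

```python
# Hypothetical stand-in for snakes.plugins.lashdata.Data: the real
# class delegates to the Lash library, which stores sets of integer
# vectors; here a Python set of tuples plays that role.

class FakeData:
    def __init__(self, **init):
        self.names = sorted(init)  # fixed variable order
        self.vectors = {tuple(init[n] for n in self.names)}
    def values(self, name):
        # all values currently possible for one variable
        i = self.names.index(name)
        return {v[i] for v in self.vectors}

d = FakeData(x=0)
assert d.values('x') == {0}
```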
Then we add the places:
[python]
^^^^^^^^^^^^^^^^^^^^^^^^^^
>>> n.add_place(Place("s_1", [dot], tBlackToken, pos=(0, 0)))
>>> n.add_place(Place("s_2", [], tBlackToken, pos=(2, 0)))
^^^^^^^^^^^^^^^^^^^^^^^^^^
And last, the transitions. Each transition is given a $condition$ in
the form of a linear Python expression (neither $or$, $not$ nor $!=$
is allowed). An $update$ is also provided in order to modify the
variables; this is a single assignment of one variable with a linear
expression (several assignments may be combined using $;$).
[python]
^^^^^^^^^^^^^^^^^^^^^^^^^^
>>> n.add_transition(Transition("t_3", pos=(1, -1)),
... condition="x<8",
... update="x=x+1")
>>> n.add_input("s_2", "t_3", Value(dot))
>>> n.add_output("s_1", "t_3", Value(dot))
>>> n.add_transition(Transition("t_2", pos=(1, 1)),
... condition="x>0", update="x=x-1")
>>> n.add_input("s_1", "t_2", Value(dot))
>>> n.add_output("s_2", "t_2", Value(dot))
>>> n.add_transition(Transition("t_5", pos=(3, 1)),
... condition="x>0", update="x=x-1")
>>> n.add_input("s_2", "t_5", Value(dot))
>>> n.add_output("s_2", "t_5", Value(dot))
>>> n.add_transition(Transition("t_4", pos=(3, -1)),
... condition="x<8",
... update="x=x+1")
>>> n.add_input("s_2", "t_4", Value(dot))
>>> n.add_output("s_2", "t_4", Value(dot))
>>> n.add_transition(Transition("t_1", pos=(-1, 0)),
... condition="x<4",
... update="x=x+1")
>>> n.add_input("s_1", "t_1", Value(dot))
>>> n.add_output("s_1", "t_1", Value(dot))
^^^^^^^^^^^^^^^^^^^^^^^^^^
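As a side note, the effect of such $;$-separated updates can be
sketched in plain Python, assuming that all right-hand sides are
evaluated on the old values (this is a hypothetical helper, not the
plugin's actual parser):

```python
# Apply ';'-separated updates such as "x=x+1" to an environment of
# integer variables; right-hand sides see the old values.

def apply_update(update, env):
    new = dict(env)
    for assign in update.split(';'):
        target, expr = (part.strip() for part in assign.split('='))
        # each right-hand side is a linear expression over old values
        new[target] = eval(expr, {}, dict(env))
    return new

assert apply_update("x=x+1", {'x': 0}) == {'x': 1}
assert apply_update("x=x-1; y=y+2", {'x': 3, 'y': 0}) == {'x': 2, 'y': 2}
```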
Then, we can build four different marking graphs. The first one is
the full, usual, marking graph:
[python]
^^^^^^^^^^^^^^^^^^^^^^^^^^
>>> m = StateGraph(n)
>>> m.build()
>>> def ps (state, graph) :
... return str(state)
>>> def pa (source, target, trans, mode, graph) :
... return trans
>>> m.draw("lash-full.png", landscape=True, print_state=ps, print_arc=pa)
^^^^^^^^^^^^^^^^^^^^^^^^^^
That results in the following picture:
image:lash-full.png[Full marking graph]
Then come the compact graphs. First, we can ask SNAKES to add a meta
transition for each detected side-loop (a transition that does not
change the marking):
[python]
^^^^^^^^^^^^^^^^^^^^^^^^^^
>>> m = StateGraph(n, loops=True)
>>> m.build()
>>> m.draw("lash-loops.png", landscape=True, print_state=ps, print_arc=pa)
^^^^^^^^^^^^^^^^^^^^^^^^^^
That results in:
image:lash-loops.png[First compact marking graph]
We can then ask to add a meta transition when cycles are detected (a
new state that covers an existing one). We can further ask to remove
states that are covered when cycles are detected.
[python]
^^^^^^^^^^^^^^^^^^^^^^^^^^
>>> m = StateGraph(n, cycles=True)
>>> m.build()
>>> m.draw("lash-cycles.png", landscape=True, print_state=ps, print_arc=pa)
>>> m = StateGraph(n, remove=True)
>>> m.build()
>>> m.draw("lash-remove.png", landscape=True, print_state=ps, print_arc=pa)
^^^^^^^^^^^^^^^^^^^^^^^^^^
In this particular example, exploiting cycles results in
link:lash-cycles.png[the same graph as above], but with the $remove$
option, we get a more compact graph:
image:lash-remove.png[The most compact graph]
Finally, note that using the $cycles$ option automatically turns on
the $loops$ option, and using the $remove$ option turns on the
$cycles$ option (and thus also the $loops$ one).
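These implications between options could be normalised as in the
following sketch ($StateGraph$ handles this internally; the helper
name is hypothetical):

```python
# remove implies cycles, which in turn implies loops

def normalise(loops=False, cycles=False, remove=False):
    cycles = cycles or remove
    loops = loops or cycles
    return {'loops': loops, 'cycles': cycles, 'remove': remove}

assert normalise(remove=True) == \
    {'loops': True, 'cycles': True, 'remove': True}
assert normalise(cycles=True) == \
    {'loops': True, 'cycles': True, 'remove': False}
```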
Exporting to PNML and other formats
-----------------------------------
To do...
Writing a plugin
----------------
To do...
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:cc="http://creativecommons.org/ns#"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns="http://www.w3.org/2000/svg"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
width="744.09448819"
height="1052.3622047"
id="svg4049"
sodipodi:version="0.32"
inkscape:version="0.46"
sodipodi:docname="snakes-logo.svg"
inkscape:output_extension="org.inkscape.output.svg.inkscape">
<defs
id="defs4051">
<linearGradient
id="linearGradient4671">
<stop
style="stop-color:#ffd43b;stop-opacity:1;"
offset="0"
id="stop4673" />
<stop
style="stop-color:#ffe873;stop-opacity:1"
offset="1"
id="stop4675" />
</linearGradient>
<linearGradient
id="linearGradient4689">
<stop
style="stop-color:#5a9fd4;stop-opacity:1;"
offset="0"
id="stop4691" />
<stop
style="stop-color:#306998;stop-opacity:1;"
offset="1"
id="stop4693" />
</linearGradient>
<linearGradient
id="linearGradient3275">
<stop
style="stop-color:#5a9fd4;stop-opacity:1;"
offset="0"
id="stop3277" />
<stop
style="stop-color:#ffd43b;stop-opacity:1;"
offset="1"
id="stop3279" />
</linearGradient>
<marker
inkscape:stockid="Arrow2Mend"
orient="auto"
refY="0.0"
refX="0.0"
id="Arrow2Mend"
style="overflow:visible;">
<path
id="path4724"
style="font-size:12.0;fill-rule:evenodd;stroke-width:0.62500000;stroke-linejoin:round;"
d="M 8.7185878,4.0337352 L -2.2072895,0.016013256 L 8.7185884,-4.0017078 C 6.9730900,-1.6296469 6.9831476,1.6157441 8.7185878,4.0337352 z "
transform="scale(0.6) rotate(180) translate(0,0)" />
</marker>
<marker
inkscape:stockid="Arrow1Mend"
orient="auto"
refY="0.0"
refX="0.0"
id="Arrow1Mend"
style="overflow:visible;">
<path
id="path4706"
d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
style="fill-rule:evenodd;stroke:#000000;stroke-width:1.0pt;marker-start:none;"
transform="scale(0.4) rotate(180) translate(10,0)" />
</marker>
<marker
inkscape:stockid="Arrow2Lend"
orient="auto"
refY="0.0"
refX="0.0"
id="Arrow2Lend"
style="overflow:visible;">
<path
id="path4718"
style="font-size:12.0;fill-rule:evenodd;stroke-width:0.62500000;stroke-linejoin:round;"
d="M 8.7185878,4.0337352 L -2.2072895,0.016013256 L 8.7185884,-4.0017078 C 6.9730900,-1.6296469 6.9831476,1.6157441 8.7185878,4.0337352 z "
transform="scale(1.1) rotate(180) translate(1,0)" />
</marker>
<linearGradient
id="linearGradient4671-685">
<stop
id="stop3989"
offset="0"
style="stop-color:#9d9d9d;stop-opacity:1;" />
<stop
id="stop3991"
offset="1"
style="stop-color:#b9b9b9;stop-opacity:1" />
</linearGradient>
<linearGradient
id="linearGradient4689-776">
<stop
id="stop3983"
offset="0"
style="stop-color:#979797;stop-opacity:1;" />
<stop
id="stop3985"
offset="1"
style="stop-color:#646464;stop-opacity:1;" />
</linearGradient>
<inkscape:perspective
sodipodi:type="inkscape:persp3d"
inkscape:vp_x="0 : 526.18109 : 1"
inkscape:vp_y="0 : 1000 : 0"
inkscape:vp_z="744.09448 : 526.18109 : 1"
inkscape:persp3d-origin="372.04724 : 350.78739 : 1"
id="perspective4057" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient4689-776"
id="linearGradient4656"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(0.562541,0,0,0.567972,78.198395,2.6157917)"
x1="26.648937"
y1="20.603781"
x2="135.66525"
y2="114.39767" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient4671-685"
id="linearGradient4658"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(0.562541,0,0,0.567972,78.198395,2.6157917)"
x1="150.96111"
y1="192.35176"
x2="112.03144"
y2="137.27299" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient4689-776"
id="linearGradient7767"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(0.562541,0,0,0.567972,78.198395,2.6157917)"
x1="26.648937"
y1="20.603781"
x2="135.66525"
y2="114.39767" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient4671-685"
id="linearGradient7769"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(0.562541,0,0,0.567972,78.198395,2.6157917)"
x1="150.96111"
y1="192.35176"
x2="112.03144"
y2="137.27299" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient4689-776"
id="linearGradient7775"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(0.562541,0,0,0.567972,78.198395,2.6157917)"
x1="26.648937"
y1="20.603781"
x2="135.66525"
y2="114.39767" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient4671-685"
id="linearGradient7777"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(0.562541,0,0,0.567972,78.198395,2.6157917)"
x1="150.96111"
y1="192.35176"
x2="112.03144"
y2="137.27299" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient4671-685"
id="linearGradient7780"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(0.2881307,0,0,0.2909125,159.73956,131.18265)"
x1="150.96111"
y1="192.35176"
x2="112.03144"
y2="137.27299" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient4689-776"
id="linearGradient7783"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(0.2881307,0,0,0.2909125,159.73956,131.18265)"
x1="26.648937"
y1="20.603781"
x2="135.66525"
y2="114.39767" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient3275"
id="linearGradient3281"
x1="28.148954"
y1="-295.26871"
x2="296.6087"
y2="-295.26871"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(0,1,-1,0,-2,0)" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient4689"
id="linearGradient3978"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(0.562541,0,0,0.567972,-9.399749,-5.305317)"
x1="26.648937"
y1="20.603781"
x2="135.66525"
y2="114.39767" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient4671"
id="linearGradient3980"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(0.562541,0,0,0.567972,-9.399749,-5.305317)"
x1="150.96111"
y1="192.35176"
x2="112.03144"
y2="137.27299" />
</defs>
<sodipodi:namedview
id="base"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
gridtolerance="10000"
guidetolerance="10"
objecttolerance="10"
inkscape:pageopacity="0.0"
inkscape:pageshadow="2"
inkscape:zoom="0.98994949"
inkscape:cx="322.98744"
inkscape:cy="827.15881"
inkscape:document-units="px"
inkscape:current-layer="layer1"
showgrid="false"
inkscape:window-width="1440"
inkscape:window-height="878"
inkscape:window-x="0"
inkscape:window-y="0"
showguides="true"
inkscape:guide-bbox="true" />
<metadata
id="metadata4054">
<rdf:RDF>
<cc:Work
rdf:about="">
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
</cc:Work>
</rdf:RDF>
</metadata>
<g
inkscape:label="Layer 1"
inkscape:groupmode="layer"
id="layer1">
<g
id="g4096">
<path
id="rect4073"
d="M 53.21875 28.4375 L 53.21875 296.3125 L 131.5625 296.3125 L 131.5625 254.96875 C 127.40006 255.10395 123.06445 255.05613 122.53125 254.25 C 121.71364 253.01388 127.57742 239.04558 128.15625 237.59375 C 128.35581 237.09321 129.06594 235.3019 129.9375 233.09375 C 86.808636 220.67326 55.21875 180.88942 55.21875 133.78125 C 55.218752 76.74507 101.52631 30.4375 158.5625 30.4375 C 215.59868 30.4375 261.875 76.745063 261.875 133.78125 C 261.87501 181.48268 229.49465 221.65529 185.53125 233.53125 L 185.53125 277.28125 C 189.44172 277.51593 194.36954 277.98504 194.625 278.96875 C 195.04299 280.57833 189.62262 291.82522 187.4375 296.3125 L 321.0625 296.3125 L 321.0625 28.4375 L 53.21875 28.4375 z M 173.125 236.0625 C 168.3641 236.73542 163.50817 237.09375 158.5625 237.09375 C 154.43899 237.09375 150.37275 236.84547 146.375 236.375 C 148.86246 241.49268 153.47528 251.18709 153.09375 252.65625 C 152.83829 253.63995 147.91047 254.10906 144 254.34375 L 144 296.3125 L 170.59375 296.3125 C 170.24478 295.4181 169.79764 294.30751 169.6875 294.03125 C 169.10867 292.57942 163.2449 278.61112 164.0625 277.375 C 164.59703 276.56685 168.95311 276.51976 173.125 276.65625 L 173.125 236.0625 z "
style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:3;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1" />
<path
id="text4087"
d="M 284.16123,61.264954 C 279.36122,61.264918 275.47323,59.896919 272.49723,57.160954 C 269.52123,54.424924 268.00923,50.680928 267.96123,45.928954 C 267.91323,41.272938 269.49723,37.360941 272.71323,34.192954 C 275.68923,31.264948 279.57722,29.416949 284.37723,28.648954 L 288.55323,40.600954 C 285.57722,40.600938 283.58522,40.792938 282.57723,41.176954 C 280.70522,41.848937 279.76922,43.408935 279.76923,45.856954 C 279.81722,48.64093 280.87322,50.032929 282.93723,50.032954 C 284.95322,50.032929 286.89722,48.40093 288.76923,45.136954 C 291.93721,39.616939 293.73721,36.592942 294.16923,36.064954 C 296.80921,32.800946 300.0972,31.168948 304.03323,31.168954 C 308.06519,31.168948 311.40119,32.608946 314.04123,35.488954 C 316.63319,38.224941 317.92918,41.632937 317.92923,45.712954 C 317.92918,49.552929 316.60919,52.744926 313.96923,55.288954 C 312.14519,57.064922 309.09719,58.91292 304.82523,60.832954 L 300.50523,50.104954 C 302.5692,49.09693 303.7212,48.54493 303.96123,48.448954 C 305.4492,47.536931 306.1932,46.600932 306.19323,45.640954 C 306.1932,44.872934 305.9772,44.176935 305.54523,43.552954 C 305.1132,42.928936 304.5132,42.592936 303.74523,42.544954 C 303.6972,42.640936 302.0412,45.328934 298.77723,50.608954 C 296.71321,53.968925 294.76921,56.440922 292.94523,58.024954 C 290.40121,60.184919 287.52121,61.264918 284.30523,61.264954 L 284.16123,61.264954 M 284.23323,60.904954 C 288.07321,60.904918 291.19321,59.680919 293.59323,57.232954 C 294.98521,55.840923 296.68921,53.200926 298.70523,49.312954 C 300.5772,45.760933 302.2572,43.384935 303.74523,42.184954 L 303.88923,42.184954 C 304.7532,42.184937 305.4012,42.520936 305.83323,43.192954 C 306.3132,43.912935 306.5532,44.704934 306.55323,45.568954 C 306.5052,47.056932 304.6572,48.61693 301.00923,50.248954 L 305.04123,60.328954 C 309.12119,58.45692 312.04919,56.656922 313.82523,54.928954 C 316.36919,52.432926 317.64118,49.33693 317.64123,45.640954 C 317.64118,41.656937 316.36919,38.296941 313.82523,35.560954 C 
311.32919,32.824946 308.08919,31.456947 304.10523,31.456954 C 300.2652,31.456947 297.0492,33.088946 294.45723,36.352954 C 294.21721,36.640942 292.41721,39.688939 289.05723,45.496954 C 287.18521,48.76093 285.19322,50.392928 283.08123,50.392954 C 281.88122,50.392928 280.96922,49.936929 280.34523,49.024954 C 279.72122,48.160931 279.40922,47.104932 279.40923,45.856954 C 279.40922,43.408935 280.32122,41.776937 282.14523,40.960954 C 283.24922,40.480938 285.26522,40.240939 288.19323,40.240954 L 284.16123,29.080954 C 279.60122,29.848949 275.88123,31.672947 273.00123,34.552954 C 269.92923,37.624941 268.39323,41.392937 268.39323,45.856954 C 268.39323,50.560928 269.83323,54.232925 272.71323,56.872954 C 275.64123,59.560919 279.48122,60.904918 284.23323,60.904954 M 305.47323,59.392954 L 304.53723,57.160954 L 306.33723,58.960954 L 305.47323,59.392954 M 308.85723,57.664954 C 304.2492,53.056926 301.9452,50.728928 301.94523,50.680954 C 302.9532,50.200929 303.8652,49.696929 304.68123,49.168954 L 311.52123,55.936954 C 310.60919,56.608922 309.72119,57.184922 308.85723,57.664954 M 313.46523,54.280954 L 306.62523,47.512954 C 307.00919,46.936932 307.22519,46.312933 307.27323,45.640954 C 307.27319,45.016934 307.05719,44.176935 306.62523,43.120954 L 315.33723,51.832954 C 314.90519,52.648926 314.28119,53.464925 313.46523,54.280954 M 316.34523,49.096954 L 300.14523,32.824954 C 301.4412,32.440946 302.6892,32.248947 303.88923,32.248954 L 316.70523,45.136954 L 316.70523,45.712954 C 316.70519,46.768932 316.58519,47.896931 316.34523,49.096954 M 315.91323,40.528954 L 308.42523,32.968954 C 311.83319,34.408944 314.32919,36.928942 315.91323,40.528954 M 292.00923,57.376954 L 285.31323,50.752954 C 286.03322,50.320929 286.82522,49.624929 287.68923,48.664954 L 294.31323,55.216954 C 293.49721,56.128923 292.72921,56.848922 292.00923,57.376954 M 295.96923,53.200954 L 289.20123,46.432954 C 289.58521,45.856933 290.16121,44.944934 290.92923,43.696954 L 297.76923,50.608954 C 297.0012,51.760927 
296.40121,52.624926 295.96923,53.200954 M 299.06523,48.232954 L 292.22523,41.320954 L 293.73723,38.512954 L 300.64923,45.424954 L 299.06523,48.232954 M 302.08923,43.048954 L 295.32123,36.352954 C 296.13721,35.440943 296.90521,34.720944 297.62523,34.192954 L 305.40123,42.040954 C 304.6332,41.752937 304.0572,41.608937 303.67323,41.608954 C 303.2412,41.608937 302.7132,42.088937 302.08923,43.048954 M 286.60923,60.112954 L 269.32923,42.832954 C 269.52123,41.728937 269.88123,40.624938 270.40923,39.520954 L 278.90523,48.016954 C 279.24122,49.600929 280.20122,50.560928 281.78523,50.896954 L 289.77723,58.888954 C 288.67321,59.416919 287.61721,59.824919 286.60923,60.112954 M 282.93723,60.328954 C 279.76922,60.040919 277.96922,59.680919 277.53723,59.248954 L 269.90523,51.616954 C 269.56923,51.280928 269.25723,49.528929 268.96923,46.360954 L 282.93723,60.328954 M 282.86523,40.096954 L 275.88123,33.112954 C 276.50523,32.728946 277.44122,32.200947 278.68923,31.528954 L 286.75323,39.664954 C 285.21722,39.664939 283.92122,39.808939 282.86523,40.096954 M 278.76123,44.200954 L 271.70523,37.144954 C 272.18523,36.424942 272.85723,35.632943 273.72123,34.768954 L 280.27323,41.392954 C 279.55322,42.016937 279.04922,42.952936 278.76123,44.200954 M 285.02523,34.192954 L 281.35323,30.520954 C 282.31322,30.232949 283.05722,30.064949 283.58523,30.016954 L 285.02523,34.192954 M 305.61723,58.888954 L 305.76123,58.888954 L 305.54523,58.672954 L 305.61723,58.888954 M 308.85723,57.304954 C 309.81719,56.728922 310.51319,56.272923 310.94523,55.936954 L 304.60923,49.600954 C 304.1292,49.888929 303.4092,50.272929 302.44923,50.752954 L 308.92923,57.232954 L 308.85723,57.304954 M 313.39323,53.704954 C 313.92119,53.224926 314.42519,52.624926 314.90523,51.904954 L 307.48923,44.416954 C 307.58519,44.992934 307.60919,45.400933 307.56123,45.640954 C 307.60919,46.264933 307.46519,46.864932 307.12923,47.440954 L 313.39323,53.704954 M 316.20123,48.376954 C 316.34519,47.512931 316.41719,46.648932 
316.41723,45.784954 L 316.34523,45.280954 L 303.74523,32.608954 C 302.7372,32.608946 301.7532,32.752946 300.79323,33.040954 L 316.20123,48.376954 M 314.61723,38.656954 C 313.46519,36.688942 311.95319,35.176944 310.08123,34.120954 L 314.61723,38.656954 M 292.08123,56.944954 C 292.46521,56.608922 293.04121,56.056923 293.80923,55.288954 L 287.68923,49.168954 C 287.06521,49.840929 286.48922,50.368928 285.96123,50.752954 L 292.08123,56.944954 M 295.89723,52.552954 C 296.37721,51.976927 296.83321,51.328928 297.26523,50.608954 L 291.00123,44.272954 L 289.70523,46.432954 L 295.89723,52.552954 M 298.99323,47.656954 L 300.21723,45.496954 L 293.88123,39.088954 L 292.58523,41.248954 L 298.99323,47.656954 M 302.01723,42.472954 C 302.6892,41.560937 303.4092,41.176938 304.17723,41.320954 L 297.55323,34.624954 C 296.9772,35.056944 296.40121,35.608943 295.82523,36.280954 L 302.01723,42.472954 M 286.68123,59.680954 C 287.59321,59.440919 288.43321,59.15292 289.20123,58.816954 L 281.64123,51.256954 C 279.91322,50.824928 278.88122,49.792929 278.54523,48.160954 L 270.55323,40.096954 C 270.16923,41.056938 269.90523,41.944937 269.76123,42.760954 L 286.68123,59.680954 M 282.07323,59.896954 L 269.47323,47.296954 C 269.80923,49.840929 270.07323,51.208928 270.26523,51.400954 L 277.82523,58.888954 C 278.20922,59.27292 279.62522,59.608919 282.07323,59.896954 M 283.00923,39.664954 C 283.68122,39.520939 284.66522,39.400939 285.96123,39.304954 L 278.61723,32.032954 C 277.65722,32.512946 276.93722,32.896946 276.45723,33.184954 L 283.00923,39.664954 M 278.54523,43.480954 C 278.73722,42.712936 279.14522,41.992937 279.76923,41.320954 L 273.72123,35.272954 C 273.19323,35.848943 272.66523,36.448942 272.13723,37.072954 L 278.54523,43.480954 M 284.16123,32.824954 L 283.36923,30.448954 L 282.07323,30.664954 L 284.16123,32.824954 M 268.17723,113.0622 L 268.10523,105.5022 L 295.75323,82.318204 L 268.03323,82.318204 L 267.96123,70.150204 L 318.21723,70.078204 L 318.28923,79.870204 L 292.44123,101.1102 L 
318.43323,100.9662 L 318.50523,113.0622 L 268.17723,113.0622 M 268.60923,112.7022 L 318.14523,112.7022 L 318.21723,101.3262 L 291.50523,101.4702 L 317.92923,79.726204 L 318.00123,70.438204 L 268.46523,70.510204 L 268.39323,81.958204 L 296.83323,81.958204 L 268.53723,105.6462 L 268.60923,112.7022 M 317.49723,110.7582 L 308.85723,102.1182 L 313.60923,102.1182 L 317.42523,105.9342 L 317.49723,110.7582 M 310.51323,111.9822 L 300.64923,102.1182 L 305.40123,102.1182 L 315.26523,111.9822 L 310.51323,111.9822 M 302.37723,111.9822 L 292.58523,102.1902 L 297.33723,102.1902 L 307.12923,111.9822 L 302.37723,111.9822 M 294.24123,111.9822 L 279.55323,97.294204 L 282.14523,95.134204 L 298.99323,111.9822 L 294.24123,111.9822 M 286.10523,111.9822 L 275.08923,101.0382 L 277.68123,98.878204 L 290.85723,111.9822 L 286.10523,111.9822 M 317.20923,77.926204 L 310.44123,71.158204 L 315.19323,71.158204 L 317.13723,73.174204 L 317.20923,77.926204 M 300.00123,93.406204 L 292.80123,86.206204 L 295.39323,83.974204 L 302.66523,91.246204 L 300.00123,93.406204 M 313.39323,82.390204 L 302.23323,71.230204 L 306.98523,71.158204 L 316.05723,80.230204 L 313.39323,82.390204 M 295.53723,97.078204 L 288.33723,89.878204 L 290.92923,87.718204 L 298.20123,94.918204 L 295.53723,97.078204 M 291.07323,100.7502 L 283.87323,93.622204 L 286.53723,91.462204 L 293.66523,98.590204 L 291.07323,100.7502 M 277.82523,111.9822 L 270.62523,104.7822 L 273.21723,102.5502 L 282.64923,111.9822 L 277.82523,111.9822 M 308.92923,86.062204 L 294.09723,71.230204 L 298.84923,71.230204 L 311.59323,83.902204 L 308.92923,86.062204 M 269.68923,111.9822 L 269.18523,111.4062 L 269.11323,106.6542 L 274.51323,111.9822 L 269.68923,111.9822 M 304.46523,89.734204 L 297.19323,82.462204 L 298.63323,81.238204 L 295.96923,81.238204 L 285.96123,71.230204 L 290.71323,71.230204 L 307.12923,87.574204 L 304.46523,89.734204 M 287.83323,81.238204 L 277.82523,71.230204 L 282.57723,71.230204 L 292.65723,81.238204 L 287.83323,81.238204 M 
279.69723,81.238204 L 269.68923,71.230204 L 274.44123,71.230204 L 284.44923,81.238204 L 279.69723,81.238204 M 271.56123,81.238204 L 269.04123,78.718204 L 268.96923,73.894204 L 276.31323,81.238204 L 271.56123,81.238204 M 317.06523,109.8942 L 317.13723,106.0782 L 313.46523,102.4782 L 309.72123,102.4782 L 317.06523,109.8942 M 310.65723,111.6222 L 314.40123,111.6222 L 305.25723,102.4782 L 301.51323,102.4782 L 310.65723,111.6222 M 302.52123,111.6222 L 306.26523,111.6222 L 297.19323,102.5502 L 293.44923,102.5502 L 302.52123,111.6222 M 294.38523,111.6222 L 298.12923,111.6222 L 289.05723,102.5502 L 288.40923,102.5502 L 288.76923,102.2622 L 282.14523,95.638204 L 280.05723,97.366204 L 294.38523,111.6222 M 316.77723,77.062204 L 316.84923,73.318204 L 315.04923,71.518204 L 311.30523,71.518204 L 316.77723,77.062204 M 286.24923,111.6222 L 289.99323,111.6222 L 277.68123,99.310204 L 275.66523,101.0382 L 286.24923,111.6222 M 300.00123,92.974204 L 302.08923,91.246204 L 295.32123,84.478204 L 293.30523,86.206204 L 300.00123,92.974204 M 313.46523,81.886204 L 315.48123,80.230204 L 306.84123,71.518204 L 303.09723,71.590204 L 313.46523,81.886204 M 295.53723,96.646204 L 297.62523,94.918204 L 290.92923,88.222204 L 288.84123,89.950204 L 295.53723,96.646204 M 291.07323,100.3182 L 293.16123,98.590204 L 286.46523,91.894204 L 284.44923,93.622204 L 291.07323,100.3182 M 277.96923,111.6222 L 281.78523,111.6222 L 273.21723,103.0542 L 271.12923,104.7822 L 277.96923,111.6222 M 309.00123,85.558204 L 311.01723,83.902204 L 298.70523,71.590204 L 294.96123,71.590204 L 309.00123,85.558204 M 269.83323,111.6222 L 273.64923,111.6222 L 269.54523,107.5182 L 269.47323,111.2622 L 269.83323,111.6222 M 304.53723,89.230204 L 306.55323,87.574204 L 290.56923,71.590204 L 286.82523,71.590204 L 296.11323,80.878204 L 299.64123,80.878204 L 297.69723,82.462204 L 304.53723,89.230204 M 287.97723,80.878204 L 291.79323,80.878204 L 282.43323,71.590204 L 278.68923,71.590204 L 287.97723,80.878204 M 279.84123,80.878204 L 
283.58523,80.878204 L 274.29723,71.590204 L 270.55323,71.590204 L 279.84123,80.878204 M 271.70523,80.878204 L 275.44923,80.878204 L 269.40123,74.830204 L 269.32923,78.574204 L 271.70523,80.878204 M 267.96123,168.68333 L 268.03323,168.68333 L 267.96123,155.79533 L 280.92123,151.33133 L 280.99323,138.87533 L 268.03323,134.26733 L 267.96123,121.45133 L 318.14523,139.95533 L 318.21723,150.03533 L 268.03323,168.68333 L 268.03323,168.68333 L 267.96123,168.68333 M 268.46523,168.17933 L 317.85723,149.74733 L 317.92923,140.24333 L 268.46523,121.95533 L 268.39323,133.97933 L 281.35323,138.58733 L 281.42523,151.61933 L 268.39323,156.08333 L 268.46523,168.17933 M 317.20923,143.91533 L 312.16923,138.87533 L 317.13723,140.74733 L 317.20923,143.91533 M 303.16923,154.42733 L 295.60923,146.86733 L 298.77723,145.64333 L 306.40923,153.27533 L 303.16923,154.42733 M 315.04923,150.03533 L 299.13723,134.12333 C 303.5052,135.75531 305.8332,136.57131 306.12123,136.57133 C 307.36919,137.53131 309.21719,139.30731 311.66523,141.89933 C 313.48919,143.8193 315.28919,145.7393 317.06523,147.65933 L 317.13723,149.24333 L 315.04923,150.03533 M 293.01723,147.08333 L 293.08923,147.08333 L 293.01723,143.05133 L 298.34523,145.06733 L 293.08923,147.08333 L 293.08923,147.08333 L 293.01723,147.08333 M 291.21723,158.81933 L 282.00123,149.60333 L 281.92923,145.21133 L 294.45723,157.66733 L 291.21723,158.81933 M 309.07323,152.26733 L 286.10523,129.29933 C 288.40921,130.11532 290.76121,130.97932 293.16123,131.89133 C 295.75321,134.05131 298.9932,137.17131 302.88123,141.25133 C 309.02519,147.6833 312.14519,150.9473 312.24123,151.04333 L 309.07323,152.26733 M 285.31323,161.05133 L 277.82523,153.56333 L 281.06523,152.41133 L 288.48123,159.89933 L 285.31323,161.05133 M 279.33723,163.28333 L 271.70523,155.65133 L 275.01723,154.49933 L 282.57723,162.05933 L 279.33723,163.28333 M 268.96923,167.09933 L 269.04123,167.09933 L 268.96923,164.79533 L 270.69723,166.52333 L 269.04123,167.09933 L 269.04123,167.09933 L 
268.96923,167.09933 M 273.43323,165.44333 L 269.04123,161.05133 L 268.96923,156.65933 L 276.60123,164.29133 L 273.43323,165.44333 M 297.19323,156.65933 L 281.92923,141.39533 L 282.00123,138.15533 C 280.12922,138.05931 277.72922,136.78731 274.80123,134.33933 C 272.40123,132.37131 270.48123,130.40332 269.04123,128.43533 L 268.96923,124.11533 L 292.29723,147.37133 L 292.22523,148.09133 L 292.80123,147.87533 L 300.36123,155.43533 L 297.19323,156.65933 M 296.76123,143.69933 L 292.22523,141.97133 L 292.29723,143.55533 L 273.21723,124.47533 C 275.52123,125.33932 277.84922,126.22732 280.20123,127.13933 C 282.26522,128.81932 285.07322,131.50732 288.62523,135.20333 C 291.31321,138.03531 294.02521,140.86731 296.76123,143.69933 M 270.98523,134.26733 L 269.04123,133.54733 L 268.96923,132.32333 L 270.98523,134.26733 M 316.77723,143.05133 L 316.84923,140.96333 L 313.53723,139.81133 L 316.77723,143.05133 M 303.31323,153.99533 L 305.76123,153.13133 L 298.70523,146.07533 L 296.25723,147.01133 L 303.31323,153.99533 M 315.19323,149.60333 L 316.70523,149.02733 L 316.77723,147.80333 C 314.23319,145.0673 310.60919,141.44331 305.90523,136.93133 C 305.7612,136.78731 303.9852,136.13931 300.57723,134.98733 L 315.19323,149.60333 M 309.14523,151.83533 L 311.59323,150.89933 L 292.87323,132.17933 C 292.72921,132.03532 290.95321,131.38732 287.54523,130.23533 L 309.14523,151.83533 M 293.44923,146.50733 L 297.33723,145.06733 L 293.37723,143.55533 L 293.44923,146.50733 M 291.36123,158.45933 L 293.80923,157.52333 L 282.36123,146.07533 L 282.28923,149.38733 L 291.36123,158.45933 M 285.38523,160.61933 L 287.83323,159.75533 L 280.99323,152.84333 L 278.47323,153.70733 L 285.38523,160.61933 M 297.26523,156.22733 L 299.71323,155.29133 L 292.72923,148.30733 L 291.86523,148.59533 L 291.93723,147.51533 L 282.36123,137.93933 L 282.28923,141.25133 L 297.26523,156.22733 M 279.48123,162.85133 L 281.92923,161.91533 L 274.87323,154.93133 L 272.42523,155.79533 L 279.48123,162.85133 M 269.47323,166.59533 L 
270.12123,166.37933 L 269.40123,165.65933 L 269.47323,166.59533 M 273.50523,165.08333 L 275.95323,164.14733 L 269.40123,157.52333 L 269.32923,160.90733 L 273.50523,165.08333 M 295.17723,142.69133 C 291.81721,139.04331 286.75322,133.93131 279.98523,127.35533 C 279.79322,127.16332 278.01722,126.51532 274.65723,125.41133 L 291.93723,142.69133 L 291.86523,141.46733 L 295.17723,142.69133 M 282.21723,137.79533 L 269.40123,124.97933 L 269.32923,128.36333 C 271.20123,130.37932 273.76923,132.89931 277.03323,135.92333 C 277.17722,136.06731 278.90522,136.69131 282.21723,137.79533 M 269.61723,133.40333 L 269.40123,133.18733 L 269.32923,133.33133 L 269.61723,133.40333 M 318.21723,218.70983 L 295.10523,193.79783 L 268.03323,218.34983 L 267.96123,201.78983 L 282.21723,189.04583 L 268.03323,189.04583 L 267.96123,176.08583 L 318.14523,176.08583 L 318.21723,189.04583 L 307.05723,189.04583 L 318.14523,201.14183 L 318.21723,218.70983 M 317.78523,217.77383 L 317.85723,201.28583 L 306.19323,188.68583 L 317.78523,188.68583 L 317.85723,176.44583 L 268.39323,176.44583 L 268.32123,188.68583 L 283.15323,188.68583 L 268.39323,202.00583 L 268.32123,217.48583 L 295.10523,193.29383 L 317.78523,217.77383 M 316.99323,180.90983 L 316.99323,180.83783 L 313.24923,177.16583 L 317.06523,177.16583 L 316.99323,180.83783 L 317.06523,180.90983 L 316.99323,180.90983 M 316.99323,205.38983 L 288.76923,177.16583 L 293.16123,177.09383 C 299.7852,183.62182 303.8412,187.62981 305.32923,189.11783 C 309.88919,193.72581 313.75319,197.8778 316.92123,201.57383 L 316.99323,205.38983 M 316.92123,213.52583 L 280.48923,177.16583 L 284.88123,177.16583 L 316.84923,209.13383 L 316.92123,213.52583 M 315.69723,187.96583 L 304.89723,177.16583 L 309.28923,177.16583 L 316.77723,184.72583 L 316.84923,187.96583 L 315.69723,187.96583 M 307.56123,187.96583 L 296.76123,177.16583 L 301.15323,177.16583 L 311.95323,187.96583 L 307.56123,187.96583 M 269.61723,215.18183 L 268.82523,214.38983 L 268.75323,209.99783 L 271.92123,213.09383 L 
269.61723,215.18183 M 286.68123,199.70183 L 279.62523,192.57383 L 281.92923,190.48583 L 288.98523,197.61383 L 286.68123,199.70183 M 273.86523,211.29383 L 268.82523,206.18183 L 268.75323,202.29383 L 269.04123,202.07783 L 276.16923,209.20583 L 273.86523,211.29383 M 282.43323,203.51783 L 275.30523,196.46183 L 277.60923,194.37383 L 284.73723,201.50183 L 282.43323,203.51783 M 278.11323,207.40583 L 270.98523,200.27783 L 273.36123,198.26183 L 280.41723,205.31783 L 278.11323,207.40583 M 291.00123,195.81383 L 283.87323,188.75783 L 284.73723,187.96583 L 283.08123,187.96583 L 272.28123,177.16583 L 276.67323,177.16583 L 293.23323,193.72583 L 291.00123,195.81383 M 274.94523,187.96583 L 268.82523,181.77383 L 268.75323,177.38183 L 279.33723,187.96583 L 274.94523,187.96583 M 268.82523,187.96583 L 268.75323,185.51783 L 271.20123,187.96583 L 268.82523,187.96583 M 316.63323,180.04583 L 316.70523,177.52583 L 314.11323,177.52583 L 316.63323,180.04583 M 316.56123,204.52583 L 316.63323,201.71783 C 311.97719,196.29381 304.0812,188.22981 292.94523,177.52583 L 289.63323,177.52583 L 316.56123,204.52583 M 316.48923,212.66183 L 316.56123,209.27783 L 284.73723,177.52583 L 281.42523,177.52583 L 316.48923,212.66183 M 315.84123,187.60583 L 316.41723,187.60583 L 316.48923,184.86983 L 309.14523,177.52583 L 305.76123,177.52583 L 315.84123,187.60583 M 307.70523,187.60583 L 311.08923,187.60583 L 301.00923,177.52583 L 297.62523,177.52583 L 307.70523,187.60583 M 269.61723,214.67783 L 271.34523,213.09383 L 269.18523,210.86183 L 269.11323,214.24583 L 269.61723,214.67783 M 286.68123,199.19783 L 288.48123,197.61383 L 281.92923,190.98983 L 280.12923,192.57383 L 286.68123,199.19783 M 273.86523,210.78983 L 275.66523,209.20583 L 269.18523,202.72583 L 269.11323,206.03783 L 273.86523,210.78983 M 282.43323,203.08583 L 284.16123,201.42983 L 277.60923,194.87783 L 275.80923,196.46183 L 282.43323,203.08583 M 278.18523,206.90183 L 279.91323,205.31783 L 273.28923,198.69383 L 271.56123,200.34983 L 278.18523,206.90183 M 
291.00123,195.30983 L 292.72923,193.72583 L 276.52923,177.52583 L 273.14523,177.52583 L 283.22523,187.60583 L 285.74523,187.60583 L 284.44923,188.75783 L 291.00123,195.30983 M 269.25723,187.60583 L 270.40923,187.60583 L 269.18523,186.38183 L 269.25723,187.60583 M 275.08923,187.60583 L 278.47323,187.60583 L 269.18523,178.24583 L 269.11323,181.62983 L 275.08923,187.60583 M 268.03323,258.31433 L 267.96123,224.47433 L 318.50523,224.47433 L 318.57723,258.31433 L 306.62523,258.31433 L 306.69723,236.57033 L 297.91323,236.57033 L 297.98523,253.99433 L 286.03323,253.99433 L 286.10523,236.57033 L 279.84123,236.49833 L 279.91323,258.31433 L 268.03323,258.31433 M 268.46523,257.95433 L 279.62523,257.95433 L 279.55323,236.13833 L 286.53723,236.21033 L 286.46523,253.63433 L 297.69723,253.63433 L 297.62523,236.21033 L 307.12923,236.21033 L 307.05723,257.95433 L 318.21723,257.95433 L 318.28923,224.83433 L 268.39323,224.83433 L 268.39323,225.55433 L 268.96923,225.55433 L 278.83323,235.41833 L 278.04123,235.41833 L 278.11323,239.01833 L 268.39323,229.29833 L 268.39323,233.04233 L 278.11323,242.76233 L 278.18523,247.15433 L 268.46523,237.43433 L 268.39323,234.26633 L 268.46523,257.95433 M 277.75323,246.29033 L 277.82523,242.90633 L 268.82523,233.90633 L 268.75323,237.29033 L 277.75323,246.29033 M 277.75323,238.15433 L 277.68123,235.05833 L 277.96923,235.05833 L 268.75323,225.91433 L 268.68123,229.15433 L 277.75323,238.15433 M 313.39323,257.23433 L 307.84923,251.69033 L 307.77723,247.29833 L 317.71323,257.23433 L 313.39323,257.23433 M 307.84923,257.23433 L 307.77723,255.43433 L 309.57723,257.23433 L 307.84923,257.23433 M 317.49723,253.27433 L 307.77723,243.55433 L 307.70523,239.16233 L 317.42523,248.88233 L 317.49723,253.27433 M 317.42523,228.79433 L 314.11323,225.55433 L 317.35323,225.55433 L 317.42523,228.79433 M 317.35323,236.93033 L 305.90523,225.55433 L 310.29723,225.55433 L 317.28123,232.53833 L 317.35323,236.93033 M 317.20923,245.06633 C 315.09719,242.95431 311.92919,239.76231 
307.70523,235.49033 L 307.63323,235.49033 L 297.69723,225.55433 L 302.08923,225.55433 L 317.20923,240.74633 L 317.28123,245.06633 L 317.20923,245.06633 M 292.29723,252.91433 L 286.82523,247.37033 L 286.75323,242.97833 L 296.68923,252.91433 L 292.29723,252.91433 M 286.82523,252.91433 L 286.75323,251.11433 L 288.55323,252.91433 L 286.82523,252.91433 M 299.28123,235.49033 L 289.34523,225.55433 L 293.66523,225.55433 L 303.60123,235.49033 L 299.28123,235.49033 M 272.13723,257.23433 L 268.68123,253.70633 L 268.60923,249.38633 L 276.52923,257.23433 L 272.13723,257.23433 M 296.47323,248.95433 L 286.68123,239.23433 L 286.75323,235.49033 L 282.93723,235.41833 L 273.00123,225.55433 L 277.39323,225.55433 L 296.40123,244.56233 L 296.47323,248.95433 M 296.40123,240.81833 L 281.06523,225.55433 L 285.45723,225.55433 L 296.32923,236.42633 L 296.40123,240.81833 M 278.18523,255.29033 L 278.18523,255.21833 L 268.53723,245.57033 L 268.46523,241.17833 L 278.25723,250.89833 L 278.18523,255.21833 L 278.25723,255.29033 L 278.18523,255.29033 M 313.53723,256.87433 L 316.84923,256.87433 L 308.20923,248.16233 L 308.13723,251.54633 L 313.53723,256.87433 M 308.20923,256.87433 L 308.71323,256.87433 L 308.13723,256.29833 L 308.20923,256.87433 M 317.06523,252.33833 L 317.13723,249.02633 L 308.13723,240.02633 L 308.06523,243.33833 L 317.06523,252.33833 M 316.99323,227.93033 L 317.06523,225.91433 L 314.97723,225.91433 L 316.99323,227.93033 M 316.92123,236.06633 L 316.99323,232.75433 L 310.15323,225.91433 L 306.76923,225.91433 L 316.92123,236.06633 M 316.77723,244.20233 L 316.84923,240.89033 L 301.87323,225.91433 L 298.48923,225.91433 L 316.77723,244.20233 M 292.44123,252.55433 L 295.82523,252.55433 L 287.18523,243.84233 L 287.11323,247.22633 L 292.44123,252.55433 M 287.18523,252.55433 L 287.68923,252.55433 L 287.11323,251.97833 L 287.18523,252.55433 M 296.04123,248.09033 L 296.11323,244.70633 L 287.11323,235.70633 L 287.04123,239.09033 L 296.04123,248.09033 M 299.42523,235.13033 L 302.73723,235.13033 
L 293.52123,225.91433 L 290.20923,225.91433 L 299.42523,235.13033 M 272.28123,256.87433 L 275.66523,256.87433 L 269.04123,250.25033 L 268.96923,253.56233 L 272.28123,256.87433 M 295.96923,239.95433 L 296.04123,236.57033 L 285.31323,225.91433 L 282.00123,225.91433 L 295.96923,239.95433 M 277.82523,254.42633 L 277.89723,251.04233 L 268.89723,242.04233 L 268.82523,245.42633 L 277.82523,254.42633 M 286.39323,235.13033 L 277.17723,225.91433 L 273.79323,225.91433 L 283.00923,235.05833 L 286.39323,235.13033 M 284.16123,296.1087 C 279.36122,296.10867 275.47323,294.74067 272.49723,292.0047 C 269.52123,289.26867 268.00923,285.52468 267.96123,280.7727 C 267.91323,276.11669 269.49723,272.20469 272.71323,269.0367 C 275.68923,266.1087 279.57722,264.2607 284.37723,263.4927 L 288.55323,275.4447 C 285.57722,275.44469 283.58522,275.63669 282.57723,276.0207 C 280.70522,276.69269 279.76922,278.25269 279.76923,280.7007 C 279.81722,283.48468 280.87322,284.87668 282.93723,284.8767 C 284.95322,284.87668 286.89722,283.24468 288.76923,279.9807 C 291.93721,274.46069 293.73721,271.43669 294.16923,270.9087 C 296.80921,267.6447 300.0972,266.0127 304.03323,266.0127 C 308.06519,266.0127 311.40119,267.4527 314.04123,270.3327 C 316.63319,273.06869 317.92918,276.47669 317.92923,280.5567 C 317.92918,284.39668 316.60919,287.58868 313.96923,290.1327 C 312.14519,291.90867 309.09719,293.75667 304.82523,295.6767 L 300.50523,284.9487 C 302.5692,283.94068 303.7212,283.38868 303.96123,283.2927 C 305.4492,282.38068 306.1932,281.44468 306.19323,280.4847 C 306.1932,279.71668 305.9772,279.02068 305.54523,278.3967 C 305.1132,277.77269 304.5132,277.43669 303.74523,277.3887 C 303.6972,277.48469 302.0412,280.17268 298.77723,285.4527 C 296.71321,288.81267 294.76921,291.28467 292.94523,292.8687 C 290.40121,295.02867 287.52121,296.10867 284.30523,296.1087 L 284.16123,296.1087 M 284.23323,295.7487 C 288.07321,295.74867 291.19321,294.52467 293.59323,292.0767 C 294.98521,290.68467 296.68921,288.04468 298.70523,284.1567 C 
300.5772,280.60468 302.2572,278.22869 303.74523,277.0287 L 303.88923,277.0287 C 304.7532,277.02869 305.4012,277.36469 305.83323,278.0367 C 306.3132,278.75668 306.5532,279.54868 306.55323,280.4127 C 306.5052,281.90068 304.6572,283.46068 301.00923,285.0927 L 305.04123,295.1727 C 309.12119,293.30067 312.04919,291.50067 313.82523,289.7727 C 316.36919,287.27668 317.64118,284.18068 317.64123,280.4847 C 317.64118,276.50069 316.36919,273.14069 313.82523,270.4047 C 311.32919,267.6687 308.08919,266.3007 304.10523,266.3007 C 300.2652,266.3007 297.0492,267.9327 294.45723,271.1967 C 294.21721,271.48469 292.41721,274.53269 289.05723,280.3407 C 287.18521,283.60468 285.19322,285.23668 283.08123,285.2367 C 281.88122,285.23668 280.96922,284.78068 280.34523,283.8687 C 279.72122,283.00468 279.40922,281.94868 279.40923,280.7007 C 279.40922,278.25269 280.32122,276.62069 282.14523,275.8047 C 283.24922,275.32469 285.26522,275.08469 288.19323,275.0847 L 284.16123,263.9247 C 279.60122,264.6927 275.88123,266.5167 273.00123,269.3967 C 269.92923,272.46869 268.39323,276.23669 268.39323,280.7007 C 268.39323,285.40468 269.83323,289.07667 272.71323,291.7167 C 275.64123,294.40467 279.48122,295.74867 284.23323,295.7487 M 305.47323,294.2367 L 304.53723,292.0047 L 306.33723,293.8047 L 305.47323,294.2367 M 308.85723,292.5087 C 304.2492,287.90068 301.9452,285.57268 301.94523,285.5247 C 302.9532,285.04468 303.8652,284.54068 304.68123,284.0127 L 311.52123,290.7807 C 310.60919,291.45267 309.72119,292.02867 308.85723,292.5087 M 313.46523,289.1247 L 306.62523,282.3567 C 307.00919,281.78068 307.22519,281.15668 307.27323,280.4847 C 307.27319,279.86068 307.05719,279.02068 306.62523,277.9647 L 315.33723,286.6767 C 314.90519,287.49268 314.28119,288.30868 313.46523,289.1247 M 316.34523,283.9407 L 300.14523,267.6687 C 301.4412,267.2847 302.6892,267.0927 303.88923,267.0927 L 316.70523,279.9807 L 316.70523,280.5567 C 316.70519,281.61268 316.58519,282.74068 316.34523,283.9407 M 315.91323,275.3727 L 308.42523,267.8127 
C 311.83319,269.25269 314.32919,271.77269 315.91323,275.3727 M 292.00923,292.2207 L 285.31323,285.5967 C 286.03322,285.16468 286.82522,284.46868 287.68923,283.5087 L 294.31323,290.0607 C 293.49721,290.97267 292.72921,291.69267 292.00923,292.2207 M 295.96923,288.0447 L 289.20123,281.2767 C 289.58521,280.70068 290.16121,279.78868 290.92923,278.5407 L 297.76923,285.4527 C 297.0012,286.60468 296.40121,287.46868 295.96923,288.0447 M 299.06523,283.0767 L 292.22523,276.1647 L 293.73723,273.3567 L 300.64923,280.2687 L 299.06523,283.0767 M 302.08923,277.8927 L 295.32123,271.1967 C 296.13721,270.28469 296.90521,269.56469 297.62523,269.0367 L 305.40123,276.8847 C 304.6332,276.59669 304.0572,276.45269 303.67323,276.4527 C 303.2412,276.45269 302.7132,276.93269 302.08923,277.8927 M 286.60923,294.9567 L 269.32923,277.6767 C 269.52123,276.57269 269.88123,275.46869 270.40923,274.3647 L 278.90523,282.8607 C 279.24122,284.44468 280.20122,285.40468 281.78523,285.7407 L 289.77723,293.7327 C 288.67321,294.26067 287.61721,294.66867 286.60923,294.9567 M 282.93723,295.1727 C 279.76922,294.88467 277.96922,294.52467 277.53723,294.0927 L 269.90523,286.4607 C 269.56923,286.12468 269.25723,284.37268 268.96923,281.2047 L 282.93723,295.1727 M 282.86523,274.9407 L 275.88123,267.9567 C 276.50523,267.5727 277.44122,267.0447 278.68923,266.3727 L 286.75323,274.5087 C 285.21722,274.50869 283.92122,274.65269 282.86523,274.9407 M 278.76123,279.0447 L 271.70523,271.9887 C 272.18523,271.26869 272.85723,270.47669 273.72123,269.6127 L 280.27323,276.2367 C 279.55322,276.86069 279.04922,277.79669 278.76123,279.0447 M 285.02523,269.0367 L 281.35323,265.3647 C 282.31322,265.0767 283.05722,264.9087 283.58523,264.8607 L 285.02523,269.0367 M 305.61723,293.7327 L 305.76123,293.7327 L 305.54523,293.5167 L 305.61723,293.7327 M 308.85723,292.1487 C 309.81719,291.57267 310.51319,291.11667 310.94523,290.7807 L 304.60923,284.4447 C 304.1292,284.73268 303.4092,285.11668 302.44923,285.5967 L 308.92923,292.0767 L 
308.85723,292.1487 M 313.39323,288.5487 C 313.92119,288.06868 314.42519,287.46868 314.90523,286.7487 L 307.48923,279.2607 C 307.58519,279.83668 307.60919,280.24468 307.56123,280.4847 C 307.60919,281.10868 307.46519,281.70868 307.12923,282.2847 L 313.39323,288.5487 M 316.20123,283.2207 C 316.34519,282.35668 316.41719,281.49268 316.41723,280.6287 L 316.34523,280.1247 L 303.74523,267.4527 C 302.7372,267.4527 301.7532,267.5967 300.79323,267.8847 L 316.20123,283.2207 M 314.61723,273.5007 C 313.46519,271.53269 311.95319,270.02069 310.08123,268.9647 L 314.61723,273.5007 M 292.08123,291.7887 C 292.46521,291.45267 293.04121,290.90067 293.80923,290.1327 L 287.68923,284.0127 C 287.06521,284.68468 286.48922,285.21268 285.96123,285.5967 L 292.08123,291.7887 M 295.89723,287.3967 C 296.37721,286.82068 296.83321,286.17268 297.26523,285.4527 L 291.00123,279.1167 L 289.70523,281.2767 L 295.89723,287.3967 M 298.99323,282.5007 L 300.21723,280.3407 L 293.88123,273.9327 L 292.58523,276.0927 L 298.99323,282.5007 M 302.01723,277.3167 C 302.6892,276.40469 303.4092,276.02069 304.17723,276.1647 L 297.55323,269.4687 C 296.9772,269.90069 296.40121,270.45269 295.82523,271.1247 L 302.01723,277.3167 M 286.68123,294.5247 C 287.59321,294.28467 288.43321,293.99667 289.20123,293.6607 L 281.64123,286.1007 C 279.91322,285.66868 278.88122,284.63668 278.54523,283.0047 L 270.55323,274.9407 C 270.16923,275.90069 269.90523,276.78869 269.76123,277.6047 L 286.68123,294.5247 M 282.07323,294.7407 L 269.47323,282.1407 C 269.80923,284.68468 270.07323,286.05268 270.26523,286.2447 L 277.82523,293.7327 C 278.20922,294.11667 279.62522,294.45267 282.07323,294.7407 M 283.00923,274.5087 C 283.68122,274.36469 284.66522,274.24469 285.96123,274.1487 L 278.61723,266.8767 C 277.65722,267.3567 276.93722,267.7407 276.45723,268.0287 L 283.00923,274.5087 M 278.54523,278.3247 C 278.73722,277.55669 279.14522,276.83669 279.76923,276.1647 L 273.72123,270.1167 C 273.19323,270.69269 272.66523,271.29269 272.13723,271.9167 L 
278.54523,278.3247 M 284.16123,267.6687 L 283.36923,265.2927 L 282.07323,265.5087 L 284.16123,267.6687"
style="font-size:72px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:100%;writing-mode:lr-tb;text-anchor:start;fill:none;fill-opacity:1;stroke:url(#linearGradient3281);stroke-width:1px;stroke-linecap:round;stroke-linejoin:round;stroke-opacity:1;font-family:Malabars Tryout;-inkscape-font-specification:Malabars Tryout" />
<g
transform="translate(-4.7251541,72.197657)"
id="g3972">
<path
transform="matrix(0.5473468,0,0,0.5565847,134.61044,56.187104)"
d="M 160.75191,61.069302 A 56.146683,55.214787 0 1 1 48.458549,61.069302 A 56.146683,55.214787 0 1 1 160.75191,61.069302 z"
sodipodi:ry="55.214787"
sodipodi:rx="56.146683"
sodipodi:cy="61.069302"
sodipodi:cx="104.60523"
id="path3915"
style="fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:#000000;stroke-width:2.78393936;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
sodipodi:type="arc" />
<g
id="g3930"
transform="matrix(0.4966719,0,0,0.4966719,161.48334,59.089877)">
<path
id="path3943"
d="M 60.510156,6.3979729 C 55.926503,6.4192712 51.549217,6.8101906 47.697656,7.4917229 C 36.35144,9.4962267 34.291407,13.691825 34.291406,21.429223 L 34.291406,31.647973 L 61.103906,31.647973 L 61.103906,35.054223 L 34.291406,35.054223 L 24.228906,35.054223 C 16.436447,35.054223 9.6131468,39.73794 7.4789058,48.647973 C 5.0170858,58.860939 4.9078907,65.233996 7.4789058,75.897973 C 9.3848341,83.835825 13.936449,89.491721 21.728906,89.491723 L 30.947656,89.491723 L 30.947656,77.241723 C 30.947656,68.391821 38.6048,60.585475 47.697656,60.585473 L 74.478906,60.585473 C 81.933857,60.585473 87.885159,54.447309 87.885156,46.960473 L 87.885156,21.429223 C 87.885156,14.162884 81.755176,8.7044455 74.478906,7.4917229 C 69.872919,6.7249976 65.093809,6.3766746 60.510156,6.3979729 z M 46.010156,14.616723 C 48.779703,14.616723 51.041406,16.915369 51.041406,19.741723 C 51.041404,22.558059 48.779703,24.835473 46.010156,24.835473 C 43.23068,24.835472 40.978906,22.558058 40.978906,19.741723 C 40.978905,16.91537 43.23068,14.616723 46.010156,14.616723 z"
style="fill:url(#linearGradient3978);fill-opacity:1" />
<path
id="path3945"
d="M 91.228906,35.054223 L 91.228906,46.960473 C 91.228906,56.191228 83.403011,63.960472 74.478906,63.960473 L 47.697656,63.960473 C 40.361823,63.960473 34.291407,70.238956 34.291406,77.585473 L 34.291406,103.11672 C 34.291406,110.38306 40.609994,114.65704 47.697656,116.74172 C 56.184987,119.23733 64.323893,119.68835 74.478906,116.74172 C 81.229061,114.78733 87.885159,110.85411 87.885156,103.11672 L 87.885156,92.897973 L 61.103906,92.897973 L 61.103906,89.491723 L 87.885156,89.491723 L 101.29141,89.491723 C 109.08387,89.491723 111.98766,84.056315 114.69765,75.897973 C 117.49698,67.499087 117.37787,59.422197 114.69765,48.647973 C 112.77187,40.890532 109.09378,35.054223 101.29141,35.054223 L 91.228906,35.054223 z M 76.166406,99.710473 C 78.945884,99.710476 81.197656,101.98789 81.197656,104.80422 C 81.197654,107.63057 78.945881,109.92922 76.166406,109.92922 C 73.396856,109.92922 71.135156,107.63057 71.135156,104.80422 C 71.135158,101.98789 73.396853,99.710473 76.166406,99.710473 z"
style="fill:url(#linearGradient3980);fill-opacity:1" />
</g>
</g>
</g>
</g>
</svg>
import os, os.path, sys

snakes_version = open("VERSION").readline().strip()
package_version = open("debian/VERSION").readline().strip()
ppa_version = open("debian/PPA").readline().strip()
changelog_version = open("debian/changelog").readline().split()[1].strip("()")
distribs = [l.strip().split()[0] for l in open("debian/DISTRIB")]

base_dir = os.getcwd()
base_dist_dir = "dist"
dput_sh = open("dput.sh", "w")

def system (command) :
    print("*** %s" % command)
    retcode = os.system(command)
    if retcode != 0 :
        print("*** error return status (%s)" % retcode)
        sys.exit(retcode)

def chdir (path) :
    print("*** cd %s" % path)
    os.chdir(path)

def changelog (path, dist) :
    full_version = "%s-%s~ppa%s~%s1" % (snakes_version, package_version,
                                        ppa_version, dist)
    chdir(path)
    system("debchange -b --newversion %s --distribution %s 'see NEWS'"
           % (full_version, dist))
    chdir(base_dir)

def build_package (dist_dir, dist) :
    full_version = "%s-%s~ppa%s~%s1" % (snakes_version, package_version,
                                        ppa_version, dist)
    deb_dir = os.path.join(dist_dir, "python-snakes_%s" % full_version)
    if not os.path.isdir(dist_dir) :
        print("*** make dir %r" % dist_dir)
        os.makedirs(dist_dir)
    if os.path.isdir(deb_dir) :
        system("rm -rf %s" % deb_dir)
    system("hg archive %s" % deb_dir)
    changelog(deb_dir, dist)
    system("sed -i -e 's/DATE/$(date -R)/' %s/debian/copyright" % deb_dir)
    system("sed -i -e 's/UNRELEASED/%s/' %s/debian/changelog" % (dist, deb_dir))
    chdir(deb_dir)
    system("make doc")
    system("dpkg-buildpackage")
    system("dpkg-buildpackage -S -sa")
    chdir(base_dir)
    dput_sh.write("dput lp %s_source.changes\n" % deb_dir)

main_version = "%s-%s" % (snakes_version, package_version)
if main_version != changelog_version :
    system("debchange --newversion %s --distribution UNRELEASED 'see NEWS'"
           % main_version)
    system("hg commit -m 'updated debian/changelog' debian/changelog")

for dist in distribs :
    build_package(base_dist_dir, dist)
dput_sh.close()
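The version strings assembled above follow the Debian/Launchpad PPA convention `<upstream>-<debian>~ppa<ppa>~<distribution>1`. A standalone sketch of that construction (the function name is ours, not part of the script):

```python
def ppa_version_string(upstream, debian, ppa, dist):
    """Build the package version passed to debchange, using the same
    format string as the script above, e.g. 0.9.16-1~ppa1~lucid1."""
    return "%s-%s~ppa%s~%s1" % (upstream, debian, ppa, dist)

print(ppa_version_string("0.9.16", "1", "1", "lucid"))  # 0.9.16-1~ppa1~lucid1
```

The `~` separators sort lower than any other character in Debian version comparison, so a PPA build always upgrades to the matching official package.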
import glob, os, os.path

for src in glob.glob("snakes/lang/*/*.pgen") :
    tgt = os.path.join(os.path.dirname(src), "pgen.py")
    if not os.path.isfile(tgt) or os.path.getmtime(src) > os.path.getmtime(tgt) :
        print("python snakes/lang/pgen.py --output=%s %s" % (tgt, src))
        os.system("python snakes/lang/pgen.py --output=%s %s" % (tgt, src))

for src in glob.glob("snakes/lang/*/*.asdl") :
    tgt = os.path.join(os.path.dirname(src), "asdl.py")
    if not os.path.isfile(tgt) or os.path.getmtime(src) > os.path.getmtime(tgt) :
        print("python snakes/lang/asdl.py --output=%s %s" % (tgt, src))
        os.system("python snakes/lang/asdl.py --output=%s %s" % (tgt, src))
#!/usr/bin/env python
import sys, os
from distutils.core import setup

def doc_files() :
    import os, os.path
    result = {}
    for root, dirs, files in os.walk("doc") :
        target_dir = os.path.join("share/doc/python-snakes",
                                  *root.split(os.sep)[1:])
        for name in files :
            if target_dir not in result :
                result[target_dir] = []
            result[target_dir].append(os.path.join(root, name))
    return list(result.items())

if __name__ == "__main__" :
    print("Compiling Emacs files...")
    os.system("emacs -batch -f batch-byte-compile utils/abcd-mode.el")
    #
    setup(name="SNAKES",
          version=open("VERSION").read().strip(),
          description="SNAKES is the Net Algebra Kit for Editors and Simulators",
          long_description="""SNAKES is a general-purpose Petri net Python
          library that allows defining and executing most classes of Petri
          nets. It features a plugin system that allows extending it. In
          particular, plugins are provided to implement the operations
          usually found in the PBC and M-nets family.""",
          author="Franck Pommereau",
          author_email="pommereau@univ-paris12.fr",
          maintainer="Franck Pommereau",
          maintainer_email="pommereau@univ-paris12.fr",
          url="http://lacl.univ-paris12.fr/pommereau/soft/snakes",
          scripts=["bin/abcd",
                   "bin/snkc",
                   "bin/snkd",
                   ],
          packages=["snakes",
                    "snakes.lang",
                    "snakes.lang.pylib",
                    "snakes.lang.python",
                    "snakes.lang.abcd",
                    "snakes.lang.ctlstar",
                    "snakes.plugins",
                    "snakes.utils",
                    "snakes.utils.abcd",
                    "snakes.utils.ctlstar",
                    ],
          data_files=(doc_files()
                      + [("share/emacs/site-lisp", ["utils/abcd-mode.el",
                                                    "utils/abcd-mode.elc"])]),
          )
"""SNAKES is the Net Algebra Kit for Editors and Simulators

@author: Franck Pommereau
@organization: University of Paris 12
@copyright: (C) 2005 Franck Pommereau
@license: GNU Lesser General Public Licence (aka. GNU LGPL),
    see the file C{doc/COPYING} in the distribution or visit U{the GNU
    web site<http://www.gnu.org/licenses/licenses.html#LGPL>}
@contact: pommereau@univ-paris12.fr

SNAKES is a Python library for modelling and executing all sorts of
Petri nets. It is very general: most Petri net annotations can be
arbitrary Python expressions, and most values can be arbitrary Python
objects.

SNAKES can be further extended with plugins, several of which are
already provided; in particular, two plugins implement the Petri net
compositions defined for the Petri Box Calculus and its successors.
"""
version = "0.9.16"
defaultencoding = "utf-8"
class SnakesError (Exception) :
"An error in SNAKES"
pass
class ConstraintError (SnakesError) :
"Violation of a constraint"
pass
class NodeError (SnakesError) :
"Error related to a place or a transition"
pass
class DomainError (SnakesError) :
"Function applied out of its domain"
pass
class ModeError (SnakesError) :
"The modes of a transition cannot be found"
pass
class PluginError (SnakesError) :
"Error when adding a plugin"
pass
class UnificationError (SnakesError) :
"Error while unifying parameters"
pass
"""Python 2 and 3 compatibility layer
"""
import sys
try :
xrange
except NameError :
xrange = range
try :
reduce
except NameError :
from functools import reduce
try :
import StringIO as io
except ImportError :
import io
try :
next
except NameError :
def next (obj) :
return obj.next()
PY3 = sys.version > "3"
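The `PY3` flag above compares version strings lexicographically, which happens to work for current CPython numbering but is brittle. A sketch of the more robust check via `sys.version_info` (an alternative suggestion, not part of the original module):

```python
import sys

# More robust Python 3 detection: compare the major version number
# directly instead of relying on a lexicographic string comparison.
PY3 = sys.version_info[0] >= 3
```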
"""Basic data types and functions used in SNAKES"""
import operator, inspect
from snakes.compat import *
from snakes import DomainError
from snakes.hashables import hdict
from snakes.pnml import Tree
def cross (sets) :
"""Cross-product.
>>> list(cross([[1, 2], [3, 4, 5]]))
[(1, 3), (1, 4), (1, 5), (2, 3), (2, 4), (2, 5)]
>>> list(cross([[1, 2], [3, 4, 5], [6, 7, 8, 9]]))
[(1, 3, 6), (1, 3, 7), (1, 3, 8), (1, 3, 9), (1, 4, 6), (1, 4, 7),
(1, 4, 8), (1, 4, 9), (1, 5, 6), (1, 5, 7), (1, 5, 8), (1, 5, 9),
(2, 3, 6), (2, 3, 7), (2, 3, 8), (2, 3, 9), (2, 4, 6), (2, 4, 7),
(2, 4, 8), (2, 4, 9), (2, 5, 6), (2, 5, 7), (2, 5, 8), (2, 5, 9)]
>>> list(cross([[], [1]]))
[]
@param sets: the sets of values to use
@type sets: C{iterable(iterable(object))}
@return: a generator over the obtained tuples (a set is not used
because the tuples may hold unhashable objects)
@rtype: C{generator(tuple(object))}
"""
if len(sets) == 0 :
pass
elif len(sets) == 1 :
for item in sets[0] :
yield (item,)
else :
for item in sets[0] :
for others in cross(sets[1:]) :
yield (item,) + others
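`cross` enumerates the same tuples, in the same order, as the standard library's `itertools.product`; a minimal re-implementation sketch (the recursive generator above is the actual code, `product` is offered only as a reference point):

```python
import itertools

def cross(sets):
    # Equivalent of snakes.data.cross: the cartesian product of the given
    # iterables, yielding tuples with the last position varying fastest.
    return itertools.product(*sets)

pairs = list(cross([[1, 2], [3, 4, 5]]))
# pairs == [(1, 3), (1, 4), (1, 5), (2, 3), (2, 4), (2, 5)]
```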
def iterate (value) :
"""Like Python's builtin C{iter} but considers strings as atomic.
>>> list(iter([1, 2, 3]))
[1, 2, 3]
>>> list(iterate([1, 2, 3]))
[1, 2, 3]
>>> list(iter('foo'))
['f', 'o', 'o']
>>> list(iterate('foo'))
['foo']
@param value: any object
@type value: C{object}
@return: an iterator on the elements of C{value} if it is iterable
and is not a string, an iterator on the sole C{value} otherwise
@rtype: C{generator}
"""
if isinstance(value, str) :
return iter([value])
else :
try :
return iter(value)
except TypeError :
return iter([value])
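The point of `iterate` is that strings are not exploded into characters when values are fed to containers such as `MultiSet.add`. A self-contained sketch mirroring the function above:

```python
def iterate(value):
    # Mirror of snakes.data.iterate: strings are treated as atomic values,
    # other iterables are iterated, and non-iterables are wrapped.
    if isinstance(value, str):
        return iter([value])
    try:
        return iter(value)
    except TypeError:
        return iter([value])

atomic = list(iterate("foo"))   # ['foo'], not ['f', 'o', 'o']
```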
class WordSet (set) :
"""A set of words being able to generate fresh words."""
def fresh (self, add=False, min=1, allowed="abcdefghijklmnopqrstuvwxyz",
base="") :
"""Create a fresh word (ie, which is not in the set).
>>> w = WordSet(['foo', 'bar'])
>>> list(sorted(w))
['bar', 'foo']
>>> w.fresh(True, 3)
'aaa'
>>> list(sorted(w))
['aaa', 'bar', 'foo']
>>> w.fresh(True, 3)
'baa'
>>> list(sorted(w))
['aaa', 'baa', 'bar', 'foo']
@param add: add the created word to the set if C{add=True}
@type add: C{bool}
@param min: minimal length of the new word
@type min: C{int}
@param allowed: characters allowed in the new word
@type allowed: C{str}
"""
if base :
result = [base] + [allowed[0]] * max(0, min - len(base))
if base in self :
result.append(allowed[0])
pos = len(result) - 1
elif len(base) < min :
pos = 1
else :
pos = 0
else :
result = [allowed[0]] * min
pos = 0
while "".join(result) in self :
for c in allowed :
try :
result[pos] = c
except IndexError :
result.append(c)
if "".join(result) not in self :
break
pos += 1
if add :
self.add("".join(result))
return "".join(result)
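The enumeration performed by `fresh` can be pictured with a simpler, hypothetical helper that tries candidate words in length order until one is unused; the real method above is more efficient because it mutates the last candidate in place:

```python
import itertools

def fresh(words, allowed="abcdefghijklmnopqrstuvwxyz"):
    # Naive sketch: enumerate candidate words by increasing length
    # until one is not already in the given set.
    for size in itertools.count(1):
        for letters in itertools.product(allowed, repeat=size):
            word = "".join(letters)
            if word not in words:
                return word

print(fresh({"a", "b"}))  # 'c' is the first unused one-letter word
```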
class MultiSet (hdict) :
"""Set with repetitions, ie, function from values to integers.
MultiSets support various operations, in particular: addition
(C{+}), subtraction (C{-}), multiplication by a non-negative
integer (C{*k}), comparisons (C{<}, C{>}, etc.), length (C{len})"""
def __init__ (self, values=[]) :
"""Initialise the multiset, adding values to it.
>>> MultiSet([1, 2, 3, 1, 2])
MultiSet([...])
>>> MultiSet()
MultiSet([])
@param values: a single value or an iterable object holding
values (strings are not iterated)
@type values: any atomic object (C{str} included) or an
iterable object
"""
self.add(values)
def copy (self) :
"""Copy a C{MultiSet}
>>> MultiSet([1, 2, 3, 1, 2]).copy()
MultiSet([...])
@return: a copy of the multiset
@rtype: C{MultiSet}
"""
result = MultiSet()
result.update(self)
return result
__pnmltag__ = "multiset"
def __pnmldump__ (self) :
"""
>>> MultiSet([1, 2, 3, 4, 1, 2]).__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<multiset>
<item>
<value>
<object type="int">
...
</object>
</value>
<multiplicity>
...
</multiplicity>
</item>
<item>
<value>
<object type="int">
...
</object>
</value>
<multiplicity>
...
</multiplicity>
</item>
<item>
<value>
<object type="int">
...
</object>
</value>
<multiplicity>
...
</multiplicity>
</item>
<item>
<value>
<object type="int">
...
</object>
</value>
<multiplicity>
...
</multiplicity>
</item>
</multiset>
</pnml>
"""
nodes = []
for value in hdict.__iter__(self) :
nodes.append(Tree("item", None,
Tree("value", None, Tree.from_obj(value)),
Tree("multiplicity", str(self[value]))))
return Tree(self.__pnmltag__, None, *nodes)
@classmethod
def __pnmlload__ (cls, tree) :
"""Load a multiset from its PNML representation
>>> t = MultiSet([1, 2, 3, 4, 1, 2]).__pnmldump__()
>>> MultiSet.__pnmlload__(t)
MultiSet([...])
"""
result = cls()
for item in tree :
times = int(item.child("multiplicity").data)
value = item.child("value").child().to_obj()
result._add(value, times)
return result
def _add (self, value, times=1) :
"""Add a single value C{times} times.
@param value: the value to add
@type value: any object
@param times: the number of times that C{value} has to be
added
@type times: non negative C{int}
"""
if times < 0 :
raise ValueError("negative values are forbidden")
try :
self[value] += times
except KeyError :
self[value] = times
def add (self, values, times=1) :
"""Add values to the multiset.
>>> m = MultiSet()
>>> m.add([1, 2, 2, 3], 2)
>>> list(sorted(m.items()))
[1, 1, 2, 2, 2, 2, 3, 3]
>>> m.add(5, 3)
>>> list(sorted(m.items()))
[1, 1, 2, 2, 2, 2, 3, 3, 5, 5, 5]
@param values: the values to add or a single value to add
@type values: any atomic object (C{str} included) or an
iterable object
@param times: the number of times each value should be added
@type times: non negative C{int}"""
self.__mutable__()
for value in iterate(values) :
self._add(value, times)
def _remove (self, value, times=1) :
"""Remove a single value C{times} times.
@param value: the value to remove
@type value: any object
@param times: the number of times that C{value} has to be
removed
@type times: non negative C{int}
"""
if times < 0 :
raise ValueError("negative values are forbidden")
if times > self.get(value, 0) :
raise ValueError("not enough occurrences")
self[value] -= times
if self[value] == 0 :
del self[value]
def remove (self, values, times=1) :
"""Remove values from the multiset.
>>> m = MultiSet()
>>> m.add([1, 2, 2, 3], 2)
>>> list(sorted(m.items()))
[1, 1, 2, 2, 2, 2, 3, 3]
>>> m.remove(2, 3)
>>> list(sorted(m.items()))
[1, 1, 2, 3, 3]
>>> m.remove([1, 3], 2)
>>> list(sorted(m.items()))
[2]
@param values: the values to remove or a single value to remove
@type values: any atomic object (C{str} included) or an
iterable object
@param times: the number of times each value should be removed
@type times: non negative C{int}"""
self.__mutable__()
for value in iterate(values) :
self._remove(value, times)
def __call__ (self, value) :
"""Number of occurrences of C{value}.
>>> m = MultiSet([1, 1, 2, 3, 3, 3])
>>> m(1), m(2), m(3), m(4)
(2, 1, 3, 0)
@param value: the value to count
@type value: C{object}
@rtype: C{int}
"""
return self.get(value, 0)
def __iter__ (self) :
"""Iterate over the values (with repetitions).
Use C{MultiSet.keys} to ignore repetitions.
>>> list(sorted(iter(MultiSet([1, 2, 3, 1, 2]))))
[1, 1, 2, 2, 3]
@return: an iterator on the elements
@rtype: C{iterator}"""
for value in dict.__iter__(self) :
for count in range(self[value]) :
yield value
def items (self) :
"""
Return the list of items with repetitions. The list without
repetitions can be retrieved with the C{keys} method.
>>> m = MultiSet([1, 2, 2, 3])
>>> list(sorted(m.items()))
[1, 2, 2, 3]
>>> list(sorted(m.keys()))
[1, 2, 3]
@return: list of items with repetitions
@rtype: C{list}"""
return list(iter(self))
def __str__ (self) :
"""Return a simple string representation of the multiset
>>> str(MultiSet([1, 2, 2, 3]))
'{...}'
@return: simple string representation of the multiset
@rtype: C{str}
"""
return "{%s}" % ", ".join(repr(x) for x in self)
def __repr__ (self) :
"""Return a string representation of the multiset that is
suitable for C{eval}
>>> repr(MultiSet([1, 2, 2, 3]))
'MultiSet([...])'
@return: precise string representation of the multiset
@rtype: C{str}
"""
return "MultiSet([%s])" % ", ".join(repr(x) for x in self)
def __len__ (self) :
"""Return the number of elements, including repetitions.
>>> len(MultiSet([1, 2] * 3))
6
@rtype: C{int}
"""
if self.size() == 0 :
return 0
else :
return reduce(operator.add, self.values())
def size (self) :
"""Return the number of elements, excluding repetitions.
>>> MultiSet([1, 2] * 3).size()
2
@rtype: C{int}
"""
return dict.__len__(self)
def __add__ (self, other) :
"""Adds two multisets.
>>> MultiSet([1, 2, 3]) + MultiSet([2, 3, 4])
MultiSet([...])
@param other: the multiset to add
@type other: C{MultiSet}
@rtype: C{MultiSet}
"""
result = self.copy()
for value, times in dict.items(other) :
result._add(value, times)
return result
def __sub__ (self, other) :
"""Subtract two multisets.
>>> MultiSet([1, 2, 3]) - MultiSet([2, 3])
MultiSet([1])
>>> MultiSet([1, 2, 3]) - MultiSet([2, 3, 4])
Traceback (most recent call last):
...
ValueError: not enough occurrences
@param other: the multiset to subtract
@type other: C{MultiSet}
@rtype: C{MultiSet}
"""
result = self.copy()
for value, times in dict.items(other) :
result._remove(value, times)
return result
def __mul__ (self, other) :
"""Multiplication by a non-negative integer.
>>> MultiSet([1, 2]) * 3
MultiSet([...])
@param other: the integer to multiply by
@type other: non-negative C{int}
@rtype: C{MultiSet}
"""
if other < 0 :
raise ValueError("negative values are forbidden")
elif other == 0 :
return MultiSet()
else :
result = self.copy()
for value in self.keys() :
result[value] *= other
return result
__hash__ = hdict.__hash__
def __eq__ (self, other) :
"""Test for equality.
>>> MultiSet([1, 2, 3]*2) == MultiSet([1, 2, 3]*2)
True
>>> MultiSet([1, 2, 3]) == MultiSet([1, 2, 3, 3])
False
@param other: the multiset to compare with
@type other: C{MultiSet}
@rtype: C{bool}
"""
if len(self) != len(other) :
return False
else :
for val in self :
try :
if self[val] != other[val] :
return False
except (KeyError, TypeError) :
return False
return True
def __ne__ (self, other) :
"""Test for difference.
>>> MultiSet([1, 2, 3]*2) != MultiSet([1, 2, 3]*2)
False
>>> MultiSet([1, 2, 3]) != MultiSet([1, 2, 3, 3])
True
@param other: the multiset to compare with
@type other: C{MultiSet}
@rtype: C{bool}
"""
return not(self == other)
def __lt__ (self, other) :
"""Test for strict inclusion.
>>> MultiSet([1, 2, 3]) < MultiSet([1, 2, 3, 4])
True
>>> MultiSet([1, 2, 3]) < MultiSet([1, 2, 3, 3])
True
>>> MultiSet([1, 2, 3]) < MultiSet([1, 2, 3])
False
>>> MultiSet([1, 2, 3]) < MultiSet([1, 2])
False
>>> MultiSet([1, 2, 2]) < MultiSet([1, 2, 3, 4])
False
@param other: the multiset to compare with
@type other: C{MultiSet}
@rtype: C{bool}
"""
if not set(self.keys()) <= set(other.keys()) :
return False
result = False
for value, times in dict.items(self) :
count = other(value)
if times > count :
return False
elif times < count :
result = True
return result or (dict.__len__(self) < dict.__len__(other))
def __le__ (self, other) :
"""Test for inclusion.
>>> MultiSet([1, 2, 3]) <= MultiSet([1, 2, 3, 4])
True
>>> MultiSet([1, 2, 3]) <= MultiSet([1, 2, 3, 3])
True
>>> MultiSet([1, 2, 3]) <= MultiSet([1, 2, 3])
True
>>> MultiSet([1, 2, 3]) <= MultiSet([1, 2])
False
>>> MultiSet([1, 2, 2]) <= MultiSet([1, 2, 3, 4])
False
@param other: the multiset to compare with
@type other: C{MultiSet}
@rtype: C{bool}
"""
if not set(self.keys()) <= set(other.keys()) :
return False
for value, times in dict.items(self) :
count = other(value)
if times > count :
return False
return True
def __gt__ (self, other) :
"""Test for strict inclusion.
>>> MultiSet([1, 2, 3, 4]) > MultiSet([1, 2, 3])
True
>>> MultiSet([1, 2, 3, 3]) > MultiSet([1, 2, 3])
True
>>> MultiSet([1, 2, 3]) > MultiSet([1, 2, 3])
False
>>> MultiSet([1, 2]) > MultiSet([1, 2, 3])
False
>>> MultiSet([1, 2, 3, 4]) > MultiSet([1, 2, 2])
False
@param other: the multiset to compare with
@type other: C{MultiSet}
@rtype: C{bool}
"""
return other.__lt__(self)
def __ge__ (self, other) :
"""Test for inclusion.
>>> MultiSet([1, 2, 3, 4]) >= MultiSet([1, 2, 3])
True
>>> MultiSet([1, 2, 3, 3]) >= MultiSet([1, 2, 3])
True
>>> MultiSet([1, 2, 3]) >= MultiSet([1, 2, 3])
True
>>> MultiSet([1, 2]) >= MultiSet([1, 2, 3])
False
>>> MultiSet([1, 2, 3, 4]) >= MultiSet([1, 2, 2])
False
@param other: the multiset to compare with
@type other: C{MultiSet}
@rtype: C{bool}
"""
return other.__le__(self)
def domain (self) :
"""Return the domain of the multiset
>>> list(sorted((MultiSet([1, 2, 3, 4]) + MultiSet([1, 2, 3])).domain()))
[1, 2, 3, 4]
@return: the set of values in the domain
@rtype: C{set}
"""
return set(self.keys())
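For readers familiar with the standard library, `MultiSet` is close to `collections.Counter` restricted to positive counts; the main behavioural difference is that subtraction here raises `ValueError` when occurrences are missing, whereas `Counter` silently clips at zero. A comparison sketch:

```python
from collections import Counter

a = Counter([1, 2, 3])
b = Counter([2, 3, 4])
total = sorted((a + b).elements())      # multiset addition
# total == [1, 2, 2, 3, 3, 4]

# Counter subtraction clips at zero instead of raising an error;
# MultiSet([1, 2, 3]) - MultiSet([2, 3, 4]) would raise ValueError.
diff = Counter([1, 2, 3]) - Counter([2, 3, 4])
# diff == Counter({1: 1})
```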
class Substitution (object) :
"""Map names to values or names, equals the identity where not defined.
Substitutions support the C{+} operation (union with consistency
check between the two operands) and the C{*} operation which is
the composition of functions (C{(f*g)(x)} is C{f(g(x))}).
Several methods (eg, C{items}) return lists instead of sets; this
avoids the restriction of having only hashable values in a
substitution image.
"""
def __init__ (self, *largs, **dargs) :
"""Initialise using a dictionary as a mapping.
The expected arguments are any that are acceptable for
initialising a dictionary.
>>> Substitution()
Substitution()
>>> Substitution(x=1, y=2)
Substitution(...)
>>> Substitution([('x', 1), ('y', 2)])
Substitution(...)
>>> Substitution({'x': 1, 'y': 2})
Substitution(...)
"""
self._dict = dict(*largs, **dargs)
def __hash__ (self) :
"""
>>> hash(Substitution(x=1, y=2)) == hash(Substitution(y=2, x=1))
True
"""
# 153913524 = hash('snakes.data.Substitution')
return reduce(operator.xor,
(hash(i) for i in self._dict.items()),
153913524)
def __eq__ (self, other) :
"""
>>> Substitution(x=1, y=2) == Substitution(y=2, x=1)
True
>>> Substitution(x=1, y=2) == Substitution(y=1, x=1)
False
"""
try :
return self._dict == other._dict
except :
return False
def __ne__ (self, other) :
"""
>>> Substitution(x=1, y=2) != Substitution(y=2, x=1)
False
>>> Substitution(x=1, y=2) != Substitution(y=1, x=1)
True
"""
return not self.__eq__(other)
__pnmltag__ = "substitution"
def __pnmldump__ (self) :
"""Dumps a substitution to a PNML tree
>>> Substitution(x=1, y=2).__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<substitution>
<item>
<name>
...
</name>
<value>
<object type="int">
...
</object>
</value>
</item>
<item>
<name>
...
</name>
<value>
<object type="int">
...
</object>
</value>
</item>
</substitution>
</pnml>
@return: PNML representation
@rtype: C{snakes.pnml.Tree}
"""
nodes = []
for name, value in self._dict.items() :
nodes.append(Tree("item", None,
Tree("name", name),
Tree("value", None,
Tree.from_obj(value))))
return Tree(self.__pnmltag__, None, *nodes)
@classmethod
def __pnmlload__ (cls, tree) :
"""Load a substitution from its PNML representation
>>> t = Substitution(x=1, y=2).__pnmldump__()
>>> Substitution.__pnmlload__(t)
Substitution(...)
@param tree: the PNML tree to load
@type tree: C{snakes.pnml.Tree}
@return: the substitution loaded
@rtype: C{Substitution}
"""
result = cls()
for item in tree :
name = item.child("name").data
value = item.child("value").child().to_obj()
result._dict[name] = value
return result
def items (self) :
"""Return the list of pairs (name, value).
>>> Substitution(x=1, y=2).items()
[('...', ...), ('...', ...)]
@return: a list of pairs (name, value) for each mapped name
@rtype: C{list}
"""
return list(self._dict.items())
def domain (self) :
"""Return the set of mapped names.
>>> list(sorted(Substitution(x=1, y=2).domain()))
['x', 'y']
@return: the set of mapped names
@rtype: C{set}
"""
return set(self._dict.keys())
def image (self) :
"""Return the set of values associated to the names.
>>> list(sorted(Substitution(x=1, y=2).image()))
[1, 2]
@return: the set of values associated to names
@rtype: C{set}
"""
return set(self._dict.values())
def __contains__ (self, name) :
"""Test if a name is mapped.
>>> 'x' in Substitution(x=1, y=2)
True
>>> 'z' in Substitution(x=1, y=2)
False
@param name: the name to test
@type name: C{str} (usually)
@return: a Boolean indicating whether this name is in the
domain or not
@rtype: C{bool}
"""
return name in self._dict
def __iter__ (self) :
"""Iterate over the mapped names.
>>> list(sorted(iter(Substitution(x=1, y=2))))
['x', 'y']
@return: an iterator over the domain of the substitution
@rtype: C{generator}
"""
return iter(self._dict)
def __str__ (self) :
"""Return a compact string representation.
>>> str(Substitution(x=1, y=2))
'{... -> ..., ... -> ...}'
@return: a simple string representation
@rtype: C{str}
"""
return "{%s}" % ", ".join(["%s -> %r" % (str(var), val)
for var, val in self.items()])
def __repr__ (self) :
"""Return a string representation suitable for C{eval}.
>>> repr(Substitution(x=1, y=2))
'Substitution(...)'
@return: a precise string representation
@rtype: C{str}
"""
return "%s(%s)" % (self.__class__.__name__,
", ".join(("%s=%s" % (str(var), repr(val))
for var, val in self.items())))
def dict (self) :
"""Return the mapping as a dictionary.
>>> Substitution(x=1, y=2).dict()
{'...': ..., '...': ...}
@return: a dictionary that does the same mapping as the
substitution
@rtype: C{dict}
"""
return self._dict.copy()
def copy (self) :
"""Copy the mapping.
>>> Substitution(x=1, y=2).copy()
Substitution(...)
@return: a copy of the substitution
@rtype: C{Substitution}
"""
return Substitution(self.dict())
def __setitem__ (self, var, value) :
"""Assign an entry to the substitution
>>> s = Substitution()
>>> s['x'] = 42
>>> s
Substitution(x=42)
@param var: the name of the variable
@type var: C{str}
@param value: the value to which C{var} is bound
@type value: C{object}
"""
self._dict[var] = value
def __getitem__ (self, var) :
"""Return the mapped value.
Fails with C{DomainError} if C{var} is not mapped.
>>> s = Substitution(x=1, y=2)
>>> s['x']
1
>>> try : s['z']
... except DomainError : print(sys.exc_info()[1])
unbound variable 'z'
@param var: the name of the variable
@type var: C{str} (usually)
@return: the value that C{var} maps to
@rtype: C{object}
@raise DomainError: if C{var} does not belong to the domain
"""
try :
return self._dict[var]
except KeyError :
raise DomainError("unbound variable '%s'" % var)
def __call__ (self, var) :
"""Return the mapped value.
Never fails but returns C{var} if it is not mapped.
>>> s = Substitution(x=1, y=2)
>>> s('x')
1
>>> s('z')
'z'
@param var: the name of the variable
@type var: C{str} (usually)
@return: the value that C{var} maps to or C{var} itself if it
does not belong to the domain
@rtype: C{object}
"""
try :
return self._dict[var]
except KeyError :
return var
def __add__ (self, other) :
"""Add two substitutions.
Fails with C{DomainError} if the two substitutions map the same
name to different values.
>>> s = Substitution(x=1, y=2) + Substitution(y=2, z=3)
>>> s('x'), s('y'), s('z')
(1, 2, 3)
>>> try : Substitution(x=1, y=2) + Substitution(y=4, z=3)
... except DomainError : print(sys.exc_info()[1])
conflict on 'y'
@param other: another substitution
@type other: C{Substitution}
@return: the union of the substitutions
@rtype: C{Substitution}
@raise DomainError: when one name is mapped to distinct values
"""
for var in self :
if var in other and (self[var] != other[var]) :
raise DomainError("conflict on '%s'" % var)
s = self.__class__(self.dict())
s._dict.update(other.dict())
return s
def __mul__ (self, other) :
"""Compose two substitutions.
The composition of f and g is such that (f*g)(x) = f(g(x)).
>>> f = Substitution(a=1, d=3, y=5)
>>> g = Substitution(b='d', c=2, e=4, y=6)
>>> h = f*g
>>> h('a'), h('b'), h('c'), h('d'), h('e'), h('y'), h('x')
(1, 3, 2, 3, 4, 6, 'x')
@param other: another substitution
@type other: C{Substitution}
@return: the composition of the substitutions
@rtype: C{Substitution}
"""
res = self.copy()
for var in other :
res._dict[var] = self(other(var))
return res
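The composition rule `(f*g)(x) = f(g(x))` can be checked with a plain-dict sketch (hypothetical helper `compose`, using `dict.get` for the identity-outside-domain behaviour):

```python
def compose(f, g):
    # (f * g)(x) = f(g(x)), where both mappings act as the identity
    # outside their domain, exactly as in Substitution.__mul__.
    res = dict(f)
    for var, val in g.items():
        res[var] = f.get(val, val)
    return res

f = dict(a=1, d=3, y=5)
g = dict(b='d', c=2, e=4, y=6)
h = compose(f, g)
# h == {'a': 1, 'b': 3, 'c': 2, 'd': 3, 'e': 4, 'y': 6}
```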
class Symbol (object) :
"""A symbol that may be used as a constant
"""
def __init__ (self, name, export=True) :
"""
If C{export} is C{True}, the created symbol is exported under
its name. If C{export} is C{False}, no export is made.
Finally, if C{export} is a string, it specifies the name of
the exported symbol.
@param name: the name (or value of the symbol)
@type name: C{str}
@param export: the name under which the symbol is exported
@type export: C{str} or C{bool} or C{None}
>>> Symbol('foo')
Symbol('foo')
>>> foo
Symbol('foo')
>>> Symbol('egg', 'spam')
Symbol('egg', 'spam')
>>> spam
Symbol('egg', 'spam')
>>> Symbol('bar', False)
Symbol('bar', False)
>>> bar
Traceback (most recent call last):
...
NameError: ...
"""
self.name = name
if export is True :
export = name
self._export = export
if export :
inspect.stack()[1][0].f_globals[export] = self
__pnmltag__ = "symbol"
def __pnmldump__ (self) :
"""
>>> Symbol('egg', 'spam').__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<symbol name="egg">
<object type="str">
spam
</object>
</symbol>
</pnml>
>>> Symbol('foo').__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<symbol name="foo"/>
</pnml>
>>> Symbol('bar', False).__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<symbol name="bar">
<object type="bool">
False
</object>
</symbol>
</pnml>
"""
if self.name == self._export :
children = []
else :
children = [Tree.from_obj(self._export)]
return Tree(self.__pnmltag__, None, *children, **dict(name=self.name))
@classmethod
def __pnmlload__ (cls, tree) :
"""
>>> Symbol.__pnmlload__(Symbol('foo', 'bar').__pnmldump__())
Symbol('foo', 'bar')
>>> Symbol.__pnmlload__(Symbol('foo').__pnmldump__())
Symbol('foo')
>>> Symbol.__pnmlload__(Symbol('foo', False).__pnmldump__())
Symbol('foo', False)
"""
name = tree["name"]
try :
export = tree.child().to_obj()
except :
export = name
return cls(name, export)
def __eq__ (self, other) :
"""
>>> Symbol('foo', 'bar') == Symbol('foo')
True
>>> Symbol('egg') == Symbol('spam')
False
"""
try :
return (self.__class__.__name__ == other.__class__.__name__
and self.name == other.name)
except AttributeError :
return False
def __ne__ (self, other) :
"""
>>> Symbol('foo', 'bar') != Symbol('foo')
False
>>> Symbol('egg') != Symbol('spam')
True
"""
return not (self == other)
def __hash__ (self) :
"""
>>> hash(Symbol('foo', 'bar')) == hash(Symbol('foo'))
True
"""
return hash((self.__class__.__name__, self.name))
def __str__ (self) :
"""
>>> str(Symbol('foo'))
'foo'
"""
return self.name
def __repr__ (self) :
"""
>>> Symbol('foo')
Symbol('foo')
>>> Symbol('egg', 'spam')
Symbol('egg', 'spam')
>>> Symbol('bar', False)
Symbol('bar', False)
"""
if self._export == self.name :
return "%s(%r)" % (self.__class__.__name__, self.name)
else :
return "%s(%r, %r)" % (self.__class__.__name__, self.name,
self._export)
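The export mechanism above writes the new symbol into the caller's global namespace via `inspect.stack`; a minimal stand-alone sketch (hypothetical class `Sym`, not the actual `Symbol`) of that trick:

```python
import inspect

class Sym(object):
    # Minimal sketch of Symbol's export trick: the constructor injects
    # the new object into the *caller's* global namespace.
    def __init__(self, name):
        self.name = name
        inspect.stack()[1][0].f_globals[name] = self

Sym("spam")
# the enclosing module now has a global named "spam" bound to the symbol
```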
"""Hashable mutable objects.
This module proposes hashable versions of the mutable containers
C{list}, C{dict} and C{set}, called respectively C{hlist}, C{hdict}
and C{hset}. Once an object has been hashed, it becomes immutable
and raises a ValueError if a method that changes the object (call
such a method a I{mutation}) is invoked. The object can then be
un-hashed by calling C{unhash} on it so that it becomes mutable
again. Notice that this may cause trouble if the object is stored
in a set or dict that uses its hashcode to locate it. Notice also
that hashable containers cannot be hashed if they contain
non-hashable objects.
>>> l = hlist(range(5))
>>> l
hlist([0, 1, 2, 3, 4])
>>> _ = hash(l)
>>> l.append(5)
Traceback (most recent call last):
...
ValueError: hashed 'hlist' object is not mutable
>>> unhash(l)
>>> l.append(5)
>>> l
hlist([0, 1, 2, 3, 4, 5])
Testing if a C{hlist} is in a C{set}, C{dict}, C{hset} or C{hdict}
causes its hashing. If this is not desirable, you should either
compensate with a call to C{unhash}, or test if a copy is in the set:
>>> s = set()
>>> s.add(list(range(4)))
Traceback (most recent call last):
...
TypeError: ...
>>> s.add(hlist(range(4)))
>>> l = hlist(range(3))
>>> l in s
False
>>> l.append(3)
Traceback (most recent call last):
...
ValueError: hashed 'hlist' object is not mutable
>>> unhash(l)
>>> l.append(3)
>>> l[:] in s
True
>>> l.append(4)
"""
import inspect
from operator import xor
from snakes.compat import *
def unhash (obj) :
"""Make the object mutable again by removing its cached hash.
>>> l = hlist(range(3))
>>> _ = hash(l)
>>> l.append(3)
Traceback (most recent call last):
...
ValueError: hashed 'hlist' object is not mutable
>>> unhash(l)
>>> l.append(3)
@param obj: any object
@type obj: C{object}
"""
try :
del obj._hash
except :
pass
def hashable (cls) :
"""Wrap methods in a class in order to make it hashable.
"""
classname, bases, classdict = cls.__name__, cls.__bases__, cls.__dict__
for name, attr in classdict.items() :
try :
doc = inspect.getdoc(attr)
if doc is None :
attr.__doc__ = getattr(bases[0], name).__doc__
else :
attr.__doc__ = "\n".join([inspect.getdoc(getattr(bases[0],
name)),
doc])
except :
pass
def __hash__ (self) :
if not hasattr(self, "_hash") :
self._hash = self.__hash__.HASH(self)
return self._hash
__hash__.HASH = classdict["__hash__"]
__hash__.__doc__ = classdict["__hash__"].__doc__
cls.__hash__ = __hash__
def __mutable__ (self) :
"Raise C{ValueError} if the %s has been hashed."
if self.hashed() :
raise ValueError("hashed '%s' object is not mutable" % classname)
try :
__mutable__.__doc__ = __mutable__.__doc__ % classname
except :
pass
cls.__mutable__ = __mutable__
def hashed (self) :
"Return 'True' if the %s has been hashed, 'False' otherwise."
return hasattr(self, "_hash")
try :
hashed.__doc__ = hashed.__doc__ % classname
except :
pass
cls.hashed = hashed
def mutable (self) :
"Return 'True' if the %s is not hashed, 'False' otherwise."
return not self.hashed()
try :
mutable.__doc__ = mutable.__doc__ % classname
except :
pass
cls.mutable = mutable
return cls
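The freeze-on-hash behaviour installed by `hashable` can be illustrated on a minimal stand-alone class (hypothetical `HList`, not the actual wrapper):

```python
class HList(list):
    # Once hashed, the cached digest blocks every mutator until removed.
    def __hash__(self):
        if not hasattr(self, "_hash"):
            self._hash = hash(tuple(self))
        return self._hash

    def append(self, item):
        if hasattr(self, "_hash"):
            raise ValueError("hashed 'HList' object is not mutable")
        list.append(self, item)

l = HList([1, 2])
l.append(3)          # still mutable before hashing
h = hash(l)
# l.append(4) would now raise ValueError; del l._hash (unhash) re-enables it
```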
class hlist (list) :
"""Hashable lists.
>>> l = hlist(range(5))
>>> l
hlist([0, 1, 2, 3, 4])
>>> l.append(5)
>>> l
hlist([0, 1, 2, 3, 4, 5])
>>> _ = hash(l)
>>> l.append(6)
Traceback (most recent call last):
...
ValueError: hashed 'hlist' object is not mutable
>>> unhash(l)
>>> l.append(6)
"""
def __add__ (self, other) :
return self.__class__(list.__add__(self, other))
def __delitem__ (self, item) :
self.__mutable__()
list.__delitem__(self, item)
def __delslice__ (self, *l, **d) :
self.__mutable__()
list.__delslice__(self, *l, **d)
def __getslice__ (self, first, last) :
return self.__class__(list.__getslice__(self, first, last))
def __getitem__ (self, item) :
ret = list.__getitem__(self, item)
if ret.__class__ is list :
return self.__class__(ret)
return ret
def __hash__ (self) :
return hash(tuple(self))
def __iadd__ (self, other) :
return self.__class__(list.__iadd__(self, other))
def __imul__ (self, n) :
return self.__class__(list.__imul__(self, n))
def __mul__ (self, n) :
return self.__class__(list.__mul__(self, n))
def __repr__ (self) :
"""
>>> repr(hlist(range(3)))
'hlist([0, 1, 2])'
"""
return "%s(%s)" % (self.__class__.__name__, list.__repr__(self))
def __rmul__ (self, n) :
return self.__class__(list.__rmul__(self, n))
def __setitem__ (self, index, item) :
self.__mutable__()
list.__setitem__(self, index, item)
def __setslice__ (self, first, last, item) :
self.__mutable__()
list.__setslice__(self, first, last, item)
def append (self, item) :
self.__mutable__()
list.append(self, item)
def extend (self, iterable) :
self.__mutable__()
list.extend(self, iterable)
def insert (self, index, item) :
self.__mutable__()
list.insert(self, index, item)
def pop (self, index=-1) :
self.__mutable__()
return list.pop(self, index)
def remove (self, item) :
self.__mutable__()
list.remove(self, item)
def reverse (self) :
self.__mutable__()
list.reverse(self)
def sort (self, key=None, reverse=False) :
# Python 3's list.sort has no cmp argument and is keyword-only
self.__mutable__()
list.sort(self, key=key, reverse=reverse)
hlist = hashable(hlist)
class hdict (dict) :
"""Hashable dictionaries.
>>> l = hlist(range(5))
>>> d = hdict([(l, 0)])
>>> d
hdict({hlist([0, 1, 2, 3, 4]): 0})
>>> l in d
True
>>> [0, 1, 2, 3, 4] in d
Traceback (most recent call last):
...
TypeError: ...
>>> hlist([0, 1, 2, 3, 4]) in d
True
>>> d[hlist([0, 1, 2, 3, 4])]
0
>>> l.append(5)
Traceback (most recent call last):
...
ValueError: hashed 'hlist' object is not mutable
>>> _ = hash(d)
>>> d.pop(l) # any mutation would produce the same error
Traceback (most recent call last):
...
ValueError: hashed 'hdict' object is not mutable
>>> unhash(d)
>>> d.pop(l)
0
"""
def __delitem__ (self, key) :
self.__mutable__()
dict.__delitem__(self, key)
def __hash__ (self) :
"""
>>> _ = hash(hdict(a=1, b=2))
"""
# 252756382 = hash("snakes.hashables.hdict")
return reduce(xor, (hash(i) for i in self.items()), 252756382)
def __repr__ (self) :
"""
>>> repr(hdict(a=1))
"hdict({'a': 1})"
"""
return "%s(%s)" % (self.__class__.__name__, dict.__repr__(self))
def __setitem__ (self, key, item) :
self.__mutable__()
dict.__setitem__(self, key, item)
def clear (self) :
self.__mutable__()
dict.clear(self)
def copy (self) :
return self.__class__(dict.copy(self))
@classmethod
def fromkeys (_class, *args) :
return _class(dict.fromkeys(*args))
def pop (self, *args) :
self.__mutable__()
return dict.pop(self, *args)
def popitem (self) :
self.__mutable__()
return dict.popitem(self)
def setdefault (self, key, item=None) :
self.__mutable__()
return dict.setdefault (self, key, item)
def update (self, other, **more) :
self.__mutable__()
return dict.update(self, other, **more)
hdict = hashable(hdict)
class hset (set) :
"""Hashable sets.
>>> s = hset()
>>> l = hlist(range(5))
>>> s.add(l)
>>> s
hset([hlist([0, 1, 2, 3, 4])])
>>> l in s
True
>>> [0, 1, 2, 3, 4] in s
Traceback (most recent call last):
...
TypeError: ...
>>> hlist([0, 1, 2, 3, 4]) in s
True
>>> l.append(5)
Traceback (most recent call last):
...
ValueError: hashed 'hlist' object is not mutable
>>> _ = hash(s)
>>> s.discard(l) # any mutation would produce the same error
Traceback (most recent call last):
...
ValueError: hashed 'hset' object is not mutable
>>> unhash(s)
>>> s.discard(l)
"""
def __and__ (self, other) :
return self.__class__(set.__and__(self, other))
def __hash__ (self) :
"""
>>> _ = hash(hset([1, 2, 3]))
>>> _ = hash(hset(range(5)) - set([0, 4]))
"""
# 196496309 = hash("snakes.hashables.hset")
return reduce(xor, (hash(x) for x in self), 196496309)
def __iand__ (self, other) :
return self.__class__(set.__iand__(self, other))
def __ior__ (self, other) :
return self.__class__(set.__ior__(self, other))
def __isub__ (self, other) :
return self.__class__(set.__isub__(self, other))
def __ixor__ (self, other) :
return self.__class__(set.__ixor__(self, other))
def __or__ (self, other) :
return self.__class__(set.__or__(self, other))
def __rand__ (self, other) :
return self.__class__(set.__rand__(self, other))
def __repr__ (self) :
"""
>>> repr(hset([1]))
'hset([1])'
"""
return "%s([%s])" % (self.__class__.__name__,
set.__repr__(self)[6:-2])
def __ror__ (self, other) :
return self.__class__(set.__ror__(self, other))
def __rsub__ (self, other) :
return self.__class__(set.__rsub__(self, other))
def __rxor__ (self, other) :
return self.__class__(set.__rxor__(self, other))
def __str__ (self) :
return self.__class__.__name__ + "(" + set.__str__(self).split("(", 1)[1]
def __sub__ (self, other) :
return self.__class__(set.__sub__(self, other))
def __xor__ (self, other) :
return self.__class__(set.__xor__(self, other))
def add (self, item) :
self.__mutable__()
set.add(self, item)
def clear (self) :
self.__mutable__()
set.clear(self)
def copy (self) :
return self.__class__(set.copy(self))
def difference (self, other) :
return self.__class__(set.difference(self, other))
def difference_update (self, other) :
self.__mutable__()
set.difference_update(self, other)
def discard (self, item) :
self.__mutable__()
set.discard(self, item)
def intersection (self, other) :
return self.__class__(set.intersection(self, other))
def intersection_update (self, other) :
self.__mutable__()
set.intersection_update(self, other)
def pop (self) :
self.__mutable__()
return set.pop(self)
def remove (self, item) :
self.__mutable__()
set.remove(self, item)
def symmetric_difference (self, other) :
return self.__class__(set.symmetric_difference(self, other))
def symmetric_difference_update (self, other) :
self.__mutable__()
set.symmetric_difference_update(self, other)
def union (self, other) :
return self.__class__(set.union(self, other))
def update (self, other) :
self.__mutable__()
set.update(self, other)
hset = hashable(hset)
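The hashed/unhash protocol implemented above (a container freezes once it has been hashed and may be thawed again) can be sketched standalone with the stdlib only. `FrozenOnHashList` and this local `unhash` are illustrative names, not part of SNAKES:

```python
class FrozenOnHashList(list):
    """A list that refuses mutation once it has been hashed,
    mimicking the hashed/unhash protocol of snakes.hashables."""
    _hashed = False
    def __hash__(self):
        # record that a hash was taken, then hash an immutable view
        self._hashed = True
        return hash(tuple(self))
    def _mutable(self):
        # guard called by every mutating method
        if self._hashed:
            raise ValueError("hashed %r object is not mutable"
                             % self.__class__.__name__)
    def append(self, item):
        self._mutable()
        list.append(self, item)

def unhash(obj):
    "Allow mutation again (illustrative counterpart of snakes' unhash)."
    obj._hashed = False
```

Only `append` is guarded here; the real classes guard every mutating method the same way.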
import sys
if sys.version_info[:2] in ((2, 6), (2, 7)) :
import ast
elif sys.version_info[0] == 3 :
import ast
elif hasattr(sys, "pypy_version_info") :
import astpypy as ast
elif hasattr(sys, "JYTHON_JAR") :
import astjy25 as ast
elif sys.version_info[:2] == (2, 5) :
import astpy25 as ast
else :
raise NotImplementedError("unsupported Python version")
sys.modules["snkast"] = ast
from . import unparse as _unparse
from snakes.compat import *
class Names (ast.NodeVisitor) :
def __init__ (self) :
ast.NodeVisitor.__init__(self)
self.names = set()
def visit_Name (self, node) :
self.names.add(node.id)
def getvars (expr) :
"""
>>> list(sorted(getvars('x+y<z')))
['x', 'y', 'z']
>>> list(sorted(getvars('x+y<z+f(3,t)')))
['f', 't', 'x', 'y', 'z']
"""
names = Names()
names.visit(ast.parse(expr))
return names.names - set(['None', 'True', 'False'])
class Unparser(_unparse.Unparser) :
boolops = {"And": 'and', "Or": 'or'}
def _Interactive (self, tree) :
for stmt in tree.body :
self.dispatch(stmt)
def _Expression (self, tree) :
self.dispatch(tree.body)
def _ClassDef(self, tree):
self.write("\n")
for deco in tree.decorator_list:
self.fill("@")
self.dispatch(deco)
self.fill("class "+tree.name)
if tree.bases:
self.write("(")
for a in tree.bases:
self.dispatch(a)
self.write(", ")
self.write(")")
self.enter()
self.dispatch(tree.body)
self.leave()
def unparse (st) :
output = io.StringIO()
Unparser(st, output)
return output.getvalue().strip()
class Renamer (ast.NodeTransformer) :
def __init__ (self, map_names) :
ast.NodeTransformer.__init__(self)
self.map = [map_names]
def visit_ListComp (self, node) :
bind = self.map[-1].copy()
for comp in node.generators :
for name in getvars(comp.target) :
if name in bind :
del bind[name]
self.map.append(bind)
node.elt = self.visit(node.elt)
self.map.pop(-1)
return node
def visit_SetComp (self, node) :
return self.visit_ListComp(node)
def visit_DictComp (self, node) :
bind = self.map[-1].copy()
for comp in node.generators :
for name in getvars(comp.target) :
if name in bind :
del bind[name]
self.map.append(bind)
node.key = self.visit(node.key)
node.value = self.visit(node.value)
self.map.pop(-1)
return node
def visit_Name (self, node) :
return ast.copy_location(ast.Name(id=self.map[-1].get(node.id,
node.id),
ctx=ast.Load()), node)
def rename (expr, map={}, **ren) :
"""
>>> rename('x+y<z', x='t')
'((t + y) < z)'
>>> rename('x+y<z+f(3,t)', f='g', t='z', z='t')
'((x + y) < (t + g(3, z)))'
>>> rename('[x+y for x in range(3)]', x='z')
'[(x + y) for x in range(3)]'
>>> rename('[x+y for x in range(3)]', y='z')
'[(x + z) for x in range(3)]'
"""
map_names = dict(map)
map_names.update(ren)
transf = Renamer(map_names)
return unparse(transf.visit(ast.parse(expr)))
class Binder (Renamer) :
def visit_Name (self, node) :
if node.id in self.map[-1] :
return self.map[-1][node.id]
else :
return node
def bind (expr, map={}, **ren) :
"""
>>> bind('x+y<z', x=ast.Num(n=2))
'((2 + y) < z)'
>>> bind('x+y<z', y=ast.Num(n=2))
'((x + 2) < z)'
>>> bind('[x+y for x in range(3)]', x=ast.Num(n=2))
'[(x + y) for x in range(3)]'
>>> bind('[x+y for x in range(3)]', y=ast.Num(n=2))
'[(x + 2) for x in range(3)]'
"""
map_names = dict(map)
map_names.update(ren)
transf = Binder(map_names)
return unparse(transf.visit(ast.parse(expr)))
if __name__ == "__main__" :
import doctest
doctest.testmod()
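The `Renamer`/`rename` machinery above can be reproduced with the stdlib `ast` module alone (Python 3.9+ for `ast.unparse`); this simplified sketch skips the comprehension-scoping refinement handled by `visit_ListComp` and friends:

```python
import ast

class StdlibRenamer(ast.NodeTransformer):
    "Rename variables according to a mapping (simplified Renamer)."
    def __init__(self, mapping):
        self.mapping = mapping
    def visit_Name(self, node):
        # substitute the name if mapped, preserving context and location
        new = self.mapping.get(node.id, node.id)
        return ast.copy_location(ast.Name(id=new, ctx=node.ctx), node)

def rename_expr(expr, **ren):
    "Parse, rewrite and unparse an expression, like rename() above."
    tree = StdlibRenamer(ren).visit(ast.parse(expr))
    ast.fix_missing_locations(tree)
    return ast.unparse(tree)
```

Note that `ast.unparse` drops the redundant parentheses that the bundled `Unparser` emits, so the output differs cosmetically from the doctests of `rename`.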
module ABCD version "$Revision: 1 $"
{
abcd = AbcdSpec(decl* context, process body, expr* asserts)
decl = AbcdTypedef(identifier name, abcdtype type)
| AbcdBuffer(identifier name, abcdtype type,
Slice? capacity, expr content)
| AbcdSymbol(identifier* symbols)
| AbcdConst(identifier name, expr value)
| AbcdNet(identifier name, arguments args, AbcdSpec body)
| AbcdTask(identifier name, AbcdSpec body,
abcdtype* input, abcdtype* output)
| stmt -- this is too much, but does no harm
attributes (int lineno, int col_offset)
process = AbcdAction(access* accesses, object guard)
| AbcdFlowOp(process left, flowop op, process right)
| AbcdInstance(identifier net, identifier? asname, expr* args,
keyword* keywords, expr? starargs, expr? kwargs)
attributes (int lineno, int col_offset)
flowop = Sequence | Choice | Parallel | Loop
access = SimpleAccess(identifier buffer, arc arc, expr tokens)
| FlushAccess(identifier buffer, identifier target)
| SwapAccess(identifier buffer, expr target, expr tokens)
| Spawn(identifier net, expr pid, expr args)
| Wait(identifier net, expr pid, expr args)
| Suspend(identifier net, expr pid)
| Resume(identifier net, expr pid)
attributes (int lineno, int col_offset)
arc = Produce | Test | Consume | Fill
abcdtype = UnionType(abcdtype* types)
| IntersectionType(abcdtype* types)
| CrossType(abcdtype* types)
| ListType(abcdtype items)
| TupleType(abcdtype items)
| SetType(abcdtype items)
| DictType(abcdtype keys, abcdtype values)
| EnumType(expr* items)
| NamedType(identifier name)
attributes (int lineno, int col_offset)
--------------------------------------------------------------
-- the rest is copied from "snakes/lang/python/python.asdl" --
--------------------------------------------------------------
stmt = FunctionDef(identifier name, arguments args,
stmt* body, expr* decorator_list, expr? returns)
| ClassDef(identifier name,
expr* bases,
keyword* keywords,
expr? starargs,
expr? kwargs,
stmt* body,
expr* decorator_list)
| Return(expr? value)
| Delete(expr* targets)
| Assign(expr* targets, expr value)
| AugAssign(expr target, operator op, expr value)
| For(expr target, expr iter, stmt* body, stmt* orelse)
| While(expr test, stmt* body, stmt* orelse)
| If(expr test, stmt* body, stmt* orelse)
| With(expr context_expr, expr? optional_vars, stmt* body)
| Raise(expr? exc, expr? cause)
| TryExcept(stmt* body, excepthandler* handlers, stmt* orelse)
| TryFinally(stmt* body, stmt* finalbody)
| Assert(expr test, expr? msg)
| Import(alias* names)
| ImportFrom(identifier module, alias* names, int? level)
| Exec(expr body, expr? globals, expr? locals)
| Global(identifier* names)
| Nonlocal(identifier* names)
| Expr(expr value)
| Pass | Break | Continue
attributes (int lineno, int col_offset)
expr = BoolOp(boolop op, expr* values)
| BinOp(expr left, operator op, expr right)
| UnaryOp(unaryop op, expr operand)
| Lambda(arguments args, expr body)
| IfExp(expr test, expr body, expr orelse)
| Dict(expr* keys, expr* values)
| Set(expr* elts)
| ListComp(expr elt, comprehension* generators)
| SetComp(expr elt, comprehension* generators)
| DictComp(expr key, expr value, comprehension* generators)
| GeneratorExp(expr elt, comprehension* generators)
| Yield(expr? value)
| Compare(expr left, cmpop* ops, expr* comparators)
| Call(expr func, expr* args, keyword* keywords,
expr? starargs, expr? kwargs)
| Num(object n)
| Str(string s)
| Ellipsis
| Attribute(expr value, identifier attr, expr_context ctx)
| Subscript(expr value, slice slice, expr_context ctx)
| Starred(expr value, expr_context ctx)
| Name(identifier id, expr_context ctx)
| List(expr* elts, expr_context ctx)
| Tuple(expr* elts, expr_context ctx)
attributes (int lineno, int col_offset)
expr_context = Load | Store | Del | AugLoad | AugStore | Param
slice = Slice(expr? lower, expr? upper, expr? step)
| ExtSlice(slice* dims)
| Index(expr value)
boolop = And | Or
operator = Add | Sub | Mult | Div | Mod | Pow | LShift
| RShift | BitOr | BitXor | BitAnd | FloorDiv
unaryop = Invert | Not | UAdd | USub
cmpop = Eq | NotEq | Lt | LtE | Gt | GtE | Is | IsNot | In | NotIn
comprehension = (expr target, expr iter, expr* ifs)
excepthandler = ExceptHandler(expr? type, identifier? name, stmt* body)
attributes (int lineno, int col_offset)
arguments = (arg* args, identifier? vararg, expr? varargannotation,
arg* kwonlyargs, identifier? kwarg,
expr? kwargannotation, expr* defaults,
expr* kw_defaults)
arg = (identifier arg, expr? annotation)
keyword = (identifier arg, expr value)
alias = (identifier name, identifier? asname)
}
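Each ASDL constructor in this spec is turned by `snakes/lang/asdl.py` into a Python class whose `_fields` mirror the constructor arguments, as in the generated `asdl.py` further below. A minimal hand-written equivalent for `AbcdConst(identifier name, expr value)`, with a stand-in `_AST` base, looks like this:

```python
class _AST(object):
    "Stand-in for the generated AST base class."
    def __init__(self, **ARGS):
        for k, v in ARGS.items():
            setattr(self, k, v)

class AbcdConst(_AST):
    # from the ASDL rule: AbcdConst(identifier name, expr value)
    # plus the shared attributes (int lineno, int col_offset)
    _fields = ('name', 'value')
    _attributes = ('lineno', 'col_offset')
    def __init__(self, name, value, lineno=0, col_offset=0, **ARGS):
        _AST.__init__(self, **ARGS)
        self.name = name
        self.value = value
        self.lineno = int(lineno)
        self.col_offset = int(col_offset)
```

Mandatory ASDL fields become required arguments, `?` fields default to `None`, and `*` fields default to `[]` and are copied with `list(...)`.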
# Grammar for ABCD
# new tokens
$QUESTION '?'
$ELLIPSIS '...'
file_input: abcd_main ENDMARKER
abcd_main: (NEWLINE | abcd_global)* abcd_expr abcd_prop*
abcd_global: import_stmt | abcd_symbol | abcd_typedef | abcd_const | abcd_decl
abcd_spec: (NEWLINE | abcd_decl)* abcd_expr
abcd_decl: abcd_net | abcd_task | abcd_buffer
abcd_const: 'const' NAME '=' testlist
abcd_symbol: 'symbol' abcd_namelist
abcd_typedef: 'typedef' NAME ':' abcd_type
abcd_net: 'net' NAME parameters ':' abcd_suite
abcd_task: 'task' NAME typelist '-' '>' typelist ':' abcd_suite
abcd_suite: abcd_expr | NEWLINE INDENT abcd_spec DEDENT
abcd_buffer: [ decorators ] 'buffer' NAME ['[' test ']'] ':' abcd_type '=' testlist
abcd_namelist: NAME (',' NAME)*
typelist: '(' [abcd_type (',' abcd_type)*] ')'
abcd_type: abcd_and_type ('|' abcd_and_type)*
abcd_and_type: abcd_cross_type ('&' abcd_cross_type)*
abcd_cross_type: abcd_base_type ('*' abcd_base_type)*
abcd_base_type: (NAME ['(' abcd_type (',' abcd_type)* ')']
| 'enum' '(' test (',' test)* ')' | '(' abcd_type ')')
abcd_expr: abcd_choice_expr ('|' abcd_choice_expr)*
abcd_choice_expr: abcd_iter_expr ('+' abcd_iter_expr)*
abcd_iter_expr: abcd_seq_expr ('*' abcd_seq_expr)*
abcd_seq_expr: abcd_base_expr (';' abcd_base_expr)*
abcd_base_expr: (abcd_action | '(' abcd_expr ')') (NEWLINE)*
abcd_action: ('[' 'True' ']' |
'[' 'False' ']' |
'[' abcd_access_list ['if' test] ']' |
abcd_instance)
abcd_access_list: abcd_access (',' abcd_access)*
abcd_access: (NAME '+' '(' testlist ')' |
NAME '?' '(' testlist ')' |
NAME '-' '(' testlist ')' |
NAME '<>' '(' testlist '=' testlist ')' |
NAME '>>' '(' NAME ')' |
NAME '<<' '(' testlist_comp ')' |
NAME '.' NAME '(' test (',' test)* ')')
abcd_instance: [NAME ':' ':'] NAME '(' [arglist] ')'
tfpdef: NAME [':' ('net' | 'buffer' | 'task')]
abcd_prop: 'assert' test (NEWLINE)*
#
# the rest is from SNAKES/Python grammar
#
decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE
decorators: decorator+
decorated: decorators (classdef | funcdef)
funcdef: 'def' NAME parameters ['-' '>' test] ':' suite
parameters: '(' [typedargslist] ')'
typedargslist: ((tfpdef ['=' test] ',')*
('*' [tfpdef] (',' tfpdef ['=' test])* [',' '**' tfpdef]
| '**' tfpdef)
| tfpdef ['=' test] (',' tfpdef ['=' test])* [','])
varargslist: ((vfpdef ['=' test] ',')*
('*' [vfpdef] (',' vfpdef ['=' test])* [',' '**' vfpdef]
| '**' vfpdef)
| vfpdef ['=' test] (',' vfpdef ['=' test])* [','])
vfpdef: NAME
stmt: simple_stmt | compound_stmt
simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE
small_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt |
import_stmt | global_stmt | nonlocal_stmt | assert_stmt)
expr_stmt: testlist (augassign (yield_expr|testlist) |
('=' (yield_expr|testlist))*)
augassign: ('+=' | '-=' | '*=' | '/=' | '%=' | '&=' | '|=' | '^=' |
'<<=' | '>>=' | '**=' | '//=')
del_stmt: 'del' exprlist
pass_stmt: 'pass'
flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt
break_stmt: 'break'
continue_stmt: 'continue'
return_stmt: 'return' [testlist]
yield_stmt: yield_expr
raise_stmt: 'raise' [test ['from' test]]
import_stmt: import_name | import_from
import_name: 'import' dotted_as_names
import_from: ('from' (('.' | '...')* dotted_name | ('.' | '...')+)
'import' ('*' | '(' import_as_names ')' | import_as_names))
import_as_name: NAME ['as' NAME]
dotted_as_name: dotted_name ['as' NAME]
import_as_names: import_as_name (',' import_as_name)* [',']
dotted_as_names: dotted_as_name (',' dotted_as_name)*
dotted_name: NAME ('.' NAME)*
global_stmt: 'global' NAME (',' NAME)*
nonlocal_stmt: 'nonlocal' NAME (',' NAME)*
assert_stmt: 'assert' test [',' test]
compound_stmt: (if_stmt | while_stmt | for_stmt | try_stmt | with_stmt
| funcdef | classdef | decorated)
if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite]
while_stmt: 'while' test ':' suite ['else' ':' suite]
for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]
try_stmt: ('try' ':' suite
((except_clause ':' suite)+
['else' ':' suite]
['finally' ':' suite] |
'finally' ':' suite))
with_stmt: 'with' test [ with_var ] ':' suite
with_var: 'as' expr
except_clause: 'except' [test ['as' NAME]]
suite: simple_stmt | NEWLINE INDENT stmt+ DEDENT
test: or_test ['if' or_test 'else' test] | lambdef
test_nocond: or_test | lambdef_nocond
lambdef: 'lambda' [varargslist] ':' test
lambdef_nocond: 'lambda' [varargslist] ':' test_nocond
or_test: and_test ('or' and_test)*
and_test: not_test ('and' not_test)*
not_test: 'not' not_test | comparison
comparison: star_expr (comp_op star_expr)*
comp_op: '<'|'>'|'=='|'>='|'<='|'!='|'<>'|'in'|'not' 'in'|'is'|'is' 'not'
star_expr: ['*'] expr
expr: xor_expr ('|' xor_expr)*
xor_expr: and_expr ('^' and_expr)*
and_expr: shift_expr ('&' shift_expr)*
shift_expr: arith_expr (('<<'|'>>') arith_expr)*
arith_expr: term (('+'|'-') term)*
term: factor (('*'|'/'|'%'|'//') factor)*
factor: ('+'|'-'|'~') factor | power
power: atom trailer* ['**' factor]
atom: ('(' [yield_expr|testlist_comp] ')' |
'[' [testlist_comp] ']' |
'{' [dictorsetmaker] '}' |
NAME | NUMBER | STRING+ | '...' | 'None' | 'True' | 'False')
testlist_comp: test ( comp_for | (',' test)* [','] )
trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
subscriptlist: subscript (',' subscript)* [',']
subscript: test | [test] ':' [test] [sliceop]
sliceop: ':' [test]
exprlist: star_expr (',' star_expr)* [',']
testlist: test (',' test)* [',']
dictorsetmaker: ( (test ':' test (comp_for | (',' test ':' test)* [','])) |
(test (comp_for | (',' test)* [','])) )
classdef: 'class' NAME ['(' [arglist] ')'] ':' suite
arglist: (argument ',')* (argument [',']
|'*' test (',' argument)* [',' '**' test]
|'**' test)
argument: test [comp_for] | test '=' test
comp_iter: comp_for | comp_if
comp_for: 'for' exprlist 'in' or_test [comp_iter]
comp_if: 'if' test_nocond [comp_iter]
yield_expr: 'yield' [testlist]
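The flow-operator rules above (`abcd_expr` down to `abcd_base_expr`) use the usual precedence-layering pattern: each rule parses the next-tighter level, then folds left-associative occurrences of its own operator, so `;` binds tightest, then `*`, `+`, and `|`. A minimal recursive-descent sketch of that layering, on a hypothetical pre-split token list rather than the real SNAKES tokenizer:

```python
def parse_flow(tokens):
    "Parse a token list with the ABCD flow-operator precedence layering."
    pos = [0]
    def peek():
        return tokens[pos[0]] if pos[0] < len(tokens) else None
    def eat():
        tok = tokens[pos[0]]
        pos[0] += 1
        return tok
    def level(ops, tighter):
        # one precedence level: fold left-associative uses of its operators
        node = tighter()
        while peek() in ops:
            node = (eat(), node, tighter())
        return node
    def base():
        # abcd_base_expr: a parenthesised expression or an atomic action
        if peek() == '(':
            eat()
            node = parallel()
            assert eat() == ')'
            return node
        return eat()
    seq      = lambda: level({';'}, base)      # abcd_seq_expr
    loop     = lambda: level({'*'}, seq)       # abcd_iter_expr
    choice   = lambda: level({'+'}, loop)      # abcd_choice_expr
    parallel = lambda: level({'|'}, choice)    # abcd_expr
    return parallel()
```

With this layering, `a ; b + c` groups as `(a ; b) + c`, while parentheses override the precedence as in the grammar.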
# this file has been automatically generated running:
# snakes/lang/asdl.py --output=snakes/lang/abcd/asdl.py snakes/lang/abcd/abcd.asdl
# timestamp: 2011-09-29 11:07:13.505687
from snakes.lang import ast
from ast import *
class _AST (ast.AST):
def __init__ (self, **ARGS):
ast.AST.__init__(self)
for k, v in ARGS.items():
setattr(self, k, v)
class abcd (_AST):
pass
class AbcdSpec (abcd):
_fields = ('context', 'body', 'asserts')
_attributes = ()
def __init__ (self, body, context=[], asserts=[], **ARGS):
abcd.__init__(self, **ARGS)
self.context = list(context)
self.body = body
self.asserts = list(asserts)
class decl (_AST):
pass
class AbcdTypedef (decl):
_fields = ('name', 'type')
_attributes = ('lineno', 'col_offset')
def __init__ (self, name, type, lineno=0, col_offset=0, **ARGS):
decl.__init__(self, **ARGS)
self.name = name
self.type = type
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class AbcdBuffer (decl):
_fields = ('name', 'type', 'capacity', 'content')
_attributes = ('lineno', 'col_offset')
def __init__ (self, name, type, content, capacity=None, lineno=0, col_offset=0, **ARGS):
decl.__init__(self, **ARGS)
self.name = name
self.type = type
self.capacity = capacity
self.content = content
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class AbcdSymbol (decl):
_fields = ('symbols',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, symbols=[], lineno=0, col_offset=0, **ARGS):
decl.__init__(self, **ARGS)
self.symbols = list(symbols)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class AbcdConst (decl):
_fields = ('name', 'value')
_attributes = ('lineno', 'col_offset')
def __init__ (self, name, value, lineno=0, col_offset=0, **ARGS):
decl.__init__(self, **ARGS)
self.name = name
self.value = value
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class AbcdNet (decl):
_fields = ('name', 'args', 'body')
_attributes = ('lineno', 'col_offset')
def __init__ (self, name, args, body, lineno=0, col_offset=0, **ARGS):
decl.__init__(self, **ARGS)
self.name = name
self.args = args
self.body = body
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class AbcdTask (decl):
_fields = ('name', 'body', 'input', 'output')
_attributes = ('lineno', 'col_offset')
def __init__ (self, name, body, input=[], output=[], lineno=0, col_offset=0, **ARGS):
decl.__init__(self, **ARGS)
self.name = name
self.body = body
self.input = list(input)
self.output = list(output)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class stmt (decl):
_fields = ()
_attributes = ('lineno', 'col_offset')
def __init__ (self, lineno=0, col_offset=0, **ARGS):
decl.__init__(self, **ARGS)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class slice (_AST):
pass
class Slice (slice):
_fields = ('lower', 'upper', 'step')
_attributes = ()
def __init__ (self, lower=None, upper=None, step=None, **ARGS):
slice.__init__(self, **ARGS)
self.lower = lower
self.upper = upper
self.step = step
class ExtSlice (slice):
_fields = ('dims',)
_attributes = ()
def __init__ (self, dims=[], **ARGS):
slice.__init__(self, **ARGS)
self.dims = list(dims)
class Index (slice):
_fields = ('value',)
_attributes = ()
def __init__ (self, value, **ARGS):
slice.__init__(self, **ARGS)
self.value = value
class arc (_AST):
pass
class Produce (arc):
_fields = ()
_attributes = ()
class Test (arc):
_fields = ()
_attributes = ()
class Consume (arc):
_fields = ()
_attributes = ()
class Fill (arc):
_fields = ()
_attributes = ()
class expr_context (_AST):
pass
class Load (expr_context):
_fields = ()
_attributes = ()
class Store (expr_context):
_fields = ()
_attributes = ()
class Del (expr_context):
_fields = ()
_attributes = ()
class AugLoad (expr_context):
_fields = ()
_attributes = ()
class AugStore (expr_context):
_fields = ()
_attributes = ()
class Param (expr_context):
_fields = ()
_attributes = ()
class keyword (_AST):
_fields = ('arg', 'value')
_attributes = ()
def __init__ (self, arg, value, **ARGS):
_AST.__init__(self, **ARGS)
self.arg = arg
self.value = value
class unaryop (_AST):
pass
class Invert (unaryop):
_fields = ()
_attributes = ()
class Not (unaryop):
_fields = ()
_attributes = ()
class UAdd (unaryop):
_fields = ()
_attributes = ()
class USub (unaryop):
_fields = ()
_attributes = ()
class process (_AST):
pass
class AbcdAction (process):
_fields = ('accesses', 'guard')
_attributes = ('lineno', 'col_offset')
def __init__ (self, guard, accesses=[], lineno=0, col_offset=0, **ARGS):
process.__init__(self, **ARGS)
self.accesses = list(accesses)
self.guard = guard
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class AbcdFlowOp (process):
_fields = ('left', 'op', 'right')
_attributes = ('lineno', 'col_offset')
def __init__ (self, left, op, right, lineno=0, col_offset=0, **ARGS):
process.__init__(self, **ARGS)
self.left = left
self.op = op
self.right = right
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class AbcdInstance (process):
_fields = ('net', 'asname', 'args', 'keywords', 'starargs', 'kwargs')
_attributes = ('lineno', 'col_offset')
def __init__ (self, net, asname=None, args=[], keywords=[], starargs=None, kwargs=None, lineno=0, col_offset=0, **ARGS):
process.__init__(self, **ARGS)
self.net = net
self.asname = asname
self.args = list(args)
self.keywords = list(keywords)
self.starargs = starargs
self.kwargs = kwargs
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class expr (_AST):
pass
class BoolOp (expr):
_fields = ('op', 'values')
_attributes = ('lineno', 'col_offset')
def __init__ (self, op, values=[], lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.op = op
self.values = list(values)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class BinOp (expr):
_fields = ('left', 'op', 'right')
_attributes = ('lineno', 'col_offset')
def __init__ (self, left, op, right, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.left = left
self.op = op
self.right = right
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class UnaryOp (expr):
_fields = ('op', 'operand')
_attributes = ('lineno', 'col_offset')
def __init__ (self, op, operand, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.op = op
self.operand = operand
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Lambda (expr):
_fields = ('args', 'body')
_attributes = ('lineno', 'col_offset')
def __init__ (self, args, body, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.args = args
self.body = body
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class IfExp (expr):
_fields = ('test', 'body', 'orelse')
_attributes = ('lineno', 'col_offset')
def __init__ (self, test, body, orelse, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.test = test
self.body = body
self.orelse = orelse
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Dict (expr):
_fields = ('keys', 'values')
_attributes = ('lineno', 'col_offset')
def __init__ (self, keys=[], values=[], lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.keys = list(keys)
self.values = list(values)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Set (expr):
_fields = ('elts',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, elts=[], lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.elts = list(elts)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class ListComp (expr):
_fields = ('elt', 'generators')
_attributes = ('lineno', 'col_offset')
def __init__ (self, elt, generators=[], lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.elt = elt
self.generators = list(generators)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class SetComp (expr):
_fields = ('elt', 'generators')
_attributes = ('lineno', 'col_offset')
def __init__ (self, elt, generators=[], lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.elt = elt
self.generators = list(generators)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class DictComp (expr):
_fields = ('key', 'value', 'generators')
_attributes = ('lineno', 'col_offset')
def __init__ (self, key, value, generators=[], lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.key = key
self.value = value
self.generators = list(generators)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class GeneratorExp (expr):
_fields = ('elt', 'generators')
_attributes = ('lineno', 'col_offset')
def __init__ (self, elt, generators=[], lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.elt = elt
self.generators = list(generators)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Yield (expr):
_fields = ('value',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, value=None, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.value = value
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Compare (expr):
_fields = ('left', 'ops', 'comparators')
_attributes = ('lineno', 'col_offset')
def __init__ (self, left, ops=[], comparators=[], lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.left = left
self.ops = list(ops)
self.comparators = list(comparators)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Call (expr):
_fields = ('func', 'args', 'keywords', 'starargs', 'kwargs')
_attributes = ('lineno', 'col_offset')
def __init__ (self, func, args=[], keywords=[], starargs=None, kwargs=None, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.func = func
self.args = list(args)
self.keywords = list(keywords)
self.starargs = starargs
self.kwargs = kwargs
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Num (expr):
_fields = ('n',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, n, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.n = n
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Str (expr):
_fields = ('s',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, s, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.s = s
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Ellipsis (expr):
_fields = ()
_attributes = ('lineno', 'col_offset')
def __init__ (self, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Attribute (expr):
_fields = ('value', 'attr', 'ctx')
_attributes = ('lineno', 'col_offset')
def __init__ (self, value, attr, ctx=None, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.value = value
self.attr = attr
self.ctx = ctx
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Subscript (expr):
_fields = ('value', 'slice', 'ctx')
_attributes = ('lineno', 'col_offset')
def __init__ (self, value, slice, ctx=None, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.value = value
self.slice = slice
self.ctx = ctx
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Starred (expr):
_fields = ('value', 'ctx')
_attributes = ('lineno', 'col_offset')
def __init__ (self, value, ctx=None, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.value = value
self.ctx = ctx
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Name (expr):
_fields = ('id', 'ctx')
_attributes = ('lineno', 'col_offset')
def __init__ (self, id, ctx=None, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.id = id
self.ctx = ctx
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class List (expr):
_fields = ('elts', 'ctx')
_attributes = ('lineno', 'col_offset')
def __init__ (self, elts=[], ctx=None, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.elts = list(elts)
self.ctx = ctx
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Tuple (expr):
_fields = ('elts', 'ctx')
_attributes = ('lineno', 'col_offset')
def __init__ (self, elts=[], ctx=None, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.elts = list(elts)
self.ctx = ctx
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class cmpop (_AST):
pass
class Eq (cmpop):
_fields = ()
_attributes = ()
class NotEq (cmpop):
_fields = ()
_attributes = ()
class Lt (cmpop):
_fields = ()
_attributes = ()
class LtE (cmpop):
_fields = ()
_attributes = ()
class Gt (cmpop):
_fields = ()
_attributes = ()
class GtE (cmpop):
_fields = ()
_attributes = ()
class Is (cmpop):
_fields = ()
_attributes = ()
class IsNot (cmpop):
_fields = ()
_attributes = ()
class In (cmpop):
_fields = ()
_attributes = ()
class NotIn (cmpop):
_fields = ()
_attributes = ()
class boolop (_AST):
pass
class And (boolop):
_fields = ()
_attributes = ()
class Or (boolop):
_fields = ()
_attributes = ()
class stmt (_AST):
pass
class FunctionDef (stmt):
_fields = ('name', 'args', 'body', 'decorator_list', 'returns')
_attributes = ('lineno', 'col_offset')
def __init__ (self, name, args, body=[], decorator_list=[], returns=None, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.name = name
self.args = args
self.body = list(body)
self.decorator_list = list(decorator_list)
self.returns = returns
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class ClassDef (stmt):
_fields = ('name', 'bases', 'keywords', 'starargs', 'kwargs', 'body', 'decorator_list')
_attributes = ('lineno', 'col_offset')
def __init__ (self, name, bases=[], keywords=[], starargs=None, kwargs=None, body=[], decorator_list=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.name = name
self.bases = list(bases)
self.keywords = list(keywords)
self.starargs = starargs
self.kwargs = kwargs
self.body = list(body)
self.decorator_list = list(decorator_list)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Return (stmt):
_fields = ('value',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, value=None, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.value = value
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Delete (stmt):
_fields = ('targets',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, targets=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.targets = list(targets)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Assign (stmt):
_fields = ('targets', 'value')
_attributes = ('lineno', 'col_offset')
def __init__ (self, value, targets=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.targets = list(targets)
self.value = value
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class AugAssign (stmt):
_fields = ('target', 'op', 'value')
_attributes = ('lineno', 'col_offset')
def __init__ (self, target, op, value, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.target = target
self.op = op
self.value = value
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class For (stmt):
_fields = ('target', 'iter', 'body', 'orelse')
_attributes = ('lineno', 'col_offset')
def __init__ (self, target, iter, body=[], orelse=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.target = target
self.iter = iter
self.body = list(body)
self.orelse = list(orelse)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class While (stmt):
_fields = ('test', 'body', 'orelse')
_attributes = ('lineno', 'col_offset')
def __init__ (self, test, body=[], orelse=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.test = test
self.body = list(body)
self.orelse = list(orelse)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class If (stmt):
_fields = ('test', 'body', 'orelse')
_attributes = ('lineno', 'col_offset')
def __init__ (self, test, body=[], orelse=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.test = test
self.body = list(body)
self.orelse = list(orelse)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class With (stmt):
_fields = ('context_expr', 'optional_vars', 'body')
_attributes = ('lineno', 'col_offset')
def __init__ (self, context_expr, optional_vars=None, body=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.context_expr = context_expr
self.optional_vars = optional_vars
self.body = list(body)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Raise (stmt):
_fields = ('exc', 'cause')
_attributes = ('lineno', 'col_offset')
def __init__ (self, exc=None, cause=None, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.exc = exc
self.cause = cause
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class TryExcept (stmt):
_fields = ('body', 'handlers', 'orelse')
_attributes = ('lineno', 'col_offset')
def __init__ (self, body=[], handlers=[], orelse=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.body = list(body)
self.handlers = list(handlers)
self.orelse = list(orelse)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class TryFinally (stmt):
_fields = ('body', 'finalbody')
_attributes = ('lineno', 'col_offset')
def __init__ (self, body=[], finalbody=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.body = list(body)
self.finalbody = list(finalbody)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Assert (stmt):
_fields = ('test', 'msg')
_attributes = ('lineno', 'col_offset')
def __init__ (self, test, msg=None, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.test = test
self.msg = msg
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Import (stmt):
_fields = ('names',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, names=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.names = list(names)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class ImportFrom (stmt):
_fields = ('module', 'names', 'level')
_attributes = ('lineno', 'col_offset')
def __init__ (self, module, names=[], level=None, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.module = module
self.names = list(names)
self.level = level
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Exec (stmt):
_fields = ('body', 'globals', 'locals')
_attributes = ('lineno', 'col_offset')
def __init__ (self, body, globals=None, locals=None, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.body = body
self.globals = globals
self.locals = locals
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Global (stmt):
_fields = ('names',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, names=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.names = list(names)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Nonlocal (stmt):
_fields = ('names',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, names=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.names = list(names)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Expr (stmt):
_fields = ('value',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, value, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.value = value
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Pass (stmt):
_fields = ()
_attributes = ('lineno', 'col_offset')
def __init__ (self, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Break (stmt):
_fields = ()
_attributes = ('lineno', 'col_offset')
def __init__ (self, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Continue (stmt):
_fields = ()
_attributes = ('lineno', 'col_offset')
def __init__ (self, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class access (_AST):
pass
class SimpleAccess (access):
_fields = ('buffer', 'arc', 'tokens')
_attributes = ('lineno', 'col_offset')
def __init__ (self, buffer, arc, tokens, lineno=0, col_offset=0, **ARGS):
access.__init__(self, **ARGS)
self.buffer = buffer
self.arc = arc
self.tokens = tokens
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class FlushAccess (access):
_fields = ('buffer', 'target')
_attributes = ('lineno', 'col_offset')
def __init__ (self, buffer, target, lineno=0, col_offset=0, **ARGS):
access.__init__(self, **ARGS)
self.buffer = buffer
self.target = target
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class SwapAccess (access):
_fields = ('buffer', 'target', 'tokens')
_attributes = ('lineno', 'col_offset')
def __init__ (self, buffer, target, tokens, lineno=0, col_offset=0, **ARGS):
access.__init__(self, **ARGS)
self.buffer = buffer
self.target = target
self.tokens = tokens
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Spawn (access):
_fields = ('net', 'pid', 'args')
_attributes = ('lineno', 'col_offset')
def __init__ (self, net, pid, args, lineno=0, col_offset=0, **ARGS):
access.__init__(self, **ARGS)
self.net = net
self.pid = pid
self.args = args
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Wait (access):
_fields = ('net', 'pid', 'args')
_attributes = ('lineno', 'col_offset')
def __init__ (self, net, pid, args, lineno=0, col_offset=0, **ARGS):
access.__init__(self, **ARGS)
self.net = net
self.pid = pid
self.args = args
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Suspend (access):
_fields = ('net', 'pid')
_attributes = ('lineno', 'col_offset')
def __init__ (self, net, pid, lineno=0, col_offset=0, **ARGS):
access.__init__(self, **ARGS)
self.net = net
self.pid = pid
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Resume (access):
_fields = ('net', 'pid')
_attributes = ('lineno', 'col_offset')
def __init__ (self, net, pid, lineno=0, col_offset=0, **ARGS):
access.__init__(self, **ARGS)
self.net = net
self.pid = pid
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class alias (_AST):
_fields = ('name', 'asname')
_attributes = ()
def __init__ (self, name, asname=None, **ARGS):
_AST.__init__(self, **ARGS)
self.name = name
self.asname = asname
class flowop (_AST):
pass
class Sequence (flowop):
_fields = ()
_attributes = ()
class Choice (flowop):
_fields = ()
_attributes = ()
class Parallel (flowop):
_fields = ()
_attributes = ()
class Loop (flowop):
_fields = ()
_attributes = ()
class comprehension (_AST):
_fields = ('target', 'iter', 'ifs')
_attributes = ()
def __init__ (self, target, iter, ifs=[], **ARGS):
_AST.__init__(self, **ARGS)
self.target = target
self.iter = iter
self.ifs = list(ifs)
class arg (_AST):
_fields = ('arg', 'annotation')
_attributes = ()
def __init__ (self, arg, annotation=None, **ARGS):
_AST.__init__(self, **ARGS)
self.arg = arg
self.annotation = annotation
class operator (_AST):
pass
class Add (operator):
_fields = ()
_attributes = ()
class Sub (operator):
_fields = ()
_attributes = ()
class Mult (operator):
_fields = ()
_attributes = ()
class Div (operator):
_fields = ()
_attributes = ()
class Mod (operator):
_fields = ()
_attributes = ()
class Pow (operator):
_fields = ()
_attributes = ()
class LShift (operator):
_fields = ()
_attributes = ()
class RShift (operator):
_fields = ()
_attributes = ()
class BitOr (operator):
_fields = ()
_attributes = ()
class BitXor (operator):
_fields = ()
_attributes = ()
class BitAnd (operator):
_fields = ()
_attributes = ()
class FloorDiv (operator):
_fields = ()
_attributes = ()
class abcdtype (_AST):
pass
class UnionType (abcdtype):
_fields = ('types',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, types=[], lineno=0, col_offset=0, **ARGS):
abcdtype.__init__(self, **ARGS)
self.types = list(types)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class IntersectionType (abcdtype):
_fields = ('types',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, types=[], lineno=0, col_offset=0, **ARGS):
abcdtype.__init__(self, **ARGS)
self.types = list(types)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class CrossType (abcdtype):
_fields = ('types',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, types=[], lineno=0, col_offset=0, **ARGS):
abcdtype.__init__(self, **ARGS)
self.types = list(types)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class ListType (abcdtype):
_fields = ('items',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, items, lineno=0, col_offset=0, **ARGS):
abcdtype.__init__(self, **ARGS)
self.items = items
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class TupleType (abcdtype):
_fields = ('items',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, items, lineno=0, col_offset=0, **ARGS):
abcdtype.__init__(self, **ARGS)
self.items = items
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class SetType (abcdtype):
_fields = ('items',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, items, lineno=0, col_offset=0, **ARGS):
abcdtype.__init__(self, **ARGS)
self.items = items
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class DictType (abcdtype):
_fields = ('keys', 'values')
_attributes = ('lineno', 'col_offset')
def __init__ (self, keys, values, lineno=0, col_offset=0, **ARGS):
abcdtype.__init__(self, **ARGS)
self.keys = keys
self.values = values
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class EnumType (abcdtype):
_fields = ('items',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, items=[], lineno=0, col_offset=0, **ARGS):
abcdtype.__init__(self, **ARGS)
self.items = list(items)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class NamedType (abcdtype):
_fields = ('name',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, name, lineno=0, col_offset=0, **ARGS):
abcdtype.__init__(self, **ARGS)
self.name = name
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class excepthandler (_AST):
pass
class ExceptHandler (excepthandler):
_fields = ('type', 'name', 'body')
_attributes = ('lineno', 'col_offset')
def __init__ (self, type=None, name=None, body=[], lineno=0, col_offset=0, **ARGS):
excepthandler.__init__(self, **ARGS)
self.type = type
self.name = name
self.body = list(body)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class arguments (_AST):
_fields = ('args', 'vararg', 'varargannotation', 'kwonlyargs', 'kwarg', 'kwargannotation', 'defaults', 'kw_defaults')
_attributes = ()
def __init__ (self, args=[], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[], **ARGS):
_AST.__init__(self, **ARGS)
self.args = list(args)
self.vararg = vararg
self.varargannotation = varargannotation
self.kwonlyargs = list(kwonlyargs)
self.kwarg = kwarg
self.kwargannotation = kwargannotation
self.defaults = list(defaults)
self.kw_defaults = list(kw_defaults)
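The node classes above are generated from an ASDL grammar and all follow the same pattern as Python's own `ast` module: `_fields` names the child slots, `_attributes` the position attributes, and each constructor copies list-valued fields so callers cannot share the mutable defaults. A minimal self-contained sketch of that pattern (`_AST` here is a stand-in for the real generated base class, which is not shown in this excerpt):

```python
# Sketch of the generated node pattern; _AST is a hypothetical stand-in
# for the real base class from the generated module.
class _AST(object):
    _fields = ()
    _attributes = ()
    def __init__(self, **kwargs):
        # generic attribute assignment for extra keyword arguments
        for name, value in kwargs.items():
            setattr(self, name, value)

class stmt(_AST):
    pass

class Pass(stmt):
    _fields = ()
    _attributes = ('lineno', 'col_offset')
    def __init__(self, lineno=0, col_offset=0, **ARGS):
        stmt.__init__(self, **ARGS)
        self.lineno = int(lineno)
        self.col_offset = int(col_offset)

node = Pass(lineno=3, col_offset=0)
print(node.lineno, node._attributes)
```

The same construction applies to every class above: positional children go through `_fields`, source positions through `_attributes`.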
"""
>>> testparser(Translator)
"""
import operator, sys
import snakes
from snakes.lang.python.parser import (ParseTree, ParseTestParser,
Translator as PyTranslator,
ParseTree as PyParseTree,
testparser)
from snakes.lang.pgen import ParseError
from snakes.lang.abcd.pgen import parser
import snakes.lang.abcd.asdl as ast
_symbols = parser.tokenizer.tok_name.copy()
# the next statement overrides the 'NT_OFFSET' entry with 'single_input'
# (this is intentional)
_symbols.update(parser.symbolMap)
def skip (token) :
    # record a source encoding cookie found in a skipped comment as the
    # default encoding (two spaced comment shapes are recognised)
    if token.kind == token.lexer.COMMENT :
words = token.strip().split()
if words[:2] == ["#", "coding="] :
coding = words[2]
elif words[:3] == ["#", "-*-", "coding:"] :
coding = words[3]
else :
return
snakes.defaultencoding = coding
parser.tokenizer.skip_token = skip
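The `skip` hook above recognises two spaced comment shapes for a source encoding cookie. A standalone sketch of the same matching logic (the function name is hypothetical, introduced only for illustration):

```python
# Hypothetical standalone version of the cookie matching done in skip().
def detect_coding(comment):
    words = comment.strip().split()
    if words[:2] == ["#", "coding="]:
        return words[2]          # shape: "# coding= NAME"
    elif words[:3] == ["#", "-*-", "coding:"]:
        return words[3]          # shape: "# -*- coding: NAME -*-"
    return None                  # not an encoding cookie

print(detect_coding("# -*- coding: utf-8 -*-"))
```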
class ParseTree (PyParseTree) :
_symbols = _symbols
class Translator (PyTranslator) :
ParseTree = ParseTree
parser = parser
ST = ast
def do_file_input (self, st, ctx) :
"""file_input: abcd_main ENDMARKER
-> ast.AbcdSpec
<<< symbol FOO, BAR
... [egg+(spam) if spam == ham] ; [spam-(FOO, BAR)]
"AbcdSpec(context=[AbcdSymbol(symbols=['FOO', 'BAR'])], body=AbcdFlowOp(left=AbcdAction(accesses=[SimpleAccess(buffer='egg', arc=Produce(), tokens=Name(id='spam', ctx=Load()))], guard=Compare(left=Name(id='spam', ctx=Load()), ops=[Eq()], comparators=[Name(id='ham', ctx=Load())])), op=Sequence(), right=AbcdAction(accesses=[SimpleAccess(buffer='spam', arc=Consume(), tokens=Tuple(elts=[Name(id='FOO', ctx=Load()), Name(id='BAR', ctx=Load())], ctx=Load()))], guard=Name(id='True', ctx=Load()))), asserts=[])"
"""
return self.do(st[0])
def do_abcd_main (self, st, ctx) :
"""abcd_main: (NEWLINE | abcd_global)* abcd_expr (abcd_prop)*
-> ast.AbcdSpec
<<< symbol FOO, BAR
... [egg+(spam) if spam == ham] ; [spam-(FOO, BAR)]
"AbcdSpec(context=[AbcdSymbol(symbols=['FOO', 'BAR'])], body=AbcdFlowOp(left=AbcdAction(accesses=[SimpleAccess(buffer='egg', arc=Produce(), tokens=Name(id='spam', ctx=Load()))], guard=Compare(left=Name(id='spam', ctx=Load()), ops=[Eq()], comparators=[Name(id='ham', ctx=Load())])), op=Sequence(), right=AbcdAction(accesses=[SimpleAccess(buffer='spam', arc=Consume(), tokens=Tuple(elts=[Name(id='FOO', ctx=Load()), Name(id='BAR', ctx=Load())], ctx=Load()))], guard=Name(id='True', ctx=Load()))), asserts=[])"
"""
expr = len(st)-1
for i, child in enumerate(st) :
if child.symbol == "abcd_expr" :
expr = i
break
return self.ST.AbcdSpec(lineno=st.srow, col_offset=st.scol,
context=[self.do(child) for child in st[:expr]
if child.kind != self.NEWLINE],
body=self.do(st[expr]),
asserts=[self.do(child) for child in st[expr+1:]])
def do_abcd_prop (self, st, ctx) :
"""abcd_prop: 'assert' test (NEWLINE)*
-> ast.test
<<< [True]
... assert True
... assert False
"AbcdSpec(context=[], body=AbcdAction(accesses=[], guard=True), asserts=[Name(id='True', ctx=Load()), Name(id='False', ctx=Load())])"
"""
return self.do(st[1])
def do_abcd_global (self, st, ctx) :
"""abcd_global: (import_stmt | abcd_symbol | abcd_typedef
| abcd_const | abcd_decl)
-> ast.AST
<<< import module
... from module import content
... symbol EGG, SPAM, HAM
... typedef t : enum(EGG, SPAM, HAM)
... net Foo() : [True]
... buffer bar : t = ()
... [True]
"AbcdSpec(context=[Import(names=[alias(name='module', asname=None)]), ImportFrom(module='module', names=[alias(name='content', asname=None)], level=0), AbcdSymbol(symbols=['EGG', 'SPAM', 'HAM']), AbcdTypedef(name='t', type=EnumType(items=[Name(id='EGG', ctx=Load()), Name(id='SPAM', ctx=Load()), Name(id='HAM', ctx=Load())])), AbcdNet(name='Foo', args=arguments(args=[], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=AbcdSpec(context=[], body=AbcdAction(accesses=[], guard=True), asserts=[])), AbcdBuffer(name='bar', type=NamedType(name='t'), capacity=None, content=Tuple(elts=[], ctx=Load()))], body=AbcdAction(accesses=[], guard=True), asserts=[])"
"""
return self.do(st[0])
def do_abcd_spec (self, st, ctx) :
"""abcd_spec: (NEWLINE | abcd_decl)* abcd_expr
-> ast.AbcdSpec
<<< net Foo () :
... buffer bar : spam = ()
... net Bar () : [False]
... [True]
... Foo()
"AbcdSpec(context=[AbcdNet(name='Foo', args=arguments(args=[], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=AbcdSpec(context=[AbcdBuffer(name='bar', type=NamedType(name='spam'), capacity=None, content=Tuple(elts=[], ctx=Load())), AbcdNet(name='Bar', args=arguments(args=[], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=AbcdSpec(context=[], body=AbcdAction(accesses=[], guard=False), asserts=[]))], body=AbcdAction(accesses=[], guard=True), asserts=[]))], body=AbcdInstance(net='Foo', asname=None, args=[], keywords=[], starargs=None, kwargs=None), asserts=[])"
"""
tree = self.do_abcd_main(st, ctx)
tree.st = st
return tree
def do_abcd_decl (self, st, ctx) :
"""abcd_decl: abcd_net | abcd_task | abcd_buffer
-> ast.AST
<<< net Foo () :
... buffer bar : spam = ()
... net Bar () : [False]
... [True]
... Foo()
"AbcdSpec(context=[AbcdNet(name='Foo', args=arguments(args=[], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=AbcdSpec(context=[AbcdBuffer(name='bar', type=NamedType(name='spam'), capacity=None, content=Tuple(elts=[], ctx=Load())), AbcdNet(name='Bar', args=arguments(args=[], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=AbcdSpec(context=[], body=AbcdAction(accesses=[], guard=False), asserts=[]))], body=AbcdAction(accesses=[], guard=True), asserts=[]))], body=AbcdInstance(net='Foo', asname=None, args=[], keywords=[], starargs=None, kwargs=None), asserts=[])"
"""
tree = self.do_abcd_global(st, ctx)
tree.st = st
return tree
def do_abcd_const (self, st, ctx) :
"""abcd_const: 'const' NAME '=' testlist
-> ast.AbcdConst
<<< const FOO = 5
... [True]
"AbcdSpec(context=[AbcdConst(name='FOO', value=Num(n=5))], body=AbcdAction(accesses=[], guard=True), asserts=[])"
<<< const FOO = 1, 2, 3
... [True]
"AbcdSpec(context=[AbcdConst(name='FOO', value=Tuple(elts=[Num(n=1), Num(n=2), Num(n=3)], ctx=Load()))], body=AbcdAction(accesses=[], guard=True), asserts=[])"
"""
return self.ST.AbcdConst(lineno=st.srow, col_offset=st.scol,
name=st[1].text, value=self.do(st[3], ctx))
def do_abcd_symbol (self, st, ctx) :
"""abcd_symbol: 'symbol' abcd_namelist
-> ast.AbcdSymbol
<<< symbol FOO
... [True]
"AbcdSpec(context=[AbcdSymbol(symbols=['FOO'])], body=AbcdAction(accesses=[], guard=True), asserts=[])"
<<< symbol FOO, BAR
... [True]
"AbcdSpec(context=[AbcdSymbol(symbols=['FOO', 'BAR'])], body=AbcdAction(accesses=[], guard=True), asserts=[])"
"""
return self.ST.AbcdSymbol(lineno=st.srow, col_offset=st.scol,
symbols=self.do(st[1]))
def do_abcd_namelist (self, st, ctx) :
"""abcd_namelist: NAME (',' NAME)*
-> str+
<<< symbol FOO
... [True]
"AbcdSpec(context=[AbcdSymbol(symbols=['FOO'])], body=AbcdAction(accesses=[], guard=True), asserts=[])"
<<< symbol FOO, BAR
... [True]
"AbcdSpec(context=[AbcdSymbol(symbols=['FOO', 'BAR'])], body=AbcdAction(accesses=[], guard=True), asserts=[])"
"""
return [child.text for child in st[::2]]
    def _do_flowop (self, st, op) :
        # fold the operands (every other child of st) into a chain of
        # left-associative AbcdFlowOp nodes
        nodes = [self.do(child) for child in st[::2]]
while len(nodes) > 1 :
left = nodes.pop(0)
right = nodes.pop(0)
nodes.insert(0, self.ST.AbcdFlowOp(lineno=left.lineno,
col_offset=left.col_offset,
left=left, op=op(), right=right))
return nodes[0]
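`_do_flowop` reduces the operand list left-associatively, so `a ; b ; c` yields `AbcdFlowOp(AbcdFlowOp(a, b), c)`. The same reduction on plain values (the helper name is hypothetical):

```python
# Left-associative fold, mirroring how _do_flowop nests AbcdFlowOp nodes.
def fold_left(items, combine):
    result = items[0]
    for item in items[1:]:
        result = combine(result, item)
    return result

tree = fold_left(["a", "b", "c"], lambda l, r: ("flow", l, r))
print(tree)
```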
def do_abcd_expr (self, st, ctx) :
"""abcd_expr: abcd_choice_expr ('|' abcd_choice_expr)*
-> ast.process
<<< [True]
'AbcdSpec(context=[], body=AbcdAction(accesses=[], guard=True), asserts=[])'
<<< [True] | [False]
'AbcdSpec(context=[], body=AbcdFlowOp(left=AbcdAction(accesses=[], guard=True), op=Parallel(), right=AbcdAction(accesses=[], guard=False)), asserts=[])'
<<< [True] | [False] | [True]
'AbcdSpec(context=[], body=AbcdFlowOp(left=AbcdFlowOp(left=AbcdAction(accesses=[], guard=True), op=Parallel(), right=AbcdAction(accesses=[], guard=False)), op=Parallel(), right=AbcdAction(accesses=[], guard=True)), asserts=[])'
"""
return self._do_flowop(st, self.ST.Parallel)
def do_abcd_choice_expr (self, st, ctx) :
"""abcd_choice_expr: abcd_iter_expr ('+' abcd_iter_expr)*
-> ast.process
<<< [True]
'AbcdSpec(context=[], body=AbcdAction(accesses=[], guard=True), asserts=[])'
<<< [True] + [False]
'AbcdSpec(context=[], body=AbcdFlowOp(left=AbcdAction(accesses=[], guard=True), op=Choice(), right=AbcdAction(accesses=[], guard=False)), asserts=[])'
<<< [True] + [False] + [True]
'AbcdSpec(context=[], body=AbcdFlowOp(left=AbcdFlowOp(left=AbcdAction(accesses=[], guard=True), op=Choice(), right=AbcdAction(accesses=[], guard=False)), op=Choice(), right=AbcdAction(accesses=[], guard=True)), asserts=[])'
"""
return self._do_flowop(st, self.ST.Choice)
def do_abcd_iter_expr (self, st, ctx) :
"""abcd_iter_expr: abcd_seq_expr ('*' abcd_seq_expr)*
-> ast.process
<<< [True]
'AbcdSpec(context=[], body=AbcdAction(accesses=[], guard=True), asserts=[])'
<<< [True] * [False]
'AbcdSpec(context=[], body=AbcdFlowOp(left=AbcdAction(accesses=[], guard=True), op=Loop(), right=AbcdAction(accesses=[], guard=False)), asserts=[])'
<<< [True] * [False] * [True]
'AbcdSpec(context=[], body=AbcdFlowOp(left=AbcdFlowOp(left=AbcdAction(accesses=[], guard=True), op=Loop(), right=AbcdAction(accesses=[], guard=False)), op=Loop(), right=AbcdAction(accesses=[], guard=True)), asserts=[])'
"""
return self._do_flowop(st, self.ST.Loop)
def do_abcd_seq_expr (self, st, ctx) :
"""abcd_seq_expr: abcd_base_expr (';' abcd_base_expr)*
-> ast.process
<<< [True]
'AbcdSpec(context=[], body=AbcdAction(accesses=[], guard=True), asserts=[])'
<<< [True] ; [False]
'AbcdSpec(context=[], body=AbcdFlowOp(left=AbcdAction(accesses=[], guard=True), op=Sequence(), right=AbcdAction(accesses=[], guard=False)), asserts=[])'
<<< [True] ; [False] ; [True]
'AbcdSpec(context=[], body=AbcdFlowOp(left=AbcdFlowOp(left=AbcdAction(accesses=[], guard=True), op=Sequence(), right=AbcdAction(accesses=[], guard=False)), op=Sequence(), right=AbcdAction(accesses=[], guard=True)), asserts=[])'
"""
return self._do_flowop(st, self.ST.Sequence)
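Together, the four `do_abcd_*_expr` methods above implement a precedence chain: `|` binds loosest, then `+`, then `*`, with `;` binding tightest. A hypothetical standalone miniature over a flat token list shows the resulting grouping:

```python
# Hypothetical miniature of the precedence chain: each level parses the
# next-tighter level as its operands, just like the grammar rules above.
def parse_flow(tokens):
    pos = [0]
    def atom():
        tok = tokens[pos[0]]
        pos[0] += 1
        return tok
    def binary(sub, op):
        def rule():
            left = sub()
            while pos[0] < len(tokens) and tokens[pos[0]] == op:
                pos[0] += 1
                left = (left, op, sub())
            return left
        return rule
    seq = binary(atom, ';')      # tightest
    loop = binary(seq, '*')
    choice = binary(loop, '+')
    expr = binary(choice, '|')   # loosest
    return expr()

print(parse_flow(['a', ';', 'b', '|', 'c']))
```

For instance `['a', ';', 'b', '|', 'c']` groups as `(('a', ';', 'b'), '|', 'c')`: the sequence is built first, then put in parallel with `c`.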
def do_abcd_base_expr (self, st, ctx) :
"""abcd_base_expr: (abcd_action | '(' abcd_expr ')') (NEWLINE)*
-> ast.process
<<< [True]
'AbcdSpec(context=[], body=AbcdAction(accesses=[], guard=True), asserts=[])'
<<< ([True])
'AbcdSpec(context=[], body=AbcdAction(accesses=[], guard=True), asserts=[])'
"""
if st[0].text == "(" :
return self.do(st[1])
else :
return self.do(st[0])
def do_abcd_action (self, st, ctx) :
"""abcd_action: ('[' 'True' ']' | '[' 'False' ']' |
'[' abcd_access_list ['if' test] ']' |
abcd_instance)
-> ast.AbcdAction | ast.AbcdInstance
<<< [True]
'AbcdSpec(context=[], body=AbcdAction(accesses=[], guard=True), asserts=[])'
<<< [False]
'AbcdSpec(context=[], body=AbcdAction(accesses=[], guard=False), asserts=[])'
<<< Foo(1, 2)
"AbcdSpec(context=[], body=AbcdInstance(net='Foo', asname=None, args=[Num(n=1), Num(n=2)], keywords=[], starargs=None, kwargs=None), asserts=[])"
"""
if len(st) == 1 :
return self.do(st[0])
elif st[1].text == "True" :
return self.ST.AbcdAction(lineno=st.srow, col_offset=st.scol,
accesses=[], guard=True)
elif st[1].text == "False" :
return self.ST.AbcdAction(lineno=st.srow, col_offset=st.scol,
accesses=[], guard=False)
elif len(st) == 3 :
return self.ST.AbcdAction(lineno=st.srow, col_offset=st.scol,
accesses=self.do(st[1]),
guard=self.ST.Name(lineno=st[-1].srow,
col_offset=st[-1].scol,
id="True",
ctx=self.ST.Load()))
else :
return self.ST.AbcdAction(lineno=st.srow, col_offset=st.scol,
accesses=self.do(st[1]),
guard=self.do(st[3]))
def do_abcd_access_list (self, st, ctx) :
"""abcd_access_list: abcd_access (',' abcd_access)*
-> ast.access+
<<< [foo-(1)]
"AbcdSpec(context=[], body=AbcdAction(accesses=[SimpleAccess(buffer='foo', arc=Consume(), tokens=Num(n=1))], guard=Name(id='True', ctx=Load())), asserts=[])"
<<< [foo-(1), bar+(2)]
"AbcdSpec(context=[], body=AbcdAction(accesses=[SimpleAccess(buffer='foo', arc=Consume(), tokens=Num(n=1)), SimpleAccess(buffer='bar', arc=Produce(), tokens=Num(n=2))], guard=Name(id='True', ctx=Load())), asserts=[])"
"""
return [self.do(child) for child in st[::2]]
    # map access operators to the corresponding arc node constructors
    _arc = {"+" : ast.Produce,
"-" : ast.Consume,
"?" : ast.Test,
"<<" : ast.Fill}
def do_abcd_access (self, st, ctx) :
"""abcd_access: (NAME '+' '(' testlist ')' |
NAME '?' '(' testlist ')' |
NAME '-' '(' testlist ')' |
NAME '<>' '(' testlist '=' testlist ')' |
NAME '>>' '(' NAME ')' |
NAME '<<' '(' testlist_comp ')' |
NAME '.' NAME '(' test (',' test)* ')')
-> ast.access
<<< [egg+(x), spam-(y), ham?(z), foo<<(bar)]
"AbcdSpec(context=[], body=AbcdAction(accesses=[SimpleAccess(buffer='egg', arc=Produce(), tokens=Name(id='x', ctx=Load())), SimpleAccess(buffer='spam', arc=Consume(), tokens=Name(id='y', ctx=Load())), SimpleAccess(buffer='ham', arc=Test(), tokens=Name(id='z', ctx=Load())), SimpleAccess(buffer='foo', arc=Fill(), tokens=Name(id='bar', ctx=Load()))], guard=Name(id='True', ctx=Load())), asserts=[])"
<<< [foo<<(spam(egg) for egg in ham)]
"AbcdSpec(context=[], body=AbcdAction(accesses=[SimpleAccess(buffer='foo', arc=Fill(), tokens=ListComp(elt=Call(func=Name(id='spam', ctx=Load()), args=[Name(id='egg', ctx=Load())], keywords=[], starargs=None, kwargs=None), generators=[comprehension(target=Name(id='egg', ctx=Store()), iter=Name(id='ham', ctx=Load()), ifs=[])]))], guard=Name(id='True', ctx=Load())), asserts=[])"
<<< [bar-(l), foo<<(l)]
"AbcdSpec(context=[], body=AbcdAction(accesses=[SimpleAccess(buffer='bar', arc=Consume(), tokens=Name(id='l', ctx=Load())), SimpleAccess(buffer='foo', arc=Fill(), tokens=Name(id='l', ctx=Load()))], guard=Name(id='True', ctx=Load())), asserts=[])"
<<< [bar-(l), foo<<(l,)]
"AbcdSpec(context=[], body=AbcdAction(accesses=[SimpleAccess(buffer='bar', arc=Consume(), tokens=Name(id='l', ctx=Load())), SimpleAccess(buffer='foo', arc=Fill(), tokens=Tuple(elts=[Name(id='l', ctx=Load())], ctx=Load()))], guard=Name(id='True', ctx=Load())), asserts=[])"
<<< [spam>>(ham) if spam is egg]
"AbcdSpec(context=[], body=AbcdAction(accesses=[FlushAccess(buffer='spam', target='ham')], guard=Compare(left=Name(id='spam', ctx=Load()), ops=[Is()], comparators=[Name(id='egg', ctx=Load())])), asserts=[])"
<<< [count<>(n = n+1)]
"AbcdSpec(context=[], body=AbcdAction(accesses=[SwapAccess(buffer='count', target=Name(id='n', ctx=Load()), tokens=BinOp(left=Name(id='n', ctx=Load()), op=Add(), right=Num(n=1)))], guard=Name(id='True', ctx=Load())), asserts=[])"
<<< [foo.spawn(child, 1, 2, 3)]
"AbcdSpec(context=[], body=AbcdAction(accesses=[Spawn(net='foo', pid=Name(id='child', ctx=Load()), args=[Num(n=1), Num(n=2), Num(n=3)])], guard=Name(id='True', ctx=Load())), asserts=[])"
<<< [foo.wait(child, y, z)]
"AbcdSpec(context=[], body=AbcdAction(accesses=[Wait(net='foo', pid=Name(id='child', ctx=Load()), args=[Name(id='y', ctx=Load()), Name(id='z', ctx=Load())])], guard=Name(id='True', ctx=Load())), asserts=[])"
<<< [foo.suspend(pid)]
"AbcdSpec(context=[], body=AbcdAction(accesses=[Suspend(net='foo', pid=Name(id='pid', ctx=Load()))], guard=Name(id='True', ctx=Load())), asserts=[])"
<<< [foo.resume(pid)]
"AbcdSpec(context=[], body=AbcdAction(accesses=[Resume(net='foo', pid=Name(id='pid', ctx=Load()))], guard=Name(id='True', ctx=Load())), asserts=[])"
"""
if st[1].text in ("+", "?", "-") :
return self.ST.SimpleAccess(lineno=st.srow, col_offset=st.scol,
buffer=st[0].text,
arc=self._arc[st[1].text](),
tokens=self.do(st[3]))
elif st[1].text == "<<" :
loop, elts, atom = self.do(st[3], ctx)
if atom is not None :
args = atom
elif loop is None :
args = self.ST.Tuple(lineno=st.srow, col_offset=st.scol,
elts=elts, ctx=ctx())
else :
args = self.ST.ListComp(lineno=st.srow, col_offset=st.scol,
elt=loop, generators=elts)
return self.ST.SimpleAccess(lineno=st.srow, col_offset=st.scol,
buffer=st[0].text,
arc=self._arc[st[1].text](),
tokens=args)
elif st[1].text == ">>" :
return self.ST.FlushAccess(lineno=st.srow, col_offset=st.scol,
buffer=st[0].text,
target=st[3].text)
elif st[1].text == "<>" :
return self.ST.SwapAccess(lineno=st.srow, col_offset=st.scol,
buffer=st[0].text,
target=self.do(st[3]),
tokens=self.do(st[5]))
elif st[2].text in ("suspend", "resume") : # st[1].text == "."
if len(st) > 6 :
raise ParseError(st.text, reason="too many arguments for %s"
% st[2].text)
if st[2].text == "suspend" :
tree = self.ST.Suspend
else :
tree = self.ST.Resume
return tree(lineno=st.srow, col_offset=st.scol,
net=st[0].text,
pid=self.do(st[4]))
elif st[2].text in ("spawn", "wait") : # st[1].text == "."
if len(st) > 6 :
args = [self.do(child) for child in st[6:-1:2]]
else :
args = []
if st[2].text == "spawn" :
tree = self.ST.Spawn
else :
tree = self.ST.Wait
return tree(lineno=st.srow, col_offset=st.scol,
net=st[0].text,
pid=self.do(st[4]),
args=args)
else :
raise ParseError(st[2].text, reason=("expected 'spawn', 'wait', "
"'suspend' or 'resume', but found '%s'")
% st[2].text)
def do_abcd_instance (self, st, ctx) :
"""abcd_instance: [NAME ':' ':'] NAME '(' [arglist] ')'
-> ast.AbcdInstance
<<< sub()
"AbcdSpec(context=[], body=AbcdInstance(net='sub', asname=None, args=[], keywords=[], starargs=None, kwargs=None), asserts=[])"
<<< sub(1, 2, 3)
"AbcdSpec(context=[], body=AbcdInstance(net='sub', asname=None, args=[Num(n=1), Num(n=2), Num(n=3)], keywords=[], starargs=None, kwargs=None), asserts=[])"
<<< sub(1, *l)
"AbcdSpec(context=[], body=AbcdInstance(net='sub', asname=None, args=[Num(n=1)], keywords=[], starargs=Name(id='l', ctx=Load()), kwargs=None), asserts=[])"
<<< sub(1, **d)
"AbcdSpec(context=[], body=AbcdInstance(net='sub', asname=None, args=[Num(n=1)], keywords=[], starargs=None, kwargs=Name(id='d', ctx=Load())), asserts=[])"
<<< sub(a=1, b=2)
"AbcdSpec(context=[], body=AbcdInstance(net='sub', asname=None, args=[], keywords=[keyword(arg='a', value=Num(n=1)), keyword(arg='b', value=Num(n=2))], starargs=None, kwargs=None), asserts=[])"
<<< foo::sub()
"AbcdSpec(context=[], body=AbcdInstance(net='sub', asname='foo', args=[], keywords=[], starargs=None, kwargs=None), asserts=[])"
<<< foo::sub(1, 2)
"AbcdSpec(context=[], body=AbcdInstance(net='sub', asname='foo', args=[Num(n=1), Num(n=2)], keywords=[], starargs=None, kwargs=None), asserts=[])"
"""
if len(st) in (3, 6) :
args, keywords, starargs, kwargs = [], [], None, None
else :
args, keywords, starargs, kwargs = self.do(st[-2])
if st[1].text == ':' :
net = st[3].text
asname = st[0].text
else :
net = st[0].text
asname = None
return self.ST.AbcdInstance(lineno=st.srow, col_offset=st.scol,
net=net, asname=asname, args=args,
keywords=keywords, starargs=starargs,
kwargs=kwargs)
def do_abcd_net (self, st, ctx) :
"""abcd_net: 'net' NAME parameters ':' abcd_suite
-> ast.AbcdNet
<<< net Foo () : [True]
... [False]
"AbcdSpec(context=[AbcdNet(name='Foo', args=arguments(args=[], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=AbcdSpec(context=[], body=AbcdAction(accesses=[], guard=True), asserts=[]))], body=AbcdAction(accesses=[], guard=False), asserts=[])"
<<< net Foo (x, y) : [True]
... [False]
"AbcdSpec(context=[AbcdNet(name='Foo', args=arguments(args=[arg(arg='x', annotation=None), arg(arg='y', annotation=None)], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=AbcdSpec(context=[], body=AbcdAction(accesses=[], guard=True), asserts=[]))], body=AbcdAction(accesses=[], guard=False), asserts=[])"
"""
params = self.do(st[2])
return self.ST.AbcdNet(lineno=st.srow, col_offset=st.scol,
name=st[1].text,
args=params,
body=self.do(st[4]))
def do_abcd_task (self, st, ctx) :
"""abcd_task: 'task' NAME typelist '-' '>' typelist ':' abcd_suite
-> ast.AbcdTask
<<< task Foo (int) -> () : [True]
... [False]
"AbcdSpec(context=[AbcdTask(name='Foo', body=AbcdSpec(context=[], body=AbcdAction(accesses=[], guard=True), asserts=[]), input=[NamedType(name='int')], output=[])], body=AbcdAction(accesses=[], guard=False), asserts=[])"
"""
return self.ST.AbcdTask(lineno=st.srow, col_offset=st.scol,
name=st[1].text,
body=self.do(st[-1]),
input=self.do(st[2]),
output=self.do(st[5]))
def do_typelist (self, st, ctx) :
"""typelist: '(' [abcd_type (',' abcd_type)*] ')'
-> ast.abcdtype*
<<< task Foo () -> (int, int, int|bool) : [True]
... [False]
"AbcdSpec(context=[AbcdTask(name='Foo', body=AbcdSpec(context=[], body=AbcdAction(accesses=[], guard=True), asserts=[]), input=[], output=[NamedType(name='int'), NamedType(name='int'), UnionType(types=[NamedType(name='int'), NamedType(name='bool')])])], body=AbcdAction(accesses=[], guard=False), asserts=[])"
"""
return [self.do(child) for child in st[1:-1:2]]
def do_abcd_suite (self, st, ctx) :
"""abcd_suite: abcd_expr | NEWLINE INDENT abcd_spec DEDENT
-> ast.AbcdSpec
<<< net Foo () :
... [True]
... [False]
"AbcdSpec(context=[AbcdNet(name='Foo', args=arguments(args=[], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=AbcdSpec(context=[], body=AbcdAction(accesses=[], guard=True), asserts=[]))], body=AbcdAction(accesses=[], guard=False), asserts=[])"
<<< net Foo () : ([True]
... + [False])
... [False]
"AbcdSpec(context=[AbcdNet(name='Foo', args=arguments(args=[], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=AbcdSpec(context=[], body=AbcdFlowOp(left=AbcdAction(accesses=[], guard=True), op=Choice(), right=AbcdAction(accesses=[], guard=False)), asserts=[]))], body=AbcdAction(accesses=[], guard=False), asserts=[])"
"""
if len(st) == 1 :
return self.ST.AbcdSpec(lineno=st.srow, col_offset=st.scol,
context=[],
body=self.do(st[0]))
else :
return self.do(st[2])
def do_abcd_buffer (self, st, ctx) :
"""[ decorators ] 'buffer' NAME ['[' test ']'] ':' abcd_type '=' testlist
-> ast.AbcdBuffer
<<< buffer foo : int = ()
... [True]
"AbcdSpec(context=[AbcdBuffer(name='foo', type=NamedType(name='int'), capacity=None, content=Tuple(elts=[], ctx=Load()))], body=AbcdAction(accesses=[], guard=True), asserts=[])"
<<< buffer foo : int = 1, 2
... [True]
"AbcdSpec(context=[AbcdBuffer(name='foo', type=NamedType(name='int'), capacity=None, content=Tuple(elts=[Num(n=1), Num(n=2)], ctx=Load()))], body=AbcdAction(accesses=[], guard=True), asserts=[])"
<<< buffer foo : int|bool = ()
... [True]
"AbcdSpec(context=[AbcdBuffer(name='foo', type=UnionType(types=[NamedType(name='int'), NamedType(name='bool')]), capacity=None, content=Tuple(elts=[], ctx=Load()))], body=AbcdAction(accesses=[], guard=True), asserts=[])"
<<< @capacity(max=5)
... buffer foo : int = ()
... [True]
"AbcdSpec(context=[AbcdBuffer(name='foo', type=NamedType(name='int'), capacity=[None, Num(n=5)], content=Tuple(elts=[], ctx=Load()))], body=AbcdAction(accesses=[], guard=True), asserts=[])"
<<< @capacity(min=2)
... buffer foo : int = ()
... [True]
"AbcdSpec(context=[AbcdBuffer(name='foo', type=NamedType(name='int'), capacity=[Num(n=2), None], content=Tuple(elts=[], ctx=Load()))], body=AbcdAction(accesses=[], guard=True), asserts=[])"
<<< @capacity(min=2, max=5)
... buffer foo : int = ()
... [True]
"AbcdSpec(context=[AbcdBuffer(name='foo', type=NamedType(name='int'), capacity=[Num(n=2), Num(n=5)], content=Tuple(elts=[], ctx=Load()))], body=AbcdAction(accesses=[], guard=True), asserts=[])"
"""
if len(st) == 6 : # no decorator, no array
return self.ST.AbcdBuffer(lineno=st.srow, col_offset=st.scol,
name=st[1].text,
type=self.do(st[3]),
capacity=None,
content=self.do(st[-1]))
elif len(st) == 7 : # decorator, no array
deco = self.do_buffer_decorators(st[0], ctx)
return self.ST.AbcdBuffer(lineno=st.srow, col_offset=st.scol,
name=st[2].text,
type=self.do(st[4]),
capacity=deco["capacity"],
content=self.do(st[-1]))
else :
raise ParseError(st.text,
reason="arrays not (yet) supported")
def do_buffer_decorators (self, st, ctx) :
deco = {}
for child in st :
tree = self.do(child)
if isinstance(tree, self.ST.Call) and tree.func.id == "capacity" :
if tree.args or tree.starargs or tree.kwargs :
raise ParseError(child, reason="invalid parameters")
min, max = None, None
for kw in tree.keywords :
if kw.arg == "min" :
min = kw.value
elif kw.arg == "max" :
max = kw.value
else :
raise ParseError(child,
reason="invalid parameter %r" % kw.arg)
if min or max :
deco["capacity"] = [min, max]
else :
deco["capacity"] = None
continue
raise ParseError(child, reason="invalid buffer decorator")
return deco
def do_abcd_typedef (self, st, ctx) :
"""abcd_typedef: 'typedef' NAME ':' abcd_type
-> ast.AbcdTypedef
<<< typedef foo : int
... [True]
"AbcdSpec(context=[AbcdTypedef(name='foo', type=NamedType(name='int'))], body=AbcdAction(accesses=[], guard=True), asserts=[])"
"""
return self.ST.AbcdTypedef(lineno=st.srow, col_offset=st.scol,
name=st[1].text,
type=self.do(st[3]))
def do_abcd_type (self, st, ctx) :
"""abcd_type: abcd_and_type ('|' abcd_and_type)*
-> snakes.typing.Type
<<< typedef foo : int
... [True]
"AbcdSpec(context=[AbcdTypedef(name='foo', type=NamedType(name='int'))], body=AbcdAction(accesses=[], guard=True), asserts=[])"
<<< typedef foo : int | bool
... [True]
"AbcdSpec(context=[AbcdTypedef(name='foo', type=UnionType(types=[NamedType(name='int'), NamedType(name='bool')]))], body=AbcdAction(accesses=[], guard=True), asserts=[])"
"""
if len(st) == 1 :
return self.do(st[0])
else :
return self.ST.UnionType(lineno=st.srow, col_offset=st.scol,
types=[self.do(child) for child in st[::2]])
def do_abcd_and_type (self, st, ctx) :
"""abcd_and_type: abcd_cross_type ('&' abcd_cross_type)*
-> snakes.typing.Type
<<< typedef foo : int
... [True]
"AbcdSpec(context=[AbcdTypedef(name='foo', type=NamedType(name='int'))], body=AbcdAction(accesses=[], guard=True), asserts=[])"
<<< typedef foo : int & bool
... [True]
"AbcdSpec(context=[AbcdTypedef(name='foo', type=IntersectionType(types=[NamedType(name='int'), NamedType(name='bool')]))], body=AbcdAction(accesses=[], guard=True), asserts=[])"
"""
if len(st) == 1 :
return self.do(st[0])
else :
return self.ST.IntersectionType(lineno=st.srow, col_offset=st.scol,
types=[self.do(child) for child in st[::2]])
def do_abcd_cross_type (self, st, ctx) :
"""abcd_cross_type: abcd_base_type ('*' abcd_base_type)*
-> snakes.typing.Type
<<< typedef foo : int
... [True]
"AbcdSpec(context=[AbcdTypedef(name='foo', type=NamedType(name='int'))], body=AbcdAction(accesses=[], guard=True), asserts=[])"
<<< typedef foo : int * bool
... [True]
"AbcdSpec(context=[AbcdTypedef(name='foo', type=CrossType(types=[NamedType(name='int'), NamedType(name='bool')]))], body=AbcdAction(accesses=[], guard=True), asserts=[])"
"""
if len(st) == 1 :
return self.do(st[0])
else :
return self.ST.CrossType(lineno=st.srow, col_offset=st.scol,
types=[self.do(child) for child in st[::2]])
def do_abcd_base_type (self, st, ctx) :
"""abcd_base_type: (NAME ['(' abcd_type (',' abcd_type)* ')']
| 'enum' '(' test (',' test)* ')' | '(' abcd_type ')')
-> snakes.typing.Type
<<< typedef foo : list(int)
... [True]
"AbcdSpec(context=[AbcdTypedef(name='foo', type=ListType(items=NamedType(name='int')))], body=AbcdAction(accesses=[], guard=True), asserts=[])"
<<< typedef foo : set(int)
... [True]
"AbcdSpec(context=[AbcdTypedef(name='foo', type=SetType(items=NamedType(name='int')))], body=AbcdAction(accesses=[], guard=True), asserts=[])"
<<< typedef foo : dict(int, bool)
... [True]
"AbcdSpec(context=[AbcdTypedef(name='foo', type=DictType(keys=NamedType(name='int'), values=NamedType(name='bool')))], body=AbcdAction(accesses=[], guard=True), asserts=[])"
<<< typedef foo : enum(1, 2, 3)
... [True]
"AbcdSpec(context=[AbcdTypedef(name='foo', type=EnumType(items=[Num(n=1), Num(n=2), Num(n=3)]))], body=AbcdAction(accesses=[], guard=True), asserts=[])"
<<< typedef foo : int
... [True]
"AbcdSpec(context=[AbcdTypedef(name='foo', type=NamedType(name='int'))], body=AbcdAction(accesses=[], guard=True), asserts=[])"
"""
if len(st) == 1 :
return self.ST.NamedType(lineno=st.srow, col_offset=st.scol,
name=st[0].text)
elif len(st) == 3 :
return self.do(st[1])
elif st[0].text in ("list", "set", "tuple") :
if len(st) > 4 :
raise ParseError(st.text,
reason="too many arguments for %s type"
% st[0].text)
if st[0].text == "list" :
tree = self.ST.ListType
elif st[0].text == "tuple" :
tree = self.ST.TupleType
else :
tree = self.ST.SetType
return tree(lineno=st.srow, col_offset=st.scol,
items=self.do(st[2]))
elif st[0].text == "dict" :
if len(st) > 6 :
raise ParseError(st.text,
reason="too many arguments for dict type")
return self.ST.DictType(lineno=st.srow, col_offset=st.scol,
keys=self.do(st[2]),
values=self.do(st[4]))
elif st[0].text == "enum" :
return self.ST.EnumType(lineno=st.srow, col_offset=st.scol,
items=[self.do(child) for child in st[2:-1:2]])
else :
raise ParseError(st[0].text,
reason=("expected 'enum', 'list', 'set' or"
" 'dict' but found '%s'") % st[0].text)
def do_tfpdef (self, st, ctx) :
"""tfpdef: NAME [':' ('net' | 'buffer' | 'task')]
-> str, ast.AST?
<<< net Foo (x, y, n:net, b:buffer, t:task) : [True]
... [False]
"AbcdSpec(context=[AbcdNet(name='Foo', args=arguments(args=[arg(arg='x', annotation=None), arg(arg='y', annotation=None), arg(arg='n', annotation=Name(id='net', ctx=Load())), arg(arg='b', annotation=Name(id='buffer', ctx=Load())), arg(arg='t', annotation=Name(id='task', ctx=Load()))], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=AbcdSpec(context=[], body=AbcdAction(accesses=[], guard=True), asserts=[]))], body=AbcdAction(accesses=[], guard=False), asserts=[])"
"""
if len(st) == 1 :
return st[0].text, None
else :
return st[0].text, self.ST.Name(lineno=st[2].srow,
col_offset=st[2].scol,
id=st[2].text,
ctx=ctx())
parse = Translator.parse
if __name__ == "__main__" :
testparser(Translator)
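The `do_*` handlers above all rely on the same flattened parse-tree convention: a node's children alternate grammar symbols and separator tokens, so a step-2 slice recovers just the symbols. A toy sketch of that slicing (plain lists stand in for the real parse-tree class):

```python
# Toy illustration of the st[::2] convention used by do_abcd_type and
# friends: children alternate symbol, separator, symbol, ...
children = ["int", "|", "bool", "|", "str"]  # flattened 'abcd_type' node
symbols = children[::2]       # keep only the abcd_and_type children
print(symbols)                # ['int', 'bool', 'str']

# do_typelist uses st[1:-1:2] instead, which additionally drops the
# surrounding parentheses before skipping the ',' separators
args = ["(", "int", ",", "bool", ")"]
print(args[1:-1:2])           # ['int', 'bool']
```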
import sys, datetime
from snakes.lang.pylib import asdl
from collections import defaultdict
from functools import partial
class memoize(object):
def __init__(self, function):
self.function = function
self.memoized = {}
def __call__(self, *args):
try:
return self.memoized[args]
except KeyError:
call = self.function(*args)
self.memoized[args] = call
return call
def __get__(self, obj, objtype):
"""Support instance methods."""
return partial(self.__call__, obj)
def component_has_cycle(node, graph, proceeding, visited):
if node in visited:
return False
if node in proceeding:
proceeding.append(node) # populate trace
return True
proceeding.append(node)
if node in graph:
for successor in graph[node]:
if component_has_cycle(successor, graph, proceeding, visited):
return True
proceeding.remove(node)
visited.add(node)
return False
def has_cycle(graph):
visited = set()
proceeding = list()
todo = set(graph.keys())
while todo:
node = todo.pop()
if component_has_cycle(node, graph, proceeding, visited):
i = proceeding.index(proceeding[-1])
return proceeding[i:]
todo.difference_update(visited)
return []
class CyclicDependencies(Exception):
def __init__(self, seq):
self.seq = seq
def __str__(self):
return "cyclic dependencies: {}".format(" -> ".join(self.seq))
def remove_duplicates(l):
d = {}
nl = []
for e in l:
if e not in d:
d[e] = 1
nl.append(e)
return nl
class CodeGen (asdl.VisitorBase) :
def __init__(self, node):
asdl.VisitorBase.__init__(self)
self.starting_node = None
self.current_node = None
self.hierarchy = defaultdict(list)
self.hierarchy['_AST'] = []
self.fields = defaultdict(list)
self.attributes = defaultdict(list)
self.code = defaultdict(list)
self.visit(node)
ret = has_cycle(self.hierarchy)
if ret:
raise CyclicDependencies(ret)
self._gen_code(node)
def visitModule(self, node):
for name, child in node.types.items():
if not self.starting_node:
self.starting_node = str(name)
self.current_node = str(name)
self.hierarchy[name]
self.visit(child)
def visitSum(self, node):
if hasattr(node, "fields"):
self.fields[self.current_node] = node.fields
else:
self.fields[self.current_node] = []
if hasattr(node, "attributes"):
self.attributes[self.current_node] = node.attributes
else:
self.attributes[self.current_node] = []
for child in node.types:
self.visit(child)
def visitConstructor (self, node):
if str(node.name) in self.fields:
# a constructor with this name was already seen: keep the first, skip the duplicate
return
self.fields[str(node.name)].extend(node.fields)
self.hierarchy[str(node.name)].append(self.current_node)
def visitProduct(self, node):
self.fields[self.current_node].extend(node.fields)
@memoize
def _get_fields(self, name):
if name in self.fields:
fields = map(lambda f : f.name, self.fields[name])
for parent in self.hierarchy[name]:
fields.extend(self._get_fields(parent))
return fields
else:
return []
@memoize
def _get_attributes(self, name):
if name == '_AST':
return []
attributes = map(lambda a : a.name, self.attributes[name])
for parent in self.hierarchy[name]:
attributes.extend(self._get_attributes(parent))
return attributes
def _gen_code(self, node):
is_methods = []
for name in sorted(self.hierarchy):
if name != '_AST':
is_methods.extend(["",
"def is{!s}(self):".format(name),
["return False"]
])
cls = ["class _AST (ast.AST):",
["_fields = ()",
"_attributes = ()",
"",
"def __init__ (self, **ARGS):",
["ast.AST.__init__(self)",
"for k, v in ARGS.items():",
["setattr(self, k, v)"]
]
] + is_methods
]
self.code['_AST'] = cls
for name, parents in self.hierarchy.iteritems():
if name == '_AST':
continue
if not parents:
parents = ['_AST']
fields = self.fields[name]
args = []
assign = []
body = []
_fields = remove_duplicates(self._get_fields(name))
_attributes = remove_duplicates(self._get_attributes(name))
body = []
cls = ["class {!s} ({!s}):".format(name, ", ".join(parents)), body]
non_default_args = []
default_args = []
for f in fields:
if f.name.value == 'ctx':
f.opt = True
if f.opt:
default_args.append("{!s}=None".format(f.name))
assign.append("self.{0!s} = {0!s}".format(f.name))
elif f.seq:
default_args.append("{!s}=[]".format(f.name))
assign.append("self.{0!s} = list({0!s})".format(f.name))
else:
non_default_args.append("{!s}".format(f.name))
assign.append("self.{0!s} = {0!s}".format(f.name))
args = non_default_args + default_args
body.append("_fields = {!r}".format( tuple(map(repr, _fields))))
body.append("_attributes = {!r}".format( tuple(map(repr, _attributes))))
body.append("")
# ctor
args_str = ", ".join(args)
if args_str != "":
args_str += ", "
body.append("def __init__ (self, {!s} **ARGS):".format(args_str))
ctor_body = []
body.append(ctor_body)
ctor_body.extend(map(lambda base : "{!s}.__init__(self, **ARGS)".format(base), parents))
ctor_body.extend(assign)
body.extend(["", "def is{}(self):".format(name), ["return True"]])
self.code[name] = cls
@memoize
def _cost(self, name):
# print "call cost {}".format(name)
if name == '_AST':
return 0
parents = self.hierarchy[name]
return reduce(lambda acc, x: acc + self._cost(x), parents, 1)
@property
def python(self):
classes = self.hierarchy.keys()
classes.sort(lambda a, b: self._cost(a) - self._cost(b))
code = ["from snakes.lang import ast",
"from ast import *",
""]
for cls in classes:
code.extend(self.code[cls])
code.append("")
def python (code, indent) :
for line in code :
if isinstance(line, str) :
yield (4*indent) * " " + line
else :
for sub in python(line, indent+1) :
yield sub
return "\n".join(python(code, 0))
def compile_asdl(infilename, outfilename):
""" Helper function to compile asdl files. """
infile = open(infilename, 'r')
outfile = open(outfilename, 'w')
scanner = asdl.ASDLScanner()
parser = asdl.ASDLParser()
tokens = scanner.tokenize(infile.read())
node = parser.parse(tokens)
outfile.write(("# this file has been automatically generated running:\n"
"# %s\n# timestamp: %s\n\n") % (" ".join(sys.argv),
datetime.datetime.now()))
outfile.write(CodeGen(node).python)
outfile.close()
infile.close()
if __name__ == "__main__":
# a simple CLI
import getopt
outfile = sys.stdout
try :
opts, args = getopt.getopt(sys.argv[1:], "ho:",
["help", "output="])
if ("-h", "") in opts or ("--help", "") in opts :
opts = [("-h", "")]
args = [None]
elif not args :
raise getopt.GetoptError("no input file provided"
" (try -h to get help)")
elif len(args) > 1 :
raise getopt.GetoptError("more than one input file provided")
except getopt.GetoptError :
sys.stderr.write("%s: %s\n" % (__file__, sys.exc_info()[1]))
sys.exit(1)
for (flag, arg) in opts :
if flag in ("-h", "--help") :
print("""usage: %s [OPTIONS] INFILE
Options:
-h, --help print this help and exit
--output=OUTPUT set output file""" % __file__)
sys.exit(0)
elif flag in ("-o", "--output") :
outfile = open(arg, "w")
scanner = asdl.ASDLScanner()
parser = asdl.ASDLParser()
tokens = scanner.tokenize(open(args[0]).read())
node = parser.parse(tokens)
outfile.write(("# this file has been automatically generated running:\n"
"# %s\n# timestamp: %s\n\n") % (" ".join(sys.argv),
datetime.datetime.now()))
try:
outfile.write(CodeGen(node).python)
except CyclicDependencies as cycle:
msg = "[E] {!s}".format(cycle)
outfile.write(msg)
if outfile != sys.stdout:
print >> sys.stderr, msg
outfile.close()
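The `memoize` helper at the top of this file caches by positional arguments and, thanks to `__get__`, also works on instance methods: the bound instance becomes part of the cache key, exactly as in the `_get_fields`/`_cost` uses above. A self-contained sketch of that behaviour (the `Fib` class is made up for illustration; the `memoize` copy below mirrors the one defined in this file):

```python
from functools import partial

class memoize(object):
    # mirrors the memoize helper defined above
    def __init__(self, function):
        self.function = function
        self.memoized = {}
    def __call__(self, *args):
        try:
            return self.memoized[args]
        except KeyError:
            call = self.function(*args)
            self.memoized[args] = call
            return call
    def __get__(self, obj, objtype):
        # bind the instance so methods are cached per (instance, args)
        return partial(self.__call__, obj)

calls = []

class Fib(object):
    @memoize
    def fib(self, n):
        calls.append(n)              # record each *uncached* evaluation
        return n if n < 2 else self.fib(n - 1) + self.fib(n - 2)

f = Fib()
print(f.fib(10))   # 55; each n in 0..10 is computed exactly once
print(len(calls))  # 11
```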
"""Backport of Python 2.6 'ast' module.
"""
from _ast import *
def parse(expr, filename='<unknown>', mode='exec'):
"""
Parse an expression into an AST node.
Equivalent to compile(expr, filename, mode, PyCF_ONLY_AST).
"""
return compile(expr, filename, mode, PyCF_ONLY_AST)
def literal_eval(node_or_string):
"""
Safely evaluate an expression node or a string containing a Python
expression. The string or node provided may only consist of the following
Python literal structures: strings, numbers, tuples, lists, dicts, booleans,
and None.
"""
_safe_names = {'None': None, 'True': True, 'False': False}
if isinstance(node_or_string, basestring):
node_or_string = parse(node_or_string, mode='eval')
if isinstance(node_or_string, Expression):
node_or_string = node_or_string.body
def _convert(node):
if isinstance(node, Str):
return node.s
elif isinstance(node, Num):
return node.n
elif isinstance(node, Tuple):
return tuple(map(_convert, node.elts))
elif isinstance(node, List):
return list(map(_convert, node.elts))
elif isinstance(node, Dict):
return dict((_convert(k), _convert(v)) for k, v
in zip(node.keys, node.values))
elif isinstance(node, Name):
if node.id in _safe_names:
return _safe_names[node.id]
raise ValueError('malformed string')
return _convert(node_or_string)
def dump(node, annotate_fields=True, include_attributes=False):
"""
Return a formatted dump of the tree in *node*. This is mainly useful for
debugging purposes. The returned string will show the names and the values
for fields. This makes the code impossible to evaluate, so if evaluation is
wanted *annotate_fields* must be set to False. Attributes such as line
numbers and column offsets are not dumped by default. If this is wanted,
*include_attributes* can be set to True.
"""
def _format(node):
if isinstance(node, AST):
fields = [(a, _format(b)) for a, b in iter_fields(node)]
rv = '%s(%s' % (node.__class__.__name__, ', '.join(
('%s=%s' % field for field in fields)
if annotate_fields else
(b for a, b in fields)
))
if include_attributes and node._attributes:
rv += fields and ', ' or ' '
rv += ', '.join('%s=%s' % (a, _format(getattr(node, a)))
for a in node._attributes)
return rv + ')'
elif isinstance(node, list):
return '[%s]' % ', '.join(_format(x) for x in node)
return repr(node)
if not isinstance(node, AST):
raise TypeError('expected AST, got %r' % node.__class__.__name__)
return _format(node)
def copy_location(new_node, old_node):
"""
Copy source location (`lineno` and `col_offset` attributes) from
*old_node* to *new_node* if possible, and return *new_node*.
"""
for attr in 'lineno', 'col_offset':
if attr in old_node._attributes and attr in new_node._attributes \
and hasattr(old_node, attr):
setattr(new_node, attr, getattr(old_node, attr))
return new_node
def fix_missing_locations(node):
"""
When you compile a node tree with compile(), the compiler expects lineno and
col_offset attributes for every node that supports them. This is rather
tedious to fill in for generated nodes, so this helper adds these attributes
recursively where not already set, by setting them to the values of the
parent node. It works recursively starting at *node*.
"""
def _fix(node, lineno, col_offset):
if 'lineno' in node._attributes:
if not hasattr(node, 'lineno'):
node.lineno = lineno
else:
lineno = node.lineno
if 'col_offset' in node._attributes:
if not hasattr(node, 'col_offset'):
node.col_offset = col_offset
else:
col_offset = node.col_offset
for child in iter_child_nodes(node):
_fix(child, lineno, col_offset)
_fix(node, 1, 0)
return node
def increment_lineno(node, n=1):
"""
Increment the line number of each node in the tree starting at *node* by *n*.
This is useful to "move code" to a different location in a file.
"""
if 'lineno' in node._attributes:
node.lineno = getattr(node, 'lineno', 0) + n
for child in walk(node):
if 'lineno' in child._attributes:
child.lineno = getattr(child, 'lineno', 0) + n
return node
def iter_fields(node):
"""
Yield a tuple of ``(fieldname, value)`` for each field in ``node._fields``
that is present on *node*.
"""
for field in node._fields:
try:
yield field, getattr(node, field)
except AttributeError:
pass
def iter_child_nodes(node):
"""
Yield all direct child nodes of *node*, that is, all fields that are nodes
and all items of fields that are lists of nodes.
"""
for name, field in iter_fields(node):
if isinstance(field, AST):
yield field
elif isinstance(field, list):
for item in field:
if isinstance(item, AST):
yield item
def get_docstring(node, clean=True):
"""
Return the docstring for the given node or None if no docstring can
be found. If the node provided does not have docstrings a TypeError
will be raised.
"""
if not isinstance(node, (FunctionDef, ClassDef, Module)):
raise TypeError("%r can't have docstrings" % node.__class__.__name__)
if node.body and isinstance(node.body[0], Expr) and \
isinstance(node.body[0].value, Str):
if clean:
import inspect
return inspect.cleandoc(node.body[0].value.s)
return node.body[0].value.s
def walk(node):
"""
Recursively yield all child nodes of *node*, in no specified order. This is
useful if you only want to modify nodes in place and don't care about the
context.
"""
from collections import deque
todo = deque([node])
while todo:
node = todo.popleft()
todo.extend(iter_child_nodes(node))
yield node
class NodeVisitor(object):
"""
A node visitor base class that walks the abstract syntax tree and calls a
visitor function for every node found. This function may return a value
which is forwarded by the `visit` method.
This class is meant to be subclassed, with the subclass adding visitor
methods.
Per default the visitor functions for the nodes are ``'visit_'`` +
class name of the node. So a `TryFinally` node visit function would
be `visit_TryFinally`. This behavior can be changed by overriding
the `visit` method. If no visitor function exists for a node
(return value `None`) the `generic_visit` visitor is used instead.
Don't use the `NodeVisitor` if you want to apply changes to nodes during
traversing. For this a special visitor exists (`NodeTransformer`) that
allows modifications.
"""
def visit(self, node):
"""Visit a node."""
method = 'visit_' + node.__class__.__name__
visitor = getattr(self, method, self.generic_visit)
return visitor(node)
def generic_visit(self, node):
"""Called if no explicit visitor function exists for a node."""
for field, value in iter_fields(node):
if isinstance(value, list):
for item in value:
if isinstance(item, AST):
self.visit(item)
elif isinstance(value, AST):
self.visit(value)
class NodeTransformer(NodeVisitor):
"""
A :class:`NodeVisitor` subclass that walks the abstract syntax tree and
allows modification of nodes.
The `NodeTransformer` will walk the AST and use the return value of the
visitor methods to replace or remove the old node. If the return value of
the visitor method is ``None``, the node will be removed from its location,
otherwise it is replaced with the return value. The return value may be the
original node in which case no replacement takes place.
Here is an example transformer that rewrites all occurrences of name lookups
(``foo``) to ``data['foo']``::
class RewriteName(NodeTransformer):
def visit_Name(self, node):
return copy_location(Subscript(
value=Name(id='data', ctx=Load()),
slice=Index(value=Str(s=node.id)),
ctx=node.ctx
), node)
Keep in mind that if the node you're operating on has child nodes you must
either transform the child nodes yourself or call the :meth:`generic_visit`
method for the node first.
For nodes that were part of a collection of statements (that applies to all
statement nodes), the visitor may also return a list of nodes rather than
just a single node.
Usually you use the transformer like this::
node = YourTransformer().visit(node)
"""
def generic_visit(self, node):
for field, old_value in iter_fields(node):
old_value = getattr(node, field, None)
if isinstance(old_value, list):
new_values = []
for value in old_value:
if isinstance(value, AST):
value = self.visit(value)
if value is None:
continue
elif not isinstance(value, AST):
new_values.extend(value)
continue
new_values.append(value)
old_value[:] = new_values
elif isinstance(old_value, AST):
new_node = self.visit(old_value)
if new_node is None:
delattr(node, field)
else:
setattr(node, field, new_node)
return node
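The `RewriteName` example quoted in the `NodeTransformer` docstring above can be run against the modern stdlib `ast` module almost verbatim; the sketch below adapts it, swapping the deprecated `Str`/`Index` nodes for `ast.Constant` and a plain expression slice (which assumes Python 3.9 or later):

```python
# RewriteName from the NodeTransformer docstring, adapted to modern Python:
# every loaded name `foo` becomes the subscript `data['foo']`.
import ast

class RewriteName(ast.NodeTransformer):
    def visit_Name(self, node):
        if isinstance(node.ctx, ast.Load):
            return ast.copy_location(
                ast.Subscript(value=ast.Name(id='data', ctx=ast.Load()),
                              slice=ast.Constant(value=node.id),
                              ctx=node.ctx),
                node)
        return node

tree = ast.parse("foo + bar", mode="eval")
new = ast.fix_missing_locations(RewriteName().visit(tree))
data = {"foo": 2, "bar": 3}
print(eval(compile(new, "<ast>", "eval")))  # 5
```

Note that `fix_missing_locations` is needed because the freshly built `Subscript` nodes have no line numbers of their own, just as the backported docstring for `fix_missing_locations` explains.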
"""Backport of Python 2.6 'ast' module.
AST classes are wrappers around classes in Python 2.5 '_ast' module.
The rest is copied from Python 2.6 'ast.py'.
"""
import inspect
import _ast
from _ast import PyCF_ONLY_AST
from _ast import AST
for name, cls in inspect.getmembers(_ast, inspect.isclass) :
if issubclass(cls, AST) :
if cls is AST :
continue
class _Ast (cls, AST) :
def __init__ (self, *larg, **karg) :
if len(larg) > 0 and len(larg) != len(self._fields) :
raise TypeError, ("%s constructor takes either 0 or "
"%u positional arguments"
% (self.__class__.__name__,
len(self._fields)))
for name, arg in zip(self._fields, larg) + karg.items() :
if name in self._fields :
setattr(self, name, arg)
try :
_Ast._fields = tuple(cls._fields)
except (AttributeError, TypeError) :
_Ast._fields = ()
_Ast.__name__ = name
globals()[name] = _Ast
del _Ast, cls, name
def literal_eval(node_or_string):
"""
Safely evaluate an expression node or a string containing a Python
expression. The string or node provided may only consist of the following
Python literal structures: strings, numbers, tuples, lists, dicts, booleans,
and None.
"""
_safe_names = {'None': None, 'True': True, 'False': False}
if isinstance(node_or_string, basestring):
node_or_string = parse(node_or_string, mode='eval')
if isinstance(node_or_string, Expression):
node_or_string = node_or_string.body
def _convert(node):
if isinstance(node, Str):
return node.s
elif isinstance(node, Num):
return node.n
elif isinstance(node, Tuple):
return tuple(map(_convert, node.elts))
elif isinstance(node, List):
return list(map(_convert, node.elts))
elif isinstance(node, Dict):
return dict((_convert(k), _convert(v)) for k, v
in zip(node.keys, node.values))
elif isinstance(node, Name):
if node.id in _safe_names:
return _safe_names[node.id]
raise ValueError('malformed string')
return _convert(node_or_string)
def dump(node, annotate_fields=True, include_attributes=False):
"""
Return a formatted dump of the tree in *node*. This is mainly useful for
debugging purposes. The returned string will show the names and the values
for fields. This makes the code impossible to evaluate, so if evaluation is
wanted *annotate_fields* must be set to False. Attributes such as line
numbers and column offsets are not dumped by default. If this is wanted,
*include_attributes* can be set to True.
"""
def _format(node):
if isinstance(node, AST):
fields = [(a, _format(b)) for a, b in iter_fields(node)]
rv = '%s(%s' % (node.__class__.__name__, ', '.join(
('%s=%s' % field for field in fields)
if annotate_fields else
(b for a, b in fields)
))
if include_attributes and node._attributes:
rv += fields and ', ' or ' '
rv += ', '.join('%s=%s' % (a, _format(getattr(node, a)))
for a in node._attributes)
return rv + ')'
elif isinstance(node, list):
return '[%s]' % ', '.join(_format(x) for x in node)
return repr(node)
if not isinstance(node, AST):
raise TypeError('expected AST, got %r' % node.__class__.__name__)
return _format(node)
def copy_location(new_node, old_node):
"""
Copy source location (`lineno` and `col_offset` attributes) from
*old_node* to *new_node* if possible, and return *new_node*.
"""
for attr in 'lineno', 'col_offset':
if attr in old_node._attributes and attr in new_node._attributes \
and hasattr(old_node, attr):
setattr(new_node, attr, getattr(old_node, attr))
return new_node
def fix_missing_locations(node):
"""
When you compile a node tree with compile(), the compiler expects lineno and
col_offset attributes for every node that supports them. This is rather
tedious to fill in for generated nodes, so this helper adds these attributes
recursively where not already set, by setting them to the values of the
parent node. It works recursively starting at *node*.
"""
def _fix(node, lineno, col_offset):
if 'lineno' in node._attributes:
if not hasattr(node, 'lineno'):
node.lineno = lineno
else:
lineno = node.lineno
if 'col_offset' in node._attributes:
if not hasattr(node, 'col_offset'):
node.col_offset = col_offset
else:
col_offset = node.col_offset
for child in iter_child_nodes(node):
_fix(child, lineno, col_offset)
_fix(node, 1, 0)
return node
def increment_lineno(node, n=1):
"""
Increment the line number of each node in the tree starting at *node* by *n*.
This is useful to "move code" to a different location in a file.
"""
if 'lineno' in node._attributes:
node.lineno = getattr(node, 'lineno', 0) + n
for child in walk(node):
if 'lineno' in child._attributes:
child.lineno = getattr(child, 'lineno', 0) + n
return node
def iter_fields(node):
"""
Yield a tuple of ``(fieldname, value)`` for each field in ``node._fields``
that is present on *node*.
"""
for field in node._fields:
try:
yield field, getattr(node, field)
except AttributeError:
pass
def iter_child_nodes(node):
"""
Yield all direct child nodes of *node*, that is, all fields that are nodes
and all items of fields that are lists of nodes.
"""
for name, field in iter_fields(node):
if isinstance(field, AST):
yield field
elif isinstance(field, list):
for item in field:
if isinstance(item, AST):
yield item
def get_docstring(node, clean=True):
"""
Return the docstring for the given node or None if no docstring can
be found. If the node provided does not have docstrings a TypeError
will be raised.
"""
if not isinstance(node, (FunctionDef, ClassDef, Module)):
raise TypeError("%r can't have docstrings" % node.__class__.__name__)
if node.body and isinstance(node.body[0], Expr) and \
isinstance(node.body[0].value, Str):
if clean:
return inspect.cleandoc(node.body[0].value.s)
return node.body[0].value.s
def walk(node):
"""
Recursively yield all child nodes of *node*, in no specified order. This is
useful if you only want to modify nodes in place and don't care about the
context.
"""
from collections import deque
todo = deque([node])
while todo:
node = todo.popleft()
todo.extend(iter_child_nodes(node))
yield node
class NodeVisitor(object):
"""
A node visitor base class that walks the abstract syntax tree and calls a
visitor function for every node found. This function may return a value
which is forwarded by the `visit` method.
This class is meant to be subclassed, with the subclass adding visitor
methods.
Per default the visitor functions for the nodes are ``'visit_'`` +
class name of the node. So a `TryFinally` node visit function would
be `visit_TryFinally`. This behavior can be changed by overriding
the `visit` method. If no visitor function exists for a node
(return value `None`) the `generic_visit` visitor is used instead.
Don't use the `NodeVisitor` if you want to apply changes to nodes during
traversing. For this a special visitor exists (`NodeTransformer`) that
allows modifications.
"""
def visit(self, node):
"""Visit a node."""
method = 'visit_' + node.__class__.__name__
visitor = getattr(self, method, self.generic_visit)
return visitor(node)
def generic_visit(self, node):
"""Called if no explicit visitor function exists for a node."""
for field, value in iter_fields(node):
if isinstance(value, list):
for item in value:
if isinstance(item, AST):
self.visit(item)
elif isinstance(value, AST):
self.visit(value)
class NodeTransformer(NodeVisitor):
"""
A :class:`NodeVisitor` subclass that walks the abstract syntax tree and
allows modification of nodes.
The `NodeTransformer` will walk the AST and use the return value of the
visitor methods to replace or remove the old node. If the return value of
the visitor method is ``None``, the node will be removed from its location,
otherwise it is replaced with the return value. The return value may be the
original node in which case no replacement takes place.
Here is an example transformer that rewrites all occurrences of name lookups
(``foo``) to ``data['foo']``::
class RewriteName(NodeTransformer):
def visit_Name(self, node):
return copy_location(Subscript(
value=Name(id='data', ctx=Load()),
slice=Index(value=Str(s=node.id)),
ctx=node.ctx
), node)
Keep in mind that if the node you're operating on has child nodes you must
either transform the child nodes yourself or call the :meth:`generic_visit`
method for the node first.
For nodes that were part of a collection of statements (that applies to all
statement nodes), the visitor may also return a list of nodes rather than
just a single node.
Usually you use the transformer like this::
node = YourTransformer().visit(node)
"""
def generic_visit(self, node):
for field, old_value in iter_fields(node):
old_value = getattr(node, field, None)
if isinstance(old_value, list):
new_values = []
for value in old_value:
if isinstance(value, AST):
value = self.visit(value)
if value is None:
continue
elif not isinstance(value, AST):
new_values.extend(value)
continue
new_values.append(value)
old_value[:] = new_values
elif isinstance(old_value, AST):
new_node = self.visit(old_value)
if new_node is None:
delattr(node, field)
else:
setattr(node, field, new_node)
return node
def _ast2ast (node) :
new = globals()[node.__class__.__name__]()
if not hasattr(node, "_fields") or node._fields is None :
node._fields = ()
for name, field in iter_fields(node) :
new_field = field
if field is None :
new_field = None
elif isinstance(field, AST) :
new_field = _ast2ast(field)
elif isinstance(field, list) :
new_field = []
for value in field :
if isinstance(value, AST) :
new_field.append(_ast2ast(value))
else :
new_field.append(value)
setattr(new, name, new_field)
copy_location(new, node)
return new
def parse(expr, filename='<unknown>', mode='exec'):
"""
Parse an expression into an AST node.
Equivalent to compile(expr, filename, mode, PyCF_ONLY_AST).
"""
return _ast2ast(compile(expr, filename, mode, PyCF_ONLY_AST))
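As a sketch of how `parse` behaves, using the standard `ast` module (whose interface this backport mirrors): it is shorthand for calling `compile` with the `PyCF_ONLY_AST` flag.

```python
import ast

# parse() is shorthand for compile(expr, filename, mode, PyCF_ONLY_AST)
tree = ast.parse("x + 1", mode="eval")
assert isinstance(tree, ast.Expression)
assert isinstance(tree.body, ast.BinOp)

# the two spellings produce equivalent trees
same = compile("x + 1", "<unknown>", "eval", ast.PyCF_ONLY_AST)
assert ast.dump(same) == ast.dump(tree)
```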
"""Backport of Python 2.6 'ast' module.
"""
import _ast
from _ast import *
try :
PyCF_ONLY_AST
except NameError :
# fall back to the documented value used by CPython's compile()
PyCF_ONLY_AST = 1024
def parse(expr, filename='<unknown>', mode='exec'):
"""
Parse an expression into an AST node.
Equivalent to compile(expr, filename, mode, PyCF_ONLY_AST).
"""
return compile(expr, filename, mode, PyCF_ONLY_AST)
def literal_eval(node_or_string):
"""
Safely evaluate an expression node or a string containing a Python
expression. The string or node provided may only consist of the following
Python literal structures: strings, numbers, tuples, lists, dicts, booleans,
and None.
"""
_safe_names = {'None': None, 'True': True, 'False': False}
if isinstance(node_or_string, basestring):
node_or_string = parse(node_or_string, mode='eval')
if isinstance(node_or_string, Expression):
node_or_string = node_or_string.body
def _convert(node):
if isinstance(node, Str):
return node.s
elif isinstance(node, Num):
return node.n
elif isinstance(node, Tuple):
return tuple(map(_convert, node.elts))
elif isinstance(node, List):
return list(map(_convert, node.elts))
elif isinstance(node, Dict):
return dict((_convert(k), _convert(v)) for k, v
in zip(node.keys, node.values))
elif isinstance(node, Name):
if node.id in _safe_names:
return _safe_names[node.id]
raise ValueError('malformed string')
return _convert(node_or_string)
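For illustration, `literal_eval` behaves like its stdlib counterpart: only literal structures are accepted, and anything else is rejected with `ValueError` rather than executed. A sketch using the standard `ast` module:

```python
import ast

# literal structures (strings, numbers, tuples, lists, dicts, booleans, None)
assert ast.literal_eval("[1, 2, {'a': (True, None)}]") == [1, 2, {'a': (True, None)}]

# arbitrary expressions are rejected, never evaluated
rejected = False
try:
    ast.literal_eval("__import__('os')")
except ValueError:
    rejected = True
assert rejected
```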
def dump(node, annotate_fields=True, include_attributes=False):
"""
Return a formatted dump of the tree in *node*. This is mainly useful for
debugging purposes. The returned string will show the names and the values
for fields. This makes the code impossible to evaluate, so if evaluation is
wanted *annotate_fields* must be set to False. Attributes such as line
numbers and column offsets are not dumped by default. If this is wanted,
*include_attributes* can be set to True.
"""
def _format(node):
if isinstance(node, AST):
fields = [(a, _format(b)) for a, b in iter_fields(node)]
rv = '%s(%s' % (node.__class__.__name__, ', '.join(
('%s=%s' % field for field in fields)
if annotate_fields else
(b for a, b in fields)
))
if include_attributes and node._attributes:
rv += fields and ', ' or ' '
rv += ', '.join('%s=%s' % (a, _format(getattr(node, a)))
for a in node._attributes)
return rv + ')'
elif isinstance(node, list):
return '[%s]' % ', '.join(_format(x) for x in node)
return repr(node)
if not isinstance(node, AST):
raise TypeError('expected AST, got %r' % node.__class__.__name__)
return _format(node)
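A quick sketch of what `dump` produces, using the stdlib `ast` module (the exact string can vary slightly between Python versions, so only stable properties are checked here):

```python
import ast

tree = ast.parse("x", mode="eval")
s = ast.dump(tree)
# with annotate_fields=True (the default) the field names are spelled out
assert s.startswith("Expression(body=Name(")
assert "id='x'" in s
# with annotate_fields=False only the values remain
assert "id=" not in ast.dump(tree, annotate_fields=False)
```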
def copy_location(new_node, old_node):
"""
Copy source location (`lineno` and `col_offset` attributes) from
*old_node* to *new_node* if possible, and return *new_node*.
"""
for attr in 'lineno', 'col_offset':
if attr in old_node._attributes and attr in new_node._attributes \
and hasattr(old_node, attr):
setattr(new_node, attr, getattr(old_node, attr))
return new_node
def fix_missing_locations(node):
"""
When you compile a node tree with compile(), the compiler expects lineno and
col_offset attributes for every node that supports them. This is rather
tedious to fill in for generated nodes, so this helper adds these attributes
recursively where not already set, using the values of the parent node,
starting at *node*.
"""
def _fix(node, lineno, col_offset):
if 'lineno' in node._attributes:
if not hasattr(node, 'lineno'):
node.lineno = lineno
else:
lineno = node.lineno
if 'col_offset' in node._attributes:
if not hasattr(node, 'col_offset'):
node.col_offset = col_offset
else:
col_offset = node.col_offset
for child in iter_child_nodes(node):
_fix(child, lineno, col_offset)
_fix(node, 1, 0)
return node
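The typical use of `fix_missing_locations` is compiling a hand-built tree. A minimal sketch with the stdlib `ast` module (this uses the modern `Constant` node rather than this backport's `Num`):

```python
import ast

# nodes built by hand carry no lineno/col_offset ...
tree = ast.Expression(body=ast.BinOp(left=ast.Constant(1), op=ast.Add(),
                                     right=ast.Constant(2)))
# ... which compile() requires, so fill them in from the parent defaults
ast.fix_missing_locations(tree)
assert eval(compile(tree, "<ast>", "eval")) == 3
```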
def increment_lineno(node, n=1):
"""
Increment the line number of each node in the tree starting at *node* by *n*.
This is useful to "move code" to a different location in a file.
"""
if 'lineno' in node._attributes:
node.lineno = getattr(node, 'lineno', 0) + n
for child in walk(node):
if 'lineno' in child._attributes:
child.lineno = getattr(child, 'lineno', 0) + n
return node
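A short illustration of `increment_lineno` with the stdlib `ast` module: every node's line number is shifted by *n*, which is what "moving code" within a file amounts to.

```python
import ast

tree = ast.parse("x = 1\ny = 2")
ast.increment_lineno(tree, n=10)   # "move" the code 10 lines down
assert tree.body[0].lineno == 11
assert tree.body[1].lineno == 12
```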
def iter_fields(node):
"""
Yield a tuple of ``(fieldname, value)`` for each field in ``node._fields``
that is present on *node*.
"""
for field in node._fields:
try:
yield field, getattr(node, field)
except AttributeError:
pass
def iter_child_nodes(node):
"""
Yield all direct child nodes of *node*, that is, all fields that are nodes
and all items of fields that are lists of nodes.
"""
for name, field in iter_fields(node):
if isinstance(field, AST):
yield field
elif isinstance(field, list):
for item in field:
if isinstance(item, AST):
yield item
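To make the relationship between the two iterators concrete, a sketch with the stdlib `ast` module: `iter_fields` yields every `(name, value)` pair, while `iter_child_nodes` flattens those values down to the direct AST children.

```python
import ast

node = ast.parse("x + y", mode="eval").body   # a BinOp node
# iter_fields yields (name, value) pairs for the fields present on the node
assert set(dict(ast.iter_fields(node))) == {"left", "op", "right"}
# iter_child_nodes flattens those fields into the direct AST children
children = list(ast.iter_child_nodes(node))
assert len(children) == 3   # Name, Add, Name
```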
def get_docstring(node, clean=True):
"""
Return the docstring for the given node, or None if no docstring can
be found. If the type of the node cannot have a docstring, a TypeError
is raised.
"""
if not isinstance(node, (FunctionDef, ClassDef, Module)):
raise TypeError("%r can't have docstrings" % node.__class__.__name__)
if node.body and isinstance(node.body[0], Expr) and \
isinstance(node.body[0].value, Str):
if clean:
import inspect
return inspect.cleandoc(node.body[0].value.s)
return node.body[0].value.s
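A brief sketch of `get_docstring` with the stdlib `ast` module, including the `TypeError` raised for node types that cannot carry a docstring:

```python
import ast

mod = ast.parse('def f():\n    "Return zero."\n    return 0')
assert ast.get_docstring(mod.body[0]) == "Return zero."

# nodes that cannot carry a docstring raise TypeError
raised = False
try:
    ast.get_docstring(mod.body[0].body[1])   # the return statement
except TypeError:
    raised = True
assert raised
```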
def walk(node):
"""
Recursively yield all child nodes of *node*, in no specified order. This is
useful if you only want to modify nodes in place and don't care about the
context.
"""
from collections import deque
todo = deque([node])
while todo:
node = todo.popleft()
todo.extend(iter_child_nodes(node))
yield node
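`walk` is the tool for whole-tree queries where order does not matter; a one-liner sketch with the stdlib `ast` module:

```python
import ast

tree = ast.parse("x + y * 2", mode="eval")
# walk yields every node in the tree, in no specified order
names = sorted(n.id for n in ast.walk(tree) if isinstance(n, ast.Name))
assert names == ["x", "y"]
```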
class NodeVisitor(object):
"""
A node visitor base class that walks the abstract syntax tree and calls a
visitor function for every node found. This function may return a value
which is forwarded by the `visit` method.
This class is meant to be subclassed, with the subclass adding visitor
methods.
By default the visitor function for a node is ``'visit_'`` plus the
class name of the node, so a `TryFinally` node is handled by a
`visit_TryFinally` method. This behavior can be changed by overriding
the `visit` method. If no visitor function exists for a node,
the `generic_visit` visitor is used instead.
Don't use `NodeVisitor` if you want to apply changes to nodes during
traversal. For that purpose a special visitor exists (`NodeTransformer`)
that allows modifications.
"""
def visit(self, node):
"""Visit a node."""
method = 'visit_' + node.__class__.__name__
visitor = getattr(self, method, self.generic_visit)
return visitor(node)
def generic_visit(self, node):
"""Called if no explicit visitor function exists for a node."""
for field, value in iter_fields(node):
if isinstance(value, list):
for item in value:
if isinstance(item, AST):
self.visit(item)
elif isinstance(value, AST):
self.visit(value)
class NodeTransformer(NodeVisitor):
"""
A :class:`NodeVisitor` subclass that walks the abstract syntax tree and
allows modification of nodes.
The `NodeTransformer` will walk the AST and use the return value of the
visitor methods to replace or remove the old node. If the return value of
the visitor method is ``None``, the node will be removed from its location,
otherwise it is replaced with the return value. The return value may be the
original node in which case no replacement takes place.
Here is an example transformer that rewrites all occurrences of name lookups
(``foo``) to ``data['foo']``::
class RewriteName(NodeTransformer):
def visit_Name(self, node):
return copy_location(Subscript(
value=Name(id='data', ctx=Load()),
slice=Index(value=Str(s=node.id)),
ctx=node.ctx
), node)
Keep in mind that if the node you're operating on has child nodes you must
either transform the child nodes yourself or call the :meth:`generic_visit`
method for the node first.
For nodes that were part of a collection of statements (that applies to all
statement nodes), the visitor may also return a list of nodes rather than
just a single node.
Usually you use the transformer like this::
node = YourTransformer().visit(node)
"""
def generic_visit(self, node):
for field, old_value in iter_fields(node):
old_value = getattr(node, field, None)
if isinstance(old_value, list):
new_values = []
for value in old_value:
if isinstance(value, AST):
value = self.visit(value)
if value is None:
continue
elif not isinstance(value, AST):
new_values.extend(value)
continue
new_values.append(value)
old_value[:] = new_values
elif isinstance(old_value, AST):
new_node = self.visit(old_value)
if new_node is None:
delattr(node, field)
else:
setattr(node, field, new_node)
return node
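The `RewriteName` transformer from the docstring above, run end to end with the stdlib `ast` module. Note this sketch assumes Python 3.9+, where a `Subscript` slice is a plain expression and `Constant` replaces this backport's `Str` node:

```python
import ast

class RewriteName(ast.NodeTransformer):
    """Rewrite every name lookup ``foo`` into ``data['foo']``."""
    def visit_Name(self, node):
        return ast.copy_location(
            ast.Subscript(value=ast.Name(id="data", ctx=ast.Load()),
                          slice=ast.Constant(value=node.id),
                          ctx=node.ctx),
            node)

tree = RewriteName().visit(ast.parse("foo + bar", mode="eval"))
ast.fix_missing_locations(tree)
result = eval(compile(tree, "<ast>", "eval"), {"data": {"foo": 1, "bar": 2}})
assert result == 3
```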
# this file has been automatically generated running:
# snakes/lang/asdl.py --output=snakes/lang/ctlstar/asdl.py snakes/lang/ctlstar/ctlstar.asdl
# timestamp: 2011-11-16 13:30:34.219385
from snakes.lang import ast
from ast import *
class _AST (ast.AST):
def __init__ (self, **ARGS):
ast.AST.__init__(self)
for k, v in ARGS.items():
setattr(self, k, v)
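Every generated class below follows the same pattern: `_fields` names the children, the constructor normalises list fields and coerces location attributes to `int`, and `_AST.__init__` passes extra keyword arguments through. A self-contained sketch of that pattern (the `Name` stand-in here is hypothetical, mirroring the generated classes rather than importing them):

```python
# Hypothetical minimal re-creation of the generated node pattern.
class _AST:
    _fields = ()
    _attributes = ()
    def __init__(self, **kw):
        # extra keyword arguments become attributes, as in _AST above
        for k, v in kw.items():
            setattr(self, k, v)

class Name(_AST):
    _fields = ("id", "ctx")
    _attributes = ("lineno", "col_offset")
    def __init__(self, id, ctx=None, lineno=0, col_offset=0, **kw):
        _AST.__init__(self, **kw)
        self.id = id
        self.ctx = ctx
        self.lineno = int(lineno)        # locations are coerced to int
        self.col_offset = int(col_offset)

n = Name("x", lineno="3")
assert (n.id, n.ctx, n.lineno) == ("x", None, 3)
```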
class expr_context (_AST):
pass
class Load (expr_context):
_fields = ()
_attributes = ()
class Store (expr_context):
_fields = ()
_attributes = ()
class Del (expr_context):
_fields = ()
_attributes = ()
class AugLoad (expr_context):
_fields = ()
_attributes = ()
class AugStore (expr_context):
_fields = ()
_attributes = ()
class Param (expr_context):
_fields = ()
_attributes = ()
class comprehension (_AST):
_fields = ('target', 'iter', 'ifs')
_attributes = ()
def __init__ (self, target, iter, ifs=[], **ARGS):
_AST.__init__(self, **ARGS)
self.target = target
self.iter = iter
self.ifs = list(ifs)
class arg (_AST):
_fields = ('arg', 'annotation')
_attributes = ()
def __init__ (self, arg, annotation=None, **ARGS):
_AST.__init__(self, **ARGS)
self.arg = arg
self.annotation = annotation
class operator (_AST):
pass
class Add (operator):
_fields = ()
_attributes = ()
class Sub (operator):
_fields = ()
_attributes = ()
class Mult (operator):
_fields = ()
_attributes = ()
class Div (operator):
_fields = ()
_attributes = ()
class Mod (operator):
_fields = ()
_attributes = ()
class Pow (operator):
_fields = ()
_attributes = ()
class LShift (operator):
_fields = ()
_attributes = ()
class RShift (operator):
_fields = ()
_attributes = ()
class BitOr (operator):
_fields = ()
_attributes = ()
class BitXor (operator):
_fields = ()
_attributes = ()
class BitAnd (operator):
_fields = ()
_attributes = ()
class FloorDiv (operator):
_fields = ()
_attributes = ()
class slice (_AST):
pass
class Slice (slice):
_fields = ('lower', 'upper', 'step')
_attributes = ()
def __init__ (self, lower=None, upper=None, step=None, **ARGS):
slice.__init__(self, **ARGS)
self.lower = lower
self.upper = upper
self.step = step
class ExtSlice (slice):
_fields = ('dims',)
_attributes = ()
def __init__ (self, dims=[], **ARGS):
slice.__init__(self, **ARGS)
self.dims = list(dims)
class Index (slice):
_fields = ('value',)
_attributes = ()
def __init__ (self, value, **ARGS):
slice.__init__(self, **ARGS)
self.value = value
class excepthandler (_AST):
pass
class ExceptHandler (excepthandler):
_fields = ('type', 'name', 'body')
_attributes = ('lineno', 'col_offset')
def __init__ (self, type=None, name=None, body=[], lineno=0, col_offset=0, **ARGS):
excepthandler.__init__(self, **ARGS)
self.type = type
self.name = name
self.body = list(body)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class arguments (_AST):
_fields = ('args', 'vararg', 'varargannotation', 'kwonlyargs', 'kwarg', 'kwargannotation', 'defaults', 'kw_defaults')
_attributes = ()
def __init__ (self, args=[], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[], **ARGS):
_AST.__init__(self, **ARGS)
self.args = list(args)
self.vararg = vararg
self.varargannotation = varargannotation
self.kwonlyargs = list(kwonlyargs)
self.kwarg = kwarg
self.kwargannotation = kwargannotation
self.defaults = list(defaults)
self.kw_defaults = list(kw_defaults)
class ctlbinary (_AST):
pass
class boolop (ctlbinary):
_fields = ()
_attributes = ()
class Imply (ctlbinary):
_fields = ()
_attributes = ()
class Iff (ctlbinary):
_fields = ()
_attributes = ()
class Until (ctlbinary):
_fields = ()
_attributes = ()
class WeakUntil (ctlbinary):
_fields = ()
_attributes = ()
class Release (ctlbinary):
_fields = ()
_attributes = ()
class ctlunary (_AST):
pass
class notop (ctlunary):
_fields = ()
_attributes = ()
class All (ctlunary):
_fields = ()
_attributes = ()
class Exists (ctlunary):
_fields = ()
_attributes = ()
class Next (ctlunary):
_fields = ()
_attributes = ()
class Future (ctlunary):
_fields = ()
_attributes = ()
class Globally (ctlunary):
_fields = ()
_attributes = ()
class form (_AST):
pass
class atom (form):
_fields = ()
_attributes = ('lineno', 'col_offset')
def __init__ (self, lineno=0, col_offset=0, **ARGS):
form.__init__(self, **ARGS)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class CtlUnary (form):
_fields = ('op', 'child')
_attributes = ('lineno', 'col_offset')
def __init__ (self, op, child, lineno=0, col_offset=0, **ARGS):
form.__init__(self, **ARGS)
self.op = op
self.child = child
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class CtlBinary (form):
_fields = ('op', 'left', 'right')
_attributes = ('lineno', 'col_offset')
def __init__ (self, op, left, right, lineno=0, col_offset=0, **ARGS):
form.__init__(self, **ARGS)
self.op = op
self.left = left
self.right = right
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class unaryop (_AST):
pass
class Invert (unaryop):
_fields = ()
_attributes = ()
class notop (unaryop):
_fields = ()
_attributes = ()
class UAdd (unaryop):
_fields = ()
_attributes = ()
class USub (unaryop):
_fields = ()
_attributes = ()
class boolop (_AST):
pass
class And (boolop):
_fields = ()
_attributes = ()
class Or (boolop):
_fields = ()
_attributes = ()
class stmt (_AST):
pass
class FunctionDef (stmt):
_fields = ('name', 'args', 'body', 'decorator_list', 'returns')
_attributes = ('lineno', 'col_offset')
def __init__ (self, name, args, body=[], decorator_list=[], returns=None, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.name = name
self.args = args
self.body = list(body)
self.decorator_list = list(decorator_list)
self.returns = returns
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class ClassDef (stmt):
_fields = ('name', 'bases', 'keywords', 'starargs', 'kwargs', 'body', 'decorator_list')
_attributes = ('lineno', 'col_offset')
def __init__ (self, name, bases=[], keywords=[], starargs=None, kwargs=None, body=[], decorator_list=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.name = name
self.bases = list(bases)
self.keywords = list(keywords)
self.starargs = starargs
self.kwargs = kwargs
self.body = list(body)
self.decorator_list = list(decorator_list)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Return (stmt):
_fields = ('value',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, value=None, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.value = value
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Delete (stmt):
_fields = ('targets',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, targets=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.targets = list(targets)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Assign (stmt):
_fields = ('targets', 'value')
_attributes = ('lineno', 'col_offset')
def __init__ (self, value, targets=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.targets = list(targets)
self.value = value
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class AugAssign (stmt):
_fields = ('target', 'op', 'value')
_attributes = ('lineno', 'col_offset')
def __init__ (self, target, op, value, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.target = target
self.op = op
self.value = value
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class For (stmt):
_fields = ('target', 'iter', 'body', 'orelse')
_attributes = ('lineno', 'col_offset')
def __init__ (self, target, iter, body=[], orelse=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.target = target
self.iter = iter
self.body = list(body)
self.orelse = list(orelse)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class While (stmt):
_fields = ('test', 'body', 'orelse')
_attributes = ('lineno', 'col_offset')
def __init__ (self, test, body=[], orelse=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.test = test
self.body = list(body)
self.orelse = list(orelse)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class If (stmt):
_fields = ('test', 'body', 'orelse')
_attributes = ('lineno', 'col_offset')
def __init__ (self, test, body=[], orelse=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.test = test
self.body = list(body)
self.orelse = list(orelse)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class With (stmt):
_fields = ('context_expr', 'optional_vars', 'body')
_attributes = ('lineno', 'col_offset')
def __init__ (self, context_expr, optional_vars=None, body=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.context_expr = context_expr
self.optional_vars = optional_vars
self.body = list(body)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Raise (stmt):
_fields = ('exc', 'cause')
_attributes = ('lineno', 'col_offset')
def __init__ (self, exc=None, cause=None, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.exc = exc
self.cause = cause
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class TryExcept (stmt):
_fields = ('body', 'handlers', 'orelse')
_attributes = ('lineno', 'col_offset')
def __init__ (self, body=[], handlers=[], orelse=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.body = list(body)
self.handlers = list(handlers)
self.orelse = list(orelse)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class TryFinally (stmt):
_fields = ('body', 'finalbody')
_attributes = ('lineno', 'col_offset')
def __init__ (self, body=[], finalbody=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.body = list(body)
self.finalbody = list(finalbody)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Assert (stmt):
_fields = ('test', 'msg')
_attributes = ('lineno', 'col_offset')
def __init__ (self, test, msg=None, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.test = test
self.msg = msg
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Import (stmt):
_fields = ('names',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, names=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.names = list(names)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class ImportFrom (stmt):
_fields = ('module', 'names', 'level')
_attributes = ('lineno', 'col_offset')
def __init__ (self, module, names=[], level=None, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.module = module
self.names = list(names)
self.level = level
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Exec (stmt):
_fields = ('body', 'globals', 'locals')
_attributes = ('lineno', 'col_offset')
def __init__ (self, body, globals=None, locals=None, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.body = body
self.globals = globals
self.locals = locals
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Global (stmt):
_fields = ('names',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, names=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.names = list(names)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Nonlocal (stmt):
_fields = ('names',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, names=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.names = list(names)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Expr (stmt):
_fields = ('value',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, value, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.value = value
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Pass (stmt):
_fields = ()
_attributes = ('lineno', 'col_offset')
def __init__ (self, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Break (stmt):
_fields = ()
_attributes = ('lineno', 'col_offset')
def __init__ (self, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Continue (stmt):
_fields = ()
_attributes = ('lineno', 'col_offset')
def __init__ (self, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class notop (_AST):
pass
class Not (notop):
_fields = ()
_attributes = ()
class ctlstar (_AST):
pass
class Spec (ctlstar):
_fields = ('atoms', 'properties', 'main')
_attributes = ('lineno', 'col_offset')
def __init__ (self, atoms=[], properties=[], main=None, lineno=0, col_offset=0, **ARGS):
ctlstar.__init__(self, **ARGS)
self.atoms = list(atoms)
self.properties = list(properties)
self.main = main
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class atom (_AST):
pass
class InPlace (atom):
_fields = ('data', 'place')
_attributes = ('lineno', 'col_offset')
def __init__ (self, place, data=[], lineno=0, col_offset=0, **ARGS):
atom.__init__(self, **ARGS)
self.data = list(data)
self.place = place
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class NotInPlace (atom):
_fields = ('data', 'place')
_attributes = ('lineno', 'col_offset')
def __init__ (self, place, data=[], lineno=0, col_offset=0, **ARGS):
atom.__init__(self, **ARGS)
self.data = list(data)
self.place = place
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class EmptyPlace (atom):
_fields = ('place',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, place, lineno=0, col_offset=0, **ARGS):
atom.__init__(self, **ARGS)
self.place = place
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class MarkedPlace (atom):
_fields = ('place',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, place, lineno=0, col_offset=0, **ARGS):
atom.__init__(self, **ARGS)
self.place = place
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Deadlock (atom):
_fields = ()
_attributes = ('lineno', 'col_offset')
def __init__ (self, lineno=0, col_offset=0, **ARGS):
atom.__init__(self, **ARGS)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Boolean (atom):
_fields = ('val',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, val, lineno=0, col_offset=0, **ARGS):
atom.__init__(self, **ARGS)
self.val = val
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Instance (atom):
_fields = ('name', 'args')
_attributes = ('lineno', 'col_offset')
def __init__ (self, name, args=[], lineno=0, col_offset=0, **ARGS):
atom.__init__(self, **ARGS)
self.name = name
self.args = list(args)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Quantifier (atom):
_fields = ('op', 'vars', 'place', 'child', 'distinct')
_attributes = ('lineno', 'col_offset')
def __init__ (self, op, place, child, distinct, vars=[], lineno=0, col_offset=0, **ARGS):
atom.__init__(self, **ARGS)
self.op = op
self.vars = list(vars)
self.place = place
self.child = child
self.distinct = distinct
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class cmpop (_AST):
pass
class Eq (cmpop):
_fields = ()
_attributes = ()
class NotEq (cmpop):
_fields = ()
_attributes = ()
class Lt (cmpop):
_fields = ()
_attributes = ()
class LtE (cmpop):
_fields = ()
_attributes = ()
class Gt (cmpop):
_fields = ()
_attributes = ()
class GtE (cmpop):
_fields = ()
_attributes = ()
class Is (cmpop):
_fields = ()
_attributes = ()
class IsNot (cmpop):
_fields = ()
_attributes = ()
class In (cmpop):
_fields = ()
_attributes = ()
class NotIn (cmpop):
_fields = ()
_attributes = ()
class keyword (_AST):
_fields = ('arg', 'value')
_attributes = ()
def __init__ (self, arg, value, **ARGS):
_AST.__init__(self, **ARGS)
self.arg = arg
self.value = value
class ctlarg (_AST):
pass
class Place (ctlarg):
_fields = ('name', 'place')
_attributes = ('lineno', 'col_offset')
def __init__ (self, name, place, lineno=0, col_offset=0, **ARGS):
ctlarg.__init__(self, **ARGS)
self.name = name
self.place = place
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Token (ctlarg):
_fields = ('name', 'place')
_attributes = ('lineno', 'col_offset')
def __init__ (self, name, place, lineno=0, col_offset=0, **ARGS):
ctlarg.__init__(self, **ARGS)
self.name = name
self.place = place
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Argument (ctlarg):
_fields = ('name', 'value', 'type')
_attributes = ('lineno', 'col_offset')
def __init__ (self, name, value, type, lineno=0, col_offset=0, **ARGS):
ctlarg.__init__(self, **ARGS)
self.name = name
self.value = value
self.type = type
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class expr (_AST):
pass
class BoolOp (expr):
_fields = ('op', 'values')
_attributes = ('lineno', 'col_offset')
def __init__ (self, op, values=[], lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.op = op
self.values = list(values)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class BinOp (expr):
_fields = ('left', 'op', 'right')
_attributes = ('lineno', 'col_offset')
def __init__ (self, left, op, right, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.left = left
self.op = op
self.right = right
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class UnaryOp (expr):
_fields = ('op', 'operand')
_attributes = ('lineno', 'col_offset')
def __init__ (self, op, operand, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.op = op
self.operand = operand
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Lambda (expr):
_fields = ('args', 'body')
_attributes = ('lineno', 'col_offset')
def __init__ (self, args, body, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.args = args
self.body = body
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class IfExp (expr):
_fields = ('test', 'body', 'orelse')
_attributes = ('lineno', 'col_offset')
def __init__ (self, test, body, orelse, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.test = test
self.body = body
self.orelse = orelse
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Dict (expr):
_fields = ('keys', 'values')
_attributes = ('lineno', 'col_offset')
def __init__ (self, keys=[], values=[], lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.keys = list(keys)
self.values = list(values)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Set (expr):
_fields = ('elts',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, elts=[], lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.elts = list(elts)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class ListComp (expr):
_fields = ('elt', 'generators')
_attributes = ('lineno', 'col_offset')
def __init__ (self, elt, generators=[], lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.elt = elt
self.generators = list(generators)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class SetComp (expr):
_fields = ('elt', 'generators')
_attributes = ('lineno', 'col_offset')
def __init__ (self, elt, generators=[], lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.elt = elt
self.generators = list(generators)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class DictComp (expr):
_fields = ('key', 'value', 'generators')
_attributes = ('lineno', 'col_offset')
def __init__ (self, key, value, generators=[], lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.key = key
self.value = value
self.generators = list(generators)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class GeneratorExp (expr):
_fields = ('elt', 'generators')
_attributes = ('lineno', 'col_offset')
def __init__ (self, elt, generators=[], lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.elt = elt
self.generators = list(generators)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Yield (expr):
_fields = ('value',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, value=None, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.value = value
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Compare (expr):
_fields = ('left', 'ops', 'comparators')
_attributes = ('lineno', 'col_offset')
def __init__ (self, left, ops=[], comparators=[], lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.left = left
self.ops = list(ops)
self.comparators = list(comparators)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Call (expr):
_fields = ('func', 'args', 'keywords', 'starargs', 'kwargs')
_attributes = ('lineno', 'col_offset')
def __init__ (self, func, args=[], keywords=[], starargs=None, kwargs=None, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.func = func
self.args = list(args)
self.keywords = list(keywords)
self.starargs = starargs
self.kwargs = kwargs
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Num (expr):
_fields = ('n',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, n, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.n = n
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Str (expr):
_fields = ('s',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, s, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.s = s
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Ellipsis (expr):
_fields = ()
_attributes = ('lineno', 'col_offset')
def __init__ (self, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Attribute (expr):
_fields = ('value', 'attr', 'ctx')
_attributes = ('lineno', 'col_offset')
def __init__ (self, value, attr, ctx=None, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.value = value
self.attr = attr
self.ctx = ctx
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Subscript (expr):
_fields = ('value', 'slice', 'ctx')
_attributes = ('lineno', 'col_offset')
def __init__ (self, value, slice, ctx=None, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.value = value
self.slice = slice
self.ctx = ctx
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Starred (expr):
_fields = ('value', 'ctx')
_attributes = ('lineno', 'col_offset')
def __init__ (self, value, ctx=None, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.value = value
self.ctx = ctx
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Name (expr):
_fields = ('id', 'ctx')
_attributes = ('lineno', 'col_offset')
def __init__ (self, id, ctx=None, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.id = id
self.ctx = ctx
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class List (expr):
_fields = ('elts', 'ctx')
_attributes = ('lineno', 'col_offset')
def __init__ (self, elts=[], ctx=None, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.elts = list(elts)
self.ctx = ctx
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Tuple (expr):
_fields = ('elts', 'ctx')
_attributes = ('lineno', 'col_offset')
def __init__ (self, elts=[], ctx=None, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.elts = list(elts)
self.ctx = ctx
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class ctldecl (_AST):
pass
class Atom (ctldecl):
_fields = ('name', 'args', 'params', 'body')
_attributes = ('lineno', 'col_offset')
def __init__ (self, name, args=[], params=[], body=[], lineno=0, col_offset=0, **ARGS):
ctldecl.__init__(self, **ARGS)
self.name = name
self.args = list(args)
self.params = list(params)
self.body = list(body)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Property (ctldecl):
_fields = ('name', 'args', 'params', 'body')
_attributes = ('lineno', 'col_offset')
def __init__ (self, name, body, args=[], params=[], lineno=0, col_offset=0, **ARGS):
ctldecl.__init__(self, **ARGS)
self.name = name
self.args = list(args)
self.params = list(params)
self.body = body
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class alias (_AST):
_fields = ('name', 'asname')
_attributes = ()
def __init__ (self, name, asname=None, **ARGS):
_AST.__init__(self, **ARGS)
self.name = name
self.asname = asname
class ctlparam (_AST):
pass
class Parameter (ctlparam):
_fields = ('name', 'type')
_attributes = ('lineno', 'col_offset')
def __init__ (self, name, type, lineno=0, col_offset=0, **ARGS):
ctlparam.__init__(self, **ARGS)
self.name = name
self.type = type
self.lineno = int(lineno)
self.col_offset = int(col_offset)
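All of the generated classes above follow one pattern: positional arguments for the ASDL fields listed in _fields, plus lineno/col_offset attributes coerced to int, with remaining keywords forwarded to the base class. The following self-contained sketch reproduces that pattern for Num alone; the _AST base-class behavior shown here (keyword arguments stored as attributes) is an assumption for illustration, not the real snakes implementation.

```python
class _AST(object):
    # Assumed base-class behavior: keyword arguments become attributes.
    def __init__(self, **kwargs):
        for name, value in kwargs.items():
            setattr(self, name, value)

class expr(_AST):
    pass

class Num(expr):
    # Same shape as the generated class above: one ASDL field plus
    # the two position attributes, coerced to int.
    _fields = ('n',)
    _attributes = ('lineno', 'col_offset')
    def __init__(self, n, lineno=0, col_offset=0, **ARGS):
        expr.__init__(self, **ARGS)
        self.n = n
        self.lineno = int(lineno)
        self.col_offset = int(col_offset)

num = Num(42, lineno="3")   # lineno is coerced from str to int
print(num.n, num.lineno, num.col_offset)  # 42 3 0
```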
module CTLstar version "$Revision: 1 $"
{
ctlstar = Spec(ctldecl* atoms, ctldecl* properties, form? main)
attributes (int lineno, int col_offset)
ctldecl = Atom(identifier name, ctlarg* args, ctlparam* params,
stmt* body)
| Property(identifier name, ctlarg* args,
ctlparam* params, form body)
attributes (int lineno, int col_offset)
ctlarg = Place(identifier name, string place)
| Token(identifier name, string place)
| Argument(identifier name, expr value, identifier type)
attributes (int lineno, int col_offset)
ctlparam = Parameter(identifier name, identifier type)
attributes (int lineno, int col_offset)
form = atom
| CtlUnary(ctlunary op, form child)
| CtlBinary(ctlbinary op, form left, form right)
attributes (int lineno, int col_offset)
ctlunary = notop | All | Exists | Next | Future | Globally
notop = Not
ctlbinary = boolop | Imply | Iff | Until | WeakUntil | Release
atom = InPlace(expr* data, ctlarg place)
| NotInPlace(expr* data, ctlarg place)
| EmptyPlace(ctlarg place)
| MarkedPlace(ctlarg place)
| Deadlock
| Boolean(bool val)
| Instance(identifier name, arg* args)
| Quantifier(ctlunary op,
identifier* vars,
ctlarg place,
form child,
bool distinct)
attributes (int lineno, int col_offset)
--------------------------------------------------------------
-- the rest is copied from "snakes/lang/python/python.asdl" --
--------------------------------------------------------------
stmt = FunctionDef(identifier name, arguments args,
stmt* body, expr* decorator_list, expr? returns)
| ClassDef(identifier name,
expr* bases,
keyword* keywords,
expr? starargs,
expr? kwargs,
stmt* body,
expr* decorator_list)
| Return(expr? value)
| Delete(expr* targets)
| Assign(expr* targets, expr value)
| AugAssign(expr target, operator op, expr value)
| For(expr target, expr iter, stmt* body, stmt* orelse)
| While(expr test, stmt* body, stmt* orelse)
| If(expr test, stmt* body, stmt* orelse)
| With(expr context_expr, expr? optional_vars, stmt* body)
| Raise(expr? exc, expr? cause)
| TryExcept(stmt* body, excepthandler* handlers, stmt* orelse)
| TryFinally(stmt* body, stmt* finalbody)
| Assert(expr test, expr? msg)
| Import(alias* names)
| ImportFrom(identifier module, alias* names, int? level)
| Exec(expr body, expr? globals, expr? locals)
| Global(identifier* names)
| Nonlocal(identifier* names)
| Expr(expr value)
| Pass | Break | Continue
attributes (int lineno, int col_offset)
expr = BoolOp(boolop op, expr* values)
| BinOp(expr left, operator op, expr right)
| UnaryOp(unaryop op, expr operand)
| Lambda(arguments args, expr body)
| IfExp(expr test, expr body, expr orelse)
| Dict(expr* keys, expr* values)
| Set(expr* elts)
| ListComp(expr elt, comprehension* generators)
| SetComp(expr elt, comprehension* generators)
| DictComp(expr key, expr value, comprehension* generators)
| GeneratorExp(expr elt, comprehension* generators)
| Yield(expr? value)
| Compare(expr left, cmpop* ops, expr* comparators)
| Call(expr func, expr* args, keyword* keywords,
expr? starargs, expr? kwargs)
| Num(object n)
| Str(string s)
| Ellipsis
| Attribute(expr value, identifier attr, expr_context ctx)
| Subscript(expr value, slice slice, expr_context ctx)
| Starred(expr value, expr_context ctx)
| Name(identifier id, expr_context ctx)
| List(expr* elts, expr_context ctx)
| Tuple(expr* elts, expr_context ctx)
attributes (int lineno, int col_offset)
expr_context = Load | Store | Del | AugLoad | AugStore | Param
slice = Slice(expr? lower, expr? upper, expr? step)
| ExtSlice(slice* dims)
| Index(expr value)
boolop = And | Or
operator = Add | Sub | Mult | Div | Mod | Pow | LShift
| RShift | BitOr | BitXor | BitAnd | FloorDiv
unaryop = Invert | notop | UAdd | USub
cmpop = Eq | NotEq | Lt | LtE | Gt | GtE | Is | IsNot | In | NotIn
comprehension = (expr target, expr iter, expr* ifs)
excepthandler = ExceptHandler(expr? type, identifier? name, stmt* body)
attributes (int lineno, int col_offset)
arguments = (arg* args, identifier? vararg, expr? varargannotation,
arg* kwonlyargs, identifier? kwarg,
expr? kwargannotation, expr* defaults,
expr* kw_defaults)
arg = (identifier arg, expr? annotation)
keyword = (identifier arg, expr value)
alias = (identifier name, identifier? asname)
}
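To make the ASDL above concrete: a formula such as False => True becomes a CtlBinary node carrying an Imply operator and two Boolean atoms. The sketch below illustrates that shape with dataclass stand-ins; the names Boolean, Imply, and CtlBinary are copied from the ASDL, but these are not the real generated classes.

```python
from dataclasses import dataclass

@dataclass
class Boolean:       # atom = ... | Boolean(bool val)
    val: bool

@dataclass
class Imply:         # ctlbinary = ... | Imply | ...
    pass

@dataclass
class CtlBinary:     # form = ... | CtlBinary(ctlbinary op, form left, form right)
    op: object
    left: object
    right: object

# AST shape for 'False => True':
tree = CtlBinary(op=Imply(), left=Boolean(False), right=Boolean(True))
print(tree)
# CtlBinary(op=Imply(), left=Boolean(val=False), right=Boolean(val=True))
```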
# Grammar for (permissive) CTL*
# new tokens
$ELLIPSIS '...'
file_input: (NEWLINE | ctl_atomdef | ctl_propdef)* [ ctl_formula ] NEWLINE* ENDMARKER
ctl_atomdef: 'atom' NAME '(' [ctl_parameters] ')' ':' suite
ctl_propdef: 'prop' NAME '(' [ctl_parameters] ')' ':' ctl_suite
ctl_suite: ( ctl_formula NEWLINE
| NEWLINE INDENT ctl_formula NEWLINE+ DEDENT )
ctl_parameters: (ctl_param ',')* ctl_param
ctl_param: NAME ( '=' '@' STRING+ | ':' NAME )
ctl_formula: ctl_or_formula [ ctl_connector ctl_or_formula ]
ctl_connector: ( '=' '>' | '<=' '>' )
ctl_or_formula: ctl_and_formula ('or' ctl_and_formula)*
ctl_and_formula: ctl_not_formula ('and' ctl_not_formula)*
ctl_not_formula: ('not' ctl_not_formula | ctl_binary_formula)
ctl_binary_formula: ctl_unary_formula [ ctl_binary_op ctl_unary_formula ]
ctl_unary_formula: [ ctl_unary_op ] (ctl_atom_formula | '(' ctl_formula ')')
ctl_unary_op: ('A' | 'G' | 'F' | 'E' | 'X')
ctl_binary_op: ('R' | 'U' | 'W')
ctl_atom_formula: ( 'empty' '(' ctl_place ')'
| 'marked' '(' ctl_place ')'
| 'has' ['not'] '(' ctl_place ',' test (',' test)* ')'
| 'deadlock' | 'True' | 'False'
| NAME '(' ctl_arguments ')'
| 'forall' [ 'distinct' ] NAME (',' NAME)*
'in' ctl_place '(' ctl_atom_formula ')'
| 'exists' [ 'distinct' ] NAME (',' NAME)*
'in' ctl_place '(' ctl_atom_formula ')' )
ctl_arguments: (NAME '=' ctl_place_or_test ',')* NAME '=' ctl_place_or_test
ctl_place: '@' STRING+ | NAME
ctl_place_or_test: test | '@' STRING+
#
# the rest is from SNAKES/Python grammar
#
decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE
decorators: decorator+
decorated: decorators (classdef | funcdef)
funcdef: 'def' NAME parameters ['-' '>' test] ':' suite
parameters: '(' [typedargslist] ')'
typedargslist: ((tfpdef ['=' test] ',')*
('*' [tfpdef] (',' tfpdef ['=' test])* [',' '**' tfpdef]
| '**' tfpdef)
| tfpdef ['=' test] (',' tfpdef ['=' test])* [','])
tfpdef: NAME [':' test]
varargslist: ((vfpdef ['=' test] ',')*
('*' [vfpdef] (',' vfpdef ['=' test])* [',' '**' vfpdef]
| '**' vfpdef)
| vfpdef ['=' test] (',' vfpdef ['=' test])* [','])
vfpdef: NAME
stmt: simple_stmt | compound_stmt
simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE
small_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt |
import_stmt | global_stmt | nonlocal_stmt | assert_stmt)
expr_stmt: testlist (augassign (yield_expr|testlist) |
('=' (yield_expr|testlist))*)
augassign: ('+=' | '-=' | '*=' | '/=' | '%=' | '&=' | '|=' | '^=' |
'<<=' | '>>=' | '**=' | '//=')
del_stmt: 'del' exprlist
pass_stmt: 'pass'
flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt
break_stmt: 'break'
continue_stmt: 'continue'
return_stmt: 'return' [testlist]
yield_stmt: yield_expr
raise_stmt: 'raise' [test ['from' test]]
import_stmt: import_name | import_from
import_name: 'import' dotted_as_names
import_from: ('from' (('.' | '...')* dotted_name | ('.' | '...')+)
'import' ('*' | '(' import_as_names ')' | import_as_names))
import_as_name: NAME ['as' NAME]
dotted_as_name: dotted_name ['as' NAME]
import_as_names: import_as_name (',' import_as_name)* [',']
dotted_as_names: dotted_as_name (',' dotted_as_name)*
dotted_name: NAME ('.' NAME)*
global_stmt: 'global' NAME (',' NAME)*
nonlocal_stmt: 'nonlocal' NAME (',' NAME)*
assert_stmt: 'assert' test [',' test]
compound_stmt: (if_stmt | while_stmt | for_stmt | try_stmt | with_stmt
| funcdef | classdef | decorated)
if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite]
while_stmt: 'while' test ':' suite ['else' ':' suite]
for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]
try_stmt: ('try' ':' suite
((except_clause ':' suite)+
['else' ':' suite]
['finally' ':' suite] |
'finally' ':' suite))
with_stmt: 'with' test [ with_var ] ':' suite
with_var: 'as' expr
except_clause: 'except' [test ['as' NAME]]
suite: simple_stmt | NEWLINE INDENT stmt+ DEDENT
test: or_test ['if' or_test 'else' test] | lambdef
test_nocond: or_test | lambdef_nocond
lambdef: 'lambda' [varargslist] ':' test
lambdef_nocond: 'lambda' [varargslist] ':' test_nocond
or_test: and_test ('or' and_test)*
and_test: not_test ('and' not_test)*
not_test: 'not' not_test | comparison
comparison: star_expr (comp_op star_expr)*
comp_op: '<'|'>'|'=='|'>='|'<='|'!='|'<>'|'in'|'not' 'in'|'is'|'is' 'not'
star_expr: ['*'] expr
expr: xor_expr ('|' xor_expr)*
xor_expr: and_expr ('^' and_expr)*
and_expr: shift_expr ('&' shift_expr)*
shift_expr: arith_expr (('<<'|'>>') arith_expr)*
arith_expr: term (('+'|'-') term)*
term: factor (('*'|'/'|'%'|'//') factor)*
factor: ('+'|'-'|'~') factor | power
power: atom trailer* ['**' factor]
atom: ('(' [yield_expr|testlist_comp] ')' |
'[' [testlist_comp] ']' |
'{' [dictorsetmaker] '}' |
NAME | NUMBER | STRING+ | '...' | 'None' | 'True' | 'False')
testlist_comp: test ( comp_for | (',' test)* [','] )
trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
subscriptlist: subscript (',' subscript)* [',']
subscript: test | [test] ':' [test] [sliceop]
sliceop: ':' [test]
exprlist: star_expr (',' star_expr)* [',']
testlist: test (',' test)* [',']
dictorsetmaker: ( (test ':' test (comp_for | (',' test ':' test)* [','])) |
(test (comp_for | (',' test)* [','])) )
classdef: 'class' NAME ['(' [arglist] ')'] ':' suite
arglist: (argument ',')* (argument [',']
|'*' test (',' argument)* [',' '**' test]
|'**' test)
argument: test [comp_for] | test '=' test
comp_iter: comp_for | comp_if
comp_for: 'for' exprlist 'in' or_test [comp_iter]
comp_if: 'if' test_nocond [comp_iter]
yield_expr: 'yield' [testlist]
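The grammar rules above encode operator precedence by layering: ctl_formula delegates to ctl_or_formula, which delegates to ctl_and_formula, then ctl_not_formula, and so on down to atoms, so 'and' binds tighter than 'or' and 'not' tighter than 'and'. A toy recursive-descent parser over whitespace-split words makes the layering visible; this is a hedged sketch with hypothetical names, not the pgen machinery snakes actually uses.

```python
def parse_formula(tokens):
    tree, rest = parse_or(list(tokens))
    assert not rest, "trailing tokens: %r" % rest
    return tree

def parse_or(tokens):
    # ctl_or_formula: ctl_and_formula ('or' ctl_and_formula)*
    left, tokens = parse_and(tokens)
    while tokens and tokens[0] == "or":
        right, tokens = parse_and(tokens[1:])
        left = ("or", left, right)
    return left, tokens

def parse_and(tokens):
    # ctl_and_formula: ctl_not_formula ('and' ctl_not_formula)*
    left, tokens = parse_not(tokens)
    while tokens and tokens[0] == "and":
        right, tokens = parse_not(tokens[1:])
        left = ("and", left, right)
    return left, tokens

def parse_not(tokens):
    # ctl_not_formula: 'not' ctl_not_formula | atom
    if tokens and tokens[0] == "not":
        child, tokens = parse_not(tokens[1:])
        return ("not", child), tokens
    return tokens[0], tokens[1:]

# 'and' binds tighter than 'or', 'not' tighter than 'and':
print(parse_formula("True or False and not True".split()))
# ('or', 'True', ('and', 'False', ('not', 'True')))
```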
"""
>>> testparser(Translator)
"""
import operator, sys
import snakes
from snakes.lang.python.parser import (ParseTree, ParseTestParser,
Translator as PyTranslator,
ParseTree as PyParseTree,
testparser)
from snakes.lang.pgen import ParseError
from snakes.lang.ctlstar.pgen import parser
import snakes.lang.ctlstar.asdl as ast
_symbols = parser.tokenizer.tok_name.copy()
# next statement overrides 'NT_OFFSET' entry with 'single_input'
# (this is desired)
_symbols.update(parser.symbolMap)
def skip (token) :
if token.kind == token.lexer.COMMENT :
words = token.strip().split()
if words[:2] == ["#", "coding="] :
snakes.defaultencoding = words[2]
elif words[:3] == ["#", "-*-", "coding:"] :
snakes.defaultencoding = words[3]
parser.tokenizer.skip_token = skip
class ParseTree (PyParseTree) :
_symbols = _symbols
class Translator (PyTranslator) :
ParseTree = ParseTree
parser = parser
ST = ast
def do_file_input (self, st, ctx) :
"""file_input: (NEWLINE | ctl_atomdef | ctl_propdef)* [ ctl_formula ] NEWLINE* ENDMARKER
-> ast.Spec
<<< atom foo () : return True
... prop bar () : True
... has(@'my place', x)
"Spec(atoms=[Atom(name='foo', args=[], params=[], body=[Return(value=Name(id='True', ctx=Load()))])], properties=[Property(name='bar', args=[], params=[], body=Boolean(val=True))], main=InPlace(data=[Name(id='x', ctx=Load())], place=Place(name=None, place='my place')))"
"""
atoms, props, main = [], [], None
for i, child in enumerate(st) :
if child.symbol == "ctl_atomdef" :
atoms.append(self.do(child, ctx))
elif child.symbol == "ctl_propdef" :
props.append(self.do(child, ctx))
elif child.symbol == "ctl_formula" :
main = self.do(child, ctx)
elif child.symbol in ("NEWLINE", "ENDMARKER") :
pass
else :
raise ParseError(child.text, reason="unexpected token")
return self.ST.Spec(lineno=st.srow, col_offset=st.scol,
atoms=atoms, properties=props, main=main)
def do_ctl_atomdef (self, st, ctx) :
"""ctl_atomdef: 'atom' NAME '(' [ctl_parameters] ')' ':' suite
-> ast.Atom
<<< atom foo () : return True
"Spec(atoms=[Atom(name='foo', args=[], params=[], body=[Return(value=Name(id='True', ctx=Load()))])], properties=[], main=None)"
<<< atom bar () :
... return True
"Spec(atoms=[Atom(name='bar', args=[], params=[], body=[Return(value=Name(id='True', ctx=Load()))])], properties=[], main=None)"
<<< atom egg (p = @'my place', x : int, q : place) :
... return x in p and x in q
"Spec(atoms=[Atom(name='egg', args=[Place(name='p', place='my place')], params=[Parameter(name='x', type='int'), Parameter(name='q', type='place')], body=[Return(value=BoolOp(op=And(), values=[Compare(left=Name(id='x', ctx=Load()), ops=[In()], comparators=[Name(id='p', ctx=Load())]), Compare(left=Name(id='x', ctx=Load()), ops=[In()], comparators=[Name(id='q', ctx=Load())])]))])], properties=[], main=None)"
"""
if len(st) == 7 :
args, params = self.do(st[3], ctx)
return self.ST.Atom(lineno=st.srow, col_offset=st.scol,
name=st[1].text,
args=args,
params=params,
body=self.do(st[-1], ctx))
else :
return self.ST.Atom(lineno=st.srow, col_offset=st.scol,
name=st[1].text,
args=[],
params=[],
body=self.do(st[-1], ctx))
def do_ctl_propdef (self, st, ctx) :
"""ctl_propdef: 'prop' NAME '(' [ctl_parameters] ')' ':' ctl_suite
-> ast.Property
<<< prop foo () : True
"Spec(atoms=[], properties=[Property(name='foo', args=[], params=[], body=Boolean(val=True))], main=None)"
<<< prop bar (p = @'my place', x : int, q : place) : True
"Spec(atoms=[], properties=[Property(name='bar', args=[Place(name='p', place='my place')], params=[Parameter(name='x', type='int'), Parameter(name='q', type='place')], body=Boolean(val=True))], main=None)"
<<< prop egg (p = @'my place', x : int, q : place) : True
"Spec(atoms=[], properties=[Property(name='egg', args=[Place(name='p', place='my place')], params=[Parameter(name='x', type='int'), Parameter(name='q', type='place')], body=Boolean(val=True))], main=None)"
"""
if len(st) == 7 :
args, params = self.do(st[3], ctx)
return self.ST.Property(lineno=st.srow, col_offset=st.scol,
name=st[1].text,
args=args,
params=params,
body=self.do(st[-1], ctx))
else :
return self.ST.Property(lineno=st.srow, col_offset=st.scol,
name=st[1].text,
args=[],
params=[],
body=self.do(st[-1], ctx))
def do_ctl_suite (self, st, ctx) :
"""ctl_suite: ( ctl_formula NEWLINE | NEWLINE INDENT ctl_formula DEDENT )
-> ast.form
<<< prop foo () : True
"Spec(atoms=[], properties=[Property(name='foo', args=[], params=[], body=Boolean(val=True))], main=None)"
<<< prop bar () :
... True
"Spec(atoms=[], properties=[Property(name='bar', args=[], params=[], body=Boolean(val=True))], main=None)"
"""
if len(st) == 2 :
return self.do(st[0], ctx)
else :
return self.do(st[2], ctx)
def do_ctl_parameters (self, st, ctx) :
"""ctl_parameters: (ctl_param ',')* ctl_param
-> [ast.ctlarg], [ast.ctlparam]
<<< prop foo (p = @'my place', q : place, x : int, r : place) : True
"Spec(atoms=[], properties=[Property(name='foo', args=[Place(name='p', place='my place')], params=[Parameter(name='q', type='place'), Parameter(name='x', type='int'), Parameter(name='r', type='place')], body=Boolean(val=True))], main=None)"
<<< prop bar (x : int) : True
"Spec(atoms=[], properties=[Property(name='bar', args=[], params=[Parameter(name='x', type='int')], body=Boolean(val=True))], main=None)"
<<< prop egg (p = @'my place') : True
"Spec(atoms=[], properties=[Property(name='egg', args=[Place(name='p', place='my place')], params=[], body=Boolean(val=True))], main=None)"
<<< prop spam (p = @'my place', q : int, p : place) : True
Traceback (most recent call last):
...
ParseError: ... duplicate parameter 'p'
"""
args, params = [], []
seen = set()
for child in st[::2] :
node = self.do(child, ctx)
if node.name in seen :
raise ParseError(child, reason="duplicate parameter %r"
% node.name)
seen.add(node.name)
if isinstance(node, self.ST.Place) :
args.append(node)
else :
params.append(node)
return args, params
def do_ctl_param (self, st, ctx) :
"""ctl_param: NAME ( '=' '@' STRING+ | ':' NAME )
-> ast.ctlarg|ast.ctlparam
<<< prop foo (p = @'my place', q : place, x : int, r : place) : True
"Spec(atoms=[], properties=[Property(name='foo', args=[Place(name='p', place='my place')], params=[Parameter(name='q', type='place'), Parameter(name='x', type='int'), Parameter(name='r', type='place')], body=Boolean(val=True))], main=None)"
<<< prop bar (x : int) : True
"Spec(atoms=[], properties=[Property(name='bar', args=[], params=[Parameter(name='x', type='int')], body=Boolean(val=True))], main=None)"
<<< prop egg (p = @'my place') : True
"Spec(atoms=[], properties=[Property(name='egg', args=[Place(name='p', place='my place')], params=[], body=Boolean(val=True))], main=None)"
"""
if st[1].text == "=" :
return self.ST.Place(lineno=st.srow, col_offset=st.scol,
name=st[0].text,
place="".join(self.ST.literal_eval(c.text)
for c in st[3:]))
else :
return self.ST.Parameter(lineno=st.srow, col_offset=st.scol,
name=st[0].text,
type=st[2].text)
def do_ctl_arguments (self, st, ctx) :
"""ctl_arguments: (NAME '=' ctl_place_or_test ',')* NAME '=' ctl_place_or_test
-> [(str, ast.expr)]
<<< foo(x=3, p='my place')
"Spec(atoms=[], properties=[], main=Instance(name='foo', args=[arg(arg='x', annotation=Num(n=3)), arg(arg='p', annotation=Str(s='my place'))]))"
<<< foo(x=3)
"Spec(atoms=[], properties=[], main=Instance(name='foo', args=[arg(arg='x', annotation=Num(n=3))]))"
<<< foo(p='my place')
"Spec(atoms=[], properties=[], main=Instance(name='foo', args=[arg(arg='p', annotation=Str(s='my place'))]))"
<<< foo(x=3, p=@'my place')
"Spec(atoms=[], properties=[], main=Instance(name='foo', args=[arg(arg='x', annotation=Num(n=3)), arg(arg='p', annotation=Place(name=None, place='my place'))]))"
<<< foo(x=3)
"Spec(atoms=[], properties=[], main=Instance(name='foo', args=[arg(arg='x', annotation=Num(n=3))]))"
<<< foo(p=@'my place')
"Spec(atoms=[], properties=[], main=Instance(name='foo', args=[arg(arg='p', annotation=Place(name=None, place='my place'))]))"
"""
return [self.ST.arg(name.text, self.do(value, ctx))
for name, value in zip(st[::4], st[2::4])]
def do_ctl_place_or_test (self, st, ctx) :
"""ctl_place_or_test: test | '@' STRING+
-> ast.expr | ast.Place
<<< Foo(s='string', p=@'place', q=place_also)
"Spec(atoms=[], properties=[], main=Instance(name='Foo', args=[arg(arg='s', annotation=Str(s='string')), arg(arg='p', annotation=Place(name=None, place='place')), arg(arg='q', annotation=Name(id='place_also', ctx=Load()))]))"
"""
if st[0].symbol == "test" :
return self.do(st[0], ctx)
else :
return self.do_ctl_place(st, ctx)
def do_ctl_formula (self, st, ctx) :
"""ctl_formula: ctl_or_formula [ ctl_connector ctl_or_formula ]
-> ast.form
<<< True
'Spec(atoms=[], properties=[], main=Boolean(val=True))'
<<< False => True
'Spec(atoms=[], properties=[], main=CtlBinary(op=Imply(), left=Boolean(val=False), right=Boolean(val=True)))'
<<< False <=> False
'Spec(atoms=[], properties=[], main=CtlBinary(op=Iff(), left=Boolean(val=False), right=Boolean(val=False)))'
"""
if len(st) == 1 :
return self.do(st[0], ctx)
else :
return self.ST.CtlBinary(lineno=st.srow, col_offset=st.scol,
op=self.do_ctl_connector(st[1], ctx),
left=self.do(st[0], ctx),
right=self.do(st[2], ctx))
def do_ctl_connector (self, st, ctx) :
"""ctl_connector: ( '=' '>' | '<=' '>' )
-> ast.ctlbinary
<<< False => True
'Spec(atoms=[], properties=[], main=CtlBinary(op=Imply(), left=Boolean(val=False), right=Boolean(val=True)))'
<<< False <=> False
'Spec(atoms=[], properties=[], main=CtlBinary(op=Iff(), left=Boolean(val=False), right=Boolean(val=False)))'
"""
op = "".join(child.text for child in st)
return self._ctl_binary_op[op](lineno=st.srow,
col_offset=st.scol)
def do_ctl_or_formula (self, st, ctx) :
"""ctl_or_formula: ctl_and_formula ('or' ctl_and_formula)*
-> ast.form
<<< True or False
'Spec(atoms=[], properties=[], main=CtlBinary(op=Or(), left=Boolean(val=True), right=Boolean(val=False)))'
<<< True or False or True
'Spec(atoms=[], properties=[], main=CtlBinary(op=Or(), left=CtlBinary(op=Or(), left=Boolean(val=True), right=Boolean(val=False)), right=Boolean(val=True)))'
<<< True or False or False and True and False
'Spec(atoms=[], properties=[], main=CtlBinary(op=Or(), left=CtlBinary(op=Or(), left=Boolean(val=True), right=Boolean(val=False)), right=CtlBinary(op=And(), left=CtlBinary(op=And(), left=Boolean(val=False), right=Boolean(val=True)), right=Boolean(val=False))))'
"""
if len(st) == 1 :
return self.do(st[0], ctx)
else :
values = [self.do(child, ctx) for child in st[::2]]
ops = [self._ctl_binary_op[child.text](lineno=child.srow,
col_offset=child.scol)
for child in st[1::2]]
while len(values) > 1 :
left = values.pop(0)
right = values.pop(0)
operator = ops.pop(0)
values.insert(0, self.ST.CtlBinary(lineno=st.srow,
col_offset=st.scol,
left=left,
op=operator,
right=right))
return values[0]
def do_ctl_and_formula (self, st, ctx) :
"""ctl_and_formula: ctl_not_formula ('and' ctl_not_formula)*
-> ast.form
<<< True and False
'Spec(atoms=[], properties=[], main=CtlBinary(op=And(), left=Boolean(val=True), right=Boolean(val=False)))'
"""
return self.do_ctl_or_formula(st, ctx)
def do_ctl_not_formula (self, st, ctx) :
"""ctl_not_formula: ('not' ctl_not_formula | ctl_binary_formula)
-> ast.form
<<< True
'Spec(atoms=[], properties=[], main=Boolean(val=True))'
<<< not True
'Spec(atoms=[], properties=[], main=CtlUnary(op=Not(), child=Boolean(val=True)))'
<<< not not True
'Spec(atoms=[], properties=[], main=CtlUnary(op=Not(), child=CtlUnary(op=Not(), child=Boolean(val=True))))'
"""
if len(st) == 1 :
return self.do(st[0], ctx)
else :
return self.ST.CtlUnary(lineno=st.srow, col_offset=st.scol,
op=self.ST.Not(lineno=st[0].srow,
col_offset=st[0].scol),
child=self.do(st[1], ctx))
def do_ctl_binary_formula (self, st, ctx) :
"""ctl_binary_formula: ctl_unary_formula [ ctl_binary_op ctl_unary_formula ]
-> ast.form
<<< True U False
'Spec(atoms=[], properties=[], main=CtlBinary(op=Until(), left=Boolean(val=True), right=Boolean(val=False)))'
<<< True W False
'Spec(atoms=[], properties=[], main=CtlBinary(op=WeakUntil(), left=Boolean(val=True), right=Boolean(val=False)))'
<<< True R False
'Spec(atoms=[], properties=[], main=CtlBinary(op=Release(), left=Boolean(val=True), right=Boolean(val=False)))'
"""
if len(st) == 1 :
return self.do(st[0], ctx)
else :
return self.ST.CtlBinary(lineno=st.srow, col_offset=st.scol,
op=self.do(st[1], ctx),
left=self.do(st[0], ctx),
right=self.do(st[2], ctx))
def do_ctl_unary_formula (self, st, ctx) :
"""ctl_unary_formula: [ ctl_unary_op ] (ctl_atom_formula | '(' ctl_formula ')')
-> ast.form
<<< True
'Spec(atoms=[], properties=[], main=Boolean(val=True))'
<<< X True
'Spec(atoms=[], properties=[], main=CtlUnary(op=Next(), child=Boolean(val=True)))'
<<< A True
'Spec(atoms=[], properties=[], main=CtlUnary(op=All(), child=Boolean(val=True)))'
<<< G True
'Spec(atoms=[], properties=[], main=CtlUnary(op=Globally(), child=Boolean(val=True)))'
<<< F True
'Spec(atoms=[], properties=[], main=CtlUnary(op=Future(), child=Boolean(val=True)))'
<<< E True
'Spec(atoms=[], properties=[], main=CtlUnary(op=Exists(), child=Boolean(val=True)))'
<<< (True or False)
'Spec(atoms=[], properties=[], main=CtlBinary(op=Or(), left=Boolean(val=True), right=Boolean(val=False)))'
<<< X (True or False)
'Spec(atoms=[], properties=[], main=CtlUnary(op=Next(), child=CtlBinary(op=Or(), left=Boolean(val=True), right=Boolean(val=False))))'
"""
if len(st) == 1 :
return self.do(st[0], ctx)
elif len(st) == 2 :
return self.ST.CtlUnary(lineno=st.srow, col_offset=st.scol,
op=self.do(st[0], ctx),
child=self.do(st[1], ctx))
elif len(st) == 3 :
return self.do(st[1], ctx)
else :
return self.ST.CtlUnary(lineno=st.srow, col_offset=st.scol,
op=self.do(st[0], ctx),
child=self.do(st[2], ctx))
_ctl_unary_op = {"A" : ast.All,
"E" : ast.Exists,
"X" : ast.Next,
"F" : ast.Future,
"G" : ast.Globally}
def do_ctl_unary_op (self, st, ctx) :
"""ctl_unary_op: ('A' | 'G' | 'F' | 'E' | 'X')
-> ast.ctlunary
<<< X True
'Spec(atoms=[], properties=[], main=CtlUnary(op=Next(), child=Boolean(val=True)))'
<<< A True
'Spec(atoms=[], properties=[], main=CtlUnary(op=All(), child=Boolean(val=True)))'
<<< G True
'Spec(atoms=[], properties=[], main=CtlUnary(op=Globally(), child=Boolean(val=True)))'
<<< F True
'Spec(atoms=[], properties=[], main=CtlUnary(op=Future(), child=Boolean(val=True)))'
<<< E True
'Spec(atoms=[], properties=[], main=CtlUnary(op=Exists(), child=Boolean(val=True)))'
"""
return self._ctl_unary_op[st[0].text](lineno=st.srow,
col_offset=st.scol)
_ctl_binary_op = {"=>" : ast.Imply,
"<=>" : ast.Iff,
"and" : ast.And,
"or" : ast.Or,
"U" : ast.Until,
"W" : ast.WeakUntil,
"R" : ast.Release}
def do_ctl_binary_op (self, st, ctx) :
"""ctl_binary_op: ('R' | 'U' | 'W')
-> ast.ctlbinary
<<< True R False
'Spec(atoms=[], properties=[], main=CtlBinary(op=Release(), left=Boolean(val=True), right=Boolean(val=False)))'
<<< True U False
'Spec(atoms=[], properties=[], main=CtlBinary(op=Until(), left=Boolean(val=True), right=Boolean(val=False)))'
<<< True W False
'Spec(atoms=[], properties=[], main=CtlBinary(op=WeakUntil(), left=Boolean(val=True), right=Boolean(val=False)))'
"""
return self._ctl_binary_op[st[0].text](lineno=st[0].srow,
col_offset=st[0].scol)
def do_ctl_atom_formula (self, st, ctx) :
"""ctl_atom_formula: ( 'empty' '(' ctl_place ')'
| 'marked' '(' ctl_place ')'
| 'has' ['not'] '(' ctl_place ',' test (',' test)* ')'
| 'deadlock' | 'True' | 'False'
| NAME '(' ctl_arguments ')'
| 'forall' [ 'distinct' ] NAME (',' NAME)*
'in' ctl_place '(' ctl_atom_formula ')'
| 'exists' [ 'distinct' ] NAME (',' NAME)*
'in' ctl_place '(' ctl_atom_formula ')' )
-> ast.atom
<<< empty(p)
"Spec(atoms=[], properties=[], main=EmptyPlace(place=Parameter(name='p', type='place')))"
<<< empty(@'my' 'place')
"Spec(atoms=[], properties=[], main=EmptyPlace(place=Place(name=None, place='myplace')))"
<<< marked(p)
"Spec(atoms=[], properties=[], main=MarkedPlace(place=Parameter(name='p', type='place')))"
<<< marked(@'my place')
"Spec(atoms=[], properties=[], main=MarkedPlace(place=Place(name=None, place='my place')))"
<<< has(p, x)
"Spec(atoms=[], properties=[], main=InPlace(data=[Name(id='x', ctx=Load())], place=Parameter(name='p', type='place')))"
<<< has not(p, x, y)
"Spec(atoms=[], properties=[], main=NotInPlace(data=[Name(id='x', ctx=Load()), Name(id='y', ctx=Load())], place=Parameter(name='p', type='place')))"
<<< deadlock
'Spec(atoms=[], properties=[], main=Deadlock())'
<<< True
'Spec(atoms=[], properties=[], main=Boolean(val=True))'
<<< False
'Spec(atoms=[], properties=[], main=Boolean(val=False))'
<<< myprop(x=1, p='my place')
"Spec(atoms=[], properties=[], main=Instance(name='myprop', args=[arg(arg='x', annotation=Num(n=1)), arg(arg='p', annotation=Str(s='my place'))]))"
<<< forall x in p (has(q, y))
"Spec(atoms=[], properties=[], main=Quantifier(op=All(), vars=['x'], place=Parameter(name='p', type='place'), child=InPlace(data=[Name(id='y', ctx=Load())], place=Parameter(name='q', type='place')), distinct=False))"
<<< forall x, y in p (has(q, x+y, x-y))
"Spec(atoms=[], properties=[], main=Quantifier(op=All(), vars=['x', 'y'], place=Parameter(name='p', type='place'), child=InPlace(data=[BinOp(left=Name(id='x', ctx=Load()), op=Add(), right=Name(id='y', ctx=Load())), BinOp(left=Name(id='x', ctx=Load()), op=Sub(), right=Name(id='y', ctx=Load()))], place=Parameter(name='q', type='place')), distinct=False))"
<<< forall distinct x, y in p (has(q, x+y, x-y))
"Spec(atoms=[], properties=[], main=Quantifier(op=All(), vars=['x', 'y'], place=Parameter(name='p', type='place'), child=InPlace(data=[BinOp(left=Name(id='x', ctx=Load()), op=Add(), right=Name(id='y', ctx=Load())), BinOp(left=Name(id='x', ctx=Load()), op=Sub(), right=Name(id='y', ctx=Load()))], place=Parameter(name='q', type='place')), distinct=True))"
<<< exists x in p (has(q, y))
"Spec(atoms=[], properties=[], main=Quantifier(op=Exists(), vars=['x'], place=Parameter(name='p', type='place'), child=InPlace(data=[Name(id='y', ctx=Load())], place=Parameter(name='q', type='place')), distinct=False))"
<<< exists x, y in p (has(q, x+y, x-y))
"Spec(atoms=[], properties=[], main=Quantifier(op=Exists(), vars=['x', 'y'], place=Parameter(name='p', type='place'), child=InPlace(data=[BinOp(left=Name(id='x', ctx=Load()), op=Add(), right=Name(id='y', ctx=Load())), BinOp(left=Name(id='x', ctx=Load()), op=Sub(), right=Name(id='y', ctx=Load()))], place=Parameter(name='q', type='place')), distinct=False))"
<<< exists distinct x, y in p (has(q, x+y, x-y))
"Spec(atoms=[], properties=[], main=Quantifier(op=Exists(), vars=['x', 'y'], place=Parameter(name='p', type='place'), child=InPlace(data=[BinOp(left=Name(id='x', ctx=Load()), op=Add(), right=Name(id='y', ctx=Load())), BinOp(left=Name(id='x', ctx=Load()), op=Sub(), right=Name(id='y', ctx=Load()))], place=Parameter(name='q', type='place')), distinct=True))"
"""
if st[0].text in ("True", "False") :
return self.ST.Boolean(lineno=st.srow, col_offset=st.scol,
val=(st[0].text == "True"))
elif st[0].text == "deadlock" :
return self.ST.Deadlock(lineno=st.srow, col_offset=st.scol)
elif st[0].text in ("empty", "marked") :
node = (self.ST.EmptyPlace if st[0].text == "empty"
else self.ST.MarkedPlace)
return node(lineno=st.srow, col_offset=st.scol,
place=self.do(st[2], ctx))
elif st[0].text == "has" :
if st[1].text == "not" :
node = self.ST.NotInPlace
start = 3
else :
node = self.ST.InPlace
start = 2
place = self.do(st[start], ctx)
if (isinstance(place, self.ST.Parameter)
and place.type != "place") :
raise ParseError(st[-4], reason="'place' parameter expected")
return node(lineno=st.srow, col_offset=st.scol,
data=[self.do(c, ctx) for c in st[start+2::2]],
place=place)
elif st[0].text in ("forall", "exists") :
op = (self.ST.All if st[0].text == "forall"
else self.ST.Exists)
distinct = st[1].text == "distinct"
start = 2 if distinct else 1
return self.ST.Quantifier(lineno=st.srow, col_offset=st.scol,
op=op(),
vars=[c.text for c in st[start:-5:2]],
place=self.do(st[-4], ctx),
child=self.do(st[-2], ctx),
distinct=distinct)
else :
return self.ST.Instance(lineno=st.srow, col_offset=st.scol,
name=st[0].text,
args=self.do(st[2], ctx))
def do_ctl_place (self, st, ctx) :
"""ctl_place: '@' STRING+ | NAME
-> ast.ctlarg
<<< has(@'my place', x)
"Spec(atoms=[], properties=[], main=InPlace(data=[Name(id='x', ctx=Load())], place=Place(name=None, place='my place')))"
<<< has(@'another' 'place', y)
"Spec(atoms=[], properties=[], main=InPlace(data=[Name(id='y', ctx=Load())], place=Place(name=None, place='anotherplace')))"
<<< has(@'my place', x, y)
"Spec(atoms=[], properties=[], main=InPlace(data=[Name(id='x', ctx=Load()), Name(id='y', ctx=Load())], place=Place(name=None, place='my place')))"
<<< has not(@'my place', x)
"Spec(atoms=[], properties=[], main=NotInPlace(data=[Name(id='x', ctx=Load())], place=Place(name=None, place='my place')))"
<<< has not(@'my place', x, y)
"Spec(atoms=[], properties=[], main=NotInPlace(data=[Name(id='x', ctx=Load()), Name(id='y', ctx=Load())], place=Place(name=None, place='my place')))"
"""
if st[0].symbol == "NAME" :
return self.ST.Parameter(lineno=st.srow, col_offset=st.scol,
name=st[0].text,
type="place")
else :
return self.ST.Place(lineno=st.srow, col_offset=st.scol,
name=None,
place="".join(self.ST.literal_eval(c.text)
for c in st[1:]))
parse = Translator.parse
if __name__ == "__main__" :
testparser(Translator)
"""A Python implementation of CPython's parser
This module is largely based on Jonathan Riehl's PyPgen included in
the Basil framework (http://code.google.com/p/basil).
"""
from snakes import SnakesError
from snakes.lang import ast
import tokenize, string, pprint, warnings, inspect, os.path, sys, io
from snakes.compat import *
def warn (message) :
"""Issue a warning message.
"""
warnings.warn(message, stacklevel=2)
class Token (str) :
"""A token from the lexer.
Behaves as a string that is either the token value (if not empty),
or the token name. Additional attributes allow to extract various
information:
- self.kind: token number (like tokenize.ENDMARKER)
also available as int(self)
- self.text: token text
- self.srow: start row
- self.scol: start column
- self.erow: end row
- self.ecol: end column
- self.line: full line of text from which the token comes
- self.name: token name (ie, tokenize.tok_name[self.kind])
- self.lexer: the Tokenizer instance that produced this token
- self.filename: input file from which the token comes (or
'<string>' if it is not available)
"""
def __new__ (cls, token, lexer) :
"""Create a new instance.
__new__ is used instead of __init__ because str is an
immutable type, so __init__ could not assign the string
content. For more information see:
http://docs.python.org/reference/datamodel.html#object.__new__
"""
kind = token[0]
text = token[1]
name = lexer.tok_name[kind]
self = str.__new__(cls, text or name)
self.kind = kind
self.text = text
self.srow, self.scol = token[2]
self.erow, self.ecol = token[3]
self.line = token[4]
self.name = name
self.lexer = lexer
try :
self.filename = lexer.infile.name
except AttributeError :
self.filename = "<string>"
return self
def __int__ (self) :
"""Coercion to int (return self.kind).
"""
return self.kind
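The str-subclass trick documented above can be sketched standalone (the class and attribute names below are illustrative, not the module's):

```python
# Minimal sketch of the pattern used by Token: str is immutable, so the
# string content and the extra attributes are attached in __new__, not
# __init__.
class Tok(str):
    def __new__(cls, kind, text, name):
        # fall back to the token name when the text is empty (e.g. ENDMARKER)
        self = str.__new__(cls, text or name)
        self.kind = kind
        self.text = text
        self.name = name
        return self

    def __int__(self):
        # coercion to int returns the token kind, as in Token.__int__
        return self.kind

colon = Tok(11, ":", "COLON")
end = Tok(0, "", "ENDMARKER")
print(colon == ":", int(colon), str(end))
```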
class Location (str) :
"""A position in a parsed file
Used to aggregate the positions of all the tokens in a parse
(sub)tree. The following attributes are available:
- self.srow: start row
- self.scol: start column
- self.erow: end row
- self.ecol: end column
- self.filename: input file from which the token comes (or
'<string>' if it is not available)
"""
def __new__ (cls, first, last) :
"""Create a new instance
Expected arguments:
- first: the first Token or Location instance in the region
- last: the last one
"""
self = str.__new__(cls, "%s[%s:%s-%s:%s]"
% (first.filename, first.srow, first.scol,
last.erow, last.ecol))
self.srow, self.scol = first.srow, first.scol
self.erow, self.ecol = last.erow, last.ecol
self.filename = first.filename
self.lexer = first.lexer
return self
class ParseError (SnakesError) :
"""Exception raised when parsing fails.
It's better not to use SyntaxError because that would not
allow to distinguish a syntax error in the parsed text from a
syntax error in the parser itself.
"""
def __init__ (self, token, expected=None, reason=None) :
"""Initialize a new instance.
Expected arguments are:
- token: the erroneous token (a Token instance)
- expected: either a Token instance or a token kind (like
tokenize.NAME) to indicate what was expected instead
- reason: an optional message that overrides the computed one
self.token = token
if expected is not None :
expected = int(expected)
self.expected = expected
if token is None :
pos = ""
else :
pos = "%s[%s:%s]: " % (token.filename, token.srow, token.scol)
if reason is not None :
msg = reason
elif self.expected is not None :
msg = ("expected %s but found %r" %
(token.lexer.tok_name[expected], token))
else :
msg = "unexpected token %r" % token
SnakesError.__init__(self, pos + msg)
class Tokenizer (object) :
"""A simple lexical analyser based on Python's tokenize module.
The differences with tokenize module are:
- new simple tokens may be added (just strings, no regexps)
- Python's tokens may be excluded
- tokens may be automatically skipped (removed from the output)
- tokenize.OP kind is refined (eg, ':' gets kind tokenize.COLON)
This class replaces two PyPgen elements:
- module basil.lang.python.TokenUtils
- module basil.lang.python.StdTokenizer
"""
_pyopmap = {
'(' : tokenize.LPAR,
')' : tokenize.RPAR,
'[' : tokenize.LSQB,
']' : tokenize.RSQB,
':' : tokenize.COLON,
',' : tokenize.COMMA,
';' : tokenize.SEMI,
'+' : tokenize.PLUS,
'+=' : tokenize.PLUSEQUAL,
'-' : tokenize.MINUS,
'-=' : tokenize.MINEQUAL,
'*' : tokenize.STAR,
'**' : tokenize.DOUBLESTAR,
'**=' : tokenize.DOUBLESTAREQUAL,
'*=' : tokenize.STAREQUAL,
'/' : tokenize.SLASH,
'//' : tokenize.DOUBLESLASH,
'//=' : tokenize.DOUBLESLASHEQUAL,
'/=' : tokenize.SLASHEQUAL,
'|' : tokenize.VBAR,
'|=' : tokenize.VBAREQUAL,
'&' : tokenize.AMPER,
'&=' : tokenize.AMPEREQUAL,
'<' : tokenize.LESS,
'<=' : tokenize.LESSEQUAL,
'<<' : tokenize.LEFTSHIFT,
'<<=' : tokenize.LEFTSHIFTEQUAL,
'>' : tokenize.GREATER,
'>=' : tokenize.GREATEREQUAL,
'>>' : tokenize.RIGHTSHIFT,
'>>=' : tokenize.RIGHTSHIFTEQUAL,
'=' : tokenize.EQUAL,
'==' : tokenize.EQEQUAL,
'.' : tokenize.DOT,
'%' : tokenize.PERCENT,
'%=' : tokenize.PERCENTEQUAL,
'{' : tokenize.LBRACE,
'}' : tokenize.RBRACE,
'^' : tokenize.CIRCUMFLEX,
'^=' : tokenize.CIRCUMFLEXEQUAL,
'~' : tokenize.TILDE,
'!=' : tokenize.NOTEQUAL,
'<>' : tokenize.NOTEQUAL,
'@' : tokenize.AT
}
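The effect of this table can be seen with plain tokenize: Python reports every operator with kind OP, and the map refines it. A sketch using a small subset of the mapping (not the module's code):

```python
import io
import tokenize

# a small subset of _pyopmap, applied the way Tokenizer.tokenize does
opmap = {':': tokenize.COLON, ',': tokenize.COMMA, '+': tokenize.PLUS}

kinds = []
for tok in tokenize.generate_tokens(io.StringIO("a + b, c").readline):
    kind = tok.type
    if kind == tokenize.OP:
        kind = opmap[tok.string]          # refine OP into PLUS, COMMA, ...
    if kind not in (tokenize.NEWLINE, tokenize.ENDMARKER):
        kinds.append((tokenize.tok_name[kind], tok.string))
print(kinds)
```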
def __init__ (self, python=True, opmap={}, skip=None, **extra) :
"""Initialize a new instance.
Expected arguments are:
- python: a bool to indicate whether to include or not
Python's tokens (default to True)
- opmap: a dict to map literal tokens (given as '...' in the
grammar) to token kinds (defaults to {}). This parameter is
useful only to redefine Python's mapping
- skip: a collection of tokens that the tokenizer will
automatically skip (defaults to [COMMENT, NL])
- additional keyword arguments define new tokens, for
instance, providing
DOLLAR='$'
defines a new token called 'DOLLAR' (its kind is computed
automatically)
An instance of Tokenizer has the following attributes:
- self.opmap: a dict mapping operators token literals to the
corresponding kind, for instance, ':' is mapped to
tokenize.COLON (this can be overridden using argument
opmap)
- self.tok_name: a replacement of tokenize.tok_name that also
include the user-defined tokens
- for each token called FOO (including user-defined ones), an
attribute self.FOO holds the corresponding kind
"""
self._python = python
self._opmap = opmap.copy()
if python :
self.opmap = self._pyopmap.copy()
self.opmap.update(opmap)
else :
self.opmap = opmap.copy()
self.tok_name = {}
self._extra = {}
if python :
for kind, name in tokenize.tok_name.items() :
self.tok_name[kind] = name
setattr(self, name, kind)
if not hasattr(self, "NT_OFFSET") :
self.NT_OFFSET = 256
last = max(n for n in self.tok_name if n != self.NT_OFFSET)
for shift, (name, txt) in enumerate(sorted(extra.items())) :
#WARNING: sorted above is required to guarantee that extra
# tokens always get the same number (dict order is not
# guaranteed)
kind = last + 1 + shift
if kind >= self.NT_OFFSET :
raise TypeError("too many new tokens")
self.tok_name[kind] = name
setattr(self, name, kind)
self._extra[txt] = kind
self.opmap.update(self._extra)
if skip is None :
skip = [self.COMMENT, self.NL]
self._skip = set(skip)
def __repr__ (self) :
"""Encodes an instance as Python source code.
Non-default arguments provided to the constructor are included
so that exactly the same Tokenizer instance can be recovered
from the returned source code.
>>> print(repr(Tokenizer()))
Tokenizer()
>>> print(repr(Tokenizer(DOLLAR='$')))
Tokenizer(DOLLAR='$')
>>> print(repr(Tokenizer(skip=[], DOLLAR='$')))
Tokenizer(skip=[], DOLLAR='$')
"""
args = []
if not self._python :
args.append("python=%s" % self._python)
if self._opmap :
args.append("opmap=%r" % self._opmap)
if self._skip != set([self.COMMENT, self.NL]) :
args.append("skip=%r" % list(self._skip))
args.extend("%s=%r" % (self.tok_name[kind], txt) for txt, kind
in self._extra.items())
return "%s(%s)" % (self.__class__.__name__, ", ".join(args))
def tokenize (self, stream) :
"""Break an input stream into tokens.
Expected argument is:
- stream: a file-like object (with a method readline)
Return a generator of Token instances, ParseError is raised
whenever an erroneous token is encountered.
This is basically the same as tokenize.generate_tokens but:
- the appropriate tokens are skipped
- OP kind is converted according to self.opmap
- user-defined tokens are handled
During the iteration, two more attributes can be used:
- self.last: last recognized token (ie, last yielded)
- self.infile: the input stream passed to method tokenize
"""
self.infile = stream
self.last = None
self.lines = []
def readline () :
self.lines.append(stream.readline())
return self.lines[-1]
err = self.ERRORTOKEN
for token in tokenize.generate_tokens(readline) :
if token[0] == err :
try :
token = (self._extra[token[1]],) + token[1:]
except KeyError :
raise ParseError(Token(token, self))
elif token[0] in self._skip :
try:
self.skip_token(Token(token, self))
except Exception :
pass
continue
elif token[0] == self.OP :
token = (self.opmap[token[1]],) + token[1:]
self.last = Token(token, self)
yield self.last
def skip_token (self, token) :
pass
try :
Tokenizer._pyopmap['`'] = tokenize.BACKQUOTE
except AttributeError :
pass
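The skip mechanism above can be sketched with plain tokenize (COMMENT and NL are the default skips; this is a sketch, not the module's code):

```python
import io
import tokenize

# drop COMMENT and NL tokens, as Tokenizer does by default
skip = {tokenize.COMMENT, tokenize.NL}
source = "x = 1  # a comment\n\ny = 2\n"
kept = [tokenize.tok_name[t.type]
        for t in tokenize.generate_tokens(io.StringIO(source).readline)
        if t.type not in skip]
print(kept)
```

The comment and the blank line disappear from the stream while the significant NEWLINE tokens are preserved.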
class PgenParser (object) :
"""A parser for pgen files.
The following grammar is used:
mstart : ( rule | NEWLINE | newtok )* ENDMARKER
newtok : '$' NAME STRING NEWLINE
rule : NAME COLON rhs NEWLINE
rhs : alt ( VBAR alt )*
alt : item+
item : LSQB rhs RSQB | atom ( STAR | PLUS )?
atom : LPAR rhs RPAR | NAME | STRING
With respect to PyPgen, an additional rule 'newtok' has been added
to allow for user-defined tokens.
This class is adapted from module basil.parsing.PgenParser; it has
attributes MSTART, ..., providing the symbol numbers for the
corresponding rules.
"""
MSTART = 256
RULE = 257
RHS = 258
ALT = 259
ITEM = 260
ATOM = 261
NEWTOK = 262
def __init__ (self) :
self.lexer = Tokenizer(NEWTOK='$')
def expect (self, expected, found) :
if expected != found.kind :
raise ParseError(found, expected=expected)
@classmethod
def parse (cls, filename) :
"""Parse a pgen file.
Expected argument is:
- filename: path of pgen file to be parsed
Return a 2-tuple (G, T) where:
- G is the grammar's syntax tree
- T is a Tokenizer instance to be used with the parser
generated from this grammar (ie, including all the required
user-defined tokens)
This is basically the only method that is needed:
>>> grammar, tokenizer = PgenParser.parse('myfile.pgen')
"""
return cls().parse_file(filename)
def parse_file (self, filename) :
"""Parse a pgen file.
Expected argument is:
- filename: path of pgen file to be parsed
Return a 2-tuple (g, t):
- g: is the grammar's syntax tree
- t: is a Tokenizer instance to be used with the parser
generated from this grammar (ie, including all the required
user-defined tokens)
Recognize mstart : ( rule | NEWLINE | newtok )* ENDMARKER
Like in PyPgen, the parser is recursive descent: each rule
'X' being recognized by a dedicated method 'handleX' (except
for 'mstart'). Each such method expects a current token (or
None if it has to be fetched from the tokenizer) and returns a
2-tuple (R, T) where:
- R is the resulting syntax tree
- T is the new current token (or None)
"""
self.infile = open(filename)
self.tokens = self.lexer.tokenize(self.infile)
extra = {}
children = []
current = next(self.tokens)
while current.kind != self.lexer.ENDMARKER :
if current.kind == self.lexer.NEWLINE :
children.append((current, []))
current = None
elif current.kind == self.lexer.NEWTOK :
name, text = self.handleNewtok(current)
current = None
extra[name] = text
else :
ruleResult, current = self.handleRule(current)
children.append(ruleResult)
if current is None :
current = next(self.tokens)
children.append((current, []))
return (self.MSTART, children), Tokenizer(**extra)
def handleNewtok (self, current=None) :
"""Recognize newtok : '$' NAME STRING NEWLINE
Unlike the other 'handleX' methods, this one does not return a
syntax tree because it implements a parsing directive.
Instead, it returns a 2-tuple (N, S) where:
- N is the user-defined token name
- S is the token string value
"""
if current is None :
current = next(self.tokens)
self.expect(self.lexer.NEWTOK, current)
name = next(self.tokens)
self.expect(self.lexer.NAME, name)
text = next(self.tokens)
self.expect(self.lexer.STRING, text)
nl = next(self.tokens)
self.expect(self.lexer.NEWLINE, nl)
return name, compile(text, "<string>", "eval",
ast.PyCF_ONLY_AST).body.s
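The compile(..., PyCF_ONLY_AST) trick above extracts the value of a STRING token; the same result can be obtained with ast.literal_eval (a sketch, not the module's code):

```python
import ast

# STRING tokens as they may appear after a '$ NAME' directive in a pgen
# file; literal_eval recovers the Python string value from the token text
for text, value in [("'$'", "$"), ('"::="', "::=")]:
    assert ast.literal_eval(text) == value
print("ok")
```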
def handleRule (self, current=None) :
"""Recognize rule : NAME COLON rhs NEWLINE
"""
children = []
if current is None :
current = next(self.tokens)
self.expect(self.lexer.NAME, current)
children.append((current, []))
current = next(self.tokens)
self.expect(self.lexer.COLON, current)
children.append((current, []))
rhsResult, current = self.handleRhs()
children.append(rhsResult)
if current is None :
current = next(self.tokens)
self.expect(self.lexer.NEWLINE, current)
children.append((current, []))
result = (self.RULE, children)
return result, None
def handleRhs (self, current=None) :
"""Recognize rhs : alt ( VBAR alt )*
"""
children = []
altResult, current = self.handleAlt(current)
children.append(altResult)
if current is None :
current = next(self.tokens)
while current.kind == self.lexer.VBAR :
children.append((current, []))
altResult, current = self.handleAlt()
children.append(altResult)
if current is None :
current = next(self.tokens)
result = (self.RHS, children)
return result, current
def handleAlt (self, current=None) :
""" Recognize alt : item+
"""
children = []
itemResult, current = self.handleItem(current)
children.append(itemResult)
if current is None :
current = next(self.tokens)
while current.kind in (self.lexer.LSQB, self.lexer.LPAR,
self.lexer.NAME, self.lexer.STRING) :
itemResult, current = self.handleItem(current)
children.append(itemResult)
if current is None :
current = next(self.tokens)
return (self.ALT, children), current
def handleItem (self, current=None) :
"""Recognize item : LSQB rhs RSQB | atom ( STAR | PLUS )?
"""
children = []
if current is None :
current = next(self.tokens)
if current.kind == self.lexer.LSQB :
children.append((current, []))
rhsResult, current = self.handleRhs()
children.append(rhsResult)
if current is None :
current = next(self.tokens)
self.expect(self.lexer.RSQB, current)
children.append((current, []))
current = None
else :
atomResult, current = self.handleAtom(current)
children.append(atomResult)
if current is None :
current = next(self.tokens)
if current.kind in (self.lexer.STAR, self.lexer.PLUS) :
children.append((current, []))
current = None
return (self.ITEM, children), current
def handleAtom (self, current=None) :
"""Recognize atom : LPAR rhs RPAR | NAME | STRING
"""
children = []
if current is None :
current = next(self.tokens)
tokType = current.kind
if tokType == self.lexer.LPAR :
children.append((current, []))
rhsResult, current = self.handleRhs()
children.append(rhsResult)
if current is None :
current = next(self.tokens)
self.expect(self.lexer.RPAR, current)
children.append((current, []))
elif tokType == self.lexer.STRING :
children.append((current, []))
else :
self.expect(self.lexer.NAME, current)
children.append((current, []))
return (self.ATOM, children), None
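The (result, token) protocol shared by the handleX methods above can be sketched on a toy grammar seq : NUMBER+ (all names below are illustrative, not part of the module):

```python
import io
import tokenize

def number_tokens(text):
    # keep only NUMBER and ENDMARKER, standing in for the module's tokenizer
    for t in tokenize.generate_tokens(io.StringIO(text).readline):
        if t.type in (tokenize.NUMBER, tokenize.ENDMARKER):
            yield t

def handle_seq(stream, current=None):
    # same contract as handleRhs etc.: accept an optional current token,
    # return (subtree, lookahead token read past the recognized rule)
    children = []
    if current is None:
        current = next(stream)
    while current.type == tokenize.NUMBER:
        children.append((current.string, []))
        current = next(stream)
    return ("seq", children), current

tree, lookahead = handle_seq(number_tokens("1 2 3"))
print(tree, tokenize.tok_name[lookahead.type])
```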
class Parser (object) :
"""A LL1 parser for a generated grammar.
This class aggregates two elements from PyPgen:
- module basil.lang.python.DFAParser
- class basil.parsing.PyPgen.PyPgenParser
The main differences are:
- simplified interface
- adapt to handle Token instances instead of 3-tuples (kind,
text, lineno)
- remove function arguments that can now be retrieved as
instance attributes
- use Python warnings instead of print statements
- many minor code edits (for my own understanding)
Docstrings are provided only for methods that did not have an
equivalent in PyPgen or that have been changed in a significant
way.
"""
def __init__ (self, grammar, tokenizer) :
"""Initialize a new instance.
Expected arguments are:
- grammar: the grammar object as returned by PyPgen.grammar()
- tokenizer: a Tokenizer instance suitable for this grammar
(eg, that passed to PyPgen's constructor)
"""
self.grammar = grammar
self.start = grammar[2]
self.stringMap = {}
for dfa in self.grammar[0] :
dfaType, dfaName = dfa[:2]
self.stringMap[dfaName] = dfaType
self.symbolMap = {}
for dfa in self.grammar[0] :
dfaType, dfaName = dfa[:2]
self.symbolMap[dfaType] = dfaName
self.tokenizer = tokenizer
self.addAccelerators()
def parseTokens (self, tokens, start=None) :
"""Parse a series of tokens.
Expected arguments:
- tokens: a generator of Token instances
- start: the start symbol to be recognized (or None to use
the default one)
The token generator should provide only tokens that are
compatible with the tokenizer passed to the Parser's
constructor. Otherwise, ParseError will be raised when unknown
tokens are encountered.
"""
self.tokens = tokens
return self._parse(start)
def parseFile (self, filename, start=None) :
"""Parse a text file provided by its path.
Expected arguments:
- filename: a file name
- start: the start symbol to be recognized (or None to use
the default one)
The start symbol may be provided by its number (int) or its
name (str) as specified in the grammar.
"""
self.tokens = self.tokenizer.tokenize(open(filename))
return self._parse(start)
def parseStream (self, stream, start=None) :
"""Parse text from an opened file.
Expected arguments:
- stream: a file-like object (with a method readline)
- start: the start symbol to be recognized (or None to use
the default one)
The start symbol may be provided by its number (int) or its
name (str) as specified in the grammar.
"""
self.tokens = self.tokenizer.tokenize(stream)
return self._parse(start)
def parseString (self, text, start=None, filename="<string>") :
"""Parse text given as a string.
Expected arguments:
- text: a string-like object
- start: the start symbol to be recognized (or None to use
the default one)
The start symbol may be provided by its number (int) or its
name (str) as specified in the grammar.
"""
data = io.StringIO(text)
data.name = filename
self.tokens = self.tokenizer.tokenize(data)
return self._parse(start)
def _parse (self, start=None) :
"""Main parsing method.
Expected argument:
- start: the start symbol to be recognized (or None to use
the default one)
The start symbol may be provided by its number (int) or its
name (str) as specified in the grammar.
"""
if start is None :
start = self.start
elif start in self.stringMap :
start = self.stringMap[start]
elif start not in self.symbolMap :
raise ValueError("unknown start symbol %r" % start)
tokens = self.tokens
# initialize the parsing stack
rootNode = ((start, None, 0), [])
dfa = self.findDFA(start)
self.stack = [(dfa[3][dfa[2]], dfa, rootNode)]
# parse all of it
result = self._LL1_OK
while result == self._LL1_OK :
result, expected = self.addToken(next(tokens))
if result == self._LL1_DONE :
return self._fix_locations(rootNode)
elif result == self._LL1_SYNTAX :
raise ParseError(self.tokenizer.last, expected=expected)
def _tostrings (self, st) :
"""Substitute symbol numbers by strings in a syntax tree.
Expected argument:
- st: a syntax tree as returned by the parser
"""
(kind, token, lineno), children = st
if kind >= self.tokenizer.NT_OFFSET :
name = self.symbolMap
else :
name = self.tokenizer.tok_name
return ((name[kind], token, lineno),
[self._tostrings(c) for c in children])
def _fix_locations (self, st) :
"""Replaces None in non-terminal nodes by a Location instance.
Expected argument:
- st: a syntax tree as returned by the parser
"""
(kind, token, lineno), children = st
children = [self._fix_locations(c) for c in children]
if kind >= self.tokenizer.NT_OFFSET :
token = Location(children[0][0][1], children[-1][0][1])
return ((kind, token, lineno), children)
def pprint (self, st) :
"""Return a human-readable representation of a syntax tree.
Expected argument:
- st: a syntax tree as returned by the parser
All symbol numbers are substituted by the corresponding names
and the text is indented appropriately.
"""
return pprint.pformat(self._tostrings(st))
# the rest of the class has not changed too much
def addAccelerators (self) :
if self.grammar[-1] : # already has accelerators
return
dfas, labels, start, accel = self.grammar
def handleState (state) :
arcs, accel, accept = state
accept = 0
labelCount = len(labels)
accelArray = [-1] * labelCount
for arc in arcs :
labelIndex, arrow = arc
kind = labels[labelIndex][0]
if arrow >= 128 :
warn("too many states (%d >= 128)!" % arrow)
continue
if kind >= self.tokenizer.NT_OFFSET :
targetFirstSet = self.findDFA(kind)[4]
if kind - self.tokenizer.NT_OFFSET >= 128 :
warn("nonterminal too high (%d >= %d)!" %
(kind, 128 + self.tokenizer.NT_OFFSET))
continue
for ibit in range(labelCount) :
if self.testbit(targetFirstSet, ibit) :
accelVal = (arrow | 128 |
((kind - self.tokenizer.NT_OFFSET) << 8))
oldVal = accelArray[ibit]
if oldVal != -1 :
# XXX Make this error reporting better.
oldType = oldVal >> 8
# FIXME: bug in the original source
#warn("ambiguity at bit %d (for %d: was to %x,"
# " now to %x)."
# % (ibit, states.index(state),
# oldVal, accelVal))
warn("ambiguity at bit %d" % ibit)
accelArray[ibit] = (arrow | 128 |
((kind - self.tokenizer.NT_OFFSET) << 8))
elif labelIndex == 0 :
accept = 1
elif labelIndex >= 0 and labelIndex < labelCount :
accelArray[labelIndex] = arrow
# Now compute the upper and lower bounds.
accelUpper = labelCount
while accelUpper > 0 and accelArray[accelUpper-1] == -1 :
accelUpper -= 1
accelLower = 0
while accelLower < accelUpper and accelArray[accelLower] == -1 :
accelLower += 1
accelArray = accelArray[accelLower:accelUpper]
return (arcs, (accelUpper, accelLower, accelArray), accept)
def handleDFA (dfa) :
kind, name, initial, states, first = dfa
return (kind, name, initial, list(map(handleState, states)))
self.grammar = (list(map(handleDFA, dfas)), labels, start, 1)
_LL1_OK = 0 # replaced E_ prefix with _LL1_ to prevent potential
_LL1_DONE = 1 # conflicts with grammar symbols
_LL1_SYNTAX = 2
def testbit (self, bitstr, ibit) :
return (ord(bitstr[ibit >> 3]) & (1 << (ibit & 0x7))) != 0
def classify (self, token) :
labels = self.grammar[1]
if token.kind == self.tokenizer.NAME :
for i, label in enumerate(labels) :
if (token.kind, token) == label :
return i
for i, label in enumerate(labels) :
if (token.kind == label[0]) and (label[1] is None) :
return i
return -1
def findDFA (self, start) :
return self.grammar[0][start - self.tokenizer.NT_OFFSET]
def addToken (self, token) :
stack = self.stack
ilabel = self.classify(token)
while True :
state, dfa, parent = stack[-1]
# Perform accelerator
arcs, (accelUpper, accelLower, accelTable), accept = state
if accelLower <= ilabel < accelUpper :
accelResult = accelTable[ilabel - accelLower]
if accelResult != -1 :
# Handle accelerator result
if accelResult & 128 :
# Push non-terminal
nt = (accelResult >> 8) + self.tokenizer.NT_OFFSET
arrow = accelResult & 127
nextDFA = self.findDFA(nt)
# INLINE PUSH
newAstNode = ((nt, None, token.srow), [])
parent[1].append(newAstNode)
stack[-1] = (dfa[3][arrow], dfa, parent)
stack.append((nextDFA[3][nextDFA[2]], nextDFA,
newAstNode))
continue
# INLINE SHIFT
parent[1].append(((token.kind, token, token.srow), []))
nextState = dfa[3][accelResult]
stack[-1] = (nextState, dfa, parent)
state = nextState
while state[2] and len(state[0]) == 1 :
# INLINE POP
stack.pop(-1)
if not stack :
return self._LL1_DONE, None
else :
state, dfa, parent = stack[-1]
return self._LL1_OK, None
if accept :
stack.pop(-1)
if not stack :
return self._LL1_SYNTAX, self.tokenizer.ENDMARKER
continue
if ((accelUpper < accelLower) and
(self.grammar[1][accelLower][1] is not None)) :
expected = self.grammar[1][accelLower][1]
else :
expected = None
return self._LL1_SYNTAX, expected
class PyPgen (object) :
"""A grammar generator.
This class aggregates several elements from PyPgen:
- class basil.parsing.PyPgen.PyPgen
- function basil.parsing.PyPgen.buildParser
- parts of function basil.parsing.PyPgen.parserMain
The main differences are:
- simplified interface
- adapt to handle Token instances instead of 3-tuples (kind,
text, lineno)
- remove function arguments that can now be retrieved as
instance attributes
- use Python warnings instead of print statements
- many minor code edits (for my own understanding)
Docstrings are provided only for methods that did not have an
equivalent in PyPgen or that have been changed in a significant
way.
"""
def __init__ (self, gst, tokenizer) :
"""Initialize a new instance.
Expected arguments are:
- gst: the grammar's syntax tree as returned by
PgenParser.parse()
- tokenizer: a Tokenizer instance suitable for this grammar,
also returned by PgenParser.parse()
"""
self.tokenizer = tokenizer
self.EMPTY = self.tokenizer.ENDMARKER
self.gst = gst
self.nfaGrammar = self.dfaGrammar = None
self.nfa = None
self.crntKind = self.tokenizer.NT_OFFSET
self.operatorMap = tokenizer.opmap
def grammar (self) :
"""Generate and return the grammar object.
"""
nfaGrammar = self.handleStart(self.gst)
grammar = self.generateDfaGrammar(nfaGrammar)
self.translateLabels(grammar)
self.generateFirstSets(grammar)
grammar[0] = list(map(tuple, grammar[0]))
# Trick to add accelerators at generation time: it's easier to
# do it this way than to extract the required elements from
# class Parser.
return Parser(tuple(grammar), self.tokenizer).grammar
def python (self, pgen="pgen", inline=False) :
"""Build and return Python code for parsing module.
Expected arguments are:
- pgen: the name of module pgen in the generated source
(default to 'pgen')
- inline: a bool value to indicate whether to import or
inline pgen module in the generated code
If inline=True, the generated code is much bigger but does not
depend on any non-standard module.
"""
pysrc = ("%(pgen)s"
"tokenizer = %(prefix)s%(tokenizer)r\n"
"grammar = %(grammar)s\n"
"parser = %(prefix)sParser(grammar, tokenizer)\n\n"
"if __name__ == '__main__' :\n"
" # just for test purpose\n"
" import sys, pprint\n"
" st = parser.parseStream(sys.stdin)\n"
" print(parser.pprint(st))\n")
format = {"grammar" : pprint.pformat(self.grammar()),
"prefix" : pgen + ".",
"tokenizer" : self.tokenizer,
"pgen" : "import tokenize, %s\n\n" % pgen,
"inline" : "",
}
if inline :
format["prefix"] = ""
source = inspect.getsource(inspect.getmodule(self))
source = source.rsplit("if __name__ == '__main__' :", 1)[0]
format["pgen"] = ("### module '%s.py' inlined\n"
"%s\n### end of '%s.py'\n\n"
% (pgen, source.rstrip(), pgen))
return pysrc % format
@classmethod
def translate (cls, src, tgt=None, pgen="pgen", inline=False) :
"""Translate a pgen file to a Python file that implements the
corresponding parser.
Expected arguments are:
- src: path of the pgen file
- tgt: path of target Python file, if None, its name is
derived from src (replacing its extension by .py)
- pgen, inline: like in PyPgen.python()
Warning: the output file is silently overwritten if it already
exists.
"""
if tgt is None :
tgt = os.path.splitext(src)[0] + ".py"
gst, tokenizer = PgenParser.parse(src)
self = PyPgen(gst, tokenizer)
outfile = open(tgt, "w")
outfile.write(("# this file has been automatically generated running:\n"
"# %s\n\n") % " ".join(sys.argv))
outfile.write(self.python(pgen, inline))
outfile.close()
# the rest of the class has not changed too much
def addLabel (self, labelList, tokKind, tokName) :
labelTup = (tokKind, tokName)
if labelTup in labelList :
return labelList.index(labelTup)
labelIndex = len(labelList)
labelList.append(labelTup)
return labelIndex
def handleStart (self, gst) :
self.nfaGrammar = [[],[(self.tokenizer.ENDMARKER, "EMPTY")]]
self.crntKind = self.tokenizer.NT_OFFSET
kind, children = gst
for child in children :
if int(child[0]) == PgenParser.RULE :
self.handleRule(child)
return self.nfaGrammar
def handleRule (self, gst) :
# NFA := [ type : Int, name : String, [ STATE ], start : Int,
# finish : Int ]
# STATE := [ ARC ]
# ARC := ( labelIndex : Int, stateIndex : Int )
####
# build the NFA shell
self.nfa = [self.crntKind, None, [], -1, -1]
self.crntKind += 1
# work on the AST node
kind, children = gst
name, colon, rhs, newline = children
self.nfa[1] = name[0]
if (self.tokenizer.NAME, name[0]) not in self.nfaGrammar[1] :
self.nfaGrammar[1].append((self.tokenizer.NAME, name[0]))
start, finish = self.handleRhs(rhs)
self.nfa[3] = start
self.nfa[4] = finish
# append the NFA to the grammar
self.nfaGrammar[0].append(self.nfa)
def handleRhs (self, gst) :
kind, children = gst
start, finish = self.handleAlt(children[0])
if len(children) > 1 :
cStart = start
cFinish = finish
start = len(self.nfa[2])
self.nfa[2].append([(self.EMPTY, cStart)])
finish = len(self.nfa[2])
self.nfa[2].append([])
self.nfa[2][cFinish].append((self.EMPTY, finish))
for child in children[2:] :
if int(child[0]) == PgenParser.ALT :
cStart, cFinish = self.handleAlt(child)
self.nfa[2][start].append((self.EMPTY, cStart))
self.nfa[2][cFinish].append((self.EMPTY, finish))
return start, finish
def handleAlt (self, gst) :
kind, children = gst
start, finish = self.handleItem(children[0])
if len(children) > 1 :
for child in children[1:] :
cStart, cFinish = self.handleItem(child)
self.nfa[2][finish].append((self.EMPTY, cStart))
finish = cFinish
return start, finish
def handleItem (self, gst) :
nodeKind, children = gst
if int(children[0][0]) == PgenParser.ATOM :
start, finish = self.handleAtom(children[0])
if len(children) > 1 :
# Short out the child NFA
self.nfa[2][finish].append((self.EMPTY, start))
if children[1][0].kind == self.tokenizer.STAR :
finish = start
else :
start = len(self.nfa[2])
finish = start + 1
self.nfa[2].append([(self.EMPTY, finish)])
self.nfa[2].append([])
cStart, cFinish = self.handleRhs(children[1])
self.nfa[2][start].append((self.EMPTY, cStart))
self.nfa[2][cFinish].append((self.EMPTY, finish))
return start, finish
def handleAtom (self, gst) :
nodeKind, children = gst
tok = children[0][0]
if tok.kind == self.tokenizer.LPAR :
start, finish = self.handleRhs(children[1])
elif tok.kind in (self.tokenizer.STRING, self.tokenizer.NAME) :
start = len(self.nfa[2])
finish = start + 1
labelIndex = self.addLabel(self.nfaGrammar[1], tok.kind, tok)
self.nfa[2].append([(labelIndex, finish)])
self.nfa[2].append([])
return start, finish
def generateDfaGrammar (self, nfaGrammar, start=None) :
# See notes in pgen.lang.python.DFAParser for output schema.
dfas = []
for nfa in nfaGrammar[0] :
dfas.append(self.nfaToDfa(nfa))
kind = dfas[0][0]
if start is not None :
found = False
for dfa in dfas :
if dfa[1] == start :
kind = dfa[0]
found = True
break
if not found :
warn("couldn't find nonterminal %r, "
"using %r instead." % (start, dfas[0][1]))
return [dfas, nfaGrammar[1][:], kind, 0]
def addClosure (self, stateList, nfa, istate) :
stateList[istate] = True
arcs = nfa[2][istate]
for label, arrow in arcs :
if label == self.EMPTY :
self.addClosure(stateList, nfa, arrow)
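The NFA layout documented in handleRule and the epsilon closure above can be exercised on a tiny hand-built NFA. This sketch adds a visited check to guard against epsilon cycles, which addClosure itself does not need for the grammars produced here:

```python
EMPTY = 0   # standing in for self.EMPTY (the ENDMARKER kind)

def add_closure(state_list, nfa, istate):
    # mark istate and everything reachable through EMPTY (epsilon) arcs
    state_list[istate] = True
    for label, arrow in nfa[2][istate]:
        if label == EMPTY and not state_list[arrow]:
            add_closure(state_list, nfa, arrow)

# NFA := [kind, name, [STATE], start, finish], STATE := [(label, target)]
nfa = [256, "demo", [[(EMPTY, 1)], [(EMPTY, 2), (1, 0)], []], 0, 2]
seen = [False] * len(nfa[2])
add_closure(seen, nfa, nfa[3])
print(seen)
```

State 0 reaches state 1 through an epsilon arc and state 1 reaches state 2 the same way, so the closure of the start state covers all three states.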
def nfaToDfa (self, nfa) :
tempStates = []
crntTempState = [[False] * len(nfa[2]), [], False]
self.addClosure(crntTempState[0], nfa, nfa[3])
crntTempState[2] = crntTempState[0][nfa[4]]
if crntTempState[2] :
warn("nonterminal %r may produce empty." % nfa[1])
tempStates.append(crntTempState)
index = 0
while index < len(tempStates) :
crntTempState = tempStates[index]
for componentState in range(len(nfa[2])) :
if not crntTempState[0][componentState] :
continue
nfaArcs = nfa[2][componentState]
for label, nfaArrow in nfaArcs :
if label == self.EMPTY :
continue
foundTempArc = False
for tempArc in crntTempState[1] :
if tempArc[0] == label :
foundTempArc = True
break
if not foundTempArc :
tempArc = [label, -1, [False] * len(nfa[2])]
crntTempState[1].append(tempArc)
self.addClosure(tempArc[2], nfa, nfaArrow)
for arcIndex in range(len(crntTempState[1])) :
label, arrow, targetStateList = crntTempState[1][arcIndex]
targetFound = False
arrow = 0
for destTempState in tempStates :
if targetStateList == destTempState[0] :
targetFound = True
break
arrow += 1
if not targetFound :
assert arrow == len(tempStates)
tempState = [targetStateList[:], [],
targetStateList[nfa[4]]]
tempStates.append(tempState)
# Write arrow value back to the arc
crntTempState[1][arcIndex][1] = arrow
index += 1
tempStates = self.simplifyTempDfa(nfa, tempStates)
return self.tempDfaToDfa(nfa, tempStates)
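The pair addClosure/nfaToDfa above is the textbook epsilon-closure plus subset construction. A standalone sketch of the same idea, using a hypothetical arc table `{state: [(label, target), ...]}` with `None` as the epsilon label (not the `self.nfa[2]` lists used above):

```python
def eclosure(arcs, states):
    """Epsilon-closure: all states reachable via None-labelled arcs."""
    stack, seen = list(states), set(states)
    while stack:
        s = stack.pop()
        for label, target in arcs.get(s, []):
            if label is None and target not in seen:
                seen.add(target)
                stack.append(target)
    return frozenset(seen)

def nfa_to_dfa(arcs, start, accept):
    """Subset construction: each DFA state is a frozenset of NFA states."""
    start_set = eclosure(arcs, {start})
    dfa, todo = {}, [start_set]
    while todo:
        current = todo.pop()
        if current in dfa:
            continue
        moves = {}
        for s in current:
            for label, target in arcs.get(s, []):
                if label is not None:
                    moves.setdefault(label, set()).add(target)
        dfa[current] = dict((label, eclosure(arcs, targets))
                            for label, targets in moves.items())
        todo.extend(dfa[current].values())
    accepting = set(state for state in dfa if accept in state)
    return dfa, start_set, accepting

# toy NFA for the string 'ab', with one epsilon arc in the middle
arcs = {0: [('a', 1)], 1: [(None, 2)], 2: [('b', 3)]}
dfa, start, accepting = nfa_to_dfa(arcs, 0, 3)
state = start
for char in "ab":
    state = dfa[state][char]
assert state in accepting
```

Unlike nfaToDfa above, this sketch keys DFA states by frozenset instead of comparing boolean membership lists, but the closure/move steps correspond one to one.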
def sameState (self, s1, s2) :
if len(s1[1]) != len(s2[1]) or s1[2] != s2[2] :
return False
for arcIndex in range(len(s1[1])) :
arc1 = s1[1][arcIndex]
arc2 = s2[1][arcIndex]
if arc1[:-1] != arc2[:-1] :
return False
return True
def simplifyTempDfa (self, nfa, tempStates) :
changes = True
deletedStates = []
while changes :
changes = False
for i in range(1, len(tempStates)) :
if i in deletedStates :
continue
for j in range(i) :
if j in deletedStates :
continue
if self.sameState(tempStates[i], tempStates[j]) :
deletedStates.append(i)
for k in range(len(tempStates)) :
if k in deletedStates :
continue
for arc in tempStates[k][1] :
if arc[1] == i :
arc[1] = j
changes = True
break
for stateIndex in deletedStates :
tempStates[stateIndex] = None
return tempStates
def tempDfaToDfa (self, nfa, tempStates) :
dfaStates = []
dfa = [nfa[0], nfa[1], 0, dfaStates, None]
stateMap = {}
tempIndex = 0
for tempState in tempStates :
if tempState is not None :
stateMap[tempIndex] = len(dfaStates)
dfaStates.append(([], (0,0,()), 0))
tempIndex += 1
for tempIndex in stateMap.keys() :
stateList, tempArcs, accepting = tempStates[tempIndex]
dfaStateIndex = stateMap[tempIndex]
dfaState = dfaStates[dfaStateIndex]
for tempArc in tempArcs :
dfaState[0].append((tempArc[0], stateMap[tempArc[1]]))
if accepting :
dfaState[0].append((self.EMPTY, dfaStateIndex))
return dfa
def translateLabels (self, grammar) :
tokenNames = list(self.tokenizer.tok_name.values())
# Recipe 252143 (remixed for laziness)
tokenValues = dict(([v, k] for k, v in
self.tokenizer.tok_name.items()))
labelList = grammar[1]
for labelIndex, (kind, name) in enumerate(labelList) :
if kind == self.tokenizer.NAME :
isNonTerminal = False
for dfa in grammar[0] :
if dfa[1] == name :
labelList[labelIndex] = (dfa[0], None)
isNonTerminal = True
break
if not isNonTerminal :
if name in tokenNames :
labelList[labelIndex] = (tokenValues[name], None)
else :
warn("can't translate NAME label '%s'" % name)
elif kind == self.tokenizer.STRING :
assert name[0] == name[-1]
sname = name[1:-1]
if (sname[0] in string.ascii_letters) or (sname[0] == "_") :
labelList[labelIndex] = (self.tokenizer.NAME, sname)
elif sname in self.operatorMap :
labelList[labelIndex] = (self.operatorMap[sname],
None)
else :
warn("can't translate STRING label %s" % name)
return grammar
def calcFirstSet (self, grammar, dfa) :
if dfa[4] == -1 :
warn("left-recursion for %r" % dfa[1])
return
if dfa[4] is not None :
warn("re-calculating FIRST set for %r" % dfa[1])
dfa[4] = -1
symbols = []
result = 0
state = dfa[3][dfa[2]]
for arc in state[0] :
sym = arc[0]
if sym not in symbols :
symbols.append(sym)
kind = grammar[1][sym][0]
if kind >= self.tokenizer.NT_OFFSET :
# Nonterminal
ddfa = grammar[0][kind - self.tokenizer.NT_OFFSET]
if ddfa[4] == -1 :
warn("left recursion below %r" % dfa[1])
else :
if ddfa[4] is None :
self.calcFirstSet(grammar, ddfa)
result |= ddfa[4]
else :
result |= 1 << sym
dfa[4] = result
def generateFirstSets (self, grammar) :
dfas = grammar[0]
index = 0
while index < len(dfas) :
dfa = dfas[index]
if dfa[4] is None :
self.calcFirstSet(grammar, dfa)
index += 1
for dfa in dfas :
set = dfa[4]
result = []
while set > 0 :
crntBits = set & 0xff
result.append(chr(crntBits))
set >>= 8
properSize = (len(grammar[1]) // 8) + 1
if len(result) < properSize :
result.append('\x00' * (properSize - len(result)))
dfa[4] = "".join(result)
return grammar
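generateFirstSets packs each FIRST set into a bit string; the underlying fixpoint is easier to see on plain sets. A hedged toy version (it looks only at the leading symbol of each alternative and assumes no empty productions; the grammar encoding is hypothetical, not the `grammar[0]`/`grammar[1]` schema above):

```python
def first_sets(grammar):
    """FIRST sets by fixpoint: grammar maps nonterminals to lists of
    alternatives, each alternative a tuple of symbols; any symbol that
    is not a key of the grammar is treated as a terminal."""
    first = dict((nt, set()) for nt in grammar)
    changed = True
    while changed:
        changed = False
        for nt, alternatives in grammar.items():
            for alt in alternatives:
                head = alt[0]
                add = first[head] if head in grammar else set([head])
                if not add <= first[nt]:
                    first[nt] |= add
                    changed = True
    return first

grammar = {"expr": [("term", "+", "term"), ("term",)],
           "term": [("a",), ("(", "expr", ")")]}
first = first_sets(grammar)
assert first["term"] == set(["a", "("])
assert first["expr"] == set(["a", "("])
```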
if __name__ == '__main__' :
# a simple CLI
import sys, getopt
tgt, pgen, inline = None, "snakes.lang.pgen", False
try :
opts, args = getopt.getopt(sys.argv[1:], "h",
["help", "inline", "output=",
"pgen=", "start="])
if ("-h", "") in opts or ("--help", "") in opts :
opts = [("-h", "")]
args = [None]
elif not args :
raise getopt.GetoptError("no input file provided"
" (try -h to get help)")
elif len(args) > 1 :
raise getopt.GetoptError("more than one input file provided")
except getopt.GetoptError :
sys.stderr.write("%s: %s\n" % (__file__, sys.exc_info()[1]))
sys.exit(1)
for (flag, arg) in opts :
if flag in ("-h", "--help") :
print("""usage: %s [OPTIONS] INFILE
Options:
-h, --help print this help and exit
--inline inline 'pgen.py' in the generated file
--output=OUTPUT set output file
--pgen=PGEN name of 'pgen' module in output file""" % __file__)
sys.exit(0)
elif flag == "--inline" :
inline = True
elif flag == "--output" :
tgt = arg
elif flag == "--pgen" :
pgen = arg
PyPgen.translate(args[0], tgt=tgt, pgen=pgen, inline=inline)
"""An implementation of the Zephyr Abstract Syntax Definition Language.
See http://asdl.sourceforge.net/ and
http://www.cs.princeton.edu/~danwang/Papers/dsl97/dsl97-abstract.html.
Only supports the top-level module declaration, not views. Views appear
to be intended to support the browser, which is out of scope here.
Changes for Python: Add support for module versions
"""
#__metaclass__ = type
import os, sys
import traceback
from . import spark
class Token:
# spark seems to dispatch in the parser based on a token's
# type attribute
def __init__(self, type, lineno):
self.type = type
self.lineno = lineno
def __str__(self):
return self.type
def __repr__(self):
return str(self)
class Id(Token):
def __init__(self, value, lineno):
self.type = 'Id'
self.value = value
self.lineno = lineno
def __str__(self):
return self.value
class String(Token):
def __init__(self, value, lineno):
self.type = 'String'
self.value = value
self.lineno = lineno
class ASDLSyntaxError(Exception):
def __init__(self, lineno, token=None, msg=None):
self.lineno = lineno
self.token = token
self.msg = msg
def __str__(self):
if self.msg is None:
return "Error at '%s', line %d" % (self.token, self.lineno)
else:
return "%s, line %d" % (self.msg, self.lineno)
class ASDLScanner(spark.GenericScanner, object):
def tokenize(self, input):
self.rv = []
self.lineno = 1
super(ASDLScanner, self).tokenize(input)
return self.rv
def t_id(self, s):
r"[\w\.]+"
# XXX doesn't distinguish upper vs. lower, which is
# significant for ASDL.
self.rv.append(Id(s, self.lineno))
def t_string(self, s):
r'"[^"]*"'
self.rv.append(String(s, self.lineno))
def t_xxx(self, s): # not sure what this production means
r"<="
self.rv.append(Token(s, self.lineno))
def t_punctuation(self, s):
r"[\{\}\*\=\|\(\)\,\?\:]"
self.rv.append(Token(s, self.lineno))
def t_comment(self, s):
r"\-\-[^\n]*"
pass
def t_newline(self, s):
r"\n"
self.lineno += 1
def t_whitespace(self, s):
r"[ \t]+"
pass
def t_default(self, s):
r" . +"
raise ValueError("unmatched input: %r" % s)
class ASDLParser(spark.GenericParser, object):
def __init__(self):
super(ASDLParser, self).__init__("module")
def typestring(self, tok):
return tok.type
def error(self, tok):
raise ASDLSyntaxError(tok.lineno, tok)
def p_module_0(self, arg):
" module ::= Id Id version { } "
(module, name, version, _0, _1) = arg
if module.value != "module":
raise ASDLSyntaxError(module.lineno,
msg="expected 'module', found %s" % module)
return Module(name, None, version)
def p_module(self, arg):
" module ::= Id Id version { definitions } "
(module, name, version, _0, definitions, _1) = arg
if module.value != "module":
raise ASDLSyntaxError(module.lineno,
msg="expected 'module', found %s" % module)
return Module(name, definitions, version)
def p_version(self, arg):
"version ::= Id String"
(version, V) = arg
if version.value != "version":
raise ASDLSyntaxError(version.lineno,
msg="expected 'version', found %s" % version)
return V
def p_definition_0(self, arg):
" definitions ::= definition "
(definition,) = arg
return definition
def p_definition_1(self, arg):
" definitions ::= definition definitions "
(definition, definitions) = arg
return definition + definitions
def p_definition(self, arg):
" definition ::= Id = type "
(id, _, type) = arg
return [Type(id, type)]
def p_type_0(self, arg):
" type ::= product "
(product,) = arg
return product
def p_type_1(self, arg):
" type ::= sum "
(sum,) = arg
return Sum(sum)
def p_type_2(self, arg):
" type ::= sum Id ( fields ) "
(sum, id, _0, attributes, _1) = arg
if id.value != "attributes":
raise ASDLSyntaxError(id.lineno,
msg="expected attributes, found %s" % id)
if attributes:
attributes.reverse()
return Sum(sum, attributes)
def p_product(self, arg):
" product ::= ( fields ) "
(_0, fields, _1) = arg
fields.reverse()
return Product(fields)
def p_sum_0(self, arg):
" sum ::= constructor "
(constructor,) = arg
return [constructor]
def p_sum_1(self, arg):
" sum ::= constructor | sum "
(constructor, _, sum) = arg
return [constructor] + sum
def p_constructor_0(self, arg):
" constructor ::= Id "
(id,) = arg
return Constructor(id)
def p_constructor_1(self, arg):
" constructor ::= Id ( fields ) "
(id, _0, fields, _1) = arg
fields.reverse()
return Constructor(id, fields)
def p_fields_0(self, arg):
" fields ::= field "
(field,) = arg
return [field]
def p_fields_1(self, arg):
" fields ::= field , fields "
(field, _, fields) = arg
return fields + [field]
def p_field_0(self, arg):
" field ::= Id "
(type,) = arg
return Field(type)
def p_field_1(self, arg):
" field ::= Id Id "
(type, name) = arg
return Field(type, name)
def p_field_2(self, arg):
" field ::= Id * Id "
(type, _, name) = arg
return Field(type, name, seq=1)
def p_field_3(self, arg):
" field ::= Id ? Id "
(type, _, name) = arg
return Field(type, name, opt=1)
def p_field_4(self, arg):
" field ::= Id * "
(type, _) = arg
return Field(type, seq=1)
def p_field_5(self, arg):
" field ::= Id ? "
(type, _) = arg
return Field(type, opt=1)
builtin_types = ("identifier", "string", "int", "bool", "object")
# below is a collection of classes to capture the AST of an AST :-)
# not sure if any of the methods are useful yet, but I'm adding them
# piecemeal as they seem helpful
class AST:
pass # a marker class
class Module(AST):
def __init__(self, name, dfns, version):
self.name = name
self.dfns = dfns
self.version = version
self.types = {} # maps type name to value (from dfns)
for type in dfns:
self.types[type.name.value] = type.value
def __repr__(self):
return "Module(%s, %s)" % (self.name, self.dfns)
class Type(AST):
def __init__(self, name, value):
self.name = name
self.value = value
def __repr__(self):
return "Type(%s, %s)" % (self.name, self.value)
class Constructor(AST):
def __init__(self, name, fields=None):
self.name = name
self.fields = fields or []
def __repr__(self):
return "Constructor(%s, %s)" % (self.name, self.fields)
class Field(AST):
def __init__(self, type, name=None, seq=0, opt=0):
self.type = type
self.name = name
self.seq = seq
self.opt = opt
def __repr__(self):
if self.seq:
extra = ", seq=1"
elif self.opt:
extra = ", opt=1"
else:
extra = ""
if self.name is None:
return "Field(%s%s)" % (self.type, extra)
else:
return "Field(%s, %s%s)" % (self.type, self.name, extra)
class Sum(AST):
def __init__(self, types, attributes=None):
self.types = types
self.attributes = attributes or []
def __repr__(self):
if not self.attributes:
return "Sum(%s)" % self.types
else:
return "Sum(%s, %s)" % (self.types, self.attributes)
class Product(AST):
def __init__(self, fields):
self.fields = fields
def __repr__(self):
return "Product(%s)" % self.fields
class VisitorBase(object):
def __init__(self, skip=0):
self.cache = {}
self.skip = skip
def visit(self, object, *args):
meth = self._dispatch(object)
if meth is None:
return
try:
meth(object, *args)
except Exception:
err = sys.exc_info()[1]
print("Error visiting %r" % object)
print(err)
traceback.print_exc()
# XXX hack
if hasattr(self, 'file'):
self.file.flush()
os._exit(1)
def _dispatch(self, object):
assert isinstance(object, AST), repr(object)
klass = object.__class__
meth = self.cache.get(klass)
if meth is None:
methname = "visit" + klass.__name__
if self.skip:
meth = getattr(self, methname, None)
else:
meth = getattr(self, methname)
self.cache[klass] = meth
return meth
class Check(VisitorBase):
def __init__(self):
super(Check, self).__init__(skip=1)
self.cons = {}
self.errors = 0
self.types = {}
def visitModule(self, mod):
for dfn in mod.dfns:
self.visit(dfn)
def visitType(self, type):
self.visit(type.value, str(type.name))
def visitSum(self, sum, name):
for t in sum.types:
self.visit(t, name)
def visitConstructor(self, cons, name):
key = str(cons.name)
conflict = self.cons.get(key)
if conflict is None:
self.cons[key] = name
else:
print("Redefinition of constructor %s" % key)
print("Defined in %s and %s" % (conflict, name))
self.errors += 1
for f in cons.fields:
self.visit(f, key)
def visitField(self, field, name):
key = str(field.type)
l = self.types.setdefault(key, [])
l.append(name)
def visitProduct(self, prod, name):
for f in prod.fields:
self.visit(f, name)
def check(mod):
v = Check()
v.visit(mod)
for t in v.types:
if t not in mod.types and t not in builtin_types:
v.errors += 1
uses = ", ".join(v.types[t])
print("Undefined type %s, used in %s" % (t, uses))
return not v.errors
def parse(file):
scanner = ASDLScanner()
parser = ASDLParser()
buf = open(file).read()
tokens = scanner.tokenize(buf)
try:
return parser.parse(tokens)
except ASDLSyntaxError:
err = sys.exc_info()[1]
print(err)
lines = buf.split("\n")
print(lines[err.lineno - 1]) # lines starts at 0, files at 1
if __name__ == "__main__":
import glob
import sys
if len(sys.argv) > 1:
files = sys.argv[1:]
else:
testdir = "tests"
files = glob.glob(testdir + "/*.asdl")
for file in files:
print(file)
mod = parse(file)
print("module %s" % mod.name)
print("%s definitions" % len(mod.dfns))
if not check(mod):
print("Check failed")
else:
for dfn in mod.dfns:
print(dfn.name)
# Copyright (c) 1998-2002 John Aycock
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
# CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
__version__ = 'SPARK-0.7 (pre-alpha-5)'
import re
import string
def _namelist(instance):
namelist, namedict, classlist = [], {}, [instance.__class__]
for c in classlist:
for b in c.__bases__:
classlist.append(b)
for name in c.__dict__:
if name not in namedict:
namelist.append(name)
namedict[name] = 1
return namelist
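_namelist walks the instance's class and all of its bases, collecting attribute names while skipping duplicates; both the scanner and the parser use it to reflect on `t_`/`p_` methods. A minimal standalone equivalent of the same trick:

```python
def namelist(instance):
    """Collect attribute names over the class and all its bases,
    keeping only the first occurrence of each name."""
    names, seen, classes = [], set(), [type(instance)]
    for c in classes:
        classes.extend(c.__bases__)   # breadth-first over the bases
        for name in c.__dict__:
            if name not in seen:
                names.append(name)
                seen.add(name)
    return names

class Base(object):
    def t_word(self):
        pass

class Child(Base):
    def t_digit(self):
        pass

names = namelist(Child())
assert "t_word" in names and "t_digit" in names
```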
class GenericScanner:
def __init__(self, flags=0):
pattern = self.reflect()
self.re = re.compile(pattern, re.VERBOSE|flags)
self.index2func = {}
for name, number in self.re.groupindex.items():
self.index2func[number-1] = getattr(self, 't_' + name)
def makeRE(self, name):
doc = getattr(self, name).__doc__
rv = '(?P<%s>%s)' % (name[2:], doc)
return rv
def reflect(self):
rv = []
for name in _namelist(self):
if name[:2] == 't_' and name != 't_default':
rv.append(self.makeRE(name))
rv.append(self.makeRE('t_default'))
return '|'.join(rv)
def error(self, s, pos):
print("Lexical error at position %s" % pos)
raise SystemExit
def tokenize(self, s):
pos = 0
n = len(s)
while pos < n:
m = self.re.match(s, pos)
if m is None:
self.error(s, pos)
groups = m.groups()
for i in range(len(groups)):
if groups[i] and i in self.index2func:
self.index2func[i](groups[i])
pos = m.end()
def t_default(self, s):
r'( . | \n )+'
print("Specification error: unmatched input")
raise SystemExit
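GenericScanner glues every `t_` method's docstring regex into one alternation of named groups, then dispatches on whichever group matched. The same technique with plain `re`, using hypothetical token rules:

```python
import re

# one named group per rule; first matching alternative wins
rules = [("number", r"\d+"),
         ("name", r"[A-Za-z_]\w*"),
         ("ws", r"\s+")]
pattern = "|".join("(?P<%s>%s)" % (name, regex) for name, regex in rules)
regexp = re.compile(pattern)

def tokenize(text):
    """Scan text left to right, dispatching on the matched group name."""
    tokens, pos = [], 0
    while pos < len(text):
        m = regexp.match(text, pos)
        if m is None:
            raise ValueError("unmatched input at position %d" % pos)
        if m.lastgroup != "ws":          # skip whitespace tokens
            tokens.append((m.lastgroup, m.group()))
        pos = m.end()
    return tokens

assert tokenize("x 42") == [("name", "x"), ("number", "42")]
```

This is the same dispatch idea as `index2func` above, except that `match.lastgroup` is used instead of scanning the group list by index.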
#
# Extracted from GenericParser and made global so that [un]picking works.
#
class _State:
def __init__(self, stateno, items):
self.T, self.complete, self.items = [], [], items
self.stateno = stateno
class GenericParser:
#
# An Earley parser, as per J. Earley, "An Efficient Context-Free
# Parsing Algorithm", CACM 13(2), pp. 94-102. Also J. C. Earley,
# "An Efficient Context-Free Parsing Algorithm", Ph.D. thesis,
# Carnegie-Mellon University, August 1968. New formulation of
# the parser according to J. Aycock, "Practical Earley Parsing
# and the SPARK Toolkit", Ph.D. thesis, University of Victoria,
# 2001, and J. Aycock and R. N. Horspool, "Practical Earley
# Parsing", unpublished paper, 2001.
#
def __init__(self, start):
self.rules = {}
self.rule2func = {}
self.rule2name = {}
self.collectRules()
self.augment(start)
self.ruleschanged = 1
_NULLABLE = '\\e_'
_START = 'START'
_BOF = '|-'
#
# When pickling, take the time to generate the full state machine;
# some information is then extraneous, too. Unfortunately we
# can't save the rule2func map.
#
def __getstate__(self):
if self.ruleschanged:
#
# XXX - duplicated from parse()
#
self.computeNull()
self.newrules = {}
self.new2old = {}
self.makeNewRules()
self.ruleschanged = 0
self.edges, self.cores = {}, {}
self.states = { 0: self.makeState0() }
self.makeState(0, self._BOF)
#
# XXX - should find a better way to do this..
#
changes = 1
while changes:
changes = 0
for k, v in self.edges.items():
if v is None:
state, sym = k
if state in self.states:
self.goto(state, sym)
changes = 1
rv = self.__dict__.copy()
for s in self.states.values():
del s.items
del rv['rule2func']
del rv['nullable']
del rv['cores']
return rv
def __setstate__(self, D):
self.rules = {}
self.rule2func = {}
self.rule2name = {}
self.collectRules()
start = D['rules'][self._START][0][1][1] # Blech.
self.augment(start)
D['rule2func'] = self.rule2func
D['makeSet'] = self.makeSet_fast
self.__dict__ = D
#
# A hook for GenericASTBuilder and GenericASTMatcher. Mess
# thee not with this; nor shall thee toucheth the _preprocess
# argument to addRule.
#
def preprocess(self, rule, func): return rule, func
def addRule(self, doc, func, _preprocess=1):
fn = func
rules = doc.split()
index = []
for i in range(len(rules)):
if rules[i] == '::=':
index.append(i-1)
index.append(len(rules))
for i in range(len(index)-1):
lhs = rules[index[i]]
rhs = rules[index[i]+2:index[i+1]]
rule = (lhs, tuple(rhs))
if _preprocess:
rule, fn = self.preprocess(rule, func)
if lhs in self.rules:
self.rules[lhs].append(rule)
else:
self.rules[lhs] = [ rule ]
self.rule2func[rule] = fn
self.rule2name[rule] = func.__name__[2:]
self.ruleschanged = 1
def collectRules(self):
for name in _namelist(self):
if name[:2] == 'p_':
func = getattr(self, name)
doc = func.__doc__
self.addRule(doc, func)
def augment(self, start):
rule = '%s ::= %s %s' % (self._START, self._BOF, start)
self.addRule(rule, lambda args: args[1], 0)
def computeNull(self):
self.nullable = {}
tbd = []
for rulelist in self.rules.values():
lhs = rulelist[0][0]
self.nullable[lhs] = 0
for rule in rulelist:
rhs = rule[1]
if len(rhs) == 0:
self.nullable[lhs] = 1
continue
#
# We only need to consider rules which
# consist entirely of nonterminal symbols.
# This should be a savings on typical
# grammars.
#
for sym in rhs:
if sym not in self.rules:
break
else:
tbd.append(rule)
changes = 1
while changes:
changes = 0
for lhs, rhs in tbd:
if self.nullable[lhs]:
continue
for sym in rhs:
if not self.nullable[sym]:
break
else:
self.nullable[lhs] = 1
changes = 1
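computeNull is a classic fixpoint: a nonterminal is nullable if some right-hand side is empty or consists only of nullable symbols. Restated standalone on a toy grammar encoding (a dict from left-hand sides to lists of right-hand-side tuples, not the `self.rules` format above):

```python
def nullables(rules):
    """Return the set of nullable nonterminals, by iterating to a
    fixpoint: nullable if some rhs is empty or entirely nullable."""
    nullable = set()
    changed = True
    while changed:
        changed = False
        for lhs, alternatives in rules.items():
            if lhs in nullable:
                continue
            for rhs in alternatives:
                if all(sym in nullable for sym in rhs):
                    nullable.add(lhs)
                    changed = True
                    break
    return nullable

# A derives empty, B derives A A, C always produces a terminal
rules = {"A": [()],
         "B": [("A", "A")],
         "C": [("b",), ("B", "c")]}
assert nullables(rules) == {"A", "B"}
```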
def makeState0(self):
s0 = _State(0, [])
for rule in self.newrules[self._START]:
s0.items.append((rule, 0))
return s0
def finalState(self, tokens):
#
# Yuck.
#
if len(self.newrules[self._START]) == 2 and len(tokens) == 0:
return 1
start = self.rules[self._START][0][1][1]
return self.goto(1, start)
def makeNewRules(self):
worklist = []
for rulelist in self.rules.values():
for rule in rulelist:
worklist.append((rule, 0, 1, rule))
for rule, i, candidate, oldrule in worklist:
lhs, rhs = rule
n = len(rhs)
while i < n:
sym = rhs[i]
if sym not in self.rules or \
not self.nullable[sym]:
candidate = 0
i = i + 1
continue
newrhs = list(rhs)
newrhs[i] = self._NULLABLE+sym
newrule = (lhs, tuple(newrhs))
worklist.append((newrule, i+1,
candidate, oldrule))
candidate = 0
i = i + 1
else:
if candidate:
lhs = self._NULLABLE+lhs
rule = (lhs, rhs)
if lhs in self.newrules:
self.newrules[lhs].append(rule)
else:
self.newrules[lhs] = [ rule ]
self.new2old[rule] = oldrule
def typestring(self, token):
return None
def error(self, token):
print("Syntax error at or near `%s' token" % token)
raise SystemExit
def parse(self, tokens):
sets = [ [(1,0), (2,0)] ]
self.links = {}
if self.ruleschanged:
self.computeNull()
self.newrules = {}
self.new2old = {}
self.makeNewRules()
self.ruleschanged = 0
self.edges, self.cores = {}, {}
self.states = { 0: self.makeState0() }
self.makeState(0, self._BOF)
for i in range(len(tokens)):
sets.append([])
if sets[i] == []:
break
self.makeSet(tokens[i], sets, i)
else:
sets.append([])
self.makeSet(None, sets, len(tokens))
#_dump(tokens, sets, self.states)
finalitem = (self.finalState(tokens), 0)
if finalitem not in sets[-2]:
if len(tokens) > 0:
self.error(tokens[i-1])
else:
self.error(None)
return self.buildTree(self._START, finalitem,
tokens, len(sets)-2)
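parse() above drives an Earley chart through precomputed parser states. The bare Earley recognition loop (predictor/scanner/completer over items `(lhs, rhs, dot, origin)`) can be sketched independently of the state machinery; the grammar encoding here is hypothetical:

```python
def earley_recognize(tokens, grammar, start="S"):
    """Minimal Earley recognizer: chart[i] holds items
    (lhs, rhs, dot, origin); predict/complete to fixpoint, then scan."""
    chart = [set() for _ in range(len(tokens) + 1)]
    for rhs in grammar[start]:
        chart[0].add((start, rhs, 0, 0))
    for i in range(len(tokens) + 1):
        changed = True
        while changed:                               # predictor + completer
            changed = False
            for lhs, rhs, dot, origin in list(chart[i]):
                if dot < len(rhs) and rhs[dot] in grammar:
                    for prod in grammar[rhs[dot]]:   # predict
                        new = (rhs[dot], prod, 0, i)
                        if new not in chart[i]:
                            chart[i].add(new)
                            changed = True
                elif dot == len(rhs):                # complete
                    for plhs, prhs, pdot, porigin in list(chart[origin]):
                        if pdot < len(prhs) and prhs[pdot] == lhs:
                            new = (plhs, prhs, pdot + 1, porigin)
                            if new not in chart[i]:
                                chart[i].add(new)
                                changed = True
        if i < len(tokens):                          # scan
            for lhs, rhs, dot, origin in chart[i]:
                if dot < len(rhs) and rhs[dot] == tokens[i]:
                    chart[i + 1].add((lhs, rhs, dot + 1, origin))
    return any(lhs == start and dot == len(rhs) and origin == 0
               for lhs, rhs, dot, origin in chart[-1])

# left-recursive grammar S ::= S + a | a, which Earley handles directly
grammar = {"S": [("S", "+", "a"), ("a",)]}
assert earley_recognize(["a", "+", "a"], grammar)
assert not earley_recognize(["a", "+"], grammar)
```

SPARK's version differs by precomputing the item sets into states (the `makeState` machinery) and by recording back-links for tree building, but the chart discipline is the same.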
def isnullable(self, sym):
# For symbols in G_e only.
return sym.startswith(self._NULLABLE)
def skip(self, lrhs, pos=0):
(lhs, rhs) = lrhs
n = len(rhs)
while pos < n:
if not self.isnullable(rhs[pos]):
break
pos = pos + 1
return pos
def makeState(self, state, sym):
assert sym is not None
#
# Compute \epsilon-kernel state's core and see if
# it exists already.
#
kitems = []
for rule, pos in self.states[state].items:
lhs, rhs = rule
if rhs[pos:pos+1] == (sym,):
kitems.append((rule, self.skip(rule, pos+1)))
core = kitems
core.sort()
tcore = tuple(core)
if tcore in self.cores:
return self.cores[tcore]
#
# Nope, doesn't exist. Compute it and the associated
# \epsilon-nonkernel state together; we'll need it right away.
#
k = self.cores[tcore] = len(self.states)
K, NK = _State(k, kitems), _State(k+1, [])
self.states[k] = K
predicted = {}
edges = self.edges
rules = self.newrules
for X in K, NK:
worklist = X.items
for item in worklist:
rule, pos = item
lhs, rhs = rule
if pos == len(rhs):
X.complete.append(rule)
continue
nextSym = rhs[pos]
key = (X.stateno, nextSym)
if nextSym not in rules:
if key not in edges:
edges[key] = None
X.T.append(nextSym)
else:
edges[key] = None
if nextSym not in predicted:
predicted[nextSym] = 1
for prule in rules[nextSym]:
ppos = self.skip(prule)
new = (prule, ppos)
NK.items.append(new)
#
# Problem: we know K needs generating, but we
# don't yet know about NK. Can't commit anything
# regarding NK to self.edges until we're sure. Should
# we delay committing on both K and NK to avoid this
# hacky code? This creates other problems..
#
if X is K:
edges = {}
if NK.items == []:
return k
#
# Check for \epsilon-nonkernel's core. Unfortunately we
# need to know the entire set of predicted nonterminals
# to do this without accidentally duplicating states.
#
core = list(predicted.keys())
core.sort()
tcore = tuple(core)
if tcore in self.cores:
self.edges[(k, None)] = self.cores[tcore]
return k
nk = self.cores[tcore] = self.edges[(k, None)] = NK.stateno
self.edges.update(edges)
self.states[nk] = NK
return k
def goto(self, state, sym):
key = (state, sym)
if key not in self.edges:
#
# No transitions from state on sym.
#
return None
rv = self.edges[key]
if rv is None:
#
# Target state isn't generated yet. Remedy this.
#
rv = self.makeState(state, sym)
self.edges[key] = rv
return rv
def gotoT(self, state, t):
return [self.goto(state, t)]
def gotoST(self, state, st):
rv = []
for t in self.states[state].T:
if st == t:
rv.append(self.goto(state, t))
return rv
def add(self, set, item, i=None, predecessor=None, causal=None):
if predecessor is None:
if item not in set:
set.append(item)
else:
key = (item, i)
if item not in set:
self.links[key] = []
set.append(item)
self.links[key].append((predecessor, causal))
def makeSet(self, token, sets, i):
cur, next = sets[i], sets[i+1]
ttype = token is not None and self.typestring(token) or None
if ttype is not None:
fn, arg = self.gotoT, ttype
else:
fn, arg = self.gotoST, token
for item in cur:
ptr = (item, i)
state, parent = item
add = fn(state, arg)
for k in add:
if k is not None:
self.add(next, (k, parent), i+1, ptr)
nk = self.goto(k, None)
if nk is not None:
self.add(next, (nk, i+1))
if parent == i:
continue
for rule in self.states[state].complete:
lhs, rhs = rule
for pitem in sets[parent]:
pstate, pparent = pitem
k = self.goto(pstate, lhs)
if k is not None:
why = (item, i, rule)
pptr = (pitem, parent)
self.add(cur, (k, pparent),
i, pptr, why)
nk = self.goto(k, None)
if nk is not None:
self.add(cur, (nk, i))
def makeSet_fast(self, token, sets, i):
#
# Call *only* when the entire state machine has been built!
# It relies on self.edges being filled in completely, and
# then duplicates and inlines code to boost speed at the
# cost of extreme ugliness.
#
cur, next = sets[i], sets[i+1]
ttype = token is not None and self.typestring(token) or None
for item in cur:
ptr = (item, i)
state, parent = item
if ttype is not None:
k = self.edges.get((state, ttype), None)
if k is not None:
#self.add(next, (k, parent), i+1, ptr)
#INLINED --v
new = (k, parent)
key = (new, i+1)
if new not in next:
self.links[key] = []
next.append(new)
self.links[key].append((ptr, None))
#INLINED --^
#nk = self.goto(k, None)
nk = self.edges.get((k, None), None)
if nk is not None:
#self.add(next, (nk, i+1))
#INLINED --v
new = (nk, i+1)
if new not in next:
next.append(new)
#INLINED --^
else:
add = self.gotoST(state, token)
for k in add:
if k is not None:
self.add(next, (k, parent), i+1, ptr)
#nk = self.goto(k, None)
nk = self.edges.get((k, None), None)
if nk is not None:
self.add(next, (nk, i+1))
if parent == i:
continue
for rule in self.states[state].complete:
lhs, rhs = rule
for pitem in sets[parent]:
pstate, pparent = pitem
#k = self.goto(pstate, lhs)
k = self.edges.get((pstate, lhs), None)
if k is not None:
why = (item, i, rule)
pptr = (pitem, parent)
#self.add(cur, (k, pparent),
# i, pptr, why)
#INLINED --v
new = (k, pparent)
key = (new, i)
if new not in cur:
self.links[key] = []
cur.append(new)
self.links[key].append((pptr, why))
#INLINED --^
#nk = self.goto(k, None)
nk = self.edges.get((k, None), None)
if nk is not None:
#self.add(cur, (nk, i))
#INLINED --v
new = (nk, i)
if new not in cur:
cur.append(new)
#INLINED --^
def predecessor(self, key, causal):
for p, c in self.links[key]:
if c == causal:
return p
assert 0
def causal(self, key):
links = self.links[key]
if len(links) == 1:
return links[0][1]
choices = []
rule2cause = {}
for p, c in links:
rule = c[2]
choices.append(rule)
rule2cause[rule] = c
return rule2cause[self.ambiguity(choices)]
def deriveEpsilon(self, nt):
if len(self.newrules[nt]) > 1:
rule = self.ambiguity(self.newrules[nt])
else:
rule = self.newrules[nt][0]
#print rule
rhs = rule[1]
attr = [None] * len(rhs)
for i in range(len(rhs)-1, -1, -1):
attr[i] = self.deriveEpsilon(rhs[i])
return self.rule2func[self.new2old[rule]](attr)
def buildTree(self, nt, item, tokens, k):
state, parent = item
choices = []
for rule in self.states[state].complete:
if rule[0] == nt:
choices.append(rule)
rule = choices[0]
if len(choices) > 1:
rule = self.ambiguity(choices)
#print rule
rhs = rule[1]
attr = [None] * len(rhs)
for i in range(len(rhs)-1, -1, -1):
sym = rhs[i]
if sym not in self.newrules:
if sym != self._BOF:
attr[i] = tokens[k-1]
key = (item, k)
item, k = self.predecessor(key, None)
#elif self.isnullable(sym):
elif self._NULLABLE == sym[0:len(self._NULLABLE)]:
attr[i] = self.deriveEpsilon(sym)
else:
key = (item, k)
why = self.causal(key)
attr[i] = self.buildTree(sym, why[0],
tokens, why[1])
item, k = self.predecessor(key, why)
return self.rule2func[self.new2old[rule]](attr)
def ambiguity(self, rules):
#
# XXX - problem here and in collectRules() if the same rule
# appears in >1 method. Also undefined results if rules
# causing the ambiguity appear in the same method.
#
sortlist = []
name2index = {}
for i in range(len(rules)):
lhs, rhs = rule = rules[i]
name = self.rule2name[self.new2old[rule]]
sortlist.append((len(rhs), name))
name2index[name] = i
sortlist.sort()
list = [a_b[1] for a_b in sortlist]
return rules[name2index[self.resolve(list)]]
def resolve(self, list):
#
# Resolve ambiguity in favor of the shortest RHS.
# Since we walk the tree from the top down, this
# should effectively resolve in favor of a "shift".
#
return list[0]
#
# GenericASTBuilder automagically constructs a concrete/abstract syntax tree
# for a given input. The extra argument is a class (not an instance!)
# which supports the "__setslice__" and "__len__" methods.
#
# XXX - silently overrides any user code in methods.
#
class GenericASTBuilder(GenericParser):
def __init__(self, AST, start):
GenericParser.__init__(self, start)
self.AST = AST
def preprocess(self, rule, func):
rebind = lambda lhs, self=self: \
lambda args, lhs=lhs, self=self: \
self.buildASTNode(args, lhs)
lhs, rhs = rule
return rule, rebind(lhs)
def buildASTNode(self, args, lhs):
children = []
for arg in args:
if isinstance(arg, self.AST):
children.append(arg)
else:
children.append(self.terminal(arg))
return self.nonterminal(lhs, children)
def terminal(self, token): return token
def nonterminal(self, type, args):
rv = self.AST(type)
rv[:len(args)] = args
return rv
#
# GenericASTTraversal is a Visitor pattern according to Design Patterns. For
# each node it attempts to invoke the method n_<node type>, falling
# back onto the default() method if the n_* can't be found. The preorder
# traversal also looks for an exit hook named n_<node type>_exit (no default
# routine is called if it's not found). To prematurely halt traversal
# of a subtree, call the prune() method -- this only makes sense for a
# preorder traversal. Node type is determined via the typestring() method.
#
class GenericASTTraversalPruningException(Exception):
pass
class GenericASTTraversal:
def __init__(self, ast):
self.ast = ast
def typestring(self, node):
return node.type
def prune(self):
raise GenericASTTraversalPruningException
def preorder(self, node=None):
if node is None:
node = self.ast
try:
name = 'n_' + self.typestring(node)
if hasattr(self, name):
func = getattr(self, name)
func(node)
else:
self.default(node)
except GenericASTTraversalPruningException:
return
for kid in node:
self.preorder(kid)
name = name + '_exit'
if hasattr(self, name):
func = getattr(self, name)
func(node)
def postorder(self, node=None):
if node is None:
node = self.ast
for kid in node:
self.postorder(kid)
name = 'n_' + self.typestring(node)
if hasattr(self, name):
func = getattr(self, name)
func(node)
else:
self.default(node)
def default(self, node):
pass
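The dispatch convention described above (look up `n_<node type>`, fall back on `default()`) can be illustrated standalone; this sketch omits the `_exit` hooks and pruning:

```python
class Node:
    """Tiny AST node: a type string plus iterable children."""
    def __init__(self, type, *kids):
        self.type, self.kids = type, list(kids)
    def __iter__(self):
        return iter(self.kids)

class Collector:
    """Preorder visitor using the n_<type> naming convention."""
    def __init__(self):
        self.seen = []
    def preorder(self, node):
        # dispatch on node type, falling back on default()
        getattr(self, "n_" + node.type, self.default)(node)
        for kid in node:
            self.preorder(kid)
    def default(self, node):
        self.seen.append("?")
    def n_leaf(self, node):
        self.seen.append("leaf")

tree = Node("root", Node("leaf"), Node("leaf"))
v = Collector()
v.preorder(tree)
assert v.seen == ["?", "leaf", "leaf"]
```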
#
# GenericASTMatcher. AST nodes must have "__getitem__" and "__cmp__"
# implemented.
#
# XXX - makes assumptions about how GenericParser walks the parse tree.
#
class GenericASTMatcher(GenericParser):
def __init__(self, start, ast):
GenericParser.__init__(self, start)
self.ast = ast
def preprocess(self, rule, func):
rebind = lambda func, self=self: \
lambda args, func=func, self=self: \
self.foundMatch(args, func)
lhs, rhs = rule
rhslist = list(rhs)
rhslist.reverse()
return (lhs, tuple(rhslist)), rebind(func)
def foundMatch(self, args, func):
func(args[-1])
return args[-1]
def match_r(self, node):
self.input.insert(0, node)
children = 0
for child in node:
if children == 0:
self.input.insert(0, '(')
children = children + 1
self.match_r(child)
if children > 0:
self.input.insert(0, ')')
def match(self, ast=None):
if ast is None:
ast = self.ast
self.input = []
self.match_r(ast)
self.parse(self.input)
def resolve(self, list):
#
# Resolve ambiguity in favor of the longest RHS.
#
return list[-1]
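How match_r linearises a tree into the reversed, parenthesised token stream that GenericASTMatcher feeds to its parser can be sketched standalone (nodes are modelled here as (type, children) tuples; flatten is a hypothetical name):

```python
# Standalone sketch of the match_r flattening scheme.
def flatten(node, out):
    kind, children = node
    out.insert(0, kind)         # every node is prepended to the stream
    if children:
        out.insert(0, "(")      # open marker before the first child
        for child in children:
            flatten(child, out)
        out.insert(0, ")")      # close marker once all children are in
    return out

# the tree +(1, 2) becomes the reversed stream [')', '2', '1', '(', '+']
print(flatten(("+", [("1", []), ("2", [])]), []))
```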
def _dump(tokens, sets, states):
for i in range(len(sets)):
print('set %s' % i)
for item in sets[i]:
print('\t%s' % item)
for (lhs, rhs), pos in states[item[0]].items:
print('\t\t %s ::= %s . %s'
% (lhs, ' '.join(rhs[:pos]), ' '.join(rhs[pos:])))
if i < len(tokens):
print('\ntoken %s\n' % str(tokens[i]))
"Usage: unparse.py <path to source file>"
import sys
#import _ast
from snakes.lang import ast as _ast
try :
import io
except ImportError :
import cStringIO as io
import os
def interleave(inter, f, seq):
"""Call f on each item in seq, calling inter() in between.
"""
seq = iter(seq)
try:
f(next(seq))
except StopIteration:
pass
else:
for x in seq:
inter()
f(x)
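The interleave helper above can be exercised like this (the helper is re-defined so the sketch is self-contained): f is called on each item of seq, with inter() called between consecutive items.

```python
import io

# Re-definition of the interleave helper for a self-contained demo.
def interleave(inter, f, seq):
    seq = iter(seq)
    try:
        f(next(seq))
    except StopIteration:
        pass
    else:
        for x in seq:
            inter()
            f(x)

out = io.StringIO()
interleave(lambda: out.write(", "), out.write, ["a", "b", "c"])
print(out.getvalue())  # a, b, c
```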
class Unparser:
"""Methods in this class recursively traverse an AST and
output source code for the abstract syntax; original formatting
is disregarged. """
def __init__(self, tree, file = sys.stdout):
"""Unparser(tree, file=sys.stdout) -> None.
Print the source for tree to file."""
self.f = file
self._indent = 0
self.dispatch(tree)
self.f.write("\n")
self.f.flush()
def fill(self, text = ""):
"Indent a piece of text, according to the current indentation level"
self.f.write("\n"+" "*self._indent + text)
def write(self, text):
"Append a piece of text to the current line."
self.f.write(text)
def enter(self):
"Print ':', and increase the indentation."
self.write(":")
self._indent += 1
def leave(self):
"Decrease the indentation level."
self._indent -= 1
def dispatch(self, tree):
"Dispatcher function, dispatching tree type T to method _T."
if isinstance(tree, list):
for t in tree:
self.dispatch(t)
return
meth = getattr(self, "_"+tree.__class__.__name__)
meth(tree)
############### Unparsing methods ######################
# There should be one method per concrete grammar type #
# Constructors should be grouped by sum type. Ideally, #
# this would follow the order in the grammar, but #
# currently doesn't. #
########################################################
def _Module(self, tree):
for stmt in tree.body:
self.dispatch(stmt)
# stmt
def _Expr(self, tree):
self.fill()
self.dispatch(tree.value)
def _Import(self, t):
self.fill("import ")
interleave(lambda: self.write(", "), self.dispatch, t.names)
def _ImportFrom(self, t):
self.fill("from ")
self.write(t.module)
self.write(" import ")
interleave(lambda: self.write(", "), self.dispatch, t.names)
# XXX(jpe) what is level for?
def _Assign(self, t):
self.fill()
for target in t.targets:
self.dispatch(target)
self.write(" = ")
self.dispatch(t.value)
def _AugAssign(self, t):
self.fill()
self.dispatch(t.target)
self.write(" "+self.binop[t.op.__class__.__name__]+"= ")
self.dispatch(t.value)
def _Return(self, t):
self.fill("return")
if t.value:
self.write(" ")
self.dispatch(t.value)
def _Pass(self, t):
self.fill("pass")
def _Break(self, t):
self.fill("break")
def _Continue(self, t):
self.fill("continue")
def _Delete(self, t):
self.fill("del ")
self.dispatch(t.targets)
def _Assert(self, t):
self.fill("assert ")
self.dispatch(t.test)
if t.msg:
self.write(", ")
self.dispatch(t.msg)
def _Exec(self, t):
self.fill("exec ")
self.dispatch(t.body)
if t.globals:
self.write(" in ")
self.dispatch(t.globals)
if t.locals:
self.write(", ")
self.dispatch(t.locals)
def _Print(self, t):
self.fill("print ")
do_comma = False
if t.dest:
self.write(">>")
self.dispatch(t.dest)
do_comma = True
for e in t.values:
if do_comma: self.write(", ")
else: do_comma = True
self.dispatch(e)
if not t.nl:
self.write(",")
def _Global(self, t):
self.fill("global ")
interleave(lambda: self.write(", "), self.write, t.names)
def _Yield(self, t):
self.write("(")
self.write("yield")
if t.value:
self.write(" ")
self.dispatch(t.value)
self.write(")")
def _Raise(self, t):
self.fill('raise ')
if t.type:
self.dispatch(t.type)
if t.inst:
self.write(", ")
self.dispatch(t.inst)
if t.tback:
self.write(", ")
self.dispatch(t.tback)
def _TryExcept(self, t):
self.fill("try")
self.enter()
self.dispatch(t.body)
self.leave()
for ex in t.handlers:
self.dispatch(ex)
if t.orelse:
self.fill("else")
self.enter()
self.dispatch(t.orelse)
self.leave()
def _TryFinally(self, t):
self.fill("try")
self.enter()
self.dispatch(t.body)
self.leave()
self.fill("finally")
self.enter()
self.dispatch(t.finalbody)
self.leave()
def _ExceptHandler(self, t):
self.fill("except")
if t.type:
self.write(" ")
self.dispatch(t.type)
if t.name:
self.write(", ")
self.dispatch(t.name)
self.enter()
self.dispatch(t.body)
self.leave()
def _ClassDef(self, t):
self.write("\n")
self.fill("class "+t.name)
if t.bases:
self.write("(")
interleave(lambda: self.write(", "), self.dispatch, t.bases)
self.write(")")
self.enter()
self.dispatch(t.body)
self.leave()
def _FunctionDef(self, t):
self.write("\n")
for deco in t.decorator_list:
self.fill("@")
self.dispatch(deco)
self.fill("def "+t.name + "(")
self.dispatch(t.args)
self.write(")")
self.enter()
self.dispatch(t.body)
self.leave()
def _For(self, t):
self.fill("for ")
self.dispatch(t.target)
self.write(" in ")
self.dispatch(t.iter)
self.enter()
self.dispatch(t.body)
self.leave()
if t.orelse:
self.fill("else")
self.enter()
self.dispatch(t.orelse)
self.leave()
def _If(self, t):
self.fill("if ")
self.dispatch(t.test)
self.enter()
# XXX elif?
self.dispatch(t.body)
self.leave()
if t.orelse:
self.fill("else")
self.enter()
self.dispatch(t.orelse)
self.leave()
def _While(self, t):
self.fill("while ")
self.dispatch(t.test)
self.enter()
self.dispatch(t.body)
self.leave()
if t.orelse:
self.fill("else")
self.enter()
self.dispatch(t.orelse)
self.leave()
def _With(self, t):
self.fill("with ")
self.dispatch(t.context_expr)
if t.optional_vars:
self.write(" as ")
self.dispatch(t.optional_vars)
self.enter()
self.dispatch(t.body)
self.leave()
# expr
def _Str(self, tree):
self.write(repr(tree.s))
def _Name(self, t):
self.write(t.id)
def _Repr(self, t):
self.write("`")
self.dispatch(t.value)
self.write("`")
def _Num(self, t):
self.write(repr(t.n))
def _List(self, t):
self.write("[")
interleave(lambda: self.write(", "), self.dispatch, t.elts)
self.write("]")
def _ListComp(self, t):
self.write("[")
self.dispatch(t.elt)
for gen in t.generators:
self.dispatch(gen)
self.write("]")
def _GeneratorExp(self, t):
self.write("(")
self.dispatch(t.elt)
for gen in t.generators:
self.dispatch(gen)
self.write(")")
def _comprehension(self, t):
self.write(" for ")
self.dispatch(t.target)
self.write(" in ")
self.dispatch(t.iter)
for if_clause in t.ifs:
self.write(" if ")
self.dispatch(if_clause)
def _IfExp(self, t):
self.write("(")
self.dispatch(t.body)
self.write(" if ")
self.dispatch(t.test)
self.write(" else ")
self.dispatch(t.orelse)
self.write(")")
def _Dict(self, t):
self.write("{")
def writem(arg):
(k, v) = arg
self.dispatch(k)
self.write(": ")
self.dispatch(v)
interleave(lambda: self.write(", "), writem, zip(t.keys, t.values))
self.write("}")
def _Tuple(self, t):
self.write("(")
if len(t.elts) == 1:
(elt,) = t.elts
self.dispatch(elt)
self.write(",")
else:
interleave(lambda: self.write(", "), self.dispatch, t.elts)
self.write(")")
unop = {"Invert":"~", "Not": "not", "UAdd":"+", "USub":"-"}
def _UnaryOp(self, t):
self.write(self.unop[t.op.__class__.__name__])
self.write("(")
self.dispatch(t.operand)
self.write(")")
binop = { "Add":"+", "Sub":"-", "Mult":"*", "Div":"/", "Mod":"%",
"LShift":">>", "RShift":"<<", "BitOr":"|", "BitXor":"^", "BitAnd":"&",
"FloorDiv":"//", "Pow": "**"}
def _BinOp(self, t):
self.write("(")
self.dispatch(t.left)
self.write(" " + self.binop[t.op.__class__.__name__] + " ")
self.dispatch(t.right)
self.write(")")
cmpops = {"Eq":"==", "NotEq":"!=", "Lt":"<", "LtE":"<=", "Gt":">", "GtE":">=",
"Is":"is", "IsNot":"is not", "In":"in", "NotIn":"not in"}
def _Compare(self, t):
self.write("(")
self.dispatch(t.left)
for o, e in zip(t.ops, t.comparators):
self.write(" " + self.cmpops[o.__class__.__name__] + " ")
self.dispatch(e)
self.write(")")
boolops = {_ast.And: 'and', _ast.Or: 'or'}
def _BoolOp(self, t):
self.write("(")
s = " %s " % self.boolops[t.op.__class__]
interleave(lambda: self.write(s), self.dispatch, t.values)
self.write(")")
def _Attribute(self,t):
self.dispatch(t.value)
self.write(".")
self.write(t.attr)
def _Call(self, t):
self.dispatch(t.func)
self.write("(")
comma = False
for e in t.args:
if comma: self.write(", ")
else: comma = True
self.dispatch(e)
for e in t.keywords:
if comma: self.write(", ")
else: comma = True
self.dispatch(e)
if t.starargs:
if comma: self.write(", ")
else: comma = True
self.write("*")
self.dispatch(t.starargs)
if t.kwargs:
if comma: self.write(", ")
else: comma = True
self.write("**")
self.dispatch(t.kwargs)
self.write(")")
def _Subscript(self, t):
self.dispatch(t.value)
self.write("[")
self.dispatch(t.slice)
self.write("]")
# slice
def _Ellipsis(self, t):
self.write("...")
def _Index(self, t):
self.dispatch(t.value)
def _Slice(self, t):
if t.lower:
self.dispatch(t.lower)
self.write(":")
if t.upper:
self.dispatch(t.upper)
if t.step:
self.write(":")
self.dispatch(t.step)
def _ExtSlice(self, t):
interleave(lambda: self.write(', '), self.dispatch, t.dims)
# others
def _arguments(self, t):
first = True
nonDef = len(t.args)-len(t.defaults)
for a in t.args[0:nonDef]:
if first: first = False
else: self.write(", ")
self.dispatch(a)
for a, d in zip(t.args[nonDef:], t.defaults):
if first: first = False
else: self.write(", ")
self.dispatch(a)
self.write("=")
self.dispatch(d)
if t.vararg:
if first: first = False
else: self.write(", ")
self.write("*"+t.vararg)
if t.kwarg:
if first: first = False
else: self.write(", ")
self.write("**"+t.kwarg)
def _keyword(self, t):
self.write(t.arg)
self.write("=")
self.dispatch(t.value)
def _Lambda(self, t):
self.write("lambda ")
self.dispatch(t.args)
self.write(": ")
self.dispatch(t.body)
def _alias(self, t):
self.write(t.name)
if t.asname:
self.write(" as "+t.asname)
def roundtrip(filename, output=sys.stdout):
source = open(filename).read()
tree = compile(source, filename, "exec", _ast.PyCF_ONLY_AST)
Unparser(tree, output)
def testdir(a):
try:
names = [n for n in os.listdir(a) if n.endswith('.py')]
except OSError:
sys.stderr.write("Directory not readable: %s\n" % a)
else:
for n in names:
fullname = os.path.join(a, n)
if os.path.isfile(fullname):
output = io.StringIO()
print('Testing %s' % fullname)
try:
roundtrip(fullname, output)
except Exception :
e = sys.exc_info()[1]
print(' Failed to compile, exception is %s' % repr(e))
elif os.path.isdir(fullname):
testdir(fullname)
def main(args):
if args[0] == '--testdir':
for a in args[1:]:
testdir(a)
else:
for a in args:
roundtrip(a)
if __name__=='__main__':
main(sys.argv[1:])
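The same parse-then-unparse roundtrip can be demonstrated with only the stdlib ast module (ast.unparse exists since Python 3.9; this does not exercise the Unparser class above, it merely illustrates the idea):

```python
import ast

# Parse source into an AST, then regenerate equivalent source from it.
source = "x = 1 + 2\nprint(x)"
tree = ast.parse(source)
print(ast.unparse(tree))
```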
# this file has been automatically generated running:
# snakes/lang/asdl.py --output=snakes/lang/python/asdl.py snakes/lang/python/python.asdl
# timestamp: 2010-04-22 11:40:44.120690
from snakes.lang import ast
from snkast import *
class _AST (ast.AST):
def __init__ (self, **ARGS):
ast.AST.__init__(self)
for k, v in ARGS.items():
setattr(self, k, v)
class arguments (_AST):
_fields = ('args', 'vararg', 'varargannotation', 'kwonlyargs', 'kwarg', 'kwargannotation', 'defaults', 'kw_defaults')
_attributes = ()
def __init__ (self, args=[], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[], **ARGS):
_AST.__init__(self, **ARGS)
self.args = list(args)
self.vararg = vararg
self.varargannotation = varargannotation
self.kwonlyargs = list(kwonlyargs)
self.kwarg = kwarg
self.kwargannotation = kwargannotation
self.defaults = list(defaults)
self.kw_defaults = list(kw_defaults)
class slice (_AST):
pass
class Slice (slice):
_fields = ('lower', 'upper', 'step')
_attributes = ()
def __init__ (self, lower=None, upper=None, step=None, **ARGS):
slice.__init__(self, **ARGS)
self.lower = lower
self.upper = upper
self.step = step
class ExtSlice (slice):
_fields = ('dims',)
_attributes = ()
def __init__ (self, dims=[], **ARGS):
slice.__init__(self, **ARGS)
self.dims = list(dims)
class Index (slice):
_fields = ('value',)
_attributes = ()
def __init__ (self, value, **ARGS):
slice.__init__(self, **ARGS)
self.value = value
class cmpop (_AST):
pass
class Eq (cmpop):
_fields = ()
_attributes = ()
class NotEq (cmpop):
_fields = ()
_attributes = ()
class Lt (cmpop):
_fields = ()
_attributes = ()
class LtE (cmpop):
_fields = ()
_attributes = ()
class Gt (cmpop):
_fields = ()
_attributes = ()
class GtE (cmpop):
_fields = ()
_attributes = ()
class Is (cmpop):
_fields = ()
_attributes = ()
class IsNot (cmpop):
_fields = ()
_attributes = ()
class In (cmpop):
_fields = ()
_attributes = ()
class NotIn (cmpop):
_fields = ()
_attributes = ()
class expr_context (_AST):
pass
class Load (expr_context):
_fields = ()
_attributes = ()
class Store (expr_context):
_fields = ()
_attributes = ()
class Del (expr_context):
_fields = ()
_attributes = ()
class AugLoad (expr_context):
_fields = ()
_attributes = ()
class AugStore (expr_context):
_fields = ()
_attributes = ()
class Param (expr_context):
_fields = ()
_attributes = ()
class keyword (_AST):
_fields = ('arg', 'value')
_attributes = ()
def __init__ (self, arg, value, **ARGS):
_AST.__init__(self, **ARGS)
self.arg = arg
self.value = value
class unaryop (_AST):
pass
class Invert (unaryop):
_fields = ()
_attributes = ()
class Not (unaryop):
_fields = ()
_attributes = ()
class UAdd (unaryop):
_fields = ()
_attributes = ()
class USub (unaryop):
_fields = ()
_attributes = ()
class expr (_AST):
pass
class BoolOp (expr):
_fields = ('op', 'values')
_attributes = ('lineno', 'col_offset')
def __init__ (self, op, values=[], lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.op = op
self.values = list(values)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class BinOp (expr):
_fields = ('left', 'op', 'right')
_attributes = ('lineno', 'col_offset')
def __init__ (self, left, op, right, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.left = left
self.op = op
self.right = right
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class UnaryOp (expr):
_fields = ('op', 'operand')
_attributes = ('lineno', 'col_offset')
def __init__ (self, op, operand, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.op = op
self.operand = operand
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Lambda (expr):
_fields = ('args', 'body')
_attributes = ('lineno', 'col_offset')
def __init__ (self, args, body, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.args = args
self.body = body
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class IfExp (expr):
_fields = ('test', 'body', 'orelse')
_attributes = ('lineno', 'col_offset')
def __init__ (self, test, body, orelse, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.test = test
self.body = body
self.orelse = orelse
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Dict (expr):
_fields = ('keys', 'values')
_attributes = ('lineno', 'col_offset')
def __init__ (self, keys=[], values=[], lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.keys = list(keys)
self.values = list(values)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Set (expr):
_fields = ('elts',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, elts=[], lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.elts = list(elts)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class ListComp (expr):
_fields = ('elt', 'generators')
_attributes = ('lineno', 'col_offset')
def __init__ (self, elt, generators=[], lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.elt = elt
self.generators = list(generators)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class SetComp (expr):
_fields = ('elt', 'generators')
_attributes = ('lineno', 'col_offset')
def __init__ (self, elt, generators=[], lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.elt = elt
self.generators = list(generators)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class DictComp (expr):
_fields = ('key', 'value', 'generators')
_attributes = ('lineno', 'col_offset')
def __init__ (self, key, value, generators=[], lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.key = key
self.value = value
self.generators = list(generators)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class GeneratorExp (expr):
_fields = ('elt', 'generators')
_attributes = ('lineno', 'col_offset')
def __init__ (self, elt, generators=[], lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.elt = elt
self.generators = list(generators)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Yield (expr):
_fields = ('value',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, value=None, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.value = value
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Compare (expr):
_fields = ('left', 'ops', 'comparators')
_attributes = ('lineno', 'col_offset')
def __init__ (self, left, ops=[], comparators=[], lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.left = left
self.ops = list(ops)
self.comparators = list(comparators)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Call (expr):
_fields = ('func', 'args', 'keywords', 'starargs', 'kwargs')
_attributes = ('lineno', 'col_offset')
def __init__ (self, func, args=[], keywords=[], starargs=None, kwargs=None, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.func = func
self.args = list(args)
self.keywords = list(keywords)
self.starargs = starargs
self.kwargs = kwargs
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Num (expr):
_fields = ('n',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, n, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.n = n
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Str (expr):
_fields = ('s',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, s, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.s = s
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Ellipsis (expr):
_fields = ()
_attributes = ('lineno', 'col_offset')
def __init__ (self, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Attribute (expr):
_fields = ('value', 'attr', 'ctx')
_attributes = ('lineno', 'col_offset')
def __init__ (self, value, attr, ctx=None, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.value = value
self.attr = attr
self.ctx = ctx
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Subscript (expr):
_fields = ('value', 'slice', 'ctx')
_attributes = ('lineno', 'col_offset')
def __init__ (self, value, slice, ctx=None, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.value = value
self.slice = slice
self.ctx = ctx
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Starred (expr):
_fields = ('value', 'ctx')
_attributes = ('lineno', 'col_offset')
def __init__ (self, value, ctx=None, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.value = value
self.ctx = ctx
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Name (expr):
_fields = ('id', 'ctx')
_attributes = ('lineno', 'col_offset')
def __init__ (self, id, ctx=None, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.id = id
self.ctx = ctx
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class List (expr):
_fields = ('elts', 'ctx')
_attributes = ('lineno', 'col_offset')
def __init__ (self, elts=[], ctx=None, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.elts = list(elts)
self.ctx = ctx
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Tuple (expr):
_fields = ('elts', 'ctx')
_attributes = ('lineno', 'col_offset')
def __init__ (self, elts=[], ctx=None, lineno=0, col_offset=0, **ARGS):
expr.__init__(self, **ARGS)
self.elts = list(elts)
self.ctx = ctx
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class boolop (_AST):
pass
class And (boolop):
_fields = ()
_attributes = ()
class Or (boolop):
_fields = ()
_attributes = ()
class stmt (_AST):
pass
class FunctionDef (stmt):
_fields = ('name', 'args', 'body', 'decorator_list', 'returns')
_attributes = ('lineno', 'col_offset')
def __init__ (self, name, args, body=[], decorator_list=[], returns=None, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.name = name
self.args = args
self.body = list(body)
self.decorator_list = list(decorator_list)
self.returns = returns
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class ClassDef (stmt):
_fields = ('name', 'bases', 'keywords', 'starargs', 'kwargs', 'body', 'decorator_list')
_attributes = ('lineno', 'col_offset')
def __init__ (self, name, bases=[], keywords=[], starargs=None, kwargs=None, body=[], decorator_list=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.name = name
self.bases = list(bases)
self.keywords = list(keywords)
self.starargs = starargs
self.kwargs = kwargs
self.body = list(body)
self.decorator_list = list(decorator_list)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Return (stmt):
_fields = ('value',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, value=None, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.value = value
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Delete (stmt):
_fields = ('targets',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, targets=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.targets = list(targets)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Assign (stmt):
_fields = ('targets', 'value')
_attributes = ('lineno', 'col_offset')
def __init__ (self, value, targets=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.targets = list(targets)
self.value = value
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class AugAssign (stmt):
_fields = ('target', 'op', 'value')
_attributes = ('lineno', 'col_offset')
def __init__ (self, target, op, value, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.target = target
self.op = op
self.value = value
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class For (stmt):
_fields = ('target', 'iter', 'body', 'orelse')
_attributes = ('lineno', 'col_offset')
def __init__ (self, target, iter, body=[], orelse=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.target = target
self.iter = iter
self.body = list(body)
self.orelse = list(orelse)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class While (stmt):
_fields = ('test', 'body', 'orelse')
_attributes = ('lineno', 'col_offset')
def __init__ (self, test, body=[], orelse=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.test = test
self.body = list(body)
self.orelse = list(orelse)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class If (stmt):
_fields = ('test', 'body', 'orelse')
_attributes = ('lineno', 'col_offset')
def __init__ (self, test, body=[], orelse=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.test = test
self.body = list(body)
self.orelse = list(orelse)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class With (stmt):
_fields = ('context_expr', 'optional_vars', 'body')
_attributes = ('lineno', 'col_offset')
def __init__ (self, context_expr, optional_vars=None, body=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.context_expr = context_expr
self.optional_vars = optional_vars
self.body = list(body)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Raise (stmt):
_fields = ('exc', 'cause')
_attributes = ('lineno', 'col_offset')
def __init__ (self, exc=None, cause=None, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.exc = exc
self.cause = cause
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class TryExcept (stmt):
_fields = ('body', 'handlers', 'orelse')
_attributes = ('lineno', 'col_offset')
def __init__ (self, body=[], handlers=[], orelse=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.body = list(body)
self.handlers = list(handlers)
self.orelse = list(orelse)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class TryFinally (stmt):
_fields = ('body', 'finalbody')
_attributes = ('lineno', 'col_offset')
def __init__ (self, body=[], finalbody=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.body = list(body)
self.finalbody = list(finalbody)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Assert (stmt):
_fields = ('test', 'msg')
_attributes = ('lineno', 'col_offset')
def __init__ (self, test, msg=None, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.test = test
self.msg = msg
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Import (stmt):
_fields = ('names',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, names=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.names = list(names)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class ImportFrom (stmt):
_fields = ('module', 'names', 'level')
_attributes = ('lineno', 'col_offset')
def __init__ (self, module, names=[], level=None, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.module = module
self.names = list(names)
self.level = level
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Exec (stmt):
_fields = ('body', 'globals', 'locals')
_attributes = ('lineno', 'col_offset')
def __init__ (self, body, globals=None, locals=None, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.body = body
self.globals = globals
self.locals = locals
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Global (stmt):
_fields = ('names',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, names=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.names = list(names)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Nonlocal (stmt):
_fields = ('names',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, names=[], lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.names = list(names)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Expr (stmt):
_fields = ('value',)
_attributes = ('lineno', 'col_offset')
def __init__ (self, value, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.value = value
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Pass (stmt):
_fields = ()
_attributes = ('lineno', 'col_offset')
def __init__ (self, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Break (stmt):
_fields = ()
_attributes = ('lineno', 'col_offset')
def __init__ (self, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class Continue (stmt):
_fields = ()
_attributes = ('lineno', 'col_offset')
def __init__ (self, lineno=0, col_offset=0, **ARGS):
stmt.__init__(self, **ARGS)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class excepthandler (_AST):
pass
class ExceptHandler (excepthandler):
_fields = ('type', 'name', 'body')
_attributes = ('lineno', 'col_offset')
def __init__ (self, type=None, name=None, body=[], lineno=0, col_offset=0, **ARGS):
excepthandler.__init__(self, **ARGS)
self.type = type
self.name = name
self.body = list(body)
self.lineno = int(lineno)
self.col_offset = int(col_offset)
class alias (_AST):
_fields = ('name', 'asname')
_attributes = ()
def __init__ (self, name, asname=None, **ARGS):
_AST.__init__(self, **ARGS)
self.name = name
self.asname = asname
class comprehension (_AST):
_fields = ('target', 'iter', 'ifs')
_attributes = ()
def __init__ (self, target, iter, ifs=[], **ARGS):
_AST.__init__(self, **ARGS)
self.target = target
self.iter = iter
self.ifs = list(ifs)
class arg (_AST):
_fields = ('arg', 'annotation')
_attributes = ()
def __init__ (self, arg, annotation=None, **ARGS):
_AST.__init__(self, **ARGS)
self.arg = arg
self.annotation = annotation
class operator (_AST):
pass
class Add (operator):
_fields = ()
_attributes = ()
class Sub (operator):
_fields = ()
_attributes = ()
class Mult (operator):
_fields = ()
_attributes = ()
class Div (operator):
_fields = ()
_attributes = ()
class Mod (operator):
_fields = ()
_attributes = ()
class Pow (operator):
_fields = ()
_attributes = ()
class LShift (operator):
_fields = ()
_attributes = ()
class RShift (operator):
_fields = ()
_attributes = ()
class BitOr (operator):
_fields = ()
_attributes = ()
class BitXor (operator):
_fields = ()
_attributes = ()
class BitAnd (operator):
_fields = ()
_attributes = ()
class FloorDiv (operator):
_fields = ()
_attributes = ()
class mod (_AST):
pass
class Module (mod):
_fields = ('body',)
_attributes = ()
def __init__ (self, body=[], **ARGS):
mod.__init__(self, **ARGS)
self.body = list(body)
class Interactive (mod):
_fields = ('body',)
_attributes = ()
def __init__ (self, body=[], **ARGS):
mod.__init__(self, **ARGS)
self.body = list(body)
class Expression (mod):
_fields = ('body',)
_attributes = ()
def __init__ (self, body, **ARGS):
mod.__init__(self, **ARGS)
self.body = body
class Suite (mod):
_fields = ('body',)
_attributes = ()
def __init__ (self, body=[], **ARGS):
mod.__init__(self, **ARGS)
self.body = list(body)
"""
>>> testparser(Translator)
"""
import doctest, traceback, sys, re, operator, inspect
from functools import reduce # not a builtin in Python 3; used by do_file_input
from snakes.lang.pgen import ParseError
from snakes.lang.python.pgen import parser
import snakes.lang.python.asdl as ast
from snakes import *
_symbols = parser.tokenizer.tok_name.copy()
# the next statement intentionally overrides the 'NT_OFFSET' entry
# with 'single_input'
_symbols.update(parser.symbolMap)
class ParseTree (list) :
_symbols = _symbols
def __init__ (self, st) :
self.symbol = self._symbols[st[0][0]]
self.kind = st[0][0]
self.srow = st[0][1].srow
self.erow = st[0][1].erow
self.scol = st[0][1].scol
self.ecol = st[0][1].ecol
self.text = st[0][1]
self.filename = self.text.filename
list.__init__(self, (self.__class__(child) for child in st[1]))
def __repr__ (self) :
return repr(self.text)
def involve (self, rule) :
if self.kind == rule :
return True
for child in self :
if child.involve(rule) :
return True
return False
def source (self) :
lines = self.text.lexer.lines[self.srow-1:self.erow]
if self.srow == self.erow :
lines[0] = lines[0][self.scol:self.ecol]
else :
lines[0] = lines[0][self.scol:]
lines[-1] = lines[-1][:self.ecol]
return str("\n".join(l.rstrip("\n") for l in lines))
class Translator (object) :
ParseTree = ParseTree
parser = parser
ST = ast
def __init__ (self, st) :
for value, name in self.parser.tokenizer.tok_name.items() :
setattr(self, name, value)
def isdict (obj) : return isinstance(obj, dict)
for name, d in inspect.getmembers(self.__class__, isdict) :
d = d.copy()
for key, val in d.items() :
try :
d[key] = getattr(self.ST, val.__name__)
except AttributeError :
pass
setattr(self, name, d)
self.ast = self.do(self.ParseTree(st))
def do (self, st, ctx=None) :
if ctx is None :
ctx = self.ST.Load
name = st.symbol
meth = getattr(self, "do_" + name)
tree = meth(st, ctx)
try :
tree.st = st
except AttributeError :
pass
return tree
# unary operations
_unary = {"not" : ast.Not,
"+" : ast.UAdd,
"-" : ast.USub,
"~" : ast.Invert}
def _do_unary (self, st, ctx) :
"""unary: not_test | factor
"""
if len(st) == 1 :
return self.do(st[0], ctx)
else :
return self.ST.UnaryOp(lineno=st.srow, col_offset=st.scol,
op=self._unary[st[0].text](lineno=st[0].srow,
col_offset=st[0].scol),
operand=self.do(st[1], ctx))
# binary operations
_binary = {"+" : ast.Add,
"-" : ast.Sub,
"*" : ast.Mult,
"/" : ast.Div,
"%" : ast.Mod,
"&" : ast.BitAnd,
"|" : ast.BitOr,
"^" : ast.BitXor,
"<<" : ast.LShift,
">>" : ast.RShift,
"**" : ast.Pow,
"//" : ast.FloorDiv}
def _do_binary (self, st, ctx) :
"""binary: expr | xor_expr | and_expr | shift_expr | arith_expr | term
"""
if len(st) == 1 :
return self.do(st[0], ctx)
else :
values = [self.do(child, ctx) for child in st[::2]]
ops = [(self._binary[child.text], child) for child in st[1::2]]
while len(values) > 1 :
left = values.pop(0)
right = values.pop(0)
operator, node = ops.pop(0)
values.insert(0, self.ST.BinOp(lineno=st.srow,
col_offset=st.scol,
left=left,
op=operator(lineno=node.srow,
col_offset=node.scol),
right=right))
return values[0]
# boolean operations
_boolean = {"and" : ast.And,
"or" : ast.Or}
def _do_boolean (self, st, ctx) :
"""boolean: and_test | or_test
"""
if len(st) == 1 :
return self.do(st[0], ctx)
else :
return self.ST.BoolOp(lineno=st.srow, col_offset=st.scol,
op=self._boolean[st[1].text](lineno=st[1].srow,
col_offset=st[1].scol),
values=[self.do(child, ctx)
for child in st[::2]])
# start of rule handlers
def do_file_input (self, st, ctx) :
"""file_input: (NEWLINE | stmt)* ENDMARKER
-> ast.Module
<<< pass
'Module(body=[Pass()])'
"""
body = reduce(operator.add,
(self.do(child, ctx) for child in st
if child.kind not in (self.NEWLINE, self.ENDMARKER)),
[])
return self.ST.Module(lineno=st.srow,
col_offset=st.scol,
body=body)
def do_decorator (self, st, ctx) :
"""decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE
-> ast.AST
<<< @foo
... def f() : pass
"Module(body=[FunctionDef(name='f', args=arguments(args=[], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=[Pass()], decorator_list=[Name(id='foo', ctx=Load())], returns=None)])"
<<< @foo.bar
... def f() : pass
"Module(body=[FunctionDef(name='f', args=arguments(args=[], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=[Pass()], decorator_list=[Attribute(value=Name(id='foo', ctx=Load()), attr='bar', ctx=Load())], returns=None)])"
<<< @foo()
... def f() : pass
"Module(body=[FunctionDef(name='f', args=arguments(args=[], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=[Pass()], decorator_list=[Call(func=Name(id='foo', ctx=Load()), args=[], keywords=[], starargs=None, kwargs=None)], returns=None)])"
<<< @foo(x, y)
... def f() : pass
"Module(body=[FunctionDef(name='f', args=arguments(args=[], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=[Pass()], decorator_list=[Call(func=Name(id='foo', ctx=Load()), args=[Name(id='x', ctx=Load()), Name(id='y', ctx=Load())], keywords=[], starargs=None, kwargs=None)], returns=None)])"
<<< @foo.bar(x, y)
... def f() : pass
"Module(body=[FunctionDef(name='f', args=arguments(args=[], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=[Pass()], decorator_list=[Call(func=Attribute(value=Name(id='foo', ctx=Load()), attr='bar', ctx=Load()), args=[Name(id='x', ctx=Load()), Name(id='y', ctx=Load())], keywords=[], starargs=None, kwargs=None)], returns=None)])"
"""
names = self.do(st[1], ctx).split(".")
obj = self.ST.Name(lineno=st[1].srow, col_offset=st[1].scol,
id=names.pop(0), ctx=ctx())
while names :
obj = self.ST.Attribute(lineno=st[1].srow,
col_offset=st[1].scol,
value=obj, attr=names.pop(0), ctx=ctx())
if len(st) == 3 :
return obj
elif len(st) == 5 :
return self.ST.Call(lineno=st[1].srow, col_offset=st[1].scol,
func=obj, args=[], keywords=[],
starargs=None, kwargs=None)
else :
args, keywords, starargs, kwargs = self.do(st[3], ctx)
return self.ST.Call(lineno=st[1].srow, col_offset=st[1].scol,
func=obj, args=args, keywords=keywords,
starargs=starargs, kwargs=kwargs)
def do_decorators (self, st, ctx) :
"""decorators: decorator+
-> ast.AST+
<<< @foo
... @bar
... @spam.egg()
... def f () :
... pass
"Module(body=[FunctionDef(name='f', args=arguments(args=[], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=[Pass()], decorator_list=[Name(id='foo', ctx=Load()), Name(id='bar', ctx=Load()), Call(func=Attribute(value=Name(id='spam', ctx=Load()), attr='egg', ctx=Load()), args=[], keywords=[], starargs=None, kwargs=None)], returns=None)])"
"""
return [self.do(child, ctx) for child in st]
def do_decorated (self, st, ctx) :
"""decorated: decorators (classdef | funcdef)
-> ast.AST
<<< @foo
... def f () :
... pass
"Module(body=[FunctionDef(name='f', args=arguments(args=[], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=[Pass()], decorator_list=[Name(id='foo', ctx=Load())], returns=None)])"
<<< @foo
... class c () :
... pass
"Module(body=[ClassDef(name='c', bases=[], keywords=[], starargs=None, kwargs=None, body=[Pass()], decorator_list=[Name(id='foo', ctx=Load())])])"
"""
child = self.do(st[1], ctx)
child.decorator_list.extend(self.do(st[0], ctx))
return child
def do_funcdef (self, st, ctx) :
"""funcdef: 'def' NAME parameters ['->' test] ':' suite
-> ast.FunctionDef
<<< def f(x, y) : x+y
"Module(body=[FunctionDef(name='f', args=arguments(args=[arg(arg='x', annotation=None), arg(arg='y', annotation=None)], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=[Expr(value=BinOp(left=Name(id='x', ctx=Load()), op=Add(), right=Name(id='y', ctx=Load())))], decorator_list=[], returns=None)])"
<<< def f(x, y) -> int : x+y
"Module(body=[FunctionDef(name='f', args=arguments(args=[arg(arg='x', annotation=None), arg(arg='y', annotation=None)], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=[Expr(value=BinOp(left=Name(id='x', ctx=Load()), op=Add(), right=Name(id='y', ctx=Load())))], decorator_list=[], returns=Name(id='int', ctx=Load()))])"
"""
if len(st) == 5 :
return self.ST.FunctionDef(lineno=st.srow, col_offset=st.scol,
name=st[1].text,
args=self.do(st[2], ctx),
returns=None,
body=self.do(st[-1], ctx),
decorator_list=[])
else :
return self.ST.FunctionDef(lineno=st.srow, col_offset=st.scol,
name=st[1].text,
args=self.do(st[2], ctx),
returns=self.do(st[5], ctx),
body=self.do(st[-1], ctx),
decorator_list=[])
def do_parameters (self, st, ctx) :
"""parameters: '(' [typedargslist] ')'
-> ast.arguments
<<< def f () : pass
"Module(body=[FunctionDef(name='f', args=arguments(args=[], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=[Pass()], decorator_list=[], returns=None)])"
<<< def f(x, y) : pass
"Module(body=[FunctionDef(name='f', args=arguments(args=[arg(arg='x', annotation=None), arg(arg='y', annotation=None)], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=[Pass()], decorator_list=[], returns=None)])"
"""
if len(st) == 2 :
return self.ST.arguments(lineno=st.srow, col_offset=st.scol,
args=[], vararg=None,
varargannotation=None,
kwonlyargs=[], kwarg=None,
kwargannotation=None,
defaults=[], kw_defaults=[])
else :
return self.do(st[1], ctx)
def do_typedargslist (self, st, ctx) :
"""typedargslist: ((tfpdef ['=' test] ',')*
('*' [tfpdef] (',' tfpdef ['=' test])* [',' '**' tfpdef]
| '**' tfpdef)
| tfpdef ['=' test] (',' tfpdef ['=' test])* [','])
-> ast.arguments
<<< def f(x) : pass
"Module(body=[FunctionDef(name='f', args=arguments(args=[arg(arg='x', annotation=None)], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=[Pass()], decorator_list=[], returns=None)])"
<<< def f(x, *y, z) : pass
"Module(body=[FunctionDef(name='f', args=arguments(args=[arg(arg='x', annotation=None)], vararg='y', varargannotation=None, kwonlyargs=[arg(arg='z', annotation=None)], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[None]), body=[Pass()], decorator_list=[], returns=None)])"
<<< def f(*, x=1) : pass
"Module(body=[FunctionDef(name='f', args=arguments(args=[], vararg=None, varargannotation=None, kwonlyargs=[arg(arg='x', annotation=None)], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[Num(n=1)]), body=[Pass()], decorator_list=[], returns=None)])"
<<< def f(x, y, *z, a=1, b=2, **d) : pass
"Module(body=[FunctionDef(name='f', args=arguments(args=[arg(arg='x', annotation=None), arg(arg='y', annotation=None)], vararg='z', varargannotation=None, kwonlyargs=[arg(arg='a', annotation=None), arg(arg='b', annotation=None)], kwarg='d', kwargannotation=None, defaults=[], kw_defaults=[Num(n=1), Num(n=2)]), body=[Pass()], decorator_list=[], returns=None)])"
<<< def f(x:int) : pass
"Module(body=[FunctionDef(name='f', args=arguments(args=[arg(arg='x', annotation=Name(id='int', ctx=Load()))], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=[Pass()], decorator_list=[], returns=None)])"
<<< def f(x:int, *y:float, z:bool) : pass
"Module(body=[FunctionDef(name='f', args=arguments(args=[arg(arg='x', annotation=Name(id='int', ctx=Load()))], vararg='y', varargannotation=Name(id='float', ctx=Load()), kwonlyargs=[arg(arg='z', annotation=Name(id='bool', ctx=Load()))], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[None]), body=[Pass()], decorator_list=[], returns=None)])"
<<< def f(*, x:int=1) : pass
"Module(body=[FunctionDef(name='f', args=arguments(args=[], vararg=None, varargannotation=None, kwonlyargs=[arg(arg='x', annotation=Name(id='int', ctx=Load()))], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[Num(n=1)]), body=[Pass()], decorator_list=[], returns=None)])"
<<< def f(x:int, y, *z:float, a=1, b:bool=False, **d:object) : pass
"Module(body=[FunctionDef(name='f', args=arguments(args=[arg(arg='x', annotation=Name(id='int', ctx=Load())), arg(arg='y', annotation=None)], vararg='z', varargannotation=Name(id='float', ctx=Load()), kwonlyargs=[arg(arg='a', annotation=None), arg(arg='b', annotation=Name(id='bool', ctx=Load()))], kwarg='d', kwargannotation=Name(id='object', ctx=Load()), defaults=[], kw_defaults=[Num(n=1), Name(id='False', ctx=Load())]), body=[Pass()], decorator_list=[], returns=None)])"
"""
args = []
vararg = None
varargannotation = None
star = False
kwonlyargs = []
kwarg = None
kwargannotation = None
defaults = []
kw_defaults = []
nodes = list(st)
while nodes :
first = nodes.pop(0)
if first.text == "," :
pass
elif first.text == "*" :
star = True
if nodes and nodes[0].text != "," :
vararg, varargannotation = self.do(nodes.pop(0), ctx)
elif first.text == "**" :
kwarg, kwargannotation = self.do(nodes.pop(0), ctx)
else :
n, a = self.do(first, ctx)
arg = self.ST.arg(lineno=first.srow, col_offset=first.scol,
arg=n, annotation=a)
if nodes and nodes[0].text == "=" :
del nodes[0]
d = self.do(nodes.pop(0), ctx)
else :
d = None
if star :
kwonlyargs.append(arg)
kw_defaults.append(d)
else :
args.append(arg)
if d is not None :
defaults.append(d)
return self.ST.arguments(lineno=st.srow, col_offset=st.scol,
args=args,
vararg=vararg,
varargannotation=varargannotation,
kwonlyargs=kwonlyargs,
kwarg=kwarg,
kwargannotation=kwargannotation,
defaults=defaults,
kw_defaults=kw_defaults)
def do_tfpdef (self, st, ctx) :
"""tfpdef: NAME [':' test]
-> str, ast.AST?
<<< def f(x:int) : pass
"Module(body=[FunctionDef(name='f', args=arguments(args=[arg(arg='x', annotation=Name(id='int', ctx=Load()))], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=[Pass()], decorator_list=[], returns=None)])"
<<< def f(x:int, y:float) : pass
"Module(body=[FunctionDef(name='f', args=arguments(args=[arg(arg='x', annotation=Name(id='int', ctx=Load())), arg(arg='y', annotation=Name(id='float', ctx=Load()))], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=[Pass()], decorator_list=[], returns=None)])"
<<< def f(x, y:int) : pass
"Module(body=[FunctionDef(name='f', args=arguments(args=[arg(arg='x', annotation=None), arg(arg='y', annotation=Name(id='int', ctx=Load()))], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=[Pass()], decorator_list=[], returns=None)])"
"""
if len(st) == 1 :
return st[0].text, None
else :
return st[0].text, self.do(st[2], ctx)
def do_varargslist (self, st, ctx) :
"""varargslist: ((vfpdef ['=' test] ',')*
('*' [vfpdef] (',' vfpdef ['=' test])* [',' '**' vfpdef]
| '**' vfpdef)
| vfpdef ['=' test] (',' vfpdef ['=' test])* [','])
-> ast.arguments
<<< lambda x, y : x+y
"Module(body=[Expr(value=Lambda(args=arguments(args=[arg(arg='x', annotation=None), arg(arg='y', annotation=None)], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=BinOp(left=Name(id='x', ctx=Load()), op=Add(), right=Name(id='y', ctx=Load()))))])"
<<< lambda x, y, z=1 : x+y+z
"Module(body=[Expr(value=Lambda(args=arguments(args=[arg(arg='x', annotation=None), arg(arg='y', annotation=None), arg(arg='z', annotation=None)], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[Num(n=1)], kw_defaults=[]), body=BinOp(left=BinOp(left=Name(id='x', ctx=Load()), op=Add(), right=Name(id='y', ctx=Load())), op=Add(), right=Name(id='z', ctx=Load()))))])"
"""
tree = self.do_typedargslist(st, ctx)
tree.st = st
return tree
def do_vfpdef (self, st, ctx) :
"""vfpdef: NAME
-> str, None
Return value (str, None) is chosen for compatibility with
do_tfpdef, so that do_typedargslist can be used for
do_varargslist.
<<< lambda x, y : x+y
"Module(body=[Expr(value=Lambda(args=arguments(args=[arg(arg='x', annotation=None), arg(arg='y', annotation=None)], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=BinOp(left=Name(id='x', ctx=Load()), op=Add(), right=Name(id='y', ctx=Load()))))])"
"""
return st[0].text, None
def do_stmt (self, st, ctx) :
"""stmt: simple_stmt | compound_stmt
-> ast.AST+
<<< pass
'Module(body=[Pass()])'
<<< pass; pass
'Module(body=[Pass(), Pass()])'
<<< with x : y
"Module(body=[With(context_expr=Name(id='x', ctx=Load()), optional_vars=None, body=[Expr(value=Name(id='y', ctx=Load()))])])"
"""
child = self.do(st[0], ctx)
if isinstance(child, self.ST.AST) :
return [child]
else :
return child
def do_simple_stmt (self, st, ctx) :
"""simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE
-> ast.AST+
<<< del x; pass; import spam as egg
"Module(body=[Delete(targets=[Name(id='x', ctx=Del())]), Pass(), Import(names=[alias(name='spam', asname='egg')])])"
"""
return [self.do(child, ctx) for child in st[::2]
if child.kind != self.NEWLINE]
def do_small_stmt (self, st, ctx) :
"""small_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt |
import_stmt | global_stmt | nonlocal_stmt | assert_stmt)
-> ast.AST
<<< x
"Module(body=[Expr(value=Name(id='x', ctx=Load()))])"
<<< del x
"Module(body=[Delete(targets=[Name(id='x', ctx=Del())])])"
<<< pass
'Module(body=[Pass()])'
<<< import egg as spam
"Module(body=[Import(names=[alias(name='egg', asname='spam')])])"
<<< global x
"Module(body=[Global(names=['x'])])"
<<< nonlocal x
"Module(body=[Nonlocal(names=['x'])])"
<<< assert x
"Module(body=[Assert(test=Name(id='x', ctx=Load()), msg=None)])"
"""
return self.do(st[0], ctx)
def do_expr_stmt (self, st, ctx) :
"""expr_stmt: testlist (augassign (yield_expr|testlist) |
('=' (yield_expr|testlist))*)
-> ast.Expr
<<< x = y = 1
"Module(body=[Assign(targets=[Name(id='x', ctx=Store()), Name(id='y', ctx=Store())], value=Num(n=1))])"
<<< x, y = z = 1, 2
"Module(body=[Assign(targets=[Tuple(elts=[Name(id='x', ctx=Store()), Name(id='y', ctx=Store())], ctx=Store()), Name(id='z', ctx=Store())], value=Tuple(elts=[Num(n=1), Num(n=2)], ctx=Load()))])"
<<< x += 1
"Module(body=[AugAssign(target=Name(id='x', ctx=Store()), op=Add(), value=Num(n=1))])"
<<< x += yield 5
"Module(body=[AugAssign(target=Name(id='x', ctx=Store()), op=Add(), value=Yield(value=Num(n=5)))])"
<<< x = y = yield 1, 2
"Module(body=[Assign(targets=[Name(id='x', ctx=Store()), Name(id='y', ctx=Store())], value=Yield(value=Tuple(elts=[Num(n=1), Num(n=2)], ctx=Load())))])"
"""
if len(st) == 1 :
return self.ST.Expr(lineno=st.srow, col_offset=st.scol,
value=self.do(st[0], ctx))
elif st[1].symbol == "augassign" :
target = self.do(st[0], self.ST.Store)
if isinstance(target, self.ST.Tuple) :
raise ParseError(st[0].text, reason="illegal expression for"
" augmented assignment")
return self.ST.AugAssign(lineno=st.srow, col_offset=st.scol,
target=target,
op=self.do(st[1], ctx),
value=self.do(st[2], ctx))
else :
return self.ST.Assign(lineno=st.srow, col_offset=st.scol,
targets=[self.do(child, self.ST.Store)
for child in st[:-1:2]],
value=self.do(st[-1], ctx))
def do_augassign (self, st, ctx) :
"""augassign: ('+=' | '-=' | '*=' | '/=' | '%=' | '&=' | '|=' | '^=' |
'<<=' | '>>=' | '**=' | '//=')
-> ast.AST
<<< x += 1
"Module(body=[AugAssign(target=Name(id='x', ctx=Store()), op=Add(), value=Num(n=1))])"
<<< x -= 1
"Module(body=[AugAssign(target=Name(id='x', ctx=Store()), op=Sub(), value=Num(n=1))])"
<<< x *= 1
"Module(body=[AugAssign(target=Name(id='x', ctx=Store()), op=Mult(), value=Num(n=1))])"
<<< x /= 1
"Module(body=[AugAssign(target=Name(id='x', ctx=Store()), op=Div(), value=Num(n=1))])"
<<< x %= 1
"Module(body=[AugAssign(target=Name(id='x', ctx=Store()), op=Mod(), value=Num(n=1))])"
<<< x &= 1
"Module(body=[AugAssign(target=Name(id='x', ctx=Store()), op=BitAnd(), value=Num(n=1))])"
<<< x |= 1
"Module(body=[AugAssign(target=Name(id='x', ctx=Store()), op=BitOr(), value=Num(n=1))])"
<<< x ^= 1
"Module(body=[AugAssign(target=Name(id='x', ctx=Store()), op=BitXor(), value=Num(n=1))])"
<<< x <<= 1
"Module(body=[AugAssign(target=Name(id='x', ctx=Store()), op=LShift(), value=Num(n=1))])"
<<< x >>= 1
"Module(body=[AugAssign(target=Name(id='x', ctx=Store()), op=RShift(), value=Num(n=1))])"
<<< x **= 1
"Module(body=[AugAssign(target=Name(id='x', ctx=Store()), op=Pow(), value=Num(n=1))])"
<<< x //= 1
"Module(body=[AugAssign(target=Name(id='x', ctx=Store()), op=FloorDiv(), value=Num(n=1))])"
"""
return self._binary[st[0].text[:-1]](lineno=st.srow, col_offset=st.scol)
def do_del_stmt (self, st, ctx) :
"""del_stmt: 'del' exprlist
-> ast.Delete
<<< del x
"Module(body=[Delete(targets=[Name(id='x', ctx=Del())])])"
<<< del x, y
"Module(body=[Delete(targets=[Name(id='x', ctx=Del()), Name(id='y', ctx=Del())])])"
"""
targets = self.do(st[1], ctx=self.ST.Del)
if isinstance(targets, self.ST.Tuple) :
targets = targets.elts
else :
targets = [targets]
return self.ST.Delete(lineno=st.srow, col_offset=st.scol,
targets=targets)
def do_pass_stmt (self, st, ctx) :
"""pass_stmt: 'pass'
-> ast.Pass
<<< pass
'Module(body=[Pass()])'
"""
return self.ST.Pass(lineno=st.srow, col_offset=st.scol)
def do_flow_stmt (self, st, ctx) :
"""flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt
| yield_stmt
-> ast.AST
<<< break
'Module(body=[Break()])'
<<< continue
'Module(body=[Continue()])'
<<< return
'Module(body=[Return(value=None)])'
<<< raise
'Module(body=[Raise(exc=None, cause=None)])'
<<< yield
'Module(body=[Expr(value=Yield(value=None))])'
"""
return self.do(st[0], ctx)
def do_break_stmt (self, st, ctx) :
"""break_stmt: 'break'
-> ast.Break()
<<< break
'Module(body=[Break()])'
"""
return self.ST.Break(lineno=st.srow, col_offset=st.scol)
def do_continue_stmt (self, st, ctx) :
"""continue_stmt: 'continue'
-> ast.Continue
<<< continue
'Module(body=[Continue()])'
"""
return self.ST.Continue(lineno=st.srow, col_offset=st.scol)
def do_return_stmt (self, st, ctx) :
"""return_stmt: 'return' [testlist]
-> ast.Return
<<< return
'Module(body=[Return(value=None)])'
<<< return 1
'Module(body=[Return(value=Num(n=1))])'
<<< return 1, 2
'Module(body=[Return(value=Tuple(elts=[Num(n=1), Num(n=2)], ctx=Load()))])'
"""
if len(st) == 1 :
return self.ST.Return(lineno=st.srow, col_offset=st.scol,
value=None)
else :
return self.ST.Return(lineno=st.srow, col_offset=st.scol,
value=self.do(st[1], ctx))
def do_yield_stmt (self, st, ctx) :
"""yield_stmt: yield_expr
-> ast.Expr
<<< yield
'Module(body=[Expr(value=Yield(value=None))])'
<<< yield 42
'Module(body=[Expr(value=Yield(value=Num(n=42)))])'
"""
return self.ST.Expr(lineno=st.srow, col_offset=st.scol,
value=self.do(st[0], ctx))
def do_raise_stmt (self, st, ctx) :
"""raise_stmt: 'raise' [test ['from' test]]
-> ast.Raise
<<< raise
'Module(body=[Raise(exc=None, cause=None)])'
<<< raise Exception
"Module(body=[Raise(exc=Name(id='Exception', ctx=Load()), cause=None)])"
<<< raise Exception from Exception
"Module(body=[Raise(exc=Name(id='Exception', ctx=Load()), cause=Name(id='Exception', ctx=Load()))])"
"""
count = len(st)
if count == 1 :
return self.ST.Raise(lineno=st.srow, col_offset=st.scol,
exc=None, cause=None)
elif count == 2 :
return self.ST.Raise(lineno=st.srow, col_offset=st.scol,
exc=self.do(st[1], ctx),
cause=None)
else :
return self.ST.Raise(lineno=st.srow, col_offset=st.scol,
exc=self.do(st[1], ctx),
cause=self.do(st[3], ctx))
def do_import_stmt (self, st, ctx) :
"""import_stmt: import_name | import_from
-> ast.AST
<<< import foo
"Module(body=[Import(names=[alias(name='foo', asname=None)])])"
<<< import foo.bar
"Module(body=[Import(names=[alias(name='foo.bar', asname=None)])])"
<<< import foo as bar
"Module(body=[Import(names=[alias(name='foo', asname='bar')])])"
<<< import foo.bar as egg
"Module(body=[Import(names=[alias(name='foo.bar', asname='egg')])])"
<<< import foo as bar, egg as spam
"Module(body=[Import(names=[alias(name='foo', asname='bar'), alias(name='egg', asname='spam')])])"
<<< import foo, bar, egg as spam
"Module(body=[Import(names=[alias(name='foo', asname=None), alias(name='bar', asname=None), alias(name='egg', asname='spam')])])"
"""
return self.do(st[0], ctx)
def do_import_name (self, st, ctx) :
"""import_name: 'import' dotted_as_names
-> ast.Import
<<< import foo
"Module(body=[Import(names=[alias(name='foo', asname=None)])])"
<<< import foo.bar
"Module(body=[Import(names=[alias(name='foo.bar', asname=None)])])"
<<< import foo as bar
"Module(body=[Import(names=[alias(name='foo', asname='bar')])])"
<<< import foo.bar as egg
"Module(body=[Import(names=[alias(name='foo.bar', asname='egg')])])"
<<< import foo as bar, egg as spam
"Module(body=[Import(names=[alias(name='foo', asname='bar'), alias(name='egg', asname='spam')])])"
<<< import foo, bar, egg as spam
"Module(body=[Import(names=[alias(name='foo', asname=None), alias(name='bar', asname=None), alias(name='egg', asname='spam')])])"
"""
return self.ST.Import(lineno=st.srow, col_offset=st.scol,
names=self.do(st[1], ctx))
def do_import_from (self, st, ctx) :
"""import_from: ('from' (('.' | '...')* dotted_name | ('.' | '...')+)
'import' ('*' | '(' import_as_names ')' | import_as_names))
-> ast.ImportFrom
<<< from foo import egg as spam
"Module(body=[ImportFrom(module='foo', names=[alias(name='egg', asname='spam')], level=0)])"
<<< from .foo import egg as spam
"Module(body=[ImportFrom(module='foo', names=[alias(name='egg', asname='spam')], level=1)])"
<<< from ...foo import egg as spam
"Module(body=[ImportFrom(module='foo', names=[alias(name='egg', asname='spam')], level=3)])"
<<< from ...foo.bar import egg as spam
"Module(body=[ImportFrom(module='foo.bar', names=[alias(name='egg', asname='spam')], level=3)])"
<<< from foo import egg as spam, bar as baz
"Module(body=[ImportFrom(module='foo', names=[alias(name='egg', asname='spam'), alias(name='bar', asname='baz')], level=0)])"
<<< from .foo import egg as spam, bar as baz
"Module(body=[ImportFrom(module='foo', names=[alias(name='egg', asname='spam'), alias(name='bar', asname='baz')], level=1)])"
<<< from ...foo import egg as spam, bar as baz
"Module(body=[ImportFrom(module='foo', names=[alias(name='egg', asname='spam'), alias(name='bar', asname='baz')], level=3)])"
<<< from ...foo.bar import egg as spam, baz
"Module(body=[ImportFrom(module='foo.bar', names=[alias(name='egg', asname='spam'), alias(name='baz', asname=None)], level=3)])"
<<< from foo import *
"Module(body=[ImportFrom(module='foo', names=[alias(name='*', asname=None)], level=0)])"
<<< from .foo import *
"Module(body=[ImportFrom(module='foo', names=[alias(name='*', asname=None)], level=1)])"
<<< from ...foo import *
"Module(body=[ImportFrom(module='foo', names=[alias(name='*', asname=None)], level=3)])"
<<< from ...foo.bar import *
"Module(body=[ImportFrom(module='foo.bar', names=[alias(name='*', asname=None)], level=3)])"
"""
level = 0
next = 1
for i, child in enumerate(st[1:]) :
text = child.text
if text not in (".", "...") :
next = i+1
break
level += len(text)
if text == "import" :
module = ""
next += 1
else :
module = self.do(st[next], ctx)
next += 2
text = st[next].text
if text == "*" :
names = [self.ST.alias(lineno=st[next].srow,
col_offset=st[next].scol,
name="*", asname=None)]
elif text == "(" :
names = self.do(st[next+1], ctx)
else :
names = self.do(st[next], ctx)
return self.ST.ImportFrom(lineno=st.srow, col_offset=st.scol,
module=module, names=names, level=level)
def do_import_as_name (self, st, ctx) :
"""import_as_name: NAME ['as' NAME]
-> ast.alias
<<< from foo import egg as spam
"Module(body=[ImportFrom(module='foo', names=[alias(name='egg', asname='spam')], level=0)])"
"""
if len(st) == 1 :
return self.ST.alias(lineno=st.srow, col_offset=st.scol,
name=st[0].text,
asname=None)
else :
return self.ST.alias(lineno=st.srow, col_offset=st.scol,
name=st[0].text,
asname=st[2].text)
def do_dotted_as_name (self, st, ctx) :
"""dotted_as_name: dotted_name ['as' NAME]
-> ast.alias
<<< import foo.bar as egg
"Module(body=[Import(names=[alias(name='foo.bar', asname='egg')])])"
"""
if len(st) == 1 :
return self.ST.alias(lineno=st.srow, col_offset=st.scol,
name=self.do(st[0], ctx),
asname=None)
else :
return self.ST.alias(lineno=st.srow, col_offset=st.scol,
name=self.do(st[0], ctx),
asname=st[2].text)
def do_import_as_names (self, st, ctx) :
"""import_as_names: import_as_name (',' import_as_name)* [',']
-> ast.alias+
<<< from foo import egg as spam
"Module(body=[ImportFrom(module='foo', names=[alias(name='egg', asname='spam')], level=0)])"
<<< from foo import egg as spam, bar as baz
"Module(body=[ImportFrom(module='foo', names=[alias(name='egg', asname='spam'), alias(name='bar', asname='baz')], level=0)])"
"""
return [self.do(child, ctx) for child in st[::2]]
def do_dotted_as_names (self, st, ctx) :
"""dotted_as_names: dotted_as_name (',' dotted_as_name)*
-> ast.alias+
<<< import foo.bar, egg.spam as baz
"Module(body=[Import(names=[alias(name='foo.bar', asname=None), alias(name='egg.spam', asname='baz')])])"
"""
return [self.do(child, ctx) for child in st[::2]]
def do_dotted_name (self, st, ctx) :
"""dotted_name: NAME ('.' NAME)*
-> str
<<< import foo.bar
"Module(body=[Import(names=[alias(name='foo.bar', asname=None)])])"
"""
return ".".join(child.text for child in st[::2])
def do_global_stmt (self, st, ctx) :
"""global_stmt: 'global' NAME (',' NAME)*
-> ast.Global
<<< global x
"Module(body=[Global(names=['x'])])"
<<< global x, y
"Module(body=[Global(names=['x', 'y'])])"
"""
return self.ST.Global(lineno=st.srow, col_offset=st.scol,
names=[child.text for child in st[1::2]])
def do_nonlocal_stmt (self, st, ctx) :
"""nonlocal_stmt: 'nonlocal' NAME (',' NAME)*
-> ast.Nonlocal
<<< nonlocal x
"Module(body=[Nonlocal(names=['x'])])"
<<< nonlocal x, y
"Module(body=[Nonlocal(names=['x', 'y'])])"
"""
return self.ST.Nonlocal(lineno=st.srow, col_offset=st.scol,
names=[child.text for child in st[1::2]])
def do_assert_stmt (self, st, ctx) :
"""assert_stmt: 'assert' test [',' test]
-> ast.Assert
<<< assert x
"Module(body=[Assert(test=Name(id='x', ctx=Load()), msg=None)])"
<<< assert x, y
"Module(body=[Assert(test=Name(id='x', ctx=Load()), msg=Name(id='y', ctx=Load()))])"
"""
if len(st) == 2 :
return self.ST.Assert(lineno=st.srow, col_offset=st.scol,
test=self.do(st[1], ctx),
msg=None)
else :
return self.ST.Assert(lineno=st.srow, col_offset=st.scol,
test=self.do(st[1], ctx),
msg=self.do(st[3], ctx))
def do_compound_stmt (self, st, ctx) :
"""compound_stmt: (if_stmt | while_stmt | for_stmt | try_stmt
| with_stmt | funcdef | classdef | decorated)
-> ast.AST
<<< with x : pass
"Module(body=[With(context_expr=Name(id='x', ctx=Load()), optional_vars=None, body=[Pass()])])"
<<< if x : pass
"Module(body=[If(test=Name(id='x', ctx=Load()), body=[Pass()], orelse=[])])"
<<< while x : pass
"Module(body=[While(test=Name(id='x', ctx=Load()), body=[Pass()], orelse=[])])"
<<< for x in l : pass
"Module(body=[For(target=Name(id='x', ctx=Store()), iter=Name(id='l', ctx=Load()), body=[Pass()], orelse=[])])"
<<< try : pass
... except : pass
'Module(body=[TryExcept(body=[Pass()], handlers=[ExceptHandler(type=None, name=None, body=[Pass()])], orelse=[])])'
<<< def f () : pass
"Module(body=[FunctionDef(name='f', args=arguments(args=[], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=[Pass()], decorator_list=[], returns=None)])"
<<< class c () : pass
"Module(body=[ClassDef(name='c', bases=[], keywords=[], starargs=None, kwargs=None, body=[Pass()], decorator_list=[])])"
<<< @foo
... def f () : pass
"Module(body=[FunctionDef(name='f', args=arguments(args=[], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=[Pass()], decorator_list=[Name(id='foo', ctx=Load())], returns=None)])"
"""
return self.do(st[0], ctx)
def do_if_stmt (self, st, ctx) :
"""if_stmt: 'if' test ':' suite ('elif' test ':' suite)*
['else' ':' suite]
-> ast.If
<<< if x == 1 : pass
"Module(body=[If(test=Compare(left=Name(id='x', ctx=Load()), ops=[Eq()], comparators=[Num(n=1)]), body=[Pass()], orelse=[])])"
<<< if x == 1 : pass
... else : pass
"Module(body=[If(test=Compare(left=Name(id='x', ctx=Load()), ops=[Eq()], comparators=[Num(n=1)]), body=[Pass()], orelse=[Pass()])])"
<<< if x == 1 : pass
... elif y == 2 : pass
... elif z == 3 : pass
... else : pass
"Module(body=[If(test=Compare(left=Name(id='x', ctx=Load()), ops=[Eq()], comparators=[Num(n=1)]), body=[Pass()], orelse=[If(test=Compare(left=Name(id='y', ctx=Load()), ops=[Eq()], comparators=[Num(n=2)]), body=[Pass()], orelse=[If(test=Compare(left=Name(id='z', ctx=Load()), ops=[Eq()], comparators=[Num(n=3)]), body=[Pass()], orelse=[Pass()])])])])"
"""
nodes = list(st)
first = None
last = None
while nodes :
if len(nodes) == 3 :
last.orelse.extend(self.do(nodes[2], ctx))
del nodes[:3]
else :
next = self.ST.If(lineno=nodes[0].srow,
col_offset=nodes[0].scol,
test=self.do(nodes[1], ctx),
body=self.do(nodes[3], ctx),
orelse=[])
if first is None :
first = next
if last is not None :
last.orelse.append(next)
last = next
del nodes[:4]
return first
def do_while_stmt (self, st, ctx) :
"""while_stmt: 'while' test ':' suite ['else' ':' suite]
-> ast.While
<<< while x : pass
"Module(body=[While(test=Name(id='x', ctx=Load()), body=[Pass()], orelse=[])])"
<<< while x :
... pass
... pass
"Module(body=[While(test=Name(id='x', ctx=Load()), body=[Pass(), Pass()], orelse=[])])"
<<< while x :
... pass
... else :
... pass
"Module(body=[While(test=Name(id='x', ctx=Load()), body=[Pass()], orelse=[Pass()])])"
"""
if len(st) == 4 :
return self.ST.While(lineno=st.srow, col_offset=st.scol,
test=self.do(st[1], ctx),
body=self.do(st[3], ctx),
orelse=[])
else :
return self.ST.While(lineno=st.srow, col_offset=st.scol,
test=self.do(st[1], ctx),
body=self.do(st[3], ctx),
orelse=self.do(st[6], ctx))
def do_for_stmt (self, st, ctx) :
"""for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]
-> ast.For
<<< for x in l : pass
"Module(body=[For(target=Name(id='x', ctx=Store()), iter=Name(id='l', ctx=Load()), body=[Pass()], orelse=[])])"
<<< for x in l :
... pass
... pass
"Module(body=[For(target=Name(id='x', ctx=Store()), iter=Name(id='l', ctx=Load()), body=[Pass(), Pass()], orelse=[])])"
<<< for x in l :
... pass
... else :
... pass
"Module(body=[For(target=Name(id='x', ctx=Store()), iter=Name(id='l', ctx=Load()), body=[Pass()], orelse=[Pass()])])"
<<< for x, y in l : pass
"Module(body=[For(target=Tuple(elts=[Name(id='x', ctx=Store()), Name(id='y', ctx=Store())], ctx=Store()), iter=Name(id='l', ctx=Load()), body=[Pass()], orelse=[])])"
<<< for x in a, b : pass
"Module(body=[For(target=Name(id='x', ctx=Store()), iter=Tuple(elts=[Name(id='a', ctx=Load()), Name(id='b', ctx=Load())], ctx=Load()), body=[Pass()], orelse=[])])"
"""
if len(st) == 6 :
return self.ST.For(lineno=st.srow, col_offset=st.scol,
target=self.do(st[1], ast.Store),
iter=self.do(st[3], ctx),
body=self.do(st[5], ctx),
orelse=[])
else :
return self.ST.For(lineno=st.srow, col_offset=st.scol,
target=self.do(st[1], ast.Store),
iter=self.do(st[3], ctx),
body=self.do(st[5], ctx),
orelse=self.do(st[8], ctx))
def do_try_stmt (self, st, ctx) :
"""try_stmt: ('try' ':' suite
((except_clause ':' suite)+
['else' ':' suite]
['finally' ':' suite] |
'finally' ':' suite))
-> ast.TryExcept | ast.TryFinally
<<< try : pass
... except : pass
'Module(body=[TryExcept(body=[Pass()], handlers=[ExceptHandler(type=None, name=None, body=[Pass()])], orelse=[])])'
<<< try : pass
... except : pass
... else : pass
... finally : pass
'Module(body=[TryFinally(body=[TryExcept(body=[Pass()], handlers=[ExceptHandler(type=None, name=None, body=[Pass()])], orelse=[Pass()])], finalbody=[Pass()])])'
<<< try : pass
... except TypeError : pass
... except ValueError : pass
... except : pass
"Module(body=[TryExcept(body=[Pass()], handlers=[ExceptHandler(type=Name(id='TypeError', ctx=Load()), name=None, body=[Pass()]), ExceptHandler(type=Name(id='ValueError', ctx=Load()), name=None, body=[Pass()]), ExceptHandler(type=None, name=None, body=[Pass()])], orelse=[])])"
"""
handlers = []
finalbody = None
orelse = []
nodes = st[3:]
while nodes :
if nodes[0].text == "else" :
orelse.extend(self.do(nodes[2], ctx))
elif nodes[0].text == "finally" :
finalbody = self.do(nodes[2], ctx)
else :
t, n = self.do(nodes[0], ctx)
handlers.append(self.ST.ExceptHandler(lineno=nodes[0].srow,
col_offset=nodes[0].scol,
type=t, name=n,
body=self.do(nodes[2], ctx)))
del nodes[:3]
stmt = self.ST.TryExcept(lineno=st.srow, col_offset=st.scol,
body=self.do(st[2], ctx),
handlers=handlers,
orelse=orelse)
if finalbody is None :
return stmt
else :
return self.ST.TryFinally(lineno=st.srow, col_offset=st.scol,
body=[stmt], finalbody=finalbody)
def do_with_stmt (self, st, ctx) :
"""with_stmt: 'with' test [ with_var ] ':' suite
-> ast.With
<<< with x : pass
"Module(body=[With(context_expr=Name(id='x', ctx=Load()), optional_vars=None, body=[Pass()])])"
<<< with x as y : pass
"Module(body=[With(context_expr=Name(id='x', ctx=Load()), optional_vars=Name(id='y', ctx=Store()), body=[Pass()])])"
<<< with x :
... pass
"Module(body=[With(context_expr=Name(id='x', ctx=Load()), optional_vars=None, body=[Pass()])])"
<<< with x as y :
... pass
"Module(body=[With(context_expr=Name(id='x', ctx=Load()), optional_vars=Name(id='y', ctx=Store()), body=[Pass()])])"
"""
if len(st) == 5 :
return self.ST.With(lineno=st.srow, col_offset=st.scol,
context_expr=self.do(st[1], ctx),
optional_vars=self.do(st[2], self.ST.Store),
body=self.do(st[4], ctx))
else :
return self.ST.With(lineno=st.srow, col_offset=st.scol,
context_expr=self.do(st[1], ctx),
optional_vars=None,
body=self.do(st[3], ctx))
def do_with_var (self, st, ctx) :
"""with_var: 'as' expr
-> ast.Name
<<< with x as y : pass
"Module(body=[With(context_expr=Name(id='x', ctx=Load()), optional_vars=Name(id='y', ctx=Store()), body=[Pass()])])"
"""
return self.do(st[1], ctx)
def do_except_clause (self, st, ctx) :
"""except_clause: 'except' [test ['as' NAME]]
-> ast.AST?, ast.Name?
<<< try : pass
... except NameError : pass
... except TypeError as err : pass
... except : pass
"Module(body=[TryExcept(body=[Pass()], handlers=[ExceptHandler(type=Name(id='NameError', ctx=Load()), name=None, body=[Pass()]), ExceptHandler(type=Name(id='TypeError', ctx=Load()), name=Name(id='err', ctx=Store()), body=[Pass()]), ExceptHandler(type=None, name=None, body=[Pass()])], orelse=[])])"
"""
if len(st) == 1 :
return None, None
elif len(st) == 2 :
return self.do(st[1], ctx), None
else :
return self.do(st[1], ctx), self.ST.Name(lineno=st[3].srow,
col_offset=st[3].scol,
id=st[3].text,
ctx=self.ST.Store())
def do_suite (self, st, ctx) :
"""suite: simple_stmt | NEWLINE INDENT stmt+ DEDENT
-> ast.AST+
<<< with x : pass
"Module(body=[With(context_expr=Name(id='x', ctx=Load()), optional_vars=None, body=[Pass()])])"
<<< with x :
... pass
"Module(body=[With(context_expr=Name(id='x', ctx=Load()), optional_vars=None, body=[Pass()])])"
<<< with x :
... pass
... pass
"Module(body=[With(context_expr=Name(id='x', ctx=Load()), optional_vars=None, body=[Pass(), Pass()])])"
"""
if len(st) == 1 :
return self.do(st[0], ctx)
else :
return reduce(operator.add,
(self.do(child, ctx) for child in st[2:-1]),
[])
def do_test (self, st, ctx) :
"""test: or_test ['if' or_test 'else' test] | lambdef
-> ast.AST
<<< 3
'Module(body=[Expr(value=Num(n=3))])'
<<< 3 if x else 4
"Module(body=[Expr(value=IfExp(test=Name(id='x', ctx=Load()), body=Num(n=3), orelse=Num(n=4)))])"
<<< lambda x : x+1
"Module(body=[Expr(value=Lambda(args=arguments(args=[arg(arg='x', annotation=None)], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=BinOp(left=Name(id='x', ctx=Load()), op=Add(), right=Num(n=1))))])"
"""
if len(st) == 1 :
return self.do(st[0], ctx)
else :
return self.ST.IfExp(lineno=st.srow, col_offset=st.scol,
test=self.do(st[2], ctx),
body=self.do(st[0], ctx),
orelse=self.do(st[4], ctx))
def do_test_nocond (self, st, ctx) :
"""test_nocond: or_test | lambdef_nocond
-> ast.AST
<<< [x for x in (lambda: True, lambda: False) if x()]
"Module(body=[Expr(value=ListComp(elt=Name(id='x', ctx=Load()), generators=[comprehension(target=Name(id='x', ctx=Store()), iter=Tuple(elts=[Lambda(args=arguments(args=[], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=Name(id='True', ctx=Load())), Lambda(args=arguments(args=[], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=Name(id='False', ctx=Load()))], ctx=Load()), ifs=[Call(func=Name(id='x', ctx=Load()), args=[], keywords=[], starargs=None, kwargs=None)])]))])"
"""
return self.do(st[0], ctx)
def do_lambdef (self, st, ctx) :
"""lambdef: 'lambda' [varargslist] ':' test
-> ast.Lambda
<<< lambda : True
"Module(body=[Expr(value=Lambda(args=arguments(args=[], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=Name(id='True', ctx=Load())))])"
<<< lambda x : x+1
"Module(body=[Expr(value=Lambda(args=arguments(args=[arg(arg='x', annotation=None)], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=BinOp(left=Name(id='x', ctx=Load()), op=Add(), right=Num(n=1))))])"
"""
if len(st) == 3 :
return self.ST.Lambda(lineno=st.srow, col_offset=st.scol,
args=self.ST.arguments(lineno=st.srow,
col_offset=st.scol,
args=[], vararg=None,
varargannotation=None,
kwonlyargs=[], kwarg=None,
kwargannotation=None,
defaults=[], kw_defaults=[]),
body=self.do(st[-1], ctx))
else :
return self.ST.Lambda(lineno=st.srow, col_offset=st.scol,
args=self.do(st[1], ctx),
body=self.do(st[-1], ctx))
def do_lambdef_nocond (self, st, ctx) :
"""lambdef_nocond: 'lambda' [varargslist] ':' test_nocond
-> ast.Lambda
<<< [x for x in l if lambda y : x]
"Module(body=[Expr(value=ListComp(elt=Name(id='x', ctx=Load()), generators=[comprehension(target=Name(id='x', ctx=Store()), iter=Name(id='l', ctx=Load()), ifs=[Lambda(args=arguments(args=[arg(arg='y', annotation=None)], vararg=None, varargannotation=None, kwonlyargs=[], kwarg=None, kwargannotation=None, defaults=[], kw_defaults=[]), body=Name(id='x', ctx=Load()))])]))])"
"""
tree = self.do_lambdef(st, ctx)
tree.st = st
return tree
def do_or_test (self, st, ctx) :
"""or_test: and_test ('or' and_test)*
-> ast.AST
<<< x or y
"Module(body=[Expr(value=BoolOp(op=Or(), values=[Name(id='x', ctx=Load()), Name(id='y', ctx=Load())]))])"
<<< x or y or z
"Module(body=[Expr(value=BoolOp(op=Or(), values=[Name(id='x', ctx=Load()), Name(id='y', ctx=Load()), Name(id='z', ctx=Load())]))])"
"""
return self._do_boolean(st, ctx)
def do_and_test (self, st, ctx) :
"""and_test: not_test ('and' not_test)*
-> ast.AST
<<< x and y
"Module(body=[Expr(value=BoolOp(op=And(), values=[Name(id='x', ctx=Load()), Name(id='y', ctx=Load())]))])"
<<< x and y and z
"Module(body=[Expr(value=BoolOp(op=And(), values=[Name(id='x', ctx=Load()), Name(id='y', ctx=Load()), Name(id='z', ctx=Load())]))])"
"""
return self._do_boolean(st, ctx)
def do_not_test (self, st, ctx) :
"""not_test: 'not' not_test | comparison
-> ast.AST
<<< not x
"Module(body=[Expr(value=UnaryOp(op=Not(), operand=Name(id='x', ctx=Load())))])"
<<< not not x
"Module(body=[Expr(value=UnaryOp(op=Not(), operand=UnaryOp(op=Not(), operand=Name(id='x', ctx=Load()))))])"
"""
return self._do_unary(st, ctx)
def do_comparison (self, st, ctx) :
"""comparison: star_expr (comp_op star_expr)*
-> ast.AST
<<< *x < *y <= *z
"Module(body=[Expr(value=Compare(left=Starred(value=Name(id='x', ctx=Load()), ctx=Load()), ops=[Lt(), LtE()], comparators=[Starred(value=Name(id='y', ctx=Load()), ctx=Load()), Starred(value=Name(id='z', ctx=Load()), ctx=Load())]))])"
<<< x < y <= z
"Module(body=[Expr(value=Compare(left=Name(id='x', ctx=Load()), ops=[Lt(), LtE()], comparators=[Name(id='y', ctx=Load()), Name(id='z', ctx=Load())]))])"
"""
if len(st) == 1 :
return self.do(st[0], ctx)
else :
            return self.ST.Compare(lineno=st.srow, col_offset=st.scol,
                                   left=self.do(st[0], ctx),
                                   ops=[self.do(child, ctx)
                                        for child in st[1::2]],
                                   comparators=[self.do(child, ctx)
                                                for child in st[2::2]])
_comparison = {"<" : ast.Lt,
">" : ast.Gt,
"==" : ast.Eq,
">=" : ast.GtE,
"<=" : ast.LtE,
"!=" : ast.NotEq,
"<>" : ast.NotEq,
"in" : ast.In,
"not in" : ast.NotIn,
"is" : ast.Is,
"is not" : ast.IsNot}
def do_comp_op (self, st, ctx) :
"""comp_op: '<'|'>'|'=='|'>='|'<='|'!='|'<>'|'in'|'not' 'in'|'is'
|'is' 'not'
-> ast.cmpop
<<< 1 < 2
'Module(body=[Expr(value=Compare(left=Num(n=1), ops=[Lt()], comparators=[Num(n=2)]))])'
<<< 1 > 2
'Module(body=[Expr(value=Compare(left=Num(n=1), ops=[Gt()], comparators=[Num(n=2)]))])'
<<< 1 == 2
'Module(body=[Expr(value=Compare(left=Num(n=1), ops=[Eq()], comparators=[Num(n=2)]))])'
<<< 1 >= 2
'Module(body=[Expr(value=Compare(left=Num(n=1), ops=[GtE()], comparators=[Num(n=2)]))])'
<<< 1 <= 2
'Module(body=[Expr(value=Compare(left=Num(n=1), ops=[LtE()], comparators=[Num(n=2)]))])'
<<< 1 != 2
'Module(body=[Expr(value=Compare(left=Num(n=1), ops=[NotEq()], comparators=[Num(n=2)]))])'
<<< 1 <> 2
'Module(body=[Expr(value=Compare(left=Num(n=1), ops=[NotEq()], comparators=[Num(n=2)]))])'
<<< 1 in 2
'Module(body=[Expr(value=Compare(left=Num(n=1), ops=[In()], comparators=[Num(n=2)]))])'
<<< 1 not in 2
'Module(body=[Expr(value=Compare(left=Num(n=1), ops=[NotIn()], comparators=[Num(n=2)]))])'
<<< 1 is 2
'Module(body=[Expr(value=Compare(left=Num(n=1), ops=[Is()], comparators=[Num(n=2)]))])'
<<< 1 is not 2
'Module(body=[Expr(value=Compare(left=Num(n=1), ops=[IsNot()], comparators=[Num(n=2)]))])'
"""
text = " ".join(child.text for child in st)
return self._comparison[text](lineno=st.srow, col_offset=st.scol)
def do_star_expr (self, st, ctx) :
"""star_expr: ['*'] expr
-> ast.AST
<<< x
"Module(body=[Expr(value=Name(id='x', ctx=Load()))])"
<<< *x
"Module(body=[Expr(value=Starred(value=Name(id='x', ctx=Load()), ctx=Load()))])"
"""
if len(st) == 1 :
return self.do(st[0], ctx)
else :
return self.ST.Starred(lineno=st.srow, col_offset=st.scol,
value=self.do(st[1], ctx),
ctx=ctx())
def do_expr (self, st, ctx) :
"""expr: xor_expr ('|' xor_expr)*
-> ast.AST
<<< 1
'Module(body=[Expr(value=Num(n=1))])'
<<< 1 | 2
'Module(body=[Expr(value=BinOp(left=Num(n=1), op=BitOr(), right=Num(n=2)))])'
<<< 1 | 2 | 3
'Module(body=[Expr(value=BinOp(left=BinOp(left=Num(n=1), op=BitOr(), right=Num(n=2)), op=BitOr(), right=Num(n=3)))])'
"""
return self._do_binary(st, ctx)
def do_xor_expr (self, st, ctx) :
"""xor_expr: and_expr ('^' and_expr)*
-> ast.AST
<<< 1
'Module(body=[Expr(value=Num(n=1))])'
<<< 1 ^ 2
'Module(body=[Expr(value=BinOp(left=Num(n=1), op=BitXor(), right=Num(n=2)))])'
<<< 1 ^ 2 ^ 3
'Module(body=[Expr(value=BinOp(left=BinOp(left=Num(n=1), op=BitXor(), right=Num(n=2)), op=BitXor(), right=Num(n=3)))])'
"""
return self._do_binary(st, ctx)
def do_and_expr (self, st, ctx) :
"""and_expr: shift_expr ('&' shift_expr)*
-> ast.AST
<<< 1
'Module(body=[Expr(value=Num(n=1))])'
<<< 1 & 2
'Module(body=[Expr(value=BinOp(left=Num(n=1), op=BitAnd(), right=Num(n=2)))])'
<<< 1 & 2 & 3
'Module(body=[Expr(value=BinOp(left=BinOp(left=Num(n=1), op=BitAnd(), right=Num(n=2)), op=BitAnd(), right=Num(n=3)))])'
"""
return self._do_binary(st, ctx)
def do_shift_expr (self, st, ctx) :
"""shift_expr: arith_expr (('<<'|'>>') arith_expr)*
-> ast.AST
<<< 1
'Module(body=[Expr(value=Num(n=1))])'
<<< 1 << 2
'Module(body=[Expr(value=BinOp(left=Num(n=1), op=LShift(), right=Num(n=2)))])'
<<< 1 << 2 >> 3
'Module(body=[Expr(value=BinOp(left=BinOp(left=Num(n=1), op=LShift(), right=Num(n=2)), op=RShift(), right=Num(n=3)))])'
"""
return self._do_binary(st, ctx)
def do_arith_expr (self, st, ctx) :
"""arith_expr: term (('+'|'-') term)*
-> ast.AST
<<< 1
'Module(body=[Expr(value=Num(n=1))])'
<<< 1 + 2
'Module(body=[Expr(value=BinOp(left=Num(n=1), op=Add(), right=Num(n=2)))])'
<<< 1 + 2 - 3
'Module(body=[Expr(value=BinOp(left=BinOp(left=Num(n=1), op=Add(), right=Num(n=2)), op=Sub(), right=Num(n=3)))])'
"""
return self._do_binary(st, ctx)
def do_term (self, st, ctx) :
"""term: factor (('*'|'/'|'%'|'//') factor)*
-> ast.AST
<<< 1
'Module(body=[Expr(value=Num(n=1))])'
<<< 1 * 2
'Module(body=[Expr(value=BinOp(left=Num(n=1), op=Mult(), right=Num(n=2)))])'
<<< 1 * 2 / 3
'Module(body=[Expr(value=BinOp(left=BinOp(left=Num(n=1), op=Mult(), right=Num(n=2)), op=Div(), right=Num(n=3)))])'
<<< 1 * 2 / 3 % 4
'Module(body=[Expr(value=BinOp(left=BinOp(left=BinOp(left=Num(n=1), op=Mult(), right=Num(n=2)), op=Div(), right=Num(n=3)), op=Mod(), right=Num(n=4)))])'
<<< 1 * 2 / 3 % 4 // 5
'Module(body=[Expr(value=BinOp(left=BinOp(left=BinOp(left=BinOp(left=Num(n=1), op=Mult(), right=Num(n=2)), op=Div(), right=Num(n=3)), op=Mod(), right=Num(n=4)), op=FloorDiv(), right=Num(n=5)))])'
"""
return self._do_binary(st, ctx)
def do_factor (self, st, ctx) :
"""factor: ('+'|'-'|'~') factor | power
-> ast.AST
<<< 1
'Module(body=[Expr(value=Num(n=1))])'
<<< +1
'Module(body=[Expr(value=UnaryOp(op=UAdd(), operand=Num(n=1)))])'
<<< -1
'Module(body=[Expr(value=Num(n=-1))])'
<<< ~1
'Module(body=[Expr(value=UnaryOp(op=Invert(), operand=Num(n=1)))])'
<<< +-1
'Module(body=[Expr(value=UnaryOp(op=UAdd(), operand=Num(n=-1)))])'
<<< -+1
'Module(body=[Expr(value=UnaryOp(op=USub(), operand=UnaryOp(op=UAdd(), operand=Num(n=1))))])'
<<< +-~1
'Module(body=[Expr(value=UnaryOp(op=UAdd(), operand=UnaryOp(op=USub(), operand=UnaryOp(op=Invert(), operand=Num(n=1)))))])'
"""
if len(st) == 1 :
return self.do(st[0], ctx)
else :
tree = self._do_unary(st, ctx)
            if (isinstance(tree.op, self.ST.USub)
                and isinstance(tree.operand, self.ST.Num)
                and not isinstance(tree.operand.n, complex)
                and tree.operand.n > 0) :
tree = self.ST.Num(lineno=st.srow, col_offset=st.scol,
n = -tree.operand.n)
return tree
def do_power (self, st, ctx) :
"""power: atom trailer* ['**' factor]
-> ast.AST
<<< 1 ** 2
'Module(body=[Expr(value=BinOp(left=Num(n=1), op=Pow(), right=Num(n=2)))])'
<<< a.b ** 2
"Module(body=[Expr(value=BinOp(left=Attribute(value=Name(id='a', ctx=Load()), attr='b', ctx=Load()), op=Pow(), right=Num(n=2)))])"
"""
if len(st) == 1 :
return self.do(st[0], ctx)
else :
left = self.do(st[0], ctx)
power = None
for child in st[1:] :
if child.text == "**" :
power = self.do(st[-1], ctx)
break
trailer = self.do(child, ctx)
left = trailer(left, st.srow, st.scol)
if power :
return self.ST.BinOp(lineno=st.srow, col_offset=st.scol,
left=left,
op=self.ST.Pow(lineno=st[-2].srow,
col_offset=st[-2].scol),
right=power)
else :
return left
def do_atom (self, st, ctx) :
"""atom: ('(' [yield_expr|testlist_comp] ')' |
'[' [testlist_comp] ']' |
'{' [dictorsetmaker] '}' |
NAME | NUMBER | STRING+ | '...' | 'None' | 'True' | 'False')
-> ast.AST
<<< foo
"Module(body=[Expr(value=Name(id='foo', ctx=Load()))])"
<<< True
"Module(body=[Expr(value=Name(id='True', ctx=Load()))])"
<<< 42
'Module(body=[Expr(value=Num(n=42))])'
<<< 1.5
'Module(body=[Expr(value=Num(n=1.5))])'
<<< 'hello'
"Module(body=[Expr(value=Str(s='hello'))])"
<<< 'hello' 'world'
"Module(body=[Expr(value=Str(s='helloworld'))])"
<<< '''hello
... world'''
"Module(body=[Expr(value=Str(s='hello\\\\nworld'))])"
<<< [1, 2, 3]
'Module(body=[Expr(value=List(elts=[Num(n=1), Num(n=2), Num(n=3)], ctx=Load()))])'
<<< [x for x in l]
"Module(body=[Expr(value=ListComp(elt=Name(id='x', ctx=Load()), generators=[comprehension(target=Name(id='x', ctx=Store()), iter=Name(id='l', ctx=Load()), ifs=[])]))])"
<<< (1, 2, 3)
'Module(body=[Expr(value=Tuple(elts=[Num(n=1), Num(n=2), Num(n=3)], ctx=Load()))])'
<<< (1,)
'Module(body=[Expr(value=Tuple(elts=[Num(n=1)], ctx=Load()))])'
<<< (1)
'Module(body=[Expr(value=Num(n=1))])'
<<< (x for x in l)
"Module(body=[Expr(value=GeneratorExp(elt=Name(id='x', ctx=Load()), generators=[comprehension(target=Name(id='x', ctx=Store()), iter=Name(id='l', ctx=Load()), ifs=[])]))])"
<<< {1, 2, 3}
'Module(body=[Expr(value=Set(elts=[Num(n=1), Num(n=2), Num(n=3)]))])'
<<< {x for x in l}
"Module(body=[Expr(value=SetComp(elt=Name(id='x', ctx=Load()), generators=[comprehension(target=Name(id='x', ctx=Store()), iter=Name(id='l', ctx=Load()), ifs=[])]))])"
<<< {1:2, 3:4}
'Module(body=[Expr(value=Dict(keys=[Num(n=1), Num(n=3)], values=[Num(n=2), Num(n=4)]))])'
<<< {x:y for x, y in l}
"Module(body=[Expr(value=DictComp(key=Name(id='x', ctx=Load()), value=Name(id='y', ctx=Load()), generators=[comprehension(target=Tuple(elts=[Name(id='x', ctx=Store()), Name(id='y', ctx=Store())], ctx=Store()), iter=Name(id='l', ctx=Load()), ifs=[])]))])"
"""
kind, text = st[0].kind, st[0].text
if kind == self.NUMBER :
return self.ST.Num(lineno=st.srow, col_offset=st.scol,
n=self.ST.literal_eval(text))
elif kind == self.NAME :
return self.ST.Name(lineno=st.srow, col_offset=st.scol,
id=text, ctx=ctx())
elif kind == self.STRING :
return self.ST.Str(lineno=st.srow, col_offset=st.scol,
s="".join(self.ST.literal_eval(child.text)
for child in st))
elif text == "..." :
return self.ST.Ellipsis(lineno=st.srow, col_offset=st.scol)
elif text == "[" :
if len(st) == 2 :
return self.ST.List(lineno=st.srow, col_offset=st.scol,
elts=[], ctx=ctx())
else :
loop, elts, atom = self.do(st[1], ctx)
                if atom is not None :
                    elts = [atom]
if loop is None :
return self.ST.List(lineno=st.srow, col_offset=st.scol,
elts=elts, ctx=ctx())
else :
return self.ST.ListComp(lineno=st.srow, col_offset=st.scol,
elt=loop, generators=elts)
elif text == "(" :
if len(st) == 2 :
return self.ST.Tuple(lineno=st.srow, col_offset=st.scol,
elts=[], ctx=ctx())
elif st[1].symbol == "yield_expr" :
return self.do(st[1], ctx)
else :
loop, elts, atom = self.do(st[1], ctx)
if atom is not None :
return atom
elif loop is None :
return self.ST.Tuple(lineno=st.srow, col_offset=st.scol,
elts=elts, ctx=ctx())
else :
return self.ST.GeneratorExp(lineno=st.srow, col_offset=st.scol,
elt=loop, generators=elts)
else : # text == "{"
if len(st) == 2 :
return self.ST.Dict(lineno=st.srow, col_offset=st.scol,
keys=[], values=[])
else :
return self.do(st[1], ctx)
def do_testlist_comp (self, st, ctx) :
"""testlist_comp: test ( comp_for | (',' test)* [','] )
        -> ast.AST?, ast.AST*, ast.AST?
<<< [1, 2, 3]
'Module(body=[Expr(value=List(elts=[Num(n=1), Num(n=2), Num(n=3)], ctx=Load()))])'
<<< [x for x in l]
"Module(body=[Expr(value=ListComp(elt=Name(id='x', ctx=Load()), generators=[comprehension(target=Name(id='x', ctx=Store()), iter=Name(id='l', ctx=Load()), ifs=[])]))])"
"""
if len(st) == 1 :
            return None, None, self.do(st[0], ctx)
elif st[1].text == "," :
return None, [self.do(child, ctx) for child in st[::2]], None
else :
return self.do(st[0], ctx), self.do(st[1], ctx)[0], None
def do_trailer (self, st, ctx) :
"""trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
-> (tree, line, column -> ast.AST)
<<< a.b
"Module(body=[Expr(value=Attribute(value=Name(id='a', ctx=Load()), attr='b', ctx=Load()))])"
<<< a.b.c
"Module(body=[Expr(value=Attribute(value=Attribute(value=Name(id='a', ctx=Load()), attr='b', ctx=Load()), attr='c', ctx=Load()))])"
<<< a.b[c].d
"Module(body=[Expr(value=Attribute(value=Subscript(value=Attribute(value=Name(id='a', ctx=Load()), attr='b', ctx=Load()), slice=Index(value=Name(id='c', ctx=Load())), ctx=Load()), attr='d', ctx=Load()))])"
<<< f()
"Module(body=[Expr(value=Call(func=Name(id='f', ctx=Load()), args=[], keywords=[], starargs=None, kwargs=None))])"
<<< f(x)
"Module(body=[Expr(value=Call(func=Name(id='f', ctx=Load()), args=[Name(id='x', ctx=Load())], keywords=[], starargs=None, kwargs=None))])"
<<< f(x, *l, y=2)
"Module(body=[Expr(value=Call(func=Name(id='f', ctx=Load()), args=[Name(id='x', ctx=Load())], keywords=[keyword(arg='y', value=Num(n=2))], starargs=Name(id='l', ctx=Load()), kwargs=None))])"
<<< f(x, *l, y=2, **d)
"Module(body=[Expr(value=Call(func=Name(id='f', ctx=Load()), args=[Name(id='x', ctx=Load())], keywords=[keyword(arg='y', value=Num(n=2))], starargs=Name(id='l', ctx=Load()), kwargs=Name(id='d', ctx=Load())))])"
<<< f(*l)
"Module(body=[Expr(value=Call(func=Name(id='f', ctx=Load()), args=[], keywords=[], starargs=Name(id='l', ctx=Load()), kwargs=None))])"
<<< f(**d)
"Module(body=[Expr(value=Call(func=Name(id='f', ctx=Load()), args=[], keywords=[], starargs=None, kwargs=Name(id='d', ctx=Load())))])"
<<< f(x)(a=1, b=2)
"Module(body=[Expr(value=Call(func=Call(func=Name(id='f', ctx=Load()), args=[Name(id='x', ctx=Load())], keywords=[], starargs=None, kwargs=None), args=[], keywords=[keyword(arg='a', value=Num(n=1)), keyword(arg='b', value=Num(n=2))], starargs=None, kwargs=None))])"
"""
hd = st[0].text
if hd == "." :
def trail (tree, lineno, col_offset) :
return self.ST.Attribute(lineno=lineno,
col_offset=col_offset,
value=tree,
attr=st[1].text,
ctx=ctx())
elif hd == "[" :
subscript = self.do(st[1], ctx)
if len(subscript) == 1 :
subscript = subscript[0]
else :
subscript = self.ST.ExtSlice(lineno=st[1].srow,
col_offset=st[1].scol,
dims=subscript,
ctx=ctx())
def trail (tree, lineno, col_offset) :
return self.ST.Subscript(lineno=lineno,
col_offset=col_offset,
value=tree,
slice=subscript,
ctx=ctx())
        elif len(st) == 2 : # hd == "(", no arguments
def trail (tree, lineno, col_offset) :
return self.ST.Call(lineno=lineno, col_offset=col_offset,
func=tree, args=[], keywords=[],
starargs=None, kwargs=None)
        else : # hd == "(", with arglist
def trail (tree, lineno, col_offset) :
args, keywords, starargs, kwargs = self.do(st[1], ctx)
return self.ST.Call(lineno=lineno, col_offset=col_offset,
func=tree, args=args, keywords=keywords,
starargs=starargs, kwargs=kwargs)
return trail
def do_subscriptlist (self, st, ctx) :
"""subscriptlist: subscript (',' subscript)* [',']
-> ast.Slice+
<<< l[:]
"Module(body=[Expr(value=Subscript(value=Name(id='l', ctx=Load()), slice=Slice(lower=None, upper=None, step=None), ctx=Load()))])"
<<< l[1:]
"Module(body=[Expr(value=Subscript(value=Name(id='l', ctx=Load()), slice=Slice(lower=Num(n=1), upper=None, step=None), ctx=Load()))])"
<<< l[1::]
"Module(body=[Expr(value=Subscript(value=Name(id='l', ctx=Load()), slice=Slice(lower=Num(n=1), upper=None, step=Name(id='None', ctx=Load())), ctx=Load()))])"
<<< l[1:2:]
"Module(body=[Expr(value=Subscript(value=Name(id='l', ctx=Load()), slice=Slice(lower=Num(n=1), upper=Num(n=2), step=Name(id='None', ctx=Load())), ctx=Load()))])"
<<< l[1:2:3]
"Module(body=[Expr(value=Subscript(value=Name(id='l', ctx=Load()), slice=Slice(lower=Num(n=1), upper=Num(n=2), step=Num(n=3)), ctx=Load()))])"
<<< l[::]
"Module(body=[Expr(value=Subscript(value=Name(id='l', ctx=Load()), slice=Slice(lower=None, upper=None, step=Name(id='None', ctx=Load())), ctx=Load()))])"
<<< l[:2:]
"Module(body=[Expr(value=Subscript(value=Name(id='l', ctx=Load()), slice=Slice(lower=None, upper=Num(n=2), step=Name(id='None', ctx=Load())), ctx=Load()))])"
<<< l[:2:3]
"Module(body=[Expr(value=Subscript(value=Name(id='l', ctx=Load()), slice=Slice(lower=None, upper=Num(n=2), step=Num(n=3)), ctx=Load()))])"
<<< l[::3]
"Module(body=[Expr(value=Subscript(value=Name(id='l', ctx=Load()), slice=Slice(lower=None, upper=None, step=Num(n=3)), ctx=Load()))])"
<<< l[1:2:3,4:5:6]
"Module(body=[Expr(value=Subscript(value=Name(id='l', ctx=Load()), slice=ExtSlice(dims=[Slice(lower=Num(n=1), upper=Num(n=2), step=Num(n=3)), Slice(lower=Num(n=4), upper=Num(n=5), step=Num(n=6))]), ctx=Load()))])"
"""
return [self.do(child, ctx) for child in st[::2]]
def do_subscript (self, st, ctx) :
"""subscript: test | [test] ':' [test] [sliceop]
-> ast.Slice | ast.Index
<<< l[:]
"Module(body=[Expr(value=Subscript(value=Name(id='l', ctx=Load()), slice=Slice(lower=None, upper=None, step=None), ctx=Load()))])"
<<< l[1:]
"Module(body=[Expr(value=Subscript(value=Name(id='l', ctx=Load()), slice=Slice(lower=Num(n=1), upper=None, step=None), ctx=Load()))])"
<<< l[1::]
"Module(body=[Expr(value=Subscript(value=Name(id='l', ctx=Load()), slice=Slice(lower=Num(n=1), upper=None, step=Name(id='None', ctx=Load())), ctx=Load()))])"
<<< l[1:2:]
"Module(body=[Expr(value=Subscript(value=Name(id='l', ctx=Load()), slice=Slice(lower=Num(n=1), upper=Num(n=2), step=Name(id='None', ctx=Load())), ctx=Load()))])"
<<< l[1:2:3]
"Module(body=[Expr(value=Subscript(value=Name(id='l', ctx=Load()), slice=Slice(lower=Num(n=1), upper=Num(n=2), step=Num(n=3)), ctx=Load()))])"
<<< l[::]
"Module(body=[Expr(value=Subscript(value=Name(id='l', ctx=Load()), slice=Slice(lower=None, upper=None, step=Name(id='None', ctx=Load())), ctx=Load()))])"
<<< l[:2:]
"Module(body=[Expr(value=Subscript(value=Name(id='l', ctx=Load()), slice=Slice(lower=None, upper=Num(n=2), step=Name(id='None', ctx=Load())), ctx=Load()))])"
<<< l[:2:3]
"Module(body=[Expr(value=Subscript(value=Name(id='l', ctx=Load()), slice=Slice(lower=None, upper=Num(n=2), step=Num(n=3)), ctx=Load()))])"
<<< l[::3]
"Module(body=[Expr(value=Subscript(value=Name(id='l', ctx=Load()), slice=Slice(lower=None, upper=None, step=Num(n=3)), ctx=Load()))])"
"""
count = len(st)
if count == 1 and st[0].text == ":" :
return self.ST.Slice(lineno=st.srow, col_offset=st.scol,
lower=None, upper=None, step=None)
elif count == 1 :
return self.ST.Index(lineno=st.srow, col_offset=st.scol,
value=self.do(st[0], ctx))
elif count == 4 :
return self.ST.Slice(lineno=st.srow, col_offset=st.scol,
lower=self.do(st[0], ctx),
upper=self.do(st[2], ctx),
step=self.do(st[3], ctx))
elif count == 3 and st[-1].symbol == "test" :
return self.ST.Slice(lineno=st.srow, col_offset=st.scol,
lower=self.do(st[0], ctx),
upper=self.do(st[2], ctx),
step=None)
elif count == 3 and st[0].text == ":" :
return self.ST.Slice(lineno=st.srow, col_offset=st.scol,
lower=None,
upper=self.do(st[1], ctx),
step=self.do(st[2], ctx))
elif count == 3 :
return self.ST.Slice(lineno=st.srow, col_offset=st.scol,
lower=self.do(st[0], ctx),
upper=None,
step=self.do(st[2], ctx))
elif count == 2 and st[-1].symbol == "sliceop" :
return self.ST.Slice(lineno=st.srow, col_offset=st.scol,
lower=None,
upper=None,
step=self.do(st[1], ctx))
elif count == 2 and st[0].text == ":" :
return self.ST.Slice(lineno=st.srow, col_offset=st.scol,
lower=None,
upper=self.do(st[1], ctx),
step=None)
else :
return self.ST.Slice(lineno=st.srow, col_offset=st.scol,
lower=self.do(st[0], ctx),
upper=None,
step=None)
def do_sliceop (self, st, ctx) :
"""sliceop: ':' [test]
-> ast.AST
<<< l[1::]
"Module(body=[Expr(value=Subscript(value=Name(id='l', ctx=Load()), slice=Slice(lower=Num(n=1), upper=None, step=Name(id='None', ctx=Load())), ctx=Load()))])"
<<< l[::3]
"Module(body=[Expr(value=Subscript(value=Name(id='l', ctx=Load()), slice=Slice(lower=None, upper=None, step=Num(n=3)), ctx=Load()))])"
"""
if len(st) == 1 :
return self.ST.Name(lineno=st.srow, col_offset=st.scol,
id="None", ctx=ctx())
else :
return self.do(st[1], ctx)
def do_exprlist (self, st, ctx) :
"""exprlist: star_expr (',' star_expr)* [',']
        -> ast.AST | ast.Tuple
<<< del x
"Module(body=[Delete(targets=[Name(id='x', ctx=Del())])])"
<<< del x, y
"Module(body=[Delete(targets=[Name(id='x', ctx=Del()), Name(id='y', ctx=Del())])])"
"""
tree = self.do_testlist(st, ctx)
tree.st = st
return tree
def do_testlist (self, st, ctx) :
"""testlist: test (',' test)* [',']
-> ast.AST | ast.Tuple
<<< 1
'Module(body=[Expr(value=Num(n=1))])'
<<< 1, 2
'Module(body=[Expr(value=Tuple(elts=[Num(n=1), Num(n=2)], ctx=Load()))])'
"""
lst = [self.do(child, ctx) for child in st[::2]]
if len(lst) == 1 :
return lst[0]
else :
return self.ST.Tuple(lineno=st.srow, col_offset=st.scol,
elts=lst, ctx=ctx())
def do_dictorsetmaker (self, st, ctx) :
"""dictorsetmaker: ( (test ':' test (comp_for | (',' test ':' test)* [','])) |
(test (comp_for | (',' test)* [','])) )
-> ast.Dict | ast.DictComp | ast.Set | ast.SetComp
<<< {1, 2, 3}
'Module(body=[Expr(value=Set(elts=[Num(n=1), Num(n=2), Num(n=3)]))])'
<<< {x for x in l}
"Module(body=[Expr(value=SetComp(elt=Name(id='x', ctx=Load()), generators=[comprehension(target=Name(id='x', ctx=Store()), iter=Name(id='l', ctx=Load()), ifs=[])]))])"
<<< {1:2, 3:4}
'Module(body=[Expr(value=Dict(keys=[Num(n=1), Num(n=3)], values=[Num(n=2), Num(n=4)]))])'
<<< {x:y for x, y in l}
"Module(body=[Expr(value=DictComp(key=Name(id='x', ctx=Load()), value=Name(id='y', ctx=Load()), generators=[comprehension(target=Tuple(elts=[Name(id='x', ctx=Store()), Name(id='y', ctx=Store())], ctx=Store()), iter=Name(id='l', ctx=Load()), ifs=[])]))])"
<<< {42}
'Module(body=[Expr(value=Set(elts=[Num(n=42)]))])'
"""
if len(st) == 1 :
return self.ST.Set(lineno=st.srow, col_offset=st.scol,
elts=[self.do(st[0], ctx)])
        elif st[1].text == ":" :
            if len(st) == 3 or st[3].text == "," :
return self.ST.Dict(lineno=st.srow, col_offset=st.scol,
keys=[self.do(child, ctx)
for child in st[::4]],
values=[self.do(child, ctx)
for child in st[2::4]])
else :
return self.ST.DictComp(lineno=st.srow, col_offset=st.scol,
key=self.do(st[0], ctx),
value=self.do(st[2], ctx),
generators=self.do(st[3], ctx)[0])
else :
loop, elts, atom = self.do_testlist_comp(st, ctx)
if loop is None :
return self.ST.Set(lineno=st.srow, col_offset=st.scol,
elts=elts)
else :
return self.ST.SetComp(lineno=st.srow, col_offset=st.scol,
elt=loop, generators=elts)
def do_classdef (self, st, ctx) :
"""classdef: 'class' NAME ['(' [arglist] ')'] ':' suite
-> ast.ClassDef
<<< class c : pass
"Module(body=[ClassDef(name='c', bases=[], keywords=[], starargs=None, kwargs=None, body=[Pass()], decorator_list=[])])"
<<< class c () : pass
"Module(body=[ClassDef(name='c', bases=[], keywords=[], starargs=None, kwargs=None, body=[Pass()], decorator_list=[])])"
<<< class c (object, foo=bar) : pass
"Module(body=[ClassDef(name='c', bases=[Name(id='object', ctx=Load())], keywords=[keyword(arg='foo', value=Name(id='bar', ctx=Load()))], starargs=None, kwargs=None, body=[Pass()], decorator_list=[])])"
"""
if len(st) <= 6 :
return self.ST.ClassDef(lineno=st.srow, col_offset=st.scol,
name=st[1].text,
bases=[],
keywords=[],
starargs=None,
kwargs=None,
body=self.do(st[-1], ctx),
decorator_list=[])
else :
args, keywords, starargs, kwargs = self.do(st[3], ctx)
return self.ST.ClassDef(lineno=st.srow, col_offset=st.scol,
name=st[1].text,
bases=args,
keywords=keywords,
starargs=starargs,
kwargs=kwargs,
body=self.do(st[-1], ctx),
decorator_list=[])
def do_arglist (self, st, ctx) :
"""arglist: (argument ',')* (argument [',']
|'*' test (',' argument)* [',' '**' test]
|'**' test)
-> args=ast.AST*, keywords=ast.keyword*, starargs=ast.AST?, kwargs=ast.AST?
<<< f(x)
"Module(body=[Expr(value=Call(func=Name(id='f', ctx=Load()), args=[Name(id='x', ctx=Load())], keywords=[], starargs=None, kwargs=None))])"
<<< f(x, *l, y=2)
"Module(body=[Expr(value=Call(func=Name(id='f', ctx=Load()), args=[Name(id='x', ctx=Load())], keywords=[keyword(arg='y', value=Num(n=2))], starargs=Name(id='l', ctx=Load()), kwargs=None))])"
<<< f(x, *l, y=2, **d)
"Module(body=[Expr(value=Call(func=Name(id='f', ctx=Load()), args=[Name(id='x', ctx=Load())], keywords=[keyword(arg='y', value=Num(n=2))], starargs=Name(id='l', ctx=Load()), kwargs=Name(id='d', ctx=Load())))])"
<<< f(*l)
"Module(body=[Expr(value=Call(func=Name(id='f', ctx=Load()), args=[], keywords=[], starargs=Name(id='l', ctx=Load()), kwargs=None))])"
<<< f(**d)
"Module(body=[Expr(value=Call(func=Name(id='f', ctx=Load()), args=[], keywords=[], starargs=None, kwargs=Name(id='d', ctx=Load())))])"
<<< f(x=1, y=2, x=3)
Traceback (most recent call last):
...
ParseError: ... keyword argument repeated
"""
args = []
keywords = []
allkw = set()
starargs = None
kwargs = None
nodes = [n for n in st if n.text != ","]
while nodes :
if nodes[0].text == "*" :
starargs = self.do(nodes[1], ctx)
del nodes[:2]
elif nodes[0].text == "**" :
kwargs = self.do(nodes[1], ctx)
del nodes[:2]
else :
arg = self.do(nodes[0], ctx)
if isinstance(arg, self.ST.keyword) :
if arg.arg in allkw :
raise ParseError(nodes[0].text,
reason="keyword argument repeated")
keywords.append(arg)
allkw.add(arg.arg)
elif starargs is not None :
raise ParseError(nodes[0].text, reason="only named"
" arguments may follow *expression")
else :
args.append(arg)
del nodes[0]
return args, keywords, starargs, kwargs
def do_argument (self, st, ctx) :
"""argument: test [comp_for] | test '=' test
-> ast.keyword | ast.GeneratorExp
<<< f(x)
"Module(body=[Expr(value=Call(func=Name(id='f', ctx=Load()), args=[Name(id='x', ctx=Load())], keywords=[], starargs=None, kwargs=None))])"
<<< f(x=1)
"Module(body=[Expr(value=Call(func=Name(id='f', ctx=Load()), args=[], keywords=[keyword(arg='x', value=Num(n=1))], starargs=None, kwargs=None))])"
<<< f(x for x in l)
"Module(body=[Expr(value=Call(func=Name(id='f', ctx=Load()), args=[GeneratorExp(elt=Name(id='x', ctx=Load()), generators=[comprehension(target=Name(id='x', ctx=Store()), iter=Name(id='l', ctx=Load()), ifs=[])])], keywords=[], starargs=None, kwargs=None))])"
"""
test = self.do(st[0], ctx)
if len(st) == 1 :
return test
elif len(st) == 3 :
if not isinstance(test, self.ST.Name) :
raise ParseError(st[0].text, reason="keyword can't be an"
" expression")
return self.ST.keyword(lineno=st.srow, col_offset=st.scol,
arg=test.id,
value=self.do(st[2], ctx))
else :
comp, ifs = self.do(st[1], ctx)
return self.ST.GeneratorExp(lineno=st.srow, col_offset=st.scol,
elt=test, generators=comp)
def do_comp_iter (self, st, ctx) :
"""comp_iter: comp_for | comp_if
-> comprehension*, ast.AST*
<<< [a for b in c if d if e for f in g if h]
"Module(body=[Expr(value=ListComp(elt=Name(id='a', ctx=Load()), generators=[comprehension(target=Name(id='b', ctx=Store()), iter=Name(id='c', ctx=Load()), ifs=[Name(id='d', ctx=Load()), Name(id='e', ctx=Load())]), comprehension(target=Name(id='f', ctx=Store()), iter=Name(id='g', ctx=Load()), ifs=[Name(id='h', ctx=Load())])]))])"
"""
return self.do(st[0], ctx)
def do_comp_for (self, st, ctx) :
"""comp_for: 'for' exprlist 'in' or_test [comp_iter]
-> comprehension+, []
<<< [a for b in c if d if e for f in g if h]
"Module(body=[Expr(value=ListComp(elt=Name(id='a', ctx=Load()), generators=[comprehension(target=Name(id='b', ctx=Store()), iter=Name(id='c', ctx=Load()), ifs=[Name(id='d', ctx=Load()), Name(id='e', ctx=Load())]), comprehension(target=Name(id='f', ctx=Store()), iter=Name(id='g', ctx=Load()), ifs=[Name(id='h', ctx=Load())])]))])"
"""
if len(st) == 4 :
return [self.ST.comprehension(lineno=st.srow,
col_offset=st.scol,
target=self.do(st[1], ast.Store),
iter=self.do(st[3], ctx),
ifs=[])], []
else :
comp, ifs = self.do(st[4], ctx)
return [self.ST.comprehension(lineno=st.srow,
col_offset=st.scol,
target=self.do(st[1], ast.Store),
iter=self.do(st[3], ctx),
ifs=ifs)] + comp, []
def do_comp_if (self, st, ctx) :
"""comp_if: 'if' test_nocond [comp_iter]
-> comprehension*, ast.AST+
<<< [a for b in c if d if e for f in g if h]
"Module(body=[Expr(value=ListComp(elt=Name(id='a', ctx=Load()), generators=[comprehension(target=Name(id='b', ctx=Store()), iter=Name(id='c', ctx=Load()), ifs=[Name(id='d', ctx=Load()), Name(id='e', ctx=Load())]), comprehension(target=Name(id='f', ctx=Store()), iter=Name(id='g', ctx=Load()), ifs=[Name(id='h', ctx=Load())])]))])"
"""
if len(st) == 2 :
return [], [self.do(st[1], ctx)]
else :
comp, ifs = self.do(st[2], ctx)
return comp, [self.do(st[1], ctx)] + ifs
def do_yield_expr (self, st, ctx) :
"""yield_expr: 'yield' [testlist]
-> ast.Yield
<<< yield
'Module(body=[Expr(value=Yield(value=None))])'
<<< yield 42
'Module(body=[Expr(value=Yield(value=Num(n=42)))])'
<<< yield 42, 43
'Module(body=[Expr(value=Yield(value=Tuple(elts=[Num(n=42), Num(n=43)], ctx=Load())))])'
"""
if len(st) == 2 :
return self.ST.Yield(lineno=st.srow, col_offset=st.scol,
value=self.do(st[1], ctx))
else :
return self.ST.Yield(lineno=st.srow, col_offset=st.scol,
value=None)
@classmethod
def parse (cls, expr, mode="exec", filename="<string>") :
tree = cls(cls.parser.parseString(expr.strip() + "\n",
filename=filename)).ast
if mode == "exec" :
return tree
elif mode == "eval" :
if len(tree.body) > 1 or not isinstance(tree.body[0], cls.ST.Expr) :
raise ParseError(None, reason="invalid syntax")
return cls.ST.Expression(body=tree.body[0].value)
elif mode == "single" :
return cls.ST.Interactive(body=tree.body)
else :
raise ValueError("arg 2 must be 'exec', 'eval' or 'single'")
parse = Translator.parse
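The three modes accepted by `Translator.parse` mirror those of the builtin `compile()`: `"exec"` yields a `Module`, `"eval"` requires a single expression, `"single"` yields an `Interactive` node. A minimal sketch of the same distinction using only the stdlib `ast` module (not the SNAKES parser):

```python
import ast

# "exec" wraps any statements in a Module; "eval" demands one expression.
tree = ast.parse("1 + 2", mode="eval")
assert isinstance(tree, ast.Expression)
assert eval(compile(tree, "<string>", "eval")) == 3

# A statement is rejected in "eval" mode, much as Translator.parse
# raises ParseError when the body is not a single Expr.
try:
    ast.parse("x = 1", mode="eval")
except SyntaxError:
    pass
```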
class ParseTestParser (doctest.DocTestParser) :
_EXAMPLE_RE = re.compile(r'''
# Source consists of a PS1 line followed by zero or more PS2 lines.
(?P<source>
(?:^(?P<indent> [ ]*) <<< .*) # PS1 line
(?:\n [ ]* \.\.\. .*)*) # PS2 lines
\n?
# Want consists of any non-blank lines that do not start with PS1.
(?P<want> (?:(?![ ]*$) # Not a blank line
(?![ ]*<<<) # Not a line starting with PS1
.*$\n? # But any other line
)*)
''', re.MULTILINE | re.VERBOSE)
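The pattern above adapts doctest's example regex to the custom `<<<` PS1 prompt. A hypothetical miniature of the same idea, showing how the `source` and `want` groups split one example (the pattern and sample text here are illustrative, not the real `_EXAMPLE_RE`):

```python
import re

# Miniature of _EXAMPLE_RE: one '<<<' PS1 line, then the expected
# output ("want") up to the next blank line or '<<<' line.
example_re = re.compile(r'''
    (?P<source> ^[ ]*<<<[ ].*)\n
    (?P<want>   (?:(?![ ]*$)(?![ ]*<<<).*\n?)*)
''', re.MULTILINE | re.VERBOSE)

doc = "<<< {42}\n'Module(...)'\n"
m = example_re.search(doc)
assert m.group("source").strip() == "<<< {42}"
assert m.group("want").strip() == "'Module(...)'"
```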
def __init__ (self, translator) :
self.Translator = translator
def parse (self, string, name="<string>") :
examples = doctest.DocTestParser.parse(self, string, name)
try :
rule = name.split(".do_", 1)[1]
        except IndexError :
rule = None
for i, exple in enumerate(examples) :
if isinstance(exple, str) :
continue
elif name.split(".")[1] not in self.Translator.__dict__ :
examples[i] = ("skipping example for another language: %r"
% exple.source)
continue
if rule is not None :
source = exple.source.strip() + "\n"
try :
tree = self.Translator.ParseTree(self.Translator.parser.parseString(source))
except :
print("could not parse %r at %s:%s" % (source, name,
exple.lineno))
raise
if not tree.involve(self.Translator.parser.stringMap[rule]) :
print(("test at %s:%s does not involve rule %s"
% (name, exple.lineno, rule)))
examples[i] = "<test skipped>"
continue
examples[i] = doctest.Example(
source=("ast.dump(parse(%r))") % exple.source,
want=exple.want,
exc_msg=exple.exc_msg,
lineno=exple.lineno,
indent=exple.indent,
options=exple.options)
return examples
def testparser (translator) :
for rule in translator.parser.stringMap :
try :
assert "<<<" in getattr(translator, "do_" + rule).__doc__
except AttributeError :
print("no handler for rule %r" % rule)
continue
except TypeError :
print("missing doc for rule %r" % rule)
continue
except AssertionError :
print("missing test for rule %r" % rule)
continue
finder = doctest.DocTestFinder(parser=ParseTestParser(translator))
runner = doctest.DocTestRunner(optionflags=doctest.NORMALIZE_WHITESPACE
| doctest.ELLIPSIS)
for name, method in inspect.getmembers(translator, inspect.ismethod) :
if not name.startswith("do_") :
continue
for test in finder.find(method, "%s.%s" % (translator.__name__, name)) :
runner.run(test)
runner.summarize()
if __name__ == "__main__" :
testparser(Translator)
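`testparser` drives the stdlib `doctest.DocTestFinder`/`DocTestRunner` pair over every `do_*` method. A self-contained sketch of that machinery on an ordinary function (names here are illustrative):

```python
import doctest

def double(x):
    """
    >>> double(21)
    42
    """
    return 2 * x

finder = doctest.DocTestFinder()
runner = doctest.DocTestRunner(optionflags=doctest.ELLIPSIS)
# module=False skips module lookup; pass the globals explicitly.
for test in finder.find(double, "double", module=False,
                        globs={"double": double}):
    runner.run(test)
assert runner.failures == 0
assert runner.tries == 1
```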
module Python version "$Revision: SNAKES $"
{
mod = Module(stmt* body)
| Interactive(stmt* body)
| Expression(expr body)
| Suite(stmt* body)
stmt = FunctionDef(identifier name, arguments args,
stmt* body, expr* decorator_list, expr? returns)
| ClassDef(identifier name,
expr* bases,
keyword* keywords,
expr? starargs,
expr? kwargs,
stmt* body,
                   expr* decorator_list)
| Return(expr? value)
| Delete(expr* targets)
| Assign(expr* targets, expr value)
| AugAssign(expr target, operator op, expr value)
| For(expr target, expr iter, stmt* body, stmt* orelse)
| While(expr test, stmt* body, stmt* orelse)
| If(expr test, stmt* body, stmt* orelse)
| With(expr context_expr, expr? optional_vars, stmt* body)
| Raise(expr? exc, expr? cause)
| TryExcept(stmt* body, excepthandler* handlers, stmt* orelse)
| TryFinally(stmt* body, stmt* finalbody)
| Assert(expr test, expr? msg)
| Import(alias* names)
| ImportFrom(identifier module, alias* names, int? level)
| Exec(expr body, expr? globals, expr? locals)
| Global(identifier* names)
| Nonlocal(identifier* names)
| Expr(expr value)
| Pass | Break | Continue
attributes (int lineno, int col_offset)
expr = BoolOp(boolop op, expr* values)
| BinOp(expr left, operator op, expr right)
| UnaryOp(unaryop op, expr operand)
| Lambda(arguments args, expr body)
| IfExp(expr test, expr body, expr orelse)
| Dict(expr* keys, expr* values)
| Set(expr* elts)
| ListComp(expr elt, comprehension* generators)
| SetComp(expr elt, comprehension* generators)
| DictComp(expr key, expr value, comprehension* generators)
| GeneratorExp(expr elt, comprehension* generators)
| Yield(expr? value)
| Compare(expr left, cmpop* ops, expr* comparators)
| Call(expr func, expr* args, keyword* keywords,
expr? starargs, expr? kwargs)
| Num(object n)
| Str(string s)
| Ellipsis
| Attribute(expr value, identifier attr, expr_context ctx)
| Subscript(expr value, slice slice, expr_context ctx)
| Starred(expr value, expr_context ctx)
| Name(identifier id, expr_context ctx)
| List(expr* elts, expr_context ctx)
| Tuple(expr* elts, expr_context ctx)
attributes (int lineno, int col_offset)
expr_context = Load | Store | Del | AugLoad | AugStore | Param
slice = Slice(expr? lower, expr? upper, expr? step)
| ExtSlice(slice* dims)
| Index(expr value)
boolop = And | Or
operator = Add | Sub | Mult | Div | Mod | Pow | LShift
| RShift | BitOr | BitXor | BitAnd | FloorDiv
unaryop = Invert | Not | UAdd | USub
cmpop = Eq | NotEq | Lt | LtE | Gt | GtE | Is | IsNot | In | NotIn
comprehension = (expr target, expr iter, expr* ifs)
excepthandler = ExceptHandler(expr? type, identifier? name, stmt* body)
attributes (int lineno, int col_offset)
arguments = (arg* args, identifier? vararg, expr? varargannotation,
arg* kwonlyargs, identifier? kwarg,
expr? kwargannotation, expr* defaults,
expr* kw_defaults)
arg = (identifier arg, expr? annotation)
keyword = (identifier arg, expr value)
alias = (identifier name, identifier? asname)
}
# Grammar for Python in SNAKES
# This is a mixture of the grammars from various Python versions
$ELLIPSIS '...'
file_input: (NEWLINE | stmt)* ENDMARKER
decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE
decorators: decorator+
decorated: decorators (classdef | funcdef)
funcdef: 'def' NAME parameters ['-' '>' test] ':' suite
parameters: '(' [typedargslist] ')'
typedargslist: ((tfpdef ['=' test] ',')*
('*' [tfpdef] (',' tfpdef ['=' test])* [',' '**' tfpdef]
| '**' tfpdef)
| tfpdef ['=' test] (',' tfpdef ['=' test])* [','])
tfpdef: NAME [':' test]
varargslist: ((vfpdef ['=' test] ',')*
('*' [vfpdef] (',' vfpdef ['=' test])* [',' '**' vfpdef]
| '**' vfpdef)
| vfpdef ['=' test] (',' vfpdef ['=' test])* [','])
vfpdef: NAME
stmt: simple_stmt | compound_stmt
simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE
small_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt |
import_stmt | global_stmt | nonlocal_stmt | assert_stmt)
expr_stmt: testlist (augassign (yield_expr|testlist) |
('=' (yield_expr|testlist))*)
augassign: ('+=' | '-=' | '*=' | '/=' | '%=' | '&=' | '|=' | '^=' |
'<<=' | '>>=' | '**=' | '//=')
del_stmt: 'del' exprlist
pass_stmt: 'pass'
flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt
break_stmt: 'break'
continue_stmt: 'continue'
return_stmt: 'return' [testlist]
yield_stmt: yield_expr
raise_stmt: 'raise' [test ['from' test]]
import_stmt: import_name | import_from
import_name: 'import' dotted_as_names
import_from: ('from' (('.' | '...')* dotted_name | ('.' | '...')+)
'import' ('*' | '(' import_as_names ')' | import_as_names))
import_as_name: NAME ['as' NAME]
dotted_as_name: dotted_name ['as' NAME]
import_as_names: import_as_name (',' import_as_name)* [',']
dotted_as_names: dotted_as_name (',' dotted_as_name)*
dotted_name: NAME ('.' NAME)*
global_stmt: 'global' NAME (',' NAME)*
nonlocal_stmt: 'nonlocal' NAME (',' NAME)*
assert_stmt: 'assert' test [',' test]
compound_stmt: (if_stmt | while_stmt | for_stmt | try_stmt | with_stmt
| funcdef | classdef | decorated)
if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite]
while_stmt: 'while' test ':' suite ['else' ':' suite]
for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]
try_stmt: ('try' ':' suite
((except_clause ':' suite)+
['else' ':' suite]
['finally' ':' suite] |
'finally' ':' suite))
with_stmt: 'with' test [ with_var ] ':' suite
with_var: 'as' expr
except_clause: 'except' [test ['as' NAME]]
suite: simple_stmt | NEWLINE INDENT stmt+ DEDENT
test: or_test ['if' or_test 'else' test] | lambdef
test_nocond: or_test | lambdef_nocond
lambdef: 'lambda' [varargslist] ':' test
lambdef_nocond: 'lambda' [varargslist] ':' test_nocond
or_test: and_test ('or' and_test)*
and_test: not_test ('and' not_test)*
not_test: 'not' not_test | comparison
comparison: star_expr (comp_op star_expr)*
comp_op: '<'|'>'|'=='|'>='|'<='|'!='|'<>'|'in'|'not' 'in'|'is'|'is' 'not'
star_expr: ['*'] expr
expr: xor_expr ('|' xor_expr)*
xor_expr: and_expr ('^' and_expr)*
and_expr: shift_expr ('&' shift_expr)*
shift_expr: arith_expr (('<<'|'>>') arith_expr)*
arith_expr: term (('+'|'-') term)*
term: factor (('*'|'/'|'%'|'//') factor)*
factor: ('+'|'-'|'~') factor | power
power: atom trailer* ['**' factor]
atom: ('(' [yield_expr|testlist_comp] ')' |
'[' [testlist_comp] ']' |
'{' [dictorsetmaker] '}' |
NAME | NUMBER | STRING+ | '...' | 'None' | 'True' | 'False')
testlist_comp: test ( comp_for | (',' test)* [','] )
trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
subscriptlist: subscript (',' subscript)* [',']
subscript: test | [test] ':' [test] [sliceop]
sliceop: ':' [test]
exprlist: star_expr (',' star_expr)* [',']
testlist: test (',' test)* [',']
dictorsetmaker: ( (test ':' test (comp_for | (',' test ':' test)* [','])) |
(test (comp_for | (',' test)* [','])) )
classdef: 'class' NAME ['(' [arglist] ')'] ':' suite
arglist: (argument ',')* (argument [',']
|'*' test (',' argument)* [',' '**' test]
|'**' test)
argument: test [comp_for] | test '=' test
comp_iter: comp_for | comp_if
comp_for: 'for' exprlist 'in' or_test [comp_iter]
comp_if: 'if' test_nocond [comp_iter]
yield_expr: 'yield' [testlist]
"Usage: unparse.py <path to source file>"
import sys
import os
import io
import _ast
from snakes.compat import *
def interleave(inter, f, seq):
"""Call f on each item in seq, calling inter() in between.
"""
seq = iter(seq)
try:
f(next(seq))
except StopIteration:
pass
else:
for x in seq:
inter()
f(x)
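`interleave` is the workhorse for comma-separated output: it calls the writer on each item and the separator callback only between items. A runnable sketch (redefining the same function so the snippet stands alone):

```python
import io

def interleave(inter, f, seq):
    # same contract as the interleave() above
    seq = iter(seq)
    try:
        f(next(seq))
    except StopIteration:
        pass
    else:
        for x in seq:
            inter()
            f(x)

buf = io.StringIO()
interleave(lambda: buf.write(", "), buf.write, ["a", "b", "c"])
assert buf.getvalue() == "a, b, c"   # no trailing separator
```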
class Unparser:
"""Methods in this class recursively traverse an AST and
output source code for the abstract syntax; original formatting
is disregarged. """
def __init__(self, tree, file = sys.stdout):
"""Unparser(tree, file=sys.stdout) -> None.
Print the source for tree to file."""
self.f = file
self._indent = 0
self.dispatch(tree)
self.f.write("\n")
self.f.flush()
def fill(self, text = ""):
"Indent a piece of text, according to the current indentation level"
self.f.write("\n" + " "*self._indent + text)
def write(self, text):
"Append a piece of text to the current line."
self.f.write(text)
def enter(self):
"Print ':', and increase the indentation."
self.write(":")
self._indent += 1
def leave(self):
"Decrease the indentation level."
self._indent -= 1
def dispatch(self, tree):
"Dispatcher function, dispatching tree type T to method _T."
if isinstance(tree, list):
for t in tree:
self.dispatch(t)
return
meth = getattr(self, "_"+tree.__class__.__name__)
meth(tree)
############### Unparsing methods ######################
# There should be one method per concrete grammar type #
# Constructors should be grouped by sum type. Ideally, #
# this would follow the order in the grammar, but #
# currently doesn't. #
########################################################
def _Module(self, tree):
for stmt in tree.body:
self.dispatch(stmt)
# stmt
def _Expr(self, tree):
self.fill()
self.dispatch(tree.value)
def _Import(self, t):
self.fill("import ")
interleave(lambda: self.write(", "), self.dispatch, t.names)
def _ImportFrom(self, t):
self.fill("from ")
self.write(t.module)
self.write(" import ")
interleave(lambda: self.write(", "), self.dispatch, t.names)
# XXX(jpe) what is level for?
def _Assign(self, t):
self.fill()
for target in t.targets:
self.dispatch(target)
self.write(" = ")
self.dispatch(t.value)
def _AugAssign(self, t):
self.fill()
self.dispatch(t.target)
self.write(" "+self.binop[t.op.__class__.__name__]+"= ")
self.dispatch(t.value)
def _Return(self, t):
self.fill("return")
if t.value:
self.write(" ")
self.dispatch(t.value)
def _Pass(self, t):
self.fill("pass")
def _Break(self, t):
self.fill("break")
def _Continue(self, t):
self.fill("continue")
def _Delete(self, t):
self.fill("del ")
self.dispatch(t.targets)
def _Assert(self, t):
self.fill("assert ")
self.dispatch(t.test)
if t.msg:
self.write(", ")
self.dispatch(t.msg)
def _Exec(self, t):
self.fill("exec ")
self.dispatch(t.body)
if t.globals:
self.write(" in ")
self.dispatch(t.globals)
if t.locals:
self.write(", ")
self.dispatch(t.locals)
def _Print(self, t):
self.fill("print ")
do_comma = False
if t.dest:
self.write(">>")
self.dispatch(t.dest)
do_comma = True
for e in t.values:
if do_comma:self.write(", ")
else:do_comma=True
self.dispatch(e)
if not t.nl:
self.write(",")
def _Global(self, t):
self.fill("global ")
interleave(lambda: self.write(", "), self.write, t.names)
def _Yield(self, t):
self.write("(")
self.write("yield")
if t.value:
self.write(" ")
self.dispatch(t.value)
self.write(")")
def _Raise(self, t):
self.fill('raise ')
if t.type:
self.dispatch(t.type)
if t.inst:
self.write(", ")
self.dispatch(t.inst)
if t.tback:
self.write(", ")
self.dispatch(t.tback)
def _TryExcept(self, t):
self.fill("try")
self.enter()
self.dispatch(t.body)
self.leave()
for ex in t.handlers:
self.dispatch(ex)
if t.orelse:
self.fill("else")
self.enter()
self.dispatch(t.orelse)
self.leave()
def _TryFinally(self, t):
self.fill("try")
self.enter()
self.dispatch(t.body)
self.leave()
self.fill("finally")
self.enter()
self.dispatch(t.finalbody)
self.leave()
def _ExceptHandler(self, t):
self.fill("except")
if t.type:
self.write(" ")
self.dispatch(t.type)
if t.name:
self.write(", ")
self.dispatch(t.name)
self.enter()
self.dispatch(t.body)
self.leave()
def _ClassDef(self, t):
self.write("\n")
self.fill("class "+t.name)
if t.bases:
self.write("(")
for a in t.bases:
self.dispatch(a)
self.write(", ")
self.write(")")
self.enter()
self.dispatch(t.body)
self.leave()
def _FunctionDef(self, t):
self.write("\n")
for deco in t.decorator_list:
self.fill("@")
self.dispatch(deco)
self.fill("def "+t.name + "(")
self.dispatch(t.args)
self.write(")")
self.enter()
self.dispatch(t.body)
self.leave()
def _For(self, t):
self.fill("for ")
self.dispatch(t.target)
self.write(" in ")
self.dispatch(t.iter)
self.enter()
self.dispatch(t.body)
self.leave()
if t.orelse:
self.fill("else")
self.enter()
self.dispatch(t.orelse)
            self.leave()
def _If(self, t):
self.fill("if ")
self.dispatch(t.test)
self.enter()
# XXX elif?
self.dispatch(t.body)
self.leave()
if t.orelse:
self.fill("else")
self.enter()
self.dispatch(t.orelse)
self.leave()
def _While(self, t):
self.fill("while ")
self.dispatch(t.test)
self.enter()
self.dispatch(t.body)
self.leave()
if t.orelse:
self.fill("else")
self.enter()
self.dispatch(t.orelse)
            self.leave()
def _With(self, t):
self.fill("with ")
self.dispatch(t.context_expr)
if t.optional_vars:
self.write(" as ")
self.dispatch(t.optional_vars)
self.enter()
self.dispatch(t.body)
self.leave()
# expr
def _Str(self, tree):
self.write(repr(tree.s))
def _Name(self, t):
self.write(t.id)
def _Repr(self, t):
self.write("`")
self.dispatch(t.value)
self.write("`")
def _Num(self, t):
self.write(repr(t.n))
def _List(self, t):
self.write("[")
interleave(lambda: self.write(", "), self.dispatch, t.elts)
self.write("]")
def _ListComp(self, t):
self.write("[")
self.dispatch(t.elt)
for gen in t.generators:
self.dispatch(gen)
self.write("]")
def _GeneratorExp(self, t):
self.write("(")
self.dispatch(t.elt)
for gen in t.generators:
self.dispatch(gen)
self.write(")")
def _comprehension(self, t):
self.write(" for ")
self.dispatch(t.target)
self.write(" in ")
self.dispatch(t.iter)
for if_clause in t.ifs:
self.write(" if ")
self.dispatch(if_clause)
def _IfExp(self, t):
self.write("(")
self.dispatch(t.body)
self.write(" if ")
self.dispatch(t.test)
self.write(" else ")
self.dispatch(t.orelse)
self.write(")")
def _Dict(self, t):
self.write("{")
def writem(arg):
(k, v) = arg
self.dispatch(k)
self.write(": ")
self.dispatch(v)
interleave(lambda: self.write(", "), writem, zip(t.keys, t.values))
self.write("}")
def _Set(self, t) :
self.write("set([")
if len(t.elts) == 1:
(elt,) = t.elts
self.dispatch(elt)
self.write(",")
else:
interleave(lambda: self.write(", "), self.dispatch, t.elts)
self.write("])")
def _Tuple(self, t):
self.write("(")
if len(t.elts) == 1:
(elt,) = t.elts
self.dispatch(elt)
self.write(",")
else:
interleave(lambda: self.write(", "), self.dispatch, t.elts)
self.write(")")
unop = {"Invert":"~", "Not": "not", "UAdd":"+", "USub":"-"}
def _UnaryOp(self, t):
self.write(self.unop[t.op.__class__.__name__])
self.write("(")
self.dispatch(t.operand)
self.write(")")
binop = { "Add":"+", "Sub":"-", "Mult":"*", "Div":"/", "Mod":"%",
"LShift":">>", "RShift":"<<", "BitOr":"|", "BitXor":"^", "BitAnd":"&",
"FloorDiv":"//", "Pow": "**"}
def _BinOp(self, t):
self.write("(")
self.dispatch(t.left)
self.write(" " + self.binop[t.op.__class__.__name__] + " ")
self.dispatch(t.right)
self.write(")")
cmpops = {"Eq":"==", "NotEq":"!=", "Lt":"<", "LtE":"<=", "Gt":">", "GtE":">=",
"Is":"is", "IsNot":"is not", "In":"in", "NotIn":"not in"}
def _Compare(self, t):
self.write("(")
self.dispatch(t.left)
for o, e in zip(t.ops, t.comparators):
self.write(" " + self.cmpops[o.__class__.__name__] + " ")
self.dispatch(e)
self.write(")")
boolops = {"And": 'and', "Or": 'or'}
def _BoolOp(self, t):
self.write("(")
s = " %s " % self.boolops[t.op.__class__.__name__]
interleave(lambda: self.write(s), self.dispatch, t.values)
self.write(")")
def _Attribute(self,t):
self.dispatch(t.value)
self.write(".")
self.write(t.attr)
def _Call(self, t):
self.dispatch(t.func)
self.write("(")
comma = False
for e in t.args:
if comma: self.write(", ")
else: comma = True
self.dispatch(e)
for e in t.keywords:
if comma: self.write(", ")
else: comma = True
self.dispatch(e)
if t.starargs:
if comma: self.write(", ")
else: comma = True
self.write("*")
self.dispatch(t.starargs)
if t.kwargs:
if comma: self.write(", ")
else: comma = True
self.write("**")
self.dispatch(t.kwargs)
self.write(")")
def _Subscript(self, t):
self.dispatch(t.value)
self.write("[")
self.dispatch(t.slice)
self.write("]")
# slice
def _Ellipsis(self, t):
self.write("...")
def _Index(self, t):
self.dispatch(t.value)
def _Slice(self, t):
if t.lower:
self.dispatch(t.lower)
self.write(":")
if t.upper:
self.dispatch(t.upper)
if t.step:
self.write(":")
self.dispatch(t.step)
def _ExtSlice(self, t):
interleave(lambda: self.write(', '), self.dispatch, t.dims)
# others
def _arguments(self, t):
first = True
nonDef = len(t.args)-len(t.defaults)
for a in t.args[0:nonDef]:
if first:first = False
else: self.write(", ")
self.dispatch(a)
for a,d in zip(t.args[nonDef:], t.defaults):
if first:first = False
else: self.write(", ")
self.dispatch(a),
self.write("=")
self.dispatch(d)
if t.vararg:
if first:first = False
else: self.write(", ")
self.write("*"+t.vararg)
if t.kwarg:
if first:first = False
else: self.write(", ")
self.write("**"+t.kwarg)
def _keyword(self, t):
self.write(t.arg)
self.write("=")
self.dispatch(t.value)
def _Lambda(self, t):
self.write("lambda ")
self.dispatch(t.args)
self.write(": ")
self.dispatch(t.body)
def _alias(self, t):
self.write(t.name)
if t.asname:
self.write(" as "+t.asname)
def roundtrip(filename, output=sys.stdout):
    with open(filename) as f:
        source = f.read()
    tree = compile(source, filename, "exec", _ast.PyCF_ONLY_AST)
    Unparser(tree, output)
def testdir(a):
try:
names = [n for n in os.listdir(a) if n.endswith('.py')]
except OSError:
sys.stderr.write("Directory not readable: %s\n" % a)
else:
for n in names:
fullname = os.path.join(a, n)
if os.path.isfile(fullname):
output = io.StringIO()
print('Testing %s' % fullname)
try:
roundtrip(fullname, output)
except Exception:
e = sys.exc_info()[1]
print(' Failed to compile, exception is %r' % e)
elif os.path.isdir(fullname):
testdir(fullname)
def main(args):
if args[0] == '--testdir':
for a in args[1:]:
testdir(a)
else:
for a in args:
roundtrip(a)
if __name__=='__main__':
main(sys.argv[1:])
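`roundtrip` compiles a file to an AST and regenerates source from it. On Python 3.9+ the stdlib's `ast.unparse` plays the same role as the `Unparser` class; a minimal sketch of the round-trip property (semantics preserved, formatting discarded):

```python
import ast
import sys

source = "def f(x):\n    return x + 1\n"
tree = ast.parse(source)
assert isinstance(tree, ast.Module)

if sys.version_info >= (3, 9):
    text = ast.unparse(tree)           # stdlib analogue of Unparser
    # round trip preserves the tree, not the original layout
    assert ast.dump(ast.parse(text)) == ast.dump(tree)
```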
"""A plugins system.
The first example shows how to load a plugin: we load
C{snakes.plugins.hello} and plug it into C{snakes.nets}, which results
in a new module that is actually C{snakes.nets} extended by
C{snakes.plugins.hello}.
>>> import snakes.plugins as plugins
>>> hello_nets = plugins.load('hello', 'snakes.nets')
>>> n = hello_nets.PetriNet('N')
>>> n.hello()
Hello from N
>>> n = hello_nets.PetriNet('N', hello='Hi, this is %s!')
>>> n.hello()
Hi, this is N!
The next example shows how to simulate the effect of C{import module}:
we give C{load} a third argument that is the name of the created
module, from which it then becomes possible to import names or C{*}.
B{Warning:} this feature will not work if C{load} is not called from
the module where we then do the C{from ... import ...}. This is
exactly the same as when a module C{foo} loads a module C{bar}: if
C{bar} loads other modules, they are not imported into C{foo}.
>>> plugins.load('hello', 'snakes.nets', 'another_version')
<module ...>
>>> from another_version import PetriNet
>>> n = PetriNet('another net')
>>> n.hello()
Hello from another net
>>> n = PetriNet('yet another net', hello='Hi, this is %s!')
>>> n.hello()
Hi, this is yet another net!
How to define a plugin is explained in the example C{hello}.
"""
import imp, sys, inspect
from functools import wraps
def update (module, objects) :
    """Update a module's content.
    """
for obj in objects :
if isinstance(obj, tuple) :
try :
n, o = obj
except :
raise ValueError("expected (name, object) and got '%r'" % obj)
setattr(module, n, o)
elif inspect.isclass(obj) or inspect.isfunction(obj) :
setattr(module, obj.__name__, obj)
else :
raise ValueError("cannot plug '%r'" % obj)
def build (name, module, *objects) :
"""Builds an extended module.
The parameter C{module} is exactly that taken by the function
C{extend} of a plugin. This list argument C{objects} holds all the
objects, constructed in C{extend}, that are extensions of objects
from C{module}. The resulting value should be returned by
C{extend}.
@param name: the name of the constructed module
@type name: C{str}
@param module: the extended module
@type module: C{module}
@param objects: the sub-objects
@type objects: each is a class object
@return: the new module
@rtype: C{module}
"""
result = imp.new_module(name)
result.__dict__.update(module.__dict__)
update(result, objects)
result.__plugins__ = (module.__dict__.get("__plugins__",
(module.__name__,))
+ (name,))
for obj in objects :
if inspect.isclass(obj) :
obj.__plugins__ = result.__plugins__
return result
def load (plugins, base, name=None) :
"""Load plugins.
C{plugins} can be a single plugin name or module or a list of such
values. If C{name} is not C{None}, the extended module is loaded
ad C{name} in C{sys.modules} as well as in the global environment
from which C{load} was called.
@param plugins: the module that implements the plugin, or its name,
or a collection of such values
@type plugins: C{str} or C{module}, or a C{list}/C{tuple}/... of
such values
@param base: the module being extended or its name
@type base: C{str} or C{module}
@param name: the name of the created module
@type name: C{str}
@return: the extended module
@rtype: C{module}
"""
if type(base) is str :
result = __import__(base, fromlist=["__name__"])
else :
result = base
if isinstance(plugins, str) :
plugins = [plugins]
else :
try :
plugins = list(plugins)
except TypeError :
plugins = [plugins]
for i, p in enumerate(plugins) :
if isinstance(p, str) and not p.startswith("snakes.plugins.") :
plugins[i] = "snakes.plugins." + p
for plug in plugins :
if type(plug) is str :
plug = __import__(plug, fromlist=["__name__"])
result = plug.extend(result)
if name is not None :
result.__name__ = name
sys.modules[name] = result
inspect.stack()[1][0].f_globals[name] = result
return result
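`load` resolves string module names with `__import__(..., fromlist=["__name__"])`. The non-empty `fromlist` is the detail that matters: without it, `__import__` returns the top-level package rather than the leaf module. A stdlib illustration:

```python
# __import__ with an empty fromlist returns the top package;
# a non-empty fromlist returns the leaf module itself.
pkg = __import__("os.path")                          # the 'os' package
leaf = __import__("os.path", fromlist=["__name__"])  # os.path itself

import os
import os.path
assert pkg is os
assert leaf is os.path
```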
def plugin (base, depends=[], conflicts=[]) :
"""Decorator for extension functions
@param base: name of base module (usually 'snakes.nets')
@type base: str
@param depends: list of plugins on which this one depends
@type depends: list of str
@param conflicts: list of plugins with which this one conflicts
@type conflicts: list of str
@return: the appropriate decorator
@rtype: function
"""
def wrapper (fun) :
@wraps(fun)
def extend (module) :
try :
loaded = set(module.__plugins__)
except AttributeError :
loaded = set()
for name in depends :
if name not in loaded :
module = load(name, module)
loaded.update(module.__plugins__)
conf = set(conflicts) & loaded
if len(conf) > 0 :
raise ValueError("plugin conflict (%s)" % ", ".join(conf))
objects = fun(module)
if type(objects) is not tuple :
objects = (objects,)
return build(fun.__module__, module, *objects)
module = sys.modules[fun.__module__]
module.__test__ = {"extend" : extend}
objects = fun(__import__(base, fromlist=["__name__"]))
if type(objects) is not tuple :
objects = (objects,)
update(module, objects)
return extend
return wrapper
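`plugin` is a decorator factory: calling it with `base`/`depends`/`conflicts` returns the actual decorator, whose inner `extend` is wrapped with `functools.wraps`. A hypothetical miniature of that three-level pattern, independent of SNAKES (all names here are illustrative):

```python
from functools import wraps

def plugin_like(depends=()):
    """Decorator factory mirroring plugin(): the outer call takes the
    options, the middle level takes the function, the inner 'extend'
    is what callers eventually invoke."""
    def wrapper(fun):
        @wraps(fun)
        def extend(module):
            return fun(module)
        extend.depends = tuple(depends)   # metadata, like __plugins__
        return extend
    return wrapper

@plugin_like(depends=("hello",))
def extend(module):
    return module

assert extend.depends == ("hello",)
assert extend.__name__ == "extend"    # preserved by @wraps
```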
def new_instance (cls, obj) :
"""Create a copy of C{obj} which is an instance of C{cls}
"""
result = object.__new__(cls)
result.__dict__.update(obj.__dict__)
return result
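`new_instance` rebinds an object's state to an extended class by allocating with `object.__new__` (bypassing `__init__`) and copying `__dict__`. A self-contained sketch of the same pattern with toy classes:

```python
class Base(object):
    def __init__(self, x):
        self.x = x

class Extended(Base):
    def double(self):
        return 2 * self.x

def new_instance(cls, obj):
    # allocate without running __init__, then copy the state over
    result = object.__new__(cls)
    result.__dict__.update(obj.__dict__)
    return result

b = Base(21)
e = new_instance(Extended, b)
assert isinstance(e, Extended)
assert e.double() == 42      # state carried over from b
```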
import snakes.plugins
from snakes.plugins import new_instance
from snakes.pnml import Tree
from snakes.data import iterate
class Cluster (object) :
def __init__ (self, nodes=[], children=[]) :
"""
>>> Cluster(['a', 'b'],
... [Cluster(['1', '2'],
... [Cluster(['A'])]),
... Cluster(['3', '4', '5'],
... [Cluster(['C', 'D'])])])
Cluster(...)
"""
self._nodes = set(nodes)
self._children = []
self._cluster = {}
for child in children :
self.add_child(child)
__pnmltag__ = "clusters"
def __pnmldump__ (self) :
"""
>>> Cluster(['a', 'b'],
... [Cluster(['1', '2'],
... [Cluster(['A'])]),
... Cluster(['3', '4', '5'],
... [Cluster(['C', 'D'])])]).__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>...
<clusters>
<node>
...
</node>
<node>
...
</node>
<clusters>
...
</clusters>
</clusters>
</pnml>
"""
result = Tree(self.__pnmltag__, None)
for node in self._nodes :
result.add_child(Tree("node", node))
for child in self._children :
result.add_child(Tree.from_obj(child))
return result
@classmethod
def __pnmlload__ (cls, tree) :
"""
>>> t = Cluster(['a', 'b'],
... [Cluster(['1', '2'],
... [Cluster(['A'])]),
... Cluster(['3', '4', '5'],
... [Cluster(['C', 'D'])])]).__pnmldump__()
>>> Cluster.__pnmlload__(t)
Cluster(['...', '...'],
[Cluster(['...', '...'],
[Cluster(['A'], [])]),
Cluster(['...', '...', '...'],
[Cluster(['...', '...'], [])])])
"""
result = cls()
for child in tree.children :
if child.name == "node" :
result.add_node(child.data)
else :
result.add_child(child.to_obj())
return result
def __str__ (self) :
return "cluster_%s" % str(id(self)).replace("-", "m")
def __repr__ (self) :
"""
>>> Cluster(['a', 'b'],
... [Cluster(['1', '2'],
... [Cluster(['A'])]),
... Cluster(['3', '4', '5'],
... [Cluster(['C', 'D'])])])
Cluster(['...', '...'],
[Cluster(['...', '...'],
[Cluster(['A'], [])]),
Cluster(['...', '...', '...'],
[Cluster(['...', '...'], [])])])
"""
return "%s([%s], [%s])" % (self.__class__.__name__,
", ".join(repr(n) for n in self.nodes()),
", ".join(repr(c) for c in self.children()))
def copy (self) :
"""
>>> Cluster(['a', 'b'],
... [Cluster(['1', '2'],
... [Cluster(['A'])]),
... Cluster(['3', '4', '5'],
... [Cluster(['C', 'D'])])]).copy()
Cluster(['...', '...'],
[Cluster(['...', '...'],
[Cluster(['A'], [])]),
Cluster(['...', '...', '...'],
[Cluster(['...', '...'], [])])])
"""
return self.__class__(self._nodes,
(child.copy() for child in self._children))
def get_path (self, name) :
"""
>>> Cluster(['a', 'b'],
... [Cluster(['1', '2'],
... [Cluster(['A'])]),
... Cluster(['3', '4', '5'],
... [Cluster(['C', 'D'])])]).get_path('C')
[1, 0]
"""
if name in self._nodes :
return []
else :
for num, child in enumerate(self._children) :
if name in child :
return [num] + child.get_path(name)
def add_node (self, name, path=None) :
"""
>>> c = Cluster(['a', 'b'],
... [Cluster(['1', '2'],
... [Cluster(['A'])]),
... Cluster(['3', '4', '5'],
... [Cluster(['C', 'D'])])])
>>> c.add_node('c')
Cluster([...'c'...], ...)
>>> c.add_node('E', [1, 0])
Cluster([...'E'...], [])
>>> c
Cluster([...'c'...],
[Cluster(['...', '...'],
[Cluster(['A'], [])]),
Cluster(['...', '...', '...'],
[Cluster([...'E'...], [])])])
"""
if path in (None, [], ()) :
self._nodes.add(name)
return self
else :
while len(self._children) <= path[0] :
self._children.append(self.__class__())
target = self._children[path[0]].add_node(name, path[1:])
self._cluster[name] = target
return target
def remove_node (self, name) :
"""
>>> c = Cluster(['a', 'b'],
... [Cluster(['1', '2'],
... [Cluster(['A'])]),
... Cluster(['3', '4', '5'],
... [Cluster(['C', 'D'])])])
>>> c.remove_node('4')
>>> c
Cluster(['...', '...'],
[Cluster(['...', '...'],
[Cluster(['A'], [])]),
Cluster(['...', '...'],
[Cluster(['...', '...'], [])])])
"""
if name in self._cluster :
self._cluster[name].remove_node(name)
else :
self._nodes.remove(name)
def rename_node (self, old, new) :
"""
>>> c = Cluster(['a', 'b'],
... [Cluster(['1', '2'],
... [Cluster(['A'])]),
... Cluster(['3', '4', '5'],
... [Cluster(['C', 'D'])])])
>>> c.rename_node('4', '42')
>>> c
Cluster(['...', '...'],
[Cluster(['...', '...'],
[Cluster(['A'], [])]),
Cluster([...'42'...],
[Cluster(['...', '...'], [])])])
"""
if old in self._cluster :
self._cluster[old].rename_node(old, new)
self._cluster[new] = self._cluster[old]
del self._cluster[old]
elif old in self._nodes :
self._nodes.remove(old)
self._nodes.add(new)
else :
for child in self.children() :
child.rename_node(old, new)
def add_child (self, cluster=None) :
"""
>>> c = Cluster(['a', 'b'],
... [Cluster(['1', '2'],
... [Cluster(['A'])]),
... Cluster(['3', '4', '5'],
... [Cluster(['C', 'D'])])])
>>> c.add_child(c.copy())
>>> c
Cluster(['...', '...'],
[Cluster(['...', '...'],
[Cluster(['A'], [])]),
Cluster(['...', '...', '...'],
[Cluster(['...', '...'], [])]),
Cluster(['...', '...'],
[Cluster(['...', '...'],
[Cluster(['A'], [])]),
Cluster(['...', '...', '...'],
[Cluster(['...', '...'], [])])])])
"""
if cluster is None :
cluster = Cluster()
self._cluster.update(cluster._cluster)
for node in cluster._nodes :
self._cluster[node] = cluster
self._children.append(cluster)
def nodes (self, all=False) :
"""
>>> list(sorted(Cluster(['a', 'b'],
... [Cluster(['1', '2'],
... [Cluster(['A'])]),
... Cluster(['3', '4', '5'],
... [Cluster(['C', 'D'])])]).nodes()))
['a', 'b']
>>> list(sorted(Cluster(['a', 'b'],
... [Cluster(['1', '2'],
... [Cluster(['A'])]),
... Cluster(['3', '4', '5'],
... [Cluster(['C', 'D'])])]).nodes(True)))
['1', '2', '3', '4', '5', 'A', 'C', 'D', 'a', 'b']
"""
if all :
result = set()
for cluster in self :
result.update(cluster.nodes())
return result
else :
return set(self._nodes)
def children (self) :
"""
>>> Cluster(['a', 'b'],
... [Cluster(['1', '2'],
... [Cluster(['A'])]),
... Cluster(['3', '4', '5'],
... [Cluster(['C', 'D'])])]).children()
(Cluster(['...', '...'],
[Cluster(['A'], [])]),
Cluster(['...', '...', '...'],
[Cluster(['...', '...'], [])]))
"""
return tuple(self._children)
def __contains__ (self, name) :
"""
>>> c = Cluster(['a', 'b'],
... [Cluster(['1', '2'],
... [Cluster(['A'])]),
... Cluster(['3', '4', '5'],
... [Cluster(['C', 'D'])])])
>>> 'a' in c
True
>>> 'x' in c
False
>>> '4' in c
True
"""
if name in self._nodes :
return True
for child in self._children :
if name in child :
return True
return False
def __iter__ (self) :
"""
>>> c = Cluster(['a', 'b'],
... [Cluster(['1', '2'],
... [Cluster(['A'])]),
... Cluster(['3', '4', '5'],
... [Cluster(['C', 'D'])])])
>>> for cluster in c :
... print(list(sorted(cluster.nodes())))
['a', 'b']
['1', '2']
['A']
['3', '4', '5']
['C', 'D']
"""
yield self
for child in self._children :
for item in child :
yield item
@snakes.plugins.plugin("snakes.nets")
def extend (module) :
class PetriNet (module.PetriNet) :
def __init__ (self, name, **options) :
module.PetriNet.__init__(self, name, **options)
self.clusters = Cluster()
def copy (self, name=None, **options) :
result = module.PetriNet.copy(self, name, **options)
result.clusters = self.clusters.copy()
return result
def __pnmldump__ (self) :
result = module.PetriNet.__pnmldump__(self)
result.add_child(Tree.from_obj(self.clusters))
return result
@classmethod
def __pnmlload__ (cls, tree) :
result = new_instance(cls, module.PetriNet.__pnmlload__(tree))
result.clusters = tree.child(Cluster.__pnmltag__).to_obj()
return result
def add_place (self, place, **options) :
path = options.pop("cluster", None)
module.PetriNet.add_place(self, place, **options)
self.clusters.add_node(place.name, path)
def remove_place (self, name, **options) :
module.PetriNet.remove_place(self, name, **options)
self.clusters.remove_node(name)
def add_transition (self, trans, **options) :
path = options.pop("cluster", None)
module.PetriNet.add_transition(self, trans, **options)
self.clusters.add_node(trans.name, path)
def remove_transition (self, name, **options) :
module.PetriNet.remove_transition(self, name, **options)
self.clusters.remove_node(name)
def rename_node (self, old, new, **options) :
module.PetriNet.rename_node(self, old, new, **options)
self.clusters.rename_node(old, new)
return PetriNet, Cluster
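The recursive path lookup used by C{Cluster.get_path} above (a node's path is the list of child indices leading to the cluster that holds it) can be sketched standalone. This is a minimal illustration, not the SNAKES API; C{MiniCluster} is a made-up name:

```python
class MiniCluster:
    """A stripped-down hierarchy: a set of node names plus child clusters."""
    def __init__(self, nodes=(), children=()):
        self.nodes = set(nodes)
        self.children = list(children)

    def __contains__(self, name):
        return name in self.nodes or any(name in c for c in self.children)

    def get_path(self, name):
        # A node held directly has the empty path; otherwise recurse
        # into the children, prefixing the matching child's index.
        if name in self.nodes:
            return []
        for num, child in enumerate(self.children):
            if name in child:
                return [num] + child.get_path(name)
        return None  # not present anywhere

c = MiniCluster(['a', 'b'],
                [MiniCluster(['1', '2'], [MiniCluster(['A'])]),
                 MiniCluster(['3', '4', '5'], [MiniCluster(['C', 'D'])])])
print(c.get_path('C'))   # [1, 0]: second child, then its first child
```

The same path format is what C{add_node} consumes, which is why C{get_path} is reused when gluing nets in the C{ops} plugin below.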
"""Draw Petri nets using PyGraphViz
- adds a method C{draw} to C{PetriNet} and C{StateGraph} that creates
a drawing of the object in a file.
>>> import snakes.plugins
>>> snakes.plugins.load('gv', 'snakes.nets', 'nets')
<module ...>
>>> from nets import *
>>> n = PetriNet('N')
>>> n.add_place(Place('p00', [0]))
>>> n.add_transition(Transition('t10'))
>>> n.add_place(Place('p11'))
>>> n.add_transition(Transition('t01'))
>>> n.add_input('p00', 't10', Variable('x'))
>>> n.add_output('p11', 't10', Expression('x+1'))
>>> n.add_input('p11', 't01', Variable('y'))
>>> n.add_output('p00', 't01', Expression('y-1'))
>>> for engine in ('neato', 'dot', 'circo', 'twopi', 'fdp') :
... n.draw(',test-gv-%s.png' % engine, engine=engine)
>>> s = StateGraph(n)
>>> s.build()
>>> s.draw(',test-gv-graph.png')
>>> for node in sorted(n.node(), key=str) :
... node.pos.moveto(-100, -100)
>>> n.layout()
>>> any(node.pos == (-100, -100) for node in sorted(n.node(), key=str))
False
"""
import os, os.path, subprocess, collections
import snakes.plugins
from snakes.plugins.clusters import Cluster
from snakes.compat import *
class Graph (Cluster) :
def __init__ (self, attr) :
Cluster.__init__(self)
self.attributes = {}
self.attr = dict(style="invis")
self.attr.update(attr)
self.edges = collections.defaultdict(list)
def add_node (self, node, attr) :
self.attributes[node] = attr
Cluster.add_node(self, node)
def add_edge (self, src, dst, attr) :
self.edges[src, dst].append(attr)
def has_edge (self, src, dst) :
if (src, dst) in self.edges :
return True
for child in self.children() :
if child.has_edge(src, dst) :
return True
return False
def _dot_attr (self, attr, tag=None) :
if tag is None :
tag = ""
else :
tag = "%s " % tag
return (["%s[" % tag,
["%s=%s" % (key, self.escape(str(val)))
for key, val in attr.items()],
"];"])
def _dot (self) :
body = []
lines = ["subgraph %s {" % self, self._dot_attr(self.attr, "graph"),
body, "}"]
for node in self.nodes() :
body.append(node)
body.append(self._dot_attr(self.attributes[node]))
for child in self.children() :
body.extend(child._dot())
for (src, dst), lst in self.edges.items() :
for attr in lst :
body.append("%s -> %s" % (src, dst))
body.append(self._dot_attr(attr))
return lines
def _dot_text (self, lines, indent=0) :
for l in lines :
if isinstance(l, str) :
yield " "*indent*2 + l
else :
for x in self._dot_text(l, indent+1) :
yield x
def dot (self) :
self.done = set()
return "\n".join(self._dot_text(["digraph {",
['node [label="N",'
' fillcolor="#FFFFFF",'
' fontcolor="#000000",'
' style=filled];',
'edge [style="solid"];',
'graph [splines="true",'
' overlap="false"];'],
self._dot(),
"}"]))
def escape (self, text) :
return '"%s"' % text.replace('"', r'\"')
def render (self, filename, engine="dot", debug=False) :
if engine not in ("dot", "neato", "twopi", "circo", "fdp") :
raise ValueError("unknown GraphViz engine %r" % engine)
outfile = open(filename + ".dot", "w")
outfile.write(self.dot())
outfile.close()
if debug :
dot = subprocess.Popen([engine, "-T" + filename.rsplit(".", 1)[-1],
"-o" + filename, outfile.name],
stdin=subprocess.PIPE)
else :
dot = subprocess.Popen([engine, "-T" + filename.rsplit(".", 1)[-1],
"-o" + filename, outfile.name],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
dot.communicate()
if not debug :
os.unlink(outfile.name)
if dot.returncode != 0 :
raise IOError("%s exited with status %s" % (engine, dot.returncode))
def layout (self, engine="dot", debug=False) :
if engine not in ("dot", "neato", "twopi", "circo", "fdp") :
raise ValueError("unknown GraphViz engine %r" % engine)
if debug :
dot = subprocess.Popen([engine, "-Tplain"],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE)
else :
dot = subprocess.Popen([engine, "-Tplain"],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
if PY3 :
out, err = dot.communicate(bytes(self.dot(),
snakes.defaultencoding))
out = out.decode(snakes.defaultencoding)
else :
out, err = dot.communicate(self.dot())
if dot.returncode != 0 :
raise IOError("%s exited with status %s"
% (engine, dot.returncode))
for line in (l.strip() for l in out.splitlines()
if l.strip().startswith("node")) :
node, name, x, y, rest = line.split(None, 4)
yield name, float(x), float(y)
@snakes.plugins.plugin("snakes.nets",
depends=["snakes.plugins.clusters",
"snakes.plugins.pos"])
def extend (module) :
class PetriNet (module.PetriNet) :
"An extension with a method C{draw}"
def draw (self, filename=None, engine="dot", debug=False,
graph_attr=None, cluster_attr=None,
place_attr=None, trans_attr=None, arc_attr=None) :
"""
@param filename: the name of the image file to create or
C{None} if only the computed graph is needed
@type filename: C{None} or C{str}
@param engine: the layout engine to use: 'dot' (default),
'neato', 'circo', 'twopi' or 'fdp'
@type engine: C{str}
@param place_attr: a function to format places, it will be
called with the place and its attributes dict as
parameters
@type place_attr: C{function(Place,dict)->None}
@param trans_attr: a function to format transitions, it
will be called with the transition and its attributes dict
as parameters
@type trans_attr: C{function(Transition,dict)->None}
@param arc_attr: a function to format arcs, it will be
called with the label and its attributes dict as
parameters
@type arc_attr: C{function(ArcAnnotation,dict)->None}
@param cluster_attr: a function to format clusters of
nodes, it will be called with the cluster and its
attributes dict as parameters
@type cluster_attr: C{function(snakes.plugins.clusters.Cluster,dict)->None}
@return: C{None} if C{filename} is not C{None}, the
computed graph otherwise
@rtype: C{None} or C{pygraphviz.AGraph}
"""
nodemap = dict((node.name, "node_%s" % num)
for num, node in enumerate(self.node()))
g = self._copy(nodemap, self.clusters, cluster_attr,
place_attr, trans_attr)
self._copy_edges(nodemap, g, arc_attr)
if graph_attr :
graph_attr(self, g.attr)
if filename is None :
g.nodemap = nodemap
return g
else :
g.render(filename, engine, debug)
def _copy (self, nodemap, sub, cluster_attr, place_attr, trans_attr) :
attr = dict(style="invis")
if cluster_attr :
cluster_attr(sub, attr)
graph = Graph(attr)
for name in sub.nodes() :
if self.has_place(name) :
node = self.place(name)
attr = dict(shape="ellipse",
label="%s\\n%s" % (node.name, node.tokens))
if place_attr :
place_attr(node, attr)
else :
node = self.transition(name)
attr = dict(shape="rectangle",
label="%s\\n%s" % (node.name, str(node.guard)))
if trans_attr :
trans_attr(node, attr)
graph.add_node(nodemap[name], attr)
for child in sub.children() :
graph.add_child(self._copy(nodemap, child, cluster_attr,
place_attr, trans_attr))
return graph
def _copy_edges (self, nodemap, graph, arc_attr) :
for trans in self.transition() :
for place, label in trans.input() :
attr = dict(arrowhead="normal",
label=" %s " % label)
if arc_attr :
arc_attr(label, attr)
graph.add_edge(nodemap[place.name], nodemap[trans.name],
attr)
for place, label in trans.output() :
attr = dict(arrowhead="normal",
label=" %s " % label)
if arc_attr :
arc_attr(label, attr)
graph.add_edge(nodemap[trans.name], nodemap[place.name],
attr)
def layout (self, xscale=1.0, yscale=1.0, engine="dot",
debug=False, graph_attr=None, cluster_attr=None,
place_attr=None, trans_attr=None, arc_attr=None) :
g = self.draw(None, engine, debug, graph_attr, cluster_attr,
place_attr, trans_attr, arc_attr)
node = dict((v, k) for k, v in g.nodemap.items())
for n, x, y in g.layout(engine, debug) :
self.node(node[n]).pos.moveto(x*xscale, y*yscale)
class StateGraph (module.StateGraph) :
"An extension with a method C{draw}"
def draw (self, filename=None, engine="dot", debug=False,
node_attr=None, edge_attr=None, graph_attr=None) :
"""
@param filename: the name of the image file to create or
C{None} if only the computed graph is needed
@type filename: C{None} or C{str}
@param engine: the layout engine to use: 'dot' (default),
'neato', 'circo', 'twopi' or 'fdp'
@type engine: C{str}
@param node_attr: a function to format nodes, it will be
called with the state number, the C{StateGraph} object
and attributes dict as parameters
@type node_attr: C{function(int,StateGraph,dict)->None}
@param edge_attr: a function to format edges, it will be
called with the transition, its mode and the attributes
dict as parameters
@type edge_attr: C{function(Transition,Substitution,dict)->None}
@param graph_attr: a function to format the graph, it will
be called with the state graph and its attributes dict
as parameters
@type graph_attr: C{function(StateGraph,dict)->None}
@return: C{None} if C{filename} is not C{None}, the
computed graph otherwise
@rtype: C{None} or C{pygraphviz.AGraph}
"""
attr = dict(style="invis",
splines="true")
if graph_attr :
graph_attr(self, attr)
graph = Graph(attr)
for state in self._done :
self.goto(state)
attr = dict(shape="rectangle")
if state == 0 :
attr["shape"] = ""
if node_attr :
node_attr(state, self, attr)
graph.add_node(str(state), attr)
for succ, (trans, mode) in self.successors().items() :
attr = dict(arrowhead="normal",
label="%s\\n%s" % (trans.name, mode))
if edge_attr :
edge_attr(trans, mode, attr)
graph.add_edge(str(state), str(succ), attr)
if filename is None :
return graph
else :
graph.render(filename, engine, debug)
return PetriNet, StateGraph
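The nested-list-to-indented-text trick used by C{Graph._dot_text} above is handy on its own: strings are emitted at the current indent, sub-lists one level deeper. A standalone sketch (C{dot_text} is an illustrative name):

```python
def dot_text(lines, indent=0):
    """Flatten a nested list of strings into indented lines;
    each nesting level adds two spaces of indentation."""
    for item in lines:
        if isinstance(item, str):
            yield "  " * indent + item
        else:
            yield from dot_text(item, indent + 1)

src = "\n".join(dot_text(["digraph {",
                          ["a -> b;", "b -> c;"],
                          "}"]))
print(src)
# digraph {
#   a -> b;
#   b -> c;
# }
```

Building the dot source as a tree of lists and flattening it once at the end keeps the graph-construction code free of indentation bookkeeping.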
"""An example plugin that allows instances class C{PetriNet} to say hello.
A new method C{hello} is added. The constructor is added a keyword
argument C{hello} that must be the C{str} to print when calling
C{hello}, with one C{%s} that will be replaced by the name of the net
when C{hello} is called.
Defining a plugins need writing a module with a single function called
C{extend} that takes a single argument that is the module to be
extended.
Inside the function, extensions of the classes in the module are
defined as normal sub-classes.
The function C{extend} should return the extended module created by
C{snakes.plugins.build} that takes as arguments: the name of the
extended module, the module taken as argument and the sub-classes
defined (expected as a list argument C{*args} in no special order).
If the plugin depends on other plugins, for instance C{foo} and
C{bar}, the function C{extend} should be decorated by
C{@depends('foo', 'bar')}.
Read the source code of this module to have an example
"""
import snakes.plugins
@snakes.plugins.plugin("snakes.nets")
def extend (module) :
"""Extends C{module}
"""
class PetriNet (module.PetriNet) :
"""Extension of the class C{PetriNet} in C{module}
"""
def __init__ (self, name, **args) :
"""When extending an existing method, take care that you
may be working on an already extended class, so you so not
know how its arguments have been changed. So, always use
those from the unextended class plus C{**args}, remove
from it what your plugin needs and pass it to the method
of the extended class if you need to call it.
>>> PetriNet('N').hello()
Hello from N
>>> PetriNet('N', hello='Hi! This is %s...').hello()
Hi! This is N...
@param args: plugin options
@keyword hello: the message to print, with C{%s} where the
net name should appear.
@type hello: C{str}
"""
self._hello = args.pop("hello", "Hello from %s")
module.PetriNet.__init__(self, name, **args)
def hello (self) :
"""A new method C{hello}
>>> n = PetriNet('N')
>>> n.hello()
Hello from N
"""
print(self._hello % self.name)
return PetriNet
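The pop-then-delegate convention described in the docstring above can be shown without SNAKES at all. A minimal sketch with made-up class names (C{Base} stands for the possibly-already-extended class):

```python
class Base:
    def __init__(self, name, **args):
        # The unextended constructor must see only the keywords it
        # knows about, so extensions pop theirs before delegating.
        self.name = name
        if args:
            raise TypeError("unexpected options: %s" % ", ".join(args))

class Extended(Base):
    def __init__(self, name, **args):
        # Pop this plugin's own option first, then pass the rest on.
        self._hello = args.pop("hello", "Hello from %s")
        Base.__init__(self, name, **args)

    def hello(self):
        print(self._hello % self.name)

Extended("N").hello()                             # Hello from N
Extended("N", hello="Hi! This is %s...").hello()  # Hi! This is N...
```

Because every extension removes its own keywords, plugins can be stacked in any order without one plugin's options leaking into another's constructor.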
"""A plugin to add labels to nodes and nets.
"""
from snakes.plugins import plugin, new_instance
from snakes.pnml import Tree
@plugin("snakes.nets")
def extend (module) :
class Transition (module.Transition) :
def label (self, *get, **set) :
if not hasattr(self, "_labels") :
self._labels = {}
result = tuple(self._labels[g] for g in get)
self._labels.update(set)
if len(get) == 1 :
return result[0]
elif len(get) > 1 :
return result
elif len(set) == 0 :
return self._labels.copy()
def has_label (self, name, *names) :
if len(names) == 0 :
return name in self._labels
else :
return tuple(n in self._labels for n in (name,) + names)
def copy (self, name=None, **options) :
if not hasattr(self, "_labels") :
self._labels = {}
result = module.Transition.copy(self, name, **options)
result._labels = self._labels.copy()
return result
def __pnmldump__ (self) :
"""
>>> t = Transition('t')
>>> t.label(foo='bar', spam=42)
>>> t.__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<transition id="t">
<label name="foo">
<object type="str">
bar
</object>
</label>
<label name="spam">
<object type="int">
42
</object>
</label>
</transition>
</pnml>
"""
t = module.Transition.__pnmldump__(self)
if hasattr(self, "_labels") :
for key, val in self._labels.items() :
t.add_child(Tree("label", None,
Tree.from_obj(val),
name=key))
return t
@classmethod
def __pnmlload__ (cls, tree) :
"""
>>> old = Transition('t')
>>> old.label(foo='bar', spam=42)
>>> p = old.__pnmldump__()
>>> new = Transition.__pnmlload__(p)
>>> new
Transition('t', Expression('True'))
>>> new.__class__
<class 'snakes.plugins.labels.Transition'>
>>> new.label('foo', 'spam')
('bar', 42)
"""
t = new_instance(cls, module.Transition.__pnmlload__(tree))
t._labels = dict((lbl["name"], lbl.child().to_obj())
for lbl in tree.get_children("label"))
return t
class Place (module.Place) :
def label (self, *get, **set) :
if not hasattr(self, "_labels") :
self._labels = {}
result = tuple(self._labels[g] for g in get)
self._labels.update(set)
if len(get) == 1 :
return result[0]
elif len(get) > 1 :
return result
elif len(set) == 0 :
return self._labels.copy()
def has_label (self, name, *names) :
if len(names) == 0 :
return name in self._labels
else :
return tuple(n in self._labels for n in (name,) + names)
def copy (self, name=None, **options) :
if not hasattr(self, "_labels") :
self._labels = {}
result = module.Place.copy(self, name, **options)
result._labels = self._labels.copy()
return result
def __pnmldump__ (self) :
"""
>>> p = Place('p')
>>> p.label(foo='bar', spam=42)
>>> p.__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<place id="p">
<type domain="universal"/>
<initialMarking>
<multiset/>
</initialMarking>
<label name="foo">
<object type="str">
bar
</object>
</label>
<label name="spam">
<object type="int">
42
</object>
</label>
</place>
</pnml>
"""
t = module.Place.__pnmldump__(self)
if hasattr(self, "_labels") :
for key, val in self._labels.items() :
t.add_child(Tree("label", None,
Tree.from_obj(val),
name=key))
return t
@classmethod
def __pnmlload__ (cls, tree) :
"""
>>> old = Place('p')
>>> old.label(foo='bar', spam=42)
>>> p = old.__pnmldump__()
>>> new = Place.__pnmlload__(p)
>>> new
Place('p', MultiSet([]), tAll)
>>> new.__class__
<class 'snakes.plugins.labels.Place'>
>>> new.label('foo', 'spam')
('bar', 42)
"""
p = new_instance(cls, module.Place.__pnmlload__(tree))
p._labels = dict((lbl["name"], lbl.child().to_obj())
for lbl in tree.get_children("label"))
return p
class PetriNet (module.PetriNet) :
def label (self, *get, **set) :
if not hasattr(self, "_labels") :
self._labels = {}
result = tuple(self._labels[g] for g in get)
self._labels.update(set)
if len(get) == 1 :
return result[0]
elif len(get) > 1 :
return result
elif len(set) == 0 :
return self._labels.copy()
def has_label (self, name, *names) :
if len(names) == 0 :
return name in self._labels
else :
return tuple(n in self._labels for n in (name,) + names)
def copy (self, name=None, **options) :
if not hasattr(self, "_labels") :
self._labels = {}
result = module.PetriNet.copy(self, name, **options)
result._labels = self._labels.copy()
return result
def __pnmldump__ (self) :
"""
>>> n = PetriNet('n')
>>> n.label(foo='bar', spam=42)
>>> n.__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<net id="n">
<label name="foo">
<object type="str">
bar
</object>
</label>
<label name="spam">
<object type="int">
42
</object>
</label>
</net>
</pnml>
"""
t = module.PetriNet.__pnmldump__(self)
if hasattr(self, "_labels") :
for key, val in self._labels.items() :
t.add_child(Tree("label", None,
Tree.from_obj(val),
name=key))
return t
@classmethod
def __pnmlload__ (cls, tree) :
"""
>>> old = PetriNet('n')
>>> old.label(foo='bar', spam=42)
>>> p = old.__pnmldump__()
>>> new = PetriNet.__pnmlload__(p)
>>> new
PetriNet('n')
>>> new.__class__
<class 'snakes.plugins.labels.PetriNet'>
>>> new.label('foo', 'spam')
('bar', 42)
"""
n = new_instance(cls, module.PetriNet.__pnmlload__(tree))
n._labels = dict((lbl["name"], lbl.child().to_obj())
for lbl in tree.get_children("label"))
return n
def merge_places (self, target, sources, **options) :
module.PetriNet.merge_places(self, target, sources, **options)
new = self.place(target)
for place in sources :
new.label(**dict(self.place(place).label()))
def merge_transitions (self, target, sources, **options) :
module.PetriNet.merge_transitions(self, target, sources, **options)
new = self.transition(target)
for trans in sources :
new.label(**dict(self.transition(trans).label()))
return Transition, Place, PetriNet
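The combined getter/setter signature C{label(*get, **set)} used three times above can be sketched standalone (C{Labelled} is an illustrative name):

```python
class Labelled:
    def label(self, *get, **set):
        """Read the labels named in *get and update from **set: with
        one name return its value, with several a tuple, and with no
        gets and no sets return a copy of the whole mapping."""
        if not hasattr(self, "_labels"):
            self._labels = {}
        result = tuple(self._labels[g] for g in get)  # read first
        self._labels.update(set)                      # then write
        if len(get) == 1:
            return result[0]
        elif len(get) > 1:
            return result
        elif len(set) == 0:
            return self._labels.copy()

obj = Labelled()
obj.label(foo="bar", spam=42)     # pure set: returns None
print(obj.label("foo"))           # bar
print(obj.label("foo", "spam"))   # ('bar', 42)
```

Note that gets are evaluated before sets are applied, so C{obj.label("foo", foo="new")} returns the old value while storing the new one.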
"""A plugin to compose nets.
The compositions are based on place status and automatically merge
some nodes (buffers and variables, tick transitions).
>>> import snakes.plugins
>>> snakes.plugins.load('ops', 'snakes.nets', 'nets')
<module ...>
>>> from nets import *
>>> from snakes.plugins.status import entry, internal, exit, buffer
>>> basic = PetriNet('basic')
>>> basic.add_place(Place('e', status=entry))
>>> basic.add_place(Place('x', status=exit))
>>> basic.add_transition(Transition('t'))
>>> basic.add_input('e', 't', Value(1))
>>> basic.add_output('x', 't', Value(2))
>>> basic.add_place(Place('b', [1], status=buffer('buf')))
>>> n = basic.copy()
>>> n.hide(entry)
>>> n.node('e').status
Status(None)
>>> n.hide(buffer('buf'), buffer(None))
>>> n.node('b').status
Buffer('buffer')
>>> n = basic / 'buf'
>>> n.node('[b/buf]').status
Buffer('buffer')
>>> n = basic & basic
>>> n.status(internal)
('[x&e]',)
>>> n.place('[x&e]').pre
{'[t&]': Value(2)}
>>> n.place('[x&e]').post
{'[&t]': Value(1)}
>>> n.status(buffer('buf'))
('[b&b]',)
>>> n = basic + basic
>>> n.status(entry)
('[e+e]',)
>>> list(sorted(n.place('[e+e]').post.items()))
[('[+t]', Value(1)), ('[t+]', Value(1))]
>>> n.status(exit)
('[x+x]',)
>>> list(sorted(n.place('[x+x]').pre.items()))
[('[+t]', Value(2)), ('[t+]', Value(2))]
>>> n = basic * basic
>>> n.status(entry)
('[e,x*e]',)
>>> n.place('[e,x*e]').post
{'[t*]': Value(1), '[*t]': Value(1)}
>>> n.place('[e,x*e]').pre
{'[t*]': Value(2)}
>>> n1 = basic.copy()
>>> n1.declare('global x; x=1')
>>> n2 = basic.copy()
>>> n2.globals['y'] = 2
>>> n = n1 + n2
>>> n.globals['x'], n.globals['y']
(1, 2)
>>> n._declare
['global x; x=1']
"""
import snakes.plugins
from snakes.plugins.status import Status, entry, exit, internal
from snakes.data import cross
from snakes.plugins.clusters import Cluster
def _glue (op, one, two) :
result = one.__class__("(%s%s%s)" % (one.name, op, two.name))
def new (name) :
return "[%s%s]" % (name, op)
for net in (one, two) :
result.clusters.add_child(Cluster())
result._declare = list(set(result._declare) | set(net._declare))
result.globals.update(net.globals)
for place in net.place() :
result.add_place(place.copy(new(place.name)),
cluster=[-1]+net.clusters.get_path(place.name))
for trans in net.transition() :
result.add_transition(trans.copy(new(trans.name)),
cluster=[-1]+net.clusters.get_path(trans.name))
for place, label in trans.input() :
result.add_input(new(place.name),
new(trans.name),
label.copy())
for place, label in trans.output() :
result.add_output(new(place.name),
new(trans.name),
label.copy())
def new (name) :
return "[%s%s]" % (op, name)
for status in result.status :
result.status.merge(status)
new = result.status(status)
if len(new) == 1 :
name = "[%s%s%s]" % (",".join(sorted(one.status(status))),
op,
",".join(sorted(two.status(status))))
if name != new[0] :
result.rename_node(new[0], name)
return result
@snakes.plugins.plugin("snakes.nets",
depends=["snakes.plugins.clusters",
"snakes.plugins.status"])
def extend (module) :
"Build the extended module"
class PetriNet (module.PetriNet) :
def __or__ (self, other) :
"Parallel"
return _glue("|", self, other)
def __and__ (self, other) :
"Sequence"
result = _glue("&", self, other)
remove = set()
for x, e in cross((self.status(exit), other.status(entry))) :
new = "[%s&%s]" % (x, e)
new_x, new_e = "[%s&]" % x, "[&%s]" % e
result.merge_places(new, (new_x, new_e), status=internal)
remove.update((new_x, new_e))
for p in remove :
result.remove_place(p)
return result
def __add__ (self, other) :
"Choice"
result = _glue("+", self, other)
for status in (entry, exit) :
remove = set()
for l, r in cross((self.status(status),
other.status(status))) :
new = "[%s+%s]" % (l, r)
new_l, new_r = "[%s+]" % l, "[+%s]" % r
result.merge_places(new, (new_l, new_r), status=status)
remove.update((new_l, new_r))
for p in remove :
result.remove_place(p)
return result
def __mul__ (self, other) :
"Iteration"
result = _glue("*", self, other)
remove = set()
for e1, x1, e2 in cross((self.status(entry),
self.status(exit),
other.status(entry))) :
new = "[%s,%s*%s]" % (e1, x1, e2)
new_e1, new_x1 = "[%s*]" % e1, "[%s*]" % x1
new_e2 = "[*%s]" % e2
result.merge_places(new, (new_e1, new_x1, new_e2),
status=entry)
remove.update((new_e1, new_x1, new_e2))
for p in remove :
result.remove_place(p)
return result
def hide (self, old, new=None) :
if new is None :
new = Status(None)
for node in self.status(old) :
self.set_status(node, new)
def __div__ (self, name) :
result = self.copy()
for node in result.node() :
result.rename_node(node.name, "[%s/%s]" % (node, name))
for status in result.status :
if status._value == name :
result.hide(status, status.__class__(status._name, None))
return result
def __truediv__ (self, other) :
return self.__div__(other)
return PetriNet
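The renaming scheme behind C{__and__} above (left-operand nodes become C{[name&]}, right-operand nodes C{[&name]}, then every exit of the left is merged with every entry of the right) can be sketched standalone; C{sequence_merges} is an illustrative name, not a SNAKES function:

```python
from itertools import product

def sequence_merges(exits, entries):
    """For A & B, yield each (pair to merge, merged name): every exit
    place of A is fused with every entry place of B, and the merged
    place's name records both originals."""
    for x, e in product(exits, entries):
        yield ("[%s&]" % x, "[&%s]" % e), "[%s&%s]" % (x, e)

for pair, merged in sequence_merges(["x1", "x2"], ["e"]):
    print(pair, "->", merged)
```

Encoding the operation in the node names keeps every composed net collision-free and makes the doctests above (C{'[x&e]'}, C{'[e+e]'}, ...) readable.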
"""A plugin to add positions to the nodes.
- C{Place} and C{Transition} constructors accept an optional
argument C{pos=(x,y)} to set their position
- C{Place} and C{Transition} gain an attribute C{pos} that is a
pair of numbers with attributes C{x} and C{y} and methods
C{shift(dx, dy)} and C{moveto(x, y)}
- Petri nets gain a method C{bbox()} that returns a pair of
extrema C{((xmin, ymin), (xmax, ymax))}, a method C{shift(dx, dy)}
that shifts all the nodes, and a method C{transpose()} that
rotates the net so that the top-down direction becomes
left-right
>>> import snakes.plugins
>>> snakes.plugins.load('pos', 'snakes.nets', 'nets')
<module ...>
>>> from nets import PetriNet, Place, Transition
>>> n = PetriNet('N')
>>> n.add_place(Place('p00'))
>>> n.add_transition(Transition('t10', pos=(1, 0)))
>>> n.add_place(Place('p11', pos=(1, 1)))
>>> n.add_transition(Transition('t01', pos=(0, 1)))
>>> n.node('t10').pos
Position(1, 0)
>>> n.node('t10').pos.x
1
>>> n.node('t10').pos.y
0
>>> n.node('t10').pos.y = 1
Traceback (most recent call last):
...
AttributeError: readonly attribute
>>> n.node('t10').pos()
(1, 0)
>>> n.bbox()
((0, 0), (1, 1))
>>> n.shift(1, 2)
>>> n.bbox()
((1, 2), (2, 3))
>>> n.node('t01').copy().pos
Position(1, 3)
>>> n.transpose()
>>> n.node('t01').pos
Position(-3, 1)
"""
from snakes import SnakesError
from snakes.compat import *
from snakes.plugins import plugin, new_instance
from snakes.pnml import Tree
class Position (object) :
"The position of a node"
def __init__ (self, x, y) :
self.__dict__["x"] = x
self.__dict__["y"] = y
def __str__ (self) :
return "(%s, %s)" % (str(self.x), str(self.y))
def __repr__ (self) :
return "Position(%s, %s)" % (str(self.x), str(self.y))
def __setattr__ (self, name, value) :
if name in ("x", "y") :
raise AttributeError("readonly attribute")
else :
self.__dict__[name] = value
def moveto (self, x, y) :
self.__init__(x, y)
def shift (self, dx, dy) :
self.__init__(self.x + dx, self.y + dy)
def __getitem__ (self, rank) :
if rank == 0 :
return self.x
elif rank == 1 :
return self.y
else :
raise IndexError("Position index out of range")
def __iter__ (self) :
yield self.x
yield self.y
def __call__ (self) :
return (self.x, self.y)
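The read-only-coordinates trick in C{Position} above (writing through C{__dict__} in the constructor, then rejecting assignments in C{__setattr__}) works in plain Python; a minimal sketch with an illustrative class name:

```python
class ReadOnlyPoint:
    def __init__(self, x, y):
        # Write to __dict__ directly: this bypasses __setattr__.
        self.__dict__["x"] = x
        self.__dict__["y"] = y

    def __setattr__(self, name, value):
        if name in ("x", "y"):
            raise AttributeError("readonly attribute")
        self.__dict__[name] = value

    def moveto(self, x, y):
        # The class can still move the point: __init__ rewrites
        # __dict__ without going through __setattr__.
        self.__init__(x, y)

p = ReadOnlyPoint(1, 0)
p.moveto(3, 4)
print(p.x, p.y)   # 3 4
```

So C{p.x = 5} raises C{AttributeError}, while C{moveto} and C{shift} remain the sanctioned ways to change a position.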
@plugin("snakes.nets")
def extend (module) :
class Place (module.Place) :
def __init__ (self, name, tokens=[], check=None, **args) :
x, y = args.pop("pos", (0, 0))
self.pos = Position(x, y)
module.Place.__init__(self, name, tokens, check, **args)
def copy (self, name=None, **args) :
x, y = args.pop("pos", self.pos())
result = module.Place.copy(self, name, **args)
result.pos.moveto(x, y)
return result
def __pnmldump__ (self) :
"""
>>> p = Place('p', pos=(1, 2))
>>> p.__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<place id="p">
<type domain="universal"/>
<initialMarking>
<multiset/>
</initialMarking>
<graphics>
<position x="1" y="2"/>
</graphics>
</place>
</pnml>
"""
t = module.Place.__pnmldump__(self)
try :
gfx = t.child("graphics")
except SnakesError :
gfx = Tree("graphics", None)
t.add_child(gfx)
gfx.add_child(Tree("position", None,
x=str(self.pos.x),
y=str(self.pos.y)))
return t
@classmethod
def __pnmlload__ (cls, tree) :
"""
>>> old = Place('p', pos=(1, 2))
>>> p = old.__pnmldump__()
>>> new = Place.__pnmlload__(p)
>>> new.pos
Position(1, 2)
>>> new
Place('p', MultiSet([]), tAll)
>>> new.__class__
<class 'snakes.plugins.pos.Place'>
"""
result = new_instance(cls, module.Place.__pnmlload__(tree))
try :
p = tree.child("graphics").child("position")
x, y = eval(p["x"]), eval(p["y"])
result.pos = Position(x, y)
except SnakesError :
result.pos = Position(0, 0)
return result
class Transition (module.Transition) :
def __init__ (self, name, guard=None, **args) :
x, y = args.pop("pos", (0, 0))
self.pos = Position(x, y)
module.Transition.__init__(self, name, guard, **args)
def copy (self, name=None, **args) :
x, y = args.pop("pos", self.pos())
result = module.Transition.copy(self, name, **args)
result.pos.moveto(x, y)
return result
def __pnmldump__ (self) :
"""
>>> t = Transition('t', pos=(2, 1))
>>> t.__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<transition id="t">
<graphics>
<position x="2" y="1"/>
</graphics>
</transition>
</pnml>
"""
t = module.Transition.__pnmldump__(self)
try :
gfx = t.child("graphics")
except SnakesError :
gfx = Tree("graphics", None)
t.add_child(gfx)
gfx.add_child(Tree("position", None,
x=str(self.pos.x),
y=str(self.pos.y)))
return t
@classmethod
def __pnmlload__ (cls, tree) :
"""
>>> old = Transition('t', pos=(2, 1))
>>> p = old.__pnmldump__()
>>> new = Transition.__pnmlload__(p)
>>> new.pos
Position(2, 1)
>>> new
Transition('t', Expression('True'))
>>> new.__class__
<class 'snakes.plugins.pos.Transition'>
"""
result = new_instance(cls, module.Transition.__pnmlload__(tree))
try :
p = tree.child("graphics").child("position")
x, y = eval(p["x"]), eval(p["y"])
result.pos = Position(x, y)
except SnakesError :
result.pos = Position(0, 0)
return result
class PetriNet (module.PetriNet) :
def add_place (self, place, **args) :
if "pos" in args :
x, y = args.pop("pos")
place.pos.moveto(x, y)
module.PetriNet.add_place(self, place, **args)
def add_transition (self, trans, **args) :
if "pos" in args :
x, y = args.pop("pos")
trans.pos.moveto(x, y)
module.PetriNet.add_transition(self, trans, **args)
def merge_places (self, target, sources, **args) :
pos = args.pop("pos", None)
module.PetriNet.merge_places(self, target, sources, **args)
if pos is None :
pos = reduce(complex.__add__,
(complex(*self._place[name].pos())
for name in sources)) / len(sources)
x, y = pos.real, pos.imag
else :
x, y = pos
self._place[target].pos.moveto(x, y)
def merge_transitions (self, target, sources, **args) :
pos = args.pop("pos", None)
module.PetriNet.merge_transitions(self, target, sources, **args)
if pos is None :
pos = reduce(complex.__add__,
(complex(*self._trans[name].pos())
for name in sources)) / len(sources)
x, y = pos.real, pos.imag
else :
x, y = pos
self._trans[target].pos.moveto(x, y)
def bbox (self) :
if len(self._node) == 0 :
return (0, 0), (0, 0)
else :
nodes = iter(self._node.values())
xmin, ymin = next(nodes).pos()
xmax, ymax = xmin, ymin
for n in nodes :
x, y = n.pos()
xmin = min(xmin, x)
xmax = max(xmax, x)
ymin = min(ymin, y)
ymax = max(ymax, y)
return (xmin, ymin), (xmax, ymax)
def shift (self, dx, dy) :
for node in self.node() :
node.pos.shift(dx, dy)
def transpose (self) :
for node in self.node() :
x, y = node.pos()
node.pos.moveto(-y, x)
return Place, Transition, PetriNet, Position
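The default position of a merged node, computed in `merge_places` and `merge_transitions` above, is the centroid of the merged nodes, obtained by summing their coordinates as complex numbers. A minimal standalone sketch of that trick (the `centroid` name is illustrative, not part of SNAKES):

```python
from functools import reduce

def centroid(positions):
    # Sum (x, y) pairs as complex numbers and divide by the count,
    # as PetriNet.merge_places does when no "pos" argument is given.
    total = reduce(complex.__add__,
                   (complex(x, y) for x, y in positions))
    mean = total / len(positions)
    return (mean.real, mean.imag)

centroid([(0, 0), (2, 0), (1, 3)])   # the centroid (1.0, 1.0)
```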
from snakes.plugins import plugin
from snakes.pnml import Tree, loads, dumps
import imp, sys, socket, traceback, operator
class QueryError (Exception) :
pass
class Query (object) :
def __init__ (self, name, *larg, **karg) :
self._name = name
self._larg = tuple(larg)
self._karg = dict(karg)
__pnmltag__ = "query"
def __pnmldump__ (self) :
"""
>>> Query('set', 'x', 42).__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<query name="set">
<argument>
<object type="str">
x
</object>
</argument>
<argument>
<object type="int">
42
</object>
</argument>
</query>
</pnml>
>>> Query('test', x=1).__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<query name="test">
<keyword name="x">
<object type="int">
1
</object>
</keyword>
</query>
</pnml>
>>> Query('test', 'x', 42, y=1).__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<query name="test">
<argument>
<object type="str">
x
</object>
</argument>
<argument>
<object type="int">
42
</object>
</argument>
<keyword name="y">
<object type="int">
1
</object>
</keyword>
</query>
</pnml>
>>> Query('set', 'x', Query('call', 'x.upper')).__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<query name="set">
<argument>
<object type="str">
x
</object>
</argument>
<argument>
<query name="call">
<argument>
<object type="str">
x.upper
</object>
</argument>
</query>
</argument>
</query>
</pnml>
"""
children = []
for arg in self._larg :
children.append(Tree("argument", None,
Tree.from_obj(arg)))
for name, value in self._karg.items() :
children.append(Tree("keyword", None,
Tree.from_obj(value),
name=name))
return Tree(self.__pnmltag__, None, *children, **{"name": self._name})
@classmethod
def __pnmlload__ (cls, tree) :
"""
>>> Tree._tag2obj = {'query': Query}
>>> t = Query('test', 'x', 42, y=1).__pnmldump__()
>>> q = Query.__pnmlload__(t)
>>> q._name
'test'
>>> q._larg
('x', 42)
>>> q._karg
{'y': 1}
"""
larg = (child.child().to_obj()
for child in tree.get_children("argument"))
karg = dict((child["name"], child.child().to_obj())
for child in tree.get_children("keyword"))
return cls(tree["name"], *larg, **karg)
def run (self, envt) :
"""
>>> import imp
>>> env = imp.new_module('environment')
>>> Query('set', 'x', 'hello').run(env)
>>> env.x
'hello'
>>> Query('set', 'x', Query('call', 'x.upper')).run(env)
>>> env.x
'HELLO'
>>> Query('test', 1, 2, 3).run(env)
Traceback (most recent call last):
...
QueryError: unknown query 'test'
"""
try :
handler = getattr(self, "_run_%s" % self._name)
except AttributeError :
raise QueryError("unknown query %r" % self._name)
self._envt = envt
larg = tuple(a.run(envt) if isinstance(a, self.__class__) else a
for a in self._larg)
karg = dict((n, v.run(envt) if isinstance(v, self.__class__) else v)
for n, v in self._karg.items())
try :
return handler(*larg, **karg)
except TypeError :
cls, val, tb = sys.exc_info()
try :
fun, msg = str(val).strip().split("()", 1)
except :
raise val
if fun.startswith("_run_") and hasattr(self, fun) :
raise TypeError(fun[5:] + "()" + msg)
raise val
def _get_object (self, path) :
obj = self._envt
for n in path :
obj = getattr(obj, n)
return obj
def _run_set (self, name, value) :
"""
>>> import imp
>>> env = imp.new_module('environment')
>>> Query('set', 'x', 1).run(env)
>>> env.x
1
"""
path = name.split(".")
setattr(self._get_object(path[:-1]), path[-1], value)
def _run_get (self, name) :
"""
>>> import imp
>>> env = imp.new_module('environment')
>>> env.x = 2
>>> Query('get', 'x').run(env)
2
"""
path = name.split(".")
return self._get_object(path)
def _run_del (self, name) :
"""
>>> import imp
>>> env = imp.new_module('environment')
>>> env.x = 2
>>> Query('del', 'x').run(env)
>>> env.x
Traceback (most recent call last):
...
AttributeError: 'module' object has no attribute 'x'
"""
path = name.split(".")
delattr(self._get_object(path[:-1]), path[-1])
def _run_call (self, fun, *larg, **karg) :
"""
>>> import imp
>>> env = imp.new_module('environment')
>>> env.x = 'hello'
>>> Query('call', 'x.center', 7).run(env)
' hello '
>>> env.__dict__.update(__builtins__)
>>> Query('call', Query('call', 'getattr',
... Query('call', 'x.center', 7),
... 'upper')).run(env)
' HELLO '
"""
if isinstance(fun, str) :
fun = self._get_object(fun.split("."))
return fun(*larg, **karg)
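The dotted-name resolution used by `_run_set`, `_run_get`, `_run_del` and `_run_call` relies on `_get_object`, which walks `getattr` from the environment module. A standalone sketch of that resolution (the `get_object` helper is hypothetical, using `types.ModuleType` where the code above uses the older `imp.new_module`):

```python
import types

def get_object(env, path):
    # Walk a dotted attribute path from the environment object,
    # mirroring Query._get_object above.
    obj = env
    for name in path:
        obj = getattr(obj, name)
    return obj

# the environment is a plain module object, as in the doctests above
env = types.ModuleType("environment")
env.x = "hello"
# "x.upper" resolves to the bound method env.x.upper
method = get_object(env, "x.upper".split("."))
```

Note that `get_object(env, [])` returns the environment itself, which is why `_run_set` can pass `path[:-1]` when setting a top-level name.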
@plugin("snakes.nets")
def extend (module) :
class UDPServer (object) :
def __init__ (self, port, size=2**20, verbose=0) :
self._size = size
self._verbose = verbose
self._sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
self._sock.bind(("", port))
self._env = imp.new_module("snk")
self._env.__dict__.update(__builtins__)
self._env.__dict__.update(operator.__dict__)
self._env.__dict__.update(module.__dict__)
def recvfrom (self) :
return self._sock.recvfrom(self._size)
def sendto (self, data, address) :
self._sock.sendto(data.strip() + "\n", address)
def run (self) :
while True :
data, address = self.recvfrom()
data = data.strip()
if self._verbose :
print("# query from %s:%u" % address)
try :
if self._verbose > 1 :
print(data)
res = loads(data).run(self._env)
if res is None :
res = Tree("answer", None, status="ok")
else :
res = Tree("answer", None, Tree.from_obj(res),
status="ok")
except :
cls, val, tb = sys.exc_info()
res = Tree("answer", str(val).strip(),
error=cls.__name__, status="error")
if self._verbose > 1 :
print("# error")
for entry in traceback.format_exception(cls, val, tb) :
for line in entry.splitlines() :
print("## %s" % line)
if self._verbose :
if self._verbose > 1 :
print("# answer")
print(res.to_pnml())
elif res["status"] == "error" :
print("# answer: %s: %s" % (res["error"], res.data))
else :
print("# answer: %s" % res["status"])
self.sendto(res.to_pnml(), address)
class TCPServer (UDPServer) :
def __init__ (self, port, size=2**20, verbose=0) :
self._size = size
self._verbose = verbose
self._sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self._sock.bind(("", port))
self._sock.listen(1)
self._env = imp.new_module("snk")
self._env.__dict__.update(__builtins__)
self._env.__dict__.update(operator.__dict__)
self._env.__dict__.update(module.__dict__)
self._connection = {}
def recvfrom (self) :
connection, address = self._sock.accept()
self._connection[address] = connection
parts = []
while True :
parts.append(connection.recv(self._size))
if len(parts[-1]) < self._size :
break
return "".join(parts), address
def sendto (self, data, address) :
self._connection[address].send(data.rstrip() + "\n")
self._connection[address].close()
del self._connection[address]
return Query, UDPServer, TCPServer
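`Query.run` dispatches on the query name by looking up a method called `_run_<name>`, evaluating nested queries first. A minimal standalone sketch of this dispatch pattern (the `MiniQuery` class is illustrative, not part of SNAKES, and uses `ValueError` in place of `QueryError`):

```python
class MiniQuery:
    def __init__(self, name, *args):
        self.name = name
        self.args = args
    def run(self):
        # look up the handler by name, as Query.run does
        try:
            handler = getattr(self, "_run_%s" % self.name)
        except AttributeError:
            raise ValueError("unknown query %r" % self.name)
        # evaluate nested queries before calling the handler
        args = tuple(a.run() if isinstance(a, MiniQuery) else a
                     for a in self.args)
        return handler(*args)
    def _run_add(self, left, right):
        return left + right

MiniQuery("add", 1, MiniQuery("add", 2, 3)).run()   # evaluates to 6
```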
"""A plugin to add nodes status.
Several status are defined by default: C{entry}, C{internal}, C{exit},
C{buffer}, C{safebuffer} for places and {tick} for transitions.
>>> import snakes.plugins
>>> snakes.plugins.load('status', 'snakes.nets', 'nets')
<module ...>
>>> from nets import *
>>> import snakes.plugins.status as status
>>> n = PetriNet('N')
>>> n.add_place(Place('p1'), status=status.entry)
>>> n.place('p1')
Place('p1', MultiSet([]), tAll, status=Status('entry'))
"""
import operator, weakref
import snakes.plugins
from snakes import ConstraintError, SnakesError
from snakes.plugins import new_instance
from snakes.compat import *
from snakes.data import iterate
from snakes.pnml import Tree
class Status (object) :
"The status of a node"
def __init__ (self, name, value=None) :
"""Initialize with a status name and an optional value
@param name: the name of the status
@type name: C{str}
@param value: an optional additional value to distinguish
between statuses with the same name
@type value: hashable
"""
self._name = name
self._value = value
__pnmltag__ = "status"
def __pnmldump__ (self) :
"""Dump a C{Status} as a PNML tree
@return: PNML tree
@rtype: C{pnml.Tree}
>>> Status('foo', 42).__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>...
<status>
<name>
foo
</name>
<value>
<object type="int">
42
</object>
</value>
</status>
</pnml>
"""
return Tree(self.__pnmltag__, None,
Tree("name", self._name),
Tree("value", None, Tree.from_obj(self._value)))
@classmethod
def __pnmlload__ (cls, tree) :
"""Create a C{Status} from a PNML tree
@param tree: the tree to convert
@type tree: C{pnml.Tree}
@return: the status built
@rtype: C{Status}
>>> t = Status('foo', 42).__pnmldump__()
>>> Status.__pnmlload__(t)
Status('foo',42)
"""
return cls(tree.child("name").data,
tree.child("value").child().to_obj())
def copy (self) :
"""Return a copy of the status
A status is normally never mutated, so this may be useless,
unless the user decides to store additional data in statuses.
@return: a copy of the status
@rtype: C{Status}
"""
return self.__class__(self._name, self._value)
def __str__ (self) :
"""Short textual representation
>>> str(internal)
'internal'
>>> str(buffer('buf'))
'buffer(buf)'
@return: a textual representation
@rtype: C{str}
"""
if self._value is None :
return str(self._name)
else :
return "%s(%s)" % (self._name, self._value)
def __repr__ (self) :
"""Detailed textual representation
>>> repr(internal)
"Status('internal')"
>>> repr(buffer('buf'))
"Buffer('buffer','buf')"
@return: a textual representation suitable for C{eval}
@rtype: C{str}
"""
if self._value is None :
return "%s(%s)" % (self.__class__.__name__, repr(self._name))
else :
return "%s(%s,%s)" % (self.__class__.__name__,
repr(self._name), repr(self._value))
def __hash__ (self) :
"""Hash a status
@return: the hash value
@rtype: C{int}
"""
return hash((self._name, self._value))
def __eq__ (self, other) :
"""Compares two status for equality
They are equal if they have the same name and value
>>> internal == Status('internal')
True
>>> Status('a', 1) == Status('a', 2)
False
>>> Status('a', 1) == Status('b', 1)
False
@param other: a status
@type other: C{Status}
@return: C{True} if they are equal, C{False} otherwise
@rtype: C{bool}
"""
try :
return (self._name, self._value) == (other._name, other._value)
except :
return False
def __ne__ (self, other) :
return not(self == other)
def __add__ (self, other) :
if self == other :
return self.copy()
else :
raise ConstraintError("incompatible status")
def name (self) :
return self._name
def value (self) :
return self._value
def merge (self, net, nodes, name=None) :
"""Merge C{nodes} in C{net} into a new node called C{name}
This does nothing by default; other statuses will refine this
method. Merged nodes are removed, only the newly created one
remains.
@param net: the Petri net where nodes should be merged
@type net: C{PetriNet}
@param nodes: a collection of node names to be merged
@type nodes: iterable of C{str}
@param name: the name of the new node, or C{None} if it should
be generated
@type name: C{str}
"""
pass
entry = Status('entry')
exit = Status('exit')
internal = Status('internal')
class Buffer (Status) :
"A status for buffer places"
def merge (self, net, nodes, name=None) :
"""Merge C{nodes} in C{net}
Buffer places with the status C{Buffer('buffer', None)}
are not merged. Other buffer places are merged exactly as
C{PetriNet.merge_places} does.
If C{name} is C{None}, the generated name is a concatenation of
the node names separated by '+', enclosed in parentheses.
>>> import snakes.plugins
>>> snakes.plugins.load('status', 'snakes.nets', 'nets')
<module ...>
>>> from nets import *
>>> n = PetriNet('N')
>>> import snakes.plugins.status as status
>>> buf = status.buffer('buf')
>>> n.add_place(Place('p3', range(2), status=buf))
>>> n.add_place(Place('p4', range(3), status=buf))
>>> n.status.merge(buf, 'b')
>>> p = n.place('b')
>>> p
Place('b', MultiSet([...]), tAll, status=Buffer('buffer','buf'))
>>> p.tokens == MultiSet([0, 0, 1, 1, 2])
True
@param net: the Petri net where places should be merged
@type net: C{PetriNet}
@param nodes: a collection of place names to be merged
@type nodes: iterable of C{str}
@param name: the name of the new place, or C{None} if it should
be generated
@type name: C{str}
"""
if self._value is None :
return
if name is None :
name = "(%s)" % "+".join(sorted(nodes))
net.merge_places(name, nodes, status=self)
for src in nodes :
net.remove_place(src)
def buffer (name) :
"""Generate a buffer status called C{name}
@param name: the name of the buffer
@type name: C{str}
@return: C{Buffer('buffer', name)}
@rtype: C{Buffer}
"""
return Buffer('buffer', name)
class Safebuffer (Buffer) :
"A status for safe buffers (ie, variables) places"
def merge (self, net, nodes, name=None) :
"""Merge C{nodes} in C{net}
Safe buffer places with the status C{Safebuffer('safebuffer',
None)} are not merged. Other safe buffer places are merged if
they all have the same marking, which becomes the marking of
the resulting place. Otherwise, C{ConstraintError} is raised.
If C{name} is C{None}, the generated name is a concatenation of
the node names separated by '+', enclosed in parentheses.
>>> import snakes.plugins
>>> snakes.plugins.load('status', 'snakes.nets', 'nets')
<module ...>
>>> from nets import *
>>> import snakes.plugins.status as status
>>> n = PetriNet('N')
>>> var = status.safebuffer('var')
>>> n.add_place(Place('p5', [1], status=var))
>>> n.add_place(Place('p6', [1], status=var))
>>> n.add_place(Place('p7', [1], status=var))
>>> n.status.merge(var, 'v')
>>> n.place('v')
Place('v', MultiSet([1]), tAll, status=Safebuffer('safebuffer','var'))
>>> n.add_place(Place('p8', [3], status=var))
>>> try : n.status.merge(var, 'vv')
... except ConstraintError : print(sys.exc_info()[1])
incompatible markings
@param net: the Petri net where places should be merged
@type net: C{PetriNet}
@param nodes: a collection of place names to be merged
@type nodes: iterable of C{str}
@param name: the name of the new place, or C{None} if it should
be generated
@type name: C{str}
"""
if self._value is None :
return
marking = net.place(nodes[0]).tokens
for node in nodes[1:] :
if net.place(node).tokens != marking :
raise ConstraintError("incompatible markings")
if name is None :
name = "(%s)" % "+".join(sorted(nodes))
net.merge_places(name, nodes, status=self)
for src in nodes :
net.remove_place(src)
net.set_status(name, self)
net.place(name).reset(marking)
def safebuffer (name) :
"""Generate a safebuffer status called C{name}
@param name: the name of the safebuffer
@type name: C{str}
@return: C{Safebuffer('safebuffer', name)}
@rtype: C{Safebuffer}
"""
return Safebuffer('safebuffer', name)
class Tick (Status) :
"A status for tick transition"
def merge (self, net, nodes, name=None) :
"""Merge C{nodes} in C{net}
Tick transitions are merged exactly as
C{PetriNet.merge_transitions} does.
If C{name} is C{None}, the generated name is a concatenation of
the node names separated by '+', enclosed in parentheses.
>>> import snakes.plugins
>>> snakes.plugins.load('status', 'snakes.nets', 'nets')
<module ...>
>>> from nets import *
>>> import snakes.plugins.status as status
>>> n = PetriNet('N')
>>> tick = status.tick('tick')
>>> n.add_transition(Transition('t1', Expression('x==1'), status=tick))
>>> n.add_transition(Transition('t2', Expression('y==2'), status=tick))
>>> n.add_transition(Transition('t3', Expression('z==3'), status=tick))
>>> n.status.merge(tick, 't')
>>> n.transition('t')
Transition('t', Expression('((...) and (...)) and (...)'), status=Tick('tick','tick'))
@param net: the Petri net where transitions should be merged
@type net: C{PetriNet}
@param nodes: a collection of transition names to be merged
@type nodes: iterable of C{str}
@param name: the name of the new transition, or C{None} if it
should be generated
@type name: C{str}
"""
if self._value is None :
return
if name is None :
name = "(%s)" % "+".join(nodes)
net.merge_transitions(name, nodes, status=self)
for src in nodes :
net.remove_transition(src)
def tick (name) :
"""Generate a tick status called C{name}
@param name: the name of the tick
@type name: C{str}
@return: C{Tick('tick', name)}
@rtype: C{Tick}
"""
return Tick('tick', name)
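The merge policies above differ mainly in how markings combine: C{Buffer} lets C{PetriNet.merge_places} sum the token multisets, while C{Safebuffer} requires all merged places to carry the same marking. A standalone sketch of the two marking rules, using `collections.Counter` as a stand-in for C{MultiSet} (the function names are illustrative, not part of SNAKES):

```python
from collections import Counter

def merge_buffer(markings):
    # buffer rule: the merged place holds the multiset sum
    result = Counter()
    for marking in markings:
        result += marking
    return result

def merge_safebuffer(markings):
    # safebuffer rule: all markings must be identical
    first = markings[0]
    if any(marking != first for marking in markings[1:]):
        raise ValueError("incompatible markings")
    return first
```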
class StatusDict (object) :
"A container to access the nodes of a net by their status"
def __init__ (self, net) :
"""
@param net: the Petri net for which nodes will be recorded
@type net: C{PetriNet}
"""
self._nodes = {}
self._net = weakref.ref(net)
def copy (self, net=None) :
"""
@param net: the Petri net for which nodes will be recorded
(C{None} if it is the same as the copied object)
@type net: C{PetriNet}
"""
if net is None :
net = self._net()
result = self.__class__(net)
for status in self._nodes :
result._nodes[status.copy()] = self._nodes[status].copy()
return result
def __iter__ (self) :
return iter(self._nodes)
def record (self, node) :
"""Called when C{node} is added to the net
@param node: the added node
@type node: C{Node}
"""
if node.status not in self._nodes :
self._nodes[node.status] = set([node.name])
else :
self._nodes[node.status].add(node.name)
def remove (self, node) :
"""Called when C{node} is removed from the net
@param node: the removed node
@type node: C{Node}
"""
if node.status in self._nodes :
self._nodes[node.status].discard(node.name)
if len(self._nodes[node.status]) == 0 :
del self._nodes[node.status]
def __call__ (self, status) :
"""Return the nodes having C{status}
@param status: the searched status
@type status: C{Status}
@return: the node names in the net having this status
@rtype: C{tuple} of C{str}
"""
return tuple(self._nodes.get(status, tuple()))
def merge (self, status, name=None) :
"""Merge the nodes in the net having C{status}
This is a shortcut to call C{status.merge} with the right
parameters.
@param status: the status for which nodes have to be merged
"""
if status :
nodes = self(status)
if len(nodes) > 1 :
status.merge(self._net(), nodes, name)
@snakes.plugins.plugin("snakes.nets")
def extend (module) :
"Build the extended module"
class Place (module.Place) :
def __init__ (self, name, tokens=[], check=None, **args) :
self.status = args.pop("status", Status(None))
module.Place.__init__(self, name, tokens, check, **args)
def copy (self, name=None, **args) :
result = module.Place.copy(self, name, **args)
result.status = self.status.copy()
return result
def __repr__ (self) :
if self.status == Status(None) :
return module.Place.__repr__(self)
else :
return "%s, status=%s)" % (module.Place.__repr__(self)[:-1],
repr(self.status))
def __pnmldump__ (self) :
"""
>>> p = Place('p', status=Status('foo', 42))
>>> p.__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>...
<place id="p">
<type domain="universal"/>
<initialMarking>
<multiset/>
</initialMarking>
<status>
<name>
foo
</name>
<value>
<object type="int">
42
</object>
</value>
</status>
</place>
</pnml>
"""
t = module.Place.__pnmldump__(self)
t.add_child(Tree.from_obj(self.status))
return t
@classmethod
def __pnmlload__ (cls, tree) :
"""
>>> t = Place('p', status=Status('foo', 42)).__pnmldump__()
>>> Place.__pnmlload__(t).status
Status('foo',42)
"""
result = new_instance(cls, module.Place.__pnmlload__(tree))
try :
result.status = tree.child("status").to_obj()
except SnakesError :
result.status = Status(None)
return result
class Transition (module.Transition) :
def __init__ (self, name, guard=None, **args) :
self.status = args.pop("status", Status(None)).copy()
module.Transition.__init__(self, name, guard, **args)
def __pnmldump__ (self) :
"""
>>> p = Transition('p', status=Status('foo', 42))
>>> p.__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>...
<transition id="p">
<status>
<name>
foo
</name>
<value>
<object type="int">
42
</object>
</value>
</status>
</transition>
</pnml>
"""
t = module.Transition.__pnmldump__(self)
t.add_child(Tree.from_obj(self.status))
return t
@classmethod
def __pnmlload__ (cls, tree) :
"""
>>> t = Transition('p', status=Status('foo', 42)).__pnmldump__()
>>> Transition.__pnmlload__(t).status
Status('foo',42)
"""
result = new_instance(cls, module.Transition.__pnmlload__(tree))
try :
result.status = tree.child("status").to_obj()
except SnakesError :
result.status = Status(None)
return result
def copy (self, name=None, **args) :
result = module.Transition.copy(self, name, **args)
result.status = self.status.copy()
return result
def __repr__ (self) :
if self.status == Status(None) :
return module.Transition.__repr__(self)
else :
return "%s, status=%s)" % (module.Transition.__repr__(self)[:-1],
repr(self.status))
class PetriNet (module.PetriNet) :
def __init__ (self, name, **args) :
module.PetriNet.__init__(self, name, **args)
self.status = StatusDict(self)
@classmethod
def __pnmlload__ (cls, tree) :
t = new_instance(cls, module.PetriNet.__pnmlload__(tree))
t.status = StatusDict(t)
return t
def copy (self, name=None, **args) :
result = module.PetriNet.copy(self, name, **args)
result.status = self.status.copy(result)
return result
def add_place (self, place, **args) :
place.status = args.pop("status", place.status)
module.PetriNet.add_place(self, place, **args)
self.status.record(place)
def remove_place (self, name, **args) :
place = self.place(name)
self.status.remove(place)
module.PetriNet.remove_place(self, name, **args)
def add_transition (self, trans, **args) :
trans.status = args.pop("status", trans.status)
module.PetriNet.add_transition(self, trans, **args)
self.status.record(trans)
def remove_transition (self, name, **args) :
trans = self.transition(name)
self.status.remove(trans)
module.PetriNet.remove_transition(self, name, **args)
def set_status (self, node, status) :
node = self.node(node)
self.status.remove(node)
node.status = status
self.status.record(node)
def rename_node (self, old, new, **args) :
old_node = self.node(old).copy()
module.PetriNet.rename_node(self, old, new, **args)
self.status.remove(old_node)
self.status.record(self.node(new))
def copy_place (self, source, targets, **args) :
status = args.pop("status", self.place(source).status)
module.PetriNet.copy_place(self, source, targets, **args)
for new in iterate(targets) :
self.set_status(new, status)
def copy_transition (self, source, targets, **args) :
status = args.pop("status", self.transition(source).status)
module.PetriNet.copy_transition(self, source, targets, **args)
for new in iterate(targets) :
self.set_status(new, status)
def merge_places (self, target, sources, **args) :
if "status" in args :
status = args.pop("status")
else :
status = reduce(operator.add,
(self.place(s).status for s in sources))
module.PetriNet.merge_places(self, target, sources, **args)
self.set_status(target, status)
def merge_transitions (self, target, sources, **args) :
if "status" in args :
status = args.pop("status")
else :
status = reduce(operator.add,
(self.transition(s).status for s in sources))
module.PetriNet.merge_transitions(self, target, sources, **args)
self.set_status(target, status)
return Place, Transition, PetriNet, Status, \
("entry", entry), ("exit", exit), ("internal", internal), \
Buffer, buffer, Safebuffer, safebuffer, Tick, tick
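`StatusDict` above indexes node names by status in sets and drops a bucket as soon as it becomes empty. A standalone sketch of that record/remove/query pattern (the `StatusIndex` name is illustrative, not part of SNAKES):

```python
class StatusIndex:
    def __init__(self):
        self._nodes = {}          # status -> set of node names
    def record(self, name, status):
        self._nodes.setdefault(status, set()).add(name)
    def remove(self, name, status):
        bucket = self._nodes.get(status)
        if bucket is not None:
            bucket.discard(name)
            if not bucket:        # drop empty buckets, as StatusDict does
                del self._nodes[status]
    def __call__(self, status):
        # return the recorded names (sorted here, for determinism)
        return tuple(sorted(self._nodes.get(status, ())))
```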
"""An implementation of the M-nets synchronisation.
This plugin extends the basic Petri net model to provide an
action-based synchronisation scheme that implements that of M-nets.
The plugin proposes a generalisation of the M-nets synchronisation in
that it does not impose a fixed correspondence between action names
and action arities.
- class C{Action} corresponds to a synchronisable action; it has a
name, a send/receive flag and a list of parameters. Actions have no
predetermined arity: only conjugated actions with the same arity
will be able to synchronise.
- class C{MultiAction} corresponds to a multiset of actions. It is
forbidden to build a multiaction that holds a pair of conjugated
actions (this leads to infinite nets when synchronising).
- C{Transition.__init__} accepts a parameter C{actions} that is a
collection of instances of C{Action}; this multiaction is stored in
the attribute C{actions} of the transition.
- C{PetriNet} is given new methods: C{synchronise(action_name)} to
perform the M-net synchronisation, C{restrict(action_name)} to
perform the restriction and C{scope(action_name)} for the scoping.
B{Remark:} the instances of C{Substitution} used in this plugin must
map variable names to instances of C{Variable} or C{Value}, but not to
other variable names.
>>> import snakes.plugins
>>> snakes.plugins.load('synchro', 'snakes.nets', 'nets')
<module ...>
>>> from nets import PetriNet, Place, Transition, Expression
>>> n = PetriNet('N')
>>> n.add_place(Place('e1'))
>>> n.add_place(Place('x1'))
>>> n.add_transition(Transition('t1', guard=Expression('x!=y'),
... actions=[Action('a', True, [Variable('x'), Value(2)]),
... Action('a', True, [Value(3), Variable('y')]),
... Action('b', False, [Variable('x'), Variable('y')])]))
>>> n.add_input('e1', 't1', Variable('x'))
>>> n.add_output('x1', 't1', Variable('z'))
>>> n.add_place(Place('e2'))
>>> n.add_place(Place('x2'))
>>> n.add_transition(Transition('t2', guard=Expression('z>0'),
... actions=[Action('a', False, [Variable('w'), Variable('y')]),
... Action('c', False, [Variable('z')])]))
>>> n.add_input('e2', 't2', Variable('w'))
>>> n.add_output('x2', 't2', Variable('z'))
>>> n.transition('t1').vars() == set(['x', 'y', 'z'])
True
>>> n.transition('t2').copy().vars() == set(['w', 'y', 'z'])
True
>>> n.synchronise('a')
>>> for t in sorted(n.transition(), key=str) :
... print('%s %s' % (t, t.guard))
... for place, label in sorted(t.input(), key=str) :
... print(' %s >> %s' % (place, label))
... for place, label in sorted(t.output(), key=str) :
... print(' %s << %s' % (place, label))
((t1{...}+t2{...})[a(...)]{...}+t2{...})[a(...)] (...)
...
t2 z>0
e2 >> w
x2 << z
>>> n.restrict('a')
>>> [t.name for t in sorted(n.transition(), key=str)]
["((t1{...}+t2{...})[a(...)]{...}+t2{...})[a(...)]",
"((t1{...}+t2{...})[a(...)]{...}+t2{...})[a(...)]"]
"""
from snakes import ConstraintError
from snakes.data import Substitution, WordSet, iterate
from snakes.nets import Value, Variable
from snakes.pnml import Tree
import snakes.plugins
from snakes.plugins import new_instance
from snakes.compat import *
class Action (object) :
def __init__ (self, name, send, params) :
"""
@param name: the name of the action
@type name: C{str}
@param send: a flag indicating whether this is a send or
receive action
@type send: C{bool}
@param params: the list of parameters
@type params: C{list} of C{Variable} or C{Value}
"""
self.name = name
self.send = send
self.params = list(params)
__pnmltag__ = "action"
def __pnmldump__ (self) :
"""
>>> Action('a', True, [Value(1), Variable('x')]).__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>...
<action name="a" send="True">
<value>
<object type="int">
1
</object>
</value>
<variable>
x
</variable>
</action>
</pnml>
"""
result = Tree(self.__pnmltag__, None,
name=self.name,
send=str(self.send))
for param in self.params :
result.add_child(Tree.from_obj(param))
return result
@classmethod
def __pnmlload__ (cls, tree) :
"""
>>> t = Action('a', True, [Value(1), Variable('x')]).__pnmldump__()
>>> Action.__pnmlload__(t)
Action('a', True, [Value(1), Variable('x')])
"""
params = [Tree.to_obj(child) for child in tree.children]
return cls(tree["name"], tree["send"] == "True", params)
def __str__ (self) :
"""
>>> a = Action('a', True, [Value(1), Variable('x')])
>>> str(a)
'a!(1,x)'
>>> a.send = False
>>> str(a)
'a?(1,x)'
"""
if self.send :
return "%s!(%s)" % (self.name, ",".join([str(p) for p in self]))
else :
return "%s?(%s)" % (self.name, ",".join([str(p) for p in self]))
def __repr__ (self) :
"""
>>> a = Action('a', True, [Value(1), Variable('x')])
>>> repr(a)
"Action('a', True, [Value(1), Variable('x')])"
>>> a.send = False
>>> repr(a)
"Action('a', False, [Value(1), Variable('x')])"
"""
return "%s(%s, %s, [%s])" % (self.__class__.__name__, repr(self.name),
str(self.send),
", ".join([repr(p) for p in self]))
def __len__ (self) :
"""Return the number of parameters, aka the arity of the
action.
>>> len(Action('a', True, [Value(1), Variable('x')]))
2
@return: the arity of the action
@rtype: non negative C{int}
"""
return len(self.params)
def __iter__ (self) :
"""Iterate on the parameters
>>> list(Action('a', True, [Value(1), Variable('x')]))
[Value(1), Variable('x')]
"""
for action in self.params :
yield action
def __eq__ (self, other) :
"""Two actions are equal if they have the same name, same send
flags and same parameters.
>>> Action('a', True, [Value(1), Variable('x')]) == Action('a', True, [Value(1), Variable('x')])
True
>>> Action('a', True, [Value(1), Variable('x')]) == Action('b', True, [Value(1), Variable('x')])
False
>>> Action('a', True, [Value(1), Variable('x')]) == Action('a', False, [Value(1), Variable('x')])
False
>>> Action('a', True, [Value(1), Variable('x')]) == Action('a', True, [Value(2), Variable('x')])
False
>>> Action('a', True, [Value(1), Variable('x')]) == Action('a', True, [Value(1)])
False
@param other: the action to compare
@type other: C{Action}
@return: C{True} if the two actions are equal, C{False}
otherwise
@rtype: C{bool}
"""
if self.name != other.name :
return False
elif self.send != other.send :
return False
elif len(self.params) != len(other.params) :
return False
for p, q in zip(self.params, other.params) :
if p != q :
return False
return True
def __ne__ (self, other) :
return not (self == other)
def copy (self, subst=None) :
"""Copy the action, optionally substituting its parameters.
>>> a = Action('a', True, [Variable('x'), Value(2)])
>>> a.copy()
Action('a', True, [Variable('x'), Value(2)])
>>> a = Action('a', True, [Variable('x'), Value(2)])
>>> a.copy(Substitution(x=Value(3)))
Action('a', True, [Value(3), Value(2)])
@param subst: if not C{None}, a substitution to apply to the
parameters of the copy
        @type subst: C{None} or C{Substitution} mapping variable
          names to C{Value} or C{Variable}
@return: a copy of the action, substituted by C{subst} if not
C{None}
@rtype: C{Action}
"""
result = self.__class__(self.name, self.send,
[p.copy() for p in self.params])
if subst is not None :
result.substitute(subst)
return result
def substitute (self, subst) :
"""Substitute the parameters according to C{subst}
>>> a = Action('a', True, [Variable('x'), Value(2)])
>>> a.substitute(Substitution(x=Value(3)))
>>> a
Action('a', True, [Value(3), Value(2)])
@param subst: a substitution to apply to the parameters
        @type subst: C{Substitution} mapping variable names to
          C{Value} or C{Variable}
"""
for i, p in enumerate(self.params) :
if isinstance(p, Variable) and p.name in subst :
self.params[i] = subst(p.name)
def vars (self) :
"""
>>> Action('a', True, [Value(3), Variable('x'), Variable('y'), Variable('x')]).vars() == set(['x', 'y'])
True
@return: the set of variable names appearing in the parameters
of the action
@rtype: C{set} of C{str}
"""
return set(p.name for p in self.params if isinstance(p, Variable))
def __and__ (self, other) :
"""Compute an unification of two conjugated actions.
An unification is a C{Substitution} that maps variable names
to C{Variable} or C{Values}. If both actions are substituted
by this unification, their parameters lists become equal. If
no unification can be found, C{ConstraintError} is raised (or,
rarely, C{DomainError} depending on the cause of the failure).
>>> s = Action('a', True, [Value(3), Variable('x'), Variable('y'), Variable('x')])
>>> r = Action('a', False, [Value(3), Value(2), Variable('t'), Variable('z')])
>>> u = s & r
>>> u == Substitution(y=Variable('t'), x=Value(2), z=Value(2))
True
>>> s.substitute(u)
>>> r.substitute(u)
>>> s.params == r.params
True
>>> s.params
[Value(3), Value(2), Variable('t'), Value(2)]
>>> s = Action('a', True, [Value(2), Variable('x'), Variable('y'), Variable('x')])
>>> r = Action('a', False, [Value(3), Value(2), Variable('t'), Variable('z')])
>>> try : s & r
... except ConstraintError : print(sys.exc_info()[1])
incompatible values
>>> r = Action('a', False, [Value(2), Value(2), Variable('t')])
>>> try : s & r
... except ConstraintError : print(sys.exc_info()[1])
arities do not match
>>> r = Action('b', False, [Value(3), Value(2), Variable('t'), Variable('z')])
>>> try : s & r
... except ConstraintError : print(sys.exc_info()[1])
actions not conjugated
>>> r = Action('a', True, [Value(3), Value(2), Variable('t'), Variable('z')])
>>> try : s & r
... except ConstraintError : print(sys.exc_info()[1])
actions not conjugated
@param other: the other action to unify with
@type other: C{Action}
@return: a substitution that unify both actions
@rtype: C{Substitution}
"""
if (self.name != other.name) or (self.send == other.send) :
raise ConstraintError("actions not conjugated")
elif len(self) != len(other) :
raise ConstraintError("arities do not match")
result = Substitution()
for x, y in zip(self.params, other.params) :
# apply the unifier already computed
if isinstance(x, Variable) and x.name in result :
x = result(x.name)
if isinstance(y, Variable) and y.name in result :
y = result(y.name)
# unify the current pair of parameters
if isinstance(x, Value) and isinstance(y, Value) :
if x.value != y.value :
raise ConstraintError("incompatible values")
elif isinstance(x, Variable) and isinstance(y, Value) :
result += Substitution({x.name : y.copy()})
elif isinstance(x, Value) and isinstance(y, Variable) :
result += Substitution({y.name : x.copy()})
elif isinstance(x, Variable) and isinstance(y, Variable) :
if x.name != y.name :
result += Substitution({x.name : y.copy()})
else :
raise ConstraintError("unexpected action parameter")
return result
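The unification loop in `__and__` above can be illustrated in isolation. The sketch below uses minimal `Val`/`Var` stand-ins and a plain `dict` instead of `Substitution`; these names are illustrative only and are not part of the SNAKES API.

```python
# Standalone sketch of the pairwise unification performed by
# Action.__and__: walk both parameter lists in parallel, applying
# the bindings computed so far before unifying each pair.

class Val :
    def __init__ (self, value) :
        self.value = value

class Var :
    def __init__ (self, name) :
        self.name = name

def unify (left, right) :
    "Return a dict mapping variable names to Val/Var, or raise ValueError"
    if len(left) != len(right) :
        raise ValueError("arities do not match")
    result = {}
    for x, y in zip(left, right) :
        # apply the bindings already computed
        if isinstance(x, Var) and x.name in result :
            x = result[x.name]
        if isinstance(y, Var) and y.name in result :
            y = result[y.name]
        # unify the current pair of parameters
        if isinstance(x, Val) and isinstance(y, Val) :
            if x.value != y.value :
                raise ValueError("incompatible values")
        elif isinstance(x, Var) :
            # bind x to y unless they are the same variable
            if not (isinstance(y, Var) and y.name == x.name) :
                result[x.name] = y
        elif isinstance(y, Var) :
            result[y.name] = x
    return result
```

On the parameter lists from the doctest above, `unify([Val(3), Var('x'), Var('y'), Var('x')], [Val(3), Val(2), Var('t'), Var('z')])` binds `x` to `Val(2)`, `y` to `Var('t')` and `z` to `Val(2)`, matching the `Substitution` computed by `__and__`.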
class MultiAction (object) :
def __init__ (self, actions) :
"""
>>> try : MultiAction([Action('a', True, [Variable('x')]),
... Action('a', False, [Value(2)])])
... except ConstraintError : print(sys.exc_info()[1])
conjugated actions in the same multiaction
@param actions: a collection of actions with no conjugated
actions in it
@type actions: C{list} of C{Action}
"""
self._actions = []
self._sndrcv = {}
self._count = {}
for act in actions :
self.add(act)
__pnmltag__ = "multiaction"
def __pnmldump__ (self) :
"""
>>> MultiAction([Action('a', True, [Variable('x')]),
... Action('b', False, [Variable('y'), Value(2)])
... ]).__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>...
<multiaction>
<action name="a" send="True">
<variable>
x
</variable>
</action>
<action name="b" send="False">
<variable>
y
</variable>
<value>
<object type="int">
2
</object>
</value>
</action>
</multiaction>
</pnml>
"""
return Tree(self.__pnmltag__, None,
*(Tree.from_obj(action) for action in self._actions))
@classmethod
def __pnmlload__ (cls, tree) :
"""
>>> t = MultiAction([Action('a', True, [Variable('x')]),
... Action('b', False, [Variable('y'), Value(2)])
... ]).__pnmldump__()
>>> MultiAction.__pnmlload__(t)
MultiAction([Action('a', True, [Variable('x')]),
Action('b', False, [Variable('y'), Value(2)])])
"""
return cls(child.to_obj() for child in tree.children)
def __repr__ (self) :
"""
>>> MultiAction([Action('a', True, [Variable('x')]),
... Action('b', False, [Variable('y'), Value(2)])])
MultiAction([Action('a', True, [Variable('x')]),
Action('b', False, [Variable('y'), Value(2)])])
"""
return "%s([%s])" % (self.__class__.__name__,
", ".join(repr(act) for act in self._actions))
def __str__ (self) :
"""
>>> str(MultiAction([Action('a', True, [Variable('x')]),
... Action('b', False, [Variable('y'), Value(2)])]))
'[a!(x), b?(y,2)]'
"""
return "[%s]" % ", ".join(str(act) for act in self._actions)
def send (self, name) :
"""Returns the send flag of the action C{name} in this
multiaction.
This value is unique as conjugated actions are forbidden in
the same multiaction.
>>> m = MultiAction([Action('a', True, [Variable('x')]),
... Action('b', False, [Variable('y'), Value(2)])])
>>> m.send('a'), m.send('b')
(True, False)
"""
return self._sndrcv[name]
def add (self, action) :
"""Add an action to the multiaction.
This may raise C{ConstraintError} if the added action is
conjugated to one that already belongs to the multiaction.
@param action: the action to add
@type action: C{Action}
"""
if self._sndrcv.get(action.name, action.send) != action.send :
raise ConstraintError("conjugated actions in the same multiaction")
self._sndrcv[action.name] = action.send
self._count[action.name] = self._count.get(action.name, 0) + 1
self._actions.append(action)
def remove (self, action) :
"""Remove an action from the multiaction.
        This may raise C{ValueError} if the removed action does not
        belong to the multiaction.
@param action: the action to remove
@type action: C{Action}
"""
self._actions.remove(action)
self._count[action.name] -= 1
if self._count[action.name] == 0 :
del self._count[action.name]
del self._sndrcv[action.name]
def __iter__ (self) :
"""Iterate over the actions in the multiaction.
>>> list(MultiAction([Action('a', True, [Variable('x')]),
... Action('b', False, [Variable('y'), Value(2)])]))
[Action('a', True, [Variable('x')]),
Action('b', False, [Variable('y'), Value(2)])]
"""
for action in self._actions :
yield action
def __len__ (self) :
"""Return the number of actions in a multiaction.
>>> len(MultiAction([Action('a', True, [Variable('x')]),
... Action('b', False, [Variable('y'), Value(2)])]))
2
@return: the number of contained actions
@rtype: non negative C{int}
"""
return len(self._actions)
def substitute (self, subst) :
"""Substitute bu C{subt} all the actions in the multiaction.
>>> m = MultiAction([Action('a', True, [Variable('x')]),
... Action('b', False, [Variable('y'), Variable('x')])])
>>> m.substitute(Substitution(x=Value(4)))
>>> m
MultiAction([Action('a', True, [Value(4)]),
Action('b', False, [Variable('y'), Value(4)])])
"""
for action in self._actions :
action.substitute(subst)
def copy (self, subst=None) :
""" Copy the multiaction (and the actions is contains)
optionally substituting it.
@param subst: if not C{None}, the substitution to apply to the
copy.
@type subst: C{None} or C{Substitution}
@return: a copy of the multiaction, optionally substituted
@rtype: C{MultiAction}
"""
result = self.__class__(act.copy() for act in self._actions)
if subst is not None :
result.substitute(subst)
return result
def __contains__ (self, action) :
"""Search an action in the multiaction.
The searched action may be a complete C{Action}, just an
action name, or a pair C{(name, send_flag)}.
>>> m = MultiAction([Action('a', True, [Variable('x'), Value(2)]),
... Action('a', True, [Value(3), Variable('y')]),
... Action('b', False, [Variable('x'), Variable('y')])])
>>> 'a' in m, 'b' in m, 'c' in m
(True, True, False)
>>> ('a', True) in m, ('a', False) in m
(True, False)
>>> Action('a', True, [Variable('x'), Value(2)]) in m
True
>>> Action('a', True, [Variable('x')]) in m
False
>>> Action('a', False, [Variable('x'), Value(2)]) in m
False
>>> Action('c', True, [Variable('x'), Value(2)]) in m
False
        @param action: a complete action, or its name, or its name
          and send flag
@type action: C{Action} or C{str} or C{tuple(str, bool)}
@return: C{True} if the specified action was found, C{False}
otherwise
@rtype: C{bool}
"""
if isinstance(action, Action) :
return action in self._actions
elif isinstance(action, tuple) and len(action) == 2 :
return (action[0] in self._sndrcv
and self._sndrcv[action[0]] == action[1])
elif isinstance(action, str) :
return action in self._count
else :
raise ValueError("invalid action specification")
def __add__ (self, other) :
"""Create a multiaction by adding the actions of two others.
>>> m = MultiAction([Action('a', True, [Variable('x'), Value(2)]),
... Action('a', True, [Value(3), Variable('y')]),
... Action('b', False, [Variable('x'), Variable('y')])])
>>> m + m
MultiAction([Action('a', True, [Variable('x'), Value(2)]),
Action('a', True, [Value(3), Variable('y')]),
Action('b', False, [Variable('x'), Variable('y')]),
Action('a', True, [Variable('x'), Value(2)]),
Action('a', True, [Value(3), Variable('y')]),
Action('b', False, [Variable('x'), Variable('y')])])
>>> m + Action('c', True, [])
MultiAction([Action('a', True, [Variable('x'), Value(2)]),
Action('a', True, [Value(3), Variable('y')]),
Action('b', False, [Variable('x'), Variable('y')]),
Action('c', True, [])])
@param other: the other multiaction to combine or a single
action
@type other: C{MultiAction} or C{Action}
@return: the concatenated multiaction
@rtype: C{MultiAction}
"""
if isinstance(other, Action) :
other = self.__class__([other])
result = self.copy()
for action in other._actions :
result.add(action)
return result
def __sub__ (self, other) :
"""Create a multiaction by substracting the actions of two others.
>>> m = MultiAction([Action('a', True, [Variable('x'), Value(2)]),
... Action('a', True, [Value(3), Variable('y')]),
... Action('b', False, [Variable('x'), Variable('y')])])
>>> m - m
MultiAction([])
>>> m - Action('b', False, [Variable('x'), Variable('y')])
MultiAction([Action('a', True, [Variable('x'), Value(2)]),
Action('a', True, [Value(3), Variable('y')])])
@param other: the other multiaction to combine or a single
action
@type other: C{MultiAction} or C{Action}
@return: the resulting multiaction
@rtype: C{MultiAction}
"""
if isinstance(other, Action) :
other = self.__class__([other])
result = self.copy()
for action in other._actions :
result.remove(action)
return result
def vars (self) :
"""Return the set of variable names used in all the actions of
the multiaction.
>>> MultiAction([Action('a', True, [Variable('x'), Value(2)]),
... Action('a', True, [Value(3), Variable('y')]),
... Action('b', False, [Variable('x'), Variable('z')])]).vars() == set(['x', 'y', 'z'])
True
@return: the set of variable names
@rtype: C{set} of C{str}
"""
result = set()
for action in self._actions :
result.update(action.vars())
return result
def synchronise (self, other, name) :
"""Search all the possible synchronisation on an action name
with another multiaction.
This method returns an iterator that yields for each possible
synchronisation a 4-tuple whose components are:
        1. The sending action that did synchronise; it is already
           unified, so the corresponding receiving action is just the
           same with the reversed send flag.
        2. The multiaction resulting from the synchronisation, which
           is also unified.
3. The substitution that must be applied to the transition
that provided the sending action.
4. The substitution that must be applied to the transition
that provided the receiving action.
>>> m = MultiAction([Action('a', True, [Variable('x'), Value(2)]),
... Action('a', True, [Value(3), Variable('y')]),
... Action('b', False, [Variable('x'), Variable('y')])])
>>> n = MultiAction([Action('a', False, [Variable('w'), Variable('y')]),
... Action('c', False, [Variable('y')])])
>>> for a, x, u, v in m.synchronise(n, 'a') :
... print('%s %s %s %s' % (str(a), str(x), list(sorted(u.items())), list(sorted(v.items()))))
a!(w,2) [a!(3,y), b?(w,y), c?(a)] [('a', Value(2)), ('x', Variable('w'))] [('a', Value(2)), ('x', Variable('w')), ('y', Variable('a'))]
a!(3,a) [a!(x,2), b?(x,a), c?(a)] [('w', Value(3)), ('y', Variable('a'))] [('w', Value(3)), ('y', Variable('a'))]
@param other: the other multiaction to synchronise with
@type other: C{MultiAction}
@param name: the name of the action to synchronise on
@type name: C{str}
@return: an iterator over the possible synchronisations
@rtype: iterator of C{tuple(Action, MultiAction, Substitution, Substitution)}
"""
renamer = Substitution()
common = self.vars() & other.vars()
if len(common) > 0 :
names = WordSet(common)
for var in common :
renamer += Substitution({var : Variable(names.fresh(add=True))})
for left in (act for act in self._actions if act.name == name) :
            for right in (act for act in other._actions
                          if act.name == name and act.send != left.send) :
_right = right.copy(renamer)
try :
unifier = left & _right
                except :
                    # no unification is possible, skip this pair
                    continue
_unifier = unifier * renamer
_self = self - left
_self.substitute(unifier)
_other = other - right
_other.substitute(_unifier)
yield left.copy(unifier), _self + _other, unifier, _unifier
@snakes.plugins.plugin("snakes.nets")
def extend (module) :
class Transition (module.Transition) :
def __init__ (self, name, guard=None, **args) :
self.actions = MultiAction(args.pop("actions", []))
module.Transition.__init__(self, name, guard, **args)
def vars (self) :
return module.Transition.vars(self) | self.actions.vars()
def substitute (self, subst) :
module.Transition.substitute(self, subst)
self.actions.substitute(subst)
def copy (self, name=None, **args) :
actions = args.pop("actions", None)
result = module.Transition.copy(self, name, **args)
if actions is None :
result.actions = self.actions.copy()
else :
result.actions = MultiAction(actions)
return result
def __pnmldump__ (self) :
"""
>>> m = MultiAction([Action('a', True, [Variable('x')]),
... Action('b', False, [Variable('y'), Value(2)])])
>>> Transition('t', actions=m).__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>...
<transition id="t">
<multiaction>
<action name="a" send="True">
<variable>
x
</variable>
</action>
<action name="b" send="False">
<variable>
y
</variable>
<value>
<object type="int">
2
</object>
</value>
</action>
</multiaction>
</transition>
</pnml>
"""
result = module.Transition.__pnmldump__(self)
result.add_child(Tree.from_obj(self.actions))
return result
@classmethod
def __pnmlload__ (cls, tree) :
"""
>>> m = MultiAction([Action('a', True, [Variable('x')]),
... Action('b', False, [Variable('y'), Value(2)])])
>>> t = Transition('t', actions=m).__pnmldump__()
>>> Transition.__pnmlload__(t).actions
MultiAction([Action('a', True, [Variable('x')]),
Action('b', False, [Variable('y'), Value(2)])])
"""
result = new_instance(cls, module.Transition.__pnmlload__(tree))
result.actions = Tree.to_obj(tree.child(MultiAction.__pnmltag__))
return result
class PetriNet (module.PetriNet) :
def synchronise (self, name) :
snd = []
rcv = []
for trans in self.transition() :
if (name, True) in trans.actions :
snd.append(trans)
elif (name, False) in trans.actions :
rcv.append(trans)
loop = True
done = set()
while loop :
loop = False
for _snd in snd :
for _rcv in rcv :
if (_snd.name, _rcv.name) in done :
continue
try :
new = _snd.actions.synchronise(_rcv.actions, name)
except ConstraintError :
continue
for a, m, s, r in new :
t = self._synchronise(_snd, s, _rcv, r, m, a)
if (name, True) in t.actions :
snd.append(t)
loop = True
elif (name, False) in t.actions :
rcv.append(t)
loop = True
done.add((_snd.name, _rcv.name))
def _synchronise (self, snd, s, rcv, r, actions, sync) :
collect = []
varset = WordSet()
for trans, subst in ((snd, s), (rcv, r)) :
new = "%s%s" % (trans.name, str(subst))
self.copy_transition(trans.name, new)
collect.append(new)
new = self.transition(new)
nv = new.vars()
for v in varset & nv :
new.substitute(Substitution({v : varset.fresh(add=True)}))
varset.update(nv)
for var, val in subst.items() :
if isinstance(val, Variable) :
new.substitute(Substitution({var : val.name}))
for place, label in new.input() :
if var in label.vars() :
self.remove_input(place.name, new.name)
self.add_input(place.name, new.name,
label.replace(Variable(var), val))
for place, label in new.output() :
if var in label.vars() :
self.remove_output(place.name, new.name)
self.add_output(place.name, new.name,
label.replace(Variable(var), val))
merged = "(%s%s+%s%s)[%s]" % (snd.name, str(s), rcv.name, str(s),
str(sync).replace("?", "").replace("!", ""))
self.merge_transitions(merged, collect, actions=actions)
for name in collect :
self.remove_transition(name)
return self.transition(merged)
def restrict (self, action) :
removed = [trans.name for trans in self.transition()
if action in trans.actions]
for trans in removed :
self.remove_transition(trans)
def scope (self, action) :
self.synchronise(action)
self.restrict(action)
def merge_transitions (self, target, sources, **args) :
actions = args.pop("actions", None)
module.PetriNet.merge_transitions(self, target, sources, **args)
if actions is None :
actions = MultiAction()
for src in sources :
actions += self.transition(src).actions
self.transition(target).actions = actions
else :
self.transition(target).actions = MultiAction(actions)
def copy_transition (self, source, targets, **args) :
actions = args.pop("actions", None)
module.PetriNet.copy_transition(self, source, targets, **args)
if actions is None :
actions = self.transition(source).actions
else :
actions = MultiAction(actions)
old = self.transition(source)
for trans in iterate(targets) :
self.transition(trans).actions = actions.copy()
return PetriNet, Transition, Action, MultiAction
"""A module to save and load objects in PNML.
Petri nets objects are saved in PNML, other Python objects are saved
in a readable format when possible, or pickled as a last resort.
This should result in a complete PNML serialization of any object.
"""
import xml.dom.minidom
import pickle
import sys, inspect, os, os.path, imp, pkgutil
import snakes, snakes.plugins
from snakes import SnakesError
from snakes.compat import *
if PY3 :
import ast
try :
builtins = sys.modules["__builtin__"]
except KeyError :
builtins = sys.modules["builtins"]
def _snk_import (name) :
"Properly import a module, including a plugin"
if name.startswith("snakes.plugins.") :
return snakes.plugins.load(name, "snakes.nets")
else :
return __import__(name, fromlist=["__name__"])
def _snk_modules () :
"List all SNAKES' modules"
queue = ["snakes"]
while len(queue) > 0 :
modname = queue.pop(0)
try :
mod = _snk_import(modname)
        except :
            # skip modules that cannot be imported
            continue
yield modname, mod
importer = pkgutil.ImpImporter(mod.__path__[0])
for name, ispkg in importer.iter_modules(prefix=mod.__name__ + ".") :
if ispkg :
queue.append(name)
else :
try :
yield name, _snk_import(name)
except :
pass
def _snk_tags () :
"Lists all PNML tags found in SNAKES"
for modname, mod in _snk_modules() :
for clsname, cls in inspect.getmembers(mod, inspect.isclass) :
if cls.__module__ == modname and "__pnmltag__" in cls.__dict__ :
yield cls.__pnmltag__, modname, clsname
class _set (object) :
"""Set where items are iterated by order of insertion
"""
def __init__ (self, elements=[]) :
"""
>>> _set([4, 5, 1, 2, 4])
_set([4, 5, 1, 2])
"""
self._data = {}
self._last = 0
for e in elements :
self.add(e)
def __repr__ (self) :
"""
>>> _set([4, 5, 1, 2, 4])
_set([4, 5, 1, 2])
"""
return "%s([%s])" % (self.__class__.__name__,
", ".join(repr(x) for x in self))
def add (self, element) :
"""
>>> s = _set([4, 5, 1, 2, 4])
>>> s.add(0)
>>> s
_set([4, 5, 1, 2, 0])
>>> s.add(5)
>>> s
_set([4, 5, 1, 2, 0])
"""
if element not in self._data :
self._data[element] = self._last
self._last += 1
def __contains__ (self, element) :
"""
>>> 4 in _set([4, 5, 1, 2, 4])
True
>>> 0 in _set([4, 5, 1, 2, 4])
False
"""
return element in self._data
def _2nd (self, pair) :
"""
>>> _set()._2nd((1, 2))
2
"""
return pair[1]
def __iter__ (self) :
"""
>>> list(_set([4, 5, 1, 2, 4]))
[4, 5, 1, 2]
"""
return (k for k, v in sorted(self._data.items(), key=self._2nd))
def discard (self, element) :
"""
>>> s = _set([4, 5, 1, 2, 4])
>>> s.discard(0)
>>> s.discard(4)
>>> s
_set([5, 1, 2])
"""
if element in self._data :
del self._data[element]
def remove (self, element) :
"""
>>> s = _set([4, 5, 1, 2, 4])
>>> s.remove(0)
Traceback (most recent call last):
...
KeyError: ...
>>> s.remove(4)
>>> s
_set([5, 1, 2])
"""
del self._data[element]
def copy (self) :
"""
>>> _set([4, 5, 1, 2, 4]).copy()
_set([4, 5, 1, 2])
"""
return self.__class__(self)
def __iadd__ (self, other) :
"""
>>> s = _set([4, 5, 1, 2, 4])
>>> s += range(7)
>>> s
_set([4, 5, 1, 2, 0, 3, 6])
"""
for element in other :
self.add(element)
return self
def __add__ (self, other) :
"""
>>> _set([4, 5, 1, 2, 4]) + range(7)
_set([4, 5, 1, 2, 0, 3, 6])
"""
result = self.copy()
result += other
return result
def __len__ (self) :
"""
>>> len(_set([4, 5, 1, 2, 4]))
4
"""
return len(self._data)
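Since Python 3.7, plain dicts preserve insertion order, so the deduplication behaviour of `_set` can be illustrated with dict keys alone. This is an illustrative sketch, not part of SNAKES itself (which predates ordered dicts, hence the explicit insertion counter above).

```python
# Minimal insertion-ordered deduplication using dict key ordering
# (guaranteed since Python 3.7), mirroring what _set implements
# manually with its _last counter.

def ordered_unique (elements) :
    "Return the elements with duplicates removed, first occurrence kept"
    return list(dict.fromkeys(elements))

ordered_unique([4, 5, 1, 2, 4])   # [4, 5, 1, 2], like _set([4, 5, 1, 2, 4])
```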
class Tree (object) :
"""Abstraction of a PNML tree
>>> Tree('tag', 'data', Tree('child', None), attr='attribute value')
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<tag attr="attribute value">
<child/>
data
</tag>
</pnml>
"""
def __init__ (self, _name, _data, *_children, **_attributes) :
"""Initialize a PNML tree
>>> Tree('tag', 'data',
... Tree('first_child', None),
... Tree('second_child', None),
... first_attr='attribute value',
... second_attr='another value')
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<tag first_attr="attribute value" second_attr="another value">
<first_child/>
<second_child/>
data
</tag>
</pnml>
        Note: parameter names start with a '_' in order to allow
        using them as attributes.
@param _name: the name of the tag
@type _name: C{str}
@param _data: the text held by the tag or C{None}
@type _data: C{str} or C{None}
@param _children: children nodes
@type _children: C{Tree}
@param _attributes: attributes and values of the tag
@type _attributes: C{str}
"""
self.name = _name
if _data is not None and _data.strip() == "" :
_data = None
self.data = _data
self.children = list(_children)
self.attributes = _attributes
@classmethod
def _load_tags (_class) :
if not hasattr(_class, "_tags") :
_class._tags = {}
for tag, mod, cls in _snk_tags() :
if tag not in _class._tags :
_class._tags[tag] = (mod, cls)
def _update_node (self, doc, node) :
for key, val in self.attributes.items() :
node.setAttribute(key, val)
for child in self.children :
node.appendChild(child._to_dom(doc))
if self.data is not None :
node.appendChild(doc.createTextNode(self.data))
def _to_dom (self, doc) :
result = doc.createElement(self.name)
self._update_node(doc, result)
return result
def to_pnml (self) :
"""Dumps a PNML tree to an XML string
>>> print(Tree('tag', 'data', Tree('child', None), attr='value').to_pnml())
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<tag attr="value">
<child/>
data
</tag>
</pnml>
@return: the XML string that represents the PNML tree
@rtype: C{str}
"""
if self.name == "pnml" :
tree = self
else :
tree = self.__class__("pnml", None, self)
try :
plugins = _set(self.__plugins__)
except AttributeError :
plugins = _set()
for node in self.nodes() :
if hasattr(node, "_plugins") :
plugins += node._plugins
if len(plugins) > 0 :
tree.children.insert(0, Tree("snakes", None,
Tree("plugins", None,
Tree.from_obj(tuple(plugins))),
version=snakes.version))
impl = xml.dom.minidom.getDOMImplementation()
doc = impl.createDocument(None, "pnml", None)
node = tree._to_dom(doc)
tree._update_node(doc, doc.documentElement)
if len(plugins) > 0 :
del tree.children[0]
r = doc.toprettyxml(indent=" ",
encoding=snakes.defaultencoding).strip()
if PY3 :
return r.decode()
else :
return r
@classmethod
def from_dom (cls, node) :
"""Load a PNML tree from an XML DOM representation
>>> src = Tree('object', '42', type='int').to_pnml()
>>> dom = xml.dom.minidom.parseString(src)
>>> Tree.from_dom(dom.documentElement)
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<object type="int">
42
</object>
</pnml>
@param node: the DOM node to load
@type node: C{xml.dom.minidom.Element}
@return: the loaded PNML tree
@rtype: C{Tree}
"""
result = cls(node.tagName, node.nodeValue)
for i in range(node.attributes.length) :
name = node.attributes.item(i).localName
result[name] = str(node.getAttribute(name))
if node.hasChildNodes() :
for child in node.childNodes :
if child.nodeType == child.TEXT_NODE :
result.add_data(str(child.data))
elif child.nodeType == child.ELEMENT_NODE :
result.add_child(cls.from_dom(child))
return result
@classmethod
def from_pnml (cls, source, plugins=[]) :
"""Load a PNML tree from an XML string representation
>>> src = Tree('object', '42', type='int').to_pnml()
>>> Tree.from_pnml(src)
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<object type="int">
42
</object>
</pnml>
@param source: the XML string to load or an opened file that
contains it
@type source: C{str} or C{file}
@return: the loaded PNML tree
@rtype: C{Tree}
"""
try :
doc = xml.dom.minidom.parse(source)
except :
doc = xml.dom.minidom.parseString(source)
result = cls.from_dom(doc.documentElement)
plugins = _set(plugins)
cls._load_tags()
tag2obj = {}
trash = []
for node in result.nodes() :
node._tag2obj = tag2obj
if node.has_child("snakes") :
snk = node.child("snakes")
trash.append((node, snk))
plugins += snk.child("plugins").child("object").to_obj()
if node.name in cls._tags :
modname, clsname = cls._tags[node.name]
if modname.startswith("snakes.plugins.") :
plugins.add(modname)
elif node.name not in tag2obj :
tag2obj[node.name] = getattr(_snk_import(modname), clsname)
for parent, child in trash :
parent.children.remove(child)
plugins.discard("snakes.nets")
nets = snakes.plugins.load(plugins, "snakes.nets")
for name, obj in inspect.getmembers(nets, inspect.isclass) :
# Skip classes that cannot be serialised to PNML
try :
tag = obj.__pnmltag__
except AttributeError :
continue
            # Get the last class in the hierarchy that has the same
            # "__pnmltag__" and is in the same module. This is useful,
            # for instance, for snakes.typing.Type and its subclasses:
            # the former should be used instead of the latter because
            # it dispatches the call to "__pnmlload__" according to
            # "__pnmltype__".
bases = [obj] + [c for c in inspect.getmro(obj)
if (c.__module__ == obj.__module__)
and hasattr(c, "__pnmltag__")
and c.__pnmltag__ == tag]
tag2obj[tag] = bases[-1]
return result
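The DOM traversal done by `from_dom` above (reading attributes by index, then recursing over element children) can be shown with a small self-contained example; `dump` below is illustrative only and not part of SNAKES.

```python
# Minimal illustration of the xml.dom.minidom traversal used by
# Tree.from_dom: attributes are read via attributes.item(i), and
# only ELEMENT_NODE children are recursed into.

import xml.dom.minidom

def dump (node, depth=0) :
    "Return a list of (depth, tag, attributes) for each element"
    attrs = dict((node.attributes.item(i).localName,
                  node.getAttribute(node.attributes.item(i).localName))
                 for i in range(node.attributes.length))
    result = [(depth, node.tagName, attrs)]
    for child in node.childNodes :
        if child.nodeType == child.ELEMENT_NODE :
            result.extend(dump(child, depth + 1))
    return result

doc = xml.dom.minidom.parseString('<object type="int">42<child/></object>')
dump(doc.documentElement)
# → [(0, 'object', {'type': 'int'}), (1, 'child', {})]
```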
def nodes (self) :
"""Iterate over all the nodes (top-down) in a tree
>>> t = Tree('foo', None,
... Tree('bar', None),
... Tree('egg', None,
... Tree('spam', None)))
>>> for node in t.nodes() :
... print(str(node))
<PNML tree 'foo'>
<PNML tree 'bar'>
<PNML tree 'egg'>
<PNML tree 'spam'>
@return: an iterator on all the nodes in the tree, including
this one
@rtype: C{generator}
"""
yield self
for child in self.children :
for node in child.nodes() :
yield node
def update (self, other) :
"""Incorporates children, attributes and data from another
PNML tree
>>> t = Tree('foo', 'hello',
... Tree('bar', None),
... Tree('egg', None,
... Tree('spam', None)))
>>> o = Tree('foo', 'world',
... Tree('python', None),
... attr='value')
>>> t.update(o)
>>> t
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<foo attr="value">
<bar/>
<egg>
<spam/>
</egg>
<python/>
hello
world
</foo>
</pnml>
>>> o = Tree('oops', None,
... Tree('hello', None),
... attr='value')
>>> try : t.update(o)
... except SnakesError : print(sys.exc_info()[1])
tag mismatch 'foo', 'oops'
@param other: the other tree to get data from
@type other: C{Tree}
        @raise SnakesError: when C{other} does not have the same tag as C{self}
"""
if self.name != other.name :
raise SnakesError("tag mismatch '%s', '%s'" % (self.name, other.name))
self.children.extend(other.children)
self.attributes.update(other.attributes)
self.add_data(other.data)
def add_child (self, child) :
"""Add a child to a PNML tree
>>> t = Tree('foo', None)
>>> t.add_child(Tree('bar', None))
>>> t
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<foo>
<bar/>
</foo>
</pnml>
@param child: the PNML tree to append
@type child: C{Tree}
"""
self.children.append(child)
def add_data (self, data, sep='\n') :
"""Appends data to the current node
>>> t = Tree('foo', None)
>>> t.add_data('hello')
>>> t
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<foo>
hello
</foo>
</pnml>
>>> t.add_data('world')
>>> t
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<foo>
hello
world
</foo>
</pnml>
>>> t.add_data('!', '')
>>> t
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<foo>
hello
world!
</foo>
</pnml>
@param data: the data to add
@type data: C{str}
@param sep: separator to insert between pieces of data
@type sep: C{str}
"""
        try :
            data = data.strip()
            if data != "" :
                if self.data is None :
                    self.data = data
                else :
                    self.data += sep + data
        except AttributeError :
            # data is None, there is nothing to add
            pass
def __getitem__ (self, name) :
"""Returns one attribute
>>> Tree('foo', None, x='egg', y='spam')['x']
'egg'
>>> Tree('foo', None, x='egg', y='spam')['z']
Traceback (most recent call last):
...
KeyError: 'z'
@param name: the name of the attribute
@type name: C{str}
@return: the value of the attribute
@rtype: C{str}
@raise KeyError: if no such attribute is found
"""
return self.attributes[name]
def __setitem__ (self, name, value) :
"""Sets an attribute
>>> t = Tree('foo', None)
>>> t['egg'] = 'spam'
>>> t
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<foo egg="spam"/>
</pnml>
@param name: the name of the attribute
@type name: C{str}
@param value: the value of the attribute
@type value: C{str}
"""
self.attributes[name] = value
def __iter__ (self) :
"""Iterate over children nodes
>>> [str(node) for node in Tree('foo', None,
... Tree('egg', None),
... Tree('spam', None,
... Tree('bar', None)))]
["<PNML tree 'egg'>", "<PNML tree 'spam'>"]
@return: an iterator over direct children of the node
@rtype: C{generator}
"""
return iter(self.children)
def has_child (self, name) :
"""Test if the tree has the given tag as a direct child
>>> t = Tree('foo', None,
... Tree('egg', None),
... Tree('spam', None,
... Tree('bar', None)))
>>> t.has_child('egg')
True
>>> t.has_child('bar')
False
>>> t.has_child('python')
False
@param name: tag name to search for
@type name: C{str}
        @return: a Boolean value indicating whether such a child was
          found or not
@rtype: C{bool}
"""
for child in self :
if child.name == name :
return True
return False
def child (self, name=None) :
"""Return the direct child that as the given tag
>>> t = Tree('foo', None,
... Tree('egg', 'first'),
... Tree('egg', 'second'),
... Tree('spam', None,
... Tree('bar', None)))
>>> t.child('spam')
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<spam>
<bar/>
</spam>
</pnml>
>>> try : t.child('python')
... except SnakesError : print(sys.exc_info()[1])
no child 'python'
>>> try : t.child('bar')
... except SnakesError : print(sys.exc_info()[1])
no child 'bar'
>>> try : t.child('egg')
... except SnakesError : print(sys.exc_info()[1])
multiple children 'egg'
>>> try : t.child()
... except SnakesError : print(sys.exc_info()[1])
multiple children
@param name: name of the tag to search for; if C{None}, the
first child is returned provided it is the only child
@type name: C{str} or C{None}
@return: the only child with the given name, or the only child
if no name is given
@rtype: C{Tree}
@raise SnakesError: when no child or more than one child could
be returned
"""
result = None
for child in self :
if name is None or child.name == name :
if result is None :
result = child
elif name is None :
raise SnakesError("multiple children")
else :
raise SnakesError("multiple children '%s'" % name)
if result is None :
raise SnakesError("no child '%s'" % name)
return result
def get_children (self, name=None) :
"""Iterates over direct children having the given tag
>>> t = Tree('foo', None,
... Tree('egg', 'first'),
... Tree('egg', 'second'),
... Tree('spam', None,
... Tree('bar', None)))
>>> [str(n) for n in t.get_children()]
["<PNML tree 'egg'>", "<PNML tree 'egg'>", "<PNML tree 'spam'>"]
>>> [str(n) for n in t.get_children('egg')]
["<PNML tree 'egg'>", "<PNML tree 'egg'>"]
>>> [str(n) for n in t.get_children('python')]
[]
>>> [str(n) for n in t.get_children('bar')]
[]
@param name: tag to search for or C{None}
@type name: C{str} or C{None}
@return: iterator over all the children if C{name} is C{None},
or over the children with tag C{name} otherwise
@rtype: C{generator}
"""
for child in self :
if name is None or child.name == name :
yield child
def __str__ (self) :
"""Return a simple string representation of the node
>>> str(Tree('foo', None, Tree('child', None)))
"<PNML tree 'foo'>"
@return: simple string representation of the node
@rtype: C{str}
"""
return "<PNML tree %r>" % self.name
def __repr__ (self) :
"""Return a detailed representation of the node.
This is actually the XML text that corresponds to the C{Tree},
as returned by C{Tree.to_pnml}.
>>> print(repr(Tree('foo', None, Tree('child', None))))
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<foo>
<child/>
</foo>
</pnml>
@return: XML string representation of the node
@rtype: C{str}
"""
return self.to_pnml()
_elementary = set(("str", "int", "float", "bool"))
_collection = set(("list", "tuple", "set"))
@classmethod
def from_obj (cls, obj) :
"""Builds a PNML tree from an object.
Objects defined in SNAKES usually have a method
C{__pnmldump__} that handles the conversion, for instance:
>>> import snakes.nets
>>> Tree.from_obj(snakes.nets.Place('p'))
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<place id="p">
<type domain="universal"/>
<initialMarking>
<multiset/>
</initialMarking>
</place>
</pnml>
Most basic Python classes are handled as readable XML:
>>> Tree.from_obj(42)
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<object type="int">
42
</object>
</pnml>
>>> Tree.from_obj([1, 2, 3])
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<object type="list">
<object type="int">
1
</object>
<object type="int">
2
</object>
<object type="int">
3
</object>
</object>
</pnml>
Otherwise, the object is serialised using the C{pickle} module,
which allows embedding almost anything into PNML.
>>> import re
>>> Tree.from_obj(re.compile('foo|bar')) # serialized data replaced with '...'
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<object type="pickle">
...
</object>
</pnml>
@param obj: the object to convert to PNML
@type obj: C{object}
@return: the corresponding PNML tree
@rtype: C{Tree}
"""
try :
result = obj.__pnmldump__()
result._tag2obj = {result.name : obj}
if hasattr(obj, "__plugins__") :
result._plugins = obj.__plugins__
return result
except AttributeError :
pass
result = Tree("object", None)
_type = type(obj)
_name = _type.__name__
if _name in cls._elementary :
handler = result._set_elementary
elif _name in cls._collection :
handler = result._set_collection
elif inspect.ismethod(obj) :
handler = result._set_method
_name = "method"
elif inspect.isclass(obj) :
handler = result._set_class
_name = "class"
elif inspect.isroutine(obj) :
handler = result._set_function
_name = "function"
elif inspect.ismodule(obj) :
handler = result._set_module
_name = "module"
else :
try :
handler = getattr(result, "_set_" + _name)
except AttributeError :
handler = result._set_pickle
_name = "pickle"
result["type"] = _name
handler(obj)
return result
def _set_NoneType (self, value) :
pass
def _get_NoneType (self) :
pass
def _set_elementary (self, value) :
self.data = str(value)
def _get_elementary (self) :
if self["type"] == "bool" :
return self.data.strip() == "True"
return getattr(builtins, self["type"])(self.data)
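The elementary round-trip above rebuilds a value by looking up its constructor by name among the builtins, with C{bool} special-cased because C{bool("False")} is C{True}. A standalone sketch of the same idea (the helper name is hypothetical, not part of SNAKES):

```python
import builtins

def load_elementary(type_name, data):
    """Rebuild an elementary value from its stored string, mimicking
    the name-based lookup used by _get_elementary (sketch only)."""
    # bool must be special-cased: bool("False") would be True because
    # any non-empty string is truthy
    if type_name == "bool":
        return data.strip() == "True"
    # look up the constructor (int, float, str) by name in builtins
    return getattr(builtins, type_name)(data)

print(load_elementary("int", "42"))      # 42
print(load_elementary("bool", "False"))  # False
```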
def _set_collection (self, value) :
for v in value :
self.add_child(self.from_obj(v))
def _get_collection (self) :
return getattr(builtins, self["type"])(child.to_obj()
for child in self)
def _set_dict (self, value) :
for k, v in value.items() :
self.add_child(Tree("item", None,
Tree("key", None, self.from_obj(k)),
Tree("value", None, self.from_obj(v))))
def _get_dict (self) :
return dict((child.child("key").child("object").to_obj(),
child.child("value").child("object").to_obj())
for child in self)
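The dict handlers above encode each mapping entry as an C{<item>} element wrapping a C{<key>} and a C{<value>} child, since XML has no native mapping type. A self-contained sketch of this key/value wrapping pattern using the standard library (helper names are hypothetical; this sketch handles string values only):

```python
import xml.etree.ElementTree as ET

def dict_to_xml(d):
    # encode each mapping entry as <item><key>k</key><value>v</value></item>,
    # the same wrapping used by _set_dict
    root = ET.Element("dict")
    for k, v in d.items():
        item = ET.SubElement(root, "item")
        ET.SubElement(item, "key").text = str(k)
        ET.SubElement(item, "value").text = str(v)
    return root

def xml_to_dict(root):
    # reverse operation, analogous to _get_dict
    return {item.find("key").text: item.find("value").text
            for item in root.findall("item")}

tree = dict_to_xml({"a": "1", "b": "2"})
print(xml_to_dict(tree))  # {'a': '1', 'b': '2'}
```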
def _native (self, obj) :
try :
if (obj.__module__ in ("__builtin__", "__builtins__", "builtins")
or obj.__module__ == "snakes"
or obj.__module__.startswith("snakes")
or inspect.isbuiltin(obj)) :
return True
except :
pass
try :
lib = os.path.dirname(inspect.getfile(inspect.getfile))
if os.path.isfile(lib) :
lib = os.path.dirname(lib)
lib += os.sep
try :
path = inspect.getfile(obj)
except :
path = inspect.getmodule(obj).__file__
return path.startswith(lib)
except :
return False
def _name (self, obj) :
try :
name = obj.__module__
except :
name = inspect.getmodule(obj).__name__
if name in ("__builtin__", "__builtins__", "builtins") :
return obj.__name__
else :
return name + "." + obj.__name__
def _set_class (self, value) :
if self._native(value) :
self["name"] = self._name(value)
else :
self._set_pickle(value)
def _get_class (self) :
if self.data :
return self._get_pickle()
elif "." in self["name"] :
module, name = self["name"].rsplit(".", 1)
return getattr(__import__(module, fromlist=[name]), name)
else :
return getattr(builtins, self["name"])
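C{_get_class} resolves a natively importable class from its dotted name using C{__import__} with a C{fromlist}, falling back to the builtins for bare names. A standalone sketch of that resolution strategy (the function name is hypothetical):

```python
import builtins

def resolve_name(dotted):
    """Resolve a class or function from its dotted name, like
    _get_class above (standalone sketch)."""
    if "." in dotted:
        module, name = dotted.rsplit(".", 1)
        # a non-empty fromlist makes __import__ return the submodule
        # itself rather than the top-level package
        return getattr(__import__(module, fromlist=[name]), name)
    # bare names are looked up among the builtins
    return getattr(builtins, dotted)

print(resolve_name("collections.OrderedDict"))
print(resolve_name("int"))
```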
def _set_function (self, value) :
self._set_class(value)
def _get_function (self) :
return self._get_class()
def _set_method (self, value) :
self._set_function(value)
self["name"] = "%s.%s" % (self["name"], value.__name__)
def _get_method (self) :
if self.data :
return self._get_pickle()
module, cls, name = self["name"].rsplit(".", 2)
cls = getattr(__import__(module, fromlist=[name]), name)
return getattr(cls, name)
def _set_module (self, value) :
self["name"] = value.__name__
def _get_module (self) :
return __import__(self["name"], fromlist=["__name__"])
def _set_pickle (self, value) :
self["type"] = "pickle"
self.data = pickle.dumps(value)
if PY3 :
self.data = repr(self.data)
def _get_pickle (self) :
if PY3 :
return pickle.loads(ast.literal_eval(self.data))
else :
return pickle.loads(self.data)
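Under Python 3, C{pickle.dumps} returns C{bytes}, which cannot be embedded directly in XML character data; the pickle handlers above store the C{repr} of those bytes and recover them with C{ast.literal_eval}. A self-contained sketch of this printable-pickle trick (helper names are hypothetical):

```python
import ast
import pickle

def to_text(obj):
    # pickle gives bytes under Python 3; repr() turns them into a
    # printable ASCII literal such as b'\x80\x04...' that can live
    # inside XML character data
    return repr(pickle.dumps(obj))

def from_text(text):
    # literal_eval safely rebuilds the bytes object from its repr,
    # then pickle restores the original value
    return pickle.loads(ast.literal_eval(text))

data = {"spam": [1, 2, 3]}
print(from_text(to_text(data)) == data)  # True
```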
def to_obj (self) :
"""Build an object from its PNML representation
This is just the reverse of C{Tree.from_obj}: objects that
have a C{__pnmldump__} method should also have a
C{__pnmlload__} class method to perform the reverse operation,
together with an attribute C{__pnmltag__}. Indeed, when a PNML
node with tag name 'foo' has to be converted to an object, a
class C{C} such that C{C.__pnmltag__ == 'foo'} is searched in
module C{snakes.nets} and C{C.__pnmlload__(tree)} is called to
rebuild the object.
Standard Python objects and pickled ones are also recognised
and correctly rebuilt.
>>> import snakes.nets
>>> Tree.from_obj(snakes.nets.Place('p')).to_obj()
Place('p', MultiSet([]), tAll)
>>> Tree.from_obj(42).to_obj()
42
>>> Tree.from_obj([1, 2, 3]).to_obj()
[1, 2, 3]
>>> import re
>>> Tree.from_obj(re.compile('foo|bar')).to_obj()
<... object at ...>
@return: the Python object encoded by the PNML tree
@rtype: C{object}
"""
if self.name == "pnml" :
if len(self.children) == 0 :
raise SnakesError("empty PNML content")
elif len(self.children) == 1 :
return self.child().to_obj()
else :
return tuple(child.to_obj() for child in self.children
if child.name != "snakes")
elif self.name == "object" :
if self["type"] in self._elementary :
handler = self._get_elementary
elif self["type"] in self._collection :
handler = self._get_collection
else :
try :
handler = getattr(self, "_get_" + self["type"])
except AttributeError :
handler = self._get_pickle
return handler()
elif self.name != "snakes" :
try :
if self.name in self._tag2obj :
return self._tag2obj[self.name].__pnmlload__(self)
except AttributeError :
pass
raise SnakesError("unsupported PNML tag '%s'" % self.name)
def dumps (obj) :
"""Dump an object to a PNML string
>>> print(dumps(42))
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<object type="int">
42
</object>
</pnml>
@param obj: the object to dump
@type obj: C{object}
@return: the PNML that represents the object
@rtype: C{str}
"""
return Tree.from_obj(obj).to_pnml()
def loads (source, plugins=[]) :
"""Load an object from a PNML string
>>> loads(dumps(42))
42
@param source: the data to parse
@type source: C{str}
@return: the object represented by the source
@rtype: C{object}
"""
return Tree.from_pnml(source, plugins).to_obj()
"""Typing system for places (and transitions guards).
Types are constraint checkers that verify whether a value is in the type
or not. For instance:
>>> 5 in tInteger
True
>>> 4.3 in tInteger
False
Several values can be typechecked at once by using the type as a
function. The check fails if any value is not in the type.
>>> tInteger(5, 4, 6)
True
>>> tInteger(5, 4, 6.0)
False
Types can be composed in order to build more complex types. For
example:
>>> 8 in (tInteger & ~Range(1, 5))
True
>>> 3 in (tInteger & ~Range(1, 5))
False
The available compositions are the same as set operations: C{&},
C{|}, C{-}, C{^} and C{~} (complement).
Various types are predefined (like C{tInteger}); others can be
constructed using the classes in this module. Predefined types
are:
- C{tAll}: any value is in the type
- C{tNothing}: empty type
- C{tString}: string values
- C{tList}: list values
- C{tInteger}: integer values
- C{tNatural}: non-negative integers
- C{tPositive}: strictly positive integers
- C{tFloat}: float values
- C{tNumber}: integer or float values
- C{tDict}: Python's C{dict} values
- C{tNone}: the single value C{None}
- C{tBoolean}: C{True} or C{False}
- C{tTuple}: tuple values
- C{tPair}: tuples of length two
Types with finitely many elements can be iterated:
>>> list(sorted(iter(CrossProduct(Range(0, 10, 2), tBoolean) & ~OneOf((4, True), (6, False)))))
[(0, False), (0, True), (2, False), (2, True), (4, False), (6, True), (8, False), (8, True)]
>>> list(iter(OneOf(1, 2, 3)))
[1, 2, 3]
>>> iter(tInteger)
Traceback (most recent call last):
...
TypeError: ...
>>> iter(~OneOf(1, 2, 3))
Traceback (most recent call last):
...
TypeError: ...
"""
import inspect, sys
from snakes.compat import *
from snakes.data import cross
from snakes.pnml import Tree
def _iterable (obj, *types) :
for t in types :
try :
iter(t)
except :
def __iterable__ () :
raise ValueError("iteration over non-sequence")
obj.__iterable__ = __iterable__
break
class Type (object) :
"""Base class for all types.
Implements the operations C{&}, C{|}, C{-}, C{^} and C{~} to build
new types, as well as the typechecking of several values at once.
All subclasses should implement the method C{__contains__} to
typecheck a single object.
"""
def __init__ (self) :
"""Abstract method
>>> Type()
Traceback (most recent call last):
...
NotImplementedError: abstract class
@raise NotImplementedError: when called
"""
raise NotImplementedError("abstract class")
def __eq__ (self, other) :
return (self.__class__ == other.__class__
and self.__dict__ == other.__dict__)
def __hash__ (self) :
return hash(repr(self))
def __and__ (self, other) :
"""Intersection type.
>>> Instance(int) & Greater(0)
(Instance(int) & Greater(0))
@param other: the other type in the intersection
@type other: C{Type}
@return: the intersection of both types
@rtype: C{Type}
"""
if other is self :
return self
else :
return _And(self, other)
def __or__ (self, other) :
"""Union type.
>>> Instance(int) | Instance(bool)
(Instance(int) | Instance(bool))
@param other: the other type in the union
@type other: C{Type}
@return: the union of both types
@rtype: C{Type}
"""
if other is self :
return self
else :
return _Or(self, other)
def __sub__ (self, other) :
"""Substraction type.
>>> Instance(int) - OneOf([0, 1, 2])
(Instance(int) - OneOf([0, 1, 2]))
@param other: the other type in the subtraction
@type other: C{Type}
@return: the type C{self} minus the type C{other}
@rtype: C{Type}
"""
if other is self :
return tNothing
else :
return _Sub(self, other)
def __xor__ (self, other) :
"""Disjoint union type.
>>> Greater(0) ^ Instance(float)
(Greater(0) ^ Instance(float))
@param other: the other type in the disjoint union
@type other: C{Type}
@return: the disjoint union of both types
@rtype: C{Type}
"""
if other is self :
return tNothing
else :
return _Xor(self, other)
def __invert__ (self) :
"""Complementary type.
>>> ~ Instance(int)
(~Instance(int))
@return: the complementary type
@rtype: C{Type}
"""
return _Invert(self)
def __iterable__ (self) :
"""Called to test if a type is iterable
Should be replaced in subclasses that are not iterable
@raise ValueError: if not iterable
"""
pass
def __call__ (self, *values) :
"""Typecheck values.
>>> Instance(int)(3, 4, 5)
True
>>> Instance(int)(3, 4, 5.0)
False
@param values: values that have to be checked
@type values: C{object}
@return: C{True} if all the values are in the types, C{False}
otherwise
@rtype: C{bool}
"""
for v in values :
if v not in self :
return False
return True
__pnmltag__ = "type"
_typemap = None
@classmethod
def __pnmlload__ (cls, tree) :
"""Load a C{Type} from a PNML tree
Uses the attribute C{__pnmltype__} to know which type
corresponds to a tag "<type domain='xxx'>"
>>> s = List(tNatural | tBoolean).__pnmldump__()
>>> Type.__pnmlload__(s)
Collection(Instance(list),
((Instance(int) & GreaterOrEqual(0))
| OneOf(True, False)))
@param tree: the PNML tree to load
@type tree: C{snakes.pnml.Tree}
@return: the loaded type
@rtype: C{Type}
"""
if cls._typemap is None :
cls._typemap = {}
for n, c in inspect.getmembers(sys.modules[cls.__module__],
inspect.isclass) :
try :
cls._typemap[c.__pnmltype__] = c
except AttributeError :
pass
return cls._typemap[tree["domain"]].__pnmlload__(tree)
class _BinaryType (Type) :
"""A type build from two other ones
This class allows to factorize the PNML related code for various
binary types.
"""
def __pnmldump__ (self) :
return Tree(self.__pnmltag__, None,
Tree("left", None, Tree.from_obj(self._left)),
Tree("right", None, Tree.from_obj(self._right)),
domain=self.__pnmltype__)
@classmethod
def __pnmlload__ (cls, tree) :
return cls(tree.child("left").child().to_obj(),
tree.child("right").child().to_obj())
class _And (_BinaryType) :
"Intersection of two types"
__pnmltype__ = "intersection"
def __init__ (self, left, right) :
self._left = left
self._right = right
_iterable(self, left)
def __repr__ (self) :
return "(%s & %s)" % (repr(self._left), repr(self._right))
def __contains__ (self, value) :
"""Check wether a value is in the type.
@param value: the value to check
@type value: C{object}
@return: C{True} if C{value} is in the type, C{False} otherwise
@rtype: C{bool}
"""
return (value in self._left) and (value in self._right)
def __iter__ (self) :
self.__iterable__()
for value in self._left :
if value in self._right :
yield value
class _Or (_BinaryType) :
"Union of two types"
__pnmltype__ = "union"
def __init__ (self, left, right) :
self._left = left
self._right = right
_iterable(self, left, right)
def __repr__ (self) :
return "(%s | %s)" % (repr(self._left), repr(self._right))
def __contains__ (self, value) :
"""Check wether a value is in the type.
@param value: the value to check
@type value: C{object}
@return: C{True} if C{value} is in the type, C{False} otherwise
@rtype: C{bool}
"""
return (value in self._left) or (value in self._right)
def __iter__ (self) :
self.__iterable__()
for value in (self._left & self._right) :
yield value
for value in (self._left ^ self._right) :
yield value
class _Sub (_BinaryType) :
"Subtyping by difference"
__pnmltype__ = "difference"
def __init__ (self, left, right) :
self._left = left
self._right = right
_iterable(self, left)
def __repr__ (self) :
return "(%s - %s)" % (repr(self._left), repr(self._right))
def __contains__ (self, value) :
"""Check wether a value is in the type.
@param value: the value to check
@type value: C{object}
@return: C{True} if C{value} is in the type, C{False} otherwise
@rtype: C{bool}
"""
return (value in self._left) and (value not in self._right)
def __iter__ (self) :
self.__iterable__()
for value in self._left :
if value not in self._right :
yield value
class _Xor (_BinaryType) :
"Exclusive union of two types"
__pnmltype__ = "xor"
def __init__ (self, left, right) :
self._left = left
self._right = right
_iterable(self, left, right)
def __repr__ (self) :
return "(%s ^ %s)" % (repr(self._left), repr(self._right))
def __contains__ (self, value) :
"""Check wether a value is in the type.
@param value: the value to check
@type value: C{object}
@return: C{True} if C{value} is in the type, C{False} otherwise
@rtype: C{bool}
"""
if value in self._left :
return value not in self._right
else :
return value in self._right
def __iter__ (self) :
self.__iterable__()
for value in self._left :
if value not in self._right :
yield value
for value in self._right :
if value not in self._left :
yield value
class _Invert (Type) :
"Complement of a type"
def __init__ (self, base) :
self._base = base
_iterable(self, None)
def __repr__ (self) :
return "(~%s)" % repr(self._base)
def __contains__ (self, value) :
"""Check wether a value is in the type.
@param value: the value to check
@type value: C{object}
@return: C{True} if C{value} is in the type, C{False} otherwise
@rtype: C{bool}
"""
return (value not in self._base)
__pnmltype__ = "complement"
def __pnmldump__ (self) :
return Tree(self.__pnmltag__, None,
Tree.from_obj(self._base),
domain=self.__pnmltype__)
@classmethod
def __pnmlload__ (cls, tree) :
return cls(tree.child().to_obj())
class _All (Type) :
"A type allowing for any value"
def __init__ (self) :
pass
def __and__ (self, other) :
return other
def __or__ (self, other) :
return self
def __sub__ (self, other) :
return ~other
def __xor__ (self, other) :
return ~other
def __invert__ (self) :
return tNothing
def __repr__ (self) :
return "tAll"
def __contains__ (self, value) :
"""Check wether a value is in the type.
@param value: the value to check
@type value: C{object}
@return: C{True}
@rtype: C{bool}
"""
return True
def __call__ (self, *values) :
"""Typecheck values.
@param values: values that have to be checked
@type values: any object
@return: C{True}
"""
return True
__pnmltype__ = "universal"
def __pnmldump__ (self) :
"""
>>> tAll.__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<type domain="universal"/>
</pnml>
"""
return Tree(self.__pnmltag__, None, domain=self.__pnmltype__)
@classmethod
def __pnmlload__ (cls, tree) :
"""
>>> Type.__pnmlload__(tAll.__pnmldump__())
tAll
"""
return cls()
class _Nothing (Type) :
"A types with no value"
def __init__ (self) :
pass
def __and__ (self, other) :
return self
def __or__ (self, other) :
return other
def __sub__ (self, other) :
return self
def __xor__ (self, other) :
return other
def __invert__ (self) :
return tAll
def __repr__ (self) :
return "tNothing"
def __contains__ (self, value) :
"""Check wether a value is in the type.
@param value: the value to check
@type value: C{object}
@return: C{False}
@rtype: C{bool}
"""
return False
def __call__ (self, *values) :
"""Typecheck values.
@param values: values that have to be checked
@type values: any object
@return: C{False}
"""
return False
def __iter__ (self) :
pass
__pnmltype__ = "empty"
def __pnmldump__ (self) :
return Tree(self.__pnmltag__, None, domain=self.__pnmltype__)
@classmethod
def __pnmlload__ (cls, tree) :
return cls()
class Instance (Type) :
"""A type whose values are all instances of one class.
>>> [1, 2] in Instance(list)
True
>>> (1, 2) in Instance(list)
False
"""
def __init__ (self, _class) :
"""Initialize the type
>>> Instance(int)
Instance(int)
@param _class: the class of the instances
@type _class: any class
@return: initialized object
@rtype: C{Instance}
"""
self._class = _class
def __contains__ (self, value) :
"""Check wether a value is in the type.
>>> 5 in Instance(int)
True
>>> 5.0 in Instance(int)
False
@param value: the value to check
@type value: C{object}
@return: C{True} if C{value} is in the type, C{False} otherwise
@rtype: C{bool}
"""
return isinstance(value, self._class)
def __repr__ (self) :
"""String representation of the type, suitable for C{eval}
>>> repr(Instance(str))
'Instance(str)'
@return: precise string representation
@rtype: C{str}
"""
return "Instance(%s)" % self._class.__name__
__pnmltype__ = "instance"
def __pnmldump__ (self) :
"""Dump a type to a PNML tree
>>> Instance(int).__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<type domain="instance">
<object name="int" type="class"/>
</type>
</pnml>
@return: the PNML representation of the type
@rtype: C{str}
"""
return Tree(self.__pnmltag__, None,
Tree.from_obj(self._class),
domain=self.__pnmltype__)
@classmethod
def __pnmlload__ (cls, tree) :
"""Builds a type from its PNML representation
>>> t = Instance(int).__pnmldump__()
>>> Instance.__pnmlload__(t)
Instance(int)
@param tree: the PNML tree to load
@type tree: C{snakes.pnml.Tree}
@return: the loaded type
@rtype: C{Instance}
"""
return cls(tree.child().to_obj())
def _full_name (fun) :
if fun.__module__ is None :
funname = fun.__name__
for modname in sys.modules :
try :
if sys.modules[modname].__dict__.get(funname, None) is fun :
return ".".join([modname, funname])
except :
pass
return funname
else :
return ".".join([fun.__module__, fun.__name__])
class TypeCheck (Type) :
"""A type whose values are accepted by a given function.
>>> def odd (val) :
... return type(val) is int and (val % 2) == 1
>>> 3 in TypeCheck(odd)
True
>>> 4 in TypeCheck(odd)
False
>>> 3.0 in TypeCheck(odd)
False
"""
def __init__ (self, checker, iterate=None) :
"""Initialize the type
>>> import operator
>>> TypeCheck(operator.truth)
TypeCheck(...truth)
>>> def true_values () : # enumerates only chosen values
... yield True
... yield 42
... yield '42'
... yield [42]
... yield (42,)
... yield {True: 42}
>>> TypeCheck(operator.truth, true_values)
TypeCheck(...truth, snakes.typing.true_values)
@param checker: a function that checks one value and returns
C{True} if it is in the type and C{False} otherwise
@type checker: C{function(value)->bool}
@param iterate: C{None} or an iterator over the values of the
type
@type iterate: C{None} or C{iterator}
"""
self._check = checker
self._iterate = iterate
def __iter__ (self) :
"""
>>> def odd (val) :
... return type(val) is int and (val % 2) == 1
>>> i = iter(TypeCheck(odd))
Traceback (most recent call last):
...
ValueError: type not iterable
>>> def odd_iter () :
... i = 1
... while True :
... yield i
... yield -i
... i += 2
>>> i = iter(TypeCheck(odd, odd_iter))
>>> next(i), next(i), next(i)
(1, -1, 3)
"""
try :
return iter(self._iterate())
except TypeError :
raise ValueError("type not iterable")
def __contains__ (self, value) :
"""Check wether a value is in the type.
>>> def odd (val) :
... return type(val) is int and (val % 2) == 1
>>> 3 in TypeCheck(odd)
True
>>> 4 in TypeCheck(odd)
False
>>> 3.0 in TypeCheck(odd)
False
@param value: the value to check
@type value: C{object}
@return: C{True} if C{value} is in the type, C{False} otherwise
@rtype: C{bool}
"""
return self._check(value)
def __repr__ (self) :
"""
>>> def odd (val) :
... return type(val) is int and (val % 2) == 1
>>> repr(TypeCheck(odd))
'TypeCheck(snakes.typing.odd)'
>>> def odd_iter () :
... i = 1
... while True :
... yield i
... yield -i
... i += 2
>>> repr(TypeCheck(odd, odd_iter))
'TypeCheck(snakes.typing.odd, snakes.typing.odd_iter)'
"""
if self._iterate is None :
return "%s(%s)" % (self.__class__.__name__,
_full_name(self._check))
else :
return "%s(%s, %s)" % (self.__class__.__name__,
_full_name(self._check),
_full_name(self._iterate))
__pnmltype__ = "checker"
def __pnmldump__ (self) :
"""Dump type to a PNML tree
>>> import operator
>>> TypeCheck(operator.truth).__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<type domain="checker">
<checker>
...
</checker>
<iterator>
<object type="NoneType"/>
</iterator>
</type>
</pnml>
>>> def true_values () : # enumerates only chosen values
... yield True
... yield 42
... yield '42'
... yield [42]
... yield (42,)
... yield {True: 42}
>>> TypeCheck(operator.truth, true_values).__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<type domain="checker">
<checker>
...
</checker>
<iterator>
<object name="snakes.typing.true_values" type="function"/>
</iterator>
</type>
</pnml>
Note that this last example would not work because C{true_values}
has been defined inside a docstring. In order to allow it to be
re-loaded from PNML, it should be defined at module level.
@return: the type serialized to PNML
@rtype: C{snakes.pnml.Tree}
"""
return Tree(self.__pnmltag__, None,
Tree("checker", None, Tree.from_obj(self._check)),
Tree("iterator", None, Tree.from_obj(self._iterate)),
domain=self.__pnmltype__)
@classmethod
def __pnmlload__ (cls, tree) :
"""Build type from a PNML tree
>>> import operator
>>> TypeCheck.__pnmlload__(TypeCheck(operator.truth).__pnmldump__())
TypeCheck(....truth)
>>> def true_values () : # enumerates only chosen values
... yield True
... yield 42
... yield '42'
... yield [42]
... yield (42,)
... yield {True: 42}
@param tree: the PNML tree to load
@type tree: C{snakes.pnml.Tree}
@return: the loaded type
@rtype: C{TypeCheck}
"""
return cls(tree.child("checker").child().to_obj(),
tree.child("iterator").child().to_obj())
class OneOf (Type) :
"""A type whose values are explicitely enumerated.
>>> 3 in OneOf(1, 2, 3, 4, 5)
True
>>> 0 in OneOf(1, 2, 3, 4, 5)
False
"""
def __init__ (self, *values) :
"""
@param values: the enumeration of the values in the type
@type values: any objects
"""
self._values = values
def __contains__ (self, value) :
"""Check wether a value is in the type.
>>> 3 in OneOf(1, 2, 3, 4, 5)
True
>>> 0 in OneOf(1, 2, 3, 4, 5)
False
@param value: the value to check
@type value: C{object}
@return: C{True} if C{value} is in the type, C{False} otherwise
@rtype: C{bool}
"""
return value in self._values
def __repr__ (self) :
"""
>>> repr(OneOf(1, 2, 3, 4, 5))
'OneOf(1, 2, 3, 4, 5)'
"""
return "OneOf(%s)" % ", ".join([repr(val) for val in self._values])
def __iter__ (self) :
"""
>>> list(iter(OneOf(1, 2, 3, 4, 5)))
[1, 2, 3, 4, 5]
"""
return iter(self._values)
__pnmltype__ = "enum"
def __pnmldump__ (self) :
"""Dump type to its PNML representation
>>> OneOf(1, 2, 3, 4, 5).__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<type domain="enum">
<object type="int">
1
</object>
<object type="int">
2
</object>
<object type="int">
3
</object>
<object type="int">
4
</object>
<object type="int">
5
</object>
</type>
</pnml>
@return: PNML representation of the type
@rtype: C{snakes.pnml.Tree}
"""
return Tree(self.__pnmltag__, None,
*(Tree.from_obj(val) for val in self._values),
**dict(domain=self.__pnmltype__))
@classmethod
def __pnmlload__ (cls, tree) :
"""Build type from its PNML representation
>>> OneOf.__pnmlload__(OneOf(1, 2, 3, 4, 5).__pnmldump__())
OneOf(1, 2, 3, 4, 5)
@param tree: PNML tree to load
@type tree: C{snakes.pnml.Tree}
@return: loaded type
@rtype: C{OneOf}
"""
return cls(*(child.to_obj() for child in tree.children))
class Collection (Type) :
"""A type whose values are a given container, holding items of a
given type and ranging in a given interval.
>>> [0, 1.1, 2, 3.3, 4] in Collection(Instance(list), tNumber, 3, 10)
True
>>> [0, 1.1] in Collection(Instance(list), tNumber, 3, 10) #too short
False
>>> [0, '1.1', 2, 3.3, 4] in Collection(Instance(list), tNumber, 3, 10) #wrong item
False
"""
def __init__ (self, collection, items, min=None, max=None) :
"""Initialise the type
>>> Collection(Instance(list), tNumber, 3, 10)
Collection(Instance(list), (Instance(int) | Instance(float)), min=3, max=10)
@param collection: the collection type
@type collection: any container type
@param items: the type of the items
@type items: any type
@param min: the minimum number of items in the collection
@type min: C{int} or C{None}
@param max: the maximum number of items in the collection
@type max: C{int} or C{None}
"""
self._collection = collection
self._class = collection._class
self._items = items
self._max = max
self._min = min
def __contains__ (self, value) :
"""Check wether a value is in the type.
>>> [0, 1.1, 2, 3.3, 4] in Collection(Instance(list), tNumber, 3, 10)
True
>>> [0, 1.1] in Collection(Instance(list), tNumber, 3, 10) #too short
False
>>> [0, '1.1', 2, 3.3, 4] in Collection(Instance(list), tNumber, 3, 10) #wrong item
False
@param value: the value to check
@type value: C{object}
@return: C{True} if C{value} is in the type, C{False} otherwise
@rtype: C{bool}
"""
if value not in self._collection :
return False
try :
len(value)
iter(value)
except TypeError :
return False
if (self._min is not None) and (len(value) < self._min) :
return False
if (self._max is not None) and (len(value) > self._max) :
return False
for item in value :
if item not in self._items :
return False
return True
def __repr__ (self) :
"""
>>> repr(Collection(Instance(list), tNumber, 3, 10))
'Collection(Instance(list), (Instance(int) | Instance(float)), min=3, max=10)'
"""
if (self._min is None) and (self._max is None) :
return "Collection(%s, %s)" % (repr(self._collection),
repr(self._items))
elif self._min is None :
return "Collection(%s, %s, max=%s)" % (repr(self._collection),
repr(self._items),
repr(self._max))
elif self._max is None :
return "Collection(%s, %s, min=%s)" % (repr(self._collection),
repr(self._items),
repr(self._min))
else :
return "Collection(%s, %s, min=%s, max=%s)" % (repr(self._collection),
repr(self._items),
repr(self._min),
repr(self._max))
__pnmltype__ = "collection"
def __pnmldump__ (self) :
"""Dump type to a PNML tree
>>> Collection(Instance(list), tNumber, 3, 10).__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<type domain="collection">
<container>
<type domain="instance">
<object name="list" type="class"/>
</type>
</container>
<items>
<type domain="union">
<left>
<type domain="instance">
<object name="int" type="class"/>
</type>
</left>
<right>
<type domain="instance">
<object name="float" type="class"/>
</type>
</right>
</type>
</items>
<min>
<object type="int">
3
</object>
</min>
<max>
<object type="int">
10
</object>
</max>
</type>
</pnml>
@return: the PNML representation of the type
@rtype: C{snakes.pnml.Tree}
"""
return Tree(self.__pnmltag__, None,
Tree("container", None, Tree.from_obj(self._collection)),
Tree("items", None, Tree.from_obj(self._items)),
Tree("min", None, Tree.from_obj(self._min)),
Tree("max", None, Tree.from_obj(self._max)),
domain=self.__pnmltype__)
@classmethod
def __pnmlload__ (cls, tree) :
"""Load type from its PNML representation
>>> t = Collection(Instance(list), tNumber, 3, 10).__pnmldump__()
>>> Collection.__pnmlload__(t)
Collection(Instance(list), (Instance(int) | Instance(float)),
min=3, max=10)
@param tree: the PNML tree to load
@type tree: C{snakes.pnml.Tree}
@return: the loaded type
@rtype: C{Collection}
"""
return cls(tree.child("container").child().to_obj(),
tree.child("items").child().to_obj(),
tree.child("min").child().to_obj(),
tree.child("max").child().to_obj())
def List (items, min=None, max=None) :
"""Shorthand for instantiating C{Collection}
>>> List(tNumber, min=3, max=10)
Collection(Instance(list), (Instance(int) | Instance(float)), min=3, max=10)
@param items: the type of the elements in the collection
@type items: C{Type}
@param min: the minimum number of elements in the collection
@type min: C{int} or C{None}
@param max: the maximum number of elements in the collection
@type max: C{int} or C{None}
@return: a type that checks the given constraints
@rtype: C{Collection}
"""
return Collection(Instance(list), items, min, max)
def Tuple (items, min=None, max=None) :
"""Shorthand for instantiating C{Collection}
>>> Tuple(tNumber, min=3, max=10)
Collection(Instance(tuple), (Instance(int) | Instance(float)), min=3, max=10)
@param items: the type of the elements in the collection
@type items: C{Type}
@param min: the minimum number of elements in the collection
@type min: C{int} or C{None}
@param max: the maximum number of elements in the collection
@type max: C{int} or C{None}
@return: a type that checks the given constraints
@rtype: C{Collection}
"""
return Collection(Instance(tuple), items, min, max)
def Set (items, min=None, max=None) :
"""Shorthand for instantiating C{Collection}
>>> Set(tNumber, min=3, max=10)
Collection(Instance(set), (Instance(int) | Instance(float)), min=3, max=10)
@param items: the type of the elements in the collection
@type items: C{Type}
@param min: the minimum number of elements in the collection
@type min: C{int} or C{None}
@param max: the maximum number of elements in the collection
@type max: C{int} or C{None}
@return: a type that checks the given constraints
@rtype: C{Collection}
"""
return Collection(Instance(set), items, min, max)
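The `List`, `Tuple` and `Set` shorthands all delegate to `Collection` with the same membership rule. As a minimal, self-contained sketch (plain predicates stand in for `snakes` types; `check_collection` and `is_number` are hypothetical names, not part of the library):

```python
# Sketch of the membership rule a Collection-style type applies: the value
# must be an instance of the container class, every element must belong to
# the item type, and the length must respect the optional min/max bounds.
def check_collection(value, cls, item_check, min=None, max=None):
    if not isinstance(value, cls):
        return False
    if min is not None and len(value) < min:
        return False
    if max is not None and len(value) > max:
        return False
    return all(item_check(x) for x in value)

is_number = lambda x: isinstance(x, (int, float))
print(check_collection([1, 2.0, 3], list, is_number, min=3, max=10))  # True
print(check_collection([1, 2], list, is_number, min=3))  # False (too short)
```

This mirrors why `List(tNumber, min=3, max=10)` accepts `[1, 2.0, 3]` but rejects a two-element list.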
class Mapping (Type) :
"""A type whose values are mapping (eg, C{dict})
>>> {'Yes': True, 'No': False} in Mapping(tString, tAll)
True
>>> {True: 1, False: 0} in Mapping(tString, tAll)
False
"""
def __init__ (self, keys, items, _dict=Instance(dict)) :
"""Initialise a mapping type
>>> Mapping(tInteger, tFloat)
Mapping(Instance(int), Instance(float), Instance(dict))
>>> from snakes.data import hdict
>>> Mapping(tInteger, tFloat, Instance(hdict))
Mapping(Instance(int), Instance(float), Instance(hdict))
@param keys: the type for the keys
@type keys: any type
@param items: the type for the items
@type items: any type
@param _dict: the class that mappings must be instances of
@type _dict: any C{dict}-like class
"""
self._keys = keys
self._items = items
self._dict = _dict
def __contains__ (self, value) :
"""Check wether a value is in the type.
>>> {'Yes': True, 'No': False} in Mapping(tString, tAll)
True
>>> {True: 1, False: 0} in Mapping(tString, tAll)
False
@param value: the value to check
@type value: C{object}
@return: C{True} if C{value} is in the type, C{False} otherwise
@rtype: C{bool}
"""
if value not in self._dict :
return False
for key, item in value.items() :
if key not in self._keys :
return False
if item not in self._items :
return False
return True
def __repr__ (self) :
"""Return a string representation of the type suitable for C{eval}
>>> repr(Mapping(tString, tAll))
'Mapping(Instance(str), tAll, Instance(dict))'
@return: precise string representation
@rtype: C{str}
"""
return "Mapping(%s, %s, %s)" % (repr(self._keys),
repr(self._items),
repr(self._dict))
__pnmltype__ = "mapping"
def __pnmldump__ (self) :
"""Dump type to a PNML tree
>>> from snakes.hashables import hdict
>>> Mapping(tString, tAll, Instance(hdict)).__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<type domain="mapping">
<keys>
<type domain="instance">
<object name="str" type="class"/>
</type>
</keys>
<items>
<type domain="universal"/>
</items>
<container>
<type domain="instance">
<object name="snakes.hashables.hdict" type="class"/>
</type>
</container>
</type>
</pnml>
@return: PNML representation of the type
@rtype: C{snakes.pnml.Tree}
"""
return Tree(self.__pnmltag__, None,
Tree("keys", None, Tree.from_obj(self._keys)),
Tree("items", None, Tree.from_obj(self._items)),
Tree("container", None, Tree.from_obj(self._dict)),
domain=self.__pnmltype__)
@classmethod
def __pnmlload__ (cls, tree) :
"""Load type from its PNML representation
>>> from snakes.hashables import hdict
>>> t = Mapping(tString, tAll, Instance(hdict)).__pnmldump__()
>>> Mapping.__pnmlload__(t)
Mapping(Instance(str), tAll, Instance(hdict))
@param tree: PNML representation of the type
@type tree: C{snakes.pnml.Tree}
@return: the loaded type
@rtype: C{Mapping}
"""
return cls(tree.child("keys").child().to_obj(),
tree.child("items").child().to_obj(),
tree.child("container").child().to_obj())
class Range (Type) :
"""A type whose values are in a given range
Notice that ranges are not expanded in memory, so huge
ranges can be used.
>>> 3 in Range(1, 2**128, 2)
True
>>> 4 in Range(1, 2**128, 2)
False
"""
def __init__ (self, first, last, step=1) :
"""The values are those that the builtin C{range(first, last, step)}
would return.
>>> Range(1, 10)
Range(1, 10)
>>> Range(1, 10, 2)
Range(1, 10, 2)
@param first: first element in the range
@type first: C{int}
@param last: upper bound of the range, not belonging to it
@type last: C{int}
@param step: step between elements in the range
@type step: C{int}
"""
self._first, self._last, self._step = first, last, step
def __contains__ (self, value) :
"""Check wether a value is in the type.
>>> 1 in Range(1, 10, 2)
True
>>> 2 in Range(1, 10, 2)
False
>>> 9 in Range(1, 10, 2)
True
>>> 10 in Range(1, 10, 2)
False
@param value: the value to check
@type value: C{object}
@return: C{True} if C{value} is in the type, C{False} otherwise
@rtype: C{bool}
"""
return ((self._first <= value < self._last)
and ((value - self._first) % self._step == 0))
def __repr__ (self) :
"""Return a string representation of the type suitable for C{eval}
>>> repr(Range(1, 2**128, 2))
'Range(1, 340282366920938463463374607431768211456, 2)'
@return: precise string representation
@rtype: C{str}
"""
if self._step == 1 :
return "Range(%s, %s)" % (self._first, self._last)
else :
return "Range(%s, %s, %s)" % (self._first, self._last, self._step)
def __iter__ (self) :
"""Iterate over the elements of the type
>>> list(iter(Range(1, 10, 3)))
[1, 4, 7]
@return: an iterator over the values belonging to the range
@rtype: C{generator}
"""
return iter(xrange(self._first, self._last, self._step))
__pnmltype__ = "range"
def __pnmldump__ (self) :
"""Dump type to a PNML tree
>>> Range(1, 10, 2).__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<type domain="range">
<first>
<object type="int">
1
</object>
</first>
<last>
<object type="int">
10
</object>
</last>
<step>
<object type="int">
2
</object>
</step>
</type>
</pnml>
@return: PNML representation of the type
@rtype: C{snakes.pnml.Tree}
"""
return Tree(self.__pnmltag__, None,
Tree("first", None, Tree.from_obj(self._first)),
Tree("last", None, Tree.from_obj(self._last)),
Tree("step", None, Tree.from_obj(self._step)),
domain=self.__pnmltype__)
@classmethod
def __pnmlload__ (cls, tree) :
"""Build type from its PNML representation
>>> Range.__pnmlload__(Range(1, 10, 2).__pnmldump__())
Range(1, 10, 2)
@param tree: PNML tree to load
@type tree: C{snakes.pnml.Tree}
@return: the loaded type
@rtype: C{Range}
"""
return cls(tree.child("first").child().to_obj(),
tree.child("last").child().to_obj(),
tree.child("step").child().to_obj())
class Greater (Type) :
"""A type whose values are greater than a minimum.
The minimum and the checked values can be of any type as long as
they can be compared with C{>}.
>>> 6 in Greater(3)
True
>>> 3 in Greater(3)
False
"""
def __init__ (self, min) :
"""Initialises the type
>>> Greater(5)
Greater(5)
@param min: the greatest value not included in the type
@type min: any C{object} that supports comparison
"""
self._min = min
def __contains__ (self, value) :
"""Check wether a value is in the type.
>>> 5 in Greater(3)
True
>>> 5 in Greater(3.0)
True
>>> 3 in Greater(3.0)
False
>>> 1.0 in Greater(5)
False
@param value: the value to check
@type value: C{object}
@return: C{True} if C{value} is in the type, C{False} otherwise
@rtype: C{bool}
"""
try :
return value > self._min
except :
return False
def __repr__ (self) :
"""Return a string representation of the type suitable for C{eval}
>>> repr(Greater(3))
'Greater(3)'
@return: precise string representation
@rtype: C{str}
"""
return "Greater(%s)" % repr(self._min)
__pnmltype__ = "greater"
def __pnmldump__ (self) :
"""Dump type to its PNML representation
>>> Greater(42).__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<type domain="greater">
<object type="int">
42
</object>
</type>
</pnml>
@return: PNML representation of the type
@rtype: C{snakes.pnml.Tree}
"""
return Tree(self.__pnmltag__, None,
Tree.from_obj(self._min),
domain=self.__pnmltype__)
@classmethod
def __pnmlload__ (cls, tree) :
"""Build type from its PNLM representation
>>> Greater.__pnmlload__(Greater(42).__pnmldump__())
Greater(42)
@param tree: PNML representation to load
@type tree: C{snakes.pnml.Tree}
@return: loaded type
@rtype: C{Greater}
"""
return cls(tree.child().to_obj())
class GreaterOrEqual (Type) :
"""A type whose values are greater or equal than a minimum.
See the description of C{Greater}
"""
def __init__ (self, min) :
"""Initialises the type
>>> GreaterOrEqual(5)
GreaterOrEqual(5)
@param min: the minimal allowed value
@type min: any C{object} that supports comparison
"""
self._min = min
def __contains__ (self, value) :
"""Check wether a value is in the type.
>>> 5 in GreaterOrEqual(3)
True
>>> 5 in GreaterOrEqual(3.0)
True
>>> 3 in GreaterOrEqual(3.0)
True
>>> 1.0 in GreaterOrEqual(5)
False
@param value: the value to check
@type value: C{object}
@return: C{True} if C{value} is in the type, C{False} otherwise
@rtype: C{bool}
"""
try :
return value >= self._min
except :
return False
def __repr__ (self) :
"""Return a strign representation of the type suitable for C{eval}
>>> repr(GreaterOrEqual(3))
'GreaterOrEqual(3)'
@return: precise string representation
@rtype: C{str}
"""
return "GreaterOrEqual(%s)" % repr(self._min)
__pnmltype__ = "greatereq"
def __pnmldump__ (self) :
"""Dump type to its PNML representation
>>> GreaterOrEqual(42).__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<type domain="greatereq">
<object type="int">
42
</object>
</type>
</pnml>
@return: PNML representation of the type
@rtype: C{snakes.pnml.Tree}
"""
return Tree(self.__pnmltag__, None,
Tree.from_obj(self._min),
domain=self.__pnmltype__)
@classmethod
def __pnmlload__ (cls, tree) :
"""Build type from its PNLM representation
>>> GreaterOrEqual.__pnmlload__(GreaterOrEqual(42).__pnmldump__())
GreaterOrEqual(42)
@param tree: PNML representation to load
@type tree: C{snakes.pnml.Tree}
@return: loaded type
@rtype: C{GreaterOrEqual}
"""
return cls(tree.child().to_obj())
class Less (Type) :
"""A type whose values are less than a maximum.
See the description of C{Greater}
"""
def __init__ (self, max) :
"""Initialises the type
>>> Less(5)
Less(5)
@param max: the smallest value not included in the type
@type max: any C{object} that supports comparison
"""
self._max = max
def __contains__ (self, value) :
"""Check wether a value is in the type.
>>> 5.0 in Less(5)
False
>>> 4.9 in Less(5)
True
>>> 4 in Less(5.0)
True
@param value: the value to check
@type value: C{object}
@return: C{True} if C{value} is in the type, C{False} otherwise
@rtype: C{bool}
"""
try :
return value < self._max
except :
return False
def __repr__ (self) :
"""Return a string representation of the type suitable for C{eval}
>>> repr(Less(3))
'Less(3)'
@return: precise string representation
@rtype: C{str}
"""
return "Less(%s)" % repr(self._max)
__pnmltype__ = "less"
def __pnmldump__ (self) :
"""Dump type to its PNML representation
>>> Less(3).__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<type domain="less">
<object type="int">
3
</object>
</type>
</pnml>
@return: PNML representation of the type
@rtype: C{snakes.pnml.Tree}
"""
return Tree(self.__pnmltag__, None,
Tree.from_obj(self._max),
domain=self.__pnmltype__)
@classmethod
def __pnmlload__ (cls, tree) :
"""Build type from its PNML representation
>>> Less.__pnmlload__(Less(3).__pnmldump__())
Less(3)
@param tree: PNML tree to load
@type tree: C{snakes.pnml.Tree}
@return: loaded type
@rtype: C{Less}
"""
return cls(tree.child().to_obj())
class LessOrEqual (Type) :
"""A type whose values are less than or equal to a maximum.
See the description of C{Greater}
"""
def __init__ (self, max) :
"""Initialises the type
>>> LessOrEqual(5)
LessOrEqual(5)
@param max: the greatest value in the type
@type max: any C{object} that supports comparison
"""
self._max = max
def __contains__ (self, value) :
"""Check wether a value is in the type.
>>> 5 in LessOrEqual(5.0)
True
>>> 5.1 in LessOrEqual(5)
False
>>> 1.0 in LessOrEqual(5)
True
@param value: the value to check
@type value: C{object}
@return: C{True} if C{value} is in the type, C{False} otherwise
@rtype: C{bool}
"""
try :
return value <= self._max
except :
return False
def __repr__ (self) :
"""Return a string representation of the type suitable for C{eval}
>>> repr(LessOrEqual(3))
'LessOrEqual(3)'
@return: precise string representation
@rtype: C{str}
"""
return "LessOrEqual(%s)" % repr(self._max)
__pnmltype__ = "lesseq"
def __pnmldump__ (self) :
"""Dump type to its PNML representation
>>> LessOrEqual(4).__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<type domain="lesseq">
<object type="int">
4
</object>
</type>
</pnml>
@return: PNML representation of the type
@rtype: C{snakes.pnml.Tree}
"""
return Tree(self.__pnmltag__, None,
Tree.from_obj(self._max),
domain=self.__pnmltype__)
@classmethod
def __pnmlload__ (cls, tree) :
"""Build type from its PNML representation
>>> LessOrEqual.__pnmlload__(LessOrEqual(4).__pnmldump__())
LessOrEqual(4)
@param tree: PNML tree to load
@type tree: C{snakes.pnml.Tree}
@return: loaded type
@rtype: C{LessOrEqual}
"""
return cls(tree.child().to_obj())
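The four comparison types (`Greater`, `GreaterOrEqual`, `Less`, `LessOrEqual`) differ only in the operator they apply and in treating an impossible comparison as non-membership. A hedged sketch of that shared shape (the underscored names are hypothetical, not the `snakes` classes):

```python
# Sketch of the comparison types: each wraps a bound and an operator, and a
# failed comparison (TypeError) counts as "not in the type", mirroring the
# try/except in Greater.__contains__.
import operator

class Compare(object):
    def __init__(self, bound, op):
        self.bound, self.op = bound, op
    def __contains__(self, value):
        try:
            return self.op(value, self.bound)
        except TypeError:
            return False

Greater_ = lambda n: Compare(n, operator.gt)
LessOrEqual_ = lambda n: Compare(n, operator.le)
print(6 in Greater_(3))        # True
print(3 in Greater_(3))        # False
print(5 in LessOrEqual_(5.0))  # True
```

On Python 3, `"a" in Greater_(3)` returns `False` rather than raising, which is the behaviour the `try`/`except` is there for.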
class CrossProduct (Type) :
"""A type whose values are tuples, each component of them being in
given types. The resulting type is the cartesian cross product of
the compound types.
>>> (1, 3, 5) in CrossProduct(Range(1, 10), Range(1, 10, 2), Range(1, 10))
True
>>> (2, 4, 6) in CrossProduct(Range(1, 10), Range(1, 10, 2), Range(1, 10))
False
"""
def __init__ (self, *types) :
"""Initialise the type
>>> CrossProduct(Instance(int), Instance(float))
CrossProduct(Instance(int), Instance(float))
@param types: the types of each component of the allowed tuples
@type types: C{Type}
"""
self._types = types
_iterable(self, *types)
def __repr__ (self) :
"""Return a string representation of the type suitable for C{eval}
>>> repr(CrossProduct(Range(1, 10), Range(1, 10, 2), Range(1, 10)))
'CrossProduct(Range(1, 10), Range(1, 10, 2), Range(1, 10))'
@return: precise string representation
@rtype: C{str}
"""
return "CrossProduct(%s)" % ", ".join([repr(t) for t in self._types])
def __contains__ (self, value) :
"""Check wether a value is in the type.
>>> (1, 3, 5) in CrossProduct(Range(1, 10), Range(1, 10, 2), Range(1, 10))
True
>>> (2, 4, 6) in CrossProduct(Range(1, 10), Range(1, 10, 2), Range(1, 10))
False
@param value: the value to check
@type value: C{object}
@return: C{True} if C{value} is in the type, C{False} otherwise
@rtype: C{bool}
"""
if not isinstance(value, tuple) :
return False
elif len(self._types) != len(value) :
return False
for item, t in zip(value, self._types) :
if not item in t :
return False
return True
def __iter__ (self) :
"""A cross product is iterable if so are all its components.
>>> list(iter(CrossProduct(Range(1, 3), Range(3, 5))))
[(1, 3), (1, 4), (2, 3), (2, 4)]
>>> iter(CrossProduct(Range(1, 100), tAll))
Traceback (most recent call last):
...
ValueError: iteration over non-sequence
@return: an iterator over the values in the type
@rtype: C{generator}
@raise ValueError: when one component is not iterable
"""
self.__iterable__()
return cross(self._types)
__pnmltype__ = "crossproduct"
def __pnmldump__ (self) :
"""Dumps type to its PNML representation
>>> CrossProduct(Instance(int), Instance(float)).__pnmldump__()
<?xml version="1.0" encoding="utf-8"?>
<pnml>
<type domain="crossproduct">
<type domain="instance">
<object name="int" type="class"/>
</type>
<type domain="instance">
<object name="float" type="class"/>
</type>
</type>
</pnml>
@return: PNML representation of the type
@rtype: C{snakes.pnml.Tree}
"""
return Tree(self.__pnmltag__, None,
*(Tree.from_obj(t) for t in self._types),
**dict(domain=self.__pnmltype__))
@classmethod
def __pnmlload__ (cls, tree) :
"""Build type from its PNML representation
>>> t = CrossProduct(Instance(int), Instance(float)).__pnmldump__()
>>> CrossProduct.__pnmlload__(t)
CrossProduct(Instance(int), Instance(float))
@param tree: PNML tree to load
@type tree: C{snakes.pnml.Tree}
@return: loaded type
@rtype: C{CrossProduct}
"""
return cls(*(child.to_obj() for child in tree.children))
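The iteration order shown in the `CrossProduct.__iter__` doctest (the helper `cross` over the component types) matches `itertools.product`, as this small sketch illustrates (`cross_iter` is a hypothetical stand-in):

```python
# Sketch of CrossProduct iteration: yielding every tuple whose i-th component
# is drawn from the i-th iterable, in lexicographic order like the doctest
# [(1, 3), (1, 4), (2, 3), (2, 4)] for Range(1, 3) x Range(3, 5).
from itertools import product

def cross_iter(*components):
    return product(*components)

print(list(cross_iter(range(1, 3), range(3, 5))))
# [(1, 3), (1, 4), (2, 3), (2, 4)]
```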
tAll = _All()
tNothing = _Nothing()
tString = Instance(str)
tList = List(tAll)
tInteger = Instance(int)
tNatural = tInteger & GreaterOrEqual(0)
tPositive = tInteger & Greater(0)
tFloat = Instance(float)
tNumber = tInteger|tFloat
tDict = Instance(dict)
tNone = OneOf(None)
tBoolean = OneOf(True, False)
tTuple = Tuple(tAll)
tPair = Tuple(tAll, min=2, max=2)
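Derived constants such as `tNatural = tInteger & GreaterOrEqual(0)` rely on type intersection (`Type.__and__`). A self-contained sketch of that combinator, with plain predicates standing in for `snakes` types (all names below are hypothetical):

```python
# Sketch of type intersection: a value belongs to A & B when it belongs to
# both, which is how tNatural = tInteger & GreaterOrEqual(0) is built.
t_integer = lambda v: isinstance(v, int)
greater_or_equal = lambda n: (lambda v: v >= n)

def intersect(*checks):  # mirrors Type.__and__
    return lambda v: all(c(v) for c in checks)

t_natural = intersect(t_integer, greater_or_equal(0))
print(t_natural(3))    # True
print(t_natural(-1))   # False
print(t_natural(3.0))  # False (not an int)
```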
from snakes import SnakesError
class CompilationError (SnakesError) :
pass
class DeclarationError (SnakesError) :
pass
import sys, operator, inspect, re, collections
from snakes.utils.abcd import CompilationError, DeclarationError
from snakes.lang.abcd.parser import ast
from snakes.lang import unparse
import snakes.utils.abcd.transform as transform
from snakes.data import MultiSet
from snakes import *
class Decl (object) :
OBJECT = "object"
TYPE = "type"
BUFFER = "buffer"
SYMBOL = "symbol"
CONST = "const"
NET = "net"
TASK = "task"
IMPORT = "import"
def __init__ (self, node, kind=None, **data) :
self.node = node
classname = node.__class__.__name__
if kind is not None :
self.kind = kind
elif classname == "AbcdTypedef" :
self.kind = self.TYPE
elif classname == "AbcdBuffer" :
self.kind = self.BUFFER
elif classname == "AbcdSymbol" :
self.kind = self.SYMBOL
elif classname == "AbcdConst" :
self.kind = self.CONST
elif classname == "AbcdNet" :
self.kind = self.NET
elif classname == "AbcdTask" :
self.kind = self.TASK
elif classname in ("Import", "ImportFrom") :
self.kind = self.IMPORT
else :
self.kind = self.OBJECT
for key, val in data.items() :
setattr(self, key, val)
class GetInstanceArgs (object) :
"""Bind arguments for a net instance"""
def __init__ (self, node) :
self.argspec = []
self.arg = {}
self.buffer = {}
self.net = {}
self.task = {}
seen = set()
for a in node.args.args + node.args.kwonlyargs :
if a.arg in seen :
self._raise(CompilationError,
"duplicate argument %r" % a.arg)
seen.add(a.arg)
if a.annotation is None :
self.argspec.append((a.arg, "arg"))
else :
self.argspec.append((a.arg, a.annotation.id))
def __call__ (self, *args) :
self.arg.clear()
self.buffer.clear()
self.net.clear()
self.task.clear()
for (name, kind), value in zip(self.argspec, args) :
getattr(self, kind)[name] = value
return self.arg, self.buffer, self.net, self.task
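`GetInstanceArgs` records one `(name, kind)` pair per parameter (the kind coming from the annotation, defaulting to `"arg"`), then dispatches call values into one dict per kind. A hedged, self-contained sketch of that binding scheme (`bind` and the spec below are illustrative, not the actual class):

```python
# Sketch of GetInstanceArgs' binding: each declared parameter has a kind
# ("arg", "buffer", "net" or "task"); calling the binder routes positional
# values into the dict matching each parameter's kind.
def bind(argspec, *values):
    bound = {"arg": {}, "buffer": {}, "net": {}, "task": {}}
    for (name, kind), value in zip(argspec, values):
        bound[kind][name] = value
    return bound

spec = [("x", "arg"), ("b", "buffer"), ("n", "net")]
print(bind(spec, 42, "queue", "worker"))
# {'arg': {'x': 42}, 'buffer': {'b': 'queue'}, 'net': {'n': 'worker'}, 'task': {}}
```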
class Builder (object) :
def __init__ (self, snk, path=[], up=None) :
self.snk = snk
self.path = path
self.up = up
self.env = {"True": Decl(None, kind=Decl.CONST, value=True),
"False": Decl(None, kind=Decl.CONST, value=False),
"None": Decl(None, kind=Decl.CONST, value=None),
"dot": Decl(None, kind=Decl.CONST, value=self.snk.dot),
"BlackToken": Decl(None, kind=Decl.TYPE,
type=self.snk.Instance(self.snk.BlackToken))}
self.stack = []
if up :
self.globals = up.globals
else :
self.globals = snk.Evaluator(dot=self.snk.dot,
BlackToken=self.snk.BlackToken)
self.instances = MultiSet()
# utilities
def _raise (self, error, message) :
"""raise an exception with appropriate location
"""
if self.stack :
pos = "[%s:%s]: " % (self.stack[-1].lineno,
self.stack[-1].col_offset)
else :
pos = ""
raise error(pos+message)
def _eval (self, expr, *largs, **kwargs) :
env = self.globals.copy()
if isinstance(expr, ast.AST) :
expr = unparse(expr)
return env(expr, dict(*largs, **kwargs))
# declarations management
def __setitem__ (self, name, value) :
if name in self.env :
self._raise(DeclarationError, "duplicated declaration of %r" % name)
self.env[name] = value
def __getitem__ (self, name) :
if name in self.env :
return self.env[name]
elif self.up is None :
self._raise(DeclarationError, "%r not declared" % name)
else :
return self.up[name]
def __contains__ (self, name) :
if name in self.env :
return True
elif self.up is None :
return False
else :
return name in self.up
def goto (self, name) :
if name in self.env :
return self
elif self.up is None :
self._raise(DeclarationError, "%r not declared" % name)
else :
return self.up.goto(name)
def get_buffer (self, name) :
if name not in self :
self._raise(DeclarationError,
"buffer %r not declared" % name)
decl = self[name]
if decl.kind != Decl.BUFFER :
self._raise(DeclarationError,
"%r declared as %s but used as buffer"
% (name, decl.kind))
elif decl.capacity is not None :
pass
#self._raise(NotImplementedError, "capacities not (yet) supported")
return decl
def get_net (self, name) :
if name not in self :
self._raise(DeclarationError,
"net %r not declared" % name)
decl = self[name]
if decl.kind != Decl.NET :
self._raise(DeclarationError,
"%r declared as %s but used as net"
% (name, decl.kind))
return decl
def get_task (self, name) :
if name not in self :
self._raise(DeclarationError,
"task %r not declared" % name)
decl = self[name]
if decl.kind != Decl.TASK :
self._raise(DeclarationError,
"%r declared as %s but used as task"
% (name, decl.kind))
return decl
# main compiler entry point
def build (self, node, prefix="", fallback=None) :
self.stack.append(node)
if prefix :
prefix += "_"
method = "build_" + prefix + node.__class__.__name__
visitor = getattr(self, method, fallback or self.build_fail)
try :
return visitor(node)
finally :
self.stack.pop(-1)
def build_fail (self, node) :
self._raise(CompilationError, "do not know how to compile %s"
% node.__class__.__name__)
def build_arc (self, node) :
return self.build(node, "arc", self.build_arc_expr)
# specification
def build_AbcdSpec (self, node) :
for decl in node.context :
self.build(decl)
tasks = [self._build_TaskNet(decl.node)
for name, decl in self.env.items()
if decl.kind == Decl.TASK and decl.used]
net = reduce(operator.or_, tasks, self.build(node.body))
# set local buffers marking, and hide them
for name, decl in ((n, d) for n, d in self.env.items()
if d.kind == Decl.BUFFER) :
status = self.snk.buffer(name)
for place in net.status(status) :
place = net.place(place)
try :
place.reset(decl.marking)
except ValueError as err :
self._raise(CompilationError,
"invalid initial marking (%s)" % err)
if decl.capacity is None :
cap = None
else :
#cap = [c.n if c else None for c in decl.capacity]
# TODO: accept more than integers as capacities
cap = []
for c in decl.capacity :
if c is None :
cap.append(None)
else :
try :
cap.append(self._eval(c))
except :
err = sys.exc_info()[1]
self._raise(CompilationError,
"could not evaluate %r, %s"
% (unparse(c), err))
place.label(path=self.path,
capacity=cap)
# TODO: check capacity
net.hide(status)
if self.up is None :
# set entry marking
for place in net.status(self.snk.entry) :
net.place(place).reset(self.snk.dot)
# rename nodes
self._rename_nodes(net)
# copy global declarations
net.globals.update(self.globals)
# add info about source file
net.label(srcfile=str(node.st.text.filename))
# add assertions
net.label(asserts=node.asserts)
return net
def _build_TaskNet (self, node) :
self._raise(NotImplementedError, "tasks not (yet) supported")
def _rename_nodes (self, net) :
# generate unique names
total = collections.defaultdict(int)
count = collections.defaultdict(int)
def ren (node) :
if net.has_transition(node.name) :
status = node.label("srctext")
else :
if node.status == self.snk.entry :
status = "e"
elif node.status == self.snk.internal :
status = "i"
elif node.status == self.snk.exit :
status = "x"
else :
status = node.label("buffer")
name = ".".join(node.label("path") + [status])
if total[name] > 1 :
count[name] += 1
name = "%s#%s" % (name, count[name])
return name
# count occurrences of each name base
_total = collections.defaultdict(int)
for node in net.node() :
_total[ren(node)] += 1
total = _total
# rename nodes using a depth-first traversal
done = set(net.status(self.snk.entry))
todo = [net.node(n) for n in done]
while todo :
node = todo.pop(-1)
new = ren(node)
if new != node.name :
net.rename_node(node.name, new)
done.add(new)
for n in net.post(new) - done :
todo.append(net.node(n))
done.add(n)
# rename isolated nodes
for letter, method in (("p", net.place), ("t", net.transition)) :
for node in method() :
if node.name not in done :
net.rename_node(node.name, ren(node))
# declarations
def build_AbcdTypedef (self, node) :
"""
>>> import snakes.nets
>>> b = Builder(snakes.nets)
>>> b.build(ast.AbcdTypedef(name='number', type=ast.UnionType(types=[ast.NamedType(name='int'), ast.NamedType(name='float')])))
>>> b.env['number'].type
(Instance(int) | Instance(float))
>>> b.build(ast.ImportFrom(module='inspect', names=[ast.alias(name='isbuiltin')]))
>>> b.build(ast.AbcdTypedef(name='builtin', type=ast.NamedType(name='isbuiltin')))
>>> b.env['builtin'].type
TypeCheck(inspect.isbuiltin)
"""
self[node.name] = Decl(node, type=self.build(node.type))
def build_AbcdBuffer (self, node) :
self[node.name] = Decl(node,
type=self.build(node.type),
capacity=node.capacity,
marking=self._eval(node.content))
def build_AbcdSymbol (self, node) :
for name in node.symbols :
value = self.snk.Symbol(name, False)
self[name] = Decl(node, value=value)
self.globals[name] = value
def build_AbcdConst (self, node) :
value = self._eval(node.value)
self[node.name] = Decl(node, value=value)
self.globals[node.name] = value
def build_AbcdNet (self, node) :
self[node.name] = Decl(node, getargs=GetInstanceArgs(node))
def build_AbcdTask (self, node) :
self._raise(NotImplementedError, "tasks not (yet) supported")
self[node.name] = Decl(node, used=False)
def build_Import (self, node) :
for alias in node.names :
self[alias.asname or alias.name] = Decl(node)
self.globals.declare(unparse(node))
def build_ImportFrom (self, node) :
self.build_Import(node)
# processes
def build_AbcdAction (self, node) :
if node.guard is True :
return self._build_True(node)
elif node.guard is False :
return self._build_False(node)
else :
return self._build_action(node)
def _build_True (self, node) :
net = self.snk.PetriNet("true")
e = self.snk.Place("e", [], self.snk.tBlackToken,
status=self.snk.entry)
e.label(path=self.path)
net.add_place(e)
x = self.snk.Place("x", [], self.snk.tBlackToken,
status=self.snk.exit)
x.label(path=self.path)
net.add_place(x)
t = self.snk.Transition("t")
t.label(srctext=node.st.source(),
srcloc=(node.st.srow, node.st.scol,
node.st.erow, node.st.ecol),
path=self.path)
net.add_transition(t)
net.add_input("e", "t", self.snk.Value(self.snk.dot))
net.add_output("x", "t", self.snk.Value(self.snk.dot))
return net
def _build_False (self, node) :
net = self.snk.PetriNet("false")
e = self.snk.Place("e", [], self.snk.tBlackToken,
status=self.snk.entry)
e.label(path=self.path)
net.add_place(e)
x = self.snk.Place("x", [], self.snk.tBlackToken,
status=self.snk.exit)
x.label(path=self.path)
net.add_place(x)
return net
def _build_action (self, node) :
net = self.snk.PetriNet("flow")
e = self.snk.Place("e", [], self.snk.tBlackToken,
status=self.snk.entry)
e.label(path=self.path)
net.add_place(e)
x = self.snk.Place("x", [], self.snk.tBlackToken,
status=self.snk.exit)
x.label(path=self.path)
net.add_place(x)
t = self.snk.Transition("t", self.snk.Expression(unparse(node.guard)),
status=self.snk.tick("action"))
t.label(srctext=node.st.source(),
srcloc=(node.st.srow, node.st.scol,
node.st.erow, node.st.ecol),
path=self.path)
net.add_transition(t)
net.add_input("e", "t", self.snk.Value(self.snk.dot))
net.add_output("x", "t", self.snk.Value(self.snk.dot))
net = reduce(operator.or_, [self.build(a) for a in node.accesses],
net)
net.hide(self.snk.tick("action"))
return net
def build_AbcdFlowOp (self, node) :
return self.build(node.op)(self.build(node.left),
self.build(node.right))
def _get_instance_arg (self, arg) :
if arg.__class__.__name__ == "Name" and arg.id in self :
return self[arg.id]
else :
try :
self._eval(arg)
except :
self._raise(CompilationError,
"could not evaluate argument %r"
% arg.st.source())
return arg
def build_AbcdInstance (self, node) :
if node.net not in self :
self._raise(DeclarationError, "%r not declared" % node.net)
elif node.starargs :
self._raise(CompilationError, "* argument not allowed here")
elif node.kwargs :
self._raise(CompilationError, "** argument not allowed here")
decl = self[node.net]
if decl.kind != Decl.NET :
self._raise(DeclarationError,
"%r declared as %s but used as net"
% (node.net, decl.kind))
# unpack args
posargs, kwargs = [], {}
for arg in node.args :
posargs.append(self._get_instance_arg(arg))
for kw in node.keywords :
kwargs[kw.arg] = self._get_instance_arg(kw.value)
# bind args
try :
args, buffers, nets, tasks = decl.getargs(*posargs, **kwargs)
except TypeError :
c, v, t = sys.exc_info()
self._raise(CompilationError, str(v))
for d, kind in ((buffers, Decl.BUFFER),
(nets, Decl.NET),
(tasks, Decl.TASK)) :
for k, v in d.items() :
if v.kind != kind :
self._raise(DeclarationError,
"%r declared as %s but used as %s"
% (k, v.kind, kind))
d[k] = v.node.name
# build sub-net
binder = transform.ArgsBinder(args, buffers, nets, tasks)
spec = binder.visit(decl.node.body)
if node.asname :
name = str(node.asname)
else :
name = node.st.source()
if name in self.instances :
name = "%s#%s" % (name, self.instances(name))
self.instances.add(name)
path = self.path + [name]
builder = self.__class__(self.snk, path, self)
return builder.build(spec)
# control flow operations
def build_Sequence (self, node) :
return self.snk.PetriNet.__and__
def build_Choice (self, node) :
return self.snk.PetriNet.__add__
def build_Parallel (self, node) :
return self.snk.PetriNet.__or__
def build_Loop (self, node) :
return self.snk.PetriNet.__mul__
# accesses :
def build_SimpleAccess (self, node) :
decl = self.get_buffer(node.buffer)
net = self.snk.PetriNet("access")
net.add_transition(self.snk.Transition("t", status=self.snk.tick("action")))
b = self.snk.Place(str(node.buffer), [], decl.type,
status=self.snk.buffer(node.buffer))
b.label(path=self.path,
buffer=str(node.buffer),
srctext=decl.node.st.source(),
srcloc=(decl.node.st.srow, decl.node.st.scol,
decl.node.st.erow, decl.node.st.ecol))
net.add_place(b)
self.build(node.arc)(net, node.buffer, "t", self.build_arc(node.tokens))
return net
def build_FlushAccess (self, node) :
decl = self.get_buffer(node.buffer)
net = self.snk.PetriNet("access")
net.add_transition(self.snk.Transition("t", status=self.snk.tick("action")))
b = self.snk.Place(str(node.buffer), [], decl.type,
status=self.snk.buffer(node.buffer))
b.label(path=self.path,
buffer=str(node.buffer),
srctext=decl.node.st.source(),
srcloc=(decl.node.st.srow, decl.node.st.scol,
decl.node.st.erow, decl.node.st.ecol))
net.add_place(b)
net.add_input(node.buffer, "t", self.snk.Flush(node.target))
return net
def build_SwapAccess (self, node) :
decl = self.get_buffer(node.buffer)
net = self.snk.PetriNet("access")
net.add_transition(self.snk.Transition("t", status=self.snk.tick("action")))
b = self.snk.Place(node.buffer, [], decl.type,
status=self.snk.buffer(node.buffer))
b.label(path=self.path,
buffer=str(node.buffer),
srctext=decl.node.st.source(),
srcloc=(decl.node.st.srow, decl.node.st.scol,
decl.node.st.erow, decl.node.st.ecol))
net.add_place(b)
net.add_input(node.buffer, "t", self.build_arc(node.target))
net.add_output(node.buffer, "t", self.build_arc(node.tokens))
return net
def build_Spawn (self, node) :
self._raise(NotImplementedError, "tasks not (yet) supported")
def build_Wait (self, node) :
self._raise(NotImplementedError, "tasks not (yet) supported")
def build_Suspend (self, node) :
self._raise(NotImplementedError, "tasks not (yet) supported")
def build_Resume (self, node) :
self._raise(NotImplementedError, "tasks not (yet) supported")
# arc labels
def build_arc_Name (self, node) :
if node.id in self :
decl = self[node.id]
if decl.kind in (Decl.CONST, Decl.SYMBOL) :
return self.snk.Value(decl.value)
return self.snk.Variable(node.id)
def build_arc_Num (self, node) :
return self.snk.Value(node.n)
def build_arc_Str (self, node) :
return self.snk.Value(node.s)
def build_arc_Tuple (self, node) :
return self.snk.Tuple([self.build_arc(elt) for elt in node.elts])
def build_arc_expr (self, node) :
return self.snk.Expression(unparse(node))
# arcs
def build_Produce (self, node) :
def arc (net, place, trans, label) :
net.add_output(place, trans, label)
return arc
def build_Test (self, node) :
def arc (net, place, trans, label) :
net.add_input(place, trans, self.snk.Test(label))
return arc
def build_Consume (self, node) :
def arc (net, place, trans, label) :
net.add_input(place, trans, label)
return arc
def build_Fill (self, node) :
def arc (net, place, trans, label) :
net.add_output(place, trans, self.snk.Flush(str(label)))
return arc
# types
def build_UnionType (self, node) :
return reduce(operator.or_, (self.build(child)
for child in node.types))
def build_IntersectionType (self, node) :
return reduce(operator.and_, (self.build(child)
for child in node.types))
def build_CrossType (self, node) :
return self.snk.CrossProduct(*(self.build(child)
for child in node.types))
def build_ListType (self, node) :
return self.snk.List(self.build(node.items))
def build_TupleType (self, node) :
return self.snk.Collection(self.snk.Instance(tuple),
(self.build(node.items)))
def build_SetType (self, node) :
return self.snk.Set(self.build(node.items))
def build_DictType (self, node) :
return self.snk.Mapping(keys=self.build(node.keys),
items=self.build(node.items),
_dict=self.snk.Instance(self.snk.hdict))
def build_EnumType (self, node) :
return self.snk.OneOf(*(self._eval(child) for child in node.items))
def build_NamedType (self, node) :
name = node.name
if name in self and self[name].kind == Decl.TYPE :
return self[name].type
elif name in self.globals :
obj = self.globals[name]
if inspect.isclass(obj) :
return self.snk.Instance(obj)
elif inspect.isroutine(obj) :
return self.snk.TypeCheck(obj)
elif hasattr(sys.modules["__builtin__"], name) :
obj = getattr(sys.modules["__builtin__"], name)
if inspect.isclass(obj) :
return self.snk.Instance(obj)
elif inspect.isroutine(obj) :
return self.snk.TypeCheck(obj)
self._raise(CompilationError,
"invalid type %r" % name)
if __name__ == "__main__" :
import doctest
doctest.testmod(optionflags=doctest.NORMALIZE_WHITESPACE
| doctest.REPORT_ONLY_FIRST_FAILURE
| doctest.ELLIPSIS)
from snakes.lang.abcd.parser import parse
node = parse(open(sys.argv[1]).read())
import snakes.plugins
snk = snakes.plugins.load(["ops", "gv", "labels"], "snakes.nets", "snk")
build = Builder(snk)
net = build.build(node)
net.draw(sys.argv[1] + ".png")
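# build_UnionType and build_IntersectionType above fold a sequence of types
# with reduce and the operator module; the same folding technique is shown
# here on plain sets (purely illustrative, no SNAKES types involved):
import operator
from functools import reduce

_sets = [{1, 2, 3}, {2, 3, 4}, {2, 3, 5}]
_union = reduce(operator.or_, _sets)    # pairwise |, i.e. set union
_inter = reduce(operator.and_, _sets)   # pairwise &, i.e. set intersection
# _union == {1, 2, 3, 4, 5} and _inter == {2, 3}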
import heapq
from snakes.nets import StateGraph
import snakes.lang
import snkast as ast
class Checker (object) :
def __init__ (self, net) :
self.g = StateGraph(net)
self.f = [self.build(f) for f in net.label("asserts")]
def build (self, tree) :
# strip the leading "assert " keyword from the source and wrap the
# remaining expression into a function of the marking
src = """
def check (_) :
return %s
""" % tree.st.source()[7:]
ctx = dict(self.g.net.globals)
ctx["bounded"] = self.bounded
exec(src, ctx)
fun = ctx["check"]
fun.lineno = tree.lineno
return fun
def bounded (self, marking, max) :
return all(len(marking(p)) == 1 for p in marking)
def run (self) :
for state in self.g :
marking = self.g.net.get_marking()
for place in marking :
if max(marking(place).values()) > 1 :
return None, self.trace(state)
for check in self.f :
try :
if not check(marking) :
return check.lineno, self.trace(state)
except :
pass
return None, None
def path (self, tgt, src=0) :
q = [(0, src, ())]
visited = set()
while True :
(c, v1, path) = heapq.heappop(q)
if v1 not in visited :
path = path + (v1,)
if v1 == tgt :
return path
visited.add(v1)
for v2 in self.g.successors(v1) :
if v2 not in visited :
heapq.heappush(q, (c+1, v2, path))
def trace (self, state) :
path = self.path(state)
return tuple(self.g.successors(i)[j]
for i, j in zip(path[:-1], path[1:]))
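# Checker.path above is a uniform-cost (Dijkstra-style) search driven by
# heapq. A self-contained sketch of the same technique on a plain adjacency
# dict (the graph and names below are illustrative, not part of SNAKES);
# unlike the method above, this sketch also guards against an empty queue:
import heapq

def shortest_path (succ, src, tgt) :
    # succ maps a node to its successors; every edge costs 1
    queue = [(0, src, ())]
    visited = set()
    while queue :
        cost, node, path = heapq.heappop(queue)
        if node in visited :
            continue
        path = path + (node,)
        if node == tgt :
            return path
        visited.add(node)
        for nxt in succ.get(node, ()) :
            if nxt not in visited :
                heapq.heappush(queue, (cost + 1, nxt, path))
    return None

# example: two routes 0 -> 1 -> 3 and 0 -> 2 -> 3, both of length 2
_demo = {0: [1, 2], 1: [3], 2: [3]}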
import sys, optparse, os.path
import pdb, traceback
import snakes.plugins
from snakes.utils.abcd.build import Builder
from snakes.lang.abcd.parser import parse
from snakes.lang.pgen import ParseError
from snakes.utils.abcd import CompilationError, DeclarationError
from snakes.utils.abcd.simul import Simulator
from snakes.utils.abcd.checker import Checker
##
## error messages
##
ERR_ARG = 1
ERR_OPT = 2
ERR_IO = 3
ERR_PARSE = 4
ERR_PLUGIN = 5
ERR_COMPILE = 6
ERR_OUTPUT = 7
ERR_BUG = 255
def err (message) :
sys.stderr.write("abcd: %s\n" % message.strip())
def die (code, message=None) :
if message :
err(message)
if options.debug :
pdb.post_mortem(sys.exc_info()[2])
else :
sys.exit(code)
def bug () :
sys.stderr.write("""
********************************************************************
*** An unexpected error occurred. Please report this bug to ***
*** <franck.pommereau@gmail.com>, together with the execution ***
*** trace below and, if possible, a stripped-down version of the ***
*** ABCD source code that caused this bug. Thank you for your ***
*** help in improving SNAKES! ***
********************************************************************
""")
traceback.print_exc()
if options.debug :
pdb.post_mortem(sys.exc_info()[2])
else :
sys.exit(ERR_BUG)
##
## options parsing
##
gv_engines = ("dot", "neato", "twopi", "circo", "fdp")
opt = optparse.OptionParser(prog="abcd",
usage="%prog [OPTION]... FILE")
opt.add_option("-l", "--load",
dest="plugins", action="append", default=[],
help="load plugin (this option can be repeated)",
metavar="PLUGIN")
opt.add_option("-p", "--pnml",
dest="pnml", action="store", default=None,
help="save net as PNML",
metavar="OUTFILE")
for engine in gv_engines :
opt.add_option("-" + engine[0], "--" + engine,
dest="gv" + engine, action="store", default=None,
help="draw net using '%s' (from GraphViz)" % engine,
metavar="OUTFILE")
opt.add_option("-a", "--all-names",
dest="allnames", action="store_true", default=False,
help="draw control-flow places names (default: hide)")
opt.add_option("--debug",
dest="debug", action="store_true", default=False,
help="launch debugger on compiler error (default: no)")
opt.add_option("-s", "--simul",
dest="simul", action="store_true", default=False,
help="launch interactive code simulator")
opt.add_option("--check",
dest="check", action="store_true", default=False,
help="check assertions")
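# The optparse pattern used above can be exercised programmatically by
# passing an explicit argument list to parse_args; a minimal illustration
# (the throwaway parser below is not the one defined above):
import optparse as _optparse

_demo_opt = _optparse.OptionParser(prog="demo")
_demo_opt.add_option("-l", "--load", dest="plugins", action="append",
                     default=[], help="load plugin (repeatable)")
_demo_options, _demo_args = _demo_opt.parse_args(["-l", "gv,labels",
                                                  "model.abcd"])
# _demo_options.plugins == ["gv,labels"], _demo_args == ["model.abcd"]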
def getopts (args) :
global options, abcd
(options, args) = opt.parse_args(args)
plugins = []
for p in options.plugins :
plugins.extend(t.strip() for t in p.split(","))
if "ops" not in plugins :
plugins.append("ops")
if "labels" not in plugins :
plugins.append("labels")
for engine in gv_engines :
gvopt = getattr(options, "gv%s" % engine)
if gvopt and "gv" not in plugins :
plugins.append("gv")
break
options.plugins = plugins
if len(args) < 1 :
err("no input file provided")
opt.print_help()
die(ERR_ARG)
elif len(args) > 1 :
err("more than one input file provided")
opt.print_help()
die(ERR_ARG)
abcd = args[0]
if options.pnml == abcd :
err("input file also used as output (--pnml)")
opt.print_help()
die(ERR_ARG)
for engine in gv_engines :
if getattr(options, "gv%s" % engine) == abcd :
err("input file also used as output (--%s)" % engine)
opt.print_help()
die(ERR_ARG)
##
## drawing nets
##
def place_attr (place, attr) :
# fix color
if place.status == snk.entry :
attr["fillcolor"] = "green"
elif place.status == snk.internal :
pass
elif place.status == snk.exit :
attr["fillcolor"] = "yellow"
else :
attr["fillcolor"] = "lightblue"
# fix shape
if (not options.allnames
and place.status in (snk.entry, snk.internal, snk.exit)) :
attr["shape"] = "circle"
# render marking
if place._check == snk.tBlackToken :
count = len(place.tokens)
if count == 0 :
marking = " "
elif count == 1 :
marking = "@"
else :
marking = "%s@" % count
else :
marking = str(place.tokens)
# node label
if (options.allnames
or place.status not in (snk.entry, snk.internal, snk.exit)) :
attr["label"] = "%s\\n%s" % (place.name, marking)
else :
attr["label"] = "%s" % marking
def trans_attr (trans, attr) :
pass
def arc_attr (label, attr) :
if label == snk.Value(snk.dot) :
del attr["label"]
elif isinstance(label, snk.Test) :
attr["arrowhead"] = "none"
attr["label"] = " %s " % label._annotation
elif isinstance(label, snk.Flush) :
attr["arrowhead"] = "box"
attr["label"] = " %s " % label._annotation
def draw (net, engine, target) :
try :
net.draw(target, engine=engine,
place_attr=place_attr,
trans_attr=trans_attr,
arc_attr=arc_attr)
except :
die(ERR_OUTPUT, str(sys.exc_info()[1]))
##
## save pnml
##
def save_pnml (net, target) :
try :
out = open(target, "w")
out.write(snk.dumps(net))
out.close()
except :
die(ERR_OUTPUT, str(sys.exc_info()[1]))
##
## main
##
def main (args=sys.argv[1:], src=None) :
global snk
# get options
try:
if src is None :
getopts(args)
else :
getopts(list(args) + ["<string>"])
except SystemExit :
raise
except :
die(ERR_OPT, str(sys.exc_info()[1]))
# read source
if src is not None :
source = src
else :
try :
source = open(abcd).read()
except :
die(ERR_IO, "could not read input file %r" % abcd)
# parse
try :
node = parse(source, filename=abcd)
except ParseError :
die(ERR_PARSE, str(sys.exc_info()[1]))
except :
bug()
# compile
dirname = os.path.dirname(abcd)
if dirname and dirname not in sys.path :
sys.path.append(dirname)
elif "." not in sys.path :
sys.path.append(".")
try :
snk = snakes.plugins.load(options.plugins, "snakes.nets", "snk")
except :
die(ERR_PLUGIN, str(sys.exc_info()[1]))
build = Builder(snk)
try :
net = build.build(node)
net.label(srcfile=abcd)
except (CompilationError, DeclarationError) :
die(ERR_COMPILE, str(sys.exc_info()[1]))
except :
bug()
# output
if options.pnml :
save_pnml(net, options.pnml)
for engine in gv_engines :
target = getattr(options, "gv%s" % engine)
if target :
draw(net, engine, target)
trace, lineno = [], None
if options.check :
lineno, trace = Checker(net).run()
if options.simul :
Simulator(snk, source, net, trace, lineno).run()
elif trace :
if lineno is None :
print("unsafe execution:")
else :
asserts = dict((a.lineno, a) for a in net.label("asserts"))
print("line %s, %r failed:"
% (lineno, asserts[lineno].st.source()))
for trans, mode in trace :
print(" %s %s" % (trans, mode))
return net
if __name__ == "__main__" :
main()
import math, operator, collections
import Tkinter as tk
import tkMessageBox as popup
try :
import Tkinter.scrolledtext as ScrolledText
except ImportError :
import ScrolledText
class Action (object) :
def __init__ (self, trans, mode, shift) :
self.trans = trans
self.mode = mode
self.net = trans.net
self.pre = self.net.get_marking()
self.post = None
srow, scol, erow, ecol = trans.label("srcloc")
self.start = "%s.%s" % (srow, scol + shift)
self.stop = "%s.%s" % (erow, ecol + shift)
self.line = srow
def fire (self) :
self.trans.fire(self.mode)
self.post = self.net.get_marking()
def __eq__ (self, other) :
try :
return (self.trans == other.trans and self.mode == other.mode
and self.net == other.net)
except AttributeError :
return False
class Trace (object) :
def __init__ (self, net) :
self.net = net
self.actions = []
def add (self, action) :
self.actions.append(action)
def back (self) :
self.net.set_marking(self.actions[-1].pre)
self.actions.pop()
def empty (self) :
return not self.actions
def __getitem__ (self, idx) :
return self.actions[idx]
def __len__ (self) :
return len(self.actions)
class Simulator (object) :
def __init__ (self, snk, src, net, trace=None, errline=None) :
self.snk = snk
self.src = src
self.net = net
self.srclength = len(src.splitlines())
self.width = int(math.ceil(math.log10(self.srclength)))
self.shift = self.width + 2
self.modes = []
self.trans2modes = collections.defaultdict(set)
self.trace = Trace(self.net)
self.build_gui(errline)
if trace is not None :
for trans, mode in trace :
trans = self.net.transition(trans.name)
action = Action(trans, mode, self.shift)
action.fire()
self.extend_trace(action)
self.trace.add(action)
self.update()
if trace :
self._back.configure(state=tk.NORMAL)
def update (self) :
self.update_modes()
self.update_state()
def build_gui (self, errline) :
self._win = tk.Tk()
self._win.title("ABCD simulator")
self._win.bind("<Return>", self.fire)
self._win.bind("<BackSpace>", self.back)
self._win.bind("<Escape>", self.quit)
# paned windows and frames
self._pan_main = tk.PanedWindow(self._win, orient=tk.HORIZONTAL)
self._pan_main.grid(row=0, column=0, sticky=tk.N+tk.S+tk.E+tk.W)
self.__pan_left = tk.PanedWindow(self._pan_main, orient=tk.VERTICAL)
self._pan_main.add(self.__pan_left)
self._pan_right = tk.PanedWindow(self._pan_main, orient=tk.VERTICAL)
self._pan_main.add(self._pan_right)
self._modes_frame = tk.Frame(self.__pan_left)
self.__pan_left.add(self._modes_frame)
self._trace_frame = tk.Frame(self.__pan_left)
self.__pan_left.add(self._trace_frame)
self._state_frame = tk.Frame(self._pan_right)
self._pan_right.add(self._state_frame)
# modes
self._modes_x = tk.Scrollbar (self._modes_frame,
orient=tk.HORIZONTAL)
self._modes_x.grid(row=1, column=0, sticky=tk.W+tk.E)
self._modes_y = tk.Scrollbar (self._modes_frame, orient=tk.VERTICAL)
self._modes_y.grid(row=0, column=1, sticky=tk.N+tk.S)
self._modes = tk.Listbox(self._modes_frame,
xscrollcommand=self._modes_x.set,
yscrollcommand=self._modes_y.set,
width=50,
font="monospace",
activestyle="none",
selectbackground="green",
selectborderwidth=0,
highlightthickness=0,
selectmode=tk.SINGLE,
disabledforeground="black")
self._modes.grid(row=0, column=0, sticky=tk.N+tk.S+tk.E+tk.W)
self._modes_x["command"] = self._modes.xview
self._modes_y["command"] = self._modes.yview
self._modes.bind("<Button-1>", self.select_mode)
self._modes.bind("<Double-Button-1>", self.select_mode_fire)
# fire button
self._fire = tk.Button(self._modes_frame,
text="Fire",
command=self.fire,
state=tk.DISABLED)
self._fire.grid(row=2, column=0, columnspan=2, sticky=tk.W+tk.E)
# back button
self._back = tk.Button(self._modes_frame,
text="Undo last action",
command=self.back,
state=tk.DISABLED)
self._back.grid(row=3, column=0, columnspan=2, sticky=tk.W+tk.E)
# resume button
self._resume = tk.Button(self._modes_frame,
text="Resume simulation",
command=self.resume,
state=tk.DISABLED)
self._resume.grid(row=4, column=0, columnspan=2, sticky=tk.W+tk.E)
# traces
self._trace_x = tk.Scrollbar (self._trace_frame,
orient=tk.HORIZONTAL)
self._trace_x.grid(row=1, column=0, sticky=tk.W+tk.E)
self._trace_y = tk.Scrollbar (self._trace_frame, orient=tk.VERTICAL)
self._trace_y.grid(row=0, column=1, sticky=tk.N+tk.S)
self._trace = tk.Listbox(self._trace_frame,
font="monospace",
width=50,
activestyle="none",
selectbackground="blue",
xscrollcommand=self._trace_x.set,
yscrollcommand=self._trace_y.set,
disabledforeground="black")
self._trace.grid(row=0, column=0, sticky=tk.N+tk.S+tk.E+tk.W)
self._trace_x["command"] = self._trace.xview
self._trace_y["command"] = self._trace.yview
self._trace.bind("<Button-1>", self.select_trace)
self._trace.insert(tk.END, "<init>")
# save trace button
# self._save = tk.Button(self._trace_frame,
# text="Save trace",
# command=self.save)
# self._save.grid(row=2, column=0, columnspan=2, sticky=tk.W+tk.E)
# source
self._source = ScrolledText.ScrolledText(self._pan_right,
font="monospace",
width=70,
height=min([self.srclength,
25]))
self._pan_right.add(self._source)
self._source.tag_config("linenum",
background="#eee",
foreground="#222")
for num, line in enumerate(self.src.splitlines()) :
if num :
self._source.insert(tk.END, "\n")
self._source.insert(tk.END, "%s: %s"
% (str(num+1).rjust(self.width), line))
self._source.tag_add("linenum",
"%s.0" % (num+1),
"%s.%s" % (num+1, self.width+1))
self._source.configure(state=tk.DISABLED)
if errline is not None :
self._source.tag_add("error",
"%s.%s" % (errline, self.shift),
"%s.end" % errline)
self._source.tag_config("error", background="red")
# states
self._state = ScrolledText.ScrolledText(self._pan_right,
font="monospace",
width=70,
state=tk.DISABLED,
height=10)
self._pan_right.add(self._state)
# setup cells expansion
for widget in (self._win.winfo_toplevel(), self._win,
self._modes_frame, self._trace_frame) :
widget.rowconfigure(0, weight=1)
widget.columnconfigure(0, weight=1)
# tag places and transitions
for place in self.net.place() :
if place.status not in (self.snk.entry, self.snk.internal,
self.snk.exit) :
srow, scol, erow, ecol = place.label("srcloc")
self._source.tag_add(place.name,
"%s.%s" % (srow, scol + self.shift),
"%s.%s" % (erow, ecol + self.shift))
self.buffer_bind(self._source, place.name)
for trans in self.net.transition() :
srow, scol, erow, ecol = trans.label("srcloc")
self._source.tag_add(trans.name,
"%s.%s" % (srow, scol + self.shift),
"%s.%s" % (erow, ecol + self.shift))
self.trans_bind(trans.name)
def trans_bind (self, tag) :
def trans_enter (evt) :
self.trans_enter(tag)
self._source.tag_bind(tag, "<Enter>", trans_enter)
def trans_leave (evt) :
self.trans_leave(tag)
self._source.tag_bind(tag, "<Leave>", trans_leave)
def buffer_bind (self, widget, tag) :
def buffer_enter (evt) :
self.buffer_enter(tag)
widget.tag_bind(tag, "<Enter>", buffer_enter)
def buffer_leave (evt) :
self.buffer_leave(tag)
widget.tag_bind(tag, "<Leave>", buffer_leave)
def run (self) :
self._win.mainloop()
def update_modes (self) :
self._modes.delete(0, tk.END)
self.modes = []
self.selected_mode = None
self.trans2modes = collections.defaultdict(set)
for trans in self.net.transition() :
self._source.tag_config(trans.name, background="white")
for mode in trans.modes() :
self.modes.append(Action(trans, mode, self.shift))
self.modes.sort(key=operator.attrgetter("line"))
for idx, action in enumerate(self.modes) :
self.trans2modes[action.trans.name].add(idx)
self._modes.insert(tk.END, "%s @ %s"
% (action.mode, action.trans.name))
self._source.tag_config(action.trans.name, background="yellow")
self._source.tag_raise(action.trans.name)
if not self.modes :
self._fire.configure(state=tk.DISABLED)
def fire (self, evt=None) :
if evt is not None :
self._fire.flash()
action = self.modes[self.selected_mode]
action.fire()
self.trace.add(action)
self.extend_trace(action)
self.update()
self._fire.configure(state=tk.DISABLED)
self._back.configure(state=tk.NORMAL)
def extend_trace (self, action) :
self._trace.insert(tk.END, "%s @ %s"
% (action.mode, action.trans.name))
self._trace.see(self._trace.size()-1)
def trans_enter (self, trans) :
if trans in self.trans2modes :
state = self._modes["state"]
self._modes.configure(state=tk.NORMAL)
for idx in self.trans2modes[trans] :
self._modes.itemconfig(idx,
background="orange",
selectbackground="orange")
self._modes.see(idx)
self._source.tag_config(trans, background="orange")
self._modes.configure(state=state)
def trans_leave (self, trans) :
if trans in self.trans2modes :
state = self._modes["state"]
self._modes.configure(state=tk.NORMAL)
for idx in self.trans2modes[trans] :
self._modes.itemconfig(idx,
background="white",
selectbackground="green")
if self.selected_mode in self.trans2modes[trans] :
self._source.tag_config(trans, background="green")
else :
self._source.tag_config(trans, background="yellow")
self._modes.configure(state=state)
def select_mode (self, evt) :
if not self.modes or self._modes["state"] == "disabled" :
return
if isinstance(evt, int) :
idx = evt
else :
idx = self._modes.nearest(evt.y)
for num in range(len(self.modes)) :
self._source.tag_config(self.modes[num].trans.name,
background="yellow")
self._source.tag_config(self.modes[idx].trans.name,
background="green")
self.selected_mode = idx
self._fire.configure(state=tk.NORMAL)
def select_mode_fire (self, evt) :
if self.modes and self._modes["state"] != "disabled" :
self.select_mode(evt)
self.fire(evt)
def back (self, evt=None) :
if evt is not None :
self._back.flash()
self.trace.back()
self._trace.delete(self._trace.size()-1)
if self._trace.curselection() :
last = self._trace.size() - 1
self._trace.selection_clear(0, last)
self._trace.see(last)
self.update()
if self.trace.empty() :
self._back.configure(state=tk.DISABLED)
def save (self, evt=None) :
if evt is not None :
self._save.flash()
def update_state (self) :
self._state.configure(state=tk.NORMAL)
self._state.delete("1.0", tk.END)
pos = 1
for place in self.net.place() :
if place.status not in (self.snk.entry, self.snk.internal,
self.snk.exit) :
self._state.insert(tk.END, "%s = %s\n"
% (place.name, place.tokens))
self._state.tag_add(place.name,
"%s.0" % pos, "%s.end" % pos)
pos += 1
self.buffer_bind(self._state, place.name)
self._state.configure(state=tk.DISABLED)
def buffer_enter (self, tag) :
self._state.tag_configure(tag, background="#cff")
self._source.tag_configure(tag, background="#cff")
def buffer_leave (self, tag) :
self._state.tag_configure(tag, background="white")
self._source.tag_configure(tag, background="white")
def quit (self, evt=None) :
if popup.askokcancel("Really quit?", "Are you sure you want to"
" quit the simulator?") :
self._win.quit()
def select_trace (self, evt) :
idx = self._trace.nearest(evt.y)
if idx == 0 :
self.net.set_marking(self.trace[0].pre)
else :
self.net.set_marking(self.trace[idx-1].post)
self._modes.configure(state=tk.NORMAL)
self.update()
if idx < len(self.trace) :
self._modes.selection_set(self.modes.index(self.trace[idx]))
self.select_mode(idx)
self._fire.configure(state=tk.DISABLED)
self._modes.configure(state=tk.DISABLED)
self._resume.configure(state=tk.NORMAL)
def resume (self) :
if self._trace.curselection() :
last = self._trace.size() - 1
self._trace.selection_clear(0, last)
self._trace.see(last)
self.net.set_marking(self.trace[-1].post)
self.update_state()
self._resume.configure(state=tk.DISABLED)
self._modes.configure(state=tk.NORMAL)
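# Simulator.update_modes above indexes enabled modes by transition name in a
# collections.defaultdict(set); the same reverse-index technique in
# isolation (names below are illustrative):
import collections

_modes = [("t1", "m0"), ("t2", "m1"), ("t1", "m2")]
_trans2modes = collections.defaultdict(set)
for _idx, (_trans, _mode) in enumerate(_modes) :
    # record at which positions each transition occurs
    _trans2modes[_trans].add(_idx)
# _trans2modes["t1"] == {0, 2}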
from snakes.lang.abcd.parser import ast
class NodeCopier (ast.NodeTransformer) :
def copy (self, node, **replace) :
args = {}
for name in node._fields + node._attributes :
old = getattr(node, name, None)
if name in replace :
new = replace[name]
elif isinstance(old, list):
new = []
for val in old :
if isinstance(val, ast.AST) :
new.append(self.visit(val))
else :
new.append(val)
elif isinstance(old, ast.AST):
new = self.visit(old)
else :
new = old
args[name] = new
if hasattr(node, "st") :
args["st"] = node.st
return node.__class__(**args)
def generic_visit (self, node) :
return self.copy(node)
class ArgsBinder (NodeCopier) :
def __init__ (self, args, buffers, nets, tasks) :
NodeCopier.__init__(self)
self.args = args
self.buffers = buffers
self.nets = nets
self.tasks = tasks
def visit_Name (self, node) :
if node.id in self.args :
return self.copy(self.args[node.id])
else :
return self.copy(node)
def visit_Instance (self, node) :
if node.net in self.nets :
return self.copy(node, net=self.nets[node.net])
else :
return self.copy(node)
def _visit_access (self, node) :
if node.buffer in self.buffers :
return self.copy(node, buffer=self.buffers[node.buffer])
else :
return self.copy(node)
def visit_SimpleAccess (self, node) :
return self._visit_access(node)
def visit_FlushAccess (self, node) :
return self._visit_access(node)
def visit_SwapAccess (self, node) :
return self._visit_access(node)
def _visit_task (self, node) :
if node.net in self.tasks :
return self.copy(node, net=self.tasks[node.net])
else :
return self.copy(node)
def visit_Spawn (self, node) :
return self._visit_task(node)
def visit_Wait (self, node) :
return self._visit_task(node)
def visit_Suspend (self, node) :
return self._visit_task(node)
def visit_Resume (self, node) :
return self._visit_task(node)
def visit_AbcdNet (self, node) :
args = self.args.copy()
buffers = self.buffers.copy()
nets = self.nets.copy()
tasks = self.tasks.copy()
netargs = ([a.arg for a in node.args.args + node.args.kwonlyargs]
+ [node.args.vararg, node.args.kwarg])
copy = True
for a in netargs :
for d in (args, buffers, nets, tasks) :
if a in d :
del d[a]
copy = False
if copy :
return self.copy(node)
else :
return self.__class__(args, buffers, nets, tasks).visit(node)
if __name__ == "__main__" :
import doctest
doctest.testmod()
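# ArgsBinder above substitutes names while copying an ABCD AST; the same
# substitution technique is sketched here with the stdlib ast module (a
# minimal example unrelated to the snakes.lang AST classes):
import ast as _ast

class _Rename (_ast.NodeTransformer) :
    def __init__ (self, mapping) :
        self.mapping = mapping
    def visit_Name (self, node) :
        # replace mapped identifiers, keep the others untouched
        if node.id in self.mapping :
            return _ast.copy_location(
                _ast.Name(id=self.mapping[node.id], ctx=node.ctx), node)
        return node

_tree = _ast.parse("x + y", mode="eval")
_tree = _ast.fix_missing_locations(_Rename({"x": "a"}).visit(_tree))
_code = compile(_tree, "<demo>", "eval")
# eval(_code, {"a": 1, "y": 2}) == 3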
import sys, os, os.path
import inspect, fnmatch, collections
import textwrap, doctest, ast
import snakes
from snakes.lang import unparse
##
## console messages
##
CLEAR = "\033[0m"
BLUE = "\033[1;34m"
BOLD = "\033[1;38m"
GRAY = "\033[1;30m"
LIGHTGRAY = "\033[0;30m"
GREEN = "\033[1;32m"
CYAN = "\033[1;36m"
MAGENTA = "\033[1;35m"
RED = "\033[1;31m"
YELLOW = "\033[1;33m"
def log (message, color=None, eol=True) :
if color :
sys.stderr.write(color)
if isinstance(message, (list, tuple)) :
message = " ".join(str(m) for m in message)
sys.stderr.write(message)
if color :
sys.stderr.write(CLEAR)
if eol :
sys.stderr.write("\n")
sys.stderr.flush()
def debug (message, eol=True) :
log("[debug] ", GRAY, eol=False)
log(message, LIGHTGRAY, eol=eol)
def info (message, eol=True) :
log("[info] ", BLUE, False)
log(message, eol=eol)
def warn (message, eol=True) :
log("[warn] ", YELLOW, False)
log(message, eol=eol)
def err (message, eol=True) :
log("[error] ", RED, False)
log(message, eol=eol)
def die (message, code=1) :
err(message)
sys.exit(code)
##
## extract doc
##
class DocExtract (object) :
def __init__ (self, inpath, outpath, exclude=[]) :
self.path = inpath.rstrip(os.sep)
self.outpath = outpath
self.out = None
self.exclude = exclude
self._last = "\n\n"
def openout (self, path) :
if self.out is not None :
self.out.close()
relpath = path[len(os.path.dirname(self.path)):].strip(os.sep)
parts = relpath.split(os.sep)
relpath = os.path.dirname(relpath.split(os.sep, 1)[-1])
self.package = (parts[-1] == "__init__.py")
if self.package :
self.module = ".".join(parts[:-1])
target = "index.md"
else :
parts[-1] = os.path.splitext(parts[-1])[0]
self.module = ".".join(parts)
target = parts[-1] + ".md"
if any(fnmatch.fnmatch(self.module, glob) for glob in self.exclude) :
warn("skip %s" % self.module)
return False
outdir = os.path.join(self.outpath, relpath)
outpath = os.path.join(outdir, target)
info("%s -> %r" % (self.module, outpath))
if not os.path.exists(outdir) :
os.makedirs(outdir)
self.out = open(outpath, "w")
self.classname = None
return True
def write (self, text) :
if len(text) > 1 :
self._last = text[-2:]
elif len(text) > 0 :
self._last = self._last[-1] + text[-1]
else :
return
self.out.write(text)
def newline (self) :
if self._last != "\n\n" :
self.write("\n")
def writeline (self, text="") :
self.write(text.rstrip() + "\n")
def writetext (self, text, **args) :
br = args.pop("break_on_hyphens", False)
for line in textwrap.wrap(text, break_on_hyphens=br, **args) :
self.writeline(line)
def writelist (self, text, bullet=" * ", **args) :
br = args.pop("break_on_hyphens", False)
for line in textwrap.wrap(text, break_on_hyphens=br,
initial_indent=bullet,
subsequent_indent=" "*len(bullet),
**args) :
self.writeline(line)
def process (self) :
for dirpath, dirnames, filenames in os.walk(self.path) :
for name in sorted(filenames) :
if not name.endswith(".py") :
continue
path = os.path.join(dirpath, name)
if not self.openout(path) :
continue
node = ast.parse(open(path).read())
if ".plugins." in self.module :
self.visit_plugin(node)
else :
self.visit_module(node)
def _pass (self, node) :
pass
def visit (self, node) :
name = getattr(node, "name", "__")
if (name.startswith("_") and not (name.startswith("__")
and name.endswith("__"))) :
return
try :
getattr(self, "visit_" + node.__class__.__name__, self._pass)(node)
except :
src = unparse(node)
if len(src) > 40 :
src = src[:40] + "..."
err("line %s source %r" % (node.lineno, src))
raise
def visit_module (self, node) :
doc = ast.get_docstring(node)
self.write_module()
for child in ast.iter_child_nodes(node) :
self.visit(child)
def visit_plugin (self, node) :
doc = ast.get_docstring(node)
self.write_module()
extend = None
for child in ast.iter_child_nodes(node) :
if (getattr(child, "name", None) == "extend"
and isinstance(child, ast.FunctionDef)) :
extend = child
else :
self.visit(child)
doc = ast.get_docstring(extend)
self.write_plugin()
for child in ast.iter_child_nodes(extend) :
self.visit(child)
def visit_ClassDef (self, node) :
doc = ast.get_docstring(node)
self.write_class(node)
self.classname = node.name
for child in ast.iter_child_nodes(node) :
self.visit(child)
self.classname = None
def visit_FunctionDef (self, node) :
self.write_function(node)
self.args = [n.id for n in node.args.args]
if self.args and self.args[0] == "self" :
del self.args[0]
if node.args.vararg :
self.args.append(node.args.vararg)
if node.args.kwarg :
self.args.append(node.args.kwarg)
self.visit(node.body[0])
self.args = []
def visit_Expr (self, node) :
self.visit(node.value)
def visit_Str (self, node) :
self.write_doc(inspect.cleandoc(node.s))
def write_module (self) :
if self.package :
self.writeline("# Package `%s` #" % self.module)
else :
self.writeline("# Module `%s` #" % self.module)
self.newline()
def write_plugin (self) :
self.writeline("## Extensions ##")
self.newline()
def write_function (self, node) :
self.newline()
if self.classname :
self.writeline("#### Method `%s.%s` ####"
% (self.classname, node.name))
else :
self.writeline("### Function `%s` ###" % node.name)
self.newline()
self.writeline(" :::python")
for line in unparse(node).splitlines() :
if line.startswith("def") :
self.writeline(" %s ..." % line)
break
else :
self.writeline(" " + line)
self.newline()
def write_class (self, node) :
self.newline()
self.writeline("### Class `%s` ###" % node.name)
self.newline()
self.writeline(" :::python")
for line in unparse(node).splitlines() :
if line.startswith("class") :
self.writeline(" %s ..." % line)
break
else :
self.writeline(" " + line)
self.newline()
parse = doctest.DocTestParser().parse
def write_doc (self, doc) :
if doc is None :
return
docs = self.parse(doc)
test, skip = False, False
for doc in docs :
if isinstance(doc, str) :
doc = doc.strip()
if test :
if not doc :
continue
test, skip = False, False
self.newline()
lines = doc.strip().splitlines()
for num, line in enumerate(lines) :
if line.startswith("@") :
self.write_epydoc("\n".join(lines[num:]))
break
self.writeline(line)
elif not skip :
if doc.source.strip() == "pass" :
skip = True
else :
if not test :
test = True
self.newline()
self.writeline(" :::python")
for i, line in enumerate(doc.source.splitlines()) :
if i > 0 :
self.writeline(" ... %s" % line)
else :
self.writeline(" >>> %s" % line)
for line in doc.want.splitlines() :
self.writeline(" %s" % line)
def write_epydoc (self, doc) :
info = {"param" : {},
"type" : {},
"keyword" : {},
"raise" : {},
"todo" : [],
"note" : [],
"attention": [],
"bug" : [],
"warning" : [],
}
for item in doc.lstrip("@").split("\n@") :
left, text = item.split(":", 1)
left = left.split()
assert 1 <= len(left) <= 2, "unsupported item %r" % item
if len(left) == 1 :
left.append(None)
tag, name = [x.strip() if x else x for x in left]
text = " ".join(text.strip().split())
if isinstance(info.get(tag, None), list) :
assert name is None, "unsupported item %r" % item
info[tag].append(text)
elif isinstance(info.get(tag, None), dict) :
assert name is not None, "unsupported item %r" % item
assert name not in info[tag], "duplicated item %r" % item
info[tag][name] = text
else :
assert name is None, "unsupported item %r" % item
assert tag not in info, "duplicated tag %r" % item
info[tag] = text
if any(k in info for k in ("author", "organization", "copyright",
"license", "contact")) :
self.newline()
self.writeline('<div class="api-info">')
for tag in ("author", "organization", "copyright",
"license", "contact") :
if tag in info :
self.writeline('<div class="api-%s">' % tag)
self.writetext('<span class="api-title">%s:</span> %s'
% (tag.capitalize(), info[tag]),
subsequent_indent=" ")
self.writeline('</div>')
self.writeline('</div>')
if any(info[k] for k in
("todo", "note", "attention", "bug", "warning")) :
self.newline()
self.writeline('<div class="api-remarks">')
self.writeline("##### Remarks #####")
self.newline()
for tag in ("note", "todo", "attention", "bug", "warning") :
for text in info[tag] :
self.writeline('<div class="api-%s">' % tag)
self.writetext('<span class="api-title">%s:</span> %s'
% (tag.capitalize(), text),
subsequent_indent=" ")
self.writeline('</div>')
self.writeline('</div>')
if (any(info[k] for k in ("param", "type", "keyword"))
or any(k in info for k in ("return", "rtype"))) :
self.newline()
self.writeline('<div class="api-call">')
self.writeline("##### Call API #####")
self.newline()
for arg in self.args :
if arg in info["param"] :
self.writelist("`%s` (%s): %s"
% (arg,
info["type"].get(arg, "`object`"),
info["param"][arg]))
else :
self.writelist("`%s` (%s)"
% (arg,
info["type"].get(arg, "`object`")))
for kw, text in sorted(info["keyword"].items()) :
self.writelist("`%s`: %s" % (kw, text))
if any(k in info for k in ("return", "rtype")) :
if "return" in info :
self.writelist("return %s: %s"
% (info.get("rtype", "`object`"),
info["return"]))
else :
self.writelist("return %s"
% (info.get("rtype", "`object`")))
self.writeline('</div>')
if info["raise"] :
self.newline()
self.writeline('<div class="api-errors">')
self.writeline("##### Exceptions #####")
self.newline()
for exc, reason in sorted(info["raise"].items()) :
self.writelist("`%s`: %s" % (exc, reason))
self.writeline('</div>')
def main (finder, args) :
try :
source = os.path.dirname(snakes.__file__)
target = args[0]
exclude = args[1:]
if not os.path.isdir(source) :
raise Exception("could not find SNAKES sources")
elif not os.path.isdir(target) :
raise Exception("no directory %r" % target)
except ValueError :
die("""Usage: python %s TARGET [EXCLUDE...]
TARGET target directory to write files
EXCLUDE pattern to exclude modules (not file names)
""" % __file__)
except Exception as error :
die(str(error))
finder(source, target, exclude).process()
if __name__ == "__main__" :
main(DocExtract, sys.argv[1:])
from snakes.lang.ctlstar.parser import parse
from snakes.lang.ctlstar import asdl as ast
from snakes.lang import getvars, bind
import _ast
class SpecError (Exception) :
def __init__ (self, node, reason) :
Exception.__init__(self, "[line %s] %s" % (node.lineno, reason))
def astcopy (node) :
if not isinstance(node, _ast.AST) :
return node
attr = {}
for name in node._fields + node._attributes :
value = getattr(node, name)
if isinstance(value, list) :
attr[name] = [astcopy(child) for child in value]
else :
attr[name] = astcopy(value)
return node.__class__(**attr)
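`astcopy` recursively rebuilds a node from its `_fields` and `_attributes`, copying lists element-wise. The same pattern works on Python's builtin `ast` nodes, which makes it easy to check in isolation (here with `ast.parse`, whereas the original operates on `snakes.lang` nodes):

```python
import ast

# Same recursive-copy pattern as astcopy above, applied to builtin ast nodes:
# non-AST values are returned as-is, lists are copied element-wise, and the
# node is rebuilt from its _fields and _attributes.
def copy_node(node):
    if not isinstance(node, ast.AST):
        return node
    attrs = {}
    for name in node._fields + node._attributes:
        value = getattr(node, name, None)
        if isinstance(value, list):
            attrs[name] = [copy_node(child) for child in value]
        else:
            attrs[name] = copy_node(value)
    return node.__class__(**attrs)

tree = ast.parse("x + 1")
dup = copy_node(tree)
print(dup is tree, ast.dump(dup) == ast.dump(tree))  # False True
```

The copy is structurally identical but shares no node objects with the original, so later mutation (as `Builder._build` does with `node.atomic`) cannot corrupt the source tree.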
class Builder (object) :
def __init__ (self, spec) :
self.spec = spec
self.decl = {}
for node in spec.atoms + spec.properties :
if node.name in self.decl :
raise SpecError(node, "%r already declared at line %s"
% (node.name, self.decl[node.name].lineno))
self.decl[node.name] = node
self.main = spec.main
def build (self, node) :
node = astcopy(node)
return self._build(node, {})
def _build (self, node, ctx) :
if isinstance(node, ast.atom) :
try :
builder = getattr(self, "_build_%s" % node.__class__.__name__)
except AttributeError :
node.atomic = True
else :
node = builder(node, ctx)
node.atomic = True
elif isinstance(node, ast.CtlBinary) :
node.left = self._build(node.left, ctx)
node.right = self._build(node.right, ctx)
node.atomic = (isinstance(node.op, (ast.boolop, ast.Imply,
ast.Iff))
and node.left.atomic
and node.right.atomic)
elif isinstance(node, ast.CtlUnary) :
node.child = self._build(node.child, ctx)
node.atomic = (isinstance(node.op, ast.Not)
and node.child.atomic)
else :
assert False, "how did we get here?"
return node
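`_build` above dispatches on the node's class name: it looks up a handler called `_build_<ClassName>` with `getattr` and falls back to default behaviour when none exists. A minimal standalone sketch of that dispatch pattern (class and method names here are illustrative, not from SNAKES):

```python
# getattr-based dispatch, as in Builder._build: look up a handler named
# after the node's class, fall back to a default when there is none.
class Dispatcher:
    def visit(self, node):
        handler = getattr(self, "_visit_%s" % node.__class__.__name__,
                          self._default)
        return handler(node)
    def _visit_int(self, node):
        return "int:%d" % node
    def _default(self, node):
        return "other:%r" % node

d = Dispatcher()
print(d.visit(42))    # int:42
print(d.visit("hi"))  # other:'hi'
```

Adding support for a new node type then only requires defining one more `_visit_*` method, which is why `Builder` grows one `_build_*` method per atom kind.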
def _build_place (self, param, ctx) :
if isinstance(param, ast.Parameter) :
if param.name not in ctx :
raise SpecError(param, "place %r should be instantiated"
% param.name)
return ctx[param.name]
else :
return param
def _build_InPlace (self, node, ctx) :
node.data = [bind(child, ctx) for child in node.data]
node.place = self._build_place(node.place, ctx)
return node
def _build_NotInPlace (self, node, ctx) :
return self._build_InPlace(node, ctx)
def _build_EmptyPlace (self, node, ctx) :
node.place = self._build_place(node.place, ctx)
return node
def _build_MarkedPlace (self, node, ctx) :
return self._build_EmptyPlace(node, ctx)
# skip Deadlock and Boolean: nothing to do
def _build_Quantifier (self, node, ctx) :
node.place = self._build_place(node.place, ctx)
ctx = ctx.copy()
for name in node.vars :
ctx[name] = ast.Token(name, node.place.place)
node.child = self._build(node.child, ctx)
return node
def _build_Instance (self, node, ctx) :
if node.name not in self.decl :
raise SpecError(node, "undeclared object %r" % node.name)
ctx = ctx.copy()
decl = self.decl[node.name]
for arg in decl.args :
ctx[arg.name] = arg
if isinstance(decl, ast.Property) :
return self._build_Instance_Property(node, decl, ctx)
else :
return self._build_Instance_Atom(node, decl, ctx)
def _build_Instance_Property (self, node, prop, ctx) :
bound = set(a.name for a in prop.args)
args = dict((a.arg, a.annotation) for a in node.args)
for param in prop.params :
if param.name in bound :
raise SpecError(node, "argument %r already bound"
% param.name)
elif param.name in args :
arg = args.pop(param.name)
bound.add(param.name)
else :
raise SpecError(node, "missing argument %r" % param.name)
if param.type == "place" :
if not isinstance(arg, ast.Place) :
raise SpecError(node, "expected place for %r"
% param.name)
arg.name = param.name
ctx[param.name] = arg
if args :
raise SpecError(node, "too many arguments (%s)"
% ", ".join(repr(a) for a in args))
return self._build(astcopy(prop.body), ctx)
def _build_Instance_Atom (self, node, atom, ctx) :
bound = set(a.name for a in atom.args)
args = dict((a.arg, a.annotation) for a in node.args)
new = astcopy(atom)
for param in atom.params :
if param.name in bound :
raise SpecError(node, "argument %r already bound"
% param.name)
elif param.name in args :
arg = args.pop(param.name)
bound.add(param.name)
else :
raise SpecError(node, "missing argument %r" % param.name)
if param.type == "place" :
if not isinstance(arg, ast.Place) :
raise SpecError(node, "expected place for %r"
% param.name)
arg.name = param.name
else :
arg = ast.Argument(name=param.name,
value=arg,
type=param.type)
new.args.append(arg)
if args :
raise SpecError(node, "too many arguments (%s)"
% ", ".join(repr(a) for a in args))
del new.params[:]
return new
def build (spec, main=None) :
b = Builder(spec)
if main is None :
return b.build(spec.main)
else :
return b.build(main)
#!/bin/sh
exec python bin/abcd --dot ,railroad.png \
--pnml ,railroad.pnml \
doc/examples/abcd/railroad.abcd
import doctest, sys, os, glob
retcode = 0
import snakes
version = open("VERSION").read().strip()
if snakes.version != version :
print("Mismatched versions:")
print(" snakes.version = %r" % snakes.version)
print(" VERSION = %r" % version)
sys.exit(1)
def test (module) :
print(" Testing '%s'" % module.__name__)
f, t = doctest.testmod(module, #verbose=True,
optionflags=doctest.NORMALIZE_WHITESPACE
| doctest.REPORT_ONLY_FIRST_FAILURE
| doctest.ELLIPSIS)
return f
modules = ["snakes",
"snakes.hashables",
"snakes.lang",
"snakes.lang.python.parser",
"snakes.lang.abcd.parser",
"snakes.lang.ctlstar.parser",
"snakes.data",
"snakes.typing",
"snakes.nets",
"snakes.pnml",
"snakes.plugins",
"snakes.plugins.pos",
"snakes.plugins.status",
"snakes.plugins.ops",
"snakes.plugins.synchro",
"snakes.plugins.hello",
"snakes.plugins.gv",
"snakes.plugins.clusters",
"snakes.plugins.labels",
"snakes.utils.abcd.build",
]
stop = False
if len(sys.argv) > 1 :
if sys.argv[1] == "--stop" :
stop = True
del sys.argv[1]
doscripts = True
if len(sys.argv) > 1 :
modules = sys.argv[1:]
doscripts = False
for modname in modules :
try :
__import__(modname)
retcode = max(retcode, test(sys.modules[modname]))
if retcode and stop :
break
except :
print(" Could not test %r:" % modname)
c, e, t = sys.exc_info()
print(" %s: %s" % (c.__name__, e))
if doscripts :
for script in (glob.glob("test-scripts/test*.sh")
+ glob.glob("test-scripts/test*.py")) :
print(" Running '%s'" % script)
retcode = max(retcode, os.system(script))
if retcode and stop :
break
sys.exit(retcode)
(provide 'abcd-mode)
(define-derived-mode abcd-mode python-mode "ABCD"
(font-lock-add-keywords
nil
`((,(concat "\\<\\(buffer\\|typedef\\|net\\|enum\\|task\\|const\\|symbol\\)\\>")
1 font-lock-keyword-face t))))
"""Draws a dependency graph of non-terminals in a pgen grammar
(pygraphviz required).
Usage: python pgen2dot.py INFILE OUTFILE [ENGINE [OPTION=VAL]...]
"""
import sys, string, os.path
import pygraphviz as gv
import snakes.lang.pgen as pgen
if len(sys.argv) < 3 :
print("Usage: python pgen2dot.py INFILE OUTFILE [ENGINE [OPTION=VAL]...]")
sys.exit(1)
elif len(sys.argv) >= 4 :
engine = sys.argv[3]
else :
engine = "dot"
nodes = set()
edges = set()
def walk (st, lex, rule=None) :
tok, children = st
if tok == pgen.PgenParser.RULE :
rule = children[0][0]
nodes.add(rule)
for child in children[1:] :
walk(child, lex, rule)
elif isinstance(tok, str) and tok.strip() and tok[0] in string.ascii_lowercase :
nodes.add(tok)
if rule is not None :
edges.add((rule, tok))
else :
for child in children :
walk(child, lex, rule)
st, lex = pgen.PgenParser.parse(sys.argv[1])
walk(st, lex)
g = gv.AGraph(directed=True)
g.add_edges_from(edges)
g.graph_attr["overlap"] = "false"
g.graph_attr["splines"] = "true"
for arg in sys.argv[4:] :
key, val = arg.split("=", 1)
g.graph_attr[key] = val
g.draw(sys.argv[2], prog=engine)
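`walk` above traverses pgen's `(token, children)` tuples: a `RULE` node defines a non-terminal (its first child holds the name), lowercase string tokens are references to non-terminals, and everything else is recursed into. A sketch on a hand-built tree, without the pygraphviz dependency (the `RULE` code and the tree are made up, standing in for `pgen.PgenParser.RULE` and a real parse):

```python
# Standalone sketch of walk() on a hand-built (token, children) tree.
RULE = 257  # hypothetical token code, stands in for pgen.PgenParser.RULE

tree = (RULE, [("expr", []),                          # rule name
               ("term", []),                          # lowercase: non-terminal
               ("PLUS", []),                          # uppercase: terminal
               (RULE, [("term", []), ("factor", [])])])

nodes, edges = set(), set()

def walk(st, rule=None):
    tok, children = st
    if tok == RULE:
        rule = children[0][0]
        nodes.add(rule)
        for child in children[1:]:
            walk(child, rule)
    elif isinstance(tok, str) and tok[:1].islower():
        nodes.add(tok)
        if rule is not None:
            edges.add((rule, tok))
    else:
        for child in children:
            walk(child, rule)

walk(tree)
print(sorted(edges))  # [('expr', 'term'), ('term', 'factor')]
```

The collected `edges` set is exactly what the script feeds to `AGraph.add_edges_from` to draw the dependency graph.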
"""Lists unreachable non-terminals in a pgen grammar
Usage: python pgen2min.py INFILE
"""
import sys, string, os.path
import snakes.lang.pgen as pgen
import collections
if len(sys.argv) < 2 :
print("Usage: python pgen2min.py INFILE")
sys.exit(1)
root = None
nodes = set()
edges = collections.defaultdict(set)
def walk (st, lex, rule=None) :
global root
tok, children = st
if tok == pgen.PgenParser.RULE :
rule = children[0][0]
if root is None :
root = rule
nodes.add(rule)
for child in children[1:] :
walk(child, lex, rule)
elif isinstance(tok, str) and tok.strip() and tok[0] in string.ascii_lowercase :
nodes.add(tok)
if rule is not None :
edges[rule].add(tok)
else :
for child in children :
walk(child, lex, rule)
st, lex = pgen.PgenParser.parse(sys.argv[1])
walk(st, lex)
reached = set()
todo = set([root])
while not todo.issubset(reached) :
reached.update(todo)
prev = todo
todo = set()
for n in prev :
todo.update(edges[n])
for node in sorted(nodes - reached) :
print(node)
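The loop above is a reachability fixpoint: starting from the root rule, it repeatedly follows `edges` until no new non-terminal appears, then reports everything not reached. A self-contained sketch on a tiny hypothetical grammar (rule names invented):

```python
import collections

# Hypothetical mini-grammar: rule -> set of non-terminals it references.
edges = collections.defaultdict(set)
edges["start"] = {"expr"}
edges["expr"] = {"term", "expr"}
edges["term"] = {"factor"}
edges["orphan"] = {"factor"}  # defined but never reached from "start"

nodes = set(edges) | {n for targets in edges.values() for n in targets}

# Fixpoint: follow edges from the root until nothing new appears.
reached = set()
frontier = {"start"}
while not frontier.issubset(reached):
    reached.update(frontier)
    frontier = set().union(*(edges[n] for n in frontier))

print(sorted(nodes - reached))  # ['orphan']
```

This terminates even on cyclic grammars (`expr` references itself above) because the loop stops as soon as the frontier brings nothing new.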