Additional information about the modules that is not on this page can be found in the somewhat outdated PyDSTool design document. Underscores on attributes, methods, or functions indicate that they should be treated as "internal" and not for general use, usually because they provide no useful function to end-users. Those that might be useful to advanced users are also underscored because using those methods, or directly altering those attributes, is difficult to get right as an end-user, and mistakes could lead to irreparable damage to the object.
Throughout this page, names followed by an asterisk are not intended for end-users, and are usually present entirely for internal use by PyDSTool. Since Python does not distinguish private from public interfaces to a class, the asterisk is an additional indication that the name is not intended for typical end-user use.
PyDSTool uses the numpy package to provide double precision floating point arithmetic. All floating point calculations in PyDSTool are to double precision. By default, Python uses double precision for its float type, and we have used the double type in the C code implementations of numerical integration. Complex numbers for data points, numeric intervals, and so forth, are not currently supported in PyDSTool. All numeric types mentioned below will refer to 32-bit integers or double-precision (64-bit) floats.
This module defines interval ranges and operations for floating point values. This module exports two classes, Interval and IntervalMembership.
Interval is a numeric interval (a.k.a. range) class.
Interval objects have attributes:
name: str (a label - does not have to be unique)
typestr: 'int' or 'float' (immutable) - doesn't handle 'complex' yet
_loval (low endpoint val): numeric
_hival (hi endpoint val): numeric
Examples of usage are provided in test code at the end of the class definitions in Interval.py.
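Beyond the tests in Interval.py, the constructor signature Interval(name, typestr, [lo, hi]) is used later on this page. The following is an illustrative stand-in (not the real class) showing the attribute layout and a membership check; the class name MiniInterval and its contains method are inventions for this sketch.

```python
# Illustrative stand-in for PyDSTool's Interval -- NOT the real class.
# The constructor argument order (name, typestr, [lo, hi]) matches the
# usage shown later on this page; everything else is a sketch.
class MiniInterval:
    def __init__(self, name, typestr, endpoints):
        assert typestr in ('int', 'float')    # 'complex' not supported yet
        self.name = name                      # a label, not necessarily unique
        self.typestr = typestr                # immutable in the real class
        self._loval, self._hival = endpoints  # "internal" endpoint attributes

    def contains(self, val):
        """Return True if val lies inside the closed interval."""
        return self._loval <= val <= self._hival

iv = MiniInterval('x', 'float', [-1, 1])
print(iv.contains(0.5))   # True
print(iv.contains(2.0))   # False
```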
Points and Pointsets are described in detail on their own page: Pointsets
A Variable object is the specification of a single variable's value as a function of an independent variable (generally time). It is the basic representational unit for a (multi-dimensional) trajectory.
A properly configured Variable object x is callable when the argument (independent variable) values are within the set specified by the attribute indepdomain, which is an Interval object for continuous domains and an array dictionary for discrete domains. The domain of the variable is given by the depdomain attribute, which is also an Interval or array dictionary. These intervals default to the bi-infinite real line unless otherwise specified.
The Variable class also provides bounds checking that may be required on the independent or dependent variables of a system. If bounds-checking is on, calls to a Variable object first pass through a filter to verify that the independent variable is within its bounds, before the Variable object's output method is called. This method has different forms depending on the nature of the Generator that created it. After this method has produced a value, it will pass through an additional filter that checks its boundedness.
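The two-filter calling pattern described above can be sketched in a few lines. This is not PyDSTool's implementation; checked_call is a hypothetical helper, and the checklevel semantics (0 means no checks, 2 means full checks) are simplified from the real scheme.

```python
import math

# Sketch of the call-filter idea: when bounds checking is on, the argument
# is validated against the independent-variable domain before the output
# method runs, and the result is validated against the dependent-variable
# domain afterwards.  (checked_call is a hypothetical helper.)
def checked_call(f, t, indep_bounds, dep_bounds, checklevel=2):
    lo_t, hi_t = indep_bounds
    lo_x, hi_x = dep_bounds
    if checklevel > 0 and not (lo_t <= t <= hi_t):
        raise ValueError("independent variable %g out of bounds" % t)
    x = f(t)                       # the Variable's output method
    if checklevel > 0 and not (lo_x <= x <= hi_x):
        raise ValueError("output value %g out of bounds" % x)
    return x

print(checked_call(math.sin, 0.5, (0, 10), (-1, 1)))   # ~0.4794
```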
The Variable objects that are based on an underlying mesh have a simple form for the output method that returns a value. The method is typically an interpolation function that also contains the mesh. The output method is more complex for the "on demand" types of trajectory calculator.
Wrapping mathematical functions inside the methods of generic objects is a sure way to lose call efficiency. Unfortunately, this is a necessary expense so that a consistent API is presented to a user or to other objects. In particular, such wrapping is necessary in order to provide event detection and bounds checking facilities in a unified way. For instance, consider a 1D curve that we wish to specify using an explicit function, let's say the sine function. In order for the curve to be recognizable as such by PyDSTool functions and utilities, the explicit function sin has to be wrapped into a Variable object, which itself is callable.
To do this, a user would build an ExplicitFnGen generator. For simplicity of exposition, and as developers, we can hack a little to make the wrapping more transparent. The OutputFn class is the generic function wrapper for functions that need domain checks enforced on the independent variable values passed as arguments, and on the output value resulting from the function call.
sin_opfunc = OutputFn(math.sin)
The OutputFn object contains extra information about the types of the input and output of the function, and the domains in which those values lie. Without additional parameters at initialization, these default to (Float, Float) and ([-Inf, Inf], [-Inf, Inf]), respectively. We can create a Variable object directly, using this OutputFn.
sin_var = Variable(sin_opfunc, Interval('t', 'float', [-Inf,Inf]), Interval('x', 'float', [-1,1]))
In this example, we haven't specified any restriction on the domains, but we can investigate the efficiency of this wrapping under different calling conditions. Here is the output of a script that measures the time taken for 10000 computations of sine.
Time to compute 10000 calls of sin(0.5) to ...

  math.sin                                                        0.0209426058339
  ufunc sin                                                       0.149249695143
  sin_var variable (with default checklevel (0))                  0.135739598189
  sin_var variable (with explicit checklevel = 0)                 0.180701737232
  sin_var variable (with checklevel = 2)                          1.34982770157
  sin_var variable, vectorized call (default checklevel (0))      0.0412761957175
  sin_var variable, vectorized call (explicit checklevel = 0)     0.0448467612505
  sin_var variable, vectorized call (checklevel = 2)              0.666583881471
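The figures above come from a timing script in the PyDSTool sources; a minimal sketch of how such a benchmark can be reproduced with the standard library's timeit module is shown below (absolute timings are machine-dependent, so only the relative comparison is meaningful).

```python
import timeit

# Minimal benchmark sketch: time 10000 calls of math.sin(0.5) versus the
# numpy ufunc applied elementwise to a scalar.  This mirrors the first two
# rows of the table above; the Variable-wrapper rows require PyDSTool.
n = 10000
t_math = timeit.timeit('sin(0.5)', setup='from math import sin', number=n)
print('%d calls of math.sin(0.5): %.6f s' % (n, t_math))
```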
Apart from noticing that math.sin should always be preferred over the universal function (ufunc) sin, there are two lessons for PyDSTool developers here.
The first is to call a Variable object with a vector of argument values whenever possible. The speed-up of vectorization is approximately a factor of 3.
The additional slow-down factor associated with switching on bounds checking (checklevel = 2) is about the same for vectorized and individual calls. The second lesson is that bounds checking should be used during exploratory work where possible, but once confidence in the input and output bounds has been gained, bounds checking can be turned off for maximum efficiency.
The Trajectory class implements parameterized and non-parameterized trajectories and curves. It is essentially a collection of 1D Variable objects brought together (usually under a common independent variable in parameterized curves) to create an N-dimensional curve over some domain. The main feature in this abstraction is the sample method, which enables users to sample values from a continuously or discretely defined trajectory in different ways.
Calling the sample method with no arguments on a Trajectory defined using an underlying mesh will efficiently return the values at the mesh points. Trajectories are constructed by Generators (a core PyDSTool class) using an explicit or implicit function, or using a mesh of points. When called, the objects return a state value, using interpolation between mesh points if necessary. Linear and quadratic interpolation are supported.
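The "interpolation between mesh points" step can be sketched in plain Python. This is not PyDSTool's implementation (sample_linear is a hypothetical helper); it only illustrates the linear case of the linear/quadratic interpolation mentioned above.

```python
from bisect import bisect_right

# Sketch of a mesh-based trajectory call: linear interpolation between
# stored (t, x) mesh points.  PyDSTool also supports quadratic
# interpolation; sample_linear is a hypothetical helper for illustration.
def sample_linear(tmesh, xmesh, t):
    if not tmesh[0] <= t <= tmesh[-1]:
        raise ValueError("t outside trajectory domain")
    i = bisect_right(tmesh, t) - 1       # bracket t between mesh points
    if i == len(tmesh) - 1:
        return xmesh[-1]                 # t is exactly the last mesh point
    frac = (t - tmesh[i]) / (tmesh[i+1] - tmesh[i])
    return xmesh[i] + frac * (xmesh[i+1] - xmesh[i])

tmesh = [0.0, 1.0, 2.0]
xmesh = [0.0, 2.0, 0.0]
print(sample_linear(tmesh, xmesh, 0.5))   # 1.0
```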
As of version 0.87:
Hybrid models are now a sub-class of Model so that they can be organized hierarchically, with Generator-only based models as the leaves.

Auxiliary variables of a model are those that are exported by all of its sub-models. All sub-models of a hybrid system must "export" the same variables (e.g., see the differences in fingermodel_vode.py).

A Generator's auxiliary variables can be promoted to be treated as observable or internal variables at the Model level, provided all sub-models export them. The forceObsVars and forceIntVars options are used to promote auxiliary variables, or to change variables between internal and observable status (they have no effect if the variables are already of the desired type).
All user interface routes to defining a dynamical system end up with the creation of a FuncSpec object, which contains the full abstract definition of a system before it is instantiated to a target language definition (as a Generator object). The FuncSpec.py module is detailed on the page FunctionalSpec.
The dual goals of efficiency and convenience in the user specification of functions (be they right-hand sides of ODEs, auxiliary functions, and so on) are somewhat at odds with each other. Users are not expected to write their own C functions in order to achieve greatest efficiency, mainly because of the delicacy of interfacing a large core of code with third-party C code correctly.
User-provided functions in native Python are not a great alternative, because they are exceptionally slow to execute when embedded in larger calculations. This is particularly true if "index-free" notation is to be provided to the user, as this creates additional overhead when PyDSTool has to map names to array indices and vice-versa on entry and exit from every call to the user's functions.
In response to these two extremes, a key feature of the current implementation is to let users specify partial function signature information and function bodies to PyDSTool, using an elementary "syntax". PyDSTool then creates its own internal versions of the functions as dynamically executed Python code strings. These internal functions are free to use array indices, since the indices remain entirely hidden from the user, while the user can refer to all variables, parameters, etc. by name, without being concerned with the full details of function signatures or the other interfacing details that PyDSTool needs in order to use the functions correctly.
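The name-to-index translation step can be illustrated with a toy "compiler". This sketch (make_rhs is a hypothetical helper, not FuncSpec's API) substitutes variable and parameter names with array references and exec's the resulting code string, which is the essence of the mechanism described above; the real translation in FuncSpec.py is far more thorough.

```python
import re

# Toy name -> index translation: a user writes RHS expressions using names,
# and an index-based Python function is generated as a code string and
# exec'd.  make_rhs is a hypothetical helper for illustration only.
def make_rhs(varnames, parnames, specs):
    ix = {name: 'x[%d]' % i for i, name in enumerate(varnames)}
    ix.update({name: 'p[%d]' % i for i, name in enumerate(parnames)})
    def subst(m):
        return ix.get(m.group(0), m.group(0))
    body = [re.sub(r'[A-Za-z_]\w*', subst, specs[v]) for v in varnames]
    src = 'def rhs(t, x, p):\n    return [%s]\n' % ', '.join(body)
    ns = {}
    exec(src, ns)          # dynamically executed Python code string
    return ns['rhs']

# dx/dt = y, dy/dt = -k*x  (a harmonic oscillator, as an example)
rhs = make_rhs(['x', 'y'], ['k'], {'x': 'y', 'y': '-k*x'})
print(rhs(0.0, [1.0, 0.0], [4.0]))   # [0.0, -4.0]
```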
Additionally, the same user code specifications can be turned into C functions for use with external, compiled, modules. This unified feature of the user interface is mostly handled in the FuncSpec.py module (see FunctionalSpec page).
Returning to the native Python form for user-specified functions, it is most convenient to make these functions accessible to objects as if they were regular class methods. Therefore, these functions are dynamically added to classes by a command such as
setattr(<class_name>, <func_name>, <ptr_to_func>).
In Python, <ptr_to_func> is implemented as eval(func_name). Furthermore, in order to avoid name clashes in a class's namespace when a user creates multiple functions, these functions generally need unique internal names. However, when bound as methods that perform a standard role in an object, such as functions that specify the RHS of a differential equation, the methods must have fixed names that are known in advance. Thus, the actual methods created are generally fixed-name wrappers around the uniquely-named functions, which remain locally defined within the object.
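The wrapper-around-a-unique-name pattern looks roughly like this (the class Gen, the suffix _12345, and the method names here are inventions for the sketch; only the setattr mechanism itself is from the text above):

```python
# Sketch of the binding pattern: each user function gets a unique internal
# name, and a fixed-name method wraps it so callers can rely on a known
# method name.  Gen, _user_rhs_12345, and Rhs are illustrative names.
class Gen:
    pass

def _user_rhs_12345(self, t, x):   # uniquely-named, dynamically created
    return -2.0 * x

def Rhs(self, t, x):               # fixed public name, known in advance
    return self._rhs_impl(t, x)

setattr(Gen, '_rhs_impl', _user_rhs_12345)
setattr(Gen, 'Rhs', Rhs)

g = Gen()
print(g.Rhs(0.0, 3.0))   # -6.0
```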
At present, the major drawback of this approach is the way that Python handles the namespaces of objects with respect to dynamically created attributes and methods, and the locally-defined functions that they typically refer to. Many important core Python utilities, in particular those that "copy" objects and "load" or "save" objects to permanent storage, rely on the contents of an object's associated namespace to know what constitutes the object. This information indicates how those operations will divide-and-conquer their task. Unfortunately, locally created functions do not appear in the object's namespace (i.e. its __dict__) in a picklable form, even when they are referred to by attributes that do appear there. Therefore, without additional attention, those core operations would fail on such objects.
The core classes to which these issues apply are Generator and Variable. In these classes, two special methods __getstate__ and __setstate__ are defined. These provide overrides for the default behaviour when "copy" and "save" operations are invoked. They provide the opportunity to do special housekeeping immediately before and after pickling. The _funcreg dictionary provides information about the dynamically-created methods that refer to locally-defined functions, and permits __getstate__ to delete references to those methods prior to pickling. After unpickling, __setstate__ may use the same dictionary to reconstruct those methods from the original sources that it holds. Alternatively, the object may call a method such as addMethods that rebuilds those methods anew.
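The __getstate__/__setstate__ housekeeping can be sketched as follows. This is not the Generator or Variable implementation: the class V and its _build method are inventions, and the single _funcsrc attribute stands in for the bookkeeping that the real classes keep in _funcreg.

```python
import math
import pickle

# Sketch of the pickling housekeeping described above: the dynamically
# built callable is unpicklable, so __getstate__ drops it from the pickled
# state and __setstate__ rebuilds it from its stored source afterwards.
# V, _build, and _funcsrc are illustrative names.
class V:
    def __init__(self, funcsrc):
        self._funcsrc = funcsrc     # source kept for reconstruction
        self._build()

    def _build(self):
        ns = {'math': math}
        exec(self._funcsrc, ns)
        self.output = ns['f']       # locally-defined function

    def __getstate__(self):
        d = self.__dict__.copy()
        del d['output']             # remove the unpicklable reference
        return d

    def __setstate__(self, d):
        self.__dict__.update(d)
        self._build()               # reconstruct after unpickling

v = V('def f(t): return math.sin(t)')
v2 = pickle.loads(pickle.dumps(v))  # round-trip succeeds
print(v2.output(0.0))               # 0.0
```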
Read more about object persistence in Python in this article.
As a postscript to the above discussion of object persistence, external Python modules that are largely implemented in C may define Python types that are essentially direct "lifts" of C objects. Prime examples of these are so-called Python Cfuncs, and numpy uses them for the fundamental numeric types float, int, and complex, which are thin wrappers around C implementations. The core Python utilities for copying, and so forth, do not know how to deal with such "third-party" types and throw exceptions. Therefore, an addition to the __getstate__ and __setstate__ methods discussed above is the temporary re-mapping of such types when they appear in object attributes. As a result, most of the core PyDSTool classes implement these special methods.
An introduction to the Generator classes is given on the page Generators.
In terms of class inheritance, there are two types of Generator: those for which time varies continuously, and those for which it varies discontinuously. In consequence, all end-user ("concrete") classes inherit from either the ctsGen or discGen abstract classes.
In terms of functionality, there is another distinction here. Some generators create trajectory objects that are accurately defined only at their underlying mesh-points, others create trajectory objects (curves) that compute trajectory points "on demand" to full accuracy (so-called dense output). Examples of the first type are the lookup table and the standard Runge-Kutta style integrators for ODEs. Examples of the second type are the explicit and implicit function generators, for which an underlying mesh is not required. Taylor series integration bridges the gap between these two extremes, and we hope to implement such integrators by Summer 2006.
PyDSTool supports three ODE integrators.
Hairer and Wanner's implementation of a Dormand-Prince method (related to a Runge-Kutta method) for IVPs. This method has order 8(5, 3) and dense output of order 7.
Hairer and Wanner's Radau method, which is efficient for stiff systems and supports mass-matrix and DAE formalisms.
The VODE integrator. Although the VODE method internally uses an adaptive time-step, the SciPy interface to the VODE code returns values only at specified time steps. The PyDSTool interface therefore makes VODE resemble a fixed-step integrator, and the API offers no control over the actual time-steps used other than specifying error tolerances.
See here for implementation details.
For general information about using these integrators (including some implementation information) see the UserDocumentation page.
See the page on the sub-package PyCont.
This is done primarily through the MINPACK Fortran libraries, available via SciPy. PyDSTool's ParamEst class is a thin wrapper around these, and provides examples of residual cost functions for use with Generator and Model objects. Examples are provided in the download, such as the files pest_test1.py, etc.
Important exported functions from this module include:
who is useful in providing information about all PyDSTool-related objects currently named in memory. The function behaves much like its MATLAB namesake, and takes an optional dictionary argument containing PyDSTool objects (such as that returned by a call to locals() or globals()).
exportPointset and importPointset write and read text files containing arrays of data, in a flexible variety of formats. Note that importPointset, strictly speaking, only returns a dictionary of arrays, ready for passing to a Pointset initialization (so that the user has the opportunity to add other initialization information). This utility has the option to load a separate data file containing corresponding time (or other independent variable) data, if the array happens to belong to a trajectory. The independent variable can also be selected as a column from the same data file as the dependent variable data, or provided explicitly as an array in the call to importPointset.
makeDataDict(fieldnames, fieldvalues) creates a dictionary using the field names list for keys, and the corresponding field values for values. This helps to simplify notation when building data tables for use with the DataSystem classes of Generator.
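The behavior described is essentially a zip of the two lists into a dictionary; a sketch (make_data_dict is an illustrative stand-in, not the actual makeDataDict implementation):

```python
# Illustrative stand-in for makeDataDict: pair each field name with its
# corresponding values array.
def make_data_dict(fieldnames, fieldvalues):
    return dict(zip(fieldnames, fieldvalues))

d = make_data_dict(['t', 'x'], [[0, 1, 2], [4.5, 6.1, 7.3]])
print(d)   # {'t': [0, 1, 2], 'x': [4.5, 6.1, 7.3]}
```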
orderEventData transforms the output of the Model class's getTrajGenEventTimes method into a list of time-ordered (eventname, time) tuples. If the optional 'nonames' Boolean argument is set to True, the function returns only an ordered list of event times, with no associated event names.
makeImplicitFunc builds a simple implicit function representation of an N-dimensional curve, usually specified by N-1 equations. Thus the first argument f is generally a function of one variable. In the case of the 'fsolve' method, however, f may have dimension up to N-1, thereby specifying the curve with correspondingly fewer equations. Available solution methods are: newton, bisect, steffensen, and fsolve. These use SciPy's root-finding routines (fsolve wraps MINPACK's Fortran code), and none of them performs turning-point detection (for which see PyCont).
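To make the idea concrete, here is a self-contained sketch of the 'bisect' solution method for the simplest case: given f(x, y) = 0, solve for y at a fixed x. The helper implicit_y is an invention for this sketch (the real makeImplicitFunc delegates to SciPy's solvers rather than implementing bisection itself).

```python
# Sketch of the 'bisect' method for an implicit curve f(x, y) = 0:
# solve for y at a fixed x by bisection on [lo, hi].  implicit_y is a
# hypothetical helper, not PyDSTool's makeImplicitFunc.
def implicit_y(f, x, lo, hi, tol=1e-12):
    flo = f(x, lo)
    if flo * f(x, hi) > 0:
        raise ValueError("no sign change on [lo, hi]")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(x, mid) <= 0:
            hi = mid                   # root lies in [lo, mid]
        else:
            lo, flo = mid, f(x, mid)   # root lies in [mid, hi]
    return 0.5 * (lo + hi)

# Unit circle x**2 + y**2 - 1 = 0: at x = 0.6 the upper branch has y = 0.8
y = implicit_y(lambda x, y: x**2 + y**2 - 1.0, 0.6, 0.0, 1.0)
print(round(y, 6))   # 0.8
```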
This module defines common utility functions for internal use within PyDSTool, or for advanced users.
This module contains custom Python exception definitions. Exception raising using these classes is not yet consistently applied throughout PyDSTool.
last edited 2011-10-06 19:00:54 by RobClewley