Wednesday 18 October 2023

Typed Graph Implementation in PyrrhoDB

During 2023 I have been implementing Fritz Laux's concept of the Typed Graph Model (TGM) in PyrrhoDB. This is topical because ISO 9075-16 Property Graph Queries (SQL/PGQ) was published this year, and the new draft international standard ISO 39075 Graph Query Language (GQL) is in development and likely to be published in early 2024. In all three cases, we have the prospect of a full integration between the hitherto different topics of graph databases and relational databases.

PyrrhoV7alpha 7.06 is available on GitHub, and contains considerable progress in this direction. This blog post is based on sections of the Pyrrho Manual and the "Introduction to the Source Code of the PyrrhoDBMS" document in that repository. The important point in the implementation of typed graphs in Pyrrho is that graph data can be entered and modified using either relational SQL or the CREATE and MATCH statements, and Pyrrho maintains all of the base tables and indexes to make this work. The graph matching algorithms support repeating patterns and are reminiscent of Prolog backtracking in their use of continuations. The implementation takes care to respect transactions, roles and all the other RDBMS aspects, while making the creation and modification of graph data as smooth as possible.

A NodeType (or EdgeType) corresponds to a single database object that defines both a base Table in the database and a user-defined type for its rows. This UDT is managed by the database engine by default, but the usual ALTER operations are available for both Table and UDT. Its first column is a primary key ID of type INT, usually provided in the CREATE statement; if it is not supplied, a new integer value is generated. Other columns are provided in the node type for any properties that are defined for a node of this type.

An EdgeType additionally specifies NodeTypes for Leaving and Arriving foreign key columns (an edge is said to leave a node and arrive at another). This means that all leaving nodes for an edge type have the same node type (and similarly all arriving nodes for an edge type have the same node type). As usual with foreign keys, the engine maintains multisets for the reverse relationships (edges leaving from or arriving at a node).

TNode and TEdge are TypedValues whose dataType is a NodeType (resp. EdgeType). A TGraph is a collection of node and edge uids.

Nodes and edges are rows in the tables thus defined, and these can be updated and deleted using SQL in the usual ways, while ALTER TYPE, ALTER DOMAIN and ALTER TABLE statements can be applied to node and edge types.

In CREATE TYPE statements, metadata is available to declare a new type as a node type or an edge type, automatically specifying the ID (resp. ID, LEAVING and ARRIVING) columns and constraints as column 0 (resp. 0, 1, 2). A more convenient mechanism for defining or adding to typed graphs is provided by the CREATE syntax.

Creating graph data in the RDBMS

A Neo4j-like syntax can be used to add one or more nodes and zero or more edges using the CREATE statement defined below:

CreateStatement = CREATE Graph {THEN Statement END} .

Graph = Node Path {',' Node Path} .

Path = {Edge Node} .

Node = '(' GraphItem ')' .

Edge = '-[' GraphItem ']->' | '<-[' GraphItem ']-' .

GraphItem = [id | Node_Value] [GraphLabel] [Document] .

GraphLabel = ':' (id | Label_Value) [GraphLabel] .

In this syntax we see new digraph and trigraph tokens for indicating the start and end of directed edges. In this syntax id is an SQL identifier for later reference in the statement, not a node ID: node and edge identities are specified in the JSON Document part. Pyrrho will supply a default value for ID if it is not specified.

The Label identifies a node or edge type (with an optional subtype), which may be new. As suggested above, the columns of new node and edge types are inferred from supplied property values and automatically modified as needed. All nodes and edges by default have the special property ID of type INT. The syntax connects up the edges: it is not permitted to specify leaving and arriving nodes explicitly.

As indicated, the syntax can contain a comma-separated list of graph fragments. The engine endeavours to combine these, verifying or modifying the available node and edge types, and defining new nodes and edges.
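For example, the following single statement (an illustrative sketch: the labels and properties are invented) creates two Person nodes and two Knows edges, defining the Person node type and Knows edge type if they do not already exist:

CREATE (a:Person {name:'Alice'})-[:Knows {since:2020}]->(b:Person {name:'Bob'}),
(b)-[:Knows]->(a)

The second fragment reuses the identifiers a and b bound in the first, so the engine combines the fragments into one graph.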

Retrieving graph data from the RDBMS

The given graph fragments are evaluated in a recursive process that finds sets of values for unbound identifiers for which the graph fragments are all found in the database. The result is thus a set of successful assignments of unbound identifiers to TypedValues. The Statement, if supplied, is executed for each row of this set. To be unbound, an identifier must not match any top-level database object (table, view, domain, type, procedure) or any identifier defined earlier in the current SQL statement. If there are no unbound identifiers in the MatchStatement, its value is just a Boolean indicating whether all of the fragments were found.

MatchStatement = MATCH Match {',' Match} [WhereClause] [Statement] [THEN Statements END] .

The Match statement computes a rowset of bindings for unbound identifiers (or a boolean if there are none) for which the Match is found in all or a selected set of graphs in the database (see MatchMode below). The Statement if present is executed for each row of bindings and may replace it with the result of a RETURN statement. If the Statement is present but has no RETURN, or if it changes the database, there is no rowset result for the MatchStatement. Match removes duplicate rows from its bindings and from the result if present. The THEN clause if present has access to the bindings but does not alter the result rowset.
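As an illustration (hypothetical labels and data), the following Match yields one row of bindings for C and N per matching child:

MATCH (:Person {name:'Peter Smith'})-[:Child]->(C:Person {name:N})

Here C and N are unbound identifiers; each successful assignment contributes one row to the result rowset.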

Match = (MatchMode [id '='] MatchNode) {'|' Match}.

MatchNode = '(' MatchItem ')' {(MatchEdge|MatchPath) MatchNode}.

MatchEdge = '-[' MatchItem ']->' | '<-[' MatchItem ']-' .

MatchItem =  [id | Node_Value] [GraphLabel] [ Document | Where ] .

MatchPath = '[' Match ']' MatchQuantifier .

MatchQuantifier = '?' | '*' | '+' | '{' int ',' [int] '}' .


The MatchMode controls how repetitions of path patterns are managed in the graph matching mechanism. A MatchPath creates lists of the values bound for identifiers in its Match. By default, binding rows that have already occurred in the match are ignored, and paths that have already been listed in a quantified graph are not followed. The MatchMode modifies this default behaviour: TRAIL omits paths where an edge occurs more than once, ACYCLIC omits paths where a node occurs more than once, and SIMPLE looks for a simple cycle. The remaining mode options apply to MatchStatements that do not use the comma operator, and select the shortest match, all matches or an arbitrary match.
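For example (a sketch with invented labels, assuming the mode keyword directly follows MATCH as in the Match syntax above):

MATCH TRAIL (a)-[:Child]->()-[:Child]->(b)

omits any binding in which the same Child edge is traversed twice.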

The Graph view of graph data

The database is considered to contain a (possibly empty) set of disjoint TGraphs. Every Node in the database belongs to exactly one graph in this set.

The nodes of a graph are totally ordered by the order of insertion in the database, but this is not the traversal ordering: the first node in a graph is the first in both orderings. The traversal ordering starts with this first node but preferentially follows edges: the leaving edges ordered by their edge types and edge uids followed by arriving edges ordered similarly, while not visiting any node or edge more than once.

The set of graphs is (internally) totally ordered by the defining position of their first node.

In the data management language, an SqlNode is an SqlRow whose domain is a Node type. Evaluation of the SqlNode gives an explicit rowset of TGraph values. A TGraph specified in the above ways may match a subgraph of one of the graphs in this set, in which case we say the TGraph is found in the database.

The relational view of graph data

Graph models are typed collections of nodes and edges. This means that node and edge types are defined with particular typed properties including an integer identity, and for edge types, a leaving and an arriving integer property. Each node or edge type has a collection of nodes or edges, and these can be identified with a relational table whose columns are the properties of the node type/edge type, and whose rows are the values of the properties of particular nodes.

The leaving and arriving properties of edges can be thought of as connecting the nodes into directed graphs. The leaving and arriving properties behave like foreign keys to a particular node type. Types can have subtypes (using the standard UNDER keyword).
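For instance, a Person node type with a name property and a Child edge type between persons might correspond to base tables of roughly the following relational form (an illustrative sketch with invented names, not output from the engine):

CREATE TABLE PERSON (ID INT PRIMARY KEY, NAME CHAR)
CREATE TABLE CHILD (ID INT PRIMARY KEY, LEAVING INT REFERENCES PERSON, ARRIVING INT REFERENCES PERSON)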

The above description highlights the similarities with the relational model, so it becomes natural to add the node/edge type behaviour to a relational type by simple metadata added to a standard type declaration, with the following syntax respectively:

NODETYPE ['(' Identity_id [CHAR] ')']

where Identity_id is an optional name for the identity column (if not specified, a column of type INT called ID is used, or added if not already present). The column is automatically the primary key of the node type, but also accesses the database-wide collection of nodes and edges. By default it is an integer, but may be declared as a string type.

EDGETYPE [Identity_id ] '('[Leaving_id '='] NodeType_id ',' [Arriving_id '='] NodeType_id ')'

If not specified, columns called LEAVING and ARRIVING (matching the identity types of the referenced node types) are used, or added if not already specified or inherited in the type declaration.

The identifiers ID, LEAVING and ARRIVING are not reserved words, and these columns can be renamed on a per-role basis subject to the usual rules and permissions. The identities of these structural columns are however inherited by subtypes. Columns added to a type in this way are appended to the row type.

The simplest node type (for a new node type called MYTYPE), containing only an identity column, is defined by the SQL statement

CREATE TYPE MYTYPE NODETYPE
Additional columns can be specified in the usual ways, by declaring the new type to be UNDER an existing type and/or adding a clause AS '(' column_list ')' before the metadata part. A subtype of a node or edge type automatically inherits all its properties, so the metadata keywords should not occur in the declaration of a subtype of a node type or edge type. Edge types can be similarly defined in SQL.
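For example (a sketch using the metadata syntax above, with invented names), an edge type between MYTYPE nodes could be declared as:

CREATE TYPE MYEDGE EDGETYPE (MYTYPE, MYTYPE)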

The Graph CREATE statement has been added to facilitate rapid creation of graph types, nodes and edges. It uses extra token types for indicating directed arrows for edge connections and a JSON-style notation for providing property lists, so that a single statement can create many node types, edge types, and nodes and edges whose associated tables and columns are set up behind the scenes (in one transaction). Identifiers can be defined in the CREATE statement following the usual left-to-right conventions, and Pyrrho will supply an integer id value using its autokey feature if necessary. All such database items can be subsequently retrieved using MATCH and modified using SQL statements such as UPDATE, ALTER and now CREATE. An extra feature allows CREATE to be followed by a THEN clause which allows DDL and DML statements to use the identifiers accumulated during the CREATE.

Columns for node and edge types can thus be declared in three ways: (a) explicitly in the type clause of CREATE TYPE following the AS keyword, (b) as the automatic ID, LEAVING and ARRIVING columns specified by metadata in CREATE TYPE, or (c) in the graph CREATE statement for a previously unknown node or edge label. In case (c) the values of these properties are also provided for the first node or edge of the new node or edge type.

In all cases, the NodeType.Build method does the actual work of creating the node or edge type and ensuring that the special properties ID, LEAVING and ARRIVING have appropriate columns and indexes in the new type (or its supertypes). Even for a simple type declaration, the transaction will require several stages of database object construction.

The parsing of these statements results in a set of physical records for the transaction log: (1) PNodeType or PEdgeType giving merely the name of the new type and its supertype (if there is a supertype all its supertypes and subtypes, and their supertypes and subtypes, will be modified to record this); (2) The new columns of the new type, that is, columns not inherited from supertypes (installing these objects modifies the parent type); (3) new indexes, for the primary key of the new type, and the two special foreign keys of a new edge type (installing these objects will modify the node/edge type to note their uids); and (4) for the graph create statement, Records containing new nodes and edges.

The label part for a node or edge can be a chain of identifiers in supertype to subtype order, but subtypes are first class types in the role and can be referenced on their own. Subtypes inherit the identity column (and leaving, arriving columns for edge types) so that the primary key of the subtype is also the primary key of the supertype. If a chain of labels is used for a new node or edge, any new columns are added to the first type in the chain. A new edge type in a CREATE statement will use the specified types for its leaving and arriving node type constraints.

Thus supertypes are created before subtypes, node and edge types before edges, and columns before their indexes. The syntax ensures some regularity and, for the most part, the class structure of the implementation is helpful. But it is useful at this point in the documentation to distinguish the various tasks and how they are supported in the parser. In this version, the base table of a user-defined type has a rowType consisting (only) of the columns declared in the most specific type, but all records may contain values for columns in its pathDomain, which also contains columns inherited from supertypes if any. EdgeType is a subclass of NodeType, so in the discussion of the implementation below we can write node type even if we mean node type or edge type.

Additional data types in the implementation

Every node (TNode) has a unique integer identifier we refer to in these notes as id (whatever its column name is in the current role) and this can be modified using an SQL update statement: its value is independent of the role. The TNode also has a uid property that identifies the database TableRow discussed above. TNode is a subclass of TRow and its fields are the fields of this TableRow.

Every edge (TEdge) has a leaving property and an arriving property whose values reference actual nodes: the database maintains indexes to the actual TNodes. These properties are initially called LEAVING and ARRIVING but can be modified for the model. TEdge is a subclass of TNode, so edges have the identifiers and properties discussed above.

A graph (TGraph) is a collection of TNodes. A database TNode, in principle, identifies a graph consisting of all nodes that can be reached by following edges. The database maintains a set of such connected TGraphs that cover the database. Any such connected TGraph can be represented by its base node (the oldest, which has the lowest uid). It follows that TGraphs in the database are not modified directly.

TGParam is a class of TypedValues used only in MatchStatements, which has a lexical uid from its first occurrence in the statement, a type indicating its use in the MatchStatement (flags for node, edge, path, group, nullable) and its name (a string). It is initially unbound, and once bound its value is found in the Context's list of bindings. TGParams can occur as the id of a node or edge, as the label part, or in SqlLiterals.

A MatchStatement is a collection of TGParams, a list of SqlNodes, and possibly a where-clause and a procedural body. An SqlNode keeps track of the TGParams it introduces, and SqlValues for id, type, and properties (a list of SqlValue key/value pairs: each key is a list of SqlValues that evaluate to strings). SqlNode has a subclass SqlEdge, which also has SqlValues for its leaving and arriving SqlNodes; SqlEdge has a subclass SqlPath, which has a pattern (a list of SqlNodes) and a pair of integers called the MatchQuantifier in the syntax for MatchStatement. All these evaluate to the binding of their id (a TNode or TEdge).

The Match Algorithm

When a MatchStatement is executed, the Context may already contain bindings from an enclosing Match in progress, but its own bindings will be unbound. The MatchStatement examines the first node in its list of SqlNodes and creates a continuation consisting of the rest of its list. On examination of this first SqlNode, a set of database nodes of its required type and properties is constructed; on examination of an SqlPath, the continuation is the pattern. For each element of this set, the SqlNode's TGParams are bound, and traversal continues to the next node in the continuation. If there is no next node in the continuation, all SqlNodes in the MatchStatement have been bound to database rows (TNodes): subject to the where clause if any, the body of the match statement is obeyed, or if there is no body, a row of bindings is added to the result of the MatchStatement. When all elements at the current stage have been examined, the TGParams in the SqlNode's list are removed from the list of bindings. When the recursion is complete, the MatchStatement's side-effects have occurred and/or the rowset result has been built.

In some ways, unbound identifiers are like the aliases used in ordinary SQL, since they are given a meaning inside the current statement and then forgotten after it is executed. But there is an important difference: by its nature the MatchStatement looks to match unbound identifiers with database objects, so that if you want a new unbound identifier you need to avoid the names of existing database objects (tables, types, procedures, or columns, fields or methods of objects referenced in the current statement), or currently bound identifiers or aliases. These observations also apply to the use of unbound identifiers in the CreateStatement, which also can have a dependent executable statement. In this section we examine more sophisticated opportunities provided by MATCH.

Like the CreateStatement, the Match statement consists of a set of graph fragments, but like a SELECT statement it builds a rowset whose columns are the unbound identifiers in the graph syntax. Such identifiers can occur anywhere in the graph syntax, and as its name implies, the MATCH statement finds all possible values of these such that the graph fragments are found in the database.


[CREATE (:Product:WoodScrew {spec:'16/8x4'}), (:Product:WallPlug {spec:'18cm'}),

(Joe:Customer {Name:'Joe Edwards', Address:'10 Station Rd.'}),

(Joe)-[:Ordered {"Date":date'2002-11-22'} ]->(:"Order"{id:201})]

[MATCH (O:"Order"{id:201})

begin MATCH(P:Product{spec:'16/8x4'}) CREATE (O)-[:Item {Qty: 5}]->(P);

     MATCH(P:Product{spec:'18cm'}) CREATE (O)-[:Item {Qty: 3}]->(P) end]

MATCH ()-[:Item {Qty:A}]->(:T{spec:X}) where A>4

Taking the last example above and modifying it a little to reduce the number of steps below, let us trace through the execution of

match (:"Order")-[:Item where Qty>4]->(:T{spec:X})

The MatchStatement (referred to as %5 below) defines TGParams T and X and has a single graph whose nodes are the SqlNode #8, the SqlEdge #19, and the SqlNode #40. Looking at the TGParams we see that their properties remain to be discovered during the Build method of the MatchStatement.

The two main methods in the implementation of MatchStatement are: 

void ExpNode(Context cx, ExpStep be, Sqlx tok, TNode? pd)

which is passed the current position in the Match Expression in a continuation be (as we will see below) and some context tok and pd, and DbNode: 

void DbNode(Context cx, NodeStep bn, Sqlx tok, TNode? pd)

which is passed a list of relevant nodes in its continuation bn. In both cases the continuation specifies what the algorithm does if it is able to move to the next SqlMatch node. If all the nodes have matched and there are no further nodes, the AddRow method will be called by the EndStep of the continuation to add a row of bindings to the result in an ExplicitRowSet constructed for the MatchStatement.

To start the process, ExpNode is called as follows:  

ExpNode(cx, new ExpStep(sa.mode, xf, new GraphStep(gf.Next(), new EndStep(this))), Sqlx.Null, null);

We can see that the continuation specifies the current match expression xf, the remaining graphs of the Match statement gf.Next() (there are none in this case), and the EndStep. The four continuation classes visible above are subclasses of Step, and each Step contains a link to the next step. Step has one more subclass, PathStep, which is discussed in the next section. We start with the first node in the first Graph (gf is a bookmark for the first MatchAlt in the MatchStatement) and an ExpStep (sa is the first MatchAlt, and xf is a bookmark for sa’s first MatchExp).

Thus, the first step in the match is

Step 1: ExpNode({Exp[#8,#19,#40],Graph[],End[%5]}, Null, null)

xn<-{SqlNode COLON #8 421 Order NODETYPE (444)[444,Domain INTEGER] rows 1 Indexes:((444)468) KeyCols: (444=True) IdIx=468 IdCol=444 [#9]}

ExpNode constructs a list of relevant nodes in the database. There is just one at this stage:

ds {(487=..)}

  ds[0].vals {(444=201)}

We now call DbNode for this list of possible matches, with a continuation consisting of a NodeStep for the first node in the list ds, followed by an ExpStep for the rest of the current graph, and the rest of the continuation.

Step 2: DbNode {Node#8:[487],Exp[#19,#40],Graph[],End[%5]}

 dn<- {TNode 487[421]}

  dn.tableRow {(444=201)}

This node matches our requirements, so DbNode adds any relevant bindings. There are no bindings for this SqlNode, so the Next method for our NodeStep continuation takes us to the next SqlNode in the graph, remembering the node we have just left:

Step 4: DbNode ({Node#19:[990,1024],Exp[#40],Graph[],End[%5]}, ARROWBASE, {TNode 487[421]}

 dn<-{ {TEdge 990[774]} vals {(802=1,847=201,905=1,963=5)}

Again, we hit the DoBindings breakpoint (since Qty is 5) and Next takes us to

Step 5: ExpNode({Exp[#40],Graph[],End[%5]}, ARROWBASE, {TEdge 990[774]}

 xn<- {SqlNode COLON #40 -509  NODETYPE rows 0 [#41] {#43=#48} TYPE T,#41 T,#48 X}

 ds<- {[157,{{(48=1,88=16/8x4)}}]}

This time, in DbNode, when we hit DoBindings, we add bindings for T and X:

Step 6: DbNode {Node#40:[157],Exp[],Graph[],End[%5]}, ARROWBASE, {TEdge 990[774]}

 dn<- {TNode 157[113]}

 cx.binding<- {(#41=WOODSCREW,#48=16/8x4)}

and Next takes us to next.Next() where the GraphStep is also empty so we get to the EndStep, and AddRow adds the bindings to ers.explRows:

AddRow(cx) where cx.binding {[#41=WOODSCREW,#48=16/8x4]})

Returning from AddRow brings us back to Step 6, where the bindings are cleared, to Step 4 where we call DbNode at Step 7, where the next value for dn {TEdge 1024[774]} fails the test Qty>4; the algorithm backtracks and no further rows get added.


Repeating Patterns

For simplicity let us start with an empty database:

[CREATE (a:Person {name:'Fred Smith'})<-[:Child]-(b:Person {name:'Peter Smith'}),

(b)-[:Child]->(:Person {name:'Mary Smith'})]

MATCH (p)-[:Child]->(c) RETURN, AS child

MATCH ({name:'Peter Smith'}) [()-[:Child]->()]+ (x) RETURN

The + is shorthand for {1,}. The pattern as a whole behaves like a complicated edge and therefore must start and end with a node, but these nodes need not be empty: any requirements they specify apply to the repeat and to the preceding and following nodes.
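For example (using the same hypothetical data), the explicit quantifier form can bound the repetition, here to between one and three Child steps:

MATCH ({name:'Peter Smith'}) [()-[:Child]->()]{1,3} (x)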

Tracing the steps as before (and watching also for calls on PathNode):

Step 1: ExpNode ({Exp[#8,%1,#50],Graph[],End[%7]}, Null, null)

 xn<- {SqlNode LBRACE #8 .. {#9=#14}}

 ds<- {[112, {{(47=1,87=Fred Smith)}}]}

{[139, {{(47=2,87=Peter Smith)}}]}

{[346, {{(193=1,236=2,290=1)}}]}

{[372, {{(47=3,87=Mary Smith)}}]}

{[399, {{(193=2,236=2,290=3)}}]}

Step 2: DbNode ({Node#8:[112,139,346,372,399],Exp[%1,#50],Graph[],End[%7]},Null,null)

  dn<-  {TNode 112[23]} backtrack (on CheckProps)

Step 3: DbNode(..)

  dn<- {TNode 139[23]}

Step 4: ExpNode {Exp[%1,#50],Graph[],End[%7]} null {TNode 139[23]}

Step 5: PathNode ({Path0[#32,#35,#45],Exp[#50],Graph[],End[%7]}, {TNode 139[23]})

Step 6: ExpNode {Exp[#35,#45],Path0[#32,#35,#45],Exp[#50],Graph[],End[%7]},Null, {TNode 139[23]})

  xn<-{SqlEdge #35 CHILD}

  ds<- {[346, {{(193=1,236=2,290=1)}}]}

{[399, {{(193=2,236=2,290=3)}}]}

Step 7: DbNode ({Node#35:[346,399],Exp[#45],Path0[#32,#35,#45],Exp[#50],Graph[],End[%7]}, ARROWBASE, {TNode 139[23]})

  dn<-{TEdge 346[166]}

Step 8: ExpNode ({Exp[#45],Path0[#32,#35,#45],Exp[#50],Graph[],End[%7]},ARROWBASE, {TEdge 346[166]}

 xn<- {SqlNode RPAREN #45 }

 ds<- {[112, {{(47=1,87=Fred Smith)}}]}

Step 9: DbNode({Node#45:[112],Exp[],Path0[#32,#35,#45],Exp[#50],Graph[],End[%7]},ARROWBASE, {TEdge 346[166]})

 dn<- {TNode 112[23]}

Step 10: PathNode ({Path1[#32,#35,#45],Exp[#50],Graph[],End[%7]}, {TNode 112[23]})

 ot<- {(%0,(0=(0=TNode 139[23],1=TEdge 346[166],2=TNode 112[23])))}

Step 11: ExpNode ({Exp[#35,#45],Path1[#32,#35,#45],Exp[#50],Graph[],End[%7]},Null, {TNode 112[23]})

 xn<-{SqlEdge COLON #35 166 CHILD ..}

 ds<-{} back to step 10

Step 12: ExpNode ({Exp[#50],Graph[],End[%7]},WITH, {TNode 112[23]})

 xn<-{SqlNode X #50 -510  NODETYPE rows 0 From:%8 Id=#50 Id X,#50 X}

 ds<- {#50, {{(47=1,87=Fred Smith)}}]}

Step 13: DbNode({Node#50:[112],Exp[],Graph[],End[%7]}, WITH, {TNode 112[23]})

 dn<- {TNode 112[23]}

 AddRow({(#50=TNode 112[23])} back to Step 9 back to step 7

Step 14: DbNode({Node#35:[346,399],Exp[#45],Path0[#32,#35,#45],Exp[#50],Graph[],End[%7]}, ARROWBASE, {TNode 139[23]})

 dn<- {TEdge 399[166]}

Step 15: ExpNode ({Exp[#45],Path0[#32,#35,#45],Exp[#50],Graph[],End[%7]},ARROWBASE, {TEdge 399[166]})

 xn<-{SqlNode RPAREN #45 }

 ds<- {[372, {{(47=3,87=Mary Smith)}}]}

Step 16: DbNode ({Node#45:[372],Exp[],Path0[#32,#35,#45],Exp[#50],Graph[],End[%7]}, ARROWBASE, {TEdge 399[166]})

 dn<-{TNode 372[23]}

Step 17: PathNode ({Path1[#32,#35,#45],Exp[#50],Graph[],End[%7]}, {TNode 372[23]})

Step 18: ExpNode ({Exp[#35,#45],Path1[#32,#35,#45],Exp[#50],Graph[],End[%7]}, Null, {TNode 372[23]})

 xn<- {SqlEdge COLON #35 166 CHILD ..}

 ds<-{} back to step 17

Step 19: ExpNode({Exp[#50],Graph[],End[%7]},WITH, {TNode 372[23]})

 xn<-{SqlNode X #50 -510  NODETYPE rows 0 From:%8 Id=#50 Id X,#50 X}

 ds<-{[#50, {{(47=3,87=Mary Smith)}}]}

Step 20: DbNode({Node#50:[372],Exp[],Graph[],End[%7]}, WITH, {TNode 372[23]})

 dn<- {TNode 372[23]}

 AddRow({(#50=TNode 372[23])} back to step 20 back to step 16 back to step 14 back to step 5 back to step 3

Step 21: dn<- {TEdge 346[166]} backtrack

Step 22: dn<- {TNode 372[23]} backtrack

Step 23: dn<- {TEdge 399[166]} backtrack back to step 1


F. Laux and M. Crowe, “Information Integration using the Typed Graph Model”, DBKDA 2021: The Thirteenth International Conference on Advances in Databases, Knowledge, and Data Applications, IARIA, May 2021, pp. 7-14, ISSN: 2308-4332, ISBN: 978-1-61208-857-0

N. Francis, A. Gheerbrant, P. Guagliardo, L. Libkin, V. Marsault, et al., "A Researcher's Digest of GQL", 26th International Conference on Database Theory (ICDT 2023), Mar 2023, Ioannina, Greece. doi:10.4230/LIPIcs.ICDT.2023.1.

M. Crowe, PyrrhoV7alpha, 

Thursday 21 April 2022

PyrrhoV7alpha is on GitHub again

 As of today 21 April 2022, the version of PyrrhoDBMS that I have been working on since 2019 is available on GitHub once again. It is still alpha code, but has a number of features of interest which this article will briefly review. Go to .

All textbooks on Relational Databases emphasise the importance of having serializable transactions, to preserve correctness in a live database. They explain that the two main ways of achieving this are "pessimistic" control using two-phase locking and "optimistic" control where transactions are subject to a validation stage during commit. Both methods are presented as over-complex, so that all of the same textbooks immediately settle for something much weaker than serializable transactions. Furthermore, the discussion of what serialisability means for business operations is based on schedules, where likely areas of conflict are considered and avoidance strategies planned.

The international standard for the database language SQL, and all commercial database products, offer a range of "isolation levels" which can be specified for transaction control: generally users settle for the REPEATABLE READ level. If SERIALIZABLE is selected, many operations are prevented with an error message along the lines that serializable transactions cannot be guaranteed.

For years now, I have argued for an alternative strategy, based on the "optimistic" approach, but incorporating a full transaction log of every committed operation on the database. In fact, in a strong sense, the transaction log is the database: it is the only thing persisted on disk, and consists of a sequence of non-overlapping transaction records. All versions of Pyrrho have used this approach, which guarantees from the outset that the committed transactions were indeed serialisable, and the serialisation used is the one in the transaction log. When the server is running, all of the live data is in memory. This architecture is suited to many business databases where data (customers, accounts etc.) accumulates, and less useful for use cases such as messaging hubs where data is not kept for long.

In 2019 I demonstrated with a simple DBMS called Strong (StrongDBMS) that such an approach could outperform commercial database products in situations of high concurrency and conflicting transactions.  The code for this DBMS is available in the above github repository. 

However, there were two important points in the demonstration: first, that full safety requires a style of programming based on immutable shareable data structures, and second, that my own PyrrhoDBMS was unable to commit as many transactions as the competition, in an admittedly challenging test. Since that date I have been working to apply the approach of shareable data structures to the PyrrhoDBMS, so that it too can outperform the traditional two-phase locking products. The alpha code is now available for this version 7 of PyrrhoDBMS, and it will be presented at DBKDA 2022.

For the changes in programming techniques that are required, see previous posts in this blog. Just as strings are immutable in Java and C#, database objects such as rowsets and cursors are immutable and shareable. Queries are parsed to construct the implementing rowsets, which when traversed will use the current versions of the (immutable) base tables. The radical nature of the change in programming style can be seen in how structures are shared but never copied or cloned; taking a snapshot is simply recording the current root node, as each version has a different root node, while sharing nearly all the rest of the nodes in the structure. The server's list of database root nodes (1 per database) is briefly locked to install a new database root. Writing of data to the disk is the result of a successful commit (append storage): nothing on disk is ever overwritten.

PyrrhoV7alpha includes a set of tests that demonstrate stored procedures, constraints and triggers, user defined types and view-mediated virtual data warehousing.

Thursday 23 September 2021

Implementation: instancing and framing

(Updated: 14th January 2022: Implementation continues)

Tables and table references

During traversal of a rowset, the value associated with a column name changes from one row to the next. This is managed in database implementations using the notion of a cursor. A cursor is a row value with the additional capabilities of being able to advance to contain the values for the next row or step back to the previous row.

Column names are local to a table, or more precisely to a table reference, since a table and its columns may be referenced more than once in an SQL statement, and the different references are then distinguished using aliases. Different references to the same table thus have different properties, and may be subject to different where-conditions etc.

To manage this, it is useful to have a process of instancing, so that the shared table remains immutable, while its instances are placed in transient local storage (a heap). Thus the instance and its columns will be referenced using heap uids instead of names or defining positions.

Each table reference thus has its own instance on the heap, and instances for its columns. At the end of execution of the current SQL statement, the heap can be forgotten.
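Instancing can be sketched like this. A hypothetical Python stand-in (Pyrrho is C#, and these names are illustrative only): the shared `Table` definition is never touched, while each reference to it gets fresh uids on a transient heap.

```python
import itertools

_uid = itertools.count(1)          # transient uids, reset per statement

class Table:
    """Shared, immutable definition: never changed by a query."""
    def __init__(self, name, columns):
        self.name, self.columns = name, tuple(columns)

def instance(table, heap):
    """Allocate fresh heap uids for one table reference and its columns."""
    uids = {}
    for col in table.columns:
        uid = next(_uid)
        heap[uid] = col            # transient: forgotten after the statement
        uids[col] = uid
    return uids

heap = {}
t = Table("orders", ["id", "total"])
r1 = instance(t, heap)   # e.g. the reference "orders a"
r2 = instance(t, heap)   # e.g. the reference "orders b"
```

The two references resolve the same column names to different uids, so each can carry its own where-conditions without affecting the other or the shared definition.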

Compiled database objects

In standard SQL, several database object types (e.g. Procedure, Check, Trigger, View) define executable code. It is an ambition of Pyrrho V7 to compile such code once only: on database load, or on definition of a new compiled object. The compilation process creates many subsidiary objects (variable declarations, expressions, executable statements) in memory. These objects are then immutable and only replaced if the database object is altered (ALTER VIEW etc). In the next iteration (V7) of Pyrrho, these subsidiary objects have temporary uids and are stored in a field of the compiled object called framing.

When the compiled object is referenced during execution of an SQL statement, the objects in the framing need to be copied to the top of the heap. However, for efficiency reasons, even such a simple step should be carried out as few times as possible. Procedure code can be brought into the context just once when required. Trigger code, and constraints for columns and domains, should be brought into the context before rowset traversal begins.

When views are used in a query, view references need to be instanced in the same way as table references: an instance is created for each reference, so that it can be equipped with where-conditions, grouping specifications etc.

Prepared Statements

Prepared statements are local to a connection, have query parameters, and can be handled in much the same way as compiled statements. An instance of a prepared statement has the formal parameters replaced with actual values before execution begins.

The storage used for prepared statements is shared with successive transactions on the connection, and does not need to be saved in the database.

Friday 3 September 2021

Implementing Shareable Data Structures

The concept of shareable data structures first appeared in this blog in 2018, and there has been a software repository on Github since that time. Several demonstration programs and PyrrhoV7alpha source code can also be found there. In the nature of things, the concept has evolved.

The use of shareable data structures is fundamental to the implementation of Pyrrho V7. The following documentation is based on part of a tutorial given by me and Fritz Laux at DBKDA 2021.

These have a recursive definition: a data structure is shareable iff it is immutable, and all fields are in turn shareable. In all programming languages, values of primitive data types are shareable. In Java, Python and C#, strings are also shareable. Unfortunately, in any programming language that allows data structures to have subclasses, there is no way of preventing a subclass from adding a mutable field, other than declaring the class sealed or final. If we want to ensure our data structures are shareable, we need to declare them as internal to our project. Then at least we control what subclasses exist.

In a programming language based on references, such as Java, or C#, we can make all fields in our structure final, or readonly. Then any reference can be safely shared, and all fields might as well be public (unless there are confidentiality issues).

If all of the fields are, in turn, also known to be immutable, then there is no need to clone or copy fields: copying a pointer to the structure itself gives read-only access to the whole thing.

For example, if the Database class is shareable, and b is a database, then a:=b is a shareable copy of the whole database (we have just copied a single 64-bit pointer). Such an assignment (or snapshot) is also guaranteed to be thread-safe.

Pointers to shareable structures are never updated but can be replaced when we have a new version to reference. If this new version has some changed fields, it is perfectly fine to continue to use the same pointers for all the unchanged fields.

Shareable tree structures: adding a node

When we change a field located deep in a shareable structure (e.g. add a node to a shareable tree structure), we will need to construct a single new node at each level back to the top of the structure. But the magic is that all the other nodes remain shared between the old and new versions.

The above picture (from Krijnen and Mertens, Mathematics Centre, Amsterdam, 1987) shows a tree with 7 leaves (that is, a tree of size 7), and updating (that is, replacing) one leaf node has resulted in just 2 new inner nodes being added to the tree. This makes shareable B-Trees into extremely efficient storage structures.

In tests, (such as DictDemo, in the github repository) we see that for a suitable minimum number of child nodes in the B-Tree, the number of new nodes required for a single update to a B-Tree of size N is O(logN), and experimentally, this means that for each tenfold increase in N, the number of new nodes per operation roughly doubles.

Note that we also get a new root node every time (this avoids wearing out flash memory).
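Path copying and structural sharing can be demonstrated directly. A toy Python sketch (not Pyrrho's actual B-Tree, which is in C#) using tuples as immutable nodes: replacing one leaf rebuilds only the nodes on the path to the root, and identity checks confirm the rest is shared.

```python
def leaf(v):
    return ("leaf", v)

def node(l, r):
    return ("node", l, r)

def update(tree, path, v):
    """Replace one leaf; rebuild only the nodes on the path to the root.
    path is a tuple of 0 (go left) / 1 (go right) steps."""
    if not path:
        return leaf(v)
    _, l, r = tree
    if path[0] == 0:
        return node(update(l, path[1:], v), r)   # right subtree is SHARED
    return node(l, update(r, path[1:], v))       # left subtree is SHARED

old = node(node(leaf(1), leaf(2)), node(leaf(3), leaf(4)))
new = update(old, (0, 1), 99)   # replace the leaf holding 2
```

Here a tree of four leaves gets a new root and one new inner node per level, while the entire untouched right subtree (and the untouched sibling leaf) are the very same objects in both versions.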

The choice of programming language

It is helpful if the programming language supports:

  •        readonly directives (Java has final)
  •        Generics (Java has these)
  •        Customizing operators such as += (not Java)

o    Because a += b is safer than Add(a, b)

o    It is easy to forget to use the return value in a = Add(a, b)

  •        Strong static typing is implied (so not Java)

o    Many languages have “type erasure”

  •        It is also useful to have all references nullable

So I prefer C#, which has now been around for 19 years (Java and Python have been with us for over 30).

However, as mentioned above, C# provides no syntax for requiring a class to be shareable: specifically, there is no way of requiring a subclass of a shareable class to be shareable. It will cease to be shareable if it has even one mutable field.

Shareable database objects

What data structures in the DBMS can be made shareable?

  •       Database itself, and its subclass, Transaction
  •       Database objects such as Table, Index, TableColumn, Procedure, Domain, Trigger, Check, View, Role
  •       Processing objects such as Query, Executable, RowSet, and their many subclasses
  •       Cursor and most of its subclasses
  •       TypedValue and all its subclasses

All of these can be made shareable. But many things cannot be made shareable:

Context and Activation cannot be made shareable because in processing expressions we so often have intermediate values.

Also, something needs to access non-shareable system structures such as FileStream and HttpRequest.

And Physical and its subclasses are used for preparing objects for the database file, so cursors that examine logs are not shareable.

An implementation library: first steps

A fundamental building block in many DBMS implementations is the B-tree. In Pyrrho BTree&lt;K,V&gt; is a sort of unbalanced B-tree. It has a += operator to add a (key, value) pair, and a -= operator to remove a key.

BList<V> is a subscriptable subclass where K is int. It is much slower than BTree, because it partially reindexes the list starting from 0 on insertion and deletion.

Both of these structures can be traversed up and down using shareable helper classes ABookmark&lt;K,V&gt; and methods First(), Last(). The ABookmark class implements key(), value(), Next() and Previous().

These classes and their subclasses are used throughout the DBMS implementation. They are shareable provided K and V are shareable. If the classes K and V are not themselves shareable, for example if one or both is the object class, a tree will be shareable provided all of its contents (the nodes actually added) are shareable. At least, it is easy to ensure that all public constructors only have shareable parameters.
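The += / -= style of Pyrrho's BTree&lt;K,V&gt; can be mimicked in Python via operator overloading. This is a hedged sketch only: it uses a flat dict copy rather than Pyrrho's real O(log N) node-sharing tree, and the class name is borrowed purely for illustration.

```python
class BTree:
    """Immutable map with C#-style += and -= (Python __add__/__sub__).
    A sketch of the interface of Pyrrho's BTree<K,V>, NOT its real
    balanced, node-sharing implementation."""

    def __init__(self, items=()):
        self._m = dict(items)

    def __add__(self, kv):
        """tree + (key, value) -> new tree; the old tree is unchanged."""
        m = dict(self._m)
        m[kv[0]] = kv[1]
        return BTree(m.items())

    def __sub__(self, key):
        """tree - key -> new tree without that key."""
        m = dict(self._m)
        m.pop(key, None)
        return BTree(m.items())

    def __getitem__(self, key):
        return self._m.get(key)

t1 = BTree()
t2 = t1 + ("a", 1) + ("b", 2)   # in C#: t += ("a",1) rebinds t to a new tree
t3 = t2 - "a"
```

The design point is the one made in the bullet list above: `t += kv` cannot forget to use the return value, whereas `Add(t, kv)` silently discards the new version if the caller ignores the result.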

For convenience, Pyrrho uses a potentially non-shareable base class Basis, whose only field is a BTree<long,object> called mem. It has an abstract method New which can be used to implement the += and -= as effectively covariant operators on subclasses, and these can be used to change the properties on a database (of course, by creating a new one).

DBObject and Database

DBObject is a subclass of Basis, with a public readonly field called defpos, of type long. The defpos (defining position) acts as a uid or unique identifier for every object in the database. As described in section 2.3 below, the range of values of long is divided into ranges for database objects depending on their lifetime: committed objects have the same lifetime as the database (e.g., the SQL delete statement can unlink one or more objects but they remain in the log). There are many subclasses of DBObject, described in this booklet.

Database is also a subclass of Basis, and it uses the mem field with positive uids to store all of the database objects it contains. (Entries in mem with negative uids are used for properties of the database, as described in section 3.5.2.) The Database class also has two static lists: databases, and dbfiles. The first is the set of databases indexed by database name, and the second is a corresponding list of File objects.

Then there is a method called Load() to build the database from the transaction log file, and Commit() which writes Physical records durably to the log.

Transaction and B-Tree

We can use the B-Tree concept to model how the transaction Commit process works. The Commit process was described in terms of transformations in the previous posting in this blog.

  1.       Suppose we have a database in any starting state D0.
  2.       If we start a transaction T0 from this state, initially T0 is a copy of D0 (i.e. equal pointers).
  3.       As T0 is modified it becomes T1, which shares many nodes with D0, but not all.
  4.       D0 also evolves as some other transaction commits, so D1 has a new root and some new nodes.
  5.       When T1 commits, we get a new database D2 incorporating the latest versions of everything.
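The five steps above can be sketched as follows. A hypothetical Python model (Pyrrho is C#; a dict stands in for the shared tree, and the validation rule is simplified to "nothing we wrote was changed by anyone else"):

```python
class Database:
    """An immutable database state; commit produces a new version."""
    def __init__(self, data, version=0):
        self.data, self.version = dict(data), version

def with_change(db, key, value):
    """An elementary change: a new state sharing everything unchanged."""
    d = dict(db.data)
    d[key] = value
    return Database(d, db.version)

def commit(current, tx, started_from):
    """Validate, then merge: tx fits if nothing it wrote has also been
    changed in the shared database since tx started."""
    written = {k: v for k, v in tx.data.items()
               if started_from.data.get(k) != v}
    for k in written:
        if current.data.get(k) != started_from.data.get(k):
            raise Exception("transaction conflict on " + str(k))
    merged = dict(current.data)
    merged.update(written)
    return Database(merged, current.version + 1)

d0 = Database({"x": 1})          # step 1: starting state D0
t0 = d0                          # step 2: T0 is a pointer copy of D0
t1 = with_change(t0, "y", 2)     # step 3: T0 modified becomes T1
d1 = with_change(d0, "x", 5)     # step 4: another transaction commits: D1
d2 = commit(d1, t1, d0)          # step 5: T1 commits, giving D2
```

A second transaction that had also written "x" would fail the check in `commit`, which is the relocation/validation idea developed in the later sections.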

Thursday 2 September 2021

Some Pyrrho V7 alpha documentation

At DBKDA2021 in May, I presented a tutorial on the inner workings of Pyrrho V7. The PDF version, with the demonstrations, totalled 10MB, which was probably much too much. Material based on it will nevertheless be included in the introductory sections and appendices of the documentation "Introduction to the Source code of the PyrrhoDBMS", a version of which is available on my github site. The following extracts are from the very start of the tutorial, which went on to explain the use of shareable data structures and ended with a deep dive into remote updates using view-mediated web services. (All of this material will soon appear on the github site: email me if you are interested in getting it sooner.)

ACID and Serializable Transactions

Our starting point is that full isolation requires truly serializable transactions.

All database textbooks begin by saying how important serializability and isolation are, but very quickly settle for something much less.

If we agree that ACID transactions are good, then:

·         First, for atomicity and durability we should not write anything durable until (a) we are sure we wish to commit and (b) we are ready to write the whole transaction.

·         Second: before we write anything durable, we should validate our commit against the current database.

·         Third, for isolation, we should not allow any user to see transactions that have not yet been committed.

·         Fourth, for durability, we should use durable media – preferably write-once append storage.

From these observations, it seems clear that a database should record its durable transactions in a non-overlapping manner.

·         If transactions in progress overlap in time, they cannot both commit if they conflict: and if they don’t conflict, it does not matter which one is recorded first.

·         The simplest order for writing is that of the transaction commit.

·         If we are writing some changes that we prepared earlier, the validation step must ensure that it depends on nothing that has changed in the meantime, so that our change can seem to date from the time it was committed rather than first considered.

·         Effectively we need to reorder overlapping transactions as we commit them.

These few rules guarantee actual serialization of transactions for a single transaction log (sometimes called a single transaction master). It obviously does not matter where the transactions are coming from.

But if a transaction is trying to commit changes to more than one transaction log, things are very difficult (the notorious two-army problem). If messages between autonomous actors can get lost, then inconsistencies are inevitable. The best solution is to have a DBMS act as transaction master for every piece of data. For safety, any transaction should have at most one remote participant, and more complex transactions can be implemented as a sequence of such one-way dependency transactions.

The transaction log

The first demonstration was about the transaction log. In Pyrrho the transaction log defines the content of the database (it is the only thing stored on disk).

In this demonstration, we will see how every Transaction T consists of a set of elementary operations e1, e2,.. , en .

Each operation corresponds to a Physical object written to the transaction log.

Committing this transaction applies the sequence to the Database D. In reverse mathematical notation:

(D)T = (..((D)e1)e2)..)en

If we think of a transaction commit as comprising a set of elementary operations e, then the transaction log is best implemented as a serialization of these events to the append storage. We can think of these serialized packets as objects in the physical database. In an object-oriented programming language, we naturally have a class of such objects, and we call this class Physical.

So, at the commit point of a transaction, we have a list of these objects, and the commit has two effects: (a) appending them to the storage, (b) modifying the database so that other users can see the changes. We can think of each of these elementary operations as a transformation on the database, to be applied in the order they are written to the database (so the ordering within each transaction is preserved).

And the transaction itself is the resulting transformation of the database.

Every Database D is the result of applying a sequence of transactions to the empty _system Database D0.

Any state of the database is thus the result of the entire sequence of committed transactions, starting from a known initial database state corresponding to an empty database.

The sequence of transactions and their operations is recorded in the log.
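The notation (D)T = (..((D)e1)e2)..)en is just a left fold of the elementary operations over the database state. A minimal Python sketch (hypothetical names; a frozenset is a toy stand-in for the immutable database):

```python
from functools import reduce

def apply_physical(db, e):
    """Each elementary operation e transforms the (immutable) db state."""
    return e(db)

def commit(db, transaction):
    # (D)T = (..((D)e1)e2)..)en : apply the operations in log order
    return reduce(apply_physical, transaction, db)

# toy elementary operations, each returning a NEW frozenset
e1 = lambda d: d | {"CREATE TABLE t"}
e2 = lambda d: d | {"INSERT INTO t"}

d0 = frozenset()            # the empty starting state
d1 = commit(d0, [e1, e2])   # d0 is untouched; d1 is the new state
```

Replaying the whole log with the same fold from the empty state reconstructs any database state, which is exactly how Load() rebuilds the database from the transaction log.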

Shareable data

Many programming languages (including Java, Python and C#) currently have shareable implementations of strings.

Specifically, strings in Java, Python and C# are immutable: if you make a change, you get a new string, and anyone who had a copy of the previous version sees no change.

In these systems, it is illegal to modify a string by assigning to a character position; instead you need to use a library function.

The addition operator can be used in these languages to create a sum of strings. This is basically the model for shareable data structures.

For a class to be shareable, all fields must be read-only and shareable. Constructors therefore need to perform deep initialisation, and any change to an existing structure needs another constructor. Inherited fields need to be initialised in the base (or super) constructor, maybe with the help of a static method.
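The read-only-fields rule can be illustrated in Python with frozen dataclasses (a stand-in for C# readonly fields; the Check and Column names echo Pyrrho's object types but the code is purely illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Check:
    """Shareable: frozen=True makes every field read-only, the Python
    analogue of C# readonly fields."""
    name: str
    condition: str

@dataclass(frozen=True)
class Column:
    name: str
    check: Check          # fields must in turn be shareable

c1 = Column("age", Check("positive", "age > 0"))
# "changing" c1 means constructing a new Column from its parts:
c2 = Column(c1.name, Check("adult", "age >= 18"))
```

Any attempt to assign to a field of `c1` raises an error, so a reference to it can be handed to any thread without cloning.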

This is useful for databases because databases share so many things: predefined types, properties, system tables. For example, all databases can share the same starting state, by simply copying it from the _system database.

Even more importantly, all transactions can start with the current state of the database, without cloning or separately copying any internal structures. When a transaction starts, it starts with the shared database state: as it adds physicals, it transforms that state. Different transactions will in general start from different states of the shared database.

In the above picture, we know what the database DB’s state is. Each of the concurrent transactions T1, T2 and T3 will, if committed, create a new version of DB (for example (DB)T2). Because of isolation, from the viewpoint of any of these transactions, they cannot know whether DB has already been updated by another transaction (in which case, they may no longer fit on the resulting database). In particular, after T2 commits, T1 and/or T3 will possibly no longer be able to commit.

However, if T1 was able to commit before, then it will still be able to commit provided it has no conflict with T2’s changes.

Transaction conflict

The details of what constitutes a conflicting transaction are debatable. Most experts would agree with some version of the following rules:

·         T1 and T2 will not conflict if they affect different tables or other database objects

o    And only read from different tables

·         But we can allow them to change different rows in the same table

o    Provided they read different specified rows

·         Or even different columns in the same row

o    Provided they read different columns

The first rule is sound enough, although the condition on reading is very important: we would need to include things like aggregation in our definition of reading. The first rule is also very easy to implement, especially if tables are shareable structures, as a simple 64-bit comparison is sufficient!

For the other rules, we would need to be very clear on what is meant by a “specified row”, and the non-existence of a row might only be determined by reading the whole table. 
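The first rule above can be sketched as a set comparison over object uids. A hypothetical Python shape (Pyrrho compares 64-bit defpos values; here plain ints stand in for them):

```python
def conflicts(t1, t2):
    """First-rule check: two transactions conflict only if a write on one
    side touches an object the other side read or wrote, compared by uid."""
    touched1 = t1["read"] | t1["written"]
    touched2 = t2["read"] | t2["written"]
    return bool(t1["written"] & touched2) or bool(t2["written"] & touched1)

# uids stand in for tables / other database objects
t1 = {"read": {1001}, "written": {2002}}
t2 = {"read": {3003}, "written": {4004}}   # disjoint: no conflict
t3 = {"read": {2002}, "written": {5005}}   # reads what t1 wrote: conflict
```

As the text notes, this coarse table-level test is cheap (set membership over 64-bit uids); the finer row- and column-level rules need a careful definition of "specified row" and are not modelled here.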

Transaction validation

At the start of Transaction Commit, there is a validation check, to ensure that the transaction still fits on the current shared state of the database, that is, that we have no conflict with any transaction that committed since our transaction started.

We will see that during commit of a Transaction T, we do a validation check. It ensures that the elementary operations of T can be validly relocated to follow those of any transaction T´ that has committed since the start of T.

Relocation amounts to swapping the order of the elementary operations ei that comprise transactions T and T´. Two such operations cannot be swapped if they conflict, e.g. if they change the same object (write/write conflict). The tests for write/write conflicts involve comparing our list of physicals with those of the other transactions.

There are also tests for read/write conflicts between T and T´. For checking read-write conflicts, we collect “read constraints” when we are making Cursors.
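Both kinds of check can be sketched together. A hypothetical Python shape (in Pyrrho the entries are Physical records and read constraints collected from Cursors; here uids stand in for both):

```python
def validate(tx_physicals, tx_read, other_physicals):
    """Validation at commit: relocation past T´ fails on a write/write
    conflict (both change the same object) or a read/write conflict
    (T read something T´ changed)."""
    ours = {p["target"] for p in tx_physicals}      # objects T wrote
    theirs = {p["target"] for p in other_physicals} # objects T´ wrote
    if ours & theirs:
        return "write/write conflict"
    if tx_read & theirs:      # read constraints gathered while making Cursors
        return "read/write conflict"
    return "ok"

mine = [{"target": 10}]   # T changed object 10
read = {20}               # T read object 20
```

Only if both checks pass can T's operations be appended after T´'s in the log, as if T had started later, which is the reordering argument made in the rules above.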