Chapter 10
Strategy Language

Rule rewriting is a highly nondeterministic process: at every step, many rules could be applied at various positions. A finer control on rule application is sometimes desirable, and even required, when rules are not terminating or not convergent. Maude programmers already had resources to restrain rewriting, from adding more information to the data representation to applying rules explicitly at the metalevel, as described in Section 17.7. However, these methods make specifications harder to understand and programming at the metalevel is a cumbersome task for novice users. For these reasons, a strategy language has been proposed [966198] as a specification layer above those of equations and rules. This provides a cleaner way to control the rewriting process respecting the separation of concerns principle. That is, the rewrite theory is not modified in any way: strategies provide an additional specification level above that of rules, so that the same system module may be executed according to different strategy specifications. The design of Maude’s strategy language has been influenced, among others, by ELAN [13] and Stratego [18].

Of course, the most basic action of the strategy language is rule application, which is invoked by mentioning the rule label. More complex strategies, involving several, or unboundedly many, rule applications, can be built by means of various strategy combinators. The language is described in detail in Section 10.1. Moreover, strategy expressions can be given a name to be later invoked with arguments, even recursively. Strategy modules, the modular way in which strategies are declared, are explained in Section 10.2. Like functional and system modules, strategy modules can be parameterized, with specific features that are described in Section 10.3. A discussion about the strategy search commands follows in Section 10.4, and the chapter concludes with a case study on logic programming in Section 10.5. Some other aspects are covered in specific chapters, like the metarepresentation of strategies in Section 17.3 and how strategy executions can be traced and debugged in Section 20.

Let us start with a simple example. The HANOI module below specifies the Tower of Hanoi puzzle, invented by the French mathematician Édouard Lucas in 1883 [90]. His story tells of an Asian temple where there are three diamond posts. The first one is surrounded by sixty-four golden disks of increasing size from top to bottom. The monks are committed to move them from one post to another respecting two rules: only one disk can be moved at a time, and disks can only be laid either on a bigger disk or on the floor. Their objective is to move all of them to the third post, and then the world will end.

 
mod HANOI is
protecting NAT-LIST .

sorts Post Hanoi Game .
subsort Post < Hanoi .

op (_) [_] : Nat NatList -> Post [ctor] .
op empty : -> Hanoi [ctor] .
op __ : Hanoi Hanoi -> Hanoi [ctor assoc comm id: empty] .

vars S T D1 D2 N : Nat .
vars L1 L2 : NatList .
vars H H' : Hanoi .

crl [move] : (S) [L1 D1] (T) [L2 D2] => (S) [L1] (T) [L2 D2 D1] if D2 > D1 .
rl [move] : (S) [L1 D1] (T) [nil] => (S) [L1] (T) [D1] .
endm

Here, the golden disks are modeled as natural numbers describing their size, and the posts are represented as lists of disks in bottom up order. We try to rewrite the initial puzzle setting
 
rew (0)[3 2 1] (1)[nil] (2)[nil] .

but the command does not terminate: the disks are being moved in a loop. To prevent such a situation, strategies can be useful. The Maude command for rewriting with strategies is:
 
srewrite [n] in ModId : Term using StrategyExpression .

It rewrites the term according to the given strategy expression and prints all the results. Since strategies need not be deterministic, many results may be obtained. As in the standard rewriting commands, we can optionally specify after in the module in which to rewrite, and just after the command keyword, which can be shortened to srew, a bound n on the number of solutions to be shown. For example,

 
Maude> srew [3] in HANOI : (0)[3 2 1] (1)[nil] (2)[nil] using move .

Solution 1
rewrites: 1 in 0ms cpu (0ms real) (~ rewrites/second)
result Hanoi: (0)[3 2] (1)[1] (2)[nil]

Solution 2
rewrites: 2 in 0ms cpu (1ms real) (~ rewrites/second)
result Hanoi: (0)[3 2] (1)[nil] (2)[1]

No more solutions.
rewrites: 2 in 0ms cpu (1ms real) (~ rewrites/second)

The two results of applying the move rule to the initial term are shown. Every time we request all solutions or more solutions than available, the interpreter indicates that there are no more solutions when the search finishes. The order in which solutions appear is implementation-dependent. Following the output of the srewrite command, we can view a strategy expression α as a transformation TΣ → 𝒫(TΣ) from an initial term t to the set of terms that are the results of α on t. To elaborate more complex strategies, we need to introduce the complete strategy language.

10.1 The strategy language

As we have anticipated, rule application is the essential building block of the strategy language. Besides the rule label, further restrictions can be imposed. Its most general syntax has the form:

 
label[X1 <- t1, ..., Xn <- tn]{α1, ..., αm}

Between square brackets, we can optionally set an initial ground substitution for the variables X1, ..., Xn appearing in the rule. When invoking a conditional rule with rewriting conditions, we must provide between curly brackets the strategies α1, ..., αm to control the rewriting of each of them. Only rules with exactly m rewriting condition fragments will be tried, so every rewriting fragment must be given a strategy. For example, we can select the Hanoi post to which the disk is moved by giving the command:
 
Maude> srew (0)[3 2 1] (1)[nil] (2)[nil] using move[T <- 2] .

Solution 1
rewrites: 1
result Hanoi: (0)[3 2] (1)[nil] (2)[1]

No more solutions.
rewrites: 1

Constrained by the substitution, the rewriting engine tries to match with extension (see Section 4.8) against the lefthand sides (S)[L1 D1] (2)[L2 D2] for the first move rule, and (S)[L1 D1] (2)[nil] for the second move rule, but only the second succeeds and the term is rewritten to solution 1.

An additional combinator all triggers a single rewriting step with all rules available in the module, even those which are not given a label, but excluding those marked with nonexec, external objects, and implicit rules. Rewriting condition fragments are executed as in the usual rewriting engine, without any restriction. To illustrate all and how strategies can control rule conditions, the following module HANOI-COUNT introduces a pair operator of sort Game for keeping track of the number of moves used to reach a solution.

 
mod HANOI-COUNT is
protecting HANOI .

op <_,_> : Hanoi Nat -> Game [ctor] .

vars H H' : Hanoi .
var N : Nat .

crl [step] : < H, N > => < H', s N > if H => H' .
rl [cancel] : N => 0 [nonexec] .
rl [inc] : N => s N [nonexec] .
endm

By controlling the H => H’ condition of step with all, we indicate that only one rule application has to be performed in each game step, so that the count is calculated correctly. The other rules cancel and inc will be used later to count moves in a different way.
 
Maude> srew < (0)[3 2 1] (1)[nil] (2)[nil], 0 > using step{all} .

Solution 1
result Game: < (0)[3 2] (1)[1] (2)[nil],1 >

Solution 2
result Game: < (0)[3 2] (1)[nil] (2)[1],1 >

No more solutions

By default, rule applications are tried anywhere within the subject term. However, they can be limited to the topmost position using the top restriction. For example, if we apply the cancel rule, which rewrites any natural number to zero, we obtain multiple solutions:

 
Maude> srew 1 using cancel .

Solution 1
result Zero: 0

Solution 2
result NzNat: 1

No more solutions.

In this case, the rule can be applied both on 1 = s 0, producing 0, and on its subterm 0, producing s 0. Instead, using top the rule is only applied at the top position s 0:
 
Maude> srew 1 using top(cancel) .

Solution 1
result Zero: 0

No more solutions.

Combining top with the matchrew operator described in Section 10.1.2, rules can be applied to any specified subterm in the subject term.

The other basic components of the language are the tests. They can be used to check a condition on the subject term. Their syntax has the form:

 
match P s.t. C

where P is a pattern and C is an equational condition, as described in Section 4.3. Their syntax and behavior are similar to those of the match commands in Section 4.9: the such-that clause can be omitted if no condition is imposed, and the starting keyword indicates where the matching is done: on top (match), on any fragment of the flattened top modulo axioms (xmatch), or anywhere within the subject term (amatch). The condition may refer to variables in the pattern, whose values will be obtained from the matching. On a successful match and condition check, the result is the initial term. Otherwise, the test does not provide any solution. For example, we can check whether the Tower of Hanoi puzzle is solved with tests:
 
Maude> srew (0)[nil] (1)[nil] (2)[3 2 1] using match (N)[3 2 1] H s.t. N =/= 0 .

Solution 1
result Hanoi: (0)[nil] (1)[nil] (2)[3 2 1]

No more solutions.

Maude> srew (0)[nil] (1)[nil] (2)[3 2 1] using xmatch (0)[nil] (2)[3 2 1] .

Solution 1
result Hanoi: (0)[nil] (1)[nil] (2)[3 2 1]

No more solutions.

Since the xmatch test matches inside the top symbol with extension, and the two posts in the pattern form a fragment of the associative and commutative symbol in the subject term, the second command succeeds. The same command with match fails, because the pattern does not match the whole term.
 
Maude> srew (0)[nil] (1)[nil] (2)[3 2 1] using match (0)[nil] (2)[3 2 1] .

No solution.

With the amatch variant, we can check whether the disks of sizes 3 and 1 are on the same post with the following command:
 
Maude> srew in HANOI : (0)[nil] (1)[nil] (2)[3 2 1] using amatch 3 L1 1 .

Solution 1
result Hanoi: (0)[nil] (1)[nil] (2)[3 2 1]

No more solutions.

Once we have rule application and tests, the basic actions and control of the language, we can combine them to build more complex strategies. Table 10.1 summarizes all primitive strategy constructors and informally describes their semantics.

Strategy ζ                          Results ⟦ζ⟧(θ, t)

idle                                { t }

fail                                ∅

rlabel[ρ]                           { t′ ∈ TΣ | t →ρ(l)⇒ρ(r) t′ for some rule l ⇒ r labeled rlabel in R }

α ; β                               ⋃_{t′ ∈ ⟦α⟧(θ,t)} ⟦β⟧(θ, t′)

α | β                               ⟦α⟧(θ,t) ∪ ⟦β⟧(θ,t)

α *                                 ⋃_{n ≥ 0} ⟦αⁿ⟧(θ,t)

match P s.t. C                      { t } if matches(P, t, C, θ) ≠ ∅, and ∅ otherwise

α ? β : γ                           ⟦α ; β⟧(θ,t) if ⟦α⟧(θ,t) ≠ ∅, and ⟦γ⟧(θ,t) if ⟦α⟧(θ,t) = ∅

matchrew P s.t. C by                ⋃_{σ ∈ matches(P,t,C,θ)} ⋃_{t1 ∈ ⟦α1⟧(σ,σ(X1))} ⋅⋅⋅ ⋃_{tn ∈ ⟦αn⟧(σ,σ(Xn))} { σ[X1/t1, ..., Xn/tn](P) }
  X1 using α1, ...,
  Xn using αn

slabel(t1, ..., tn)                 ⋃_{(lhs, δ, C) ∈ Defs} ⋃_{σ ∈ matches(slabel(t1,...,tn), lhs, C, id)} ⟦δ⟧(σ, t)

Table 10.1: Main strategy combinators and their informal semantics

10.1.1 Basic control combinators

To control the flow of rule execution, some combinators are defined. Let α, β, and γ represent arbitrary strategy expressions.

1.
The concatenation α;β executes the strategy α and then the strategy β on each α result. In our example, we can move twice using the strategy move ; move.

 
Maude> srew (0)[3 2 1] (1)[nil] (2)[nil] using move ; move .

Solution 1
result Hanoi: (0)[3 2 1] (1)[nil] (2)[nil]

Solution 2
result Hanoi: (0)[3] (1)[1] (2)[2]

Solution 3
result Hanoi: (0)[3 2] (1)[nil] (2)[1]

Solution 4
result Hanoi: (0)[3] (1)[2] (2)[1]

Solution 5
result Hanoi: (0)[3 2] (1)[1] (2)[nil]

No more solutions.

Notice that the initial state appears first in the solutions, since the first move can be undone by the second. This suggests why the initial rewrite command did not terminate.
2.
The disjunction or alternative α|β executes α or β. In other words, the results of α|β are both those of α and those of β.

 
Maude> srew (0)[3 2] (1)[1] (2)[nil] using move[S <- 0] | move[T <- 0] .

Solution 1
result Hanoi: (0)[3 2 1] (1)[nil] (2)[nil]

Solution 2
result Hanoi: (0)[3] (1)[1] (2)[2]

No more solutions.

3.
The iteration α* runs α zero or more times consecutively. If we issue the command
 
Maude> srew (0)[3 2 1] (1)[nil] (2)[nil] using move * .

Solution 1
result Hanoi: (0)[3 2 1] (1)[nil] (2)[nil]
...

Solution 27
result Hanoi: (0)[nil] (1)[1] (2)[3 2]

No more solutions.

we obtain the 27 admissible states, where the disks in each post are in the correct order.
4.
The concatenation, alternative, and iteration combinators resemble the corresponding constructors of regular expressions. The empty word and empty language constants are here represented by the idle and fail operators. The result of applying idle is always the initial term, while fail generates no solution.

 
Maude> srew (0)[3 2 1] using idle .

Solution 1
result Post: (0)[3 2 1]

No more solutions.

Maude> srew (0)[3 2 1] using fail .

No solution.

In a wider sense, we say that a strategy fails when no solution is obtained. Failures can happen in less explicit situations like vacuous rule applications, unsatisfied tests, etc.
5.
A conditional strategy is written α?β:γ. It executes α and then β on its results, but if α does not produce any, it executes γ on the initial term. That is, α is the condition; β the positive branch, which applies to the results of α; and γ the negative branch, which is applied only if α fails.

For example, we can instruct the monks to move disks to the first and second tower only if no disk can be moved to the target post. In this way, we obtain only 21 results out of 27.

 
Maude> srew in HANOI : (0)[3 2] (1)[1] (2)[nil] using
(move[T <- 2] ? idle : move) * .

Solution 1
result Hanoi: (0)[3 2] (1)[1] (2)[nil]
...

Solution 21
result Hanoi: (0)[nil] (1)[1] (2)[3 2]

No more solutions.

In the example above, we are giving precedence to one strategy over another, so that the second is only executed if the first fails. This is a common control pattern, and so it is defined as a derived operator with its own name, along with some other usual constructions: α or-else β executes α and, only if it fails, β, and is equivalent to α ? idle : β; not(α) fails if α succeeds and otherwise produces the initial term, i.e., α ? fail : idle; test(α) checks whether α succeeds but discards its results, leaving the initial term, and coincides with not(not(α)); try(α) executes α but returns the initial term if it fails, i.e., α ? idle : idle; and the normalization operator α ! executes α until it cannot be applied anymore, and is defined as α * ; not(α).
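For instance, using or-else, the previous command can be written equivalently; since α or-else β is just α ? idle : β, it should produce the same 21 results:

Maude> srew in HANOI : (0)[3 2] (1)[1] (2)[nil] using
(move[T <- 2] or-else move) * .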

10.1.2 Rewriting of subterms

The match and rewrite operator matchrew restricts the application of a strategy to a specific subterm of the subject term. Its syntax is

 
matchrew P(X1, ..., Xn) s.t. C by X1 using α1, ..., Xn using αn

where P is a pattern with variables X1, ..., Xn among others, and C is an optional equational condition. The using clauses associate variables in the pattern, which are matched by subterms of the matched term, with strategies that will be used to rewrite them. These variables must be distinct and must appear in the pattern. As in the case of tests, there are three flavors for this operator, depending on the type of matching: matchrew, xmatchrew, and amatchrew.

The semantics of this operator is illustrated in Figure 10.1. All matches of the pattern in the subject term are tried, checking the condition if any. If none succeeds, the strategy fails. Otherwise, for each match of the main pattern the subterms matching each Xi are extracted, rewritten according to αi, and their solutions are put in place of the original subterm. The rewriting of the subterms happens in parallel, in the sense that the terms evolve in a completely independent and fair way. Finally, all possible combinations of the subterm results will give rise to different reassembled solutions.

Another advantage of matchrew is that it lets us obtain information about the term and use it to modify the behavior of the strategies. The scope of the variables in the pattern and the condition is extended to the substrategies α1, ..., αn, so that the values they take by matching and evaluation can be used in nested rule substitutions, strategy calls, and match and matchrew operators. Conversely, when these variables are bound by outer constructs, they keep their values and matching is done against a partially instantiated pattern. When a variable in the condition is bound neither by the outer context nor by the matching, the strategy will be unusable and a warning will be shown by the interpreter.
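For instance, the following command (a sketch not used elsewhere in the chapter) binds N in the matching condition to an empty post and then uses it in a nested rule substitution to move a disk there; since both posts 1 and 2 are empty in the initial term, it should yield the same two configurations as the first srew example of this chapter:

Maude> srew (0)[3 2 1] (1)[nil] (2)[nil] using
matchrew H s.t. (N)[nil] H' := H by H using move[T <- N] .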

Figure 10.1: Behavior of the amatchrew combinator

For example, instead of using the step rule to count the moves in the Tower of Hanoi, we could have used a matchrew and the increment rule inc. This is written:

 
Maude> srew [1] in HANOI-COUNT : < (0)[3 2 1] (1) [nil] (2)[nil], 0 >
using (matchrew < H, N > by H using move, N using top(inc)) * ;
amatch (2)[3 2 1] .

Solution 1
rewrites: 523 in 56ms cpu (56ms real) (9339 rewrites/second)
result Game: < (0)[nil] (1)[nil] (2)[3 2 1], 7 >

Notice the use of top in the inc rule application. As we have seen in the earlier example with the cancel rule, inc could have been applied inside successors if we had omitted the top modifier. Although in this particular case the result would not have been affected, unnecessary work would have been done, as we can observe in the rewrite count of the following execution.
 
Maude> srew 1 using inc .

Solution 1
rewrites: 1 in 0ms cpu (0ms real) (~ rewrites/second)
result NzNat: 2

No more solutions.
rewrites: 2 in 0ms cpu (0ms real) (~ rewrites/second)

10.1.3 The one operator

Strategies are nondeterministic due to the different possible matches of the rules and the subterm rewriting operator, disjunction, iteration, etc. The various rewriting paths that these alternatives make possible are explored almost simultaneously by the strategic search engine. Sometimes, the different options lead to the same result or, even if the results differ, it may not matter which of them is selected. For efficiency purposes, the one combinator is introduced to execute its argument strategy and return only one of its solutions, namely the first one found. If the strategy does not produce any solutions, neither does one.

 
Maude> srew (0)[3 2 1] (1)[nil] (2)[nil] using one(move * ; amatch (2)[3 2 1]) .

Solution 1
rewrites: 121
result Hanoi: (0)[nil] (1)[nil] (2)[3 2 1]

No more solutions.
rewrites: 121

Maude> srew (0)[3 2 1] (1)[nil] (2)[nil] using move * ; amatch (2)[3 2 1] .

Solution 1
rewrites: 121
result Hanoi: (0)[nil] (1)[nil] (2)[3 2 1]

No more solutions.
rewrites: 150

The most noticeable performance improvements are obtained the closer one is applied to rules, because the alternatives are pruned immediately. However, this should be done only when paths can be discarded safely.
 
Maude> srew (0)[3 2 1] (1)[nil] (2)[nil] using one(move) * .

Solution 1
result Hanoi: (0)[3 2 1] (1)[nil] (2)[nil]

Solution 2
result Hanoi: (0)[3 2] (1)[1] (2)[nil]

No more solutions.

In the execution above, one(move) tries only a single move at each iteration, and the resulting sequence of moves does not lead to the solved game (in fact, it loops after the first move).

10.1.4 Strategy calls

Strategies with their own name and taking parameters can be defined in strategy modules, which will be explained in Section 10.2. These named strategies can be called as a strategy expression with the typical prefix operator syntax: a strategy identifier followed by a comma-separated list of arguments between parentheses. When the strategy does not take any arguments and there is no rule with the same name, the parentheses can be omitted. Suppose we have already defined a strategy moveAll that moves disks between the posts indicated by its first two arguments, its third argument being the number of disks to be moved. It can be applied to an initial state like this:

 
Maude> srew (0)[3 2 1] (1)[nil] (2)[nil] using moveAll(0, 2, 3) .

Solution 1
rewrites: 24 in 4ms cpu (2ms real) (6000 rewrites/second)
result Hanoi: (0)[nil] (1)[nil] (2)[3 2 1]

No more solutions.
rewrites: 24 in 4ms cpu (2ms real) (6000 rewrites/second)

How this and other strategies are defined is explained in the next section.

10.2 Strategy modules

Strategy modules let the user give names to strategy expressions. Named strategies facilitate the use and description of complex strategies, and they extend the expressiveness of the language by means of recursive and mutually recursive strategies. As for functional and system modules, a strategy module is declared in Maude using the keywords

 
smod ModuleName is DeclarationsAndStatements endsm

A typical strategy module would import the system module it will control, declare some strategies, and define them by means of strategy definitions. All statements and declarations that were available in functional and system modules can also be included in strategy modules, but we encourage avoiding them as much as possible to maintain a clean separation between the rewrite theory and the strategies that control it.

For example, the following strategy module HANOI-SOLVE defines the recursive strategy moveAll that we have used in Section 10.1.4 to illustrate strategy calls and solve the puzzle.

 
smod HANOI-SOLVE is
protecting HANOI .
protecting HANOI-AUX .

strat moveAll : Nat Nat Nat @ Hanoi .

vars S T C M : Nat .

sd moveAll(S, S, C) := idle .
sd moveAll(S, T, 0) := idle .
sd moveAll(S, T, s(C)) := moveAll(S, third(S, T), C) ;
move[S <- S, T <- T] ;
moveAll(third(S, T), T, C) .
endsm

The operator third, imported from the following functional module HANOI-AUX, calculates the third post with respect to the two given ones. In general, when we require some auxiliary operations to specify a strategy, we can define them in an auxiliary module that is then imported by the strategy module.

 
fmod HANOI-AUX is
protecting SET{Nat} .

op third : Nat Nat ~> Nat .

vars N M K : Nat .

ceq third(N, M) = K if N, M, K := 0, 1, 2 .
endfm

The features making up a strategy module are strategy declarations and definitions. Strategies are declared as follows:
 
strat StratName+ : Sort-1 ... Sort-k @ Sort [StratAttributes] .

Strategy names are single-word identifiers. The sorts between the colon and the @ sign are the argument sorts of the strategy. The sort after the @ sign is the subject sort, i.e., the sort of the terms to which the strategy will be applied. The colon can be omitted when a strategy has no arguments. Multiple strategy names, separated by blank space, can be used to declare several strategies at once, provided that all have the same argument sorts and subject sort. In this case the plural keyword strats instead of strat is recommended. The only admitted strategy attribute is metadata (see Section 4.5.2).
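For instance, the following declarations, taken from modules appearing in this chapter (the metadata attribute in the last one is added here only for illustration), show the different forms:

strat expand @ State .                      *** no arguments, colon omitted
strat moveAll : Nat Nat Nat @ Hanoi .       *** three Nat arguments
strats insert insort : Nat @ NatArray{X} .  *** two strategies declared at once
strat solve @ Configuration [metadata "logic programming solver"] .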

Strategy names can be overloaded, i.e., several strategies can share the same name but have different arities and argument types. Like operators, strategies are defined at the kind level: homonymous strategies with the same arity and such that the corresponding argument sorts belong to the same kinds are considered the same. In practice, the concrete sorts of the strategy signature do not have any effect and should be seen as a declaration of intent. Declaring the exact same strategy twice (with the same name and argument kinds) will generate a warning. For example, when processing the module

 
smod STRATS is
protecting INT .

strat repeat : Nat @ Nat .
strat repeat : Int @ Nat .
endsm

the interpreter will issue the warning:
 
Warning: <standard input>, line 5 (smod STRATS): strategy declaration
strat repeat : Int @ Nat . conflicts with
strat repeat : Nat @ Nat . from <standard input>, line 4 (smod STRATS).

For its part, the subject sort is ignored for all purposes: it is not checked when calling the strategy and it is not used to distinguish strategies.

Without a corresponding definition, named strategies behave like fail. Definition statements should be provided to properly define each named strategy according to the syntax:

 
sd StrategyCall := StrategyExpression [SdAttributes] .

The lefthand side of a definition is a strategy call, as seen in the previous section, possibly containing variables, which can be used in the righthand side. In fact, all variables in the strategy expression, except variables in matching patterns, must appear in the lefthand side or, for conditional definitions, be bound by the condition. Like equations and rules, strategy definitions may be conditional:
 
csd StrategyCall := StrategyExpression if Condition [SdAttributes] .

The condition of a strategy definition is an equational one, as described in Section 4.3, i.e., rewriting conditions t1=>t2 are not allowed. The attributes metadata, label and print (see Section 4.5) can be attached to sd and csd definitions. When a strategy is invoked, all available definitions for the name are tried, and all those whose lefthand side matches the calling arguments are executed. Moreover, conditions may generate different variable assignments, in which case the strategy will be applied with each one of them. For example, we could have given the third definition of moveAll in HANOI-SOLVE above as
 
csd moveAll(S, T, s(C)) := moveAll(S, third(S, T), C) ;
move[S <- S, T <- T] ;
moveAll(third(S, T), T, C) if S =/= T .

since the definition does not make sense when the source and target posts are the same. In fact, this case always fails because the partial function third(S, T) evaluates to an error term of kind [Nat], so the first recursive call does not match any definition.
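Conditions with several matches do indeed give rise to several executions. As a purely illustrative sketch (the strategy movePair is not part of the original example), the following definition could be added to HANOI-SOLVE: its matching condition has a solution for every ordered pair of distinct posts, and the rule application is tried once with each of them, so on well-formed terms movePair behaves exactly like a plain move application.

strat movePair @ Hanoi .
vars N K : Nat .
csd movePair := move[S <- N, T <- M] if N, M, K := 0, 1, 2 .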

To conclude this section, let us show another example: the insertion-sort algorithm implemented by means of strategies. The underlying array is represented as a set of pairs containing both a position and a value. It is assumed that there is a single entry for each position between one and the length of the array. A rule swap is defined to swap the values between two entries.

 
mod SWAPPING{X :: DEFAULT} is
protecting ARRAY{Nat, X} .
vars I J : Nat .
vars V W : X$Elt .
var AR : Array{Nat, X} .
rl [swap] : J |-> V ; I |-> W => J |-> W ; I |-> V .

op maxIndex : Array{Nat, X} ~> Nat .
eq maxIndex(empty) = 0 .
eq maxIndex(I |-> V ; AR) = if maxIndex(AR) < I then I else maxIndex(AR) fi .
endm

Then, the insertion-sort algorithm is specified as the insort strategy. It visits each position of the array in increasing order, maintaining the invariant that the entries to the left of the current index are sorted. At each iteration, it inserts the current value in the sorted fragment of the array. Insertion is done by the insert strategy, which compares in descending order each entry with its neighbor and swaps them if they are misplaced. The base case is the insertion in the first position, which does not require any work.

 
view DEFAULT+ from DEFAULT to STRICT-TOTAL-ORDER + DEFAULT is
endv

 
smod INSERTION-SORT{X :: STRICT-TOTAL-ORDER + DEFAULT} is
protecting SWAPPING{DEFAULT+}{X} * (
sort Array{Nat, DEFAULT+}{X} to NatArray{X}
) .

strat swap : Nat Nat @ NatArray{X} .
strats insert insort : Nat @ NatArray{X} .

vars X Y J I : Nat .
vars V W : X$Elt .
var AR : NatArray{X} .

sd insort(Y) := try(match AR s.t. Y <= maxIndex(AR) ;
insert(Y) ;
insort(Y + 1)) .

sd insert(1) := idle [label base-case] .
csd insert(s(X)) := try(xmatch X |-> V ; s(X) |-> W s.t. W < V ;
swap(X, s(X)) ;
insert(X))
if X > 0 [label recursive-case] .

sd swap(X, Y) := swap[J <- X, I <- Y] .
endsm

 
view Int<0 from STRICT-TOTAL-ORDER + DEFAULT to INT is
sort Elt to Int .
endv

 
smod INSERTION-SORT-INT is
protecting INSERTION-SORT{Int<0} .
endsm

Then, we can sort any array using insort with the following srewrite command:

 
Maude> srewrite 1 |-> 8 ; 2 |-> 3 ; 3 |-> 15 ; 4 |-> 5 ; 5 |-> 2 using insort(2) .

Solution 1
rewrites: 116 in 0ms cpu (0ms real) (~ rewrites/second)
result NatArray{Int<0}: 1 |-> 2 ; 2 |-> 3 ; 3 |-> 5 ; 4 |-> 8 ; 5 |-> 15

No more solutions.
rewrites: 116 in 0ms cpu (1ms real) (~ rewrites/second)

10.2.1 Module importation

Strategy modules can be seen as an additional layer of specification over functional and system modules, which are not allowed to import strategy modules. Instead, strategy modules can import functional, system, or strategy modules using the protecting, extending, and including modes.

The semantic requirements for each importation mode are described in Section 6.1 for functional and system modules. Similar requirements apply to strategies when importing strategy modules. A protecting extension M' of a strategy module M should not alter the rewriting paths permitted by each strategy declared in M. The declaration and definition of new strategies in the importing module does not affect the semantics of the strategies declared in the imported module, but adding new definitions for the imported strategies may change their behavior. If new rewriting paths are added to the imported strategies, but those of the imported module are preserved, the extending mode should be used. In any other case, the including mode is the right choice.

Let us illustrate the importation modes with a simple example. The following strategy module SMOD-IMPORT-EXAMPLE declares two strategies st1 and st2. Since st1 is not given any definition, no rewriting is possible using it, and thus st1 is equivalent to fail. On the other hand, the definition of st2 describes all rewriting paths of the form q → 'z for every quoted identifier q.

 
mod SMOD-IMPORT-EXAMPLE0 is
protecting QID .
rl [ab] : 'a => 'b .
rl [bc] : 'b => 'c .
rl [zz] : Q:Qid => 'z .
endm

 
smod SMOD-IMPORT-EXAMPLE is
protecting SMOD-IMPORT-EXAMPLE0 .
strats st1 st2 @ Qid .
sd st2 := st1 ? bc : zz .
endsm

The module SMOD-IMPORT-EXAMPLE' below includes a new definition for st2 that allows the additional rewriting path 'a → 'b. Hence, SMOD-IMPORT-EXAMPLE should not be imported in protecting mode but in extending mode.

 
smod SMOD-IMPORT-EXAMPLE' is
extending SMOD-IMPORT-EXAMPLE .
sd st2 := ab .
endsm

Although there is no direct way of restricting an already defined strategy, this may indirectly happen because of the conditional and its derived operators. The following module extends st1 with the 'a → 'b path. As a result, the condition of st2's definition st1 ? bc : zz now has solutions for 'a, so the rewriting path 'a → 'b → 'c is allowed, but the path 'a → 'z is lost, because the negative branch of the conditional is no longer evaluated.

 
smod SMOD-IMPORT-EXAMPLE'' is
including SMOD-IMPORT-EXAMPLE .
sd st1 := ab .
endsm

The semantic requirements for strategies refer to the rewriting paths that can be followed while executing the strategy, which is stronger than referring only to their evaluation results. However, in this example, the results are also changed non-conservatively:
 
Maude> srew in SMOD-IMPORT-EXAMPLE : 'a using st2 .

Solution 1
rewrites: 1
result Qid: 'z

No more solutions.
rewrites: 1

Maude> srew in SMOD-IMPORT-EXAMPLE'' : 'a using st2 .

Solution 1
rewrites: 2
result Qid: 'c

No more solutions.
rewrites: 2

As for functional and system modules, these requirements are not checked by Maude at runtime.

Finally, the language of module renamings (see Section 6.2.2) is extended with strategy renamings. As for regular operators, strat name to newName renames all strategies whose name is name to newName. To rename only a specific instance of an overloaded strategy name, its arity and subject sort can be made explicit with strat name : s1 ... sn @ s to newName.
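For instance, a module like the following (its name is ours) would make the insertion-sort strategy of Section 10.2 available under a different name:

smod INSERTION-SORT-ALT is
protecting INSERTION-SORT{Int<0} * (strat insort to insertionSort) .
endsm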

10.3 Parameterization in strategy modules

Strategy modules can be parameterized like any other module in Maude, by means of theories and views, as explained in Section 6.3. A parameterized strategy module is a usual strategy module that takes a set of formal parameters, bound to some theories:

 
smod SM{X1 :: T1, ..., XN :: TN} is ... endsm

These theories can be functional or system theories, or also strategy theories. A strategy theory specifies the signature of the strategies that a concrete strategy module parameter must provide and the requirements that these actual strategies must satisfy. How a particular strategy module meets the requirements of a strategy theory is specified by means of a view with strategy bindings, which will be described soon, and then views are used to instantiate parameterized modules with actual parameters. Strategy theories are introduced with the syntax:
 
sth T is ... endsth

and have features entirely similar to those of strategy modules: they can declare strategies, give (possibly conditional) strategy definitions, and import other modules and theories.

The simplest strategy theory STRIV declares a single strategy without parameters:

 
sth STRIV is
including TRIV .
strat st @ Elt .
endsth

The subject sort Elt of the strategy st is imported from the functional theory TRIV. Strategy theories can never be used to parameterize functional or system modules.

Given a strategy module that meets the requirements of a strategy theory, we should specify how they are actually met by using a view. The repertoire of mappings that can appear in a view (see Section 6.3.2) has been extended with additional syntax for strategy bindings:

1.
Mapping all formal strategies with the same given name at once. Each affected strategy will be sought in the target module after translating its argument sorts according to the sort mappings of the view.
 
strat StratName to StratName .

For example, consider the following strategy theory STHEORY and strategy module SMODULE:

 
sth STHEORY is
including TRIV .
protecting NAT .
strat foo : Elt @ Elt .
strat foo : Elt Nat @ Elt .
endsth

 
smod SMODULE is
protecting STRING .
protecting NAT .
strat bar : String @ String .
strat bar : String Nat @ String .
endsm

The following view SModule maps both the strategy foo(Elt) to bar(String) and the strategy foo(Elt, Nat) to bar(String, Nat) with the single mapping strat foo to bar.

 
view SModule from STHEORY to SMODULE is
sort Elt to String .
strat foo to bar .
endv

2.
Mapping a single formal strategy to an existing strategy in the target module.
 
strat StratName : Sort-1 ... Sort-k @ Sort to StratName .

For example, the previous view SModule can also be written as:

 
view SModule from STHEORY to SMODULE is
sort Elt to String .
strat foo : Elt @ Elt to bar .
strat foo : Elt Nat @ Elt to bar .
endv

3.
Mapping a single formal strategy to an actual strategy expression in the target module.
 
strat StrategyCall to expr StrategyExpression .

Only variables are allowed as arguments in the lefthand side of the mapping, as for the operator to term mappings (see Section 6.3.2). These variables can be used in the strategy expression in the righthand side. When the module is instantiated, the strategy calls matching the lefthand side will be replaced by the righthand side expression in any strategy definition of the module.
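As a sketch (the view name SModule2 and the variable E are ours), the first foo of the STHEORY/SMODULE example above could be mapped to a strategy expression instead of an existing strategy:

view SModule2 from STHEORY to SMODULE is
sort Elt to String .
var E : Elt .
strat foo(E) to expr try(bar(E)) .
strat foo : Elt Nat @ Elt to bar .
endv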

Finally, and as usual, modules are instantiated with views using the syntax

 
SM{View1, ..., ViewN}

Let us illustrate how parameterized strategy modules are instantiated by strategy views with an example. The following parameterized strategy module BACKTRACKING is a backtracking problem solver: the generic algorithm is specified as a strategy solve depending on some formal declarations.

 
smod BACKTRACKING{X :: BT-STRAT} is
strat solve @ X$State .
var S : X$State .
sd solve := (match S s.t. isSolution(S)) ?
idle : (expand ; solve) .
endsm

This module is parameterized by the backtracking problem specification, whose requirements are expressed in the strategy theory BT-STRAT. Its main element is the expand strategy, which is intended to generate all the successors of the problem state in which it is applied.

 
fth BT-ELEMS is
protecting BOOL .
sort State .
op isSolution : State -> Bool .
endfth

 
sth BT-STRAT is
including BT-ELEMS .
strat expand @ State .
endsth

The strategy theory includes a functional theory BT-ELEMS, where we require a sort State for the problem states, and a predicate isSolution to recognize solutions among them. The strategy theory itself declares the strategy expand without parameters. Now, we specify a concrete backtracking problem, the 8-queens problem, and instantiate the parameterized BACKTRACKING module with it by means of a view. Then, the instantiated strategy solve will be able to solve the 8-queens problem. The problem is described in the system module QUEENS and the strategy module QUEENS-STRAT:

 
mod QUEENS is
protecting LIST{Nat} .
protecting SET{Nat} .

op isOk : List{Nat} -> Bool .
op ok : List{Nat} Nat Nat -> Bool .
op isSolution : List{Nat} -> Bool .

vars N M Diff : Nat .
var L : List{Nat} .
var S : Set{Nat} .

eq isOk(L N) = ok(L, N, 1) .
eq ok(nil, M, Diff) = true .
ceq ok(L N, M, Diff) = ok(L, M, Diff + 1)
if N =/= M /\ N =/= M + Diff /\ M =/= N + Diff .
eq isSolution(L) = size(L) == 8 .

crl [next] : L => L N if N, S := 1, 2, 3, 4, 5, 6, 7, 8 .
endm

smod QUEENS-STRAT is
protecting QUEENS .

strat expand @ List{Nat} .
var L : List{Nat} .
sd expand := top(next) ; match L s.t. isOk(L) .
endsm

Then, we define a view from the BT-STRAT strategy theory to QUEENS-STRAT:

 
view QueensBT from BT-STRAT to QUEENS-STRAT is
sort State to List{Nat} .
strat expand to expand .
endv

As with sorts and operators, unmentioned strategy names get the identity mapping, so the strat mapping for expand in QueensBT could have been omitted, writing the view simply as:
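view QueensBT from BT-STRAT to QUEENS-STRAT is
sort State to List{Nat} .
endv

Finally, we can instantiate the parameterized module: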

 
smod BT-QUEENS is
protecting BACKTRACKING{QueensBT} .
endsm

and run solve for 8-queens:
 
Maude> srew [2] nil using solve .
srewrite [2] in BT-QUEENS : nil using solve .

Solution 1
rewrites: 285984 in 238ms cpu (238ms real) (1200211 rewrites/second)
result NeList{Nat}: 1 5 8 6 3 7 2 4

Solution 2
rewrites: 285984 in 238ms cpu (238ms real) (1200211 rewrites/second)
result NeList{Nat}: 1 6 8 3 7 4 2 5

Typically, views will be defined from a strategy theory to a strategy module or theory, but other combinations are possible. In general, strategy and non-strategy theories and modules can be freely combined in views.

1.
Views from strategy theories to non-strategy modules (or theories) are useful when the target strategies are simple enough to be defined by strategy expressions, for which to expr bindings must be used.
2.
Views from non-strategy theories to strategy modules (or theories) can be used to instantiate modules of a lower level with strategy modules. In that case, the instantiated module is automatically promoted to a strategy module. These views cannot contain strategy bindings, as there are no formal strategies to bind.

Otherwise, all the details about module instantiation in Section 6.3.4 are applicable to the strategy level as well. As a simple example, we can write a view from the strategy theory STRIV to the functional theory TRIV and vice versa as follows:

 
view StrivIdle from STRIV to TRIV is
sort Elt to Elt .
strat st to expr idle .
endv

view STRIV from TRIV to STRIV is
sort Elt to Elt .
endv

Moreover, we could have defined a view from the strategy theory BT-STRAT directly to the system module QUEENS by binding the formal strategy expand to the actual inline definition of the expand strategy we already used in the strategy module QUEENS-STRAT. In this way, we do not need to define or use the QUEENS-STRAT strategy module.

 
view QueensBT2 from BT-STRAT to QUEENS is
sort State to List{Nat} .
var L : State .
strat expand to expr top(next) ; match L s.t. isOk(L) .
endv

10.4 Strategy search and the dsrewrite command

Although strategies control and restrict rewriting, they do not make the process deterministic. A strategy may allow multiple rewriting paths, produced by alternative local decisions that may appear during its execution. The rewrite and frewrite commands eliminate this nondeterminism by each choosing a specific fixed strategy, as explained in Sections 5.4.1 and 5.4.2. Instead, the search command explores all the possible rule applications looking for a term matching a given target. The srewrite command agrees with search in the exhaustive exploration of the alternatives, looking, in this case, for strategy solutions. However, the srewrite search is not conducted on the complete rewriting tree of search, but in a subtree pruned by the effect of the strategy. How this tree is explored has implications for the command’s output and its performance.

The srewrite command explores the rewriting graph following a fair policy which ensures that all solutions reachable in a finite number of steps are eventually found, unless the interpreter runs out of memory. Although it is not a breadth-first search, multiple alternative paths are explored in parallel. This can be observed in the n-queens problem presented in Section 10.3, and in the particular example we showed:

 
Maude> srew [2] nil using solve .

Solution 1
rewrites: 285984 in 249ms cpu (248ms real) (1147935 rewrites/second)
result NeList{Nat}: 1 5 8 6 3 7 2 4

Solution 2
rewrites: 285984 in 249ms cpu (249ms real) (1147935 rewrites/second)
result NeList{Nat}: 1 6 8 3 7 4 2 5

Notice that both solutions are obtained at the same time and with the same number of rewrites, since these and even more rewriting paths are being explored in parallel. All of them indiscriminately increment the common rewrite counter. In this particular example, since all the solutions are at the same depth in the rewriting tree, the tree has to be explored almost completely before the first solution is found.

An alternative rewriting command dsrewrite (depth strategic rewrite) explores the strategy rewriting graph in depth. Its syntax coincides with that of the srewrite command except for the starting keyword, which can be abbreviated to dsrew.

 
dsrewrite [n] in ModId : Term using StrategyExpression .

Using this command in the previous example, we can observe some differences. The first solution is obtained sooner, some rewrites before the second. In absolute terms, both solutions are shown earlier and using fewer rewrites. The memory usage is reduced too.
 
Maude> dsrew [2] nil using solve .
dsrewrite [2] in BT-QUEENS : nil using solve .

Solution 1
rewrites: 15602 in 16ms cpu (17ms real) (942092 rewrites/second)
result NeList{Nat}: 1 5 8 6 3 7 2 4

Solution 2
rewrites: 20339 in 19ms cpu (21ms real) (1022728 rewrites/second)
result NeList{Nat}: 1 6 8 3 7 4 2 5

However, the disadvantage is that the depth-first exploration could get lost in an infinite branch before finding some reachable solutions, which would then be missed (see Section 10.5 for an example). The depth-first search order is determined by the order of the direct successors; for instance, in a disjunction α | β the alternatives of α are explored before those of β.

10.5 Case study: logic programming

In this section, the strategy language is used to define the semantics of a logic programming language similar to Prolog [35]. Strategies will be used to discard failed proofs, to enforce the Prolog search strategy, and to implement advanced features like negation and cuts.

First, the syntax of the language is defined in a functional module:

 
fmod LP-SYNTAX is
protecting NAT .
protecting QID .

sort Variable .
op x{_} : Nat -> Variable [ctor] .

sorts Term NvTerm .
subsorts Qid < NvTerm < Term .
subsort Variable < Term .
op _[_] : Qid NeTermList -> NvTerm [ctor prec 30 gather (e &)] .

sort NeTermList .
subsort Term < NeTermList .
op _,_ : NeTermList NeTermList -> NeTermList [ctor assoc] .

sort Predicate .
op _`(_`) : Qid NeTermList -> Predicate [ctor prec 30 gather (e &)] .

sorts PredicateList NePredicateList .
subsorts Predicate < NePredicateList < PredicateList .
op nil : -> PredicateList [ctor] .
op _,_ : PredicateList PredicateList -> PredicateList
[ctor assoc prec 50 id: nil] .
op _,_ : NePredicateList PredicateList -> NePredicateList [ditto] .
op _,_ : PredicateList NePredicateList -> NePredicateList [ditto] .

sort Clause .
op _:-_ : Predicate PredicateList -> Clause [ctor prec 60] .

sort Program .
subsort Clause < Program .
op nil : -> Program [ctor] .
op _;_ : Program Program -> Program [ctor assoc prec 70 id: nil] .
endfm

The language distinguishes between terms and predicates: the first describe data while the second state facts about it. Terms are constants represented by quoted identifiers, variables of the form x{n}, and structured terms like 'f['a, 'b], built from symbols composed of a name and a list of terms between brackets. Predicates also take a non-empty list of terms, but between parentheses. A program is a list of Horn clauses. They are composed of a predicate called the head, the :- symbol, and a comma-separated list of predicates called the premises. Clauses with an empty list of premises are called facts. For example, we can express family relations with predicates and people's names as constants; then we can give some clauses to indicate how complex relationships derive from simpler ones:
 
ops kinship family : -> Program .

eq kinship =
'sibling(x{1}, x{2}) :- 'parent(x{3}, x{1}), 'parent(x{3}, x{2}) ;
'parent(x{1}, x{2}) :- 'father(x{1}, x{2}) ;
'parent(x{1}, x{2}) :- 'mother(x{1}, x{2}) ;
'relative(x{1}, x{2}) :- 'parent(x{1}, x{3}), 'parent(x{3}, x{2}) ;
'relative(x{1}, x{2}) :- 'sibling(x{1}, x{3}), 'relative(x{3}, x{2}) .

eq family =
'mother('jane, 'mike) :- nil ;
'mother('sally, 'john) :- nil ;
'father('tom, 'sally) :- nil ;
'father('tom, 'erica) :- nil ;
'father('mike, 'john) :- nil ;
kinship .

Figure 10.2: Family tree defined by the example predicates

According to the clauses above, Erica is a sibling of Sally, 'sibling('erica, 'sally), because they both have Tom as a parent, 'parent('tom, 'erica) and 'parent('tom, 'sally). And this is true because he is the father of both, 'father('tom, 'erica) and 'father('tom, 'sally).

The language interpreter is given a query, i.e., a list of predicates that may contain free variables, and it answers whether the conjunction of all of them can be satisfied and for which substitutions this happens. The proof procedure consists in finding a clause whose head unifies with the predicate, and then trying to recursively prove its premises, while carrying the variable bindings along the process. Unification is implemented in a module LP-UNIFICATION, where substitutions are defined:

 
sort Binding .
op _->_ : Variable Term -> Binding [ctor prec 60] .

sort Substitution .
subsort Binding < Substitution .
op empty : -> Substitution [ctor] .
op _;_ : Substitution Substitution -> Substitution
[ctor assoc comm prec 70 id: empty] .

The unify operator calculates the substitution that unifies two given predicates if possible, respecting the bindings from an initial substitution.
 
op unify : Predicate Predicate Substitution -> Substitution? .

sort Substitution? .
subsort Substitution < Substitution? .
op fail : -> Substitution? [ctor] .
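
For instance, with the quoted constants of the family example, unifying two parenthood predicates should bind the variables of both sides; a command of the following form (shown without its output) should reduce to a substitution containing x{1} -> 'mike ; x{2} -> 'john:

Maude> red in LP-UNIFICATION : unify('parent(x{1}, 'john), 'parent('mike, x{2}), empty) .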

The variables in a clause must be fresh each time it is applied, so they must be renamed. An operator does so, helped by an index that is supposed to be greater than any variable index involved.
 
op rename : Clause Nat -> Clause .

Finally, a partial function value obtains the value of a variable in a given substitution:
 
op value : Variable Substitution ~> Term .
eq value(V, (V -> T) ; S) = T .

The language semantics is defined in a system module LP-SEMANTICS. The state of the interpreter is represented as a Configuration, and a rule modifies it while applying the clauses.

 
mod LP-SEMANTICS is
protecting LP-UNIFICATION .

sort Configuration .

vars N1 N2 : Nat . vars P1 P2 P3 : Predicate .
vars PL1 PL2 PL3 : PredicateList . vars S1 S2 : Substitution .
vars Pr Pr1 Pr2 : Program .

op <_|_> : PredicateList Program -> Configuration [ctor] .
eq < PL1 | Pr > = < last(PL1) | PL1 $ empty | Pr > .

op <_|_$_|_> : Nat PredicateList Substitution Program -> Configuration [ctor] .

crl [clause] : < N1 | P1, PL1 $ S1 | Pr1 ; P2 :- PL2 ; Pr2 >
=> < N2 | PL3, PL1 $ S2 | Pr1 ; P2 :- PL2 ; Pr2 >
if P3 :- PL3 := rename(P2 :- PL2,N1)
/\ S2 := unify(P1,P3,S1)
/\ N2 := max(N1,last(P3 :- PL3)) .
endm

The interpreter state holds a predicate list of pending goals, and a substitution that accumulates the variable bindings produced so far in the execution. It also carries the program and maintains a renaming index greater than the index of any variable seen. The clause rule selects any program clause and tries to unify its head with the first predicate in the goal list. If it succeeds, the predicate is replaced by the clause premises and the current substitution is updated. Additionally, it takes care of renaming the selected clause and updating the variable index counter.

A proof is successfully finished when a configuration of the form < N | nil $ S | Pr > is reached, i.e., when no pending goals remain, and then S is a substitution that makes the initial predicate hold.

Coming back to the family tree example, we can now ask questions like whether Jane and John are relatives, or of whom Tom is a parent. As we want to know whether a proof can be concluded for the initial predicate, search is the appropriate command. The value of Pr has been omitted in the following results for clarity.

 
Maude> search < 'relative('jane, 'john) | family > =>* < N | nil $ S | Pr > .
Solution 1 (state 11)
states: 12 rewrites: 1713
N:Nat --> 7
S:Substitution --> x{1} -> 'jane ; x{2} -> 'john ; x{3} -> x{5} ;
x{4} -> 'jane ; x{5} -> 'mike ; x{6} -> x{5} ; x{7} -> 'john

No more solutions.
states: 12 rewrites: 1967

Maude> search < 'parent('tom, x{1}) | family > =>* < N | nil $ S | Pr > .
Solution 1 (state 3)
states: 4 rewrites: 251
N:Nat --> 3
S:Substitution --> x{1} -> x{3} ; x{2} -> 'tom ; x{3} -> 'sally

Solution 2 (state 4)
states: 5 rewrites: 280
N:Nat --> 3
S:Substitution --> x{1} -> x{3} ; x{2} -> 'tom ; x{3} -> 'erica

No more solutions.

The rewrite command is not useful in this context because it simply explores a single rewriting path, thus a single proof path. This is clearly not enough to show multiple solutions, but it may also be insufficient to find a single one, as we can see running the first example with rewrite.
 
Maude> rew < 'relative('jane, 'john) | family > .
rewrites: 422
result Configuration: < 5 | 'father(x{4}, x{5}), 'parent(x{3}, x{2}) $
x{1} -> 'jane ; x{2} -> 'john ; x{3} -> x{5} ; x{4} -> 'jane | family >

An admissible logic programming interpreter must consider all possible proof paths and be able to resume them when the execution arrives to a dead end. Strategies can take care of this. First, we define an auxiliary predicate isSolution to decide whether a given configuration is a solution.

 
mod LP-EXTRA is
protecting LP-SEMANTICS .

op isSolution : Configuration -> Bool .

var N : Nat . var S : Substitution .
var Pr : Program . var Conf : Configuration .

eq isSolution(< N | nil $ S | Pr >) = true .
eq isSolution(Conf) = false [owise] .
endm

Now, we can define the strategy solve-simple that applies clause until a solution is found, and implicitly rejects any rewriting path that does not end in one. The exhaustive search of the srewrite command shows all reachable solutions for the initial predicate.

 
smod PROLOG is
protecting LP-EXTRA .

strat solve-simple @ Configuration .

var Conf : Configuration .
sd solve-simple := match Conf s.t. isSolution(Conf)
? idle : (clause ; solve-simple) .
endsm

The above definition is identical to the backtracking scheme described in Section 10.3. In fact, the BACKTRACKING module parameterized by a convenient view can be used to define solve-simple, as sketched below.
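For instance, a view from BT-STRAT to LP-EXTRA like the following would give an equivalent solver (the names PrologBT and PROLOG-VIA-BT are ours); LP-EXTRA already provides the Configuration sort and the isSolution predicate required by BT-ELEMS, and the expand strategy can be bound to the clause rule application:

view PrologBT from BT-STRAT to LP-EXTRA is
sort State to Configuration .
strat expand to expr clause .
endv

smod PROLOG-VIA-BT is
protecting BACKTRACKING{PrologBT} .
endsm

Even so, during this section, we will add some other elements to the strategy that are not compatible with that scheme. Now, the strategy can be applied to the previous examples: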
 
Maude> srew < 'parent('tom, x{1}) | family > using solve-simple .

Solution 1
rewrites: 544
result Configuration: < 3 | nil $ x{1} -> x{3} ; x{2} -> 'tom ; x{3} -> 'sally
| (omitted) >

Solution 2
rewrites: 544
result Configuration: < 3 | nil $ x{1} -> x{3} ; x{2} -> 'tom ; x{3} -> 'erica
| (omitted) >

No more solutions.
rewrites: 544

The resulting configurations are not easily readable, because they include the full program, which has not changed and has been omitted here, and also substitutions containing meaningless variables that do not occur in the initial predicate. We propose to wrap solving strategies, like solve-simple and those we will define next, as a strategy wsolve that additionally cleans and presents the results in a more readable form. This is defined in the strategy module PL-SIMPLIFIER, which imports some required functions and rules from the system module PL-SIMPLIFIER-BASE. Since this wrapper is intended to be used for any solving strategy we might define, PL-SIMPLIFIER is a strategy module parameterized by a new theory INTERPRETER that strategies like solve-simple should satisfy.

 
sth INTERPRETER is
protecting LP-SEMANTICS .
strat solve @ Configuration .
endsth

 
view Simple from INTERPRETER to PROLOG is
strat solve to solve-simple .
endv

 
smod PL-SIMPLIFIER{X :: INTERPRETER} is
protecting PL-SIMPLIFIER-BASE .
strat wsolve @ Configuration .
var Conf : Configuration .
var VS : VarSet .
sd wsolve := matchrew Conf s.t. VS := occurs(Conf)
by Conf using (solve ; solution[VS <- VS]) .
endsm

The strategy wsolve records the variables that occur in the initial configuration predicate, then executes the solve strategy, and finally applies the solution rule with the initial variable set. This rule restricts the substitution to the variables in the given set, after resolving them by transitivity. This procedure is defined in the PL-SIMPLIFIER-BASE module, which is only partially reproduced here because it has no interest regarding strategies.

 
mod PL-SIMPLIFIER-BASE is
extending LP-SEMANTICS .
sort VarSet .
subsort Variable < VarSet .
op empty : -> VarSet .
op _;_ : VarSet VarSet -> VarSet [ctor assoc comm id: empty] .

*** [...]

op occurs : Configuration -> VarSet .
op simplify : Substitution VarSet -> Substitution .
op solution : Substitution -> Configuration [ctor format (g! o)] .
rl [solution] : < N | nil $ S | Pr > => solution(simplify(S, VS)) [nonexec] .
endm

Instantiating the module, we obtain a wrapped strategy that additionally pretty prints the solution. The same will be done for the other solving strategies defined in the rest of the section.

 
smod PROLOG-MAIN is
protecting PL-SIMPLIFIER{Simple} * (strat wsolve to wsolve-simple) .
endsm

We can also observe that the order in which solutions appear depends on the way the rewriting tree is explored. With the dsrewrite command the results will appear in the same order as in Prolog, because both explore the derivation tree in depth. However, the srewrite command will often obtain shallower solutions first.

 
Maude> dsrew < 'p(x{1}) | 'p(x{1}) :- 'q(x{1}) ; 'p('a) :- nil ; 'q('b) :- nil >
using wsolve-simple .

Solution 1
rewrites: 82
result Configuration: solution(x{1} -> 'b)

Solution 2
rewrites: 111
result Configuration: solution(x{1} -> 'a)

No more solutions.
rewrites: 117

Maude> srew < 'p(x{1}) | 'p(x{1}) :- 'q(x{1}) ; 'p('a) :- nil ; 'q('b) :- nil >
using wsolve-simple .

Solution 1
rewrites: 105
result Configuration: solution(x{1} -> 'a)

Solution 2
rewrites: 117
result Configuration: solution(x{1} -> 'b)

No more solutions.
rewrites: 95

However, the benefit of using srewrite is that all reachable solutions are shown. In Prolog and with dsrewrite some of them may be hidden by a non-terminating branch.
 
Maude> dsrew < 'p(x{1}) | 'p(x{1}) :- 'p(x{1}) ; 'p('a) :- nil >
using wsolve-simple .

Debug(1)> abort . *** non-terminating

Maude> srew < 'p(x{1}) | 'p(x{1}) :- 'p(x{1}) ; 'p('a) :- nil >
using wsolve-simple .

Solution 1
rewrites: 109 in 0ms cpu (3ms real) (~ rewrites/second)
result Configuration: solution(x{1} -> 'a)

Debug(1)> abort . *** non-terminating

10.5.1 Negation as failure

In logic programming, the concept of negation is somewhat subtle. Facts and predicates express positive knowledge, so the alternatives are either explicitly asserting what is false, or assuming that any predicate that cannot be derived from the considered facts is false. The latter approach is known as negation as failure: the negation of a predicate holds if the predicate cannot be proved, no matter which values its variables take. This cannot be expressed with Horn clauses, but it can easily be implemented using strategies and an extra rewriting rule, added to LP-EXTRA. Negation is represented as a normal predicate named '\+, whose argument is written as a term.

 
mod LP-EXTRA+NEG is
extending LP-EXTRA .

var N : Nat . var Q : Qid .
var NeTL : NeTermList . var PL : PredicateList .
var S : Substitution . var Pr : Program .
var Conf : Configuration .

crl [negation] : < N | '\+(Q[NeTL]), PL $ S | Pr > => < N | PL $ S | Pr >
if < N | Q(NeTL) $ S | Pr > => Conf .
endm

The negation rule only removes the negation predicate from the goal list if its rewriting condition holds. By its own semantics, the negation never binds variables, so the substitution remains unchanged. The initial term of the rewriting condition contains the negated predicate as its only goal. Whether this term can be rewritten to a solution configuration determines whether the negated predicate can be satisfied. Hence, we need to control the condition with a strategy that fails whenever that happens.

 
smod PROLOG+NEG is
protecting LP-EXTRA+NEG .

strat solve-neg @ Configuration .

var Conf : Configuration .

sd solve-neg := match Conf s.t. isSolution(Conf) ? idle :
((clause | negation{not(solve-neg)}) ; solve-neg) .
endsm

The strategy is similar to the original solve-simple strategy, but the negation rule can be applied when a negated predicate is on top of the goal list. not(solve-neg) fails if solve-neg finds a solution for the negated predicate. Otherwise, it behaves like idle, triggering the rule application. Thus, it is a suitable strategy for the rewriting condition.

We can illustrate the negation feature using the family tree example. A predicate 'no-children claims that someone does not have descendants:

 
Maude> srew < no-children('erica) | family ;
no-children(x{1}) :- '\+('parent[x{1}, x{2}]) > using wsolve-neg .

Solution 1
rewrites: 887
result Configuration: solution(empty)

No more solutions.
rewrites: 887

Maude> srew < no-children('mike) | family ;
no-children(x{1}) :- '\+('parent[x{1}, x{2}]) > using wsolve-neg .

No solution.
rewrites: 894

The second query does not succeed since Mike is the father of John. Similarly, we can define an 'orphan predicate by the clause orphan(x{1}) :- '\+('parent[x{2}, x{1}]).
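As an illustration (a sketch of ours, not part of the original session), a query for this orphan predicate would follow the same pattern; since Mike is the father of John (represented, presumably, by the constant 'john), asking whether John is an orphan should yield no solution:

 
Maude> srew < orphan('john) | family ;
orphan(x{1}) :- '\+('parent[x{2}, x{1}]) > using wsolve-neg .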

10.5.2 Cuts

Prolog includes a more powerful (and more controversial) control feature called cut. Intuitively, a cut ! is a goal that always succeeds and establishes a barrier that prevents backtracking to its left. It can appear among the premises of a clause, and it triggers two effects when it is reached, both illustrated by the sketch after this list:

1. Locally to the clause, the first substitution produced to prove the predicates to the left of the cut is definitive, and it cannot be reconsidered.
2. Only the current clause can be used to prove the predicate to which the clause was applied. In other words, no other clause for this predicate can be used if the current clause fails or more solutions are required.
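For instance, consider the following query, written in the syntax of the case study and anticipating the cut-aware strategy wsolve used in the examples at the end of this section (the program is ours, not part of the original examples):

 
Maude> srew < p(x{1}) | q('a) :- nil ; q('b) :- nil ;
p(x{1}) :- q(x{1}), !, r(x{1}) ; p('c) :- nil ;
r('b) :- nil > using wsolve .

Without the cut, the query would have two solutions, binding x{1} to 'b (through q('b) and r('b)) and to 'c (through the clause p('c) :- nil). With the cut, the first clause for p commits to the binding x{1} -> 'a produced by q('a) (first effect) and discards the remaining clause p('c) :- nil (second effect); since r('a) cannot be proved, the query should produce no solution.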

Up to now, the order in which the solutions are found has been less important, especially when using the srew command. However, for cut to be well defined and to coincide with its Prolog semantics, the following order must be strictly respected:

1. Inside a clause, the premises are proven from left to right.
2. To prove each goal, the program clauses are used in order, starting from the first one.

Keeping all this in mind, the cut-aware semantics is defined by means of two strategies solve and solveOne. The first one calculates all reachable solutions in no particular order, while the second calculates only the first solution according to the order described above. The ordered trial of the program clauses is conducted by two recursive strategies clauseWalk and clauseWalkOne receiving a program as argument. Before describing them, the LP-SYNTAX module should be extended with the cut symbol.

 
fmod LP-SYNTAX+CUT is
extending LP-SYNTAX .

sorts CfPredicateList NeCfPredicateList .
subsorts Predicate < NeCfPredicateList
< CfPredicateList NePredicateList < PredicateList .

op ! : -> NePredicateList [ctor] .

mb nil : CfPredicateList .
op _,_ : NeCfPredicateList NeCfPredicateList -> NeCfPredicateList [ditto] .
op _,_ : CfPredicateList CfPredicateList -> CfPredicateList [ditto] .
op _,_ : NeCfPredicateList CfPredicateList -> NeCfPredicateList [ditto] .
op _,_ : CfPredicateList NeCfPredicateList -> NeCfPredicateList [ditto] .
op _,_ : PredicateList CfPredicateList -> PredicateList [ditto] .
op _,_ : CfPredicateList PredicateList -> PredicateList [ditto] .
endfm

The CfPredicateList sort represents cut-free predicate lists. These can be treated as usual, while clauses and goal lists containing cuts will be treated differently. The four strategies mentioned before are defined in the PROLOG+CUT module:

 
smod PROLOG+CUT is
protecting LP-EXTRA+CUT .

var Conf : Configuration .
var N : Nat .
var CfPL : CfPredicateList .
var P : Predicate .
vars Pr Pr2 : Program .
vars PL PL2 : PredicateList .
var S : Substitution .

strats solve solveOne @ Configuration .

sd solve := match Conf s.t. isSolution(Conf) or-else (
matchrew Conf s.t. < N | CfPL $ S | Pr > := Conf
by Conf using clauseWalk(Pr)
| matchrew Conf s.t. < N | CfPL, !, PL $ S | Pr > := Conf
by Conf using (cut{solveOne} ; solve)
) .

sd solveOne := match Conf s.t. isSolution(Conf) or-else (
matchrew Conf s.t. < N | CfPL $ S | Pr > := Conf
by Conf using clauseWalkOne(Pr)
| matchrew Conf s.t. < N | CfPL, !, PL $ S | Pr > := Conf
by Conf using (cut{solveOne} ; solveOne)
) .

strats clauseWalk clauseWalkOne : Program @ Configuration .

sd clauseWalk(nil) := fail .
sd clauseWalk(P :- CfPL ; Pr) := (clause[Pr2 <- Pr] ; solve) | clauseWalk(Pr) .
sd clauseWalk(P :- CfPL, !, PL ; Pr) := (clause[Pr2 <- Pr] ; cut{solveOne})
? solve : clauseWalk(Pr) .

sd clauseWalkOne(nil) := fail .
sd clauseWalkOne(P :- CfPL ; Pr) := (clause[Pr2 <- Pr] ; solveOne)
or-else clauseWalkOne(Pr) .
sd clauseWalkOne(P :- CfPL, !, PL ; Pr) := (clause[Pr2 <- Pr] ; cut{solveOne})
? solveOne : clauseWalkOne(Pr) .
endsm

As in the previous strategies, solve stops when the subject term is a solution. Otherwise, two mutually exclusive branches are available. The first branch deals with cut-free goal lists, and simply invokes clauseWalk to try the program clauses in order. The second branch deals with cuts thanks to a new conditional rule cut, defined in the LP-EXTRA+CUT module shown below.

 
mod LP-EXTRA+CUT is
extending LP-SYNTAX+CUT .
extending LP-SEMANTICS+CALL .
extending LP-EXTRA .

vars N1 N2 : Nat . var CfPL : CfPredicateList .
var PL : PredicateList . vars S1 S2 : Substitution .
var Pr : Program .

crl [cut] : < N1 | CfPL, !, PL $ S1 | Pr > => < N2 | PL $ S2 | Pr >
if < N1 | CfPL $ S1 | Pr > => < N2 | nil $ S2 | Pr > .
endm

This rule isolates the goal list to the left of the cut, which is cut-free. If a solution can be found for it, the left predicate list and the first cut are both removed from the main configuration, and the substitution is updated according to the result of the rewriting condition. Since cut disallows reconsidering the substitution that arrives at it, we are only interested in a single solution for the rewriting condition. This is achieved by controlling it with solveOne, which produces at most one solution. Notice that cuts are handled by anticipating them; this works regardless of how the strategy search tree is explored, and without any explicit mention of backtracking. However, this only ensures the local effect of the cut; the second effect is guaranteed by clauseWalk. The recursive definition of solveOne only differs from solve in that clauseWalkOne is called instead of clauseWalk. The reason why solveOne finds at most one solution lies in clauseWalkOne, which is explained below.

clauseWalk is a recursive strategy that walks through the program clauses and tries them in order. Setting Pr2 to the argument’s Pr makes the application clause[Pr2 <- Pr] deterministic and ensures that the current clause of the strategy argument is used. If the clause succeeds, the execution continues with a recursive call to solve. Whether or not all this fails, the next clause is also tried, to possibly find other solutions. clauseWalkOne differs only in that the disjunctive combinator | is replaced by an or-else. This ensures that only a single solution is found and that it is the first one according to the order described before. The second effect of cut is guaranteed by the third definition of both clauseWalkOne and clauseWalk, which are dedicated to clauses with at least one cut in their premises. First, they apply the current clause using the clause rule. If the rule can be applied, its premises are transferred to the configuration goal list, where the cut is managed by cut{solveOne} as explained above. Notice that the cut is only reached if cut{solveOne} succeeds. In that case, we continue with a recursive call to solve, ignoring the rest of the clauses in Pr for the current predicate, as the second effect of cut requires. Otherwise, we continue with the next clause, because the cut has not been triggered.
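To picture the ordered traversal, the following sketch (ours, under the assumption that the empty program is nil, as the base case clauseWalk(nil) suggests) unfolds the recursive definitions on a small cut-free program:

 
*** clauseWalk(p('a) :- nil ; p('b) :- nil) unfolds into the ordered choice
***   (clause[Pr2 <- (p('b) :- nil)] ; solve)
*** | ((clause[Pr2 <- nil] ; solve)
*** | fail)
*** so the clauses are tried from first to last, each application of clause
*** being made deterministic by fixing Pr2 to the rest of the program.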

The examples in Section 10.5.1 can also be expressed by means of cuts.

 
Maude> srew < no-children('erica) | family ;
no-children(x{1}) :- parent(x{1}, x{2}), !, fail(x{1}) ;
no-children(x{1}) :- nil > using wsolve .

Solution 1
rewrites: 789
result Configuration: solution(empty)

No more solutions.
rewrites: 789

where fail is a predicate that always fails. This usage pattern of cut imitates the typical definition of the negation predicate \+. In fact, this predicate can be defined within the language itself, provided it is completed with an extra meta-predicate named call, which converts its argument from a term to a predicate, as the negation rule implicitly did. It is implemented by means of an equation that reduces the head of the goal list:

 
fmod LP-SEMANTICS+CALL is
protecting LP-SEMANTICS .

var N : Nat . var Q : Qid .
var NeTL : NeTermList . var PL : PredicateList .
var S : Substitution . var Pr : Program .
var V : Variable .

eq [call] : < N | call(Q[NeTL]), PL $ S | Pr >
= < N | Q(NeTL), PL $ S | Pr > .

ceq [call] : < N | call(V), PL $ S | Pr >
= < N | Q(NeTL), PL $ S | Pr > if Q[NeTL] := value(V, S) .
endfm

Negation is then defined:
 
eq negation-bycut = '\+(x{1}) :- call(x{1}), !, fail(x{1}) ;
'\+(x{1}) :- nil .

This is negation as failure, because whenever the argument of the \+ predicate can be proven, the cut will be reached. Then, any other alternative proof for the negation will be discarded, in particular the use of the second clause. Since fail always fails, the negation will fail completely. Conversely, when the argument predicate does not produce any result, the cut will not be reached and the second clause will be used, thus proving the negation of the predicate.
 
Maude> srew < no-children('erica) | family ; negation-bycut ;
no-children(x{1}) :- '\+('parent[x{1}, x{2}]) > using wsolve .

Solution 1
rewrites: 1028
result Configuration: solution(empty)

No more solutions.
rewrites: 1028

Maude> srew < no-children('mike) | family ; negation-bycut ;
no-children(x{1}) :- '\+('parent[x{1}, x{2}]) > using wsolve .

No solution.
rewrites: 736