Complexity:Old

From Termination-Portal.org

Revision as of 18:16, 20 October 2008

This page records the current status of the discussion on the proposed Complexity Category of the Termination Competition.

The first installment of this event is planned for November 1, 2008.

(Discussion should take place on the termtools mailing list.)

Overview of the Event

It is a challenging topic to automatically determine upper bounds on the complexity of rewrite systems. By the complexity of a TRS we mean the maximal length of derivations, where either no restrictions are placed on the initial terms ("derivational complexity") or only constructor-based terms are considered ("runtime complexity"). See (Hirokawa, Moser, 2008) for further reading on the notion of runtime complexity. Additionally, one distinguishes between the complexity induced by full rewriting and the complexity induced by specific strategies, for example innermost rewriting. We propose four sub-categories, structured in two logical layers, "strategy" and "complexity certificate", such that both notions of complexity are tested for each of the currently considered strategies.
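
To make the notion of "maximal length of derivations" concrete, the following brute-force sketch (purely illustrative, not a competition tool) computes the derivation height of a given start term under full rewriting. It assumes the TRS is terminating and finitely branching; the term encoding (nested tuples, variables as plain strings) and all function names are our own.

<pre>
# Illustrative sketch: terms are nested tuples ("f", t1, ..., tn),
# variables are plain strings; rules are (lhs, rhs) pairs of such terms.

def is_var(t):
    return isinstance(t, str)

def match(pattern, term, subst):
    """Try to extend `subst` so that `pattern` matches `term`; None on failure."""
    if is_var(pattern):
        if pattern in subst:
            return subst if subst[pattern] == term else None
        return {**subst, pattern: term}
    if is_var(term) or pattern[0] != term[0] or len(pattern) != len(term):
        return None
    for p, t in zip(pattern[1:], term[1:]):
        subst = match(p, t, subst)
        if subst is None:
            return None
    return subst

def apply_subst(term, subst):
    if is_var(term):
        return subst[term]
    return (term[0],) + tuple(apply_subst(a, subst) for a in term[1:])

def one_step(term, rules):
    """All results of a single (full rewriting) step, at the root or below."""
    out = []
    for lhs, rhs in rules:
        s = match(lhs, term, {})
        if s is not None:
            out.append(apply_subst(rhs, s))
    if not is_var(term):
        for i in range(1, len(term)):
            for reduced in one_step(term[i], rules):
                out.append(term[:i] + (reduced,) + term[i + 1:])
    return out

def derivation_height(term, rules):
    """Maximal derivation length from `term` (assumes a terminating TRS)."""
    succs = one_step(term, rules)
    return 0 if not succs else 1 + max(derivation_height(s, rules) for s in succs)

# For R = {a(b(x)) -> b(a(x))} the derivation height of a^n(b^n(x)) is n^2,
# matching the quadratic derivational complexity expected for this system below.
R = [(("a", ("b", "x")), ("b", ("a", "x")))]
def tower(n):
    t = "x"
    for _ in range(n):
        t = ("b", t)
    for _ in range(n):
        t = ("a", t)
    return t

for n in range(1, 4):
    print(n, derivation_height(tower(n), R))   # prints: 1 1, 2 4, 3 9, i.e. n^2
</pre>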

Syntax/Semantics for Input/Output

As competition semantics, we propose to focus on polynomial bounds. The current input format should be kept as far as possible, i.e., a complexity problem file is generated from a given TRS by adding an annotation expressing one of the categories given above. This is done on the fly during the competition, to avoid multiplying entries in the database.

On the other hand, the output format is adapted so that additional information on the asymptotic complexity is given for lower as well as upper bounds. Hence the output written to the first line of STDOUT shall be a complexity statement according to the following grammar:

S -> NO | MAYBE | YES(F, F) | YES(?, F) | YES(F, ?)
F -> O(1) | O(n^Nat) | POLY

where "Nat" is a non-zero natural number and YES(F1, F2) means F2 is upper bound and that F1 is a lower-bound. "O(n^k)" is the usual big-Oh notation and "POLY" indicates an unspecified polynomial. Either of the functions F1, F2 (but not both) may be replaced by ``don't know, indicated by ?. Any remaining output on STDOUT will be considered as proof output and has to follow the normal rules for the competition.

Example: Consider R= {a(a(x)) -> b(c(x)), b(b(x)) -> a(c(x)), c(c(x)) -> a(b(x))}. Within the derivational complexity category a syntactically correct output would be "YES(O(n^2),POLY)". (Whether this output would also indicate a correct tool is another question.)

Scoring

Currently we focus on (polynomial) upper bounds. As the output format indicates, this restriction should be lifted later; see below. In order to take the quality of the upper bounds provided by the different tools into account, we propose the following scoring algorithm, where we suppose that the number of competitors is x.

First, for each TRS the competing tools are ranked, where constant complexity, i.e., the output "YES(?,O(1))", is best and "MAYBE", "NO", or a time-out is worst. As long as the output is of the form "YES(?,O(n^k))" or "YES(?,POLY)", the rank of the tool determines the number of points: the best tool gets x+1 points, the second gets x points, and so on. A negative output ("MAYBE", "NO", or a time-out) gets 0 points. If two or more tools would get the same rank, the ranks of the remaining tools are adapted in the usual way.

Second, the points over all considered systems are summed up, and the contestant with the highest total wins. If this does not establish a winner, the total number of wins is counted. If this still does not produce a winner, we give up and declare two (or more) winners.
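
For concreteness, one possible reading of this scoring scheme is sketched below. It reflects our own interpretation: upper bounds are compared by polynomial degree, "POLY" is treated as worse than any concrete degree, tied tools share a rank, and the data layout and names are purely illustrative.

<pre>
import re

def upper_bound_quality(output):
    """Smaller is better; None means the output scores no points
    (MAYBE, NO, time-out, or no upper bound given)."""
    if output is None:                                    # time-out
        return None
    out = output.strip()
    if re.fullmatch(r"YES\([^,]*,\s*O\(1\)\s*\)", out):
        return 0
    m = re.fullmatch(r"YES\([^,]*,\s*O\(n(?:\^([1-9][0-9]*))?\)\s*\)", out)
    if m:
        return int(m.group(1) or "1")                     # "O(n)" read as O(n^1)
    if re.fullmatch(r"YES\([^,]*,\s*POLY\s*\)", out):
        return 10**9          # assumption: POLY ranks below any concrete degree
    return None

def score(results):
    """results: {trs: {tool: first_line_or_None}} -> total points per tool."""
    tools = sorted({tool for per_trs in results.values() for tool in per_trs})
    x = len(tools)
    points = dict.fromkeys(tools, 0)
    for per_trs in results.values():
        quality = {t: upper_bound_quality(per_trs.get(t)) for t in tools}
        for t, q in quality.items():
            if q is None:
                continue                                  # negative output: 0 points
            rank = 1 + sum(1 for other in quality.values()
                           if other is not None and other < q)
            points[t] += (x + 1) - (rank - 1)             # best tool gets x+1 points
    return points

# Hypothetical example with three competitors on two problems:
print(score({
    "p1": {"A": "YES(?,O(n^2))", "B": "YES(?,O(n))", "C": "MAYBE"},
    "p2": {"A": "YES(?,POLY)",   "B": None,          "C": "YES(?,O(1))"},
}))   # -> {'A': 6, 'B': 4, 'C': 4}
</pre>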

The maximal allowed CPU time is 60 seconds.

Problem selection

As the testbed we propose the collection of all "standard" TRSs together with all TRSs carrying the flag "(STRATEGY INNERMOST)". TRSs in the current TPDB that differ only in this flag are considered only once. However, for the sub-category concerned with "derivational complexity" and "full rewriting", we propose to restrict our attention to non-duplicating systems. (The examples given below show that duplicating systems in conjunction with an innermost strategy need not have exponential derivational complexity.)
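
Whether a system qualifies for this restricted testbed can be checked with the standard criterion that no variable occurs more often in a right-hand side than in the corresponding left-hand side. A small sketch (same illustrative term encoding as in the sketch above):

<pre>
from collections import Counter

def var_occurrences(term):
    """Count variable occurrences; terms are nested tuples, variables are strings."""
    if isinstance(term, str):
        return Counter([term])
    counts = Counter()
    for arg in term[1:]:
        counts.update(var_occurrences(arg))
    return counts

def is_non_duplicating(rules):
    """rules: list of (lhs, rhs) pairs; True iff no rule duplicates a variable."""
    for lhs, rhs in rules:
        left, right = var_occurrences(lhs), var_occurrences(rhs)
        if any(right[v] > left[v] for v in right):
            return False
    return True

# f(x) -> c(x,x) duplicates x, so this system would be excluded from the
# derivational-complexity / full-rewriting testbed:
print(is_non_duplicating([(("f", "x"), ("c", "x", "x"))]))   # False
</pre>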

In the following test cases we restrict to full rewriting.

test cases - derivational complexity

  • R= {a -> b}, expected output "YES(O(n),O(n))" or "YES(?,O(n))" (that is, the tool output has to be closed under extension of the signature)
  • R= {a(b(x)) -> b(a(x))}, expected output "YES(?,O(n^2))" or "YES(O(n),O(n^2))" or "YES(O(n^2),O(n^2))"
  • R= {a(a(x)) -> b(c(x)), b(b(x)) -> a(c(x)), c(c(x)) -> a(b(x))}, expected output "YES(O(n^2),?)" or "YES(?,?)"
  • R= {+(s(x),+(y,z)) -> +(x,+(s(s(y)),z)), +(s(x),+(y,+(z,w))) -> +(x,+(z,+(y,w)))}, expected output "YES(?,?)"

test cases - runtime complexity

  • R= {a(b(x)) -> b(b(a(x)))}, expected output "YES(?,O(n))" or "YES(O(n),O(n))"
  • R= {plus(0,y) -> y, plus(s(x),y) -> s(plus(x,y)), mul(0,y) -> 0, mul(s(x),y) -> plus(mul(x,y),y)}, expected output "YES(?,O(n^2))" or "YES(O(n),O(n^2))" or "YES(O(n^2),O(n^2))"
  • R= {f(x,0) -> s(0), f(s(x),s(y)) -> s(f(x,y)), g(0,x) -> g(f(x,x),x)}, expected output "YES(?,O(n))" or "YES(O(n),O(n))"
  • R= {f(0) -> c, f(s(x)) -> c(f(x),f(x))}, expected output "YES(?,?)"

In the following test cases we restrict to innermost rewriting.

test cases - derivational complexity

  • R= {f(x) -> c(x,x)}, expected output "YES(O(n),O(n))" or "YES(?,O(n))"

test cases - runtime complexity

  • R= {f(x) -> c(x,x), g(0) -> 0, g(s(x)) -> f(g(x))}, expected output "YES(O(n),O(n))" or "YES(?,O(n))"

Wishlist

  • assessment of lower bounds:

In the future the tools should also be able to provide certificates for the lower bound. This would imply extending the grammar as follows:

F -> O(1) | O(n^Nat) | POLY | EXP | INF

such that, e.g., "YES(EXP,?)" indicates an exponential lower bound and "YES(INF,INF)" indicates non-termination.

  • as for the upper bound, the lower-bound certificate should be ranked, and both ranks could be compared lexicographically

Questions

  • the precise format for the subcategories needs to be fixed; JW suggests:

(START-TERMS CONSTRUCTOR-BASED) (VAR x) (RULES a(b(x)) -> b(a(x)))

to indicate runtime complexity and full rewriting, GM suggests

(VAR x) (RULES a(b(x)) -> b(a(x))) (COMPLEXITY RUNTIME)

for the same thing.

  • JW would prefer the following output format as it is easier to parse:

F -> POLY(Nat) | POLY(?)

Here "POLY(k)" abbreviates "O(n^k)" and "POLY(?)" denotes an unspecified polynomial.

Participants

Insert your name here if you intend to participate. The sources of all tools that want to participate in the competition have to be publicly available.

  • Johannes Waldmann (Matchbox), but will need more time (December 2008)
  • M. Avanzini, G. Moser, A. Schnabl (TCT)