Complexity:Old
This page is to record the current status of discussion on the proposed Complexity Category of the Termination Competition.
The first installment of this event is planned for November 1, 2008.
(Discussion should take place on the termtools mailing list.)
Overview of the Event
It is a challenging topic to automatically determine upper bounds on the complexity of rewrite systems. By the complexity of a TRS we mean the maximal length of derivations, where either no restrictions are placed on the initial terms ("derivational complexity") or only constructor-based terms are considered ("runtime complexity"). See (Hirokawa, Moser, 2008) for further reading on the notion of runtime complexity. Additionally, one distinguishes between complexities induced by full rewriting and complexities induced by specific strategies, such as innermost rewriting. We propose four sub-categories:
- Derivational Complexity (DC),
- innermost Derivational Complexity (iDC),
- Runtime Complexity (RC), and
- innermost Runtime Complexity (iRC)
Syntax/Semantics for Input/Output
As competition semantics, we propose to focus on polynomial bounds.
Input Format
Problems will be given in the new TPDB format, cf. [1], where the XML element problem will have the type complexity. Further, depending on the category (DC, iDC, RC or iRC), the attributes strategy and startterm will be set to FULL/INNERMOST and full/constructor-based, respectively. In particular, this allows the upload of one single tool for all categories the authors want to participate in.
Output Format
The output format is adapted so that additional information on the asymptotic complexity is given for lower as well as upper bounds. Hence the output written to the first line of STDOUT shall be a complexity statement according to the following grammar:
S -> NO | MAYBE | YES(F, F) | YES(?, F) | YES(F, ?)
F -> O(1) | O(n^Nat) | POLY
where "Nat" is a non-zero natural number and YES(F1, F2) means F2 is upper bound and that F1 is a lower-bound. "O(n^k)" is the usual big-Oh notation and "POLY" indicates an unspecified polynomial. Either of the functions F1, F2 (but not both) may be replaced by ``don't know, indicated by ?. Any remaining output on STDOUT will be considered as proof output and has to follow the normal rules for the competition.
Example: Consider R = {a(a(x)) -> b(c(x)), b(b(x)) -> a(c(x)), c(c(x)) -> a(b(x))}. Within the derivational complexity category a syntactically correct output would be "YES(O(n^2),POLY)". (Whether this output would also indicate a correct tool is another question.)
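To illustrate the intended answer syntax, here is a minimal sketch in Python of how a competition script might check the first output line against the grammar above; the function name is ours and not part of the proposal. It follows the grammar literally; note that some test cases further down also use "YES(?,?)", which the grammar as written does not generate.

import re

# F -> O(1) | O(n^Nat) | POLY, where Nat is a non-zero natural number
_F = r"(?:O\(1\)|O\(n\^[1-9][0-9]*\)|POLY)"
# S -> NO | MAYBE | YES(F, F) | YES(?, F) | YES(F, ?)
_S = re.compile(
    r"^\s*(?:NO|MAYBE|YES\(\s*(?:{f}\s*,\s*{f}|\?\s*,\s*{f}|{f}\s*,\s*\?)\s*\))\s*$".format(f=_F)
)

def is_wellformed_answer(first_line):
    """Check whether the first STDOUT line matches the proposed output grammar."""
    return _S.match(first_line) is not None

# Examples taken from this page:
assert is_wellformed_answer("YES(O(n^2),POLY)")
assert is_wellformed_answer("YES(?,O(n^1))")
assert is_wellformed_answer("MAYBE")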
Scoring
Currently we focus on (polynomial) upper bounds. As the output format indicates, this restriction should be lifted later, see below. In order to take into account the quality of the upper bound provided by the different tools, we propose the following scoring algorithm, where we suppose the number of competitors is x.
Firstly, for each TRS the competing tools are ranked, where constant complexity, i.e., the output "YES(?,O(1))", is best and "MAYBE", "NO" or a time-out is worst. As long as the output is of the form "YES(?,O(n^k))" or "YES(?,POLY)", the rank of the tool defines the number of points: the best tool gets x+1 points, the second gets x points, and so on. A negative output ("MAYBE", "NO" or a time-out) gets 0 points. If two or more tools obtain the same rank, the ranks of the remaining tools are adapted in the usual way.
Secondly, the resulting points for all considered systems are summed up and the contestant with the highest number of points wins. If this does not establish a winner, the total number of wins is counted. If this still does not produce a winner, we give up and declare two (or more) winners.
The maximal allowed CPU time is 60 seconds.
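The following sketch in Python shows one way the per-TRS ranking could be computed; the function names, the treatment of answers without an upper bound, and the tie handling are our reading of the rule above, not a fixed specification. Tools are ordered by the claimed upper bound, tied tools share the better rank, and negative answers score 0.

import re
from typing import Dict

# Map an answer "YES(_, U)" to a comparable key for the claimed upper bound U:
# O(1) is best, then O(n^1) < O(n^2) < ..., and POLY is treated as the weakest
# positive answer. Negative answers (MAYBE, NO, time-out) yield no key.
_UPPER = re.compile(r"YES\(\s*[^,]*,\s*(O\(1\)|O\(n\^([1-9][0-9]*)\)|POLY)\s*\)")

def upper_bound_key(answer):
    m = _UPPER.match(answer)
    if m is None:
        return None                       # MAYBE, NO, time-out, or no upper bound (e.g. "YES(O(n^2),?)")
    if m.group(1) == "O(1)":
        return 0
    if m.group(1) == "POLY":
        return float("inf")               # unspecified polynomial
    return int(m.group(2))                # exponent k of O(n^k)

def score_trs(answers: Dict[str, str]) -> Dict[str, int]:
    """Points per tool for one TRS; `answers` maps tool name to its first output line."""
    x = len(answers)                      # number of competitors
    keys = {tool: upper_bound_key(a) for tool, a in answers.items()}
    positive = sorted(k for k in keys.values() if k is not None)
    points = {}
    for tool, k in keys.items():
        if k is None:
            points[tool] = 0              # negative output gets 0 points
        else:
            rank = positive.index(k) + 1  # tied tools share the better rank
            points[tool] = x + 2 - rank   # best tool gets x+1 points, second x, ...
    return points

# Example with three competitors:
# score_trs({"A": "YES(?,O(n^2))", "B": "YES(?,POLY)", "C": "MAYBE"})
# -> {"A": 4, "B": 3, "C": 0}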
Problem selection
We propose to run each complexity sub-category on all TRS and SRS families from the newly organised TPDB, after the selection function defined below has been applied.
Selection function
In the following, we denote by select the function that maps each family from the TPDB to the number of randomly chosen examples within this family, as defined for the termination competition. The idea is to make select aware of the varying difficulty of proving complexity bounds. We do so by
- partitioning each family F into n different sets F = F_1 \cup ... \cup F_n, where the sets F_i may be seen as collections of TRSs of similar difficulty. For this year's competition we propose the following partitioning of a family F:
- subcategories RC, iRC and iDC: we propose to partition each family into
- (i) those upon which a polynomial bound could be shown automatically in last years competition (denoted by F_auto below) and
- (ii) those where a polynomial bound could not be shown (F_nonauto).
- subcategory DC: as above, but we split (ii) into duplicating TRSs (F_duplicating) and non-duplicating TRSs (note that any TRS from (i) is non-duplicating)
- In accordance with the partitioning described above, we define a probability distribution p on F such that p(F_1) + ... + p(F_n) = 1. For this year's competition we propose the following distribution:
- for all subcategories and families F, we propose p(F_auto) = 0.4 and p(F_nonauto) = 0.6 (for the category DC, we additionally set p(F_duplicating) = 0.0). That is, we want to consider 40% of examples that could be solved automatically in last year's competition and 60% of examples that could not be solved automatically. Additionally, for DC we want to exclude duplicating TRSs, as those admit exponential derivational complexity. Based on the probability distribution p we define the extended selection function select_comp(F,i) = min(|F_i|, p(F_i) * select(F)), where |F_i| denotes the size of F_i (see the sketch after this list).
- From each partition F_i of a family F, we randomly select select_comp(F,i) examples.
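The sketch below (Python) spells out the intended selection for one family; the data layout, the function name and the rounding of fractional values are our assumptions and are not fixed by this page.

import random
from typing import Dict, List

def select_comp_family(partitions: Dict[str, List[str]], p: Dict[str, float], budget: int) -> List[str]:
    """Select examples from one family F that is partitioned into sets F_i.

    `partitions` maps a partition name (e.g. "auto", "nonauto", "duplicating") to
    its problems, `p` gives p(F_i) for each partition, and `budget` is select(F),
    the number of examples drawn from F for the termination competition.
    """
    chosen = []
    for name, problems in partitions.items():
        # select_comp(F,i) = min(|F_i|, p(F_i) * select(F)); fractional values are
        # rounded down here, an assumption this page does not fix.
        n = min(len(problems), int(p.get(name, 0.0) * budget))
        chosen.extend(random.sample(problems, n))
    return chosen

# Example for the DC subcategory with p(F_auto) = 0.4, p(F_nonauto) = 0.6,
# p(F_duplicating) = 0.0 and select(F) = 10:
# select_comp_family({"auto": auto, "nonauto": nonauto, "duplicating": dup},
#                    {"auto": 0.4, "nonauto": 0.6, "duplicating": 0.0}, budget=10)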
Test Cases
In the following test cases we restrict to full rewriting.
test cases - derivational complexity
- R = {a(b(x)) -> b(a(x))}, expected output "YES(?,O(n^2))" or "YES(O(n^1),O(n^2))" or "YES(O(n^2),O(n^2))"
- R = {a(a(x)) -> b(c(x)), b(b(x)) -> a(c(x)), c(c(x)) -> a(b(x))}, expected output "YES(O(n^2),?)" or "YES(?,?)"
- R = {+(s(x),+(y,z)) -> +(x,+(s(s(y)),z)), +(s(x),+(y,+(z,w))) -> +(x,+(z,+(y,w)))}, expected output "YES(?,?)"
test cases - runtime complexity
- R = {a(b(x)) -> b(b(a(x)))}, expected output "YES(?,O(n^1))" or "YES(O(n^1),O(n^1))"
- R = {plus(0,y) -> y, plus(s(x),y) -> s(plus(x,y)), mul(0,y) -> 0, mul(s(x),y) -> plus(mul(x,y),y)}, expected output "YES(?,O(n^2))" or "YES(O(n^1),O(n^2))" or "YES(O(n^2),O(n^2))"
- R = {f(x,0) -> s(0), f(s(x),s(y)) -> s(f(x,y)), g(0,x) -> g(f(x,x),x)}, expected output "YES(?,O(n^1))" or "YES(O(n^1),O(n^1))"
- R = {f(0) -> c, f(s(x)) -> c(f(x),f(x))}, expected output "YES(?,?)"
In the following test cases we restrict to innermost rewriting.
test cases - derivational complexity
R = {f(x) -> c(x,x)}, expected output "YES(O(n^1),O(n^1))" or "YES(?,O(n^1))"
test cases - runtime complexity
R= {f(x) -> c(x,x), g(0) -> 0, g(s(x)) -> f(g(x))}, expected output "YES(O(n^1),O(n^1))" or "YES(?,O(n^1))"
Wishlist
- assessment of lower bounds:
In the future the tools should also be able to provide certificates for the lower bound. This would require extending the grammar as follows:
F -> O(1) | O(n^Nat) | POLY | EXP | INF
such that, e.g., "YES(EXP,?)" would indicate an exponential lower bound and "YES(INF,INF)" would indicate non-termination.
- as for the upper bound, the lower bound certificate should be ranked, and both ranks could be compared lexicographically (see the sketch after this list)
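A minimal sketch of how this extension could look, building on the well-formedness check in the Output Format section above; again the names are ours and the combination of ranks is only one possible reading.

# Extended grammar: F -> O(1) | O(n^Nat) | POLY | EXP | INF
_F_EXT = r"(?:O\(1\)|O\(n\^[1-9][0-9]*\)|POLY|EXP|INF)"

# One reading of the lexicographic comparison: each tool's result on a TRS becomes
# a pair (upper_bound_rank, lower_bound_rank); tuple comparison orders such pairs
# lexicographically, so the upper bound decides first and the lower bound
# certificate only breaks ties.
def overall_order(results):
    """Sort (tool, upper_rank, lower_rank) triples, best first."""
    return sorted(results, key=lambda r: (r[1], r[2]))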
Questions
- JW would prefer the following output format as it is easier to parse:
F -> POLY(Nat) | POLY(?)
Here "POLY(k)" abbreviates "O(n^k)" and "POLY(?)" denotes an unspecified
polynomial.
resolved
Participants
Insert your name here if you intend to participate. The sources of all tools that want to participate in the competition have to be publicly available.
- Johannes Waldmann (Matchbox), but will need more time (December 2008)
- M. Avanzini, G. Moser, A. Schnabl (TCT)
- N. Hirokawa (Hydra), but might need more time
- M. Korp, C. Sternagel, H. Zankl (CaT)