Complexity
This page is outdated and no longer maintained.
An up-to-date version of the content of this page can now be found here: [http://cl-informatik.uibk.ac.at/users/georg/cbr/competition complexity competition].
This page provides general information on the complexity category; (potential) tool authors or others who want to join the discussion on various aspects of the category can also go directly to the discussion and rules page; aficionados may be interested in the techniques page.
Introduction
Automatically determining upper and lower bounds on the complexity of term rewrite systems (TRSs for short) is a challenging topic. Here the complexity of a TRS basically refers to the maximal length of derivations, measured in the size of the initial term. Still, depending on the rewrite strategy and the restrictions on the set of allowed starting terms, various notions of complexity exist. The current focus of the competition is on providing polynomial upper bounds on the complexity of TRSs. Presumably the competition will be extended to lower bounds soon.
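As a rough sketch of the standard definitions from the literature cited below (assuming a terminating TRS R, so that the maxima exist), the maximal derivation length is captured by the derivation height of a term, and the derivational complexity function takes its maximum over all starting terms of bounded size:

$\mathrm{dh}(t, \to_R) = \max\{\, n \mid \exists s.\ t \to_R^n s \,\}$, and $\mathrm{dc}_R(n) = \max\{\, \mathrm{dh}(t, \to_R) \mid |t| \le n \,\}$,

where $|t|$ denotes the size of the term $t$.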
Overview of the Complexity Category
Currently, depending on the considered set of starting terms and the rewrite strategy, the competition is divided into four categories:
Strategy | Derivational Complexity | Runtime Complexity |
---|---|---|
Full Rewriting | DC | RC |
Innermost Rewriting | iDC | iRC |
Derivational complexity and runtime complexity differ in that, for derivational complexity, no restriction is put on the set of considered starting terms, whereas for runtime complexity only basic terms (i.e. terms where the arguments are formed from constructor symbols) are taken into account. See for example (Hirokawa, Moser, 2008) and (Moser, 2009) for further reading on the notion of runtime complexity.
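Analogously to the derivational complexity function sketched above, the runtime complexity function takes the maximum over basic starting terms only:

$\mathrm{rc}_R(n) = \max\{\, \mathrm{dh}(t, \to_R) \mid t \text{ basic},\ |t| \le n \,\}.$

As a small illustration of why the two notions can differ drastically (the well-known doubling system, used here only as a sketch and not taken from the TPDB): for $R = \{ \mathsf{d}(\mathsf{0}) \to \mathsf{0},\ \mathsf{d}(\mathsf{s}(x)) \to \mathsf{s}(\mathsf{s}(\mathsf{d}(x))) \}$, every derivation starting from a basic term $\mathsf{d}(\mathsf{s}^n(\mathsf{0}))$ has length $n+1$, so the runtime complexity of $R$ is linear, whereas derivations starting from the non-basic terms $\mathsf{d}^n(\mathsf{s}(\mathsf{0}))$ have length exponential in $n$, so the derivational complexity of $R$ is exponential.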
Furthermore, we distinguish between the complexity induced by full rewriting and the complexity induced by innermost rewriting.
Overview of the Competition
The complexity category has been part of the termination competition since 2008. The competition is currently hosted at [1], where you can find the results of the 2008 and 2009 competitions.
Important Dates
Event | Deadline/Date |
---|---|
Tool Submission | July 14, 2010 |
Problem Submission | July 2, 2010 (but see email to termtools) |
Competition | July 16, 2010 |
Participate
If you want to participate in the competition, go to the Termination Competition Portal, create an account and add your tool. Documentation concerning this process can be found here.
Kindly note that in order to participate, your tool needs to be open source.
Participating Teams of Past and Upcoming Competitions
Here you can find a list (sorted by tool name) of the participants of past competitions and the prospective participants of the upcoming competition.
Tool | Internal Page | Authors | 2008 | 2009 | 2010 |
---|---|---|---|---|---|
Matchbox | Tools:Matchbox | J. Waldmann | --- | x | x |
TCT | Tools:TCT | M. Avanzini, G. Moser, A. Schnabl | x | x | x |
CaT | Tools:CaT | M. Korp, C. Sternagel, H. Zankl | x | x | x |
AProVE | Tools:AProVE | The AProVE Team | --- | --- | x |
Oops | Tools:Oops | Fabian Emmes, Lars Noschinski | --- | --- | x |
An overview of the proof techniques applied by the participants in past competitions can be found here.
Past Winners
In 2008, CaT was the winner of both derivational complexity categories, whereas for runtime complexity only TCT participated. In 2009, all categories were won by CaT. Congratulations!
For detailed information on the test suites employed and the actual results of the tools, see the termcomp pages.
Overview of the Competition Rules
Problems are given in the new TPDB format (see [2]) and are selected randomly from the newest version of the Termination Problem Database.
In the selection process, emphasis is put on selecting problems that could not be automatically verified in past competitions; cf. the problem selection page for details. Tools are ranked not only by the number of successful runs; the scoring also takes the precision of the obtained bounds into account. See the scoring page for further details.
Unfortunately, the database is currently somewhat biased towards termination problems. We encourage everybody to submit new problems to the TPDB that are interesting from a complexity-related perspective. (When submitting, please indicate that these examples should go into the complexity benchmark.)