Complexity

From Termination-Portal.org

<pre>
This page is outdated and no longer maintained.
</pre>

An up-to-date version of the content of this page
can now be found here: [http://cl-informatik.uibk.ac.at/users/georg/cbr/competition complexity competition].

----

This page provides general information on the complexity category.
(Potential) tool authors and others who want to join the discussion
on various aspects of the category can go directly to the
discussion and rules page; aficionados
may be interested in the techniques page.

== Introduction ==

Automatically determining upper and lower bounds on the complexity
of term rewrite systems (TRSs for short) is a challenging topic.
Here the complexity of a TRS refers to the maximal length of derivations,
measured in the size of the initial terms.
Depending on the rewrite strategy and the restrictions on the
set of allowed starting terms, various notions of complexity exist.
The current focus of the competition is on establishing
<em>polynomial upper bounds</em> on the complexity of TRSs.
Presumably the competition will be extended to lower bounds soon.

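As a concrete illustration (a sketch for this page, not part of the competition infrastructure): consider the TRS with the rules add(0,y) → y and add(s(x),y) → s(add(x,y)). Every derivation starting from add(s^n(0), s^m(0)) has length n+1, so the runtime complexity of this system is linear. The following minimal rewriter counts derivation steps under the innermost strategy; all names are illustrative:

```python
# Illustrative sketch only: a tiny rewriter for the TRS
#   add(0, y)    -> y
#   add(s(x), y) -> s(add(x, y))
# Terms are nested tuples: ("0",), ("s", t), ("add", t1, t2).

def num(n):
    """Build the numeral s^n(0)."""
    t = ("0",)
    for _ in range(n):
        t = ("s", t)
    return t

def rewrite(term):
    """Perform one innermost rewrite step; return (new_term, changed)."""
    if term[0] == "add":
        x, y = term[1], term[2]
        # rewrite the arguments first (innermost strategy)
        nx, done = rewrite(x)
        if done:
            return ("add", nx, y), True
        ny, done = rewrite(y)
        if done:
            return ("add", x, ny), True
        if x == ("0",):                 # add(0, y) -> y
            return y, True
        if x[0] == "s":                 # add(s(x), y) -> s(add(x, y))
            return ("s", ("add", x[1], y)), True
    elif term[0] == "s":
        nx, done = rewrite(term[1])
        if done:
            return ("s", nx), True
    return term, False

def derivation_length(term):
    """Count rewrite steps until a normal form is reached."""
    steps = 0
    while True:
        term, changed = rewrite(term)
        if not changed:
            return steps
        steps += 1

# add(s^n(0), s^m(0)) normalizes in exactly n + 1 steps:
# linear runtime complexity.
```

Since the rules are non-duplicating and the system is orthogonal, the derivation length from such start terms is the same under full rewriting, so the strategy makes no difference for this particular system.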
== Overview of the Complexity Category ==
Currently, depending on the considered set of starting terms and the strategy,
the competition is divided into four categories:

{| class="wikitable center"
|-
!  !! Derivational Complexity !! Runtime Complexity
|-
! style="text-align:left"|Full Rewriting
| DC
| RC
|-
! style="text-align:left"|Innermost Rewriting
| iDC
| iRC
|}

Derivational complexity and runtime complexity differ in that for
derivational complexity, no restriction is put on the set of considered starting terms,
whereas for runtime complexity only basic terms (i.e. terms whose arguments
are formed from constructor symbols) are taken into account.
See for example [http://dx.doi.org/10.1007/978-3-540-71070-7_32 (Hirokawa, Moser, 2008)]
and [http://arxiv.org/abs/0907.5527 (Moser, 2009)]
for further reading on the notion of runtime complexity.

Furthermore, we distinguish between complexities induced by full rewriting
and complexities induced by innermost rewriting.

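The gap between the two measures can be drastic. For the duplicating TRS with rules d(0) → 0 and d(s(x)) → s(s(d(x))), the basic start terms d(s^n(0)) normalize in n+1 steps, so its runtime complexity is linear; but the non-basic start terms d(d(...d(s(0))...)) with k nested d's need 2^k + k - 1 steps, exponential in the size of the start term, so its derivational complexity is exponential. A small sketch (illustrative code, not a competition tool) that exhibits both measures:

```python
# Illustrative sketch only: the duplicating TRS
#   d(0)    -> 0
#   d(s(x)) -> s(s(d(x)))
# has linear runtime complexity (basic start terms d(s^n(0)))
# but exponential derivational complexity (start terms may nest d).
# Terms are nested tuples: ("0",), ("s", t), ("d", t).

def num(n):
    """Build the numeral s^n(0)."""
    t = ("0",)
    for _ in range(n):
        t = ("s", t)
    return t

def nest_d(k):
    """Non-basic start term d(d(...d(s(0))...)) with k nested d's."""
    t = num(1)
    for _ in range(k):
        t = ("d", t)
    return t

def step(term):
    """One innermost rewrite step; returns (new_term, changed)."""
    if term[0] in ("d", "s"):
        inner, changed = step(term[1])   # rewrite the argument first
        if changed:
            return (term[0], inner), True
        if term[0] == "d":
            if term[1] == ("0",):        # d(0) -> 0
                return ("0",), True
            if term[1][0] == "s":        # d(s(x)) -> s(s(d(x)))
                return ("s", ("s", ("d", term[1][1]))), True
    return term, False

def derivation_length(term):
    """Count rewrite steps until a normal form is reached."""
    steps = 0
    while True:
        term, changed = step(term)
        if not changed:
            return steps
        steps += 1

# Basic term d(s^n(0)):  n + 1 steps      (linear in start term size).
# Nested term d^k(s(0)): 2^k + k - 1 steps (exponential in start term size).
```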
== Overview of the Competition ==
The complexity category has been part of the termination competition since 2008.
The competition is currently hosted at [http://termcomp.uibk.ac.at/termcomp/home.seam termcomp],
where you can find the results of the competitions from
[http://termcomp.uibk.ac.at/termcomp/competition/competitionSummary.seam?comp=15991 2008]
and [http://termcomp.uibk.ac.at/termcomp/competition/competitionSummary.seam?comp=101722 2009].

=== Important Dates ===
{| class="wikitable center"
|-
! !! Deadline/Date
|-
! style="text-align:left"|Tool Submission
| July 14, 2010
|-
! style="text-align:left"|Problem Submission
| July 2, 2010 (but see email to termtools)
|-
! style="text-align:left"|Competition
| July 16, 2010
|}

=== Participate ===
If you want to participate in the competition,
go to the [http://termcomp.uibk.ac.at/ Termination Competition Portal], create an account, and add your tool.
Documentation concerning this process can be found
[http://termcomp.uibk.ac.at/termcomp/help/register.seam?cid=8144 here].

Kindly note that in order to participate, your tool needs to be
[http://www.opensource.org/ <em>open source</em>].

==== Participating Teams of Past and Upcoming Competitions ====
Here you can find a list (sorted by tool name) of the participants of past competitions
and the expected participants of the upcoming competition.

{| class="wikitable center" border="0" style="text-align:center"
|-
! Tool !! Internal Page !! style="text-align:left"|Authors !! 2008 !! 2009 !! 2010
|-
! [http://www.imn.htwk-leipzig.de/~waldmann/ Matchbox]
| [[Tools:Matchbox]]
| J. Waldmann
| ---
| x
| x
|-
! [http://cl-informatik.uibk.ac.at/software/tct TCT]
| [[Tools:TCT]]
| M. Avanzini, G. Moser, A. Schnabl
| x
| x
| x
|-
! [http://cl-informatik.uibk.ac.at/software/ttt2 CaT]
| [[Tools:CaT]]
| M. Korp, C. Sternagel, H. Zankl
| x
| x
| x
|-
! [http://aprove.informatik.rwth-aachen.de/ AProVE]
| [[Tools:AProVE]]
| The AProVE Team
| ---
| ---
| x
|-
! Oops
| [[Tools:Oops]]
| F. Emmes, L. Noschinski
| ---
| ---
| x
|}

An overview of the proof techniques applied by the participants in past competitions can be found [[Complexity_Techniques|here]].

== Past Winners ==
In 2008, [[Tools:CaT|CaT]] won both derivational complexity categories, whereas
in the runtime complexity categories only [[Tools:TCT|TCT]] participated.
In 2009, all categories were won by [[Tools:CaT|CaT]]. Congratulations!

For detailed information on the test suites employed and the actual results of the
tools, see the [http://termcomp.uibk.ac.at termcomp] pages.

== Overview of the Competition Rules ==
Problems are given in the new TPDB format, see the
[http://www.termination-portal.org/wiki/XTC_Format_Specification XTC Format Specification], and
are selected randomly from the newest version of the Termination Problem Database.

In the selection process, emphasis is put on selecting problems that
could not be automatically verified in past competitions; cf. the page on [[Complexity:Rules#Problem Sets and Problem Selection|problem selection]]
for details.
Tools are ranked not only by the number of successful runs; the scoring
also takes the preciseness of the bounds into account. See the page on [[Complexity:Rules#scoring|scoring]] for further details.

Unfortunately, the database is currently somewhat biased towards termination problems.
We encourage everybody to submit new problems to the TPDB that are interesting from a
complexity-related perspective. (In your submission, please indicate that these examples should go into the complexity benchmark.)

[[Category:Categories]]

Latest revision as of 11:01, 28 June 2012
