Termination Competition

From Termination-Portal.org
Revision as of 12:33, 8 March 2026 by JCKassing (talk | contribs) (Added explanations for each of the category types)

Annual International Termination Competition

During the 1990s, a number of new, powerful termination methods were developed. Thus, at the beginning of the millennium, many research groups started to develop tools for fully automated termination analysis.

After a tool demonstration at the Termination Workshop 2003 (Valencia), the community decided to establish an annual termination competition and to collect benchmarks, in order to spur the development of tools and new termination techniques.

Upcoming Competitions

Organization

Questions and suggestions regarding the competition should go to the termtools mailing list. Discussion is open and happens primarily on the list. Decisions are made by votes of the Termination Competition Steering Committee.

From 2004 to 2007, the competition organizer was Claude Marché, Paris. From 2008 to 2013, the competition was run by René Thiemann, Innsbruck. From 2014 to 2017, the competition organizer was Johannes Waldmann; jobs were run on the StarExec platform at the University of Iowa. From 2018 to 2023, the organizer was Akihisa Yamada. Since 2024, the organizer is Florian Frohn.

Competition Categories

Currently, the competition features the following categories. Since 2007, some of the categories also have certified counterparts, in which an additional certifier checks the output of the tools. Categories that were used in the past but not included in the three most recent competitions are marked as such.

Termination of Rewriting

These categories consider the termination of rewrite systems, a foundational computational model used to represent symbolic computation and program transformations. The goal is to automatically prove that no infinite rewrite sequences are possible for the given system. Different categories capture variations of rewriting such as relative rewriting, context-sensitive rewriting, conditional rules, or restrictions on the rewriting strategy (e.g., innermost or outermost rewriting).
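As a toy illustration of the underlying model (not the competition's actual input format, which is defined by the Termination Problems Data Base), the following sketch encodes the classic terminating rewrite system for Peano addition in Python, with terms as nested tuples. Every rewrite step shrinks the first argument of some `plus`, so every rewrite sequence is finite; `normalize` therefore always reaches a normal form. All function names here are illustrative inventions.

```python
# Toy rewrite system (terminating):
#   plus(0, y)    -> y
#   plus(s(x), y) -> s(plus(x, y))
# Terms are nested tuples: ("0",), ("s", t), ("plus", t1, t2).

def s(t):
    return ("s", t)

def plus(a, b):
    return ("plus", a, b)

ZERO = ("0",)

def rewrite_once(t):
    """Apply one rule at the leftmost-outermost redex.
    Returns (new_term, True), or (t, False) if t is in normal form."""
    if t[0] == "plus":
        x, y = t[1], t[2]
        if x == ZERO:                       # rule 1
            return y, True
        if x[0] == "s":                     # rule 2
            return s(plus(x[1], y)), True
        nx, done = rewrite_once(x)          # otherwise rewrite inside arguments
        if done:
            return plus(nx, y), True
        ny, done = rewrite_once(y)
        if done:
            return plus(x, ny), True
    elif t[0] == "s":
        nt, done = rewrite_once(t[1])
        if done:
            return s(nt), True
    return t, False

def normalize(t):
    """Rewrite to normal form, counting steps; terminates for this system."""
    steps, changed = 0, True
    while changed:
        t, changed = rewrite_once(t)
        steps += changed
    return t, steps

def church(n):
    """Build the numeral s(s(...s(0)...)) with n successors."""
    t = ZERO
    for _ in range(n):
        t = s(t)
    return t

term, steps = normalize(plus(church(2), church(3)))
# term is the numeral for 5, reached in 3 rewrite steps
```

A termination tool's job is to prove, fully automatically, that loops like the one in `normalize` cannot run forever for any start term of the given system.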

Termination of Probabilistic Rewriting

These categories address probabilistic rewrite systems, where rewrite rules are applied according to probability distributions. Instead of classical termination, the goal is to prove almost-sure termination, i.e., that infinite executions occur with probability zero, or strong almost-sure termination, i.e., that the expected runtime is finite. This lifts termination analysis to models that capture randomized algorithms or stochastic behavior.

Currently, there are only categories regarding probabilistic rewrite systems, but an extension to probabilistic imperative programs may be possible in future competitions.
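To give a flavor of almost-sure termination, here is a small Monte Carlo sketch (illustrative only; not a competition tool or input format). It views a biased random walk as a probabilistic rewrite rule on numerals s^n(0), tracked by the counter n: the counter decreases with probability 2/3 and increases with probability 1/3. The downward bias makes the system almost surely terminating with finite expected runtime, roughly 3n steps from s^n(0).

```python
import random

# Probabilistic rewrite rule on s^n(0), encoded by the counter n:
#   with probability 2/3:  s(x) -> x        (counter decreases)
#   with probability 1/3:  s(x) -> s(s(x))  (counter increases)

def run(n, rng):
    """Simulate one rewrite sequence from s^n(0); return its length."""
    steps = 0
    while n > 0:
        if rng.random() < 2 / 3:
            n -= 1
        else:
            n += 1
        steps += 1
    return steps

rng = random.Random(0)
trials = [run(5, rng) for _ in range(10_000)]
mean = sum(trials) / len(trials)
# empirical mean is close to 3 * 5 = 15
```

Individual runs can be arbitrarily long, so the system is not terminating in the classical sense; a probabilistic termination tool must instead prove that such runs have probability zero and that the expected length is finite.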

Termination of Programs

These categories focus on proving termination of programs written in programming languages used in practice. The categories differ in the source programming language or program model.

Complexity of Rewriting

These categories evaluate tools that automatically analyze the asymptotic complexity of rewrite systems. Instead of only proving termination, the goal is to derive upper bounds on the length of rewrite sequences, typically expressed as functions of the input size. Different categories measure different runtime complexities under various rewriting strategies and different restrictions on the initial start term.
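One classical technique behind such bounds is a polynomial interpretation. The sketch below (illustrative only, using the standard Peano-addition system rather than any competition benchmark) interprets each function symbol as a polynomial over the naturals so that every rule strictly decreases the interpreted value; the value of the start term is then an upper bound on the length of any rewrite sequence, here a linear bound.

```python
# Polynomial interpretation bounding derivation length for
#   plus(0, y)    -> y
#   plus(s(x), y) -> s(plus(x, y))
# Interpret:  [0] = 0,  [s](x) = x + 1,  [plus](x, y) = 2*x + y + 1.

def value(t):
    """Interpreted value of a term (tuple encoding as above)."""
    if t[0] == "0":
        return 0
    if t[0] == "s":
        return value(t[1]) + 1
    if t[0] == "plus":
        return 2 * value(t[1]) + value(t[2]) + 1
    raise ValueError(t)

# Strict decrease on both rules, checked on sample argument values:
#   rule 1: [plus(0, y)]    = y + 1          > y              = [y]
#   rule 2: [plus(s(x), y)] = 2*x + y + 3    > 2*x + y + 2    = [s(plus(x, y))]
for x in range(5):
    for y in range(5):
        assert 2 * 0 + y + 1 > y                          # rule 1
        assert 2 * (x + 1) + y + 1 > (2 * x + y + 1) + 1  # rule 2

# Start term plus(s(s(0)), s(s(s(0)))): bound 2*2 + 3 + 1 = 8 steps,
# while the actual derivation needs only 3 steps.
start = ("plus", ("s", ("s", ("0",))), ("s", ("s", ("s", ("0",)))))
bound = value(start)
```

Since each step decreases a natural number, no derivation from a term t can exceed `value(t)` steps, which is linear in the size of t; complexity tools search for such interpretations (and many stronger techniques) automatically.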

Complexity Analysis

These categories focus on automatically determining time complexity bounds for programs written in concrete programming languages. Tools analyze the program’s control flow, data dependencies, and loops to derive asymptotic upper bounds on runtime with respect to the input size.
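A common ingredient for such program analyses is a ranking function. The minimal sketch below (with a made-up loop, not taken from any benchmark) shows the idea: the expression x - y is nonnegative while the loop runs and strictly decreases each iteration, so its initial value bounds the number of iterations, an O(n) runtime bound.

```python
# Bounding loop iterations with a linear ranking function:
# f(x, y) = x - y is >= 0 inside the loop and decreases by 1 per
# iteration, so the loop runs at most max(0, x0 - y0) times.

def count_iterations(x, y):
    iters = 0
    while x > y:
        x -= 1          # ranking function x - y decreases by 1
        iters += 1
    return iters

x0, y0 = 17, 5
bound = max(0, x0 - y0)     # static bound derived from the ranking function
iters = count_iterations(x0, y0)
assert iters <= bound        # here both are 12
```

Complexity-analysis tools infer such ranking functions (and compose them across nested loops and function calls) automatically to produce symbolic bounds like O(n) or O(n^2).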

Termination Problems Data Base

The Termination Problems Data Base collects all the problems used in the competitions.

We welcome problem submissions from non-participants.

History of Termination Competitions

The following competitions have taken place:

At the "tool demonstration" in 2003, the participating provers (including AProVE, TORPA, and Matchbox) were run on their developers' laptops in the room. Termination problems were announced on the spot by participants, written on the blackboard, and typed in by everyone; when a team's program could solve a problem, the team shouted "solved".

Results

The results of (almost) all competitions are archived and available online.