Termination Competition 2026

The Termination and Complexity Competition (TermComp) 2026 will be affiliated with WST 2026, which takes place at FLoC 2026.

The competition will be run on the RWTH Aachen High Performance Computing Cluster. The first run will be two weeks before WST, followed by a bug/conflict reporting phase. Bug fixes will be possible until shortly before the final run. The final run and a presentation of the final results will be live at WST, and presumably also at IJCAR.


Dates

  • Tool and Benchmark Submission: June 26
  • First Run: July 9/10
  • Final Run: July 24/25
  • Presentation of the results at WST: July 25

Competition Categories

The competition contains several categories for termination and complexity from the areas of term rewriting (all categories with HO/FO-TRSs, with or without strategies) and programming languages (Logic Programming, Haskell, Java, C, integer transition systems, ...).

Proposals for new categories are welcome.

A category is only run at the competition if there are at least 2 participants, at least 40 examples for this category in the underlying termination problem data base, and clearly defined rules.
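
Stated as a simple predicate (a sketch only; the function and parameter names are ours, not official):

    # Sketch of the eligibility rule for running a category (Python).
    def category_is_run(participants, tpdb_examples, rules_clearly_defined):
        return participants >= 2 and tpdb_examples >= 40 and rules_clearly_defined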

Competition Procedure

All participants in the same category will be run on the existing problems of this category. There might be modifications of the rules suggested by the organizer and decided by the SC.

The wall-clock timeout for each benchmark is determined as follows:

  • The minimal timeout is 5 seconds
  • The maximal timeout is 300 seconds
  • For benchmarks that have not been solved in 2025, the timeout is 300 seconds
  • For benchmarks that have been solved in 2025, the timeout is determined by multiplying the runtime of the fastest successful solver by 2, i.e., the timeout is
    min(300, max(5, 2 x runtime of last year's fastest solver))
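
Expressed as code, the rule reads as follows. This sketch is illustrative only: the function name and the use of None for benchmarks unsolved in 2025 are our conventions, not part of the official competition tooling.

    # Sketch of the timeout rule above (Python); names are illustrative.
    def timeout(fastest_2025_runtime):
        """fastest_2025_runtime: seconds taken by the fastest successful
        solver in 2025, or None if the benchmark was not solved in 2025."""
        if fastest_2025_runtime is None:
            return 300.0  # unsolved in 2025: maximal timeout
        return min(300.0, max(5.0, 2.0 * fastest_2025_runtime))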

Further technical information is available here.

The tools are expected to give an answer (YES, NO, MAYBE) in the first line on stdout, followed by a proof in ASCII, HTML, or CPF format. Exceptions to these rules are the certified categories (see also the CPF website) and the complexity categories. See all existing categories for more details.
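
For illustration only, a minimal tool that follows this output protocol could look as below; the proof text is a placeholder, not a valid proof in any of the accepted formats.

    # Minimal sketch of the expected stdout protocol (Python).
    print("YES")                                  # first line: YES, NO, or MAYBE
    print("...proof in ASCII, HTML, or CPF...")   # remaining lines: the proof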

For those categories devoted to proving only termination or non-termination, the score of a tool is the number of non-contradictory answers minus the penalization (if any).

A proof or answer that is known to be wrong will be penalized by -10 points if it remains after the bugfix deadline. The SC will decide what should be penalized, depending on the discussion among the participants.
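
As a sketch of how these two scoring rules combine (the data layout and names below are illustrative, not an official implementation):

    # Sketch of the scoring rule (Python); data layout is illustrative.
    PENALTY = 10  # points per answer judged wrong after the bugfix deadline

    def score(answers, judged_wrong):
        """answers: 'YES'/'NO'/'MAYBE' per benchmark; judged_wrong: set of
        indices of answers that the SC judged to be wrong."""
        ok = sum(1 for i, a in enumerate(answers)
                 if a in ("YES", "NO") and i not in judged_wrong)
        return ok - PENALTY * len(judged_wrong)

For example, a tool answering ["YES", "NO", "MAYBE", "YES"] with one of these answers judged wrong would score 2 - 10 = -8 points.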

Steering Committee

  • Florian Frohn (Chair and Organizer), RWTH Aachen
  • Jürgen Giesl, RWTH Aachen
  • Georg Moser, University of Innsbruck
  • Étienne Payet, Université de La Réunion
  • Akihisa Yamada, AIST Tokyo Waterfront
  • Dieter Hofbauer, ASW Saarland

Registration

To submit a tool, please follow the instructions at this repository. Every participant must submit at least five new secret benchmarks via email to florian.frohn@cs.rwth-aachen.de (unless the benchmarks exceed 5 MB in size; in that case, please get in touch to arrange another way of submission). Please also include a README with a (short) description of the origin of the benchmarks. After the competition, these benchmarks will become part of the TPDB.

To submit a benchmark, please follow the instructions at the TPDB ARI repository.

We recommend registering early and updating information as needed.

It is highly recommended that participants also subscribe to the termtools mailing list, since that is where announcements are made and where discussion takes place.

Technical Details

Competition data will be presented via starexec-master (a successor of star-exec-presenter developed at HTWK Leipzig).


Contact

Any questions or suggestions regarding the termination competition should be sent to, and discussed at, termtools<at>lists.rwth-aachen.de.

To contact the steering committee, send an email to terminationcompetitionsc<at>lists.rwth-aachen.de.

Changes with respect to 2025

These lists reflect the discussion in the community and in the steering committee. Items are preliminary and not officially binding. Please do not edit this list (unless you're in the SC). Instead, send proposals to the termtools or terminationcompetitionsc mailing list, or create a new wiki page and put a link here.

Proposed changes:

  • In addition to the usual tracks, we will try to organize a knockout tournament.
  • Instead of being fixed as in previous years, the timeout will be determined as described above.

Adopted changes:

  • We will only run categories with clearly defined rules.
  • Every participant has to submit at least five new secret benchmarks.
  • Demo categories will only be run on request, and this request has to be justified somehow (e.g., by a paper, significant tool improvements, ...). The final decision is up to the SC.
  • Submissions to TPDB must have a README with a (short) description of the origin of the benchmarks.