In Brief
The annual SIGIR conference is the primary international forum for presenting new research results and demonstrating new systems and techniques in the broad information retrieval (IR) field.
The 48th ACM SIGIR conference will be held in person from July 13th to 18th, 2025, in Padua, Italy.
This year, we continue a special track for resource and reproducibility papers, separate from regular full and short papers. Resource and reproducibility papers share the same track, in recognition that reproducibility papers also include resources and artefacts, and sometimes resource papers include results/analyses that reproduce previously published results.
Submissions will be peer-reviewed, and accepted papers will be published in the main conference proceedings.
Resource and reproducibility papers will be reviewed single-blind, so that reviewers may investigate any shared resources that cannot be anonymised. Standardising on single-blind review for all submissions streamlines the process and keeps it fair for all authors.
Papers are a maximum of 9 pages plus references; shorter papers are welcome.
Important Dates
Deadlines time zone: Anywhere on Earth (AoE)
- Abstract submission: February 4, 2025
- Resource & Reproducibility paper submission: February 18, 2025
Due to other community events, there are two weeks between the abstract and paper submission deadlines.
- Resource & Reproducibility paper notifications: April 4, 2025
Take note:
- Immediately after the abstract deadline, PC Chairs will reject submissions that lack informative titles and abstracts ("placeholder abstracts").
- The official publication date is when the proceedings are made available in the ACM Digital Library. This date may be up to two weeks before the first day of the conference. The official publication date affects the deadline for any patent filings related to published work.
What do Resource Papers Look Like?
The resource track seeks submissions from academia and industry that describe resources available to the community, the process and methodology of building those resources, and/or the lessons learned. Resources include, but are not restricted to:
- Test collections for information retrieval and access tasks;
- Documentation of designs and protocols of evaluation tasks (e.g., novel task designs implemented at evaluation forums);
- Labelled datasets for machine learning;
- Software tools and services for information retrieval and access, including the evaluation and analysis of information retrieval and access systems.
The resource must be available to reviewers at the time of submission, for example, through a GitHub link, so that reviewers can confirm that the resource matches what is described in the paper. A consequence of this is that resource papers will only be single-blind: reviewers will have access to your resource, probably revealing your identity, but you will not know who the reviewers are.
Resource Review Criteria
Novelty
- What is new about this resource?
- Does the resource represent an incremental advance or something more dramatic?
Availability
- Is the resource available to the reviewer at the time of review?
- Are there discrepancies between what is described and what is available?
- Are the licensing/terms of use sufficiently open to allow most academic and industry researchers access to the resource?
- If the resource is data collected from people, do appropriate human subjects control board procedures appear to have been followed?
- Are the links publicly accessible without requiring logins?
Utility
- Is the resource well documented? What level of expertise do you expect is required to use the resource?
- Are there tutorials or examples? Do they resemble actual uses, or are they toy examples?
- If the resource is data, are appropriate tools provided for loading that data?
- If the resource is data, are the provenance (source, preprocessing, cleaning, aggregation) stages clearly documented?
Predicted impact
- What IR research activity is enabled by the availability of this resource?
- Does the resource advance a well-established research area or a brand-new one?
- Do you expect this resource to be useful for a long time, or will it need to be curated or updated? If the latter, is that planned?
- How large is the (anticipated) research user community? Will that grow or shrink in the next few years?
Reviewers may not take into account anything about the authors, such as their reputation or that of their institution. Single-blind review risks favouring papers from people we know, and we certainly do not want that to happen. Reviews that reflect a judgment based on the authors' identities will be replaced.
What do Reproducibility Papers Look Like?
The reproducibility track solicits papers that repeat, reproduce, generalise, and analyse prior work impacting information retrieval. The focus is on generating new findings about established approaches, akin to a test of time. Submitted papers should analyse the extent to which the assumptions of the original work held up, and elaborate on failure modes and unexpected conclusions.
We are particularly interested in reproducibility papers (different team, different experimental setup) rather than replicability papers (different team, same experimental setup). The emphasis is not on reproducibility badging but on generating new research insights with existing approaches.
As with resource papers, we encourage authors to publicly share the resources associated with a reproducibility paper.
Reproducibility Review Criteria
Reproducibility track papers are expected to help establish whether prior research in IR is generalisable beyond the theoretical or experimental settings that the paper(s) being reproduced assume(s). Submissions are welcome on reproducibility in any area in IR.
Papers submitted to the Reproducibility paper track must explain:
- Their motivation for selecting the methods that are replicated or reproduced, and the impact of these methods on the IR community;
- The directions in which they try to generalise, the angles they choose that differ from the original work, and the experimental setup(s) they select to support their research in these new directions;
- The assumptions of the original work that they found to hold up, and the ones that could not be confirmed. For papers in the reproducibility track, the key is to share knowledge about which lessons from prior work held up.
The key criteria are:
- Contribution: Does this work provide a novel angle on existing approaches and lead to novel insights for the IR community?
- Motivation: How relevant is the replicated or reproduced work for the IR community, and how impactful are the achieved conclusions?
- Soundness: Is the replicated or reproduced paper sufficiently solid regarding methodology and evaluation?
- Quality of reproduction artefacts: Do the supplementary materials for this submission support ease of reproducibility?
Submission Policy for Resource and Reproducibility Papers
Anonymity Policy
- Submissions are not anonymous; authors should list their names and affiliations.
- Anonymising external resources such as code, notebooks, and datasets can be challenging or even impossible. Therefore, authors are not required to anonymise these materials. For resource papers, a link to the resource must be provided in the submission and available to anyone (no login information should be needed to access the resources).
arXiv Policy
You may submit papers to SIGIR 2025 that you have posted to pre-print/archival platforms (e.g. arXiv), or that you plan to post after submission. However, your paper must conform to the SIGIR 2025 Pre-Print/ArXiv Policy.
ACM Submission Policy
- Authors should carefully go through ACM’s authorship policy before submitting a paper. Submissions that violate the preprint policy, length, or formatting requirements or are plagiarised are subject to desk rejection by the chairs.
- It is also NOT permitted to double-submit the content to this track and other track(s) of SIGIR 2025 (e.g., a resource paper for building Dataset A and a full paper containing the construction process of Dataset A in the experiment section).
Author List Policy
- To support the identification of reviewers with conflicts of interest, the full author list must be specified at submission time.
- Authors should note that changes to the author list after the submission deadline are not allowed without permission from the PC Chairs.
Desk Rejection Policy
Submissions are subject to desk rejection by the chairs for any of the following:
- Violation of the preprint policy, length limit, or formatting requirements.
- Figures, tables, proofs, appendixes, acknowledgements, or any other content after page 9 of the submission.
- Formatting that is not in line with the guidelines provided.
- Addition of authors after abstract submission.
- Content determined to have been copied from other sources.
- Any form of academic fraud or dishonesty, per ACM's policies (including plagiarism, author misrepresentation, and falsification).
- Lack of topical fit for SIGIR.
- Anonymous submissions.
In-Person Attendance Policy
SIGIR 2025 is an in-person conference. Despite the challenges this may pose, we believe an in-person conference offers greater benefits: direct engagement and networking opportunities, a more dynamic exchange of research ideas, and a welcoming, nurturing environment for newcomers.
All accepted papers (across all tracks) at the main conference and workshops are expected to be presented in person. At least one author of each accepted paper is expected to attend the conference in person to present the paper and answer audience questions during the Q&A session. No pre-recorded videos will be accepted. More information is available on the SIGIR 2025 In-presence Policy page.
Child Care Service
SIGIR 2025 promotes inclusion and attendance by organizing a child care service, fostering work-family balance by offering attendees the opportunity to attend events with their children.
Children from 0 to 12 years of age are welcome in a specially set up room inside the Padua Congress Center throughout the congress. This colourful, safe, and welcoming environment, located in the same building that hosts the sessions, is a protected place where parents can leave their children while staying in touch with them during session breaks.
The nursery space dedicated to the 0-3 age group, equipped with a changing table and other facilities dedicated to the little ones, is next to the kids’ area for children over 3 years old and a cinema room for screenings.
The activities are run by a company specialized in education services with expert staff, appointed by Padua Congress Center, and include thematic workshops - from painting to creative activities - as well as playtime also in the open air, in the facility's outdoor spaces. Children are also offered lunch.
The service is available for a fee (depending on sponsorship) to all SIGIR 2025 attendees who request it. For more information and updates, check the SIGIR 2025 Child Care page.
Submission Guidelines
- Submissions must be in English, and in PDF format.
- Length is at most 9 pages (including figures, tables, proofs, appendixes, acknowledgments, and any content except references). Authors are not expected to fill the entire 9 pages - we recognise that for some papers (particularly resource papers) the contribution can be explained in less than 9 pages.
- Unrestricted space for references in the current ACM two-column conference format.
- Suitable LaTeX, Word, and Overleaf templates are available from the ACM Website (use the sigconf proceedings template for LaTeX and the Interim Template for Word).
- ACM's CCS concepts and keywords are not required for review but may be required if accepted and published by the ACM.
For LaTeX, the following document class should be used:
\documentclass[sigconf,natbib=true,review]{acmart}
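For orientation, a minimal skeleton using this document class might look like the sketch below. The title, author details, and section content are placeholders; consult the ACM template documentation for the full set of required metadata commands.

```latex
\documentclass[sigconf,natbib=true,review]{acmart}

\begin{document}

\title{Your Resource or Reproducibility Paper Title}

\author{First Author}
\affiliation{%
  \institution{Your Institution}
  \city{Your City}
  \country{Your Country}}
\email{first.author@example.org}

\begin{abstract}
A concise, informative abstract. Recall that placeholder abstracts
are rejected immediately after the abstract deadline.
\end{abstract}

\maketitle

\section{Introduction}
Body text: at most 9 pages for all content except references.

\bibliographystyle{ACM-Reference-Format}
\bibliography{references}

\end{document}
```

The `review` option enables line numbers for reviewers, matching the submission format requested above.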
At least one author of each accepted paper must register for and present the work at the conference.
Papers should be submitted electronically via EasyChair:
https://easychair.org/conferences/?conf=sigir2025
by selecting the “SIGIR 2025 Resource & Reproducibility Papers” track.
Resource & Reproducibility Chairs
sigir2025-resrepro@dei.unipd.it
- Timo Breuer, TH Köln, Germany
- Ian Soboroff, National Institute of Standards and Technology (NIST), USA
- Johanne Trippas, Royal Melbourne Institute of Technology (RMIT) University, Australia