Welcome + What’s New in 2026

Posted: Dec 11, 2025

By Nedjma Ousidhoum and Saif M. Mohammad (startsem-2026-pcs@googlegroups.com)

As programme chairs, we warmly invite you to submit your paper(s) to *SEM 2026. Through a series of blog posts here, we will keep you posted on conference developments. This post discusses two notable changes to the review process.

Please fill out this form if you would like to volunteer as a reviewer or as an Area Chair (AC).

1. Centering Research Questions

Research questions in NLP can be roughly categorized into those that address:

  • new findings about language (linguistic phenomena, semantic patterns),
  • new findings about people (language use, behavior, health, ethics, etc.),
  • new findings about automatic language processing (advancing language understanding through ML/AI and other approaches).

Centering and explicitly articulating the research question helps authors frame and present their contribution more clearly. Just as importantly, it helps reviewers and Area Chairs evaluate the work within the appropriate context. For instance, a paper that centers a compelling linguistic or behavioral research question and offers meaningful new insights need not also introduce methodological novelty or rely on the latest models (including LLMs).

To support this, the *SEM 2026 submission form asks authors to explicitly identify the predominant research-question type for their work (one of the three bullets listed above), as well as any additional categories that apply.

Please note that there are no quotas for accepted papers of different types, and submissions will not receive preferential treatment simply because they selected a particular category. Likewise, papers that span multiple research-question types are not considered inherently “better” than those that focus on a single type. The purpose of this question is solely to help authors clearly communicate the nature of their work and to help reviewers evaluate it within the appropriate context.

Including this information in the submission form also allows *SEM to track the kinds of research questions authors pursue and how the conference’s focus evolves over time.

2. Lasting Impact

Modern NLP and ML papers have often been criticized for being overly incremental or for becoming obsolete shortly after publication.

To encourage work with broader scientific value and longer-term relevance, reviewers for *SEM 2026 will be asked to explicitly assess the potential lasting impact of each submission.

This assessment will be included as a short written justification and will factor into the overall recommendation.

Importantly, a healthy research ecosystem requires diversity in the time horizons of research contributions. Some papers offer immediate practical value; others generate insights or resources whose importance unfolds over years. *SEM 2026 welcomes this full spectrum. Reviewers should evaluate the potential for lasting influence, not only immediate performance gains.

Work can have lasting impact in many ways, including (but not limited to):

1. Advancing a Research Question

Helping the community better understand a phenomenon or domain. Examples:

  • Clearly defining or reframing a task in an underexplored area.
  • Identifying new challenges, trends, or gaps in existing work.
  • Theorizing about linguistic, cognitive, social, or affective processes in ways that broaden future research directions.

2. Creating a Dataset or Resource

Building resources—large or small—that enable progress. Examples:

  • Monolingual and multilingual datasets that shed new light on research questions.
  • Benchmarks or lexicons that support development and evaluation of models.
  • Curated datasets enabling cross-disciplinary research or longitudinal analyses.

3. Developing a Method

Proposing novel or simplified methods with potential staying power. Examples:

  • A modeling innovation that offers conceptual or practical advantages.
  • A simple, robust approach that clarifies what truly matters in a task.
  • A creative use of LLMs or other existing tools to uncover phenomena, generate insights, or build tools not previously possible.

4. Improving Evaluation

Offering new ways to measure progress or understand system behavior. Examples:

  • Evaluation frameworks grounded in theory or empirical evidence.
  • Metrics that capture dimensions beyond accuracy (e.g., fairness, robustness, efficiency).
  • Protocols for assessing generalization, long-tail performance, or real-world usability.

5. Conducting Insightful Analysis

Providing analyses that deepen understanding of models, data, or human behavior. Examples:

  • Error analyses that reveal structural issues.
  • Behavioral or introspective studies of model capabilities.
  • Comparative analyses that clarify what different modeling choices accomplish.

6. Addressing Ethical and Societal Considerations

Surfacing issues that shape responsible research and deployment. Examples:

  • Identifying dual-use risks or unintended consequences.
  • Proposing guidelines, transparency standards, or best practices.
  • Exploring implications for different communities, languages, or cultures.

We hope these initiatives will improve the overall experience for both authors and reviewers.

FAQ

  1. Will preference be given to certain types of papers, such as new-method papers over findings-about-language papers?

    Answer. No. Acceptance decisions will be based on a wide variety of information gleaned from the reviews, with an emphasis on lasting impact. The exact review form will be made available in January.

  2. Who will be asked to identify the predominant research question type for a paper?

    Answer. Both authors and reviewers.
    Authors specify the research question type in the paper submission form.
    Reviewers, who will have full access to the authors’ selections, will record their own assessment in the review form. They are encouraged to consider the authors’ classification carefully, but are still asked to indicate the type themselves, both to flag the rare case where they believe the paper fits a different category and to ensure they evaluate the work in the appropriate context.

  3. Assessing the lasting impact of the paper is difficult, if not impossible. Do you expect to see much agreement on this question?

    Answer. Assessing the degree and timeframe of a paper’s impact (short-term or lasting) is undeniably challenging. Nonetheless, reviewer recommendations and acceptance decisions already implicitly rely on such judgments: accepted papers are generally those perceived as having greater potential impact. By including an explicit lasting-impact question in the review form, we are asking reviewers to reflect on whether a submission is likely to have primarily short-term influence or longer-term, sustained impact. It remains to be seen how much agreement reviewers will show on this dimension (we will share findings at the conference), but our hope is that this prompt will help reviewers and ACs identify and recommend more work with the potential for lasting contribution.
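
    For the curious, here is one way such agreement could be quantified once reviews are in. The sketch below is purely illustrative: the reviewer labels are made up, and this is not necessarily the analysis we will run. It computes Cohen’s kappa, which corrects the raw agreement between two reviewers for the agreement expected by chance.

        # Illustrative sketch: Cohen's kappa over hypothetical lasting-impact
        # labels from two reviewers (0 means chance-level agreement, 1 perfect).
        from sklearn.metrics import cohen_kappa_score

        # Hypothetical labels for six submissions.
        reviewer_a = ["lasting", "short-term", "lasting", "lasting", "short-term", "lasting"]
        reviewer_b = ["lasting", "short-term", "short-term", "lasting", "short-term", "lasting"]

        print(f"Cohen's kappa: {cohen_kappa_score(reviewer_a, reviewer_b):.2f}")
        # Raw agreement here is 5/6 (0.83), but kappa drops to 0.67 after
        # correcting for chance agreement.

    With more than two reviewers per paper, a statistic such as Fleiss’ kappa or Krippendorff’s alpha would be the more natural choice.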

  4. Do you have any quotas for how many papers of each research-question type (method, findings about language, etc.) or impact time frame (short-term/lasting) will be accepted?

    Answer. No.