ML with New Compute Paradigms (MLNCP) at NeurIPS 2023

Welcome to the MLNCP Workshop at NeurIPS 2023!

This workshop aims to bring together ML researchers with academic and industrial researchers building novel AI accelerators. The goal is to enable interaction between the two groups, kick-start a new feedback cycle between models and accelerators, and enable hardware-model co-design. We welcome relevant algorithmic and model innovations as well as results demonstrated on accelerators; see the Call for Papers below for the categories of interest.

The workshop will be held on December 16th, 2023 as part of the NeurIPS conference in New Orleans, Louisiana.

Abstract

As GPU computing approaches a plateau in efficiency and cost with Moore's law reaching its limit, there is a growing need to explore alternative computing paradigms, such as (opto-)analog, neuromorphic, and low-power computing. This NeurIPS workshop aims to unite researchers from machine learning and alternative computation fields to establish a new hardware-ML feedback loop. By co-designing models with specialized accelerators, we can leverage the benefits of increased throughput or lower per-FLOP power consumption. Novel devices hold the potential to further accelerate standard deep learning or even enable efficient inference and training of hitherto compute-constrained model classes. However, new compute paradigms typically present challenges such as intrinsic noise, restricted sets of compute operations, or limited bit-depth, and thus require model-hardware co-design. This workshop's goal is to foster cross-disciplinary collaboration to capitalize on the opportunities offered by emerging AI accelerators.

Call for Papers

The 2023 Workshop on ML with New Compute Paradigms is calling for papers on machine-learning models and algorithms that enable training or inference on novel AI accelerators. Potential areas of interest include:

  1. Performance analysis of algorithms and models on future or current hardware.
  2. New or existing model paradigms (e.g., spiking networks and other neuromorphic models, or energy-based models) that map well onto AI accelerators currently in development.
  3. Strategies for inference or training on such new hardware. This includes new training algorithms or approaches to enable inference of pretrained models.
  4. Strategies for dealing with precision issues and hardware-induced noise in analog machine learning (see the illustrative sketch after this list).
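
As an illustration of topic 4, one common strategy is to simulate hardware imperfections during training so that the learned weights become robust to them. The sketch below is a minimal, hypothetical example (not a prescription from the workshop): a PyTorch linear layer whose weights are quantized to an assumed low bit-depth and perturbed with Gaussian noise on each forward pass, with gradients passed through a straight-through estimator.

```python
import torch
import torch.nn as nn

class NoisyQuantLinear(nn.Module):
    """Hypothetical linear layer mimicking an analog matrix-vector multiply:
    weights are quantized to a low bit-depth and perturbed with Gaussian
    noise in the forward pass; gradients use a straight-through estimator."""

    def __init__(self, in_features, out_features, bits=4, noise_std=0.02):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.bits = bits            # assumed bit-depth of the device
        self.noise_std = noise_std  # assumed relative read/write noise

    def forward(self, x):
        # Symmetric uniform quantization of the weights to 2**bits - 1 levels.
        levels = 2 ** self.bits - 1
        w_max = self.weight.abs().max().clamp(min=1e-8)
        scale = w_max / (levels / 2)
        w_q = torch.round(self.weight / scale) * scale
        # Straight-through estimator: forward uses w_q, backward sees self.weight.
        w_q = self.weight + (w_q - self.weight).detach()
        # Additive Gaussian noise models device-level imperfections.
        if self.training:
            w_q = w_q + torch.randn_like(w_q) * self.noise_std * w_max
        return x @ w_q.t() + self.bias

# Usage: train a small model so it learns weights tolerant to the simulated hardware.
model = nn.Sequential(NoisyQuantLinear(16, 32), nn.ReLU(), NoisyQuantLinear(32, 4))
x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
```

The specific noise model, bit-depth, and layer name here are assumptions chosen for brevity; submissions may of course target entirely different hardware models or training strategies.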

Camera-ready deadline: 4 December 2023, anywhere on Earth (AoE)

Oral Presentation Guidelines

Length: 8 min + 2 min Q&A

Poster Guidelines

Poster material: lightweight paper, not laminated.
Format: 24 in (W) x 36 in (H), portrait orientation.

Submission link

Best paper award: The best paper will be recognized!

Submissions that discuss work-in-progress, unanswered questions, and challenges for new AI accelerator hardware are also welcome.

Guidelines for submission

Important note:

Authors do not forfeit the right to publish elsewhere.

All accepted works will be made available on the workshop website. However, authors retain full copyright of their work and are free to publish extended versions in another journal or conference. We allow submission of works that overlap with papers that are under review or have recently been published in a conference or journal, including scientific journals. Cross-submissions to multiple workshops at NeurIPS are not accepted.

Speakers

Schedule

Morning Session
Poster Session and Lunch Break
Afternoon Session

Organisers

The workshop is organised by the following people:

Sponsors

Contact: MLwithNewCompute _at_ googlegroups.com