ACR AI-LAB™ Resident Mammography Challenge

Participants in the challenge will annotate breast density studies to develop their own training data sets and then design and train models by adjusting hyperparameters.

Unlike most AI challenges, this one requires no coding skills to participate, opening it up to those who are new to AI. While the challenge is open to everyone, only residents are eligible to win the prize.

The challenge is designed to educate participants about the essential skills involved in AI, including annotating data, building models, and testing models for performance.

At the end of the challenge, all models will be tested on an additional test data set. The model with the best kappa score wins funding to cover attendance at RSNA 2019.
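A brief note on the ranking metric: assuming the kappa here is (possibly weighted) Cohen's kappa, a standard agreement statistic for categorical labels such as breast density, it is computed as κ = (p_o − p_e) / (1 − p_e), where p_o is the observed agreement between the model's predictions and the reference labels and p_e is the agreement expected by chance alone. A kappa of 1 indicates perfect agreement; 0 indicates agreement no better than chance.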

The challenge ran Sept. 16 - Oct. 4, 2019. New submissions are no longer accepted, but feel free to try out the different aspects of the challenge.

Good luck!

If you experience any issues during the challenge, please let us know by emailing ailabsupport@acr.org.

Instructions
  1. Click MY ANNOTATIONS to annotate at least 100 of the 5,000 breast density studies. Your annotations will be used as the training data set. You can compare your overall annotation distribution with that of a DMIST reader in the confusion matrix on this page, but you will not be able to see the DMIST reader annotations at the image level.
  2. Click TRAIN MY MODELS to design your own AI model by adjusting hyperparameters and training on your own annotated data.
  3. Send your design, known as a training job, to run on ACR's GPUs.
  4. Submit a model to the leaderboard to compare your test results with other participants' models. The test data set uses DMIST reader annotations as ground truth.
  5. You may submit as many models as you want before the end of the challenge, but only your most recent submission will appear on the leaderboard. The leaderboard is ranked by kappa score on the test results (an illustrative kappa calculation follows this list). At the end of the challenge, all submissions will be judged against a final test data set, so the leaderboard is not necessarily indicative of the eventual winner.
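As a rough illustration of how the confusion matrix in step 1 and the kappa score in step 5 can be computed, the sketch below compares hypothetical model predictions against DMIST reader labels. The four BI-RADS density categories, the example labels, and the scikit-learn calls are assumptions for illustration only, not the challenge's actual scoring code.

```python
# Illustrative sketch: confusion matrix and Cohen's kappa for breast density labels.
# The category set and the example labels below are hypothetical.
from sklearn.metrics import confusion_matrix, cohen_kappa_score

CATEGORIES = ["A", "B", "C", "D"]  # BI-RADS density: almost entirely fatty to extremely dense

# Hypothetical ground-truth (DMIST reader) labels and model predictions
dmist_reader = ["B", "C", "B", "D", "A", "C", "B", "C"]
model_preds  = ["B", "C", "C", "D", "A", "B", "B", "C"]

# Confusion matrix: rows = reader labels, columns = model predictions
cm = confusion_matrix(dmist_reader, model_preds, labels=CATEGORIES)
print(cm)

# Unweighted Cohen's kappa; weights="quadratic" is a common alternative
# for ordinal categories such as breast density
kappa = cohen_kappa_score(dmist_reader, model_preds, labels=CATEGORIES)
print(f"kappa = {kappa:.3f}")
```

Because kappa corrects for chance agreement, it is more informative than raw accuracy when one density category dominates the data set.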
Training container citation: Beers, A., Brown, J., Chang, K., Hoebel, K., Gerstner, E., Rosen, B., & Kalpathy-Cramer, J. (2018). DeepNeuro: an open-source deep learning toolbox for neuroimaging. arXiv preprint arXiv:1808.04589.

The challenge page also includes a My Annotations view showing your annotation count, an Annotations Confusion Matrix comparing your annotations with a DMIST reader's, and a Leaderboard listing each submission's rank, kappa, username, submitted date, AUC score, and number of annotations.