Unconstrained Ear Recognition Challenge 2019

Welcome to the Unconstrained Ear Recognition Challenge 2019!

The 2nd Unconstrained Ear Recognition Challenge (UERC) will be organized in the scope of the IAPR International Conference on Biometrics 2019. The goal of the challenge is to further advance the state of the art in the field of automatic ear recognition, to provide participants with a challenging research problem, and to introduce a benchmark dataset and protocol for assessing the latest techniques, models, and algorithms related to automatic ear recognition.

The results of UERC 2019 will be published in an ICB conference paper authored jointly by all participants of the challenge.

The old UERC (2017) webpage is available here.


The registration will be open till 2019/01/03. If you have any questions or suggestions, feel free to contact the organizers.


Ear recognition is an active area of research within the biometric community. While work in this field has long focused on constrained, laboratory-like settings, recent approaches increasingly target data acquired in unconstrained conditions, and many techniques focusing on data captured in these so-called “in-the-wild” settings have been presented recently. To promote research in such settings, the Unconstrained Ear Recognition Challenge (UERC) 2019 will bring together researchers working in the field of ear recognition and benchmark existing algorithms on a common dataset and under a predefined experimental protocol. UERC 2019 builds on the previous challenge, UERC 2017 (available on IEEE Xplore), organized in the scope of the 2017 International Joint Conference on Biometrics (IJCB), and uses the same dataset and protocol, thus making it possible to examine and directly compare the progress made in the field since 2017.

The results of the challenge will be published in a summary paper authored jointly by all participants of the challenge.

The Dataset, Protocol and Performance Metrics

The challenge will be held on an extended version of the Annotated Web Ears (AWE) dataset, containing a total of 9,500 ear images - see the UERC 2017 summary paper (available on IEEE Xplore) for details. The images were collected with a semi-automatic procedure involving web crawlers and a subsequent manual inspection. Because the AWE images were not gathered in controlled, laboratory-like conditions, they better represent the variability in ear appearance than existing datasets of ear images; however, the problem of automatic ear recognition also becomes significantly harder. A few example images from the extended AWE dataset are shown below.

A more in-depth description of the images, the acquisition procedure, the dataset characteristics and other information on the AWE dataset is available in the Neurocomputing paper.

UERC 2019 will use three image datasets:

  • part A: the main dataset of 3,300 ear images belonging to 330 distinct identities (with 10 images per subject), used for the recognition experiments (training and testing),
  • part B: a set of 804 ear images of 16 subjects (with a variable number of images per subject), used for the recognition experiments (training),
  • part C: an additional set of 7,700 ear images of around 3,360 identities, used to test the scalability of the submitted algorithms.

The 3,300 images of the main dataset contain various annotations, such as the level of occlusion, rotation (yaw, roll and pitch angles), presence of accessories, gender and side. This information is also made available during training and can be exploited to build specialized recognition techniques.

The 3,300 images of the main part of the dataset were split into a training set of 1,500 images (belonging to 150 subjects) and a test set of 1,800 images (belonging to 180 subjects). The identities in the training and test sets are disjoint. The purpose of the training set is to train recognition models and set any open hyper-parameters, while the test set is reserved for the final evaluation. The test set MUST NOT be used to learn or fine-tune any parameters of the recognition model. The organizers reserve the right to exclude a team from the competition (and consequently from the jointly authored ICB conference paper) if the final result analysis suggests that the test images were also used for training.

The train and test sets are composed as follows:

  • Train (2,304 images of 166 subjects): 1,500 images (150 subjects) from part A and all 804 images (16 subjects) from part B.
  • Test (9,500 images of 3,540 subjects): 1,800 images (180 subjects) from part A and all 7,700 impostor images (around 3,360 subjects) from part C.
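The disjoint-identity split of the main dataset can be sketched as follows. This is a hypothetical helper for illustration only (the official train/test lists are distributed by the organizers); it assumes labels are per-image identity indices:

```python
import random

def split_by_identity(image_labels, n_train_ids=150, seed=0):
    """Split image indices into train/test sets with disjoint identities."""
    ids = sorted(set(image_labels))
    random.Random(seed).shuffle(ids)          # deterministic shuffle of identities
    train_ids = set(ids[:n_train_ids])        # first 150 identities go to training
    train = [i for i, lab in enumerate(image_labels) if lab in train_ids]
    test = [i for i, lab in enumerate(image_labels) if lab not in train_ids]
    return train, test

# Part A: 330 identities with 10 images each, as described above.
labels = [i // 10 for i in range(3300)]
train, test = split_by_identity(labels)
# train holds 1,500 images (150 identities), test 1,800 images (180 identities)
```

The key property, mirrored from the protocol, is that no identity appears in both sets.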

UERC will test the recognition performance of all submitted algorithms through identification experiments. Participants will have to compare each image listed in the probe.txt file to each image listed in the gallery.txt file and return the resulting similarity score matrix to the organizers for scoring. Thus, each participant will be required to generate a 7,442 x 9,500 matrix for each submitted system.
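A minimal sketch of how such a similarity matrix could be produced, assuming descriptors have already been extracted (the feature extractor and the cosine-similarity choice are illustrative assumptions, not part of the protocol):

```python
import numpy as np

def build_similarity_matrix(probe_feats, gallery_feats):
    """Cosine-similarity matrix: one row per probe image, one column per
    gallery image, as expected by the scoring procedure."""
    p = probe_feats / np.linalg.norm(probe_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    return p @ g.T

# Toy example: 4 probe and 6 gallery descriptors of dimension 128.
# For UERC the matrix would be 7,442 x 9,500 instead.
rng = np.random.default_rng(0)
S = build_similarity_matrix(rng.standard_normal((4, 128)),
                            rng.standard_normal((6, 128)))
```

Any similarity measure can be substituted, as long as higher scores mean more similar ears and the row/column order follows probe.txt and gallery.txt.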

The number of approaches that each participant is allowed to submit is not limited. However, only approaches accompanied by at least a short description (written by the participants) and containing some sort of original contribution will be included in the ICB summary paper of UERC 2019.

The submitted similarity matrices will be scored by the organizers. Rank-1 recognition rates, complete CMC curves and the Area Under the CMC Curve (AUC) will be computed and reported for each submitted algorithm. The AUC will be used to rank the participating algorithms.
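For reference, the reported metrics can be computed from a similarity matrix as sketched below. This is an illustrative closed-set implementation (it assumes every probe identity appears in the gallery), not the organizers' official scoring code:

```python
import numpy as np

def cmc_curve(similarity, probe_labels, gallery_labels):
    """Cumulative Match Characteristic: fraction of probes whose correct
    identity appears within the top-k ranked gallery entries, for each k."""
    order = np.argsort(-similarity, axis=1)        # best match first per probe
    ranked = np.asarray(gallery_labels)[order]     # gallery labels in ranked order
    hits = ranked == np.asarray(probe_labels)[:, None]
    first_hit = hits.argmax(axis=1)                # 0-based rank of the true match
    cmc = np.zeros(similarity.shape[1])
    for r in first_hit:
        cmc[r:] += 1                               # a hit at rank r counts for all k >= r
    return cmc / len(probe_labels)

# Toy 3x3 example: probes 0 and 1 match at rank 1, probe 2 only at rank 3.
sim = np.array([[0.9, 0.1, 0.3],
                [0.2, 0.8, 0.1],
                [0.5, 0.6, 0.4]])
cmc = cmc_curve(sim, probe_labels=[0, 1, 2], gallery_labels=[0, 1, 2])
rank1 = cmc[0]     # rank-1 recognition rate: 2/3 on this toy example
auc = cmc.mean()   # normalized area under the CMC: 7/9 here
```

The AUC used for ranking is the area under this curve, normalized by the number of gallery ranks.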

Sequestered Data: The teams behind the three winning recognition models on the main UERC data (described above) will be asked to provide their source code, so that the organizers can run independent experiments on a sequestered dataset. The goal of this part of the challenge is to test the generalization abilities of the best submitted algorithms on data that may differ slightly in characteristics from the data used in the main part of the competition.

Starter Kit

We provide all participants with a Matlab starter kit that generates a matrix of comparisons for each sample in the dataset and computes all relevant performance metrics. The starter kit will help participants quickly start with the research work and generate score matrices compliant with our scoring procedures. As baselines, we make a number of descriptor-based approaches (using SIFTs, POEMs, LBPs, etc.) available to the participants, as well as a deep-learning approach.
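To give a flavour of the descriptor-based baselines, a bare-bones Local Binary Pattern histogram can be sketched as below. This is a simplified illustration in Python, not the starter kit's Matlab code, and real baselines would typically use block-wise histograms and uniform patterns:

```python
import numpy as np

def lbp_histogram(gray, bins=256):
    """Basic 8-neighbour LBP: each interior pixel gets an 8-bit code encoding
    which neighbours are >= the centre; the descriptor is the code histogram."""
    c = gray[1:-1, 1:-1]                        # interior (centre) pixels
    code = np.zeros_like(c, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        n = gray[1 + dy:gray.shape[0] - 1 + dy,
                 1 + dx:gray.shape[1] - 1 + dx]  # neighbour plane, same shape as c
        code |= (n >= c).astype(np.uint8) << bit
    hist, _ = np.histogram(code, bins=bins, range=(0, 256))
    return hist / hist.sum()                     # L1-normalized descriptor

# On a constant image every neighbour equals the centre, so all 8 bits
# are set and every pixel receives code 255.
h = lbp_histogram(np.ones((10, 10)))
```

Two such histograms can then be compared with any histogram distance (chi-square, intersection, cosine) to produce the similarity scores described above.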

All scripts that were used to generate results for the UERC 2017 summary paper will also be included in the starter kit.

Provisional Timeline

  • 2018/11/12: Evaluation announcement to participants, including tasks, starter kit, datasets and evaluation metrics
  • 2019/01/03: Evaluation registration deadline
  • 2019/01/22: Participants’ deadline for submission of results and descriptions of methods
  • 2019/02/11: Summary paper of the evaluation submitted
  • 2019/06/05-07: Conference presentation of the summary paper


Organizers

  • Žiga Emeršič, PhD candidate
    University of Ljubljana, Faculty of Computer and Information Science, Slovenia, EU
  • Assoc. Prof. Hazım Kemal Ekenel
    Istanbul Technical University, Department of Computer Engineering, Turkey & École Polytechnique Fédérale de Lausanne, Switzerland
  • Assoc. Prof. Li Yuan
    University of Science & Technology Beijing, China
  • Assoc. Prof. Vitomir Štruc
    University of Ljubljana, Faculty of Electrical Engineering, Slovenia, EU
  • Assoc. Prof. Peter Peer
    University of Ljubljana, Faculty of Computer and Information Science, Slovenia, EU

If you have any questions or suggestions, or would like to participate in the competition, feel free to contact the organizers.