We are happy to share not only the gist of our work in publications, but also the relevant code and data behind them. You will find related resources listed alongside each publication. Here, we give an overview of the general resources we provide.

The BAGLS dataset

At the Division of Phoniatrics and Pediatric Audiology, University Hospital Erlangen, we generated a comprehensive benchmark for automatic glottis segmentation (short: BAGLS). BAGLS is openly available through Zenodo, Kaggle and the BAGLS website. It features 59,250 endoscopic images and their respective, manually annotated segmentation masks, pre-split into training data and a balanced test set. The BAGLS raw data were acquired at seven renowned institutions, including UCLA, BU and NYU. All details about the dataset can be found in the original publication (Gómez*, Kist* et al., Sci Data 2020).
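As a minimal sketch of how one might iterate over the image/mask pairs in a BAGLS split, assuming the numbered naming convention where a frame `N.png` is accompanied by a mask `N_seg.png` (the exact filenames are an assumption here, not taken from the publication):

```python
from pathlib import Path

def list_pairs(root):
    """Yield (frame, mask) path pairs from a BAGLS-style folder.

    Assumes each endoscopic frame `N.png` has a segmentation mask
    `N_seg.png` next to it (hypothetical naming convention).
    """
    root = Path(root)
    for img in sorted(root.glob("*.png")):
        if img.stem.endswith("_seg"):
            continue  # skip masks; they are picked up via their frame
        mask = root / f"{img.stem}_seg.png"
        if mask.exists():
            yield img, mask
```

Loading the actual pixel data (e.g. with Pillow or imageio) then happens per pair; the function above only establishes the pairing.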

Useful Github code

  • PiPrA – Pixel Precise Annotator
    A tool for creating labels in semantic segmentation tasks, used to create BAGLS
  • Lossless
    Code for lossless compression of videos and data
  • MovieMaker
    Small package for creating supplementary movies
  • nutil
    Fast image browsing, custom colormaps (true white → color, true black → color) and Nature paper style figures with one line of code
  • GenericGUI
    A functional graphical user interface based on PyQt5 and pyqtgraph, with shortcuts etc., to get you started quickly
  • ImagesAreExcelSheets
    Code to create Excel sheets from an image (RGB, R, G, B and Y channels). We provide an example using the famous photo of Eileen Collins.
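For the last item, the core per-pixel decomposition can be sketched as follows. This is not the repository's actual code, just an illustration of splitting one RGB pixel into the five channels the description mentions; the luminance Y uses the standard ITU-R BT.601 weights, which is an assumption about how the tool computes it:

```python
def channels(pixel):
    """Split an (R, G, B) pixel into values for five sheets:
    a hex RGB string (usable as a spreadsheet cell fill color),
    the individual R, G, B channels, and luminance Y.

    Y uses the ITU-R BT.601 weights (an assumed choice here).
    """
    r, g, b = pixel
    y = round(0.299 * r + 0.587 * g + 0.114 * b)
    return {"RGB": f"{r:02X}{g:02X}{b:02X}", "R": r, "G": g, "B": b, "Y": y}
```

Writing each grid of values into a workbook, with cell background colors set from the hex strings, could then be done with a library such as openpyxl.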