----------------------
The Cityscapes Dataset
----------------------
1. Contact
----------

Please feel free to contact us with any questions, suggestions, or comments:

Marius Cordts
www.cityscapes-dataset.net
mail@cityscapes-dataset.net
2. Dataset Content
------------------

This is a pre-release version of the Cityscapes dataset. Please keep the data confidential and do not distribute it further.
- scripts: scripts to process the data, see section 4 for details.
- train_fine: training data with fine annotations
       |
       |----- stuttgart
       |         |
       |         |----- groundtruth: annotations in polygonal format
       |         |
       |         |----- images: images with 8-bit color depth
       |
       |----- ulm
       |
       ...
- train_coarse: training data with coarse annotations of the same cities as in train_fine. This data is part of the training data for the weak annotation challenge.
- train_coarse_extra: additional training data with coarse annotations. You may or may not use this data for training. We will make clear in the dataset's publication and on our website which kind of data you used for training.
- val_fine: validation set for tuning your hyperparameters and testing your approach.
- val_coarse: training data with coarse annotations of the same cities as in val_fine. This data is part of the training data for the weak annotation challenge. However, you should not use this data when you validate your approach.
- test_fine: test data without ground truth annotations. Please run your approach on this data and send your results to us.
3. Further Data
---------------

Please let us know if you need any other metadata to run your approach, e.g. right stereo views, preceding and trailing video frames, GPS, vehicle odometry, or camera information.
4. Scripts
----------

There are several scripts included with this pre-release dataset:

helpers     : helper files that are included by other scripts
viewer      : view the images and the annotations
preparation : convert the ground truth into a format suitable for your approach
evaluation  : validate your approach
Details:

-> helpers/labels.py
   central file defining the IDs of all semantic classes and providing mappings between various class properties (see the first sketch after this list).
-> viewer/cityscapesViewer.py
   view the images and overlay the annotations.
-> preparation/createLabelImgs.py
   convert annotations in polygonal format to PNG images. Pixels can optionally encode integer IDs or colors that correspond to labels as defined in labels.py (see the second sketch after this list).
-> preparation/createInstanceImgs.py
   convert annotations in polygonal format to PNG images, where pixel values encode instance IDs.
-> evaluation/evalSemanticLabeling.py
   script to evaluate semantic labeling results on the validation set. This script is also used to evaluate the results on the test set.
-> evaluation/setup.py
   run "python setup.py build_ext --inplace" to build the Cython extension for faster evaluation. Only tested on Ubuntu.
5. Evaluation
-------------

We kindly ask you to run your approach on the provided test images and send your results to us as soon as possible. We require the result format to match the format of the label images created by the tool createLabelImgs.py. Thus, your code should produce images where each pixel's value corresponds to a class ID as defined in labels.py. Note that our evaluation scripts are included in the scripts folder and can be used to test your approach on the validation set. A minimal sketch of writing such a result image follows.
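The sketch below assumes NumPy and Pillow are available; the array contents and the output file name are placeholders, not dataset conventions.

    # Minimal sketch: write a per-pixel class-ID prediction as an 8-bit PNG.
    import numpy as np
    from PIL import Image

    def save_result(pred_ids, out_path):
        # pred_ids: 2-D array of integer class IDs as defined in labels.py
        Image.fromarray(pred_ids.astype(np.uint8), mode="L").save(out_path)

    # Hypothetical usage with a dummy prediction (class ID 7 everywhere):
    dummy = np.full((1024, 2048), 7, dtype=np.uint8)
    save_result(dummy, "example_result.png")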