Datasets

360+x: A Panoptic Multi-modal Scene Understanding Dataset

CVPR, Dataset link: https://x360dataset.github.io/
The 360+x dataset introduces a unique panoptic perspective to scene understanding, differentiating itself from existing datasets by offering multiple viewpoints and modalities captured from a variety of scenes.
For more details please refer to the paper

DyMVHumans: A Multi-View Video Benchmark for High-Fidelity Dynamic Human Modeling

CVPR, Dataset link: https://pku-dymvhumans.github.io/
This is a versatile human-centric dataset for high-fidelity reconstruction and rendering of dynamic human scenarios from dense multi-view videos.
For more details please refer to the paper

Scene Context-Aware Salient Object Detection

ICCV, Dataset link: https://github.com/SirisAvishek/Scene_Context_Aware_Saliency
This is a new dataset for salient object detection that takes the scene context into account.
For more details please refer to the paper
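
As a rough illustration, predicted maps on SOD benchmarks such as this one are commonly scored with the F-measure; below is a minimal sketch of that standard metric with stand-in data (not necessarily this paper's exact evaluation protocol).

```python
# Standard SOD F-measure: precision/recall of a thresholded saliency
# map against a binary ground-truth mask, with the conventional
# beta^2 = 0.3. Stand-in random data; not this paper's official code.
import numpy as np

def f_measure(pred, gt, threshold=0.5, beta_sq=0.3):
    binary = pred >= threshold
    tp = np.logical_and(binary, gt).sum()
    precision = tp / max(binary.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    denom = beta_sq * precision + recall
    return (1 + beta_sq) * precision * recall / denom if denom > 0 else 0.0

pred = np.random.rand(256, 256)       # hypothetical predicted saliency map
gt = np.random.rand(256, 256) > 0.9   # hypothetical binary ground truth
print(f"F-measure: {f_measure(pred, gt):.3f}")
```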

Tactile Sketch Saliency

ACM MM, Dataset link: https://bitbucket.org/JianboJiao/tactilesketchsaliency/src/master/
This is a new dataset on tactile saliency for sketch data, i.e. measuring which region of the object depicted by a sketch is more likely to be touched. For more details please refer to the paper
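
For a sense of how such maps might be scored, here is a minimal sketch of mean absolute error (MAE), a standard saliency-map metric, assuming predicted and ground-truth tactile-saliency maps are stored as grayscale images (the file names are hypothetical):

```python
# MAE between a predicted and a ground-truth saliency map; a standard
# saliency metric, sketched here with hypothetical file names.
import numpy as np
from PIL import Image

pred = np.asarray(Image.open("pred_saliency.png").convert("L"), dtype=np.float32) / 255.0
gt = np.asarray(Image.open("gt_saliency.png").convert("L"), dtype=np.float32) / 255.0

mae = np.abs(pred - gt).mean()  # lower is better
print(f"MAE: {mae:.4f}")
```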

Attention Shift Saliency Ranks

CVPR/IJCV, Dataset link: https://cove.thecvf.com/datasets/325
This is the first large-scale dataset of saliency ranks arising from attention shift.
For more details please refer to the project
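
Predicted rank orders of this kind are commonly scored with Spearman's rank-order correlation (the basis of the SOR metric used in the salient object ranking literature); a minimal sketch with hypothetical rank lists:

```python
# Spearman's rank-order correlation between a ground-truth and a
# predicted attention-shift order; the rank lists are hypothetical.
from scipy.stats import spearmanr

gt_ranks = [1, 2, 3, 4, 5]     # ground-truth order of 5 salient objects
pred_ranks = [1, 3, 2, 4, 5]   # a model's predicted order

rho, _ = spearmanr(gt_ranks, pred_ranks)
print(f"Spearman rho: {rho:.3f}")  # 1.0 means a perfect ranking
```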

Task-driven Webpage Saliency

ECCV, Dataset link: https://quanlzheng.github.io/projects/Task-driven-Webpage-Saliency.html
This is the first dataset for task-driven webpage saliency modelling, i.e. attention may shift with the task at hand when viewing the same webpage. For more details please refer to the paper
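
One simple way to quantify such task-dependent shifts is the linear correlation coefficient (CC), a standard saliency-map similarity metric; a minimal sketch with hypothetical per-task maps for the same webpage:

```python
# CC (Pearson correlation) between saliency maps of the same webpage
# under two tasks; a low CC indicates a large attention shift.
# The maps below are random stand-ins, not dataset samples.
import numpy as np

def cc(map_a, map_b):
    a = (map_a - map_a.mean()) / (map_a.std() + 1e-8)
    b = (map_b - map_b.mean()) / (map_b.std() + 1e-8)
    return (a * b).mean()

shopping = np.random.rand(192, 256)  # stand-in map for a "shopping" task
signup = np.random.rand(192, 256)    # stand-in map for a "sign up" task
print(f"CC between tasks: {cc(shopping, signup):.3f}")
```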

Real Noisy Stereo

IJCV, Dataset link: https://drive.google.com/file/d/1yjQs_fH7SQ-7pSLigklUkNH96SovghWG/view
This dataset provides a group of stereo images captured in real scenes and with real noise.
For more details please refer to the paper
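
A minimal loading sketch, assuming the archive is extracted into paired left/right image folders (a hypothetical layout; check the actual archive structure):

```python
# Iterate over stereo pairs, matching right images to left images by
# file name. The folder names below are assumptions, not the archive's
# documented structure.
from pathlib import Path
from PIL import Image

root = Path("real_noisy_stereo")  # hypothetical extraction root
for left_path in sorted((root / "left").glob("*.png")):
    right_path = root / "right" / left_path.name
    left = Image.open(left_path)
    right = Image.open(right_path)
    print(left_path.name, left.size, right.size)
```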