The MIx Group @ University of Birmingham



Machine Intelligence + X

We are the Machine Intelligence + X (MIx) group at the School of Computer Science, University of Birmingham. Welcome!
Our group mainly studies machine learning and computer vision, and we are also interested in other applied machine learning problems involving multimodal data, neuroscience, healthcare, physics, and chemistry, to name a few. That is where the X comes in.

Key research interests:
  • Learning representations with limited human supervision, e.g. self-/semi-/weakly-supervised learning
  • Multimodal data processing and analysis, e.g. vision-language, vision-audio, etc.
  • Open-world problems, e.g. incremental learning, open-vocabulary visual understanding
  • Visual semantics understanding, e.g. semantic segmentation, saliency modelling
  • 3D problems, e.g. depth estimation, multi-view geometry, 3D generation
  • Healthcare, e.g. medical image understanding and analysis, explainable AI for healthcare
  • AI for science, including neuroscience, physics and chemistry

News

  • Jul 2024: The paper "Show from Tell" is now published in Scientific Reports (Nature Portfolio)! Please check it out here: https://rdcu.be/dNcmb
  • Jul 2024: Three papers accepted to ECCV 2024, congrats to all the co-authors!
  • Mar 2024: Very grateful to be awarded the Amazon Research Award!
  • Feb 2024: Two papers (the 360+x (Oral) multi-modal holistic scene understanding dataset, and the DyMVHumans dynamic multi-view dataset) are accepted to CVPR 2024; another two papers are accepted to ISBI 2024 (Oral) and T-IP. Congrats to all the co-authors!
  • Oct 2023: Two papers (1 Oral, 1 Poster) are accepted to WACV 2024, congrats to all the co-authors!
  • Sep 2023: Grateful to be awarded the Royal Society Short Industry Fellowship!
  • Aug 2023: One paper is accepted to IJCV. Congrats to all the co-authors!
  • Jul 2023: Four papers are accepted to ICCV 2023. Congrats to all the co-authors (esp. the MSc students Hao and Chenyuan)!
  • Apr 2023: Grateful to receive the International Exchanges Grant from The Royal Society!
  • Apr 2023: Two papers are accepted to CVPR 2023 Workshops (Foundation Model, and Sight and Sound) on self-supervised multi-modal (video-text-audio) representation learning.
  • Mar 2023: Two papers are accepted to ICLR 2023 workshops (TML4H and Neural Fields) on medical video quality assessment and neural representations in low-level vision. Congrats to Jong (PhD) and Wentian (MSc)!
  • Oct 2022: Very glad to receive the Best Paper Award at the ECCV 2022 Workshop on Medical Computer Vision! Congrats to the PULSENet team!
  • Sep 2022: One paper on continual learning is accepted to NeurIPS 2022.
  • Aug 2022: One paper on anatomy-aware contrastive medical representation learning is accepted to the ECCV 2022 Workshop on Medical Computer Vision (ECCV-MCV).
  • Feb 2022: Birthday of the MIx group @ the University of Birmingham!



Contact and Join Us

Contact
E-mail: mix.group.uk@gmail.com

Join us
We are always looking for people with strong self-motivation, unusual creativity, and a passion for hard problems! If you share the same interests and passion, please send your CV together with a short description (2–3 sentences) of your research interests to the above email address (with the keywords "[PhD/Postdoc/RA/Visitor/Collaboration application]" in your email subject).

Prospective PhD students
Please apply via the University application system here, and mention the PI's name in your application.



360+x: A Panoptic Multi-modal Scene Understanding Dataset
CVPR 2024. Dataset link: https://x360dataset.github.io/
The 360+x dataset introduces a unique panoptic perspective to scene understanding, differentiating itself from existing datasets by offering multiple viewpoints and modalities captured from a variety of scenes. For more details, please refer to the paper.

DyMVHumans: A Multi-View Video Benchmark for High-Fidelity Dynamic Human Modeling
CVPR 2024. Dataset link: https://pku-dymvhumans.github.io/
A versatile human-centric dataset for high-fidelity reconstruction and rendering of dynamic human scenarios from dense multi-view videos.



PCo3D: Physically Plausible Controllable 3D Generative Models
Amazon Research Award, PI, with Aleš Leonardis
Generative AI has shown remarkable performance across various content-generation applications, showcasing its potential in both academic research and industrial settings. While its effectiveness in generating images and videos is well established, a notable gap remains in 3D content creation, particularly in accounting for physical properties during the generation process. Another gap is the controllability of such physics-aware generation.



*Equal contribution

360+x: A Panoptic Multi-modal Scene Understanding Dataset
Hao Chen, Yuqi Hou, Chenyuan Qu, Irene Testini, Xiaohan Hong, Jianbo Jiao
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Oral Presentation (3.3% of accepted papers), 2024
[PDF] [BibTeX] [arXiv] [Project Page]

DyMVHumans: A Multi-View Video Benchmark for High-Fidelity Dynamic Human Modeling
Xiaoyun Zheng, Liwei Liao, Xufeng Li, Jianbo Jiao, Rongjie Wang, Feng Gao, Shiqi Wang, Ronggang Wang
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024
[PDF] [BibTeX] [arXiv] [Project Page]

Disentangled Pre-training for Image Matting
Yanda Li, Zilong Huang, Gang Yu, Ling Chen, Yunchao Wei, Jianbo Jiao
IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Oral Presentation (2.


Team Members

Jianbo Jiao
Principal Investigator
Jianbo is an Assistant Professor in the School of Computer Science at the University of Birmingham, a Royal Society Short Industry Fellow, and a visiting researcher at the University of Oxford.

Cai Wingfield
Research Fellow (2023 –)
Cai is a Senior Research Data Scientist with the Interdisciplinary Institute for Data Science and AI (IIDSAI) at the University of Birmingham. He received his PhD in theoretical computer science from the University of Bath, and has worked as a researcher in cognitive science at the universities of Cambridge and Lancaster.
