About Us


Welcome to the webpage of the Brain-Inspired Vision Laboratory (BIVLab). We are based in the Department of Automation, School of Information Science and Technology at the University of Science and Technology of China. We aim to address real-world problems through deep-learning approaches and bring artificial intelligence into daily life. To this end, we maintain a research agenda that balances methods and applications. Some of our current areas of interest are:

  • Large Language Models (NEW Topic!🔥🔥🔥), e.g., language agents, multi-modal LLMs
  • Diffusion Models (NEW Topic!🔥🔥🔥), e.g., image generation, video generation
  • Perception for autonomous driving, e.g., 2D/3D/multi-modal object detection, multi-view 3D object detection, semantic segmentation, and depth estimation
  • Model/data compression, e.g., knowledge distillation, domain adaptation, semi-supervised learning, and data evaluation
  • Image quality improvement under adverse conditions, e.g., image enhancement, image dehazing, image deblurring, and multi-task degradation restoration
  • Algorithm design for image reconstruction, e.g., self-supervised learning, signal processing, and optimization methods
  • Human behavior analysis, e.g., facial expression recognition, micro-expression recognition, action recognition, and multi-modal expression recognition
  • OOD generalization, e.g., domain generalization and shortcut learning
  • Brain-inspired intelligence learning, e.g., neuron classification and morphology analysis

News

Feb 2024

We have 3 papers accepted to CVPR 2024!

Dec 2023

We have 1 paper accepted to AAAI 2024!

Sep 2023

We have 3 papers (2 Spotlight) accepted to NeurIPS 2023!

Jul 2023

We have 5 papers accepted to ICCV 2023!

Apr 2023

We have 1 paper accepted to IJCAI 2023!

Mar 2023

We have 1 paper accepted to IEEE TMM 2023 and 1 paper accepted to IEEE TGRS 2023!

Feb 2023

We have 5 papers accepted to CVPR 2023!

Jan 2023

We have 1 paper accepted to ICLR 2023!

Dec 2022

We have 1 paper accepted to IEEE TGRS 2023!

... see all News