Resources

DATASET RELEASES

HI-MIA: Xiaoyi Qin, Hui Bu, Ming Li, “HI-MIA: A Far-Field Text-Dependent Speaker Verification Database and the Baselines”, Proc. of ICASSP 2020, pp. 7609-7613. http://openslr.org/85/

FFSVC20: Xiaoyi Qin, Ming Li, Hui Bu, Wei Rao, Rohan Kumar Das, Shrikanth Narayanan, Haizhou Li, “The INTERSPEECH 2020 Far-Field Speaker Verification Challenge”, Proc. of INTERSPEECH 2020, pp. 3456-3460. http://2020.ffsvc.org/DataDownload

FFSVC22: Xiaoyi Qin, Ming Li, Hui Bu, Shrikanth Narayanan, Haizhou Li, “The 2022 Far-Field Speaker Verification Challenge: Exploring Domain Mismatch and Semi-Supervised Learning under the Far-Field Scenario”, Proc. of the FFSVC 2022 Workshop. https://ffsvc.github.io/

AISHELL-3: Yao Shi, Hui Bu, Xin Xu, Shaoji Zhang, Ming Li, “AISHELL-3: A Multi-Speaker Mandarin TTS Corpus”, Proc. of INTERSPEECH 2021, pp. 2756-2760. http://www.aishelltech.com/aishell_3

DKU-JNU-EMA: Zexin Cai, Xiaoyi Qin (*), Danwei Cai (*), Ming Li, Xinzhong Liu, “The DKU-JNU-EMA Electromagnetic Articulography Database on Mandarin and Chinese Dialects with Tandem Feature Based Acoustic-to-Articulatory Inversion”, Proc. of ISCSLP 2018. https://catalog.ldc.upenn.edu/LDC2019S14

RWF-2000: Ming Cheng, Kunjing Cai, Ming Li, “RWF-2000: An Open Large Scale Video Database for Violence Detection”, Proc. of ICPR 2020. https://github.com/mchengny/RWF2000-Video-Database-for-Violence-Detection

Slingua: Xingming Wang, Hao Wu, Chen Ding, Chuanzeng Huang, Ming Li, “Exploring Universal Singing Speech Language Identification Using Self-Supervised Learning Based Front-End Features”, Proc. of ICASSP 2023. https://github.com/Doctor-Do/Slingua

VoxBlink: Yuke Lin, Xiaoyi Qin, Guoqing Zhao, Ming Cheng, Ning Jiang, Haiying Wu, Ming Li, “VoxBlink: A Large Scale Speaker Verification Dataset on Camera”, Proc. of ICASSP 2024. https://voxblink.github.io/

SlideSpeech: Haoxu Wang, Fan Yu, Xian Shi, Yuezhang Wang, Shiliang Zhang, Ming Li, “SlideSpeech: A Large Scale Slide-Enriched Audio-Visual Corpus”, Proc. of ICASSP 2024. https://slidespeech.github.io/

VoxBlink2: Yuke Lin, Ming Cheng, Fulin Zhang, Yingying Gao, Shilei Zhang, Ming Li, “VoxBlink2: A 100K+ Speaker Recognition Corpus and the Open-Set Speaker-Identification Benchmark”, Proc. of INTERSPEECH 2024.

SMIIP-NV: Zhuojun Wu, Dong Liu, Juan Liu, Yechen Wang, Linxi Li, Liwei Jin, Hui Bu, Pengyuan Zhang, Ming Li, “SMIIP-NV: A Multi-Annotation Non-Verbal Expressive Speech Corpus in Mandarin for LLM-Based Speech Synthesis”, Proc. of ACM Multimedia 2025. https://axunyii.github.io/SMIIP-NV/

SMIIP-TV: Xiaoyi Qin, Na Li, Shufei Duan, Ming Li, “Investigating Long-Term and Short-Term Time-Varying Speaker Verification”, IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2024. https://openslr.org/156/

VCapAV: Yuxi Wang, Yikang Wang, Qishan Zhang, Hiromitsu Nishizaki, Ming Li, “VCapAV: A Video-Caption Based Audio-Visual Deepfake Detection Dataset”, Proc. of INTERSPEECH 2025. https://vcapav.github.io/

SSTC24: Ze Li, Yuke Lin, Yao Tian, Hongbin Suo, Pengyuan Zhang, Yanzhen Ren, Zexin Cai, Hiromitsu Nishizaki, Ming Li, “The Database and Benchmark for the Source Speaker Tracing Challenge 2024”, Proc. of SLT 2024. https://sstc-challenge.github.io/

TMCSpeech: Dong Liu, Yueqian Lin, Yunfei Xu, Ming Li, “TMCSpeech: A Chinese TV and Movie Speech Dataset with Character Descriptions and a Character-Based Voice Generation Model”, Proc. of ICPR 2024. https://raydonld.github.io/TMCSPEECH/

KunquDB: Huali Zhou, Yuke Lin, Dong Liu, Ming Li, “KunquDB: An Attempt for Speaker Verification in the Chinese Opera Scenario”, Proc. of ICPR 2024. https://hualizhou167.github.io/KunquDB/

TheSound-test: Xiaoyi Qin, Ze Li, Dong Liu, Ming Li, “Speaker Verification in Deliberately Disguised Scenarios”, Computer Engineering and Applications, 2024. https://github.com/DDS-SV/TheSound-test

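Several of the corpora above are hosted on OpenSLR (HI-MIA at http://openslr.org/85/, SMIIP-TV at https://openslr.org/156/). As a minimal sketch, assuming OpenSLR's usual openslr.org/resources/<id>/<file> download layout and using a placeholder archive name (the real file names are listed on each resource's index page), one way to fetch and unpack such a corpus with standard-library Python is:

    # Minimal sketch: fetch and unpack an OpenSLR-hosted corpus archive.
    # NOTE: "train.tar.gz" is a placeholder; check the resource index page
    # (e.g., http://openslr.org/85/ for HI-MIA) for the actual archive names.
    import tarfile
    import urllib.request
    from pathlib import Path

    RESOURCE_ID = 85                 # HI-MIA is OpenSLR resource 85
    ARCHIVE = "train.tar.gz"         # placeholder archive name
    URL = f"https://openslr.org/resources/{RESOURCE_ID}/{ARCHIVE}"
    DEST = Path("data/hi-mia")

    DEST.mkdir(parents=True, exist_ok=True)
    local_path = DEST / ARCHIVE

    # Download the archive once; skip if it is already on disk.
    if not local_path.exists():
        urllib.request.urlretrieve(URL, local_path)

    # Extract next to the archive and list a few wav files as a sanity check.
    with tarfile.open(local_path, "r:gz") as tar:
        tar.extractall(DEST)
    for wav in sorted(DEST.rglob("*.wav"))[:5]:
        print(wav)

The same pattern applies to the other OpenSLR entry (SMIIP-TV, resource 156), substituting the archive names listed on its index page.
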
Co-Organized Challenges

SLT 2024: Source Speaker Tracing Challenge (SSTC2024) https://sstc-challenge.github.io/

INTERSPEECH 2022: Far Field Speaker Verification Challenge (FFSVC 22) https://ffsvc.github.io

ISCSLP 2021: Personalized Voice Trigger Challenge (PVTC) https://www.pvtc.org.cn/

INTERSPEECH 2020: Far Field Speaker Verification Challenge (FFSVC 20) http://2020.ffsvc.org/