Hetu is a high-performance distributed deep learning system developed by the DAIR Lab at Peking University, targeting the training of DL models with trillions of parameters. It takes into account both high availability for industry and innovation in academia, and has a number of advanced characteristics, including applicability.
2020-11-19 · Peking University, Xiaohu SUN. Introduction: • Non-resonant HH production is predicted by the SM and beyond. • The Higgs self-coupling controls the HH rate. • EFT vertices can also lead to HH. • This talk presents an overview of the current status of non-resonant HH searches in ATLAS and CMS. 1. Non-resonant HH with ggF production
2019-4-2 · A Multi-task Learning Approach for Mandarin-English Code-Switching Conversational Speech Recognition. Xiao Song1,2, Yi Liu2, Daming Yang3, and Yuexian Zou1. 1 ADSPLAB/Intelligent Lab, School of ECE, Peking University, Shenzhen 518055, China; 2 IMSL Shenzhen Key Lab, PKU-HKUST Shenzhen Hong Kong Institution, China; 3 PKU Shenzhen Institute, China. zouyx@pkusz.edu.cn
2020-12-4 · Jinfeng Kang received his B.S. degree in physics from Dalian University of Technology in 1984 and his Ph.D. degree in solid-state electronics from Peking University in 1992. He is a full professor in the School of Electronics Engineering and Computer Science, Peking University, with research interests in novel nano devices for computing and data storage.
Peking University, China. A Joint Dynamic Ranking System with DNN and Vector-based Clustering Bandit. Pages 645–650. Fei Sun, Yu Zhu, and Keping Yang. 2019. Deep Session Interest Network for Click-Through Rate Prediction. In Proceedings of the Twenty-Eighth International Joint Conference
2020-12-30 · The authors express gratitude to Mr. Shuai Zheng of Peking University for his assistance with the results visualization of the 3D case. Appendix A: this appendix provides the data matching and prediction results for observation points 3, 4, and 5, as introduced in Subsection 6.5.
2021-1-6 · Adversarial Robustness through Disentangled Representations. Shuo Yang1, Tianyu Guo2, Yunhe Wang3, Chang Xu1. 1 School of Computer Science, Faculty of Engineering, The University of Sydney, Australia; 2 Key Laboratory of Machine Perception (MOE), CMIC, School of EECS, Peking University, China; 3 Huawei Noah's Ark Lab. syan9630@uni.sydney.edu, tianyuguo@pku.edu.cn
Lingxiao Ma and Zhi Yang, Peking University; Youshan Miao, Jilong Xue, Ming Wu, and Lidong Zhou, Microsoft Research; Yafei Dai, Peking University. Optimizing CNN Model Inference on CPUs. Paper. Yizhi Liu, Yao Wang, Ruofei Yu, Mu Li, Vin Sharma, and Yida Wang, Amazon
2021-6-6 · Yang Zhao, Ronggang Wang. With the development of deep neural networks (DNNs), many DNN-based super-resolution (SR) models have achieved state-of-the-art (SOTA) performance.
2021-6-3 · DNNs can be applied directly to signal-and-noise discrimination, since it is a classification task. With a sufficient training set, DNNs can achieve up to 99.2% precision (Li et al. 2018) and 99.5% precision (Meier et al. 2019) in different regions.
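The framing above can be sketched with a toy binary classifier. This is an illustrative sketch only: the sinusoid-plus-noise data, the one-hidden-layer NumPy network, and every hyperparameter below are assumptions for the demo, not the setups of the cited studies.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_window(is_signal, n=64):
    """A 'signal' window is a sinusoid buried in noise; 'noise' is noise only."""
    t = np.arange(n)
    w = rng.normal(0.0, 1.0, n)
    if is_signal:
        w += 3.0 * np.sin(0.3 * t)
    return w

X = np.array([make_window(i % 2 == 0) for i in range(400)])
y = (np.arange(400) % 2 == 0).astype(float)        # 1 = signal, 0 = noise

W1 = rng.normal(0, 0.1, (64, 16)); b1 = np.zeros(16)   # hidden layer
W2 = rng.normal(0, 0.1, 16);       b2 = 0.0            # output neuron

for _ in range(300):                          # full-batch gradient descent
    h = np.tanh(X @ W1 + b1)                  # hidden activations
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # predicted P(signal)
    g = (p - y) / len(y)                      # d(logistic loss)/d(logit)
    gW2 = h.T @ g; gb2 = g.sum()
    gh = np.outer(g, W2) * (1.0 - h**2)       # backprop through tanh
    gW1 = X.T @ gh; gb1 = gh.sum(axis=0)
    W1 -= 0.5 * gW1; b1 -= 0.5 * gb1
    W2 -= 0.5 * gW2; b2 -= 0.5 * gb2

acc = float(((p > 0.5) == (y == 1)).mean())
print(f"training accuracy: {acc:.3f}")
```

On this synthetic, strongly separable data the network reaches near-perfect training accuracy; the precision figures in the snippet above refer to far harder, real-world detection problems.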
2021-1-15 · RAMMER: Enabling Holistic Deep Learning Compiler Optimizations with rTasks. Lingxiao Ma∗†§, Zhiqiang Xie∗‡§, Zhi Yang†, Jilong Xue§, Youshan Miao§, Wei Cui§, Wenxiang Hu§, Fan Yang§, Lintao Zhang§, Lidong Zhou§. †Peking University, ‡ShanghaiTech University, §Microsoft Research. Abstract: Performing Deep Neural Network (DNN) computation on hardware accelerators efficiently is challenging.
2018-6-4 · Rammer: Enabling Holistic Deep Learning Compiler Optimizations with rTasks (OSDI 2020). Lingxiao Ma, Peking University and Microsoft Research; Zhiqiang Xie, ShanghaiTech University and Microsoft Research; Zhi Yang, Peking University
2020-11-21 · Proceedings of Machine Learning Research 95:614–629, 2018. ACML 2018. Optimization Algorithm Inspired Deep Neural Network Structure Design. Huan Li (lihuanss@pku.edu.cn), Key Lab. of Machine Perception (MOE), School of EECS, Peking University, Beijing, China
Lingxiao Ma and Zhi Yang Peking University Youshan Miao Jilong Xue Ming Wu and Lidong Zhou Microsoft Research Yafei Dai Peking University Optimizing CNN Model Inference on CPUs Paper Yizhi Liu Yao Wang Ruofei Yu Mu Li Vin Sharma and Yida Wang Amazon
2021-7-20 · Rammer generates an efficient static spatio-temporal schedule for a DNN at compile time to minimize scheduling overhead. It maximizes hardware utilization by holistically exploiting parallelism through inter- and intra-operator co-scheduling. Rammer achieves this by proposing several novel hardware-neutral and clean abstractions for the
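The idea above can be sketched as a toy compile-time scheduler. This is purely hypothetical pseudocode: `static_schedule`, the rTask counts, and the greedy placement rule are assumptions for illustration, not Rammer's actual abstractions or implementation.

```python
from collections import deque

# Each operator is split into fine-grained "rTasks"; a compile-time pass
# assigns every rTask a (execution unit, time step) slot. rTasks from
# independent operators share the time axis (inter-operator parallelism),
# while each operator's own rTasks spread across units (intra-operator).

def static_schedule(ops, deps, num_units):
    """ops: {op: number of rTasks}; deps: {op: set of prerequisite ops}.
    Returns ({unit: [(time, op, rtask_index), ...]}, makespan)."""
    ready = deque(o for o in ops if not deps[o])
    done = set()
    unit_free = [0] * num_units          # next free time step per unit
    schedule = {u: [] for u in range(num_units)}
    op_finish = {}
    while ready:
        op = ready.popleft()
        # an operator may start only after all its prerequisites finish
        start = max((op_finish[d] for d in deps[op]), default=0)
        end = start
        for i in range(ops[op]):
            # greedy: put each rTask on the unit that frees up earliest
            u = min(range(num_units), key=lambda k: max(unit_free[k], start))
            t = max(unit_free[u], start)
            schedule[u].append((t, op, i))
            unit_free[u] = t + 1
            end = max(end, t + 1)
        op_finish[op] = end
        done.add(op)
        for o in ops:                    # release newly-ready operators
            if o not in done and o not in ready and deps[o] <= done:
                ready.append(o)
    return schedule, max(unit_free)

# Two independent convolutions feed an elementwise add: with 4 units the
# conv rTasks run side by side at step 0, and the add follows at step 1.
ops = {"conv1": 2, "conv2": 2, "add": 2}
deps = {"conv1": set(), "conv2": set(), "add": {"conv1", "conv2"}}
sched, makespan = static_schedule(ops, deps, num_units=4)
print("makespan:", makespan)  # 2 steps instead of 3 for op-at-a-time
```

Because the whole placement is computed ahead of time, no scheduling decisions remain at run time, which is the overhead the snippet above says Rammer is designed to eliminate.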
tions Center for Data Science, Peking University. Lingxiao Ma and Zhi Yang contributed equally to this work. The work was done while Lingxiao Ma was an intern and Zhi Yang was a visiting researcher at Microsoft Research. ... sufficiently. The lack of system support has seriously limited the ability to explore the full potential of GNNs at scale.
2018-7-25 · Institute of Computer Science and Technology, Peking University, Beijing 100871, China. pengyuxin@pku.edu.cn. Abstract: DNN-based cross-modal retrieval is a research hotspot for retrieval across different modalities, such as image and text, but existing methods often face the challenge of insufficient cross-modal training data.
2021-7-22 · Li, Yang, Chen, and Lin revealed the connection between optimization algorithms and DNN structure design in a theoretically rigorous way. It is an analogy; however, an analogy does not mean the work is unsound. For example, the DNN itself was inspired by the brain. That is also an analogy, with no strict connection to the brain, yet no one can say that DNNs are insignificant.
2019-12-14 · Northwestern University, IL. Conducted benchmarking tests on state-of-the-art human pose estimation models OpenPose and AlphaPose; collected and annotated human action data (4,000 samples across 5 classes); extracted human skeletons and features from the original data using OpenPose; developed and trained a DNN model with Keras/TensorFlow.