Institute of Theoretical Physics |
Key Laboratory of Theoretical Physics |
Chinese Academy of Sciences |
Seminar |
Title |
Exploration of Parameter Redundancy and Compression in Deep Neural Networks |
Speaker |
Dr. Yu Cheng |
Affiliation |
IBM T.J. Watson Research Center, USA |
Date |
22 July (Friday), 10:30 - 11:30 |
Venue |
Conference Hall 6420, ITP new building |
Abstract |
Compressing deep neural networks has attracted a lot of attention recently. In this talk, we will discuss several of our previous and ongoing works on parameter redundancy in deep convolutional nets. In particular, the talk focuses on how to use structure transformation to reduce the number of parameters and how to design efficient architectures in deep nets. We will present extensive empirical studies on several standard datasets to demonstrate the effectiveness of our approaches. |
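As a generic illustration of the kind of structure transformation the abstract refers to (this is a standard low-rank factorization sketch, not necessarily the speaker's specific method): a dense weight matrix with m x n parameters can be replaced by two factors of rank r, cutting the parameter count from m*n to r*(m + n).

```python
import numpy as np

# Assumed layer sizes for illustration; r is the chosen low rank.
m, n, r = 1024, 1024, 64

full_params = m * n            # parameters of the original dense layer
low_rank_params = r * (m + n)  # parameters after the factorization W ~ U @ V

# Factorize a (random, stand-in) weight matrix via truncated SVD.
rng = np.random.default_rng(0)
W = rng.standard_normal((m, n))
U, s, Vt = np.linalg.svd(W, full_matrices=False)
U_r = U[:, :r] * s[:r]  # shape (m, r), singular values folded into U
V_r = Vt[:r, :]         # shape (r, n)
W_approx = U_r @ V_r    # low-rank approximation of W

print(f"compression ratio: {full_params / low_rank_params:.1f}x")  # 8.0x
```

With these example sizes the factorized layer stores 8x fewer parameters; in practice r is tuned to trade accuracy against compression.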
Contact person |
Hai-Jun Zhou |