Exploration of Parameter Redundancy and Compression in Deep Neural Networks
2016-07-22

Institute of Theoretical Physics

Key Laboratory of Theoretical Physics

Chinese Academy of Sciences

Seminar

Title: Exploration of Parameter Redundancy and Compression in Deep Neural Networks

Speaker: Dr. Yu Cheng

Affiliation: IBM T.J. Watson Research Center, USA

Date: 22 July 2016 (Friday), 10:30-11:30

Venue: Conference Hall 6420, ITP new building

Abstract: Compressing deep neural networks has attracted a lot of attention recently. In this talk, we will discuss several of our previous and ongoing works on parameter redundancy in deep convolutional nets. In particular, the talk focuses on how to use structure transformations to reduce parameters and how to design efficient architectures for deep nets. We will present extensive empirical studies on several standard datasets demonstrating the effectiveness of our approaches. (An illustrative sketch of one such compression idea appears below.)

Contact person: Hai-Jun Zhou
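
As a rough illustration of the kind of parameter compression the abstract refers to, here is a minimal Python/NumPy sketch (not the speaker's method; the layer size and rank are illustrative) that approximates a dense layer's weight matrix with a rank-r truncated SVD, replacing m*n parameters with r*(m+n):

    import numpy as np

    # Minimal sketch: compress a dense weight matrix W (m x n) with a
    # rank-r truncated SVD. This is one standard route to exploiting
    # parameter redundancy, not necessarily the method of the talk.
    def low_rank_compress(W, r):
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        A = U[:, :r] * s[:r]   # (m, r): left factors scaled by singular values
        B = Vt[:r, :]          # (r, n): right factors
        return A, B            # W is approximated by A @ B

    rng = np.random.default_rng(0)
    W = rng.standard_normal((1024, 512))   # illustrative layer size
    A, B = low_rank_compress(W, r=32)
    kept = (A.size + B.size) / W.size
    err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
    print(f"parameters kept: {kept:.1%}, relative reconstruction error: {err:.3f}")

For a random 1024 x 512 matrix at rank 32, this keeps under 10% of the parameters at the cost of a large reconstruction error; trained weight matrices are typically much closer to low rank, which is exactly the redundancy that compression methods of this kind exploit.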

 
