Deep Neural Networks

• “on our test set we saw double the average precision when compared to other approaches we had tried. We acquired the rights to the technology and went full speed ahead adapting it to run at large scale on Google’s computers. We took cutting edge research straight out of an academic research lab and launched it, in just a little over six months.” – Google
Traditional Machine Learning
How can these problems be tackled?
Feature representation
A good feature representation is crucial to the accuracy of the final algorithm. So can features be learned automatically? Most of a system's computation and testing effort is spent on this stage, yet in practice it is almost always done by hand.
Yes, they can: Deep Learning
Biological Inspiration
The visual mechanism of the human brain
"Information processing in the visual system": the visual cortex is hierarchical. The pathway from the retina through the nerve centers to the brain may work as a process of repeated iteration and abstraction. Two keywords: abstraction and iteration.
Deep architectures are effective (vision, audio, NLP, etc.)!
Computer Vision Features
Audio Features
The Basic Idea of Deep Learning
Learn features automatically
Suppose we have a set of inputs I (such as images or text) and we design an n-layer system S. By adjusting the parameters of S so that its output reproduces the input I, we automatically obtain a hierarchy of features of I, namely S1, …, Sn. The idea of deep learning is precisely to stack many such layers.
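This layer-stacking idea can be sketched as a greedy, layer-wise stack of autoencoders: each layer is trained to reconstruct its own input, and its hidden features then become the input of the next layer. A minimal toy illustration (the layer sizes, learning rate, and random data are our assumptions, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_hidden, lr=0.5, epochs=300):
    """Train one layer so that its output reconstructs its own input X."""
    n_in = X.shape[1]
    W_enc = rng.normal(0.0, 0.1, (n_in, n_hidden))
    W_dec = rng.normal(0.0, 0.1, (n_hidden, n_in))
    for _ in range(epochs):
        H = sigmoid(X @ W_enc)            # hidden features
        R = H @ W_dec                     # linear reconstruction of X
        err = (R - X) / len(X)            # gradient of 0.5 * mean squared error
        grad_dec = H.T @ err
        grad_enc = X.T @ (err @ W_dec.T * H * (1.0 - H))
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc
    return W_enc

# A toy input set I: 64 samples, 8 dimensions.
X = rng.random((64, 8))

# Greedy stacking: the features of each layer become the input of the next,
# giving a hierarchy of representations S1, S2, ...
W1 = train_autoencoder(X, n_hidden=6)
S1 = sigmoid(X @ W1)
W2 = train_autoencoder(S1, n_hidden=4)
S2 = sigmoid(S1 @ W2)
print(S2.shape)   # (64, 4)
```

Each call to `train_autoencoder` only looks at its own layer, which is what makes the scheme "greedy"; in practice the whole stack is usually fine-tuned afterwards with supervised back propagation.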
Timeline: 1986, neural networks and back propagation (Nature); 2006, deep belief nets (Science); 2011, deep learning results in speech
• Solved general learning problems
• Tied with biological systems
But it was given up…
The Deep Learning Wave
Deep Learning
The Deep Learning Wave
Background: the data explosion
Many problems still lack good solutions, for example image recognition, speech recognition, natural language understanding, weather forecasting, gene expression analysis, and content recommendation.
The Deep Learning Wave
Background: rising computational power
Motivation: Why Deep Learning?
Deep Learning
What is Deep Learning?
– Baidu
Timeline: 1986, neural networks and back propagation; 2006, deep belief nets (Science); 2011, speech; 2012; 2014, face recognition
Deep learning achieves 99.53% face verification accuracy on Labeled Faces in the Wild (LFW), higher than human performance
– GPUs
– Multi-core computer systems
• Large-scale databases
Big Data!
The Deep Learning Wave
IT Companies are Racing into Deep Learning
Hand-crafted features vs. deep learning
ImageNet 2014 – Image classification challenge
“Deep learning is a set of algorithms in machine learning that attempt to learn in multiple levels, corresponding to different levels of abstraction. It typically uses artificial neural networks. The levels in these learned statistical models correspond to distinct levels of concepts, where higher-level concepts are defined from lower-level ones, and the same lower-level concepts can help to define many higher-level concepts.” (Oct. 2013)
“Deep learning is a set of algorithms in machine learning that attempt to model high-level abstractions in data by using model architectures composed of multiple non-linear transformations.” (Aug. 2014)
Neural network Back propagation
Deep belief net Science
Speech
1986
2006
2011
2012
• Google and Baidu announced their deep learning based visual search engines (2013)
Deep Neural Networks I
Deep Neural Networks
Institute of Automation, Chinese Academy of Sciences
Gaowei Wu  gaowei.wu@ia.ac.cn  2016-12-6
Outline
History and background of deep neural networks
Motivation: Why Deep Learning?
Common deep learning models
History
1986: Neural networks and back propagation (Nature)
• Solved general learning problems
• Tied with biological systems
History
1986: Neural networks and back propagation (Nature)
[Figure: a single neuron combining inputs x1, x2, x3 through weights w1, w2, w3]
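The single-neuron figure can be made concrete with a short sketch (our own toy example, not from the slides): one sigmoid unit with weights w1, w2, w3 trained by back propagation toward a target output.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One neuron: inputs x1, x2, x3 weighted by w1, w2, w3, as in the figure.
x = np.array([1.0, 0.5, -0.3])
w = np.array([0.2, -0.4, 0.1])
b = 0.0
target = 1.0      # desired output for this toy example
lr = 0.5          # learning rate (chosen arbitrarily)

for _ in range(100):
    y = sigmoid(w @ x + b)                 # forward pass
    delta = (y - target) * y * (1.0 - y)   # gradient of 0.5 * (y - target)**2
    w -= lr * delta * x                    # back propagation: update weights
    b -= lr * delta

print(sigmoid(w @ x + b))   # output has moved toward the target
```

The 1986 result was showing that this delta rule extends through multiple layers by applying the chain rule, which is what made multi-layer networks trainable.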
ImageNet 2012 classification, rank 4: Xerox/INRIA, error rate 0.27058
Object recognition over 1,000,000 images and 1,000 categories (2 GPUs)
A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” NIPS, 2012.
ImageNet 2013 classification, ranks 2 and 3: NUS, error rate 0.12535 (deep learning); Oxford, error rate 0.13555 (deep learning)
MSRA, IBM, Adobe, NEC, Clarifai, Berkeley, U. Tokyo, UCLA, UIUC, Toronto… Top 20 groups all used deep learning
• ImageNet 2013 – object detection challenge
Rank | Name | Mean Average Precision | Description
1 | UvA-Euvision | 0.22581 | Hand-crafted features
2 | NEC-MU | 0.20895 |
3 | NYU | 0.19400 |
ImageNet 2012 – image classification challenge
Rank | Name | Error rate | Description
1 | U. Toronto | 0.15315 | Deep learning
2 | U. Tokyo | 0.26172 | Hand-crafted features and learning models; a bottleneck
3 | U. Oxford | 0.26979 | Hand-crafted features and learning models; a bottleneck
4 | Xerox/INRIA | 0.27058 | Hand-crafted features and learning models; a bottleneck
Y. Sun, X. Wang, and X. Tang. Deep Learning Face Representation by Joint Identification-Verification. NIPS, 2014.
Y. Sun, X. Wang, and X. Tang. Deeply learned face representations are sparse, selective, and robust. CVPR, 2015.
• Unsupervised, layer-wise pre-training
• Better designs for modeling and training (normalization, nonlinearity, dropout)
• New developments in computer architectures
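As one concrete instance of the training techniques listed above, inverted dropout can be sketched as follows (a toy illustration; the drop probability and tensor shapes are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def dropout(h, p_drop, training):
    """Inverted dropout: zero random units at train time, identity at test time."""
    if not training:
        return h
    mask = rng.random(h.shape) >= p_drop   # keep each unit with prob 1 - p_drop
    return h * mask / (1.0 - p_drop)       # rescale to keep the expected value

h = np.ones((4, 10))
h_train = dropout(h, p_drop=0.5, training=True)   # entries are 0.0 or 2.0
h_test = dropout(h, p_drop=0.5, training=False)   # unchanged at test time
print(h_test.mean())   # 1.0
```

Randomly silencing units prevents co-adaptation of features; the rescaling by 1/(1 - p_drop) means no change is needed at test time.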
That is, the output of one layer serves as the input of the next layer; in this way, the input information is represented at successively higher levels.
The constraint that the output must equal the input can be relaxed slightly.
Deep vs. Shallow Neural Networks
An artificial neural network with multiple hidden layers has an excellent ability to learn features, and the learned features characterize the data more fundamentally, which benefits visualization and classification.
In a deep architecture, higher layers can integrate and exploit the information of lower layers: lower layers attend to "local" patterns, while higher layers attend to "global", more semantic structure.
But it was given up…
• SVM
• Boosting
• Decision trees
• …
History
1986: Neural networks and back propagation (Nature); 2006: Deep belief nets (Science)
Neural networks are coming back!
Rank | Name | Error rate | Description
1 | Google | 0.06656 | Deep learning
2 | Oxford | 0.07325 | Deep learning
3 | MSRA | 0.08062 | Deep learning
• ImageNet 2014 – object detection challenge
Starting from the raw signal, the system forms low-level abstractions and iterates toward higher-level ones. Human logical thinking routinely operates on highly abstract concepts.
Different levels of abstraction
Hierarchical representation
The deep structure of the brain
Why go deep?
• Deep architectures can represent functions efficiently
• Fewer computational units are needed for the same function
• Deep architectures produce hierarchical feature representations
• They allow non-local generalization, and improve interpretability
• Multiple layers of latent variables allow combinatorial sharing of statistical strength
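The "fewer units for the same function" claim is classically illustrated with n-bit parity: a chain of XOR units grows linearly with n, while a depth-two sum-of-products formula needs one term per odd-weight input pattern, i.e. exponentially many. A small sketch of this comparison (our illustration, not from the slides):

```python
from functools import reduce
from itertools import product
import operator

n = 10
all_inputs = list(product([0, 1], repeat=n))

# "Deep" computation: a chain of n - 1 XOR units, one per additional bit.
def parity_deep(bits):
    return reduce(operator.xor, bits)

# "Shallow" depth-two computation: one AND term per odd-weight pattern,
# all feeding a single OR -- 2**(n - 1) terms in total.
odd_patterns = [p for p in all_inputs if sum(p) % 2 == 1]
odd_set = set(odd_patterns)

def parity_shallow(bits):
    return int(tuple(bits) in odd_set)

# Both representations compute the same function...
assert all(parity_deep(b) == parity_shallow(b) for b in all_inputs)
# ...but at very different cost.
print(n - 1, len(odd_patterns))   # 9 XOR units deep vs. 512 terms shallow
```

The gap widens exponentially with n, which is the intuition behind the depth-efficiency claim above.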
ImageNet 2013 – image classification challenge
Rank | Name | Error rate | Description
1 | NYU | 0.11197 | Deep learning
Rank | Name | Mean Average Precision | Description
1 | Google | 0.43933 | Deep learning
2 | CUHK | 0.40656 | Deep learning
3 | DeepInsight | 0.40452 | Deep learning
4 | UvA-Euvision | 0.35421 | Deep learning
5 | Berkeley Vision | 0.34521 | Deep learning