Introduction to Deep Learning
Huihui Liu, Mar. 1, 2017

Outline
• Conception of deep learning
• Development history
• Deep learning frameworks
• Deep neural network architectures
• Convolutional neural networks
  - Introduction
  - Network structure
  - Training tricks
  - Application in Aesthetic Image Evaluation
  - Idea

Deep Learning (Hinton, 2006)

• Deep learning is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data.
• The advantage of deep learning is that it extracts features automatically instead of relying on manually designed features.

Applications: computer vision, speech recognition, natural language processing.
合集下载
  1. 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
  2. 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
  3. 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。
Development History

• 1943: MP neuron model (W.S. McCulloch & W. Pitts)
• 1958: Single-layer Perceptron (Rosenblatt)
• 1969: XOR problem exposes the limits of the perceptron (Marvin Minsky)
• 1986: BP algorithm (Hinton et al.)
• 1989: LeNet, the early CNN (Yann LeCun)
• 1991: gradient disappearance (vanishing gradient) problem
• 1995: SVM
• 1997: LSTM
• 2006: DBN, the beginning of deep learning (Hinton)
• 2011: ReLU
• 2012: Dropout, AlexNet (Hinton's group)
• 2015: BN, Faster R-CNN, ResidualNet

Key figures: Geoffrey Hinton, Yann LeCun, Yoshua Bengio.
Deep Learning Frameworks
Neural Networks

Neuron: a unit that computes a weighted sum of its inputs and applies a nonlinear activation function.
Neural network: layers of such neurons connected together, with the outputs of one layer serving as the inputs of the next.
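A minimal sketch of these two ideas, assuming NumPy (all names below are illustrative, not from the slides):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def neuron(x, w, b):
        # Weighted sum of the inputs followed by a nonlinear activation.
        return sigmoid(np.dot(w, x) + b)

    def dense_layer(x, W, b):
        # A layer is many neurons sharing the same input vector.
        return sigmoid(W @ x + b)

    x = np.array([0.5, -1.0, 2.0])      # 3 inputs
    W = np.random.randn(4, 3) * 0.1     # 4 neurons, 3 weights each
    b = np.zeros(4)
    print(dense_layer(x, W, b))         # 4 activations in (0, 1)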
Deep neural network architectures

• Deep Belief Networks (DBN)
• Recurrent Neural Networks (RNN)
• Generative Adversarial Networks (GANs)
• Convolutional Neural Networks (CNN)
• Long Short-Term Memory (LSTM)
RNN (Recurrent Neural Network, 2013)

What?
An RNN is designed to process sequence data: it remembers previous information and applies it to the computation of the current output. The nodes of the hidden layer are connected to each other, so the input of the hidden layer includes not only the output of the input layer but also the hidden layer's own output from the previous step.
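A minimal sketch of this recurrence, assuming NumPy (weight names are illustrative):

    import numpy as np

    def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
        # The new hidden state mixes the current input with the previous
        # hidden state, which is how the network "remembers".
        return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

    rng = np.random.default_rng(0)
    W_xh = rng.normal(0, 0.1, (8, 4))    # input-to-hidden weights
    W_hh = rng.normal(0, 0.1, (8, 8))    # hidden-to-hidden weights (the memory)
    b_h = np.zeros(8)

    h = np.zeros(8)                      # initial hidden state
    for x_t in rng.normal(size=(5, 4)):  # a toy sequence of 5 input vectors
        h = rnn_step(x_t, h, W_xh, W_hh, b_h)
    print(h)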
How to train?
BPTT (Back-Propagation Through Time): the network is unrolled over the time steps of the sequence and standard back-propagation is applied to the unrolled graph.
Applications?
• Machine translation
• Generating image descriptions
• Speech recognition
Marhon S A, Cameron C J F, Kremer S C. Recurrent Neural Networks[M]// Handbook on Neural Information Processing. Springer Berlin Heidelberg, 2013:29-65.
Long Short-Term Memory (LSTM, 1997)

LSTM is a recurrent architecture that addresses the gradient disappearance (vanishing gradient) problem of plain RNNs: gated memory cells let the network keep information over long sequences.
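A minimal sketch of one LSTM step under the standard gate equations, assuming NumPy (names are illustrative):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x_t, h_prev, c_prev, W, b):
        # W maps [h_prev; x_t] to four stacked gates:
        # input (i), forget (f), output (o) and candidate (g).
        z = W @ np.concatenate([h_prev, x_t]) + b
        H = h_prev.size
        i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
        g = np.tanh(z[3*H:])
        c = f * c_prev + i * g       # forget old memory, write new candidate
        h = o * np.tanh(c)           # expose a gated view of the cell state
        return h, c

    rng = np.random.default_rng(0)
    H, D = 8, 4
    W = rng.normal(0, 0.1, (4 * H, H + D))
    b = np.zeros(4 * H)
    h, c = np.zeros(H), np.zeros(H)
    for x_t in rng.normal(size=(5, D)):
        h, c = lstm_step(x_t, h, c, W, b)
    print(h)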
Convolutional Neural Networks (CNN)

A convolutional neural network is a kind of feedforward neural network characterized by a simple structure, few training parameters and strong adaptability. A CNN avoids complex image pre-processing (e.g., extracting hand-crafted features): the original image can be fed into the network directly.
Basic components: convolution layers, pooling layers, fully connected layers.
Convolution layer

Local receptive fields and weight sharing greatly reduce the number of parameters.
The convolution kernel translates over a 2-dimensional plane; each element of the kernel is multiplied by the image element at the corresponding position, and the products are summed. Moving the kernel over every position yields a new image (the feature map) made of these sums of products.
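A minimal sketch of this operation, assuming NumPy (single channel, "valid" positions only; like most deep-learning libraries, it is technically cross-correlation):

    import numpy as np

    def conv2d(image, kernel):
        # Slide the kernel over the image; each output pixel is the sum of
        # elementwise products between the kernel and the patch under it.
        kh, kw = kernel.shape
        oh = image.shape[0] - kh + 1
        ow = image.shape[1] - kw + 1
        out = np.zeros((oh, ow))
        for i in range(oh):
            for j in range(ow):
                out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
        return out

    image = np.arange(25, dtype=float).reshape(5, 5)
    kernel = np.array([[1., 0.], [0., -1.]])   # a toy 2x2 kernel
    print(conv2d(image, kernel))               # a 4x4 feature map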
Pooling layer

The pooling layer compresses the input feature map, which reduces the number of parameters in the training process and the degree of over-fitting of the model.
Max-pooling: select the maximum value in the pooling window.
Mean-pooling: compute the average of all values in the pooling window.
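A matching sketch of both variants (same NumPy assumption, non-overlapping windows):

    import numpy as np

    def pool2d(fmap, size=2, mode="max"):
        # Keep one value per size x size window: its maximum (max-pooling)
        # or its mean (mean-pooling).
        h, w = fmap.shape[0] // size, fmap.shape[1] // size
        out = np.zeros((h, w))
        for i in range(h):
            for j in range(w):
                window = fmap[i*size:(i+1)*size, j*size:(j+1)*size]
                out[i, j] = window.max() if mode == "max" else window.mean()
        return out

    fmap = np.arange(16, dtype=float).reshape(4, 4)
    print(pool2d(fmap, 2, "max"))    # window maxima
    print(pool2d(fmap, 2, "mean"))   # window means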
Fully connected layer and Softmax layer

Each node of the fully connected layer is connected to all the nodes of the previous layer; it combines the features extracted by the front layers.

Fig1. Fully connected layer.
Fig2. Complete CNN structure.
Fig3. Softmax layer.
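A small sketch of the softmax computation at the end of the network, assuming NumPy (with the usual max-subtraction for numerical stability):

    import numpy as np

    def softmax(logits):
        # Subtracting the max avoids overflow in exp without changing the result.
        e = np.exp(logits - logits.max())
        return e / e.sum()

    logits = np.array([2.0, 1.0, 0.1])   # raw scores from the fully connected layer
    probs = softmax(logits)
    print(probs, probs.sum())            # class probabilities summing to 1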
Training and Testing

Before the training stage, we should initialize the weights with small random numbers.
Training stage:
Forward propagation
- Take a sample (X, Yp) from the training set and feed X into the network.
- Compute the corresponding actual output Op.
Back propagation
- Compute the difference between the actual output Op and the ideal output Yp.
- Adjust the weight matrices so as to minimize the error.
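A toy sketch of this loop, assuming NumPy (one linear layer trained with squared error; names are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))            # toy samples
    Yp = X @ np.array([1.0, -2.0, 0.5])      # ideal outputs from a known rule

    W = rng.normal(0, 0.1, 3)                # small random initial weights
    lr = 0.1
    for epoch in range(200):
        Op = X @ W                           # forward propagation: actual output
        err = Op - Yp                        # difference from the ideal output
        grad = X.T @ err / len(X)            # gradient of the mean squared error
        W -= lr * grad                       # adjust weights to minimize the error
    print(W)                                 # approaches [1.0, -2.0, 0.5]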
GANs (Generative Adversarial Networks, 2014)

GANs are inspired by the zero-sum game of game theory; a GAN consists of a pair of networks: a generator network and a discriminator network.
• The generator network generates a sample from a random vector; the discriminator network judges whether a given sample is natural or counterfeit. Both networks are trained together, improving until counterfeit samples can no longer be distinguished from real ones.
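A compact sketch of this adversarial loop, assuming PyTorch (the slides do not name a framework; tiny MLPs fitting a 1-D Gaussian):

    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 1) * 0.5 + 3.0   # "natural" samples from N(3, 0.5)
        fake = G(torch.randn(64, 4))            # counterfeit samples

        # Discriminator: label real samples 1 and counterfeit samples 0.
        loss_d = bce(D(real), torch.ones(64, 1)) + \
                 bce(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Generator: try to make the discriminator output 1 on counterfeits.
        loss_g = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    print(G(torch.randn(1000, 4)).mean().item())  # should drift toward 3.0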
Applications:
• Image editing
• Image-to-image translation
• Generating text
• Generating images based on text
• Combination with reinforcement learning
• And more…

Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[C]//Advances in Neural Information Processing Systems. 2014: 2672-2680.
DBN (Deep Belief Network, 2006)

Fig1. RBM (restricted Boltzmann machine) structure.
Fig2. DBN (deep belief network) structure.

Hidden units and visible units
Each unit is binary (0 or 1). Every visible unit connects to all the hidden units and every hidden unit connects to all the visible units; there are no visible-visible or hidden-hidden connections.

Idea? A DBN is composed of multiple layers of RBMs. How do we train these additional layers? With an unsupervised greedy layer-by-layer approach.

Hinton G E. Deep belief networks[J]. Scholarpedia, 2009, 4(6): 5947.
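A minimal sketch of the greedy layer-by-layer idea, assuming NumPy (binary units, one step of contrastive divergence, biases omitted; all names are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    sample = lambda p: (rng.random(p.shape) < p).astype(float)

    def train_rbm(data, n_hidden, lr=0.1, epochs=30):
        # One RBM: binary visible and hidden units, fully connected across
        # layers, with no v-v or h-h connections.
        W = rng.normal(0, 0.1, (data.shape[1], n_hidden))
        for _ in range(epochs):
            v0 = data
            h0 = sigmoid(v0 @ W)               # upward pass
            v1 = sigmoid(sample(h0) @ W.T)     # downward reconstruction
            h1 = sigmoid(v1 @ W)               # upward pass again
            W += lr * (v0.T @ h0 - v1.T @ h1) / len(data)   # CD-1 update
        return W

    # Greedy unsupervised stacking: train one RBM, then feed its hidden
    # activations to the next RBM as if they were visible data.
    data = (rng.random((200, 16)) < 0.3).astype(float)
    layers, x = [], data
    for n_hidden in (12, 8):
        W = train_rbm(x, n_hidden)
        layers.append(W)
        x = sigmoid(x @ W)
    print([W.shape for W in layers])   # [(16, 12), (12, 8)]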