Fundamentals of Deep Learning

Author: Nikhil Buduma

Format: 24 cm

ISBN: 9787564175177

List price:

Publication date: 2018-02-01

Publisher: Southeast University Press

Fundamentals of Deep Learning: About the Book

Companies such as Google, Microsoft, and Facebook are actively building out their in-house deep learning teams. For the rest of us, however, deep learning remains a complex and difficult subject to master. If you are familiar with Python and have a background in calculus, along with a basic understanding of machine learning, this book will help you get started on your deep learning journey.

* Examine the fundamentals of machine learning and neural networks
* Learn how to train feed-forward neural networks
* Implement your first neural network with TensorFlow
* Manage the problems that arise as your networks get deeper
* Build neural networks to analyze complex images
* Use autoencoders for effective dimensionality reduction
* Dive into everything from sequence analysis to language understanding
* Master the fundamentals of reinforcement learning

Fundamentals of Deep Learning: Table of Contents

Preface
1. The Neural Network
Building Intelligent Machines
The Limits of Traditional Computer Programs
The Mechanics of Machine Learning
The Neuron
Expressing Linear Perceptrons as Neurons
Feed-Forward Neural Networks
Linear Neurons and Their Limitations
Sigmoid, Tanh, and ReLU Neurons
Softmax Output Layers
Looking Forward
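
To give a flavor of what this chapter covers, here is a minimal sketch (not from the book; all names are illustrative) of a single neuron with the sigmoid, tanh, and ReLU nonlinearities listed above:

```python
import numpy as np

def sigmoid(z):
    # Squashes any input into the interval (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    # Passes positive inputs through unchanged, zeroes out negatives
    return np.maximum(0.0, z)

def neuron(x, w, b, activation=np.tanh):
    # A single neuron: weighted sum of the inputs plus a bias,
    # followed by a nonlinear activation
    return activation(np.dot(w, x) + b)

x = np.array([1.0, 2.0])
w = np.array([0.5, -0.3])
print(neuron(x, w, 0.1))            # tanh neuron
print(neuron(x, w, 0.1, sigmoid))   # sigmoid neuron
print(neuron(x, w, 0.1, relu))      # ReLU neuron
```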

2. Training Feed-Forward Neural Networks
The Fast-Food Problem
Gradient Descent
The Delta Rule and Learning Rates
Gradient Descent with Sigmoidal Neurons
The Backpropagation Algorithm
Stochastic and Minibatch Gradient Descent
Test Sets, Validation Sets, and Overfitting
Preventing Overfitting in Deep Neural Networks
Summary
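
As a rough illustration of this chapter's gradient descent material (illustrative, not the book's code), the sketch below minimizes a one-parameter quadratic error surface; `epsilon` plays the role of the learning rate discussed alongside the delta rule:

```python
# Minimize E(w) = (w - 3)^2 with plain gradient descent.
# The gradient is dE/dw = 2 * (w - 3); epsilon scales each step.
w = 0.0
epsilon = 0.1   # learning rate
for step in range(100):
    grad = 2.0 * (w - 3.0)
    w -= epsilon * grad
print(w)  # converges toward the minimum at w = 3
```

With `epsilon` too large the iterates overshoot and diverge; too small and convergence is slow, which is exactly the learning rate trade-off the chapter explores.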

3. Implementing Neural Networks in TensorFlow
What Is TensorFlow?
How Does TensorFlow Compare to Alternatives?
Installing TensorFlow
Creating and Manipulating TensorFlow Variables
TensorFlow Operations
Placeholder Tensors
Sessions in TensorFlow
Navigating Variable Scopes and Sharing Variables
Managing Models over the CPU and GPU
Specifying the Logistic Regression Model in TensorFlow
Logging and Training the Logistic Regression Model
Leveraging TensorBoard to Visualize Computation Graphs and Learning
Building a Multilayer Model for MNIST in TensorFlow
Summary
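
The book targets the TensorFlow 1.x API (variables, placeholders, sessions). A minimal sketch in that style, assuming TensorFlow 1.x is installed (it will not run on 2.x without the compatibility layer):

```python
import tensorflow as tf  # assumes TensorFlow 1.x, as used in the book

# A placeholder receives data at run time; a Variable holds trainable state.
x = tf.placeholder(tf.float32, shape=[None, 2], name="x")
W = tf.Variable(tf.zeros([2, 1]), name="W")
b = tf.Variable(tf.zeros([1]), name="b")
y = tf.matmul(x, W) + b  # a linear model, the core of logistic regression

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[1.0, 2.0]]}))
```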

4. Beyond Gradient Descent
The Challenges with Gradient Descent
Local Minima in the Error Surfaces of Deep Networks
Model Identifiability
How Pesky Are Spurious Local Minima in Deep Networks?
Flat Regions in the Error Surface
When the Gradient Points in the Wrong Direction
Momentum-Based Optimization
A Brief View of Second-Order Methods
Learning Rate Adaptation
AdaGrad: Accumulating Historical Gradients
RMSProp: Exponentially Weighted Moving Average of Gradients
Adam: Combining Momentum and RMSProp
The Philosophy Behind Optimizer Selection
Summary
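
Since this chapter builds from momentum through AdaGrad and RMSProp to Adam, here is a hedged NumPy sketch of a single Adam update (parameter names follow common convention, not necessarily the book's):

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # Adam combines momentum (first moment m) with an RMSProp-style
    # exponentially weighted average of squared gradients (second moment v).
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)  # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Minimize E(w) = w^2 starting from w = 5.
w, m, v = 5.0, 0.0, 0.0
for t in range(1, 2001):
    w, m, v = adam_step(w, 2.0 * w, m, v, t, lr=0.05)
print(w)  # close to the minimum at 0
```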

5. Convolutional Neural Networks
Neurons in Human Vision
The Shortcomings of Feature Selection
Vanilla Deep Neural Networks Don't Scale
Filters and Feature Maps
Full Description of the Convolutional Layer
Max Pooling
Full Architectural Description of Convolution Networks
Closing the Loop on MNIST with Convolutional Networks
Image Preprocessing Pipelines Enable More Robust Models
Accelerating Training with Batch Normalization
Building a Convolutional Network for CIFAR-10
Visualizing Learning in Convolutional Networks
Leveraging Convolutional Filters to Replicate Artistic Styles
Learning Convolutional Filters for Other Problem Domains
Summary
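
A filter and the feature map it produces can be shown in a few lines of NumPy. The sketch below (illustrative only) implements the "valid" sliding-window operation that deep learning calls convolution, plus non-overlapping max pooling:

```python
import numpy as np

def conv2d(image, kernel):
    # Slide the filter over the image; each dot product becomes one
    # entry of the resulting feature map ("valid" convolution).
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

def max_pool(fmap, size=2):
    # Non-overlapping max pooling downsamples the feature map.
    H, W = fmap.shape
    trimmed = fmap[:H - H % size, :W - W % size]
    return trimmed.reshape(H // size, size, W // size, size).max(axis=(1, 3))

image = np.random.rand(6, 6)
edge_filter = np.array([[1.0, -1.0],
                        [1.0, -1.0]])   # crude vertical-edge detector
print(max_pool(conv2d(image, edge_filter)).shape)  # (2, 2)
```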

6. Embedding and Representation Learning
Learning Lower-Dimensional Representations
Principal Component Analysis
Motivating the Autoencoder Architecture
Implementing an Autoencoder in TensorFlow
Denoising to Force Robust Representations
Sparsity in Autoencoders
When Context Is More Informative than the Input Vector
The Word2Vec Framework
Implementing the Skip-Gram Architecture
Summary
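
PCA is this chapter's baseline for learning lower-dimensional representations before autoencoders are introduced. A minimal sketch via the SVD (illustrative, not the book's code):

```python
import numpy as np

def pca(X, k):
    # Center the data, take its SVD, and project onto the k
    # right-singular vectors with the largest singular values.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T   # the lower-dimensional representation

X = np.random.rand(100, 10)   # 100 samples, 10 features
Z = pca(X, k=2)
print(Z.shape)  # (100, 2)
```

An autoencoder generalizes this idea: with nonlinear encoder and decoder layers, it can learn compressions that a linear projection like PCA cannot.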

7. Models for Sequence Analysis
Analyzing Variable-Length Inputs
Tackling seq2seq with Neural N-Grams
Implementing a Part-of-Speech Tagger
Dependency Parsing and SyntaxNet
Beam Search and Global Normalization
A Case for Stateful Deep Learning Models
Recurrent Neural Networks
The Challenges with Vanishing Gradients
Long Short-Term Memory (LSTM) Units
TensorFlow Primitives for RNN Models
Implementing a Sentiment Analysis Model
Solving seq2seq Tasks with Recurrent Neural Networks
Augmenting Recurrent Networks with Attention
Dissecting a Neural Translation Network
Summary
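
At the heart of this chapter is the recurrent step that carries state across a sequence. A minimal vanilla-RNN sketch in NumPy (weight names are illustrative):

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    # One step of a vanilla RNN: the new hidden state mixes the current
    # input with the previous state, giving the network memory.
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

input_dim, hidden_dim = 4, 8
rng = np.random.default_rng(0)
W_xh = 0.1 * rng.normal(size=(hidden_dim, input_dim))
W_hh = 0.1 * rng.normal(size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)
for x_t in rng.normal(size=(5, input_dim)):   # a length-5 input sequence
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)
print(h.shape)  # (8,)
```

Repeated multiplication by `W_hh` is also what makes gradients vanish or explode over long sequences, the problem that motivates the LSTM units listed above.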

8. Memory Augmented Neural Networks
Neural Turing Machines
Attention-Based Memory Access
NTM Memory Addressing Mechanisms
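
The attention-based memory access listed here can be sketched as content-based addressing: score each memory row against a key and normalize. A hedged NumPy illustration (names and the sharpening parameter beta are conventional, not taken from the book):

```python
import numpy as np

def content_addressing(key, memory, beta=1.0):
    # Score each memory row by cosine similarity to the key, sharpen
    # the scores with beta, and normalize them with a softmax.
    sims = memory @ key / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    weights = np.exp(beta * sims)
    return weights / weights.sum()

memory = np.random.rand(16, 20)   # 16 memory slots, each of width 20
key = np.random.rand(20)
w = content_addressing(key, memory, beta=5.0)
print(w.shape, round(w.sum(), 6))  # (16,) 1.0
```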

(The table of contents continues on a second catalog page.)
