Deep Learning courses at deeplearning.ai
Course list
- Deep Neural Networks (DNNs) by Andrew Ng [full course]
- Convolutional Neural Networks (CNNs) by Andrew Ng [full course]
- Recurrent Neural Networks (RNNs) by Andrew Ng [full course]
Deep Neural Networks (DNNs) by Andrew Ng
- 1 What is Deep Learning?
- 2 What is a Neural Network?
- 3 Supervised Learning with Neural Networks
- 4 Drivers Behind the Rise of Deep Learning
- 5 Binary Classification in Deep Learning
- 6 Logistic Regression
- 7 Logistic Regression Cost Function
- 8 Gradient Descent
- 9 Derivatives
- 10 Derivatives Examples
- 11 Computation Graph
- 12 Derivatives with a Computation Graph
- 13 Logistic Regression Derivatives
- 14 Gradient Descent on m Training Examples
- 15 Vectorization
- 16 More Vectorization Examples
- 17 Vectorizing Logistic Regression
- 18 Vectorizing Logistic Regression’s Gradient Computation
- 19 Broadcasting in Python
- 20 Python / NumPy
- 21 Jupyter / IPython
- 22 Logistic Regression Cost Function Explanation
- 23 Neural Network Overview
- 24 Neural Network Representation
- 25 Computing a Neural Network’s Output
- 26 Vectorizing Across Multiple Training Examples
- 27 Vectorized Implementation Explanation
- 28 Activation Functions
- 29 Why Non-Linear Activation Functions?
- 30 Derivatives of Activation Functions
- 31 Gradient Descent for Neural Networks
- 32 Backpropagation Intuition (Optional)
- 33 Random Initialization of Weights
- 34 Deep L-layer Neural Network
- 35 Forward Propagation in Deep Networks
- 36 Getting Your Matrix Dimensions Right
- 37 Why Deep Representations?
- 38 Building Blocks of Deep Neural Networks
- 39 Forward Propagation for Layer L
- 40 Parameters vs Hyperparameters
- 41 Brain and Deep Learning
- 42 Train/Dev/Test sets
- 43 Bias / Variance
- 44 Basic “Recipe” of Machine Learning
- 45 Regularization
- 46 Why Does Regularization Reduce Overfitting?
- 47 Dropout Regularization
- 48 Why Does Dropout Work?
- 49 Other Regularization Methods
- 50 Normalizing Inputs
- 51 Vanishing / Exploding Gradients
- 52 Weight Initialization for Deep Networks
- 53 Numerical Approximation of Gradients
- 54 Gradient Checking
- 55 Gradient Checking Implementation Notes
- 56 Mini-Batch Gradient Descent
- 57 Understanding Mini-Batch Gradient Descent
- 58 Exponentially Weighted Averages
- 59 Understanding Exponentially Weighted Averages
- 60 Bias Correction in Exponentially Weighted Averages
- 61 Gradient Descent with Momentum
- 62 RMSprop
- 63 Adam Optimization Algorithm
- 64 Learning Rate Decay
- 65 The Problem of Local Optima
- 66 Tuning Process
- 67 Right Scale for Hyperparameters
- 68 Hyperparameter Tuning in Practice: Pandas vs. Caviar
- 69 Batch Norm
- 70 Fitting Batch Norm into a Neural Network
- 71 Why Does Batch Norm Work?
- 72 Batch Norm at Test Time
- 73 Softmax Regression
- 74 Training a Softmax Classifier
- 75 Deep Learning Frameworks
- 76 TensorFlow
- 77 Why ML Strategy?
- 78 Orthogonalization
- 79 Single Number Evaluation Metric
- 80 Satisficing and Optimizing Metrics
- 81 Train/Dev/Test Distributions
- 82 Size of Dev and Test Sets
- 83 When to Change Dev/Test Sets and Metrics?
- 84 Why Human-Level Performance?
- 85 Avoidable Bias
- 86 Understanding Human-Level Performance
- 87 Surpassing Human-Level Performance
- 88 Improving Your Model Performance
- 89 Carrying Out Error Analysis
- 90 Cleaning Up Incorrectly Labeled Data
- 91 Build Your First System Quickly, Then Iterate
- 92 Training and Testing on Different Distributions
- 93 Bias and Variance with Mismatched Data Distributions
- 94 Addressing Data Mismatch
- 95 Transfer Learning
- 96 Multi-Task Learning
- 97 End-to-End Deep Learning
- 98 Whether to Use End-to-End Learning
Convolutional Neural Networks (CNNs) by Andrew Ng
- CNN1: What is Computer Vision?
- CNN2: Edge Detection Example
- CNN3: More Edge Detection
- CNN4: Padding
- CNN5: Strided Convolution
- CNN6: Convolutions Over Volume
- CNN7: One Layer of A Convolutional Network
- CNN8: A Simple Convolution Network Example
- CNN9: Pooling Layers
- CNN10: Convolutional Neural Network Example
- CNN11: Why Convolutions?
- CNN12: Why Look at Case Studies?
- CNN13: Classic Networks
- CNN14: Residual Networks (ResNets)
- CNN15: Why Do Residual Networks Work Well?
- CNN16: Network in Network and 1×1 Convolutions
- CNN17: Inception Network Motivation
- CNN18: Inception Network
- CNN19: ConvNets: Using Open-Source Implementations
- CNN20: Transfer Learning
- CNN21: Data Augmentation
- CNN22: The State of Computer Vision
- CNN23: Object Detection: Object Localization
- CNN24: Object Detection: Landmark Detection
- CNN25: Object Detection: Object Detection
- CNN26: Object Detection: Convolutional Implementation of Sliding Windows
- CNN27: Object Detection: Bounding Box Predictions
- CNN28: Object Detection: Intersection Over Union
- CNN29: Object Detection: Non-max Suppression
- CNN30: Object Detection: Anchor Boxes
- CNN31: Object Detection: YOLO Algorithm
- CNN32: Object Detection: Region Proposals (optional)
- CNN33: Face Recognition: What is Face Recognition?
- CNN34: Face Recognition: One-Shot Learning
- CNN35: Face Recognition: Siamese Network
- CNN36: Face Recognition: Triplet Loss
- CNN37: Face Recognition: Face Verification and Binary Classification
- CNN38: Neural Style Transfer: What is it?
- CNN39: Neural Style Transfer: What Are Deep ConvNets Learning?
- CNN40: Neural Style Transfer: Cost Function
- CNN41: Neural Style Transfer: Content Cost Function
- CNN42: Neural Style Transfer: Style Cost Function
- CNN43: 1D and 3D Generalization of Models
Recurrent Neural Networks (RNNs) by Andrew Ng
- RNN1: Why sequence models?
- RNN2: Notation
- RNN3: Recurrent Neural Network Model
- RNN4: Backpropagation through time
- RNN5: Different types of RNNs
- RNN6: Language model and sequence generation
- RNN7: Sampling novel sequences
- RNN8: Vanishing gradients with RNNs
- RNN9: Gated Recurrent Unit (GRU)
- RNN10: Long Short-Term Memory (LSTM) units
- RNN11: Bidirectional RNN
- RNN12: Deep RNNs
- RNN13: Word representation
- RNN14: Using word embeddings
- RNN15: NLP - Properties of word embeddings
- RNN16: NLP - Embedding matrix
- RNN17: NLP - Learning word embeddings
- RNN18: NLP - Word2Vec
- RNN19: NLP - Negative sampling
- RNN20: NLP - GloVe word vectors
- RNN21: NLP - Sentiment classification
- RNN22: NLP - Debiasing word embeddings
- RNN23: Sequence to sequence models - Basic models
- RNN24: Picking the most likely sentence
- RNN25: Beam search
- RNN26: Refinements to beam search
- RNN27: Error analysis on beam search
- RNN28: BLEU score (optional)
- RNN29: Attention model - intuition
- RNN30: Attention model
- RNN31: Audio data - Speech recognition
- RNN32: Audio data - Trigger word detection
- RNN33: Summary and Thank you!