Practice: Transfer Learning
1. The main idea of transfer learning for neural networks is:
2. In the context of transfer learning, which is a guiding principle of fine-tuning?
3. In the context of transfer learning, what do we call the process in which you only train the last or a few layers instead of all layers of a neural network?
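The freezing idea behind these questions can be sketched in plain Python. This is a toy mock of the mechanism, not the Keras API: layer names and weight values below are invented for illustration, and the "gradient" is a fake constant.

```python
# Pure-Python mock of layer freezing -- an illustration of the idea, not the
# Keras API. Only layers flagged trainable receive weight updates.

class Layer:
    def __init__(self, name, weight, trainable=True):
        self.name = name
        self.weight = weight
        self.trainable = trainable

# A "pre-trained" network: freeze the early layers, retrain only the new head.
network = [
    Layer("conv1", weight=5, trainable=False),     # frozen pre-trained layer
    Layer("conv2", weight=3, trainable=False),     # frozen pre-trained layer
    Layer("dense_out", weight=10, trainable=True)  # new task-specific layer
]

def training_step(layers, gradient=1):
    """Apply a (toy) gradient update, skipping frozen layers."""
    for layer in layers:
        if layer.trainable:
            layer.weight -= gradient

training_step(network)
# conv1 and conv2 keep their weights; only dense_out is updated.
```

Training only the last layer(s) this way is what the third question refers to.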
Practice: Convolutional Neural Network Architectures
1. This concept was introduced as a CNN design in which each layer is split into parallel branches of convolutions:
2. Which CNN Architecture is considered the flash point for modern Deep Learning?
3. Which CNN Architecture can be described as a "simplified, deeper LeNet" in which the more layers, the better?
4. Which CNN Architecture is the precursor of using convolutions to obtain better features and was first used to solve the MNIST data set?
5. The motivation behind this CNN Architecture was to solve the inability of deep neural networks to fit (or even overfit) the training data any better as more layers were added.
6. This CNN Architecture keeps passing both the initial unchanged information and the transformed information to the next layer.
7. Which activation function was notably used in AlexNet and contributed to its success?
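Two of the ideas quizzed above, the skip connection that passes the unchanged input forward alongside the transformed signal, and the ReLU activation, can be sketched together in NumPy. The weight matrix and input values here are arbitrary toy numbers, not taken from any real architecture.

```python
import numpy as np

# Sketch of a residual (skip) connection: the block's output is the
# transformed information plus the unchanged input, so the next layer
# receives both. relu is the activation the quiz asks about.

def relu(x):
    return np.maximum(0, x)

def residual_block(x, weights):
    transformed = relu(x @ weights)  # the transformed information
    return transformed + x           # plus the unchanged input (identity shortcut)

x = np.array([1.0, -2.0, 3.0])       # toy input
w = np.eye(3) * 0.5                  # toy weights, hypothetical values
out = residual_block(x, w)
print(out)
```

The `+ x` term is what lets very deep networks keep training: even if the transformation learns nothing useful, the input passes through unchanged.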
Practice: Regularization
1. Which regularization technique can shrink the coefficients of the less important features to zero?
2. (True/False) Batch Normalization tackles the internal covariate shift issue by always normalizing the input signals, thus accelerating the training of deep neural nets and increasing the generalization power of the networks.
3. Regularization is used to mitigate which issue in model training?
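The first question's "shrink coefficients to zero" behavior can be illustrated with the soft-thresholding step at the heart of L1 (lasso) regularization. The coefficient values below are hypothetical, chosen only to show small weights being clipped to exactly zero.

```python
import numpy as np

# Why L1 regularization can zero out less important features: its proximal
# update is soft-thresholding, which shrinks every weight toward zero and
# snaps weights smaller than the penalty to exactly 0.

def soft_threshold(w, lam):
    """L1 proximal operator: shrink by lam, clipping small weights to 0."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

coefs = np.array([2.5, -0.05, 0.8, 0.02])  # hypothetical fitted coefficients
out = soft_threshold(coefs, lam=0.1)       # the two tiny ones become 0.0
print(out)
```

L2 (ridge) regularization, by contrast, only shrinks weights proportionally and never drives them exactly to zero, which is the distinction the question is probing.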
Week 5 Final Quiz
1. (True/False) In Keras, the Dropout layer has an argument called rate, which is a probability that represents how often we want to invoke the layer in the training.
2. What is a benefit of applying transfer learning to neural networks?
3. By setting `layer.trainable = False` for certain layers in a neural network, we ____
4. Which option correctly orders the steps of implementing transfer learning?
   1. Freeze the early layers of the pre-trained model.
   2. Improve the model by fine-tuning.
   3. Train the model with a new output layer in place.
   4. Select a pre-trained model as the base of our training.
5. Given a 100x100-pixel RGB image, there are _____ input features.
6. Before a CNN is ready to classify images, what type of layer must we add last?
7. In a CNN, the depth of a layer corresponds to the number of:
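Two of the quiz items above reduce to quick checks. The feature count follows from height x width x channels, and the Keras `Dropout` `rate` is a per-unit drop probability applied each training step, not how often the layer is invoked (which is why the true/false item is false). The dropout pass below is a pure-Python sketch of that semantics, with an arbitrary seed.

```python
import random

# Feature count for a 100x100 RGB image: height * width * channels.
features = 100 * 100 * 3
print(features)  # 30000

# Dropout(rate=0.2) in Keras drops each input unit with probability 0.2
# during training (scaling the survivors by 1/(1-rate)); it does not mean
# "invoke the layer 20% of the time". A one-pass sketch:
random.seed(0)  # arbitrary seed for reproducibility
rate = 0.2
units = [1.0] * 10
dropped = [0.0 if random.random() < rate else u / (1 - rate) for u in units]
```

On average about `rate` of the entries in `dropped` are zeroed, while the rest are scaled up so the expected activation is unchanged.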