Questions and Answers

DeepLearning_GPT3_questions

Collected questions and answers for this set.
Number of questions: 85 · Attempted: 599 times
Question 21
What is the purpose of the softmax function in deep learning?
To normalize the output of the neural network to a probability distribution
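
For illustration, a minimal NumPy sketch of softmax (the logit values are made up; subtracting the maximum is a standard numerical-stability trick):

    import numpy as np

    def softmax(logits):
        # Subtract the max for numerical stability; the result is unchanged.
        shifted = logits - np.max(logits)
        exps = np.exp(shifted)
        return exps / np.sum(exps)

    logits = np.array([2.0, 1.0, 0.1])   # raw network outputs (example values)
    probs = softmax(logits)
    print(probs, probs.sum())            # probabilities that sum to 1.0
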
Question 22
What is the purpose of the backpropagation algorithm in deep learning?
To compute the gradient of the loss function with respect to the weights
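
A toy illustration of the idea, for a single weight with a squared-error loss; a full network applies the same chain rule layer by layer (all numbers are invented):

    # Forward pass: y_hat = w * x, loss = (y_hat - y)^2
    x, y, w = 2.0, 10.0, 3.0
    y_hat = w * x                      # 6.0
    loss = (y_hat - y) ** 2            # 16.0

    # Backward pass: the chain rule gives dL/dw = 2 * (y_hat - y) * x
    grad_w = 2.0 * (y_hat - y) * x     # -16.0

    # One gradient-descent step using that gradient
    lr = 0.1
    w -= lr * grad_w                   # w moves from 3.0 to 4.6
    print(loss, grad_w, w)
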
Question 23
What is the difference between supervised and unsupervised learning?
Supervised learning requires labeled data, while unsupervised learning does not
Question 24
Which of the following is not a commonly used activation function in deep learning?
Polynomial
Question 25
What is the purpose of regularization in deep learning?
To prevent overfitting
Question 26
What is the purpose of using dropout in convolutional neural networks?
To prevent overfitting in the model

Explanation: Dropout is a regularization technique used in convolutional neural networks to prevent overfitting. During training it randomly drops a certain percentage of the neurons, forcing the network to learn more robust features. This keeps the model from memorizing the training data and improves its generalization to unseen data.
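
A minimal sketch of (inverted) dropout, assuming a 50% drop rate; deep learning frameworks provide this as a built-in layer:

    import numpy as np

    def dropout(activations, drop_prob=0.5, training=True):
        # Inverted dropout: randomly zero units during training and
        # rescale the survivors so the expected activation is unchanged.
        if not training:
            return activations
        keep_prob = 1.0 - drop_prob
        mask = np.random.rand(*activations.shape) < keep_prob
        return activations * mask / keep_prob

    a = np.ones((2, 4))
    print(dropout(a, drop_prob=0.5))   # roughly half the entries zeroed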

Question 27
What is the purpose of using padding in convolutional neural networks?
To ensure that the output volume has the same spatial dimensions as the input volume

Explanation: Padding is used in convolutional neural networks to keep the output volume at the same spatial dimensions as the input volume. Without padding, the output shrinks after every convolutional layer, which limits how many layers can be stacked.
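
As a sketch, for stride 1 and an odd filter size the "same" padding that preserves the spatial size is P = (F - 1)/2:

    def same_padding(filter_size):
        # For stride 1 and an odd filter size, this padding keeps
        # the output spatial size equal to the input size.
        return (filter_size - 1) // 2

    print(same_padding(3))   # 1
    print(same_padding(5))   # 2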

Question 28
Which of the following is used to reduce the spatial dimensions of the input volume in a convolutional neural network?
Pooling layers

Explanation: Pooling layers are used to reduce the spatial dimensions of the input volume in a convolutional neural network. Max pooling and average pooling are the two commonly used types of pooling layers.
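
An illustrative NumPy sketch of non-overlapping 2x2 max pooling, assuming an even-sized 2-D feature map:

    import numpy as np

    def max_pool_2x2(x):
        # Non-overlapping 2x2 max pooling on a 2-D feature map
        # (assumes both dimensions are even).
        h, w = x.shape
        return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

    fmap = np.arange(16.0).reshape(4, 4)
    print(max_pool_2x2(fmap))   # 4x4 map reduced to 2x2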

Question 29
What is the output shape of a convolutional layer with 32 filters, a filter size of 3x3, and input shape of 224x224x3?
222x222x32

Explanation: The output size of a convolutional layer can be calculated with the formula (W - F + 2P)/S + 1, where W is the input width (or height), F is the filter size, P is the padding, and S is the stride. Here, (224 - 3 + 2*0)/1 + 1 = 222, and with 32 filters producing one channel each, the output shape is 222x222x32.
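
The same formula as a small helper function, checked against the numbers in this question:

    def conv_output_shape(w, f, num_filters, p=0, s=1):
        side = (w - f + 2 * p) // s + 1   # (W - F + 2P)/S + 1
        return (side, side, num_filters)

    print(conv_output_shape(224, 3, 32))   # (222, 222, 32)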

Question 30
Which of the following statements is true about convolutional neural networks?
The use of convolutional layers reduces the number of parameters in the network.

Explanation: Convolutional neural networks are used for various tasks including image classification, object detection, and segmentation. The use of convolutional layers reduces the number of parameters in the network by sharing weights across different locations in the input, making the model more efficient.
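
A quick back-of-the-envelope comparison for the 224x224x3 input from Question 29: a 3x3 convolution producing 32 feature maps versus a fully connected layer producing 32 outputs (illustrative arithmetic only):

    # Conv layer: 32 filters of size 3x3x3, one bias each.
    conv_params = 3 * 3 * 3 * 32 + 32            # 896
    # Dense layer: one weight per input value for each of 32 outputs.
    dense_params = (224 * 224 * 3) * 32 + 32     # 4,816,928
    print(conv_params, dense_params)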

Question 31
Which of the following is a common technique used to prevent overfitting in convolutional neural networks?
All of the above (early stopping, dropout, and data augmentation)

Explanation: Overfitting is a common problem in convolutional neural networks, and several techniques can be used to prevent it, including early stopping, dropout, and data augmentation. Early stopping involves monitoring the performance of the network on a validation set and stopping training when the validation error stops improving. Dropout involves randomly dropping out some of the neurons in the network during training, which can help prevent over-reliance on particular features. Data augmentation involves artificially increasing the size of the training set by applying random transformations to the input data.
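
A minimal sketch of the early-stopping rule described above, run here on a made-up validation-loss curve:

    def train_with_early_stopping(val_losses, patience=3):
        # Stop when the validation loss fails to improve for
        # `patience` consecutive epochs.
        best, waited, stopped_at = float("inf"), 0, len(val_losses)
        for epoch, val_loss in enumerate(val_losses):
            if val_loss < best:
                best, waited = val_loss, 0
            else:
                waited += 1
                if waited >= patience:
                    stopped_at = epoch
                    break
        return best, stopped_at

    # Made-up validation curve: improves, then plateaus.
    print(train_with_early_stopping([1.0, 0.8, 0.7, 0.71, 0.72, 0.73, 0.74]))
    # (0.7, 5): training stops after 3 epochs without improvement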

Question 32
What is the main advantage of using convolutional layers in CNNs?
To increase the receptive field size of the network

Explanation: Convolutional layers are used in CNNs to increase the receptive field size of the network, allowing the network to capture larger spatial structures in the input data. This can help improve the network's ability to classify objects in the input data.
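
A small sketch of how the receptive field grows as convolutional layers are stacked (a standard recurrence; the layer configuration is illustrative):

    def receptive_field(filter_sizes, strides):
        # Each layer adds (f - 1) times the product of the strides
        # of the layers below it.
        r, jump = 1, 1
        for f, s in zip(filter_sizes, strides):
            r += (f - 1) * jump
            jump *= s
        return r

    # Three stacked 3x3 convs with stride 1 see a 7x7 input patch.
    print(receptive_field([3, 3, 3], [1, 1, 1]))   # 7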

Question 33
Which of the following is a common activation function used in convolutional neural networks?
ReLU

Explanation: The Rectified Linear Unit (ReLU) is a common activation function in convolutional neural networks. It is computationally efficient, easy to optimize, and has been shown to work well in practice.
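
ReLU itself is one line; a minimal NumPy sketch:

    import numpy as np

    def relu(x):
        # ReLU: max(0, x), applied element-wise.
        return np.maximum(0.0, x)

    print(relu(np.array([-2.0, -0.5, 0.0, 1.5])))   # [0. 0. 0. 1.5]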

Question 34
What is the main advantage of using pooling layers in CNNs?
To reduce the number of learnable parameters

Explanation: Pooling layers are used in CNNs to reduce the spatial resolution of the feature maps, thereby reducing the number of learnable parameters in the network. This can help prevent overfitting, improve computational efficiency, and allow the network to generalize better to unseen data.

Question 35
Which of the following is a common problem in image classification that convolutional neural networks (CNNs) aim to address?
Overfitting on small datasets

Explanation: Overfitting is a common problem in image classification when training deep neural networks on small datasets. CNNs are designed to address this issue by introducing parameters that are shared across the image, reducing the number of learnable parameters and allowing the network to generalize better to unseen data.

Question 36
What is dropout regularization?
It randomly removes a fraction of the neurons from the network during training.
Question 37
What is L2 regularization?
It adds the sum of squared values of the weights as a penalty term to the loss function.
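
A minimal sketch of adding an L2 penalty to a loss value (the data loss, weights, and lambda are made-up numbers):

    import numpy as np

    def loss_with_l2(data_loss, weights, lam=0.01):
        # Total loss = data loss + lambda * sum of squared weights.
        return data_loss + lam * np.sum(weights ** 2)

    w = np.array([0.5, -1.0, 2.0])
    print(loss_with_l2(0.3, w, lam=0.01))   # 0.3 + 0.01 * 5.25 = 0.3525
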
Question 38
What is the effect of increasing the regularization parameter in L2 regularization?
It reduces the magnitude of the weights.
Question 39
What is weight decay?
It adds the sum of squared values of the weights, multiplied by a regularization parameter, as a penalty term to the loss function.
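
A sketch of one SGD step with weight decay, assuming the penalty lambda * sum(w^2) (the learning rate and decay factor are illustrative):

    import numpy as np

    def sgd_step_with_weight_decay(w, grad, lr=0.1, decay=0.01):
        # The L2 penalty lambda * sum(w^2) contributes 2 * lambda * w
        # to the gradient, shrinking the weights on every step.
        return w - lr * (grad + 2 * decay * w)

    w = np.array([1.0, -2.0])
    print(sgd_step_with_weight_decay(w, grad=np.array([0.0, 0.0])))
    # Even with a zero data gradient, the weights decay toward zero.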

Question 40
Which of the following is a technique used for data augmentation as a regularization technique?
Adding noise to the input data
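
A minimal sketch of this technique: perturbing a batch of inputs with small Gaussian noise so the network never sees exactly the same example twice (the batch and noise scale are made up):

    import numpy as np

    def add_gaussian_noise(images, std=0.05, seed=None):
        # Augmentation: add small Gaussian noise to each input.
        rng = np.random.default_rng(seed)
        return images + rng.normal(0.0, std, size=images.shape)

    batch = np.zeros((2, 4, 4))          # stand-in for a batch of images
    noisy = add_gaussian_noise(batch, std=0.05, seed=0)
    print(noisy.std())                   # roughly 0.05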