Questions and Answers

DeepLearning_GPT3_questions

Collected questions and answers for this set.
Number of questions: 85 · Solved: 601 times
Question 1
Which of the following is a type of regularization that encourages weight values to be small but non-zero?
L2 regularization

L2 regularization adds a penalty term to the loss function that encourages the model to learn small but non-zero weight values.
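For illustration, a minimal sketch of adding an L2 penalty to a training loss by hand in PyTorch (the strength lam = 1e-4 is an arbitrary choice):

    import torch
    import torch.nn as nn

    model = nn.Linear(10, 1)              # toy model
    criterion = nn.MSELoss()
    lam = 1e-4                            # arbitrary regularization strength

    x, y = torch.randn(8, 10), torch.randn(8, 1)
    loss = criterion(model(x), y)
    # L2 penalty: sum of squared weights, pushing weights toward small values
    l2_penalty = sum((p ** 2).sum() for p in model.parameters())
    (loss + lam * l2_penalty).backward()

In practice the same effect is usually obtained through the weight_decay argument of PyTorch optimizers.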

Question 2
Which of the following is a type of regularization that encourages sparse weight matrices?
L1 regularization

L1 regularization adds a penalty term to the loss function that encourages the model to learn sparse weight matrices, i.e. weight matrices in which many entries are exactly zero.
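A matching sketch for the L1 case; the absolute-value penalty is what drives many weights exactly to zero (lam is again arbitrary):

    import torch
    import torch.nn as nn

    model = nn.Linear(10, 1)
    criterion = nn.MSELoss()
    lam = 1e-3                            # arbitrary regularization strength

    x, y = torch.randn(8, 10), torch.randn(8, 1)
    loss = criterion(model(x), y)
    # L1 penalty: sum of absolute weights, encouraging exact zeros (sparsity)
    l1_penalty = sum(p.abs().sum() for p in model.parameters())
    (loss + lam * l1_penalty).backward()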

Question 3
What is the purpose of early stopping as a regularization technique?
To prevent overfitting

Early stopping halts training before all epochs have completed, typically once performance on a validation set stops improving, in order to prevent overfitting.
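A minimal sketch of patience-based early stopping; validate is a hypothetical helper returning a validation loss, and the training step itself is elided:

    import copy
    import torch.nn as nn

    model = nn.Linear(10, 1)              # toy model
    best_val, patience, bad_epochs = float("inf"), 5, 0
    best_state = copy.deepcopy(model.state_dict())

    for epoch in range(100):
        # ... one epoch of training would go here ...
        val_loss = validate(model)        # hypothetical validation helper
        if val_loss < best_val:
            best_val, bad_epochs = val_loss, 0
            best_state = copy.deepcopy(model.state_dict())  # remember best weights
        else:
            bad_epochs += 1
        if bad_epochs >= patience:        # no improvement for 5 epochs: stop
            break

    model.load_state_dict(best_state)     # restore the best checkpoint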

Question 4
Which of the following is a technique used for regularization in deep learning?
Dropout

Dropout is a regularization technique in which randomly selected neurons are dropped (zeroed out) during training, which helps prevent overfitting.
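A sketch of where dropout sits in a PyTorch model; the layer is active in train() mode and disabled in eval() mode:

    import torch
    import torch.nn as nn

    net = nn.Sequential(
        nn.Linear(20, 64),
        nn.ReLU(),
        nn.Dropout(p=0.5),                # each hidden unit zeroed with prob. 0.5
        nn.Linear(64, 2),
    )
    net.train()                           # dropout active during training
    out = net(torch.randn(4, 20))
    net.eval()                            # dropout disabled at inference time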

Question 5
Which of the following is a benefit of using multilayer perceptrons with multiple hidden layers?
They are less likely to overfit.

Question 6
Which of the following is a disadvantage of using multilayer perceptrons?
They can suffer from the vanishing gradient problem.

Multilayer perceptrons can suffer from the vanishing gradient problem, where gradients become very small as they are backpropagated through many layers. This can make training difficult and slow. While multilayer perceptrons are often computationally efficient and can be relatively easy to interpret, they do require labeled training data.
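A small numeric sketch of why the gradients vanish: the sigmoid derivative is at most 0.25, and backpropagation multiplies one such factor per layer, so the product shrinks geometrically with depth:

    import numpy as np

    def sigmoid_grad(x):
        s = 1.0 / (1.0 + np.exp(-x))
        return s * (1.0 - s)              # maximum value 0.25, at x = 0

    # product of per-layer derivative factors for a 20-layer chain
    grad = np.prod([sigmoid_grad(0.0) for _ in range(20)])
    print(grad)                           # ~9.1e-13: the gradient has vanished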

Question 7
Which of the following is true about the backpropagation algorithm?
It is used to compute gradients of a loss function with respect to the weights of a neural network.

Backpropagation is a widely used algorithm for computing gradients of a loss function with respect to the weights of a neural network. It only supplies gradients, so it is not guaranteed to find the global minimum of the loss; it can be used with recurrent as well as feedforward networks; and it requires differentiable activation functions.
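A minimal sketch of the algorithm in action via reverse-mode autodiff in PyTorch: a single call to .backward() fills in the gradient of the loss with respect to every weight:

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(4, 8), nn.Tanh(), nn.Linear(8, 1))
    x, y = torch.randn(16, 4), torch.randn(16, 1)

    loss = nn.MSELoss()(model(x), y)
    loss.backward()                       # backpropagation: dL/dw for every weight
    for name, p in model.named_parameters():
        print(name, p.grad.shape)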

Question 8
Which of the following is not a method for avoiding overfitting in multilayer perceptrons?
Removing hidden layers

Removing hidden layers is not typically used as a method for avoiding overfitting in multilayer perceptrons. Regularization, dropout, and early stopping are all commonly used techniques for this purpose.

Question 9
Which of the following activation functions is not typically used in multilayer perceptrons?
Softmax

While softmax is often used as the output activation function for multiclass classification problems, it is not typically used as an activation function for hidden layers in multilayer perceptrons.
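A sketch of the usual placement: a ReLU (or tanh/sigmoid) nonlinearity in the hidden layer, with softmax applied only to the output logits of a multiclass classifier:

    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(10, 32),
        nn.ReLU(),                        # typical hidden-layer activation
        nn.Linear(32, 3),                 # raw logits for 3 classes
    )
    logits = model(torch.randn(5, 10))
    probs = torch.softmax(logits, dim=1)  # softmax only at the output

Note that PyTorch's nn.CrossEntropyLoss applies log-softmax internally, so during training the model is usually left producing raw logits.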

Question 10
What is the purpose of the bias term in a neural network?
To shift the activation function to the left or right
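
A one-line numeric sketch of the shift: for a sigmoid unit sigma(w*x + b), changing b slides the curve along the input axis without changing its shape:

    import numpy as np

    def unit(x, w=1.0, b=0.0):
        return 1.0 / (1.0 + np.exp(-(w * x + b)))

    print(unit(0.0))                      # 0.5: curve centered at x = 0
    print(unit(0.0, b=2.0))               # ~0.88: b = 2 shifts the curve left
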
Question 11
Which of the following is a common technique used to prevent overfitting in deep learning?
All of the above

Early stopping, data augmentation, and dropout are all common techniques used to prevent overfitting in deep learning.
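Dropout and early stopping are sketched under Questions 3 and 4; for completeness, a sketch of data augmentation with torchvision (the specific transforms are an arbitrary choice):

    from torchvision import transforms

    # random flips and crops create label-preserving variants of each image,
    # effectively enlarging the training set and so reducing overfitting
    augment = transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.RandomCrop(32, padding=4),
        transforms.ToTensor(),
    ])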

Question 12
What is the primary benefit of using mini-batches during training in deep learning?
All of the above

Using mini-batches during training can lead to faster convergence to a good solution, improved generalization to new data, and reduced overfitting.
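A sketch of mini-batching with a PyTorch DataLoader (batch size 32 is an arbitrary choice); each iteration yields one mini-batch and hence one gradient step:

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    data = TensorDataset(torch.randn(1000, 10), torch.randn(1000, 1))
    loader = DataLoader(data, batch_size=32, shuffle=True)

    for xb, yb in loader:                 # one gradient step per mini-batch
        ...                               # forward, loss, backward, optimizer step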

Question 13
Which of the following is not a commonly used optimizer in deep learning?
Naive Bayes

Stochastic Gradient Descent (SGD), Adam, and RMSProp are all commonly used optimizers in deep learning, but Naive Bayes is not an optimizer; it is a classification algorithm.
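For reference, how the three real optimizers from the question are instantiated in PyTorch (learning rates are arbitrary):

    import torch
    import torch.nn as nn

    model = nn.Linear(10, 1)
    sgd = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    adam = torch.optim.Adam(model.parameters(), lr=1e-3)
    rmsprop = torch.optim.RMSprop(model.parameters(), lr=1e-3)
    # Naive Bayes has no counterpart here: it is a probabilistic classifier,
    # not a gradient-based weight-update rule.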

Question 14
What does the Perceptron Loss minimize?
The negative sum of the dot product between weights and inputs for all misclassified examples.

The perceptron loss minimizes the negative sum of the dot product between weights and inputs over all misclassified examples. It can be written as L(w) = -Σ_{i∈M} y_i (w^T x_i), where M is the set of misclassified examples, y_i ∈ {-1, +1} is the true label, x_i is the input, and w is the weight vector.
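A NumPy sketch of this loss, summing only over the misclassified examples (labels in {-1, +1}):

    import numpy as np

    def perceptron_loss(w, X, y):
        # margin y_i * (w^T x_i); an example is misclassified when margin <= 0
        margins = y * (X @ w)
        return -np.sum(margins[margins <= 0])

    w = np.array([1.0, -1.0])
    X = np.array([[2.0, 1.0], [1.0, 2.0]])
    y = np.array([1.0, 1.0])
    print(perceptron_loss(w, X, y))       # second example misclassified: loss = 1.0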

Question 15
What does the Perceptron Loss minimize?
The number of misclassified examples by a perceptron.

Question 16
What is the main advantage of using convolutional neural networks for image recognition tasks?
They can learn spatial hierarchies of features
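
A minimal sketch of why stacked convolutions produce a spatial hierarchy: early layers respond to small local patterns, and each pooling stage lets later layers combine them over a wider region:

    import torch
    import torch.nn as nn

    cnn = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),   # local edges and textures
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1),  # combinations of lower features
        nn.ReLU(),
        nn.MaxPool2d(2),                              # each unit now sees a larger region
    )
    out = cnn(torch.randn(1, 3, 32, 32))              # -> shape (1, 32, 8, 8)
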
Question 17
Which of the following is not a common approach to unsupervised pretraining in deep learning?
Convolutional Neural Networks

Question 18
Which of the following is not a commonly used regularization technique in deep learning?
Random forest regularization

Question 19
What is the main problem with using the vanilla gradient descent algorithm for training deep neural networks?
It can get stuck in local optima

Question 20
Which of the following is not a commonly used activation function in deep learning?
Linear
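
For contrast, the three commonly used nonlinearities side by side; a linear "activation" adds no expressive power, since a stack of linear layers collapses to a single linear map:

    import torch

    x = torch.linspace(-2.0, 2.0, 5)
    print(torch.relu(x))                  # ReLU: max(0, x)
    print(torch.sigmoid(x))               # sigmoid: squashes to (0, 1)
    print(torch.tanh(x))                  # tanh: squashes to (-1, 1)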