
DeepLearning_GPT3_questions

Question 1
Which of the following is a type of regularization that encourages weight values to be small but non-zero?
Dropout regularization
None of the above
L1 regularization
L2 regularization
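A minimal sketch of L2 regularization, the technique that keeps weights small but typically non-zero: a squared-weight penalty is added to the loss, and its gradient (2·lam·w) shrinks as a weight approaches zero, so weights are pulled toward zero without being driven exactly to it. The strength `lam = 0.01` is an illustrative choice, not from the source.

```python
# Sketch: L2 (weight decay) penalty added to a loss. lam is a hypothetical
# hyperparameter; in practice it is tuned on a validation set.

def l2_penalty(weights, lam=0.01):
    """lam * sum(w^2): pushes weights toward zero, but rarely to exactly zero."""
    return lam * sum(w * w for w in weights)

def l2_gradient(weights, lam=0.01):
    """Penalty gradient 2*lam*w: the pull toward zero weakens as w shrinks."""
    return [2.0 * lam * w for w in weights]

weights = [0.5, -1.2, 3.0]
penalty = l2_penalty(weights)   # added to the data loss during training
grad = l2_gradient(weights)     # added to the data-loss gradient
```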
Question 2
Which of the following is a type of regularization that encourages sparse weight matrices?
Dropout regularization
None of the above
L2 regularization
L1 regularization
Question 3
What is the purpose of early stopping as a regularization technique?
To prevent overfitting
To minimize the validation loss
To minimize the training loss
To minimize the sum of training and validation loss
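Early stopping prevents overfitting by monitoring the validation loss and halting training once it stops improving. A minimal sketch with a synthetic loss curve and a hypothetical `patience` setting:

```python
# Sketch: early stopping with patience. The validation-loss curve below is
# synthetic: it improves for a few epochs, then rises as the model overfits.

val_losses = [1.0, 0.8, 0.6, 0.55, 0.56, 0.58, 0.60]
patience = 2  # stop after this many epochs without improvement
best_loss = float("inf")
best_epoch = 0
stopped_at = None

for epoch, loss in enumerate(val_losses):
    if loss < best_loss:
        best_loss, best_epoch = loss, epoch  # new best: keep training
    elif epoch - best_epoch >= patience:
        stopped_at = epoch  # no improvement for `patience` epochs: stop
        break
```

In practice the weights from `best_epoch` (the validation-loss minimum) are restored, so the model never trains into the overfitting regime.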
Question 4
Which of the following is a technique used for regularization in deep learning?
Gradient Descent
Dropout
Softmax
Stochastic Gradient Descent
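Dropout, the regularization technique among these options, randomly zeroes activations during training. A minimal sketch of the common "inverted dropout" form, where survivors are rescaled by 1/(1-p) so the expected activation is unchanged and the layer is the identity at test time; the fixed seed is for reproducibility only.

```python
# Sketch: inverted dropout. Each activation is dropped with probability p
# during training; survivors are scaled by 1/(1-p). At test time, no-op.

import random

def dropout(activations, p, training, rng):
    if not training:
        return list(activations)  # identity at inference time
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

rng = random.Random(0)  # fixed seed, illustrative only
acts = [1.0] * 1000
dropped = dropout(acts, p=0.5, training=True, rng=rng)
zero_fraction = dropped.count(0.0) / len(dropped)
inference_out = dropout(acts, p=0.5, training=False, rng=rng)
```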
Question 5
Which of the following is a benefit of using multilayer perceptrons with multiple hidden layers?
They are less computationally expensive.
They require less labeled training data.
They are more easily interpretable.
They are less likely to overfit.
Question 6
Which of the following is a disadvantage of using multilayer perceptrons?
They do not require labeled training data.
They are computationally efficient.
They are easy to interpret.
They can suffer from the vanishing gradient problem.
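The vanishing gradient problem mentioned above can be sketched numerically: the sigmoid derivative is at most 0.25, so the chain rule through many sigmoid layers multiplies many small factors and the gradient shrinks roughly geometrically with depth. The unit weights below are an illustrative assumption.

```python
# Sketch: vanishing gradients in a deep chain of sigmoid units.
# Evaluated at x = 0, where sigmoid'(x) attains its maximum of 0.25,
# so this is the *best* case for the sigmoid chain.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)

depth = 20
grad = 1.0
for _ in range(depth):
    grad *= sigmoid_grad(0.0) * 1.0  # per-layer derivative * (assumed) weight
# grad is now 0.25**20, vanishingly small for the early layers
```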
Question 7
Which of the following is true about the backpropagation algorithm?
It is used to compute gradients of a loss function with respect to the weights of a neural network.
It is only used for feedforward neural networks.
It is guaranteed to find the global minimum of the loss function.
It does not require the use of an activation function.
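Backpropagation's job, computing gradients of the loss with respect to the weights, can be sketched on a one-weight network y = sigmoid(w·x) with squared-error loss, and checked against a finite-difference gradient. The input, target, and weight values are arbitrary illustrative choices.

```python
# Sketch: backprop (chain rule) for a single-neuron network, verified
# against a central-difference numerical gradient.

import math

def forward(w, x):
    return 1.0 / (1.0 + math.exp(-w * x))  # y = sigmoid(w*x)

def loss(w, x, t):
    y = forward(w, x)
    return (y - t) ** 2  # squared error

def backprop_grad(w, x, t):
    y = forward(w, x)       # forward pass
    dL_dy = 2.0 * (y - t)   # derivative of (y - t)^2 w.r.t. y
    dy_dz = y * (1.0 - y)   # sigmoid'(z) with z = w*x
    dz_dw = x               # derivative of w*x w.r.t. w
    return dL_dy * dy_dz * dz_dw  # chain rule

w, x, t = 0.5, 2.0, 1.0     # illustrative values
analytic = backprop_grad(w, x, t)
eps = 1e-6
numeric = (loss(w + eps, x, t) - loss(w - eps, x, t)) / (2 * eps)
```

The two gradients agree to numerical precision, which is the standard sanity check ("gradient checking") for a backpropagation implementation.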
Question 8
Which of the following is not a method for avoiding overfitting in multilayer perceptrons?
Removing hidden layers
Dropout
Regularization