
DeepLearning_GPT3_questions

Question 9
Which of the following activation functions is not typically used in multilayer perceptrons?
Sigmoid
Softmax
Tanh
ReLU
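For reference, the four options above behave quite differently: sigmoid, tanh, and ReLU act elementwise and are standard hidden-layer activations in multilayer perceptrons, while softmax normalizes an entire vector into a probability distribution and therefore typically appears only at the output layer. A minimal NumPy sketch (illustrative only, not part of the quiz source):

```python
import numpy as np

def sigmoid(x):
    # Elementwise squashing to (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # Elementwise max(0, x)
    return np.maximum(0.0, x)

def softmax(x):
    # Acts on the whole vector: outputs sum to 1; subtracting the
    # max keeps exp() numerically stable.
    e = np.exp(x - np.max(x))
    return e / e.sum()

z = np.array([-1.0, 0.0, 2.0])
print(sigmoid(z), np.tanh(z), relu(z), softmax(z))
```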
Question 10
What is the purpose of the bias term in a neural network?
To introduce non-linearity into the network
To ensure that the output is always positive
To reduce the risk of overfitting
To shift the activation function to the left or right
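A quick way to see the intended answer to Question 10: with pre-activation w·x + b, the bias b slides the activation curve along the input axis without changing its shape. A tiny illustrative sketch:

```python
import numpy as np

def neuron(x, w, b):
    # Single-input sigmoid neuron: the bias b shifts where the
    # output crosses 0.5 (at x = -b/w), i.e. it moves the curve
    # left or right without changing its shape.
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

x = np.linspace(-4.0, 4.0, 9)
print(neuron(x, w=1.0, b=0.0))  # centered at x = 0
print(neuron(x, w=1.0, b=2.0))  # same curve shifted left by 2
```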
Question 11
Which of the following is a common technique used to prevent overfitting in deep learning?
Early stopping
Dropout
Data augmentation
All of the above
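All three named techniques are in common use: early stopping halts training once validation loss stops improving, data augmentation enlarges the effective training set, and dropout randomly silences units during training. A minimal inverted-dropout sketch in NumPy (illustrative, not how any particular framework implements it):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(h, p=0.5, training=True):
    # Inverted dropout: zero each unit with probability p during
    # training, then rescale by 1/(1-p) so the expected activation
    # is unchanged; do nothing at inference time.
    if not training:
        return h
    mask = rng.random(h.shape) >= p
    return h * mask / (1.0 - p)

h = np.ones(8)
print(dropout(h, p=0.5))           # roughly half the units zeroed
print(dropout(h, training=False))  # identity at inference
```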
Question 12
What is the primary benefit of using mini-batches during training in deep learning?
Faster convergence to a good solution
Reduction of overfitting
Improved generalization to new data
All of the above
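The point of Question 12 is that a mini-batch gradient is a cheap, slightly noisy estimate of the full-dataset gradient, so the model takes many fast updates per epoch. A toy mini-batch SGD loop on linear regression (illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 3x + noise
X = rng.normal(size=1000)
y = 3.0 * X + 0.1 * rng.normal(size=1000)

w, lr, batch_size = 0.0, 0.1, 32
for epoch in range(5):
    order = rng.permutation(len(X))               # reshuffle each epoch
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        xb, yb = X[idx], y[idx]
        grad = 2.0 * np.mean((w * xb - yb) * xb)  # MSE gradient on the batch
        w -= lr * grad                            # one cheap, noisy update
print(w)  # close to 3.0 after a few epochs
```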
Question 13
Which of the following is not a commonly used optimizer in deep learning?
Stochastic Gradient Descent (SGD)
RMSProp
Naive Bayes
Adam
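Naive Bayes is the odd one out here: it is a probabilistic classifier, not an optimizer, whereas SGD, RMSProp, and Adam are all gradient-based update rules. As a reference point, here is one Adam step written out in NumPy (a sketch of the published update rule, not any framework's internals):

```python
import numpy as np

def adam_step(w, grad, state, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    # Adam keeps moving averages of the gradient (m) and its square
    # (v), with bias correction for the early steps.
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * grad
    state["v"] = b2 * state["v"] + (1 - b2) * grad ** 2
    m_hat = state["m"] / (1 - b1 ** state["t"])
    v_hat = state["v"] / (1 - b2 ** state["t"])
    return w - lr * m_hat / (np.sqrt(v_hat) + eps)

w = np.array([5.0])
state = {"t": 0, "m": np.zeros_like(w), "v": np.zeros_like(w)}
for _ in range(500):
    w = adam_step(w, 2.0 * w, state)  # gradient of f(w) = w**2
print(w)  # driven close to the minimum at 0
```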
Question 14
What does the perceptron loss minimize?
The sum of the absolute differences between predicted and target values.
The mean squared error between predicted and target values.
The negative sum, over all misclassified examples, of the label times the dot product between weights and inputs.
The entropy of the predicted probabilities compared to the true labels.
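Written out, the perceptron criterion the correct option describes is (with labels y_i ∈ {−1, +1} and M the set of currently misclassified examples):

```latex
L(\mathbf{w}) \;=\; -\sum_{i \in M} y_i \,(\mathbf{w} \cdot \mathbf{x}_i),
\qquad
M \;=\; \{\, i : y_i \,(\mathbf{w} \cdot \mathbf{x}_i) \le 0 \,\}.
```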
Question 15
What does the perceptron loss minimize?
The squared difference between the predicted output and the target output of a perceptron.
The number of iterations required for a perceptron to converge.
The average of the distances between the decision boundary and the training examples.
The number of misclassified examples by a perceptron.
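Questions 14 and 15 describe the same objective from two angles: the classic perceptron rule updates only on mistakes, and on linearly separable data it keeps going until no misclassified examples remain. A self-contained illustrative sketch:

```python
import numpy as np

def perceptron_train(X, y, epochs=20, lr=1.0):
    # Classic perceptron rule: update only on misclassified examples
    # (y_i * w.x_i <= 0), nudging w toward each mistake.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * np.dot(w, xi) <= 0:
                w += lr * yi * xi
    return w

# Linearly separable toy data; last column is a constant bias input
X = np.array([[1.0, 2.0, 1.0], [2.0, 1.0, 1.0],
              [-1.0, -2.0, 1.0], [-2.0, -1.0, 1.0]])
y = np.array([1, 1, -1, -1])
w = perceptron_train(X, y)
print(np.sign(X @ w))  # matches y: no misclassified examples remain
```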
Question 16
What is the main advantage of using convolutional neural networks for image recognition tasks?
They can learn spatial hierarchies of features
They can handle variable-sized inputs
They are more interpretable than other types of neural networks
They require less training data than other types of neural networks
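The "spatial hierarchies" answer refers to the fact that a convolutional layer slides one small, shared kernel over every image position, detecting local patterns (edges, corners) that deeper layers combine into larger structures. A toy single-filter convolution in NumPy (illustrative only):

```python
import numpy as np

def conv2d(image, kernel):
    # Valid 2-D cross-correlation (what DL frameworks call
    # "convolution"): the same small kernel is applied at every
    # position, so weights are shared across the whole image.
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((6, 6))
image[:, 3] = 1.0                       # a vertical stripe
sobel_x = np.array([[1.0, 0.0, -1.0],
                    [2.0, 0.0, -2.0],
                    [1.0, 0.0, -1.0]])  # vertical-edge detector
print(conv2d(image, sobel_x))           # strong response at the edges
```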