DeepLearning_GPT3_questions

Question 49
What is the PyTorch module used for building neural networks?
torch.nn
torch.utils
torch.tensor
torch.optim
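
For reference, a minimal sketch of how torch.nn is typically used to define a network; the layer sizes and class name here are arbitrary illustration:

```python
import torch
import torch.nn as nn

# torch.nn provides the building blocks (layers, activations, losses)
# for defining neural networks as nn.Module subclasses.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 32)   # fully connected layer
        self.fc2 = nn.Linear(32, 2)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        return self.fc2(x)

net = TinyNet()
out = net(torch.randn(4, 10))  # batch of 4 ten-dimensional inputs
print(out.shape)               # torch.Size([4, 2])
```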
Question 50
Which of the following is not a PyTorch data type?
FloatTensor
LongTensor
DoubleTensor
IntTensor
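
For context, a sketch of how tensor dtypes are specified in PyTorch; the FloatTensor/DoubleTensor/IntTensor/LongTensor class names from this question are legacy tensor types that map onto torch.dtype values:

```python
import torch

# Modern PyTorch code specifies dtypes via torch.dtype values;
# the legacy tensor classes correspond to them as follows:
x = torch.tensor([1.0, 2.0], dtype=torch.float32)  # FloatTensor
y = torch.tensor([1.0, 2.0], dtype=torch.float64)  # DoubleTensor
i = torch.tensor([1, 2], dtype=torch.int32)        # IntTensor
l = torch.tensor([1, 2], dtype=torch.int64)        # LongTensor
print(x.dtype, y.dtype, i.dtype, l.dtype)
```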
Question 51
What is the main difference between a traditional feedforward neural network and a recurrent neural network (RNN)?
Both RNNs and feedforward neural networks can process sequential data of varying lengths.
RNNs are better suited for image classification tasks than feedforward neural networks.
Feedforward neural networks can process sequential data of varying lengths while RNNs cannot.
RNNs can process sequential data of varying lengths while feedforward neural networks cannot.
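
A minimal sketch of the distinction behind this question: an nn.RNN applies the same weights at every time step, so it accepts sequences of any length, while a feedforward layer is tied to a fixed input dimension (sizes here are illustrative):

```python
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)

# The same RNN processes sequences of length 5 and length 12:
short_seq = torch.randn(1, 5, 8)
long_seq = torch.randn(1, 12, 8)
_, h_short = rnn(short_seq)
_, h_long = rnn(long_seq)
print(h_short.shape, h_long.shape)  # both (1, 1, 16): fixed-size state

# A feedforward layer only fits one flattened input size:
ff = nn.Linear(5 * 8, 16)  # accepts length-5 sequences, nothing else
```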
Question 52
What is the learning rate schedule used in Adam?
A learning rate that adapts based on the history of the gradients
A constant learning rate
An exponentially decreasing learning rate
A linearly decreasing learning rate
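
Adam keeps the nominal base learning rate fixed but scales each effective step by bias-corrected running statistics of the gradients. A rough sketch of the effective step size, with hypothetical values for the moving averages:

```python
import math

lr, eps = 1e-3, 1e-8
# m_hat and v_hat are bias-corrected moving averages of the gradient
# and the squared gradient (hypothetical values for illustration):
m_hat, v_hat = 0.5, 0.25
step = lr * m_hat / (math.sqrt(v_hat) + eps)
print(step)  # the effective step adapts to the gradient history
```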
Question 53
What is the role of the bias correction terms in Adam?
They increase the stability of the optimization process
They correct for the fact that the moving averages start at zero
They help to reduce the variance of the updates
They prevent the learning rate from getting too large
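
A small numeric sketch of why the bias correction is needed: the moving averages are initialized at zero, so the raw estimates are biased toward zero in early steps until divided by (1 - beta^t):

```python
beta1 = 0.9
g = 1.0   # suppose the gradient is constant at 1.0
m = 0.0   # moving average initialized at zero
for t in range(1, 4):
    m = beta1 * m + (1 - beta1) * g
    m_hat = m / (1 - beta1 ** t)   # bias-corrected estimate
    print(t, round(m, 4), round(m_hat, 4))
# m is 0.1, 0.19, 0.271 (biased low); m_hat recovers 1.0 at every step
```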
Question 54
What is the update rule for the moving average of the gradient in Adam?
m_t = beta_2 * m_{t-1} + (1 - beta_2) * g_t
v_t = beta_2 * v_{t-1} + (1 - beta_2) * g_t^2
v_t = beta_1 * v_{t-1} + (1 - beta_1) * g_t^2
m_t = beta_1 * m_{t-1} + (1 - beta_1) * g_t
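
Putting the two moving averages together, a minimal from-scratch Adam step for a single parameter, following the standard formulation (a sketch on a toy objective, not a production implementation):

```python
import math

beta1, beta2, lr, eps = 0.9, 0.999, 1e-3, 1e-8
m, v = 0.0, 0.0
theta = 1.0

def grad(x):
    return 2 * x  # gradient of f(x) = x^2, for illustration

for t in range(1, 101):
    g = grad(theta)
    m = beta1 * m + (1 - beta1) * g        # moving average of the gradient
    v = beta2 * v + (1 - beta2) * g ** 2   # moving average of the squared gradient
    m_hat = m / (1 - beta1 ** t)           # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta -= lr * m_hat / (math.sqrt(v_hat) + eps)
print(theta)  # moves toward the minimum at 0
```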
Question 55
What is the key feature of Adam that distinguishes it from other optimization algorithms?
It uses momentum to smooth the parameter updates
It scales the learning rate by the magnitude of the gradient
It computes an average of the past gradients for each parameter
It adapts the learning rate for each parameter
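
A sketch of per-parameter adaptation: two parameters with very differently scaled gradients end up taking steps of similar magnitude, because each is normalized by its own sqrt(v_hat). The gradient values are hypothetical:

```python
import math

beta1, beta2, lr, eps = 0.9, 0.999, 1e-3, 1e-8
grads = {"w1": 100.0, "w2": 0.01}   # constant gradients of very different scale
m = {k: 0.0 for k in grads}
v = {k: 0.0 for k in grads}
for t in range(1, 11):
    for k, g in grads.items():
        m[k] = beta1 * m[k] + (1 - beta1) * g
        v[k] = beta2 * v[k] + (1 - beta2) * g ** 2

for k in grads:
    m_hat = m[k] / (1 - beta1 ** 10)
    v_hat = v[k] / (1 - beta2 ** 10)
    print(k, lr * m_hat / (math.sqrt(v_hat) + eps))
# both steps come out near lr = 1e-3 despite the 10^4 gradient gap
```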
Question 56
Which of the following is an alternative to Adagrad that addresses its memory requirement issue?
RMSprop
Stochastic Gradient Descent
Adadelta
Adam
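
A sketch contrasting the two accumulators behind this question: Adagrad sums all past squared gradients, so its effective learning rate can only shrink, while Adadelta (and RMSprop) replace the ever-growing sum with an exponential moving average:

```python
import math

g, rho, eps = 1.0, 0.9, 1e-8
adagrad_acc, ema_acc = 0.0, 0.0
for t in range(1, 101):
    adagrad_acc += g ** 2                         # Adagrad: unbounded sum
    ema_acc = rho * ema_acc + (1 - rho) * g ** 2  # Adadelta/RMSprop: decaying average
print(1 / math.sqrt(adagrad_acc + eps))  # ~0.1, keeps shrinking as t grows
print(1 / math.sqrt(ema_acc + eps))      # ~1.0, stays bounded
```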