Questions and answers

DeepLearning_GPT3_questions

Collected questions and answers for this set.
Number of questions: 85 | Solved: 603 times
Question 61
What is the purpose of the peephole connections in a peephole LSTM?
To allow the gates to observe the current cell state directly

c) To allow the gates to observe the current cell state directly. The peephole connections in a peephole LSTM allow the gates to directly observe the current cell state, so the gating decisions can take the stored memory into account rather than relying only on the previous hidden state and the current input.
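
For reference, one common formulation of the peephole gates (following Gers and Schmidhuber's peephole LSTM; exact variants differ between implementations) adds terms that read the cell state into each gate:

```latex
\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + w_f \odot c_{t-1} + b_f) \\
i_t &= \sigma(W_i x_t + U_i h_{t-1} + w_i \odot c_{t-1} + b_i) \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + w_o \odot c_{t} + b_o)
\end{aligned}
```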

Question 62
Which of the following is true about the output of an LSTM cell?
The output is a function of both the hidden state and the cell state

d) The output is a function of both the hidden state and the cell state. The hidden state summarizes the information processed by the LSTM cell up to the current time step, and the new output draws on both sources: the output gate, which depends on the previous hidden state and the current input, scales the tanh-squashed cell state to produce the hidden state that is emitted and passed on to the next time step.
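
In symbols (a sketch of the standard LSTM formulation):

```latex
\begin{aligned}
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
```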

Question 63
What is the difference between a standard LSTM cell and a peephole LSTM cell?
Peephole LSTM cells have additional connections from the cell state to the gates

b) Peephole LSTM cells have additional connections from the cell state to the gates. In addition to the input, forget, and output gates used in a standard LSTM cell, peephole LSTM cells also have connections from the cell state to each of the gates. This allows the gates to directly observe the cell state and make more informed decisions about how to update it.
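
A minimal NumPy sketch of the difference for a single gate (the weight names W, U, p and the bias b are illustrative placeholders, not from any particular library):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def standard_gate(x, h_prev, W, U, b):
    # Standard LSTM gate: sees only the current input and previous hidden state.
    return sigmoid(W @ x + U @ h_prev + b)

def peephole_gate(x, h_prev, c_prev, W, U, p, b):
    # Peephole LSTM gate: additionally "peeks" at the cell state through a
    # per-element (diagonal) peephole weight vector p.
    return sigmoid(W @ x + U @ h_prev + p * c_prev + b)
```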

Question 64
Which of the following is an advantage of using LSTM over traditional recurrent neural networks (RNNs)?
LSTMs can handle variable-length sequences

a) LSTMs can handle variable-length sequences. LSTMs are designed to address the vanishing gradient problem that occurs in traditional RNNs, which makes it difficult for them to capture long-term dependencies in sequences. LSTMs achieve this by using a memory cell and various gates to control the flow of information through the cell. This makes LSTMs well-suited for handling variable-length sequences, which is a common use case in many applications.

Question 65
What is the purpose of the forget gate in an LSTM cell?
To decide whether to update the cell state or not

d) To decide whether to update the cell state or not. The forget gate takes as input the concatenation of the previous hidden state and the current input, and outputs a value between 0 and 1 for each element in the cell state vector. This value determines how much of the corresponding element in the previous cell state should be kept or forgotten.
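
In equations (standard notation, as a sketch): the forget gate scales the previous cell state element-wise, while the input gate scales the candidate update:

```latex
\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t
\end{aligned}
```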

Question 66
Which of the following is NOT a type of gate in an LSTM?
Update gate

d) Update gate is not a type of gate in an LSTM. The three types of gates are forget gate, input gate, and output gate. The update of the cell state is controlled by the forget and input gates, which determine what information should be retained and added to the cell state, respectively.

Question 67
What is the purpose of the teacher forcing technique in training RNNs?
To provide the network with the correct input at each time step during training.

d) To provide the network with the correct input at each time step during training. In the teacher forcing technique, the correct output from the previous time step is used as input to the network at the current time step during training, instead of using the output predicted by the network at the previous time step. This is done to ensure that the network receives the correct input during training and can learn to make accurate predictions.
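
A schematic decoding step that illustrates the idea (decoder_step, targets, and state are hypothetical placeholders, not a specific library API):

```python
def decode_with_teacher_forcing(decoder_step, targets, state):
    """Sketch of teacher forcing: feed the ground-truth token from step t-1
    as the input at step t, instead of the model's own previous prediction."""
    predictions = []
    prev_token = targets[0]            # e.g. a start-of-sequence token
    for t in range(1, len(targets)):
        pred, state = decoder_step(prev_token, state)
        predictions.append(pred)
        prev_token = targets[t]        # teacher forcing: use the true token
        # Without teacher forcing we would instead feed the model's own output:
        # prev_token = pred.argmax()
    return predictions
```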

Question 68
What is the difference between a unidirectional and bidirectional RNN?
A unidirectional RNN can only process data in one direction, while a bidirectional RNN can process data in both directions.

a) A unidirectional RNN can only process data in one direction, while a bidirectional RNN can process data in both directions. In a unidirectional RNN, information flows only from the past to the future, whereas in a bidirectional RNN, information flows in both directions, allowing the network to take into account past and future context when making predictions.
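
For example, in PyTorch (assuming torch is available; shapes follow the default batch_first=False convention), a bidirectional LSTM concatenates the forward and backward hidden states:

```python
import torch
import torch.nn as nn

seq_len, batch, input_size, hidden_size = 10, 4, 8, 16
x = torch.randn(seq_len, batch, input_size)

uni = nn.LSTM(input_size, hidden_size)                      # past -> future only
bi = nn.LSTM(input_size, hidden_size, bidirectional=True)   # both directions

out_uni, _ = uni(x)   # shape: (10, 4, 16)
out_bi, _ = bi(x)     # shape: (10, 4, 32), forward and backward states concatenated
```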

Question 69
What is the long short-term memory (LSTM) architecture designed to address in RNNs?
The vanishing gradient problem.

a) The vanishing gradient problem. The LSTM architecture is designed to address the issue of the gradients becoming too small during backpropagation in RNNs. It uses a memory cell and three gates (input, output, and forget) to selectively remember or forget information over time.
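
Intuitively (a sketch that ignores the indirect paths through the gates), the additive cell-state update creates a gradient path whose local factor is the forget gate itself, which can stay close to 1 instead of shrinking at every step:

```latex
c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t
\quad\Rightarrow\quad
\frac{\partial c_t}{\partial c_{t-1}} \approx \operatorname{diag}(f_t)
```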

Question 70
What is the vanishing gradient problem in recurrent neural networks (RNNs)?
The gradients become too small during backpropagation.

b) The gradients become too small during backpropagation. The vanishing gradient problem refers to the issue of the gradients becoming extremely small as they are backpropagated through many time steps in an RNN. This can make it difficult for the network to learn long-term dependencies.
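
For a simple RNN with $h_t = \tanh(W_{hh} h_{t-1} + W_{xh} x_t)$, the gradient reaching a distant time step is a long product of Jacobians, which tends toward zero when their norms are below 1:

```latex
\frac{\partial h_T}{\partial h_t}
= \prod_{k=t+1}^{T} \frac{\partial h_k}{\partial h_{k-1}}
= \prod_{k=t+1}^{T} \operatorname{diag}\!\left(1 - h_k^2\right) W_{hh}
```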

Question 71
Which of the following is a potential application of UNet in medical image analysis?
Segmenting tumor regions in an MRI

c) Segmenting tumor regions in an MRI. UNet has been used in various medical image analysis tasks, including segmenting tumor regions in MRIs and identifying different types of cells in histology images.

Question 72
How does UNet handle class imbalance in image segmentation tasks?
By weighting the loss function for underrepresented classes

a) By weighting the loss function for underrepresented classes. UNet typically uses a modified cross-entropy loss function that weights the contribution of each pixel to the loss based on the frequency of its class in the training data.
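
One common way to implement this, sketched with PyTorch's cross-entropy loss (the weight values here are illustrative, typically derived from inverse class frequencies):

```python
import torch
import torch.nn as nn

# Hypothetical 3-class segmentation task where class 0 (background) dominates.
# Rarer classes get larger weights so they contribute more to the loss.
class_weights = torch.tensor([0.2, 1.0, 2.5])
criterion = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(4, 3, 64, 64)           # (batch, classes, H, W)
targets = torch.randint(0, 3, (4, 64, 64))   # per-pixel class labels
loss = criterion(logits, targets)
```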

Question 73
Which operation is used in UNet to recover the image resolution?
Upsampling

Upsampling. The expansive path in UNet uses upsampling and transposed convolution operations to recover the image resolution.
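
For instance, a transposed convolution with stride 2 doubles the spatial resolution (a PyTorch sketch with illustrative channel counts; UNet itself alternates such upsampling steps with convolutions and skip connections):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 128, 32, 32)                     # (batch, channels, H, W)
up = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
y = up(x)
print(y.shape)                                      # torch.Size([1, 64, 64, 64])
```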

Question 74
Which part of UNet captures the context of an image?
The contracting path

The contracting path (encoder) captures context by repeatedly applying convolutions and downsampling, which enlarges the receptive field; the expansive path then uses that context for precise localization.

Question 75
What type of deep learning task is UNet commonly used for?
Image segmentation

Image segmentation. UNet is often used for segmenting images into different regions, such as identifying different types of cells in a medical image.

Question 76
Which of the following is an application of autoencoders?
All of the above

d) All of the above

Explanation: Autoencoders have a wide range of applications, including image denoising, dimensionality reduction, and anomaly detection, among others.

Question 77
What is the difference between a denoising autoencoder and a standard autoencoder?
Denoising autoencoders use noisy data as input during training.

d) Denoising autoencoders use noisy data as input during training.

Explanation: Denoising autoencoders are trained using noisy data as input, with the objective of reconstructing the original, noise-free input. This helps the autoencoder to learn more robust representations.
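
Schematically (a sketch; model and loss_fn stand for any autoencoder network and reconstruction loss), the only change from a standard autoencoder is that the input is corrupted while the reconstruction target stays clean:

```python
import torch

def denoising_step(model, loss_fn, clean_batch, noise_std=0.3):
    # Corrupt the input with Gaussian noise, but reconstruct the clean data.
    noisy_batch = clean_batch + noise_std * torch.randn_like(clean_batch)
    reconstruction = model(noisy_batch)
    return loss_fn(reconstruction, clean_batch)   # target is the clean input
```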

Question 78
What is the bottleneck layer in an autoencoder?
The last hidden layer in the encoder network.

b) The last hidden layer in the encoder network.

Explanation: The bottleneck layer is the last hidden layer in the encoder network, which contains a compressed representation of the input.
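
For example (an illustrative PyTorch sketch with arbitrary layer sizes), the 32-unit layer below is the bottleneck holding the compressed code:

```python
import torch.nn as nn

encoder = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 32),              # bottleneck: 784-dim input compressed to 32 dims
)
decoder = nn.Sequential(
    nn.Linear(32, 256), nn.ReLU(),
    nn.Linear(256, 784),
)
autoencoder = nn.Sequential(encoder, decoder)
```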

Question 79
Which of the following is a type of regularized autoencoder?
All of the above

d) All of the above

Explanation: Denoising autoencoders, contractive autoencoders, and sparse autoencoders are all examples of regularized autoencoders.

Question 80
What is the objective of a variational autoencoder (VAE)?
To maximize the lower bound on the log-likelihood of the data.

d) To maximize the lower bound on the log-likelihood of the data.

Explanation: The objective of a VAE is to maximize the lower bound on the log-likelihood of the data, also known as the Evidence Lower Bound (ELBO).
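
Written out in the standard notation, the ELBO combines a reconstruction term with a KL term that keeps the approximate posterior close to the prior:

```latex
\mathcal{L}(\theta, \phi; x) =
\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
- D_{\mathrm{KL}}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right)
\le \log p_\theta(x)
```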

Question 81
Which of the following pooling methods is designed to capture spatial context in the feature maps?
Spatial pyramid pooling

d) Spatial pyramid pooling. Spatial pyramid pooling divides the feature maps into a pyramid of pooling regions, and pools each region using max or average pooling. This allows the model to capture information about the spatial context of the features at multiple scales.
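
A rough sketch of the idea using PyTorch's adaptive pooling (the grid sizes chosen here are illustrative): the same feature maps are pooled to several fixed grids and the results concatenated.

```python
import torch
import torch.nn as nn

def spatial_pyramid_pool(features, grid_sizes=(1, 2, 4)):
    # Pool the feature maps to several fixed spatial grids and concatenate,
    # so the output captures context at multiple scales and has a fixed size.
    pooled = [nn.AdaptiveMaxPool2d(g)(features).flatten(start_dim=1)
              for g in grid_sizes]
    return torch.cat(pooled, dim=1)

x = spatial_pyramid_pool(torch.randn(4, 256, 13, 13))
print(x.shape)   # torch.Size([4, 5376]) = 256 * (1 + 4 + 16) features per sample
```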

Question 82
What is the main disadvantage of using pooling layers in convolutional neural networks?
They can reduce the representational capacity of the model

b) They can reduce the representational capacity of the model. Pooling layers can discard information from the feature maps and reduce the spatial resolution of the output volume, which can limit the ability of the model to capture fine-grained details.

Question 83
Which of the following statements about max pooling is true?
It performs the same operation on all feature maps in a given layer

c) It performs the same operation on all feature maps in a given layer. Max pooling takes the maximum value in each pooling region within each feature map, and this same operation is applied independently and uniformly to every feature map in the layer.

Question 84
Which of the following pooling methods does not involve any parameter learning?
Global pooling

d) Global pooling. Global pooling takes each entire feature map as input and outputs a single value per map (for example, its maximum or average), without any parameter learning.

Question 85
What is the purpose of pooling layers in convolutional neural networks?
To reduce the spatial dimensions of the output volume

b) To reduce the spatial dimensions of the output volume. Pooling layers are used to downsample the spatial dimensions of the feature maps, which reduces the number of parameters in the model and helps to prevent overfitting.
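
For example (a PyTorch sketch with an illustrative tensor shape), a 2x2 max pooling with stride 2 halves the spatial dimensions while leaving the number of feature maps unchanged:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 56, 56)          # (batch, channels, H, W)
pool = nn.MaxPool2d(kernel_size=2, stride=2)
print(pool(x).shape)                    # torch.Size([1, 64, 28, 28])
```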