
NPTEL Introduction to Machine Learning Week 5 Assignment Answers 2024 (July-October)

Solutions for Week 5 Assignment: Key Concepts in Machine Learning

This article provides detailed solutions for the Week 5 assignment of the "Introduction to Machine Learning" course on NPTEL. Each question is answered with a clear explanation to help students understand the concepts better. This guide covers topics such as neural networks, feature transformations, weight initialization, Bayesian approaches, and more.


1. Given a 3-layer neural network that takes 10 inputs, has 5 hidden units, and produces 10 outputs, how many parameters are present in this network?

  • Answer: 115

  • Explanation: To calculate the total number of parameters in a neural network, you need to consider the weights and biases for each layer.

    • The input layer has 10 inputs and connects to 5 hidden units, so there are 10 × 5 = 50 weights.
    • The hidden layer has 5 units and connects to 10 output units, so there are 5 × 10 = 50 weights.
    • Additionally, there are 5 biases for the hidden units and 10 biases for the output units.
    • Thus, the total number of parameters is 50 + 50 + 5 + 10 = 115 (a quick computational check is sketched below).
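
A quick way to double-check this count is a small helper like the one below (a minimal sketch; the layer sizes [10, 5, 10] and the function name are just for illustration):

```python
def count_params(layer_sizes):
    """Count weights and biases in a fully connected network.

    Each consecutive pair of layers contributes fan_in * fan_out weights
    plus fan_out biases.
    """
    total = 0
    for fan_in, fan_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        total += fan_in * fan_out + fan_out  # weights + biases
    return total

print(count_params([10, 5, 10]))  # 10*5 + 5 + 5*10 + 10 = 115
```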

2. Recall the XOR (tabulated below) example from class where we did a transformation of features to make it linearly separable. Which of the following transformations can also work?

  • Answer: Applying a non-linear transformation of the features, e.g. adding the product x1·x2 as a new feature

  • Explanation: The XOR labels are not linearly separable in the original (x1, x2) plane, and no purely linear transformation, such as rotating x1 and x2 by a fixed angle, can change that, because linear maps preserve (non-)separability. A non-linear transformation, for example appending the product x1·x2 as a third feature, does make the two classes separable by a single hyperplane, as the sketch below shows.
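
A minimal NumPy sketch of this idea (the added feature x1·x2 and the hand-picked separating plane are illustrative choices, not the course's exact construction):

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])  # XOR labels

# Append the non-linear feature x1*x2 as a third coordinate.
X3 = np.column_stack([X, X[:, 0] * X[:, 1]])

# In this 3-D space the plane x1 + x2 - 2*x1*x2 = 0.5 separates the classes,
# so XOR has become linearly separable after the non-linear transformation.
w, b = np.array([1.0, 1.0, -2.0]), -0.5
print((X3 @ w + b > 0).astype(int))  # [0 1 1 0], matching y
```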


3. We saw several techniques to ensure that the weights of the neural network are small (such as random initialization around 0 or regularization). What conclusions can we draw if the weights of our ANN are high?

  • Answer: The model may have overfit the training data

  • Explanation: Unusually large weights typically mean the network has fit the training data too aggressively, i.e. it is likely overfitting; this is precisely why small random initialization and regularization (weight decay) are used to keep the weights small. Large weights can also make training unstable by amplifying activations and gradients. A minimal weight-decay sketch follows below.
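
As a simple illustration of how regularization keeps weights small, here is a minimal sketch of an L2 (weight-decay) penalty added to a squared-error loss; the function name and the value of `lam` are arbitrary choices for the example:

```python
import numpy as np

def ridge_loss_and_grad(w, X, y, lam=0.1):
    """Mean squared error plus an L2 penalty (weight decay).

    The penalty lam * ||w||^2 contributes 2 * lam * w to the gradient,
    which pulls the weights toward zero at every update and so
    discourages the very large weights associated with overfitting.
    """
    n = len(y)
    residual = X @ w - y
    loss = residual @ residual / n + lam * (w @ w)
    grad = 2 * X.T @ residual / n + 2 * lam * w
    return loss, grad
```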


4. In a basic neural network, which of the following is generally considered a good initialization strategy for the weights?

  • Answer: Initialize weights with small values close to zero

  • Explanation: Initializing weights with small random values close to zero (but not exactly zero) breaks the symmetry between neurons while keeping activations and gradients in a stable range, so the model can start learning effectively. This strategy is standard for ensuring that neurons develop distinct weights during training; a minimal sketch is given below.
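
A minimal NumPy sketch of such an initialization (the scale 0.01 and the layer sizes are illustrative; variance-scaled schemes such as Xavier/He initialization follow the same idea with a fan-in-dependent scale):

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(fan_in, fan_out, scale=0.01):
    """Small random weights break symmetry; biases can safely start at zero."""
    W = rng.normal(0.0, scale, size=(fan_in, fan_out))
    b = np.zeros(fan_out)
    return W, b

W1, b1 = init_layer(10, 5)   # input  -> hidden
W2, b2 = init_layer(5, 10)   # hidden -> output
```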


5. Which of the following is the primary reason for rescaling input features before passing them to a neural network?

  • Answer: To ensure faster, more stable convergence and to prevent features with large scales from dominating the learning process

  • Explanation: Rescaling input features helps in normalizing the data, which leads to faster convergence and better model performance. It ensures that each feature contributes equally to the network's learning process, preventing the model from getting biased towards features with larger scales.
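
A minimal sketch of the usual standardization step (zero mean, unit variance per feature); the sample data is made up for illustration:

```python
import numpy as np

def standardize(X, eps=1e-8):
    """Rescale each column to zero mean and unit variance."""
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    return (X - mean) / (std + eps)

# Two features on wildly different scales.
X = np.array([[1.0, 2000.0],
              [2.0, 3000.0],
              [3.0, 1000.0]])
print(standardize(X))  # both columns now vary over a comparable range
```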


6. In the Bayesian approach to machine learning, we often use the formula P(θ|D) = P(D|θ)·P(θ) / P(D), where D represents the observed data. Which of the following correctly identifies each term in this formula?

  • Answer: P(θ|D) is the posterior, P(D|θ) is the likelihood, P(θ) is the prior, and P(D) is the evidence

  • Explanation: The formula is Bayes' theorem, where P(θ|D) is the posterior probability of the model parameters θ given the data D. P(D|θ) is the likelihood, representing how probable the observed data is given the parameters. P(θ) is the prior probability of the parameters, and P(D) is the evidence, the total probability of observing the data averaged over all possible parameter values.
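
A tiny numeric illustration of the four terms, with two candidate parameter values and made-up probabilities:

```python
# Two candidate parameter values; the numbers are purely illustrative.
prior = {"theta_a": 0.5, "theta_b": 0.5}       # P(theta)
likelihood = {"theta_a": 0.8, "theta_b": 0.2}  # P(D | theta)

# Evidence: P(D) = sum over theta of P(D | theta) * P(theta)
evidence = sum(likelihood[t] * prior[t] for t in prior)

# Posterior: P(theta | D) = P(D | theta) * P(theta) / P(D)
posterior = {t: likelihood[t] * prior[t] / evidence for t in prior}
print(evidence)   # 0.5
print(posterior)  # {'theta_a': 0.8, 'theta_b': 0.2}
```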


7. Why do we often use log-likelihood maximization instead of directly maximizing the likelihood in statistical learning?

  • Answer: It turns products of probabilities into sums, which are easier to compute and differentiate and which avoid numerical underflow

  • Explanation: Log-likelihood is preferred because it simplifies the mathematical computation, turning products into sums, which are easier to work with, especially when dealing with large datasets or complex models. It also avoids numerical underflow issues that can arise when multiplying many probabilities.
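
The underflow point is easy to demonstrate; a minimal sketch (the probability 0.1 and the sample size are arbitrary):

```python
import math

# 10,000 i.i.d. observations, each assigned probability 0.1 by the model.
probs = [0.1] * 10_000

likelihood = 1.0
for p in probs:
    likelihood *= p
print(likelihood)       # 0.0 -- the product underflows in double precision

log_likelihood = sum(math.log(p) for p in probs)
print(log_likelihood)   # about -23025.85, perfectly representable
```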


8. In machine learning, if you have an infinite amount of data, but your prior distribution is incorrect, will you still converge to the right solution?

  • Answer: Yes, with infinite data, the influence of the prior becomes negligible, and you will converge to the true underlying distribution

  • Explanation: As the amount of data increases, the influence of the prior diminishes and the model relies more and more on the observed data. Therefore, even if the prior is incorrect, with an infinite amount of data the estimate will still converge to the correct solution, provided the prior does not assign zero probability to the true parameter values. The sketch below shows the prior being overwhelmed by data in a simple Beta-Bernoulli model.
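
A minimal Beta-Bernoulli sketch of this effect (the true rate 0.7 and the deliberately wrong prior Beta(50, 1) are assumptions made for the illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
true_p = 0.7
alpha, beta = 50.0, 1.0  # strongly wrong prior: its mean is about 0.98

for n in (10, 1_000, 100_000):
    data = rng.binomial(1, true_p, size=n)
    posterior_mean = (alpha + data.sum()) / (alpha + beta + n)
    print(n, round(posterior_mean, 3))
# As n grows the posterior mean approaches the true value 0.7:
# the data overwhelms the (incorrect) prior.
```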


9. Statement: Threshold function cannot be used as activation function for hidden layers. Reason: Threshold functions do not introduce non-linearity.

  • Answer: The assertion is correct, but the reason is false

  • Explanation: A threshold (step) function is in fact non-linear, so the stated reason is incorrect. The real obstacle is that its derivative is zero everywhere away from the threshold (and undefined at it), so gradient-based training receives no learning signal through hidden units that use it; this is why smooth non-linearities such as sigmoid, tanh, or ReLU are used instead, as illustrated by the sketch below.
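
A small sketch of why the zero gradient is the real obstacle (the grid of points and the numerical-derivative step are arbitrary choices):

```python
import numpy as np

def step(z):
    return (z > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-3.0, -1.5, -0.5, 0.5, 1.5, 3.0])
h = 1e-4  # step for a central-difference derivative estimate

# The step function's derivative is zero everywhere away from the threshold,
# so backpropagation receives no learning signal through it.
print((step(z + h) - step(z - h)) / (2 * h))      # all zeros
# A smooth activation such as the sigmoid gives non-zero gradients.
print((sigmoid(z + h) - sigmoid(z - h)) / (2 * h))
```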


10. Choose the correct statement (multiple may be correct):

  • Answer: MLE is a special case of MAP when prior is a uniform distribution

  • Explanation: Maximum Likelihood Estimation (MLE) can be considered a special case of Maximum A Posteriori (MAP) estimation when the prior distribution is uniform. In that case the posterior is proportional to the likelihood, so maximizing the posterior gives exactly the same estimate as maximizing the likelihood; a small sketch of this follows below.
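
A small coin-flip sketch of this relationship; the counts and the grid of candidate values are made up for the illustration:

```python
import numpy as np

heads, tails = 7, 3
thetas = np.linspace(0.01, 0.99, 99)  # candidate values of theta

log_lik = heads * np.log(thetas) + tails * np.log(1 - thetas)
log_prior = np.zeros_like(thetas)     # uniform prior: log P(theta) is constant

mle_estimate = thetas[np.argmax(log_lik)]
map_estimate = thetas[np.argmax(log_lik + log_prior)]
print(mle_estimate, map_estimate)  # both ~0.7: a constant prior cannot move the argmax
```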

