NPTEL Introduction to Machine Learning Week 4 Assignment Answers 2024 (July-October)

Solutions for NPTEL Introduction to Machine Learning

This post provides detailed solutions and explanations for the Week 4 assignment of the NPTEL course "Introduction to Machine Learning." The assignment covers key concepts in machine learning, including perceptron learning, SVMs, polynomial kernels, and the KKT conditions. Each question is answered with a clear rationale to ensure understanding of the underlying principles.

1. In the context of the perceptron learning algorithm, what does the expression \frac{\mathbf{w}^\top \mathbf{x}}{\|\mathbf{w}\|} represent?

  • Answer: (c) The signed distance to the hyperplane
    Reason: The expression \frac{\mathbf{w}^\top \mathbf{x}}{\|\mathbf{w}\|} is the signed distance from the point \mathbf{x} to the hyperplane defined by the weight vector \mathbf{w}. Its sign indicates on which side of the hyperplane the point lies, which directly drives the classification decisions in the perceptron algorithm.
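
A minimal NumPy sketch of this computation, with made-up example vectors:

```python
import numpy as np

# Signed distance from a point x to the hyperplane w^T x = 0:
# d = (w^T x) / ||w||.  Its sign tells us which side of the
# hyperplane x lies on, which is how the perceptron classifies.
w = np.array([3.0, 4.0])   # example weight vector, ||w|| = 5
x = np.array([1.0, 2.0])   # example point

signed_distance = (w @ x) / np.linalg.norm(w)
print(signed_distance)     # (3*1 + 4*2) / 5 = 2.2
```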

2. Why do we normalize \|\mathbf{w}\| (the magnitude of the weight vector) in the SVM objective function?

  • Answer: (c) To prevent overfitting
    Reason: Controlling \|\mathbf{w}\| controls the model's complexity, because the margin of a linear SVM equals 2/\|\mathbf{w}\|. Keeping \|\mathbf{w}\| small therefore keeps the margin as large as possible, and a large margin makes the decision boundary less sensitive to the exact position of individual training points, reducing the likelihood of overfitting.
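
A small sklearn sketch (toy, made-up data) of the link between \|\mathbf{w}\| and the margin: a near-hard-margin linear SVM recovers a margin of 2/\|\mathbf{w}\|:

```python
import numpy as np
from sklearn.svm import SVC

# Two separable clusters, 3 units apart along the first axis.  A very
# large C approximates a hard-margin SVM, so the learned margin
# 2 / ||w|| should match the 3-unit gap between the classes.
X = np.array([[0.0, 0.0], [0.0, 1.0], [3.0, 0.0], [3.0, 1.0]])
y = np.array([0, 0, 1, 1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)
w = clf.coef_[0]
print("||w||  =", np.linalg.norm(w))      # ~2/3
print("margin =", 2 / np.linalg.norm(w))  # ~3.0
```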

3. Which of the following is NOT one of the KKT conditions for optimization problems with inequality constraints?

  • Answer: (d) Convexity: The objective function f(\mathbf{x}) must be convex
    Reason: While convexity is a desirable property in optimization problems to ensure global minima, it is not a KKT condition. The KKT conditions include stationarity, primal feasibility, dual feasibility, and complementary slackness, but convexity of the objective function is not explicitly required as part of these conditions.
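
For reference, the standard statement: for minimizing f(\mathbf{x}) subject to g_i(\mathbf{x}) \le 0, i = 1, \dots, m, the KKT conditions with multipliers \mu_i are:

```latex
\begin{aligned}
&\text{Stationarity:}            && \nabla f(\mathbf{x}^\ast) + \sum_{i=1}^{m} \mu_i \,\nabla g_i(\mathbf{x}^\ast) = \mathbf{0} \\
&\text{Primal feasibility:}      && g_i(\mathbf{x}^\ast) \le 0 \quad \forall i \\
&\text{Dual feasibility:}        && \mu_i \ge 0 \quad \forall i \\
&\text{Complementary slackness:} && \mu_i \, g_i(\mathbf{x}^\ast) = 0 \quad \forall i
\end{aligned}
```

Convexity appears nowhere in this list.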

4. Consider the 1-dimensional dataset

\begin{array}{c|c} x & y \\ \hline 0 & 1 \\ 2 & 1 \\ 3 & 0 \\ \end{array}

State True or False: The dataset becomes linearly separable after using basis expansion with the following basis function \phi(x) = \begin{bmatrix} 1 \\ x^2 \end{bmatrix}.

  • Answer: (a) True
    Reason: The basis function \phi(x) = \begin{bmatrix} 1 \\ x^2 \end{bmatrix} maps the three points to (1, 0), (1, 4), and (1, 9) in the new feature space. Any threshold on the x^2 coordinate between 4 and 9 then separates the two classes with a linear boundary, so the expanded dataset is linearly separable.
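
A tiny NumPy check of this argument:

```python
import numpy as np

# Dataset from the question: x in {0, 2, 3}, labels y in {1, 1, 0}.
x = np.array([0.0, 2.0, 3.0])
y = np.array([1, 1, 0])

# Basis expansion phi(x) = [1, x^2]^T maps the inputs to
# (1, 0), (1, 4), (1, 9) in the expanded feature space.
phi = np.stack([np.ones_like(x), x ** 2], axis=1)
print(phi)

# Any threshold t with 4 < t < 9 on the x^2 coordinate separates
# the classes, e.g. predict y = 1 iff x^2 < 6.
pred = (phi[:, 1] < 6).astype(int)
print(pred, bool((pred == y).all()))  # [1 1 0] True
```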

5. Consider a polynomial kernel of degree d operating on p-dimensional input vectors. What is the dimension of the feature space induced by this kernel?

  • Answer: (d) \binom{p+d}{d}
    Reason: The dimension of the feature space induced by a polynomial kernel of degree d on p-dimensional inputs is the binomial coefficient \binom{p+d}{d}. This counts the number of monomials of total degree up to d in p variables, which is exactly the number of coordinates in the induced feature space.
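
A quick sanity check, counting the monomials directly and cross-checking against sklearn's PolynomialFeatures (the values of p and d below are arbitrary examples):

```python
import numpy as np
from math import comb
from sklearn.preprocessing import PolynomialFeatures

# Dimension of the feature space of a degree-d polynomial kernel on
# p-dimensional inputs: C(p + d, d), the number of monomials of
# total degree at most d in p variables.
p, d = 3, 2
print(comb(p + d, d))  # 10

# Cross-check: PolynomialFeatures enumerates exactly these monomials
# (including the constant term).
n_features = PolynomialFeatures(degree=d).fit_transform(np.zeros((1, p))).shape[1]
print(n_features)      # 10
```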

6. State True or False: For any given linearly separable data, for any initialization, both SVM and Perceptron will converge to the same solution.

  • Answer: (b) False
    Reason: While both SVM and Perceptron algorithms can converge for linearly separable data, they do not necessarily converge to the same solution. SVM finds the hyperplane that maximizes the margin between classes, whereas Perceptron may converge to any separating hyperplane without necessarily maximizing the margin.
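
A quick sklearn experiment illustrating this, on synthetic separable blobs (arbitrary seed): both models classify the training data perfectly, yet learn different hyperplanes:

```python
import numpy as np
from sklearn.linear_model import Perceptron
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two well-separated blobs -> linearly separable with near certainty.
X = np.vstack([rng.normal(-3.0, 1.0, (50, 2)), rng.normal(3.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

per = Perceptron(random_state=0).fit(X, y)
svm = SVC(kernel="linear", C=1e6).fit(X, y)  # ~hard-margin SVM

# Both separate the data, but the normalized weight vectors differ:
# the perceptron stops at the first separating hyperplane it finds,
# the SVM at the max-margin one.
print(per.score(X, y), svm.score(X, y))  # typically 1.0 1.0
print(per.coef_[0] / np.linalg.norm(per.coef_[0]))
print(svm.coef_[0] / np.linalg.norm(svm.coef_[0]))
```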

7. Train a Linear Perceptron classifier on the modified iris dataset. We recommend using sklearn. Use only the first two features for your model, and report the best classification accuracy for the l1 and l2 penalty terms.

  • Answer: (c) 0.71, 0.65
    Reason: After training the Perceptron classifier on the provided dataset with the specified settings, the best classification accuracies for the l1 and l2 penalty terms are 0.71 and 0.65, respectively.
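
A sketch of how such a model could be trained with sklearn. The course's modified iris dataset is not reproduced here, so the stock iris dataset and an arbitrary train/test split stand in; the exact accuracies depend on the actual dataset and split:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron
from sklearn.model_selection import train_test_split

# NOTE: stand-in data -- the assignment ships a *modified* iris
# dataset, so the printed accuracies will differ from 0.71 / 0.65.
X, y = load_iris(return_X_y=True)
X = X[:, :2]  # first two features only
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

for penalty in ("l1", "l2"):
    clf = Perceptron(penalty=penalty, random_state=42).fit(X_tr, y_tr)
    print(penalty, round(clf.score(X_te, y_te), 2))
```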

8. Train an SVM classifier on the modified iris dataset. We recommend using sklearn. Use only the first two features. We encourage you to explore different hyperparameters of the model; specifically, try different kernels and their associated hyperparameters. As part of the assignment, train models using the following set of hyperparameters: RBF kernel, gamma = 0.5, one-vs-rest classifier, no feature normalization. Try C = 0.01, 1, 10. For the above set of hyperparameters, report the best classification accuracy.

  • Answer: (b) 0.98
    Reason: Training the SVM classifier with the listed hyperparameters and C ∈ {0.01, 1, 10}, the best classification accuracy achieved is 0.98, indicating strong performance for this model configuration.
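
A sketch of the corresponding sklearn experiment, again with the stock iris dataset and an arbitrary split standing in for the assignment's modified dataset:

```python
from sklearn.datasets import load_iris
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Stand-in data: accuracies will differ from the 0.98 reported above.
X, y = load_iris(return_X_y=True)
X = X[:, :2]  # first two features only
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

best = 0.0
for C in (0.01, 1, 10):
    clf = SVC(kernel="rbf", gamma=0.5, C=C).fit(X_tr, y_tr)
    acc = clf.score(X_te, y_te)
    print(f"C = {C}: accuracy = {acc:.2f}")
    best = max(best, acc)
print("best:", round(best, 2))
```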

These solutions are intended to reinforce your understanding of machine learning concepts discussed in Week 4 of the course. Reviewing the explanations will help you grasp the nuances of perceptron learning, SVM, and related topics.
