


30 Questions to Test a Data Scientist on Linear Regression

Introduction

Linear Regression is still the most prominently used statistical technique in the data science industry and in academia to explain relationships between features.

A total of 1,355 people registered for this skill test. It was specifically designed to test your knowledge of linear regression techniques. If you are one of those who missed out on this skill test, here are the questions and solutions. You missed out on the real-time test, but you can read this article to find out how many questions you could have answered correctly.

Here is the leaderboard for the participants who took the test.

Overall Distribution

Below is the distribution of the scores of the participants:

You can access the scores here. More than 800 people participated in the skill test, and the highest score obtained was 28.

Helpful Resources

Here are some resources to gain in-depth knowledge of the subject.

  • 5 Questions which can teach you Multiple Regression (with R and Python)

  • Going Deeper into Regression Analysis with Assumptions, Plots & Solutions

  • 7 Types of Regression Techniques you should know!

Are you a beginner in Machine Learning? Do you want to master the concepts of Linear Regression and Machine Learning? Here are beginner-friendly courses to help you on your journey –

  • Certified AI & ML Blackbelt+ Program
  • Applied Machine Learning Course

Skill Test Questions and Answers

1) True-False: Linear Regression is a supervised machine learning algorithm.

A) TRUE
B) FALSE

2) True-False: Linear Regression is mainly used for Regression.

A) TRUE
B) FALSE

3) True-False: It is possible to design a Linear Regression algorithm using a neural network.

A) TRUE
B) FALSE

4) Which of the following methods do we use to find the best fit line for data in Linear Regression?

A) Least Square Error
B) Maximum Likelihood
C) Logarithmic Loss
D) Both A and B

5) Which of the following evaluation metrics can be used to evaluate a model while modeling a continuous output variable?

A) AUC-ROC
B) Accuracy
C) Logloss
D) Mean-Squared-Error

6) True-False: Lasso Regularization can be used for variable selection in Linear Regression.

A) TRUE
B) FALSE

Solution: (A)

True. In the case of Lasso regression, we apply an absolute-value (L1) penalty, which drives some of the coefficients to exactly zero.
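
For intuition, here is a minimal sketch (using a synthetic dataset, not the skill-test data) showing how the L1 penalty in scikit-learn's Lasso sets the coefficients of uninformative features to exactly zero; the alpha value is an arbitrary illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
# Only the first two features actually influence y.
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=200)

lasso = Lasso(alpha=0.5).fit(X, y)
print(lasso.coef_)  # coefficients of the three uninformative features come out as exactly 0.0
```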

7) Which of the following is true about residuals?

A) Lower is better
B) Higher is better
C) A or B, depending on the situation
D) None of these

Solution: (A)

Residuals refer to the error values of the model. Therefore lower residuals are desired.

8) Suppose that we have N independent variables (X1, X2, ..., Xn) and the dependent variable is Y. Now imagine that you are applying linear regression by fitting the best fit line using least square error on this data.

You found that the correlation coefficient for one of these variables (say X1) with Y is -0.95.

Which of the following is true for X1?

A) The relation between X1 and Y is weak
B) The relation between X1 and Y is strong
C) The relation between X1 and Y is neutral
D) Correlation can't judge the relationship

Solution: (B)

The absolute value of the correlation coefficient denotes the strength of the relationship. Since the absolute correlation is very high, the relationship between X1 and Y is strong.
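
As a quick illustration (synthetic data, chosen only to mimic a strong negative relationship), the sign of the Pearson correlation gives the direction and its absolute value gives the strength:

```python
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.normal(size=100)
y = -2 * x1 + rng.normal(scale=0.3, size=100)  # strong negative linear relationship

r = np.corrcoef(x1, y)[0, 1]
print(r)  # close to -1; a large |r| indicates a strong relationship
```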

9) If you are given two variables V1 and V2 that follow the two characteristics below, which of the following options is correct for the Pearson correlation between V1 and V2?

1. If V1 increases then V2 also increases

2. If V1 decreases then V2's behavior is unknown

A) Pearson correlation will be close to 1
B) Pearson correlation will be close to -1
C) Pearson correlation will be close to 0
D) None of these

Solution: (D)

We cannot comment on the correlation coefficient using only statement 1; we need to consider both statements. For example, take V1 as X and V2 as |X|: the correlation coefficient would not be close to 1 in such a case.

10) Suppose the Pearson correlation between V1 and V2 is zero. In such a case, is it right to conclude that V1 and V2 do not have any relation between them?

A) TRUE
B) FALSE

Solution: (B)

The Pearson correlation coefficient between two variables might be zero even when they have a relationship between them. If the correlation coefficient is zero, it just means that they don't move together linearly. We can take examples like y = |x| or y = x^2.
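
A quick numerical check of the y = x^2 example (illustrative values only): the relationship is obvious, yet the Pearson correlation is essentially zero on a range of x symmetric around 0, because Pearson correlation only measures linear association.

```python
import numpy as np

x = np.linspace(-1, 1, 101)
y = x ** 2
print(np.corrcoef(x, y)[0, 1])  # ~0 despite the exact functional relationship y = x^2
```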

11) Which of the following offsets do we use in linear regression's least square line fit? Suppose the horizontal axis is the independent variable and the vertical axis is the dependent variable.

A) Vertical offset
B) Perpendicular offset
C) Both, depending on the situation
D) None of the above

Solution: (A)

We always consider residuals as vertical offsets: we calculate the direct differences between the actual Y values and the predicted values. Perpendicular offsets are useful in the case of PCA.

12) True-False: Overfitting is more likely when you have a huge amount of data to train on.

A) TRUE
B) FALSE

Solution: (B)

With a small training dataset, it's easier to find a hypothesis that fits the training data exactly, i.e., overfitting.

13) We can also compute the coefficients of linear regression with the help of an analytical method called the "Normal Equation". Which of the following is/are true about the Normal Equation?

  1. We don't have to choose the learning rate
  2. It becomes slow when the number of features is very large
  3. There is no need to iterate

A) 1 and 2
B) 1 and 3
C) 2 and 3
D) 1, 2 and 3

Solution: (D)

Instead of gradient descent, the Normal Equation can also be used to find the coefficients. Refer to this article to read more about the normal equation.
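
Below is a minimal sketch of the Normal Equation on synthetic data (the coefficients and noise level are arbitrary): theta = (X^T X)^(-1) X^T y gives the least-squares coefficients in one step, with no learning rate and no iterations. The catch is that solving the resulting p x p system costs roughly O(p^3), which is why it becomes slow when the number of features is very large.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

Xb = np.c_[np.ones(len(X)), X]                # prepend an intercept column
theta = np.linalg.solve(Xb.T @ Xb, Xb.T @ y)  # solve (X^T X) theta = X^T y
print(theta)                                  # approximately [0, 1.5, -2.0, 0.5]
```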

14) Which of the following statements is true about the sum of residuals of A and B?

The graphs below show two fitted regression lines (A & B) on randomly generated data. Now, I want to find the sum of residuals in both cases A and B.

Note:

  1. The scale is the same in both graphs for both axes.
  2. The X-axis is the independent variable and the Y-axis is the dependent variable.

A) A has a higher sum of residuals than B
B) A has a lower sum of residuals than B
C) Both have the same sum of residuals
D) None of these

Solution: (C)

The sum of residuals will always be zero for a least-squares fit with an intercept; therefore both have the same sum of residuals.
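
This is easy to verify numerically (the data below is randomly generated for illustration): for any least-squares line that includes an intercept, the residuals sum to zero up to floating-point error.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 50)
y = 2 * x + 1 + rng.normal(size=50)

slope, intercept = np.polyfit(x, y, 1)   # ordinary least-squares line fit
residuals = y - (slope * x + intercept)
print(residuals.sum())                   # ~0 (e.g. on the order of 1e-13)
```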

Question Context 15-17:

Suppose you have fitted a complex regression model on a dataset. Now, you are using Ridge regression with penalty x.

15) Choose the option which describes bias in the best way.
A) In case of very large x, bias is low
B) In case of very large x, bias is high
C) We can't say about the bias
D) None of these

Solution: (B)

If the penalty is very large, it means the model is less complex; therefore, the bias would be high.

16) What will happen when you use a very large penalty?

A) Some of the coefficients will become exactly zero
B) Some of the coefficients will approach zero but not become exactly zero
C) Both A and B depending on the situation
D) None of these

Solution: (B)

In Lasso, some of the coefficient values become zero, but in the case of Ridge, the coefficients only get close to zero, not exactly zero.
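
A side-by-side sketch on synthetic data (the alpha values are arbitrary, chosen only to make the effect visible): with a large penalty, Ridge shrinks coefficients towards zero but keeps them non-zero, while Lasso sets some of them to exactly zero.

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 5))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=200)

print(Ridge(alpha=100.0).fit(X, y).coef_)  # all coefficients shrunk, but none exactly zero
print(Lasso(alpha=1.0).fit(X, y).coef_)    # several coefficients are exactly 0.0
```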

17) What will happen when you apply a very large penalty in the case of Lasso?
A) Some of the coefficients will become zero
B) Some of the coefficients will approach zero but not become exactly zero
C) Both A and B depending on the situation
D) None of these

Solution: (A)

As already discussed, Lasso applies an absolute-value (L1) penalty, so some of the coefficients will become zero.

18) Which of the following statements is true about outliers in Linear Regression?

A) Linear regression is sensitive to outliers
B) Linear regression is not sensitive to outliers
C) Can't say
D) None of these

Solution: (A)

The slope of the regression line will change due to outliers in most cases, so Linear Regression is sensitive to outliers.

19) Suppose you plotted a scatter plot between the residuals and the predicted values in linear regression and you found that there is a relationship between them. Which of the following conclusions do you draw about this situation?

A) Since there is a relationship, our model is not good
B) Since there is a relationship, our model is good
C) Can't say
D) None of these

Solution: (A)

There should not be any relationship between the predicted values and the residuals. If such a relationship exists, it means that the model has not perfectly captured the information in the data.
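
A self-contained sketch of this diagnostic (on synthetic, deliberately quadratic data): fitting a straight line to curved data leaves a clear U-shaped pattern in the residuals-vs-predicted plot, which is exactly the warning sign described above.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
x = rng.uniform(-3, 3, 200)
y = x ** 2 + rng.normal(scale=0.3, size=200)       # truly quadratic relationship

model = LinearRegression().fit(x.reshape(-1, 1), y)
y_pred = model.predict(x.reshape(-1, 1))

plt.scatter(y_pred, y - y_pred, s=10)              # a visible pattern => model is inadequate
plt.axhline(0, color="red", linewidth=1)
plt.xlabel("Predicted values")
plt.ylabel("Residuals")
plt.show()
```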

Question Context 20-22:

Suppose that you have a dataset D1 and you design a linear regression model with a degree 3 polynomial, and you find that the training and testing error is "0", or in other words, it perfectly fits the data.

20) What will happen when you fit a degree 4 polynomial in linear regression?
A) There are high chances that the degree 4 polynomial will overfit the data
B) There are high chances that the degree 4 polynomial will underfit the data
C) Can't say
D) None of these

Solution: (A)

Since a degree 4 polynomial is more complex than the degree 3 model, it will again perfectly fit the training data (overfitting). In such a case, the training error will be zero but the test error may not be zero.

21) What will happen when you fit a degree 2 polynomial in linear regression?
A) There are high chances that the degree 2 polynomial will overfit the data
B) There are high chances that the degree 2 polynomial will underfit the data
C) Can't say
D) None of these

Solution: (B)

If a degree 3 polynomial fits the data perfectly, it's highly likely that a simpler model (a degree 2 polynomial) will underfit the data.

22) In terms of bias and variance, which of the following is true when you fit a degree 2 polynomial?


A) Bias will be high, variance will be high
B) Bias will be low, variance will be high
C) Bias will be high, variance will be low
D) Bias will be low, variance will be low

Solution: (C)

Since a degree 2 polynomial is less complex than a degree 3 polynomial, the bias will be high and the variance will be low.
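
The bias-variance trade-off in questions 20-22 can be seen on synthetic data (a cubic ground truth is assumed here purely for illustration): degree 2 underfits (high training and test error), degree 3 matches the data-generating process, and degree 4 has extra capacity to chase noise, so its test error typically does not improve even when its training error does.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(6)
x = rng.uniform(-2, 2, 60).reshape(-1, 1)
y = x.ravel() ** 3 - x.ravel() + rng.normal(scale=0.2, size=60)
x_tr, x_te, y_tr, y_te = train_test_split(x, y, random_state=0)

for degree in (2, 3, 4):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression()).fit(x_tr, y_tr)
    print(degree,
          mean_squared_error(y_tr, model.predict(x_tr)),   # training error
          mean_squared_error(y_te, model.predict(x_te)))   # test error
```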

Question Context 23:

Which of the following is true about the graphs below (A, B, C, left to right) of the cost function versus the number of iterations?

23) Suppose l1, l2 and l3 are the three learning rates for A, B, C respectively. Which of the following is true about l1, l2 and l3?

A) l2 < l1 < l3
B) l1 > l2 > l3
C) l1 = l2 = l3
D) None of these

Solution: (A)

In the case of a high learning rate, the step size will be large: the objective function will decrease rapidly at first, but it will fail to find the global minimum and will start increasing after a few iterations.

In the case of a low learning rate, the step size will be small, so the objective function will decrease slowly.
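
A minimal one-parameter gradient-descent sketch (synthetic data; the two learning rates are arbitrary) makes this concrete: with too large a rate the cost oscillates and grows, while with too small a rate it decreases, but slowly.

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(size=100)
y = 4 * x + rng.normal(scale=0.1, size=100)

def cost_trace(lr, steps=20):
    w = 0.0
    costs = []
    for _ in range(steps):
        grad = -2 * np.mean(x * (y - w * x))   # gradient of the mean squared error w.r.t. w
        w -= lr * grad
        costs.append(np.mean((y - w * x) ** 2))
    return costs

print(cost_trace(1.5)[:5])    # too large: the cost bounces around and blows up
print(cost_trace(0.01)[:5])   # too small: the cost decreases, but slowly
```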

Question Context 24-25:

We have been given a dataset with n records in which the input attribute is x and the output attribute is y. Suppose we use a linear regression method to model this data. To test our linear regressor, we randomly split the data into a training set and a test set.

24) Now we increase the training set size gradually. As the training set size increases, what do you expect will happen to the mean training error?

A) Increase
B) Decrease
C) Remain constant
D) Can't Say

Solution: (D)

The training error may increase or decrease depending on the values used to fit the model. If the values used for training gradually include more outliers, then the error might just increase.

25) What do you expect will happen to the bias and variance as you increase the size of the training data?

A) Bias increases and Variance increases
B) Bias decreases and Variance increases
C) Bias decreases and Variance decreases
D) Bias increases and Variance decreases
E) Can't say

Solution: (D)

As we increase the size of the training data, the bias would increase while the variance would decrease.

Question Context 26:

Consider the following data, where one input (X) and one output (Y) are given.

26) What would be the root mean squared training error for this data if you run a Linear Regression model of the form (Y = A0 + A1X)?

A) Less than 0
B) Greater than zero
C) Equal to 0
D) None of these

Solution: (C)

We can perfectly fit a line to this data, so the mean error will be zero.
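
Since the original data table is not reproduced here, the sketch below simply assumes points that lie exactly on a line, which is the situation the solution describes: the fitted line reproduces every point, and the root mean squared error is zero.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2 * x + 1                                  # points lie exactly on a line (illustrative values)
slope, intercept = np.polyfit(x, y, 1)
rmse = np.sqrt(np.mean((y - (slope * x + intercept)) ** 2))
print(rmse)                                    # 0.0 up to floating-point error
```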

Question Context 27-28:

Suppose you have been given the following scenarios for training and validation error for Linear Regression.

Scenario  Learning Rate  Number of Iterations  Training Error  Validation Error
1         0.1            1000                  100             110
2         0.2            600                   90              105
3         0.3            400                   110             110
4         0.4            300                   120             130
5         0.4            250                   130             150

27) Which of the following scenarios would give you the right hyperparameters?

A) 1
B) 2
C) 3
D) 4

Solution: (B)

Option B would be the better option because it leads to lower training as well as validation error.

28) Suppose you got the tuned hyperparameters from the previous question. Now, imagine you want to add a variable to the variable space such that this added feature is important. Which of the following would you observe in such a case?

A) Training Error will decrease and Validation Error will increase
B) Training Error will increase and Validation Error will increase
C) Training Error will increase and Validation Error will decrease
D) Training Error will decrease and Validation Error will decrease
E) None of the above

Solution: (D)

If the added feature is important, both the training and validation error would decrease.

Question Context 29-30:

Suppose you are in a situation where you find that your linear regression model is underfitting the data.

29) In such a situation, which of the following options would you consider?

  1. Add more variables
  2. Start introducing polynomial degree variables
  3. Remove some variables

A) 1 and 2
B) 2 and 3
C) 1 and 3
D) 1, 2 and 3

Solution: (A)

In the case of underfitting, you need to introduce more variables into the variable space or add some polynomial degree variables to make the model more complex, so that it can fit the data better.

30) Now the situation is the same as in the previous question (underfitting). Which of the following regularization algorithms would you prefer?

A) L1
B) L2
C) Any
D) None of these

Solution: (D)

I won't use any regularization method because regularization is used in the case of overfitting.

End Notes

I tried my best to make the solutions as comprehensive as possible, but if you have any questions or doubts, please drop them in the comments below. I would love to hear your feedback about the skill test. For more such skill tests, check out our current hackathons.

Source: https://www.analyticsvidhya.com/blog/2017/07/30-questions-to-test-a-data-scientist-on-linear-regression/
