## Introduction

We learned in the previous chapter that \(Ax = b\) need not possess a solution when the number of rows of \(A\) exceeds its rank, i.e., \(r < m\). As this situation arises quite often in practice, typically in the guise of 'more equations than unknowns,' we establish a rationale for the absurdity \(Ax = b\).

## The Normal Equations

The goal is to choose \(x\) such that \(Ax\) is as close as possible to \(b\). Measuring closeness in terms of the sum of the squares of the components, we arrive at the 'least squares' problem of minimizing


\[\|Ax-b\|^2 = (Ax-b)^{T}(Ax-b)\]

over all \(x \in \mathbb{R}^n\). The path to the solution is illuminated by the Fundamental Theorem. More precisely, we write

\(b = b_{R}+b_{N}\) where \(b_{R} \in \mathbb{R}(A)\) and \(b_{N} \in \mathbb{N}(A^{T})\). On noting that (i) \((Ax-b_{R}) \in \mathbb{R}(A)\) for every \(x \in \mathbb{R}^n\) and (ii) \(\mathbb{R}(A) \perp \mathbb{N}(A^T)\) we arrive at the Pythagorean Theorem.

Definition: Pythagorean Theorem

\[\|Ax-b\|^2 = \|(Ax-b_{R})-b_{N}\|^2\]

\[= \|Ax-b_{R}\|^2+\|b_{N}\|^2\]

It is now clear from the Pythagorean Theorem that the best \(x\) is the one that satisfies

\[Ax = b_{R}\]

As \(b_{R} \in \mathbb{R}(A)\) this equation indeed possesses a solution. We have yet however to specify how one computes \(b_{R}\) given \(b\). Although an explicit expression for \(b_{R}\), the **orthogonal projection** of \(b\) onto \(\mathbb{R}(A)\), in terms of \(A\) and \(b\) is within our grasp we shall, strictly speaking, not require it. To see this, let us note that if \(x\) satisfies the above equation then

\[Ax-b = Ax-b_{R}-b_{N}\]

\[= -b_{N}\]

As \(b_{N}\) is no more easily computed than \(b_{R}\) you may claim that we are just going in circles. The 'practical' information in the above equation however is that \((Ax-b) \in \mathbb{N}(A^{T})\), i.e., \(A^{T}(Ax-b) = 0\), i.e.,

\[A^{T}Ax = A^{T}b\]

As \(A^{T}b \in \mathbb{R}(A^{T}) = \mathbb{R}(A^{T}A)\) regardless of \(b\), this system, often referred to as the **normal equations**, indeed has a solution. This solution is unique so long as the columns of \(A^{T}A\) are linearly independent, i.e., so long as \(\mathbb{N}(A^{T}A) = \{0\}\). Recalling Chapter 2, Exercise 2, we note that this is equivalent to \(\mathbb{N}(A) = \{0\}\).

The set of \(x \in \mathbb{R}^n\) for which the misfit \(\|Ax-b\|^2\) is smallest is composed of those \(x\) for which \(A^{T}Ax = A^{T}b\). There is always at least one such \(x\). There is exactly one such \(x\) if \(\mathbb{N}(A) = \{0\}\).

As a concrete example, suppose with reference to Figure 1 that \(A = \begin{pmatrix} 1&1 \\ 0&1 \\ 0&0 \end{pmatrix}\) and \(b = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}\).

As \(b \notin \mathbb{R}(A)\) there is no \(x\) such that \(Ax = b\). Indeed, \(\|Ax-b\|^2 = (x_{1}+x_{2}-1)^2+(x_{2}-1)^2+1 \ge 1\), with the minimum uniquely attained at \(x = \begin{pmatrix} 0 \\ 1 \end{pmatrix}\), in agreement with the unique solution of \(A^{T}Ax = A^{T}b\), for \(A^{T}A = \begin{pmatrix} 1&1 \\ 1&2 \end{pmatrix}\) and \(A^{T}b = \begin{pmatrix} 1 \\ 2 \end{pmatrix}\). We now recognize, a posteriori, that \(b_{R} = Ax = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}\) is the orthogonal projection of \(b\) onto the column space of \(A\).
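The arithmetic above is easy to check numerically. A minimal sketch in Python with NumPy (the text's own computations use Matlab), forming and solving the normal equations for this \(A\) and \(b\):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0],
              [0.0, 0.0]])
b = np.array([1.0, 1.0, 1.0])

# Form and solve the normal equations A^T A x = A^T b.
AtA = A.T @ A            # [[1, 1], [1, 2]]
Atb = A.T @ b            # [1, 2]
x = np.linalg.solve(AtA, Atb)

# b_R = Ax is the orthogonal projection of b onto the column space of A,
# and the misfit ||Ax - b||^2 attains its minimum value of 1.
b_R = A @ x
misfit = np.sum((A @ x - b) ** 2)
```

Solving via `np.linalg.solve(AtA, Atb)` is legitimate here precisely because \(\mathbb{N}(A) = \{0\}\), so \(A^{T}A\) is invertible.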

## Applying Least Squares to the Biaxial Test Problem

We shall formulate the identification of the 20 fiber stiffnesses in the previous figure as a least squares problem. We envision loading the 9 nodes with forces \(f\) and measuring the associated 18 displacements, \(x\). From knowledge of \(x\) and \(f\) we wish to infer the components of \(K = \operatorname{diag}(k)\), where \(k\) is the vector of unknown fiber stiffnesses. The first step is to recognize that

\[A^{T}KAx = f\]

may be written as

\[Bk = f \quad \textrm{where} \quad B = A^{T}\operatorname{diag}(Ax)\]

Though conceptually simple, this is not of great use in practice, for \(B\) is 18-by-20 and hence the above equation possesses many solutions. The way out is to compute \(k\) as the result of more than one experiment. We shall see that, for our small sample, 2 experiments will suffice. To be precise, we suppose that \(x^1\) is the displacement produced by the loading \(f^1\) while \(x^2\) is the displacement produced by the loading \(f^2\). We then piggyback the associated pieces in

\[B = \begin{pmatrix} A^{T}\operatorname{diag}(Ax^1) \\ A^{T}\operatorname{diag}(Ax^2) \end{pmatrix}\]

and

\[f = \begin{pmatrix} f^1 \\ f^2 \end{pmatrix}\]

This \(B\) is 36-by-20 and so the system \(Bk = f\) is overdetermined and hence ripe for least squares.

We proceed then to assemble \(B\) and \(f\). We suppose \(f^{1}\) and \(f^{2}\) to correspond to horizontal and vertical stretching,

\[f^{1} = \begin{pmatrix} -1&0&0&0&1&0&-1&0&0&0&1&0&-1&0&0&0&1&0 \end{pmatrix}^{T}\]

\[f^{2} = \begin{pmatrix} 0&1&0&1&0&1&0&1&0&1&0&1&0&-1&0&-1&0&-1 \end{pmatrix}^{T}\]

respectively. For the purpose of our example we suppose that each \(k_{j} = 1\) except \(k_{8} = 5\). We assemble \(A^{T}KA\) as in Chapter 2 and solve

\[A^{T}KAx^{j} = f^{j}\]

with the help of the pseudoinverse. In order to impart some 'reality' to this problem we taint each \(x^{j}\) with 10 percent noise prior to constructing \(B\). Rather than explicitly forming and solving the normal equations

\[B^{T}Bk = B^{T}f\]

we note that Matlab solves this system when presented with `k=B\f` when \(B\) is rectangular. We have plotted the results of this procedure in the accompanying figure; the stiff fiber is readily identified.
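The backslash solve can be sketched in Python with NumPy, where `np.linalg.lstsq` plays the role of Matlab's `k=B\f`; the 6-by-3 system below is invented for illustration and stands in for the 36-by-20 \(B\) of the text:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 3))          # stand-in for the 36-by-20 B
k_true = np.array([1.0, 1.0, 5.0])       # one "stiff fiber" among unit ones
f = B @ k_true + 0.01 * rng.standard_normal(6)   # data tainted with noise

# Solving the normal equations explicitly ...
k_normal = np.linalg.solve(B.T @ B, B.T @ f)

# ... agrees with the least squares solve (the backslash analogue).
k_lstsq, *_ = np.linalg.lstsq(B, f, rcond=None)
```

In practice the `lstsq` route is preferred, as it avoids forming \(B^{T}B\), whose condition number is the square of that of \(B\).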

## Projections

From an algebraic point of view the normal equations are an elegant reformulation of the least squares problem. Though easy to remember, they unfortunately obscure the geometric content, suggested by the word 'projection,' of \(Ax = b_{R}\). As projections arise frequently in many applications we pause here to develop them more carefully. With respect to the normal equations we note that if \(\mathbb{N}(A) = \{0\}\) then

\[x = (A^{T}A)^{-1}A^{T}b\]

and so the orthogonal projection of \(b\) onto \(\mathbb{R}(A)\) is:

\[b_{R} = Ax\]

\[= A(A^{T}A)^{-1}A^{T}b\]

Defining

\[P = A(A^{T}A)^{-1}A^{T}\]

the projection of \(b\) takes the form \(b_{R} = Pb\). Commensurate with our notion of what a 'projection' should be, we expect that \(P\) map vectors not in \(\mathbb{R}(A)\) onto \(\mathbb{R}(A)\) while leaving vectors already in \(\mathbb{R}(A)\) unscathed. More succinctly, we expect that \(Pb_{R} = b_{R}\), i.e., \(PPb = Pb\). As the latter should hold for all \(b \in \mathbb{R}^{m}\) we expect that

\[P^2 = P\]

We find that indeed

\[P^2 = A(A^{T}A)^{-1}A^{T}A(A^{T}A)^{-1}A^{T}\]

\[= A(A^{T}A)^{-1}A^{T}\]

\[= P\]

We also note that this \(P\) is symmetric. We dignify these properties through

Definition: Orthogonal Projection

A matrix \(P\) that satisfies \(P^2 = P\) is called a **projection**. A symmetric projection is called an **orthogonal projection**.

We have taken some pains to motivate the use of the word 'projection.' You may be wondering however what symmetry has to do with orthogonality. We explain this in terms of the tautology

\[b = Pb + (I-P)b\]

Now, if \(P\) is a projection then so too is \(I-P\). Moreover, if \(P\) is symmetric then the dot product of \(Pb\) and \((I-P)b\) vanishes:

\[\begin{align*} (Pb)^T(I-P)b &= b^{T}P^{T}(I-P)b \\ &= b^{T}(P-P^{2})b \\ &= b^{T}\,0\,b \\ &= 0 \end{align*}\]

i.e., \(Pb\) is orthogonal to \((I-P)b\). As examples of nonorthogonal projections we offer

\[P = \begin{pmatrix} 1&0&0 \\ \frac{-1}{2}&0&0 \\ \frac{-1}{4}&\frac{-1}{2}&1 \end{pmatrix}\]

and \(I-P\). Finally, let us note that the central formula, \(P = A(A^{T}A)^{-1}A^{T}\), is even a bit more general than advertised. It has been billed as the orthogonal projection onto the column space of \(A\). The need often arises however for the orthogonal projection onto some arbitrary subspace \(M\). The key to using the old \(P\) is simply to realize that **every** subspace is the column space of some matrix. More precisely, if

\[\{x_{1}, \cdots, x_{m}\}\]

is a basis for \(M\), then clearly if these \(x_{j}\) are placed into the columns of a matrix called \(A\) then \(\mathbb{R}(A) = M\). For example, if \(M\) is the line through \(\begin{pmatrix} 1&1 \end{pmatrix}^{T}\) then

\[P = \begin{pmatrix} 1 \\ 1 \end{pmatrix} \frac{1}{2} \begin{pmatrix} 1&1 \end{pmatrix}\]

\[= \frac{1}{2} \begin{pmatrix} 1&1 \\ 1&1 \end{pmatrix}\]

is the orthogonal projection onto \(M\).
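Both defining properties are easy to verify numerically. A small sketch in Python with NumPy, using the line through \(\begin{pmatrix} 1&1 \end{pmatrix}^{T}\) and the nonorthogonal example above:

```python
import numpy as np

# Orthogonal projection onto the line through (1,1)^T: P = A (A^T A)^{-1} A^T.
A = np.array([[1.0], [1.0]])
P = A @ np.linalg.inv(A.T @ A) @ A.T     # equals (1/2) [[1, 1], [1, 1]]

# The nonorthogonal projection offered in the text.
Q = np.array([[ 1.0,   0.0, 0.0],
              [-0.5,   0.0, 0.0],
              [-0.25, -0.5, 1.0]])

# P is idempotent and symmetric; Q is idempotent but not symmetric.
```

Checking `Q @ Q == Q` while `Q != Q.T` confirms that \(Q\) is a projection but not an orthogonal one.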

## Exercises

Exercise \(\PageIndex{1}\)

Gilbert Strang was stretched on a rack to lengths \(l = 6, 7, 8\) feet under applied forces of \(f = 1, 2, 4\) tons. Assuming Hooke's law \(l-L = cf\), find his compliance, \(c\), and original height, \(L\), by least squares.

Exercise \(\PageIndex{2}\)

With regard to the example of §3 note that, due to the random generation of the noise that taints the displacements, one gets a different 'answer' every time the code is invoked.

- Write a loop that invokes the code a statistically significant number of times and submit bar plots of the average fiber stiffness and its standard deviation for each fiber, along with the associated M--file.
- Experiment with various noise levels with the goal of determining the level above which it becomes difficult to discern the stiff fiber. Carefully explain your findings.

Exercise \(\PageIndex{3}\)

Find the matrix that projects \(\mathbb{R}^3\) onto the line spanned by \(\begin{pmatrix} 1&0&1 \end{pmatrix}^{T}\).

Exercise \(\PageIndex{4}\)

Find the matrix that projects \(\mathbb{R}^3\) onto the plane spanned by \(\begin{pmatrix} 1&0&1 \end{pmatrix}^{T}\) and \(\begin{pmatrix} 1&1&-1 \end{pmatrix}^{T}\).

Exercise \(\PageIndex{5}\)

If \(P\) is the projection of \(\mathbb{R}^m\) onto a k--dimensional subspace \(M\), what is the rank of \(P\) and what is \(\mathbb{R}(P)\)?

## 10.4: The Least Squares Regression Line

Once the scatter diagram of the data has been drawn and the model assumptions described in the previous sections at least visually verified (and perhaps the correlation coefficient \(r\) computed to quantitatively verify the linear trend), the next step in the analysis is to find the straight line that best fits the data. We will explain how to measure how well a straight line fits a collection of points by examining how well the line \(y=\frac{1}{2}x-1\) fits the data set

\[\begin{array}{c|ccccc} x & 2 & 2 & 6 & 8 & 10 \\ \hline y & 0 & 1 & 2 & 3 & 3 \end{array}\]

(which will be used as a running example for the next three sections). We will write the equation of this line as \(\hat{y}=\frac{1}{2}x-1\), with an accent on the \(y\) to indicate that the \(y\)-values computed using this equation are not from the data.

The idea for measuring the goodness of fit of a straight line to data is illustrated in Figure \(\PageIndex{1}\), in which the graph of the line \(\hat{y}=\frac{1}{2}x-1\) has been superimposed on the scatter plot of the data.

Figure \(\PageIndex{1}\): Plot of the Five-Point Data and the Line \(\hat{y}=\frac{1}{2}x-1\)

To each point in the data set there is associated an "error," the positive or negative vertical distance from the point to the line: positive if the point is above the line and negative if it is below the line. The error can be computed as the actual \(y\)-value of the point minus the \(y\)-value \(\hat{y}\) obtained by inserting the \(x\)-value of the data point into the formula for the line:

\[\textrm{error at data point } (x,y) = y - \hat{y}\]

The computation of the error for each of the five points in the data set is shown in Table \(\PageIndex{1}\).

Table \(\PageIndex{1}\): The Errors in Fitting Data with a Straight Line

| \(x\) | \(y\) | \(\hat{y}=\frac{1}{2}x-1\) | \(y-\hat{y}\) | \((y-\hat{y})^2\) |
|---|---|---|---|---|
| 2 | 0 | 0 | 0 | 0 |
| 2 | 1 | 0 | 1 | 1 |
| 6 | 2 | 2 | 0 | 0 |
| 8 | 3 | 3 | 0 | 0 |
| 10 | 3 | 4 | −1 | 1 |
| \(\sum\) | - | - | 0 | 2 |

A first thought for a measure of the goodness of fit of the line to the data would be simply to add the errors at every point, but the example shows that this cannot work well in general. The line does not fit the data perfectly (no line can), yet because of cancellation of positive and negative errors the sum of the errors (the fourth column of numbers) is zero. Instead goodness of fit is measured by the sum of the squares of the errors. Squaring eliminates the minus signs, so no cancellation can occur. For the data and line in Figure \(\PageIndex{1}\) the sum of the squared errors (the last column of numbers) is \(2\). This number measures the goodness of fit of the line to the data.

Definition: goodness of fit

The goodness of fit of a line \(\hat{y}=mx+b\) to a set of \(n\) pairs \((x,y)\) of numbers in a sample is the sum of the squared errors

\[\sum (y-\hat{y})^2\]

(\(n\) terms in the sum, one for each data pair).
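As a check, the sum of squared errors for the five-point running example can be computed directly; a minimal Python sketch:

```python
# Five-point data set and the line y-hat = x/2 - 1 from the table above.
x = [2, 2, 6, 8, 10]
y = [0, 1, 2, 3, 3]

y_hat = [xi / 2 - 1 for xi in x]
errors = [yi - yhi for yi, yhi in zip(y, y_hat)]
sse = sum(e ** 2 for e in errors)

# The raw errors cancel to 0, while the squared errors sum to 2.
```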

## History

### Founding

The method of least squares grew out of the fields of astronomy and geodesy, as scientists and mathematicians sought to provide solutions to the challenges of navigating the Earth's oceans during the Age of Exploration. The accurate description of the behavior of celestial bodies was the key to enabling ships to sail in open seas, where sailors could no longer rely on land sightings for navigation.

The method was the culmination of several advances that took place during the course of the eighteenth century: [7]

- The combination of different observations as being the best estimate of the true value; errors decrease with aggregation rather than increase, perhaps first expressed by Roger Cotes in 1722.
- The combination of different observations taken under the *same* conditions, contrary to simply trying one's best to observe and record a single observation accurately. The approach was known as the method of averages. This approach was notably used by Tobias Mayer while studying the librations of the Moon in 1750, and by Pierre-Simon Laplace in his work in explaining the differences in motion of Jupiter and Saturn in 1788.
- The combination of different observations taken under *different* conditions. The method came to be known as the method of least absolute deviation. It was notably performed by Roger Joseph Boscovich in his work on the shape of the Earth in 1757 and by Pierre-Simon Laplace for the same problem in 1799.
- The development of a criterion that can be evaluated to determine when the solution with the minimum error has been achieved. Laplace tried to specify a mathematical form of the probability density for the errors and define a method of estimation that minimizes the error of estimation. For this purpose, Laplace used a symmetric two-sided exponential distribution, now called the Laplace distribution, to model the error distribution, and used the sum of absolute deviations as the error of estimation. He felt these to be the simplest assumptions he could make, and he had hoped to obtain the arithmetic mean as the best estimate. Instead, his estimator was the posterior median.

### The method

The first clear and concise exposition of the method of least squares was published by Legendre in 1805. [8] The technique is described as an algebraic procedure for fitting linear equations to data and Legendre demonstrates the new method by analyzing the same data as Laplace for the shape of the earth. Within ten years after Legendre's publication, the method of least squares had been adopted as a standard tool in astronomy and geodesy in France, Italy, and Prussia, which constitutes an extraordinarily rapid acceptance of a scientific technique. [7]

In 1809 Carl Friedrich Gauss published his method of calculating the orbits of celestial bodies. In that work he claimed to have been in possession of the method of least squares since 1795. This naturally led to a priority dispute with Legendre. However, to Gauss's credit, he went beyond Legendre and succeeded in connecting the method of least squares with the principles of probability and to the normal distribution. He had managed to complete Laplace's program of specifying a mathematical form of the probability density for the observations, depending on a finite number of unknown parameters, and define a method of estimation that minimizes the error of estimation. Gauss showed that the arithmetic mean is indeed the best estimate of the location parameter by changing both the probability density and the method of estimation. He then turned the problem around by asking what form the density should have and what method of estimation should be used to get the arithmetic mean as estimate of the location parameter. In this attempt, he invented the normal distribution.

An early demonstration of the strength of Gauss's method came when it was used to predict the future location of the newly discovered asteroid Ceres. On 1 January 1801, the Italian astronomer Giuseppe Piazzi discovered Ceres and was able to track its path for 40 days before it was lost in the glare of the sun. Based on these data, astronomers desired to determine the location of Ceres after it emerged from behind the sun without solving Kepler's complicated nonlinear equations of planetary motion. The only predictions that successfully allowed Hungarian astronomer Franz Xaver von Zach to relocate Ceres were those performed by the 24-year-old Gauss using least-squares analysis.

In 1810, after reading Gauss's work, Laplace, after proving the central limit theorem, used it to give a large sample justification for the method of least squares and the normal distribution. In 1822, Gauss was able to state that the least-squares approach to regression analysis is optimal in the sense that in a linear model where the errors have a mean of zero, are uncorrelated, and have equal variances, the best linear unbiased estimator of the coefficients is the least-squares estimator. This result is known as the Gauss–Markov theorem.

The idea of least-squares analysis was also independently formulated by the American Robert Adrain in 1808. In the next two centuries workers in the theory of errors and in statistics found many different ways of implementing least squares. [9]

The objective consists of adjusting the parameters of a model function to best fit a data set. A simple data set consists of \(n\) points (data pairs) \((x_i, y_i)\), \(i = 1, \dots, n\), where \(x_i\) is an independent variable and \(y_i\) is a dependent variable whose value is found by observation. The model function has the form \(f(x, \beta)\), where the \(m\) adjustable parameters are held in the vector \(\beta\). The fit of a model to a data point is measured by its residual, defined as the difference between the observed value of the dependent variable and the value predicted by the model:

\[r_i = y_i - f(x_i, \beta)\]

The least-squares method finds the optimal parameter values by minimizing the sum of squared residuals, \(S\): [10]

\[S = \sum_{i=1}^{n} r_i^2\]

A data point may consist of more than one independent variable. For example, when fitting a plane to a set of height measurements, the plane is a function of two independent variables, *x* and *z*, say. In the most general case there may be one or more independent variables and one or more dependent variables at each data point.

If the residual points had some sort of a shape and were not randomly fluctuating, a linear model would not be appropriate. For example, if the residual plot had a parabolic shape, a parabolic model \(Y_i = \alpha + \beta x_i + \gamma x_i^2 + U_i\) would be appropriate for the data. The residuals for a parabolic model can be calculated via \(r_i = y_i - \hat{\alpha} - \hat{\beta} x_i - \hat{\gamma} x_i^2\).

This regression formulation considers only observational errors in the dependent variable (but the alternative total least squares regression can account for errors in both variables). There are two rather different contexts with different implications:

- Regression for prediction. Here a model is fitted to provide a prediction rule for application in a similar situation to which the data used for fitting apply. Here the dependent variables corresponding to such future application would be subject to the same types of observation error as those in the data used for fitting. It is therefore logically consistent to use the least-squares prediction rule for such data.
- Regression for fitting a "true relationship". In standard regression analysis that leads to fitting by least squares there is an implicit assumption that errors in the independent variable are zero or strictly controlled so as to be negligible. When errors in the independent variable are non-negligible, models of measurement error can be used; such methods can lead to parameter estimates, hypothesis testing and confidence intervals that take into account the presence of observation errors in the independent variables. [11] An alternative approach is to fit a model by total least squares; this can be viewed as taking a pragmatic approach to balancing the effects of the different sources of error in formulating an objective function for use in model-fitting.

The minimum of the sum of squares is found by setting the gradient to zero. Since the model contains *m* parameters, there are *m* gradient equations:

\[\frac{\partial S}{\partial \beta_j} = 2 \sum_i r_i \frac{\partial r_i}{\partial \beta_j} = 0, \quad j = 1, \dots, m\]

The gradient equations apply to all least squares problems. Each particular problem requires particular expressions for the model and its partial derivatives. [12]

### Linear least squares

A regression model is a linear one when the model comprises a linear combination of the parameters, i.e.,

\[f(x, \beta) = \sum_{j=1}^{m} \beta_j \phi_j(x)\]

where each \(\phi_j\) is a function of \(x\) alone. Letting \(X_{ij} = \phi_j(x_i)\) and collecting the observations in the vector \(y\), the minimum is found by setting the gradient of the loss to zero and solving for \(\hat{\beta}\):

\[\hat{\beta} = (X^{T}X)^{-1}X^{T}y\]
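For instance, fitting a parabola is still a *linear* least squares problem, because the model is linear in the parameters; a sketch in Python with NumPy, using made-up noiseless data from \(y = 1 + 2x + 3x^2\):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 20)
y = 1.0 + 2.0 * x + 3.0 * x ** 2        # synthetic data, no noise

# Design matrix X_ij = phi_j(x_i) with basis functions 1, x, x^2.
X = np.column_stack([np.ones_like(x), x, x ** 2])

# Closed-form linear least squares: beta = (X^T X)^{-1} X^T y.
beta = np.linalg.solve(X.T @ X, X.T @ y)
```

The nonlinearity of \(\phi_2(x) = x^2\) in \(x\) is irrelevant; only linearity in \(\beta\) matters.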

### Non-linear least squares

There is, in some cases, a closed-form solution to a non-linear least squares problem, but in general there is not. In the case of no closed-form solution, numerical algorithms are used to find the value of the parameters \(\beta\) that minimizes the objective. Most algorithms involve choosing initial values for the parameters; the parameters are then refined iteratively, with the model linearized about the current estimate by a first-order Taylor expansion.

The Jacobian **J** is a function of constants, the independent variable *and* the parameters, so it changes from one iteration to the next. The residuals are given by

\[r_i = \Delta y_i - \sum_{s=1}^{m} J_{is}\,\Delta \beta_s\]

where \(\Delta y_i = y_i - f(x_i, \beta^k)\). To minimize the sum of squares of the \(r_i\), the gradient equations are set to zero and solved for \(\Delta \beta\),

which, on rearrangement, become *m* simultaneous linear equations, the **normal equations**:

The normal equations are written in matrix notation as

\[(\mathbf{J}^{T}\mathbf{J})\,\Delta \beta = \mathbf{J}^{T}\,\Delta y\]

These are the defining equations of the Gauss–Newton algorithm.

### Differences between linear and nonlinear least squares

- The model function, *f*, in LLSQ (linear least squares) is a linear combination of parameters of the form \(f = X_{i1}\beta_1 + X_{i2}\beta_2 + \cdots\). The model may represent a straight line, a parabola or any other linear combination of functions. In NLLSQ (nonlinear least squares) the parameters appear as functions, such as \(\beta^2\), \(e^{\beta x}\) and so forth. If the derivatives \(\partial f / \partial \beta_j\) are either constant or depend only on the values of the independent variable, the model is linear in the parameters. Otherwise the model is nonlinear.
- Initial values for the parameters are needed to find the solution to a NLLSQ problem; LLSQ does not require them.
- Solution algorithms for NLLSQ often require that the Jacobian can be calculated, similar to LLSQ. Analytical expressions for the partial derivatives can be complicated. If analytical expressions are impossible to obtain, either the partial derivatives must be calculated by numerical approximation or an estimate must be made of the Jacobian, often via finite differences.
- Non-convergence (failure of the algorithm to find a minimum) is a common phenomenon in NLLSQ.
- LLSQ is globally convex, so non-convergence is not an issue.
- Solving NLLSQ is usually an iterative process which has to be terminated when a convergence criterion is satisfied. LLSQ solutions can be computed using direct methods, although problems with large numbers of parameters are typically solved with iterative methods, such as the Gauss–Seidel method.
- In LLSQ the solution is unique, but in NLLSQ there may be multiple minima in the sum of squares.
- Under the condition that the errors are uncorrelated with the predictor variables, LLSQ yields unbiased estimates, but even under that condition NLLSQ estimates are generally biased.

These differences must be considered whenever the solution to a nonlinear least squares problem is being sought. [12]

Consider a simple example drawn from physics. A spring should obey Hooke's law, which states that the extension of a spring \(y\) is proportional to the force, \(F\), applied to it:

\[f(F_i, k) = kF_i\]

constitutes the model, where \(F\) is the independent variable. In order to estimate the force constant, \(k\), we conduct a series of \(n\) measurements with different forces to produce a set of data, \((F_i, y_i)\), \(i = 1, \dots, n\), where \(y_i\) is a measured spring extension. [14] Each experimental observation will contain some error, \(\varepsilon_i\), and so we may specify an empirical model for our observations,

\[y_i = kF_i + \varepsilon_i\]

There are many methods we might use to estimate the unknown parameter \(k\). Since the \(n\) equations in the single unknown \(k\) comprise an overdetermined system, we estimate \(k\) using least squares. The sum of squares to be minimized is

\[S = \sum_{i=1}^{n} (y_i - kF_i)^2\]

The least squares estimate of the force constant, \(k\), is given by

\[\hat{k} = \frac{\sum_i F_i y_i}{\sum_i F_i^2}\]

We assume that applying force *causes* the spring to expand. After having derived the force constant by least squares fitting, we predict the extension from Hooke's law.
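A minimal numerical sketch of the spring example, with invented measurements around a true \(k\) of 2:

```python
# Invented force/extension data: y is approximately k*F with k = 2.
F = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 7.8]

# Closed-form least squares estimate for the single parameter k:
# k-hat = (sum F_i y_i) / (sum F_i^2).
k_hat = sum(f * yi for f, yi in zip(F, y)) / sum(f * f for f in F)
```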

In a least squares calculation with unit weights, or in linear regression, the variance on the *j*th parameter, denoted \(\operatorname{var}(\hat{\beta}_j)\), is usually estimated with

\[\operatorname{var}(\hat{\beta}_j) = \sigma^2 \left([X^{T}X]^{-1}\right)_{jj} \approx \frac{S}{n-m}\,C_{jj}\]

where the true error variance \(\sigma^2\) is replaced by an estimate, the reduced chi-squared statistic, based on the minimized value of the residual sum of squares (objective function), \(S\). The denominator, \(n-m\), is the statistical degrees of freedom; see effective degrees of freedom for generalizations. [12] \(C\) is the covariance matrix.

If the probability distribution of the parameters is known or an asymptotic approximation is made, confidence limits can be found. Similarly, statistical tests on the residuals can be conducted if the probability distribution of the residuals is known or assumed. We can derive the probability distribution of any linear combination of the dependent variables if the probability distribution of experimental errors is known or assumed. Inferring is easy when assuming that the errors follow a normal distribution, consequently implying that the parameter estimates and residuals will also be normally distributed conditional on the values of the independent variables. [12]

It is necessary to make assumptions about the nature of the experimental errors to test the results statistically. A common assumption is that the errors belong to a normal distribution. The central limit theorem supports the idea that this is a good approximation in many cases.

- The Gauss–Markov theorem. In a linear model in which the errors have expectation zero conditional on the independent variables, are uncorrelated and have equal variances, the best linear unbiased estimator of any linear combination of the observations, is its least-squares estimator. "Best" means that the least squares estimators of the parameters have minimum variance. The assumption of equal variance is valid when the errors all belong to the same distribution.
- If the errors belong to a normal distribution, the least-squares estimators are also the maximum likelihood estimators in a linear model.

However, suppose the errors are not normally distributed. In that case, a central limit theorem often nonetheless implies that the parameter estimates will be approximately normally distributed so long as the sample is reasonably large. For this reason, given the important property that the error mean is independent of the independent variables, the distribution of the error term is not an important issue in regression analysis. Specifically, it is not typically important whether the error term follows a normal distribution.

A special case of generalized least squares called **weighted least squares** occurs when all the off-diagonal entries of \(\Omega\) (the correlation matrix of the residuals) are null; the variances of the observations (along the covariance matrix diagonal) may still be unequal (heteroscedasticity). In simpler terms, heteroscedasticity is when the variance of \(Y_i\) depends on the value of \(x_i\).

The first principal component about the mean of a set of points can be represented by that line which most closely approaches the data points (as measured by squared distance of closest approach, i.e. perpendicular to the line). In contrast, linear least squares tries to minimize the distance in the \(y\) direction only. Thus, although the two use a similar error metric, linear least squares treats one dimension of the data preferentially, while PCA treats all dimensions equally.

The statistician Sara van de Geer used empirical process theory and the Vapnik–Chervonenkis dimension to prove that a least-squares estimator can be interpreted as a measure on the space of square-integrable functions. [15]

### Tikhonov regularization

In some contexts a regularized version of the least squares solution may be preferable. Tikhonov regularization (or ridge regression) adds a constraint that \(\|\beta\|_2^2\), the squared \(L_2\)-norm of the parameter vector, is not greater than a given value. Equivalently, it may solve an unconstrained minimization of the least-squares objective with the penalty \(\alpha \|\beta\|_2^2\) added, where \(\alpha\) is a constant (this is the Lagrangian form of the constrained problem).
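In the penalized form the ridge estimate has a closed-form solution, \(\hat{\beta} = (X^{T}X + \alpha I)^{-1}X^{T}y\); a sketch in Python with NumPy on invented data, showing the coefficients shrinking as \(\alpha\) grows:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(50)

def ridge(X, y, alpha):
    """Tikhonov/ridge solution beta = (X^T X + alpha I)^{-1} X^T y."""
    m = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(m), X.T @ y)

beta_small = ridge(X, y, 0.01)    # near the ordinary least squares fit
beta_large = ridge(X, y, 100.0)   # heavily shrunk toward zero
```

Note that the ridge coefficients shrink toward zero but, unlike Lasso, never reach it exactly.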

### Lasso method

An alternative regularized version of least squares is *Lasso* (least absolute shrinkage and selection operator), which uses the constraint that \(\|\beta\|_1\), the \(L_1\)-norm of the parameter vector, is no greater than a given value. [16] [17] [18] (As above, this is equivalent to an unconstrained minimization of the least-squares objective with the penalty \(\alpha \|\beta\|_1\) added.)

One of the prime differences between Lasso and ridge regression is that in ridge regression, as the penalty is increased, all parameters are reduced while still remaining non-zero, while in Lasso, increasing the penalty will cause more and more of the parameters to be driven to zero. This is an advantage of Lasso over ridge regression, as driving parameters to zero deselects the features from the regression. Thus, Lasso automatically selects more relevant features and discards the others, whereas ridge regression never fully discards any features. Some feature selection techniques are developed based on the LASSO, including Bolasso, which bootstraps samples, [20] and FeaLect, which analyzes the regression coefficients corresponding to different values of \(\alpha\).

The \(L_1\)-regularized formulation is useful in some contexts due to its tendency to prefer solutions where more parameters are zero, which gives solutions that depend on fewer variables. [16] For this reason, the Lasso and its variants are fundamental to the field of compressed sensing. An extension of this approach is elastic net regularization.

## Table of Contents

0.1 Evaluating a polynomial

0.3 Floating point representation of real numbers

0.3.1 Floating point formats

0.3.2 Machine representation

0.3.3 Addition of floating point numbers

0.6 Software and Further Reading

1.1.2 How accurate and how fast?

1.2.1 Fixed points of a function

1.2.2 Geometry of Fixed Point Iteration

1.2.3 Linear Convergence of Fixed Point Iteration

1.3.1 Forward and backward error

1.3.2 The Wilkinson polynomial

1.3.3 Sensitivity and error magnification

1.4.1 Quadratic convergence of Newton's method

1.4.2 Linear convergence of Newton's method

1.5 Root-finding without derivatives

1.5.1 Secant method and variants

REALITY CHECK 1: Kinematics of the Stewart platform

1.6 Software and Further Reading

**2. Systems of Equations**

2.1.1 Naive Gaussian elimination

2.2.1 Backsolving with the LU factorization

2.2.2 Complexity of the LU factorization

2.3.1 Error magnification and condition number

2.4 The PA=LU factorization

REALITY CHECK 2: The Euler-Bernoulli Beam

2.5.2 Gauss-Seidel Method and SOR

2.5.3 Convergence of iterative methods

2.5.4 Sparse matrix computations

2.6 Methods for symmetric positive-definite matrices

2.6.1 Symmetric positive-definite matrices

2.6.2 Cholesky factorization

2.6.3 Conjugate Gradient Method

2.7 Nonlinear systems of equations

2.7.1 Multivariate Newton's method

2.8 Software and Further Reading

3.1 Data and interpolating functions

3.1.1 Lagrange interpolation

3.1.2 Newton's divided differences

3.1.3 How many degree d polynomials pass through n points?

3.1.4 Code for interpolation

3.1.5 Representing functions by approximating polynomials

3.2.1 Interpolation error formula

3.2.2 Proof of Newton form and error formula

3.3 Chebyshev interpolation

3.3.2 Chebyshev polynomials

3.4.1 Properties of splines

REALITY CHECK 3: Constructing fonts from Bézier splines

3.6 Software and Further Reading

4.1 Least squares and the normal equations

4.1.1 Inconsistent systems of equations

4.1.2 Fitting models to data

4.1.3 Conditioning of least squares

4.2 Linear and nonlinear models

4.3.1 Gram-Schmidt orthogonalization and least squares

4.3.2 Modified Gram-Schmidt orthogonalization

4.3.3 Householder reflectors

4.4 Generalized Minimum Residual (GMRES) Method

4.5 Nonlinear least squares

4.5.2 Models with nonlinear parameters

4.5.3 Levenberg-Marquardt method

REALITY CHECK 4: GPS, conditioning and nonlinear least squares

4.6 Software and Further Reading

**5. Numerical Differentiation and Integration**

5.1 Numerical differentiation

5.1.1 Finite difference formulas

5.1.4 Symbolic differentiation and integration

5.2 Newton-Cotes formulas for numerical integration

5.2.3 Composite Newton-Cotes Formulas

5.2.4 Open Newton-Cotes methods

REALITY CHECK 5: Motion control in computer-aided modelling

5.6 Software and Further Reading

**6. Ordinary Differential Equations**

6.1 Initial value problems

6.1.2 Existence, uniqueness, and continuity for solutions

6.1.3 First-order linear equations

6.2 Analysis of IVP solvers

6.2.1 Local and global truncation error

6.2.2 The explicit trapezoid method

6.3 Systems of ordinary differential equations

6.3.1 Higher order equations

6.3.2 Computer simulation: The pendulum

6.3.3 Computer simulation: Orbital mechanics

6.4 Runge-Kutta methods and applications

6.4.1 The Runge-Kutta family

6.4.2 Computer simulation: The Hodgkin-Huxley neuron

6.4.3 Computer simulation: The Lorenz equations

REALITY CHECK 6: The Tacoma Narrows Bridge

6.5 Variable step-size methods

6.5.1 Embedded Runge-Kutta pairs

6.6 Implicit methods and stiff equations

6.7.1 Generating multistep methods

6.7.2 Explicit multistep methods

6.7.3 Implicit multistep methods

6.8 Software and Further Reading

**7. Boundary Value Problems**

7.1.1 Solutions of boundary value problems

7.1.2 Shooting method implementation

REALITY CHECK 7: Buckling of a circular ring

7.2 Finite difference methods

7.2.1 Linear boundary value problems

7.2.2 Nonlinear boundary value problems

7.3 Collocation and the Finite Element Method

7.3.2 Finite elements and the Galerkin method

7.4 Software and Further Reading

**8. Partial Differential Equations**

8.1.1 Forward difference method

8.1.2 Stability analysis of forward difference method

8.1.3 Backward difference method

8.3.1 Finite difference method for elliptic equations

REALITY CHECK 8: Heat distribution on a cooling fin

8.3.2 Finite element method for elliptic equations

8.4 Nonlinear partial differential equations

8.4.1 Implicit Newton solver

8.4.2 Nonlinear equations in two space dimensions

8.5 Software and Further Reading

**9. Random Numbers and Applications**

9.1.2 Exponential and normal random numbers

9.2.1 Power laws for Monte Carlo estimation

9.3 Discrete and continuous Brownian motion

9.3.2 Continuous Brownian motion

9.4 Stochastic differential equations

9.4.1 Adding noise to differential equations

9.4.2 Numerical methods for SDEs

REALITY CHECK 9: The Black-Scholes formula

9.5 Software and Further Reading

**10. Trigonometric Interpolation and the FFT**

10.1 The Fourier Transform

10.1.2 Discrete Fourier Transform

10.1.3 The Fast Fourier Transform

10.2 Trigonometric interpolation

10.2.1 The DFT Interpolation Theorem

10.2.2 Efficient evaluation of trigonometric functions

10.3 The FFT and signal processing

10.3.1 Orthogonality and interpolation

10.3.2 Least squares fitting with trigonometric functions

10.3.3 Sound, noise, and filtering

REALITY CHECK 10: The Wiener filter

10.4 Software and Further Reading

**11. Compression**

11.1 The Discrete Cosine Transform

11.1.2 The DCT and least squares approximation

11.2 Two-dimensional DCT and image compression

11.3.1 Information theory and coding

11.3.2 Huffman coding for the JPEG format

11.4 Modified DCT and audio compression

11.4.1 Modified Discrete Cosine Transform

REALITY CHECK 11: A simple audio codec using the MDCT

11.5 Software and Further Reading

**12. Eigenvalues and Singular Values**

12.1 Power iteration methods

12.1.2 Convergence of power iteration

12.1.3 Inverse power iteration

12.1.4 Rayleigh quotient iteration

12.2.1 Simultaneous iteration

12.2.2 Real Schur form and QR

12.2.3 Upper Hessenberg form

REALITY CHECK 12: How search engines rate page quality

12.3 Singular value decomposition

12.3.1 Finding the SVD in general

12.3.2 Special case: symmetric matrices

12.4 Applications of the SVD

12.4.1 Properties of the SVD

12.5 Software and Further Reading

### Top reviews for the course ADVANCED LINEAR MODELS FOR DATA SCIENCE 1: LEAST SQUARES

This is an excellent course that enabled me to understand how multiple regression in linear models works under the hood. The practical examples shown by the professor were very helpful. Thank you.

Great, detailed walk-through of least squares. Linear Algebra is a must for this course. To follow the last part requires knowledge of matrix (eigen?)decomposition, which derailed me somewhat.

I really enjoyed the course. It was well explained and the quizzes at regular intervals were helpful. It would be great if there were some practice exercises though.

We need more advanced, theoretical courses on Coursera, like this one, in order to deeply understand the more general courses like Regression Models and Linear Models.

## Abstract

This paper presents a robust color-image watermarking algorithm based on a fuzzy least squares support vector machine (FLS-SVM) and the Bessel K form (BKF) distribution, incorporating a recently developed geometric-correction step. We first compute the quaternion discrete Fourier transform (QDFT) of the maximum central region of the original color image, and then embed the watermark into the magnitudes of the low-frequency QDFT coefficients. In the watermark-decoding process, a synchronization correction based on the FLS-SVM model is applied. To train the FLS-SVM model, we first perform the quaternion wavelet transform (QWT) of the grayscale images corresponding to the color training images, then fit the empirical histogram of the QWT coefficients with the BKF distribution, and finally use the shape and scale parameters of the BKF distribution to construct the image feature vector. Experimental results show that the proposed algorithm is not only imperceptible but also highly robust against common image-processing attacks and geometric attacks.

Senior undergraduates or graduate students in engineering and science who are taking a numerical methods course using Python

PART 1 INTRODUCTION TO PYTHON PROGRAMMING

1.1 Getting Started With Python

1.2 Python as a Calculator

1.4 Introduction to Jupyter Notebook

1.5 Logical Expressions and Operators

CHAPTER 2 Variables and Basic Data Structures

2.1 Variables and Assignment

2.2 Data Structure – String

2.6 Data Structure – Dictionary

2.7 Introducing Numpy Arrays

3.2 Local Variables and Global Variables

3.3 Nested Functions

3.4 Lambda Functions

3.5 Functions as Arguments to Functions

CHAPTER 4 Branching Statements

4.1 If-Else Statements

4.2 Ternary Operators

4.3 Summary and Problems

5.1 For-Loops

5.2 While Loops

CHAPTER 7 Object-Oriented Programming

7.3 Inheritance, Encapsulation, and Polymorphism

7.4 Summary and Problems

CHAPTER 8 Complexity

8.1 Complexity and Big-O Notation

CHAPTER 9 Representation of Numbers

9.2 Floating Point Numbers

CHAPTER 10 Errors, Good Programming Practices, and Debugging

CHAPTER 11 Reading and Writing Data

CHAPTER 12 Visualization and Plotting

12.1 2D Plotting

12.2 3D Plotting

12.3 Working With Maps

12.4 Animations and Movies

CHAPTER 13 Parallelize Your Python

13.1 Parallel Computing Basics

PART 2 INTRODUCTION TO NUMERICAL METHODS

CHAPTER 14 Linear Algebra and Systems of Linear Equations

14.1 Basics of Linear Algebra

14.2 Linear Transformations

14.3 Systems of Linear Equations

14.4 Solutions to Systems of Linear Equations

14.5 Solving Systems of Linear Equations in Python

CHAPTER 15 Eigenvalues and Eigenvectors

15.1 Eigenvalues and Eigenvectors Problem Statement

15.4 Eigenvalues and Eigenvectors in Python

CHAPTER 16 Least Squares Regression

16.1 Least Squares Regression Problem Statement

16.2 Least Squares Regression Derivation (Linear Algebra)

16.3 Least Squares Regression Derivation (Multivariate Calculus)

16.4 Least Squares Regression in Python

16.5 Least Squares Regression for Nonlinear Functions

17.1 Interpolation Problem Statement

17.2 Linear Interpolation

17.3 Cubic Spline Interpolation

17.4 Lagrange Polynomial Interpolation

17.5 Newton’s Polynomial Interpolation

CHAPTER 18 Taylor Series

18.1 Expressing Functions Using a Taylor Series

18.2 Approximations Using Taylor Series

18.3 Discussion About Errors

CHAPTER 19 Root Finding

19.1 Root Finding Problem Statement

19.4 Newton–Raphson Method

19.5 Root Finding in Python

19.6 Summary and Problems

CHAPTER 20 Numerical Differentiation

20.1 Numerical Differentiation Problem Statement

20.2 Using Finite Difference to Approximate Derivatives

20.3 Approximating Higher Order Derivatives

20.4 Numerical Differentiation With Noise

CHAPTER 21 Numerical Integration

21.1 Numerical Integration Problem Statement

21.2 Riemann Integral

21.3 Trapezoid Rule

21.4 Simpson’s Rule

21.5 Computing Integrals in Python

21.6 Summary and Problems

CHAPTER 22 Ordinary Differential Equations (ODEs) Initial-Value Problems

22.1 ODE Initial Value Problem Statement

22.2 Reduction of Order

22.3 The Euler Method

22.4 Numerical Error and Instability

22.5 Predictor–Corrector and Runge–Kutta Methods

CHAPTER 23 Boundary-Value Problems for Ordinary Differential Equations (ODEs)

23.1 ODE Boundary Value Problem Statement

23.3 The Finite Difference Method

23.4 Numerical Error and Instability

CHAPTER 24 Fourier Transform

24.1 The Basics of Waves

24.2 Discrete Fourier Transform (DFT)

24.3 Fast Fourier Transform (FFT)

Appendix A Getting Started With Python in Windows




### Comment from the Stata technical group

William Greene's *Econometric Analysis* has been the standard reference for econometrics among economists, political scientists, and other social scientists for almost thirty years. As of 2016, the book had been cited more than 60,000 times; in 2014, it was part of Google Scholar's list of the 100 most-cited works across all fields and all time. The newly released eighth edition is certain to continue that tradition. This book's abundance of examples and emphasis on putting econometric theory to practical use make it valuable not only to graduate students taking their first course in econometrics, but also to students and professionals who engage in empirical research.

Part I of the book, chapters 1 to 6, covers regression modeling; properties of the least-squares estimator; inference and prediction; and tests for functional form and specification. Chapter 6 is of special interest: in this new edition, it introduces modern treatment-effects concepts, such as regression discontinuity, as part of the basic analytical tool set of econometrics rather than as a special topic presented in later chapters of the text.

Part II of the book, chapters 7 to 11, covers extensions of, and deviations from, the basic framework presented in Part I. Chapter 7 covers nonlinear models and contains a new discussion of interaction effects. Chapter 8 covers instrumental variables and endogeneity and has been revised to include more current methods and applications. Chapters 9 and 10 generalize the linear regression model to allow for heteroskedasticity. Then, with the generalized least-squares (GLS) estimator already discussed in the context of nonspherical disturbances, Greene presents fixed- and random-effects panel-data models as straightforward extensions of least squares. Chapter 11, which deals with panel data, has many revisions relevant to current research and applications, much like chapter 8.

Part III of the book, chapters 12 to 16, devotes one chapter to each of four popular estimation methods: the generalized method of moments, maximum likelihood, simulation, and Bayesian inference. Each chapter strikes a good balance between theoretical rigor and practical applications. Many newer discrete-choice models require evaluation of multivariate normal probabilities; to account for this, chapter 15 includes a detailed discussion of the GHK simulator.

Part IV of the book, chapters 17 to 19, covers advanced techniques for microeconometrics. Chapter 17 details binary-choice models for both cross-sectional and panel data. Part IV also includes bivariate and multivariate probit models; models for count, multinomial, and ordered outcomes; and models for truncated data, duration data, and sample selection.

Part V of the book, chapters 20 and 21, covers advanced techniques for macroeconometrics. Chapter 20, on stationary time series, describes estimation in the presence of serial correlation, tests for autocorrelation, lagged dependent variables, and ARCH models. Chapter 21, on nonstationary series, covers unit roots and cointegration. The chapters in Part V frequently use the results obtained in Part III on estimation. The book concludes with appendices on matrix algebra, probability, distribution theory, and optimization. These appendices are freely available at the book's companion website.

## About the Author

**ALAN AGRESTI, PhD,** is Distinguished Professor Emeritus in the Department of Statistics at the University of Florida. He has presented short courses on generalized linear models and categorical data methods in more than 30 countries. The author of over 200 journal articles, Dr. Agresti is also the author of *Categorical Data Analysis*, Third Edition, *Analysis of Ordinal Categorical Data*, Second Edition, and *An Introduction to Categorical Data Analysis*, Second Edition, all published by Wiley.

## Step 7:

#### Calculating the Least Common Multiple

7.1 Find the Least Common Multiple

The left denominator is: (a+1) • (a-1) • (a²+1)

The right denominator is: (a⁴+1) • (a²+1) • (a+1) • (a-1)

Number of times each algebraic factor appears in the factorization of each denominator:

| Algebraic Factor | Left Denominator | Right Denominator | L.C.M = Max |
|---|---|---|---|
| a+1 | 1 | 1 | 1 |
| a-1 | 1 | 1 | 1 |
| a²+1 | 1 | 1 | 1 |
| a⁴+1 | 0 | 1 | 1 |

Least Common Multiple:

(a+1) • (a-1) • (a²+1) • (a⁴+1)
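The factor-by-factor maximum taken in the table above can be checked symbolically. A minimal sketch using SymPy (the variable names and the use of `sympy.lcm` are illustration, not part of the original worked solution):

```python
from sympy import symbols, lcm, factor, expand

a = symbols('a')

# The two denominators, as factored in the worked solution above.
left = (a + 1) * (a - 1) * (a**2 + 1)
right = (a**4 + 1) * (a**2 + 1) * (a + 1) * (a - 1)

# lcm() takes the maximum power of each irreducible factor,
# exactly as the table does.
result = lcm(left, right)

print(factor(result))   # the product of the factors selected in the table
print(expand(result))   # a**8 - 1
```

Note that the LCM collapses nicely here: (a+1)(a-1)(a²+1)(a⁴+1) = (a⁴-1)(a⁴+1) = a⁸-1.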