Estimation Procedures
Estimation
Ordinary Least Squares
Matrix Formulation
Standard Errors
Conduct in R
Ordinary Least Squares
Maximum Likelihood Approach
Method of Moments
Find the variance of the estimate
Find the information matrix
Use for Inference
Given data pairs \((X_i,Y_i)_{i=1}^n\), the ordinary least squares (OLS) estimator finds the estimates \(\hat\beta_0\) and \(\hat\beta_1\) that minimize the following function:
\[ \sum^n_{i=1}\{y_i-(\beta_0+\beta_1x_i)\}^2 \]
Setting the partial derivatives with respect to \(\beta_0\) and \(\beta_1\) to zero gives the closed-form estimators, together with the usual estimate of the error variance:
\[ \hat\beta_0 = \bar y - \hat\beta_1\bar x \]
\[ \hat\beta_1 = \frac{\sum^n_{i=1}(y_i-\bar y)(x_i-\bar x)}{\sum^n_{i=1}(x_i-\bar x)^2} \]
\[ \hat\sigma^2 = \frac{1}{n-2}\sum^n_{i=1}(y_i-\hat y_i)^2 \]
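As a quick check of these formulas, here is a minimal R sketch; the simulated vectors x and y below are assumptions for illustration, not data from the notes.
# Made-up data for illustration
set.seed(1)
x <- rnorm(20)
y <- 2 + 3 * x + rnorm(20)

# Closed-form OLS estimates
beta1_hat <- sum((y - mean(y)) * (x - mean(x))) / sum((x - mean(x))^2)
beta0_hat <- mean(y) - beta1_hat * mean(x)

# Error variance estimate with n - 2 degrees of freedom
y_hat <- beta0_hat + beta1_hat * x
sigma2_hat <- sum((y - y_hat)^2) / (length(y) - 2)
c(beta0_hat, beta1_hat, sigma2_hat)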
In matrix notation, the model for a single observation is
\[ Y_i = \boldsymbol X_i^\mathrm T \boldsymbol \beta + \epsilon_i \]
where:
\(Y_i\): Outcome Variable
\(\boldsymbol X_i=(1, X_i)^\mathrm T\): Predictors
\(\boldsymbol \beta = (\beta_0, \beta_1)^\mathrm T\): Coefficients
\(\epsilon_i\): Error term
Stacking the \(n\) data points gives the matrix form
\[ \boldsymbol Y = \boldsymbol X\boldsymbol \beta + \boldsymbol \epsilon \]
where:
\(\boldsymbol Y = (Y_1, \cdots, Y_n)^\mathrm T\): Vector of outcomes
\(\boldsymbol X=(\boldsymbol X_1, \cdots, \boldsymbol X_n)^\mathrm T\): Design matrix of predictors
\(\boldsymbol \beta = (\beta_0, \beta_1)^\mathrm T\): Coefficients
\(\boldsymbol \epsilon = (\epsilon_1, \cdots, \epsilon_n)^\mathrm T\): Vector of error terms
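Written out in full, the stacked outcome vector, design matrix, and error vector are
\[ \boldsymbol Y = \begin{pmatrix} Y_1 \\ \vdots \\ Y_n \end{pmatrix}, \qquad \boldsymbol X = \begin{pmatrix} 1 & X_1 \\ \vdots & \vdots \\ 1 & X_n \end{pmatrix}, \qquad \boldsymbol \epsilon = \begin{pmatrix} \epsilon_1 \\ \vdots \\ \epsilon_n \end{pmatrix}, \]
so \(\boldsymbol X\) is an \(n \times 2\) matrix whose first column is all ones.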
The OLS estimator minimizes the residual sum of squares
\[ (\boldsymbol Y - \boldsymbol X \boldsymbol \beta)^\mathrm T(\boldsymbol Y - \boldsymbol X \boldsymbol \beta) \]
which has the closed-form solution
\[ \hat{\boldsymbol \beta} = (\boldsymbol X ^\mathrm T\boldsymbol X)^{-1}\boldsymbol X ^\mathrm T\boldsymbol Y \]
The error variance is again estimated with \(n-2\) degrees of freedom:
\[ \hat \sigma^2 = \frac{1}{n-2} \sum^n_{i=1} (Y_i-\boldsymbol X_i^\mathrm T\hat{\boldsymbol \beta})^2 \]
The standard errors of the individual coefficient estimates are
\[ SE(\hat\beta_0)=\sqrt{\frac{\hat\sigma^2\sum^n_{i=1}x_i^2}{n\sum^n_{i=1}(x_i-\bar x)^2}} \]
\[ SE(\hat\beta_1)=\sqrt{\frac{\hat\sigma^2}{\sum^n_{i=1}(x_i-\bar x)^2}} \]
In matrix form, the estimated covariance matrix of \(\hat{\boldsymbol \beta}\) is
\[ Var(\hat {\boldsymbol \beta}) = (\boldsymbol X ^\mathrm T\boldsymbol X)^{-1} \hat \sigma^2 \]
and the standard errors are the square roots of its diagonal entries.
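To see that the matrix form reproduces the two standard errors above, note that for simple linear regression
\[ \boldsymbol X^\mathrm T\boldsymbol X = \begin{pmatrix} n & \sum^n_{i=1} x_i \\ \sum^n_{i=1} x_i & \sum^n_{i=1} x_i^2 \end{pmatrix}, \qquad (\boldsymbol X^\mathrm T\boldsymbol X)^{-1} = \frac{1}{n\sum^n_{i=1}(x_i-\bar x)^2} \begin{pmatrix} \sum^n_{i=1} x_i^2 & -\sum^n_{i=1} x_i \\ -\sum^n_{i=1} x_i & n \end{pmatrix}, \]
using the identity \(n\sum^n_{i=1} x_i^2 - (\sum^n_{i=1} x_i)^2 = n\sum^n_{i=1}(x_i-\bar x)^2\). Multiplying the diagonal entries by \(\hat\sigma^2\) and taking square roots recovers \(SE(\hat\beta_0)\) and \(SE(\hat\beta_1)\).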
You can use the lm() function to fit a linear model and extract the estimated coefficients and standard errors.
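A minimal sketch, again using the simulated x and y from above as stand-in data:
# Same made-up data as in the earlier sketch
set.seed(1)
x <- rnorm(20)
y <- 2 + 3 * x + rnorm(20)

fit <- lm(y ~ x)
coef(fit)                   # estimated beta0 and beta1
summary(fit)$coefficients   # estimates, standard errors, t and p values
sigma(fit)^2                # residual variance estimate (n - 2 degrees of freedom)
sqrt(diag(vcov(fit)))       # standard errors from the estimated covariance matrix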
R is capable of conducting matrix operations with the following functions:
%*%: matrix multiplication
t(): transposes a matrix
solve(): computes the inverse of a matrix
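As a sketch, these functions reproduce the matrix formulas above (same simulated data as before, assumed only for illustration):
# Same made-up data as in the earlier sketches
set.seed(1)
x <- rnorm(20)
y <- 2 + 3 * x + rnorm(20)

X <- cbind(1, x)                              # n x 2 design matrix
beta_hat <- solve(t(X) %*% X) %*% t(X) %*% y  # (X'X)^{-1} X'Y
resid <- y - X %*% beta_hat
sigma2_hat <- sum(resid^2) / (length(y) - 2)  # error variance estimate
V <- solve(t(X) %*% X) * sigma2_hat           # estimated covariance matrix
sqrt(diag(V))                                 # standard errors, matching summary(lm(y ~ x))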
We can also minimize the least squares criterion numerically in R. The optim() function minimizes a function over a set of parameters: we write the least squares criterion as an R function and supply initial values (for example, 0) for the parameters of interest.
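A minimal sketch of this numerical approach, reusing the same simulated data:
# Same made-up data as in the earlier sketches
set.seed(1)
x <- rnorm(20)
y <- 2 + 3 * x + rnorm(20)

# Least squares criterion as a function of the parameter vector (beta0, beta1)
rss <- function(beta, x, y) sum((y - beta[1] - beta[2] * x)^2)

# Minimize starting from (0, 0); optim() uses Nelder-Mead by default
fit_num <- optim(par = c(0, 0), fn = rss, x = x, y = y)
fit_num$par   # numerically close to coef(lm(y ~ x))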
Exercise: Fit a linear model using lm for the following data, then reproduce the estimates using optim.
Find the values of \(x\) and \(y\) that minimize the following function for any values of \(a\) and \(b\):
\[ f(x,y) = \frac{(x-3)^2}{a^2} + \frac{(y+4)^2}{b^2} \]
Use optim to find the minimizing values numerically.
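One possible sketch, fixing a = 1 and b = 2 purely for illustration (the minimizer is x = 3, y = -4 for any choice of a and b):
# Objective as a function of the parameter vector p = (x, y)
f <- function(p, a, b) (p[1] - 3)^2 / a^2 + (p[2] + 4)^2 / b^2

res <- optim(par = c(0, 0), fn = f, a = 1, b = 2)
res$par   # approximately c(3, -4)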