Q1: Suppose you are asked to estimate the weekly CAPM model for each of the five stocks in prmdf, using OLS (ordinary least squares); remember that there is no intercept in the CAPM model. Now answer the following questions.
1. How would you use the lm function to estimate the CAPM model for GOOG? Call your output lmgoog, and use the summary function to summarize lmgoog.
2. From the summary of lmgoog, how would you extract the estimated beta of GOOG, the Std. Error of betahat, and shat (the estimate of sigma, the error standard deviation)?
3. It is known that the square of the Std. Error of betahat given in the summary output can be calculated as s2hat * (X'X)^(-1), where s2hat is the estimate of sigma2 and X is the matrix containing the data on prmsp500. Write code to verify this.
4. Now use the mapply function to estimate all five CAPM models. Call the output outls.
5. How would you summarize outls with the help of lapply? (A sketch covering Questions 1.1–1.5 follows below.)
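A minimal R sketch for Questions 1.1–1.5 is given below. It assumes prmdf is a data frame with one column of weekly excess returns per stock (including a column named goog) and that prmsp500 is the matching vector of weekly market excess returns; these object layouts and column names are assumptions, so adjust them to the data actually supplied.

# 1.1 CAPM for GOOG by OLS, with no intercept ("- 1")
lmgoog <- lm(prmdf$goog ~ prmsp500 - 1)
summary(lmgoog)

# 1.2 Extract betahat, its Std. Error, and shat from the summary
smgoog  <- summary(lmgoog)
betahat <- coef(smgoog)["prmsp500", "Estimate"]
sebeta  <- coef(smgoog)["prmsp500", "Std. Error"]
shat    <- smgoog$sigma

# 1.3 Check that se(betahat)^2 equals s2hat * (X'X)^(-1)
X     <- matrix(prmsp500, ncol = 1)
s2hat <- shat^2
c(sebeta^2, s2hat * solve(crossprod(X)))   # the two numbers should agree

# 1.4 All five CAPM regressions with mapply (one lm fit per column of prmdf)
outls <- mapply(function(y) lm(y ~ prmsp500 - 1), prmdf, SIMPLIFY = FALSE)

# 1.5 Summarize every fit with lapply
lapply(outls, summary)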
Q2: Suppose you decide to compare the OLS CAPM results from Question 1 with the Bayesian results. Answer the following questions.
1. Do a Bayesian estimation of the CAPM model for GOOG under the default training-sample prior in cbw. Call the output from the MCMCregressg function thetam, and summarize thetam. How does the posterior mean of beta compare with the estimate of beta from lm? How does the posterior sd of beta compare with the Std. Error of betahat from lm?
2. Now do a Bayesian estimation of each CAPM model in a loop with mapply. Call the output outls. Use the default training sample prior.
3. Now use mapply again, but this time with a non-training-sample prior. Do not use the first 99 observations of the data, and suppose that the prior mean of each beta is 1, the prior variance of each beta (B0_) is .065, the prior mean of sigma2 is .001, and the prior variance of sigma2 is .002.
4. Compare the results from Questions 2.2 and 2.3. (A sketch covering Questions 2.1–2.4 follows below.)
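A sketch for Questions 2.1–2.4 follows. MCMCregressg and cbw come from the course materials and their exact interface is not shown here, so the sketch falls back on MCMCpack's MCMCregress() as a stand-in. In MCMCregress the beta prior is specified by a mean b0 and a precision B0 (so a prior variance of .065 enters as 1/.065), and the sigma2 prior can be set through its mean and variance (sigma.mu, sigma.var). Treat this argument mapping, and the outls2 name, as assumptions to be adapted to the course wrapper.

library(MCMCpack)   # provides MCMCregress(); stand-in for the course's MCMCregressg()

# 2.1 Bayesian CAPM for GOOG (MCMCregress defaults used here as a placeholder
#     for the cbw training-sample prior)
thetam <- MCMCregress(prmdf$goog ~ prmsp500 - 1)
summary(thetam)   # compare posterior mean/sd of beta with coef(summary(lmgoog))

# 2.2 All five models in a loop with mapply, default prior
outls <- mapply(function(y) MCMCregress(y ~ prmsp500 - 1),
                prmdf, SIMPLIFY = FALSE)
lapply(outls, summary)

# 2.3 Informative (non-training-sample) prior, dropping the first 99 observations
keep   <- -(1:99)
outls2 <- mapply(function(y) MCMCregress(y[keep] ~ prmsp500[keep] - 1,
                                         b0 = 1, B0 = 1 / 0.065,
                                         sigma.mu = 0.001, sigma.var = 0.002),
                 prmdf, SIMPLIFY = FALSE)

# 2.4 Compare the posterior means of beta from 2.2 and 2.3
cbind(default_prior     = sapply(outls,  function(m) summary(m)$statistics[1, "Mean"]),
      informative_prior = sapply(outls2, function(m) summary(m)$statistics[1, "Mean"]))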