## Introduction to GARCH Modeling

GARCH is a time series modeling technique that uses past variances to predict future variances. This post demonstrates simple GARCH modeling in Excel and Matlab. GARCH stands for Generalized Autoregressive Conditional Heteroskedasticity. A time series is said to have GARCH effects when it is heteroskedastic, that is, when its variance varies with time; it is homoskedastic when the variance remains constant over time. Put another way, a GARCH model uses past variances to explain future variances. Consequently, you can apply GARCH models to diverse fields such as option pricing, foreign exchange, asset allocation, risk management, and portfolio management, to name a few.

GARCH(1,1) is defined as:

σ²_t = ω + α·ε²_{t-1} + β·σ²_{t-1}

where:

σ² is the variance, ε is the residual, t is the time period, and ω, α, and β are estimation parameters determined by the log likelihood function.

ε_t is the natural log of the ratio of closing asset prices for two consecutive trading periods, ε_t = ln(P_t / P_{t-1}), where P stands for the asset closing price.
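Read literally, one step of this recursion is easy to compute by hand. Here is a minimal Python sketch; the prices and parameter values below are placeholders for illustration, not estimates from this post:

```python
import math

# residual: natural log of the ratio of two consecutive closing prices
p_prev, p_now = 2000.0, 2010.0          # illustrative closing prices
eps = math.log(p_now / p_prev)          # eps_t = ln(P_t / P_{t-1})

# placeholder GARCH(1,1) parameters
omega, alpha, beta = 3.3e-06, 0.125, 0.841

def next_variance(eps_prev, sigma2_prev):
    # sigma^2_t = omega + alpha * eps^2_{t-1} + beta * sigma^2_{t-1}
    return omega + alpha * eps_prev ** 2 + beta * sigma2_prev

sigma2 = next_variance(eps_prev=0.01, sigma2_prev=1e-4)  # yesterday's shock and variance
print(eps, sigma2)
```

Given yesterday's residual and variance, tomorrow's conditional variance follows directly from the three parameters; the whole estimation problem is choosing ω, α, and β so this recursion best explains the observed returns.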

## GARCH Modeling in Excel and Matlab

The Excel spreadsheet in this example has been automated in every way possible. To begin, just enter a major stock index or ETF symbol and the start and end dates. This example uses daily returns of the S&P 500 from Feb 2010 to Feb 2015.

*Figure 1: GARCH input parameters and results*

The Excel code uses the Solver library to maximize the log likelihood. You can manually modify the constraints or the objective function by bringing up the Solver dialog box, assuming the Solver add-in is installed.

*Note: Before using Solver commands in VBA, you must reference the Solver library in the VBE. To do this, go to Tools > References and choose Solver from the list. If you do not see Solver as an option, use the Browse button, navigate to C:\Program Files\Microsoft Office\Office (or Office10)\Library\Solver, and look for the "Solver32.dll" file. Double-click this file; you should now find "SOLVER" in the list, and check the box next to it. If this does not work, copy the "Solver32.dll" file to the C:\WINDOWS\SYSTEM32 directory. You may also try opening the file "Solver.xla" directly and then opening the Excel file. If you do not find the "Solver32.dll" file, you may need to reinstall Office. The code has been tested against Excel 2010 and 2013.*

*Figure 2: Volatility of GSPC as estimated by GARCH(1,1) model.*

You can see from Figure 2 that GARCH volatility spikes upward and decays until there is another spike. There are usually two types of spikes: shocks and announcements. Volatility increases as the announcement (financial results) season approaches, and then fades away once the results are known to the public. The GARCH(1,1) model is employed primarily to estimate how fast this fading (decay) is. Reliable estimation needs lots of data points, which in practice means roughly a thousand or more daily observations.
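How fast the decay happens is governed by the persistence α + β. One common summary is the volatility "half-life"; here is an illustrative Python calculation, using parameter values close to the estimates reported later in this post:

```python
import math

alpha, beta = 0.1251, 0.8408     # illustrative ARCH and GARCH estimates
persistence = alpha + beta       # must be below 1 for a stationary GARCH(1,1)

# half-life: number of trading days for a variance shock to decay by half
half_life = math.log(0.5) / math.log(persistence)
print(persistence, round(half_life, 1))
```

With persistence near 0.97, a shock takes on the order of 20 trading days to lose half its effect; the closer α + β gets to 1, the longer spikes linger.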

In this example, daily returns of the S&P 500 Index were fed into the GARCH(1,1) model, totaling 1260 observations. With fewer than roughly 1000 observations, GARCH estimation usually cannot do a good job of estimating the omega, alpha, and beta parameters. GARCH models are estimated through maximum likelihood. The optimization results can go awry at times, especially when Excel Solver is used.
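For reference, the objective being maximized can be written out directly. Below is a minimal sketch of the Gaussian GARCH(1,1) log likelihood; initializing the variance recursion at the sample variance is one common convention, not the only one:

```python
import math

def garch_log_likelihood(params, residuals):
    """Gaussian log likelihood of a GARCH(1,1) model over a residual series."""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return -math.inf                    # outside the feasible (stationary) region
    var = sum(r * r for r in residuals) / len(residuals)  # start at sample variance
    ll = 0.0
    for r in residuals:
        ll += -0.5 * (math.log(2 * math.pi) + math.log(var) + r * r / var)
        var = omega + alpha * r * r + beta * var          # variance recursion
    return ll

# an optimizer (Excel Solver, Matlab's estimate, scipy.optimize) maximizes this
# over (omega, alpha, beta); the residuals here are toy values
ll = garch_log_likelihood((2e-6, 0.12, 0.85), [0.01, -0.02, 0.015, -0.005, 0.003])
print(ll)
```

Because the likelihood surface can be flat or multi-modal, different optimizers started from different points can land on slightly different parameters, which is why cross-checking tools matters.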

Asset returns almost never follow a pure normal distribution; they tend to have long (fat) tails, and GARCH effects can be a contributing factor. A t-distribution can be used when a longer-tailed distribution gives a better fit. If the model explains the volatility clustering well, there should be no remaining autocorrelation in the squared residuals; the Ljung-Box test can be used to detect such autocorrelation.
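As a sketch of what that test computes, the Ljung-Box Q statistic can be built from the sample autocorrelations of the series being checked (here, squared residuals). This is a minimal stdlib-only version; the critical value in the comment is the standard chi-squared value for 10 degrees of freedom at the 5% level:

```python
def ljung_box_q(x, max_lag):
    """Ljung-Box Q statistic for autocorrelation up to max_lag in series x."""
    n = len(x)
    mean = sum(x) / n
    denom = sum((v - mean) ** 2 for v in x)
    q = 0.0
    for k in range(1, max_lag + 1):
        num = sum((x[t] - mean) * (x[t - k] - mean) for t in range(k, n))
        rho_k = num / denom              # lag-k sample autocorrelation
        q += rho_k ** 2 / (n - k)
    return n * (n + 2) * q

# Q is compared against a chi-squared critical value with max_lag degrees of
# freedom; for 10 lags at the 5% level that value is about 18.31
squared_residuals = [float(t % 7) ** 2 for t in range(300)]  # toy data with obvious structure
q = ljung_box_q(squared_residuals, 10)
print(q)
```

A Q above the critical value signals leftover autocorrelation, i.e., volatility clustering the model failed to absorb; the periodic toy series above fails the test by construction.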

Optimization demands caution. It is always a good idea to test the outcome of one tool against an alternative, robust tool such as Matlab. Below is the Matlab snippet for GARCH(1,1) model estimation; the same set of daily returns was used in Matlab. See the attached csv file.

### GARCH Modeling in Matlab

```
>> residual = csvread('gspc.csv');   % load daily returns
>> model = garch(1,1)                % define a GARCH(1,1) model
>> [fit,VarCov,LogL,Params] = estimate(model,residual);  % fit by maximum likelihood
>> % pull model parameters
>> omega = Params.X(1)               % omega parameter (constant)
>> beta  = Params.X(2)               % beta parameter (GARCH term)
>> alpha = Params.X(3)               % alpha parameter (ARCH term)
>> EstUncondVar = omega/(1-alpha-beta)  % long-run (unconditional) variance
>> vol = sqrt(EstUncondVar)          % implied long-run daily volatility
>> model = garch('Constant',omega,'GARCH',beta,'ARCH',alpha) % redefine model with estimated parameters
GARCH(1,1) Conditional Variance Model:
--------------------------------------
Distribution: Name = 'Gaussian'
P: 1
Q: 1
Constant: 3.29426e-06
GARCH: {0.840753} at Lags [1]
ARCH: {0.125089} at Lags [1]
>> omega
omega =
3.2943e-06
>> alpha
alpha =
0.1559
>> beta
beta =
0.8408
```

### Comparing the results:

The results came very close, with the exception of the **β** parameter. The GRG Nonlinear optimization method in Solver yielded subtle differences in the output parameters.

Excel Solver for **[ω, α, β]** = [0.0000, 0.1559, 0.8400]

Matlab results for **[ω, α, β]** = [0.0000, 0.1559, 0.8408]
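As a quick sanity check on either tool, the long-run variance and volatility implied by the fitted parameters can be recomputed by hand. Here is a sketch using the Matlab-reported values shown above; the 252-trading-day annualization is a common convention, not something the post specifies:

```python
import math

# GARCH(1,1) parameters as reported in the Matlab output above
omega, alpha, beta = 3.29426e-06, 0.125089, 0.840753

long_run_var = omega / (1 - alpha - beta)   # unconditional variance
daily_vol = math.sqrt(long_run_var)         # implied long-run daily volatility
annual_vol = daily_vol * math.sqrt(252)     # annualized over 252 trading days
print(daily_vol, annual_vol)
```

The implied long-run daily volatility of roughly 1% (about 15–16% annualized) is a plausible figure for the S&P 500 over this window, which is a useful smoke test that the optimizer did not wander off.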

**Limitations:**

GARCH(1,1) is not a magic bullet. Although GARCH models are usually applied to return series, investment decisions are rarely based solely on expected returns and volatilities. GARCH models often fail to capture periods of very high volatility, including severe market downturns (e.g., crashes and subsequent rebounds) and other highly unanticipated events that can lead to significant structural change. They also often fail to fully capture the fat tails observed in asset return series. Ljung-Box tests can help identify autocorrelation that remains in the residuals. Once you get a feel for GARCH(1,1), you can adapt the code to EGARCH or NGARCH.
