By Daniel Roeder
Portfolio optimization is a common financial econometric application that draws on various statistical methods. Its goal is to determine the ideal allocation of assets across a given set of possible investments. Many optimization models rely on classical statistical methods, which do not fully account for the estimation risk in historical returns or the stochastic nature of future returns. A fully Bayesian analysis, by contrast, accounts for both of these aspects and incorporates a complete information set as the basis for the investment decision. The information set comprises the market equilibrium, an investor's or expert's personal views, and the historical data on the assets in question. Each of these inputs is quantified, and Bayesian methods are used to combine them into a succinct portfolio optimization model. For the empirical analysis, the model is tested using monthly return data on stock indices from Australia, Canada, France, Germany, Japan, the U.K., and the U.S.
Advisor: Andrew Patton | JEL Codes: C1, C11, C58, G11 | Tagged: Bayesian Analysis, Global Markets, Mean-Variance, Portfolio Optimization
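The combination of market equilibrium, investor views, and historical data described above is the structure of the Black-Litterman framework. The sketch below shows the standard Black-Litterman posterior-mean calculation under assumed inputs; the covariance matrix, market weights, view, and scaling parameter tau are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Illustrative inputs (not from the paper): three hypothetical assets.
Sigma = np.array([[0.040, 0.012, 0.010],
                  [0.012, 0.030, 0.008],
                  [0.010, 0.008, 0.025]])   # historical return covariance
w_mkt = np.array([0.5, 0.3, 0.2])           # market-cap weights
delta = 2.5                                 # risk-aversion coefficient
tau = 0.05                                  # uncertainty scaling on the prior

# Equilibrium (prior) returns implied by reverse optimization.
pi = delta * Sigma @ w_mkt

# One investor view: asset 0 will outperform asset 1 by 2% per year.
P = np.array([[1.0, -1.0, 0.0]])
q = np.array([0.02])
Omega = P @ (tau * Sigma) @ P.T             # view uncertainty (a common default)

# Bayesian posterior mean combining the equilibrium prior with the view.
inv_tS = np.linalg.inv(tau * Sigma)
post_cov = np.linalg.inv(inv_tS + P.T @ np.linalg.inv(Omega) @ P)
mu_post = post_cov @ (inv_tS @ pi + P.T @ np.linalg.inv(Omega) @ q)

# Mean-variance optimal (unconstrained) weights under the posterior mean.
w_opt = np.linalg.inv(delta * Sigma) @ mu_post
print(np.round(mu_post, 4), np.round(w_opt / w_opt.sum(), 4))
```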
By Andrew Bentley
Conventional measurements of equity return volatility rely on the asset's previous-day closing price to infer the current level of volatility and fail to incorporate information about intraday price fluctuations. Realized measures of volatility, such as the realized variance, integrate intraday information by using high-frequency data to form a very accurate measure of the asset's return volatility. These measures can be used in parallel with the traditional definition of the Capital Asset Pricing Model (CAPM) beta to better predict the time-varying systematic risk of an asset. In this analysis, realized measures were added to the Generalized Autoregressive Conditional Heteroskedasticity (GARCH) framework to form a predictive model of beta that can respond quickly to rapid changes in the level of volatility. The findings suggest that this predictive beta better explains the stylized characteristics of beta and is a more accurate forecast of realized beta than the GARCH model or the benchmark Autoregressive Moving-Average (ARMA) model used as a comparison.
JEL Codes: C0, C3, C03, C32, C53, C58 | Tagged:
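As a rough illustration of the realized measures this abstract refers to, the sketch below computes a one-day realized variance and realized beta from simulated 5-minute returns; the simulated data and the true beta of 1.2 are assumptions for demonstration only, not the paper's data or model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 5-minute returns for one trading day (78 intervals): market and asset.
# Placeholder data; the paper uses actual high-frequency equity returns.
n = 78
r_mkt = rng.normal(0, 0.001, n)
r_asset = 1.2 * r_mkt + rng.normal(0, 0.0008, n)   # true beta of 1.2 plus noise

# Realized (co)variation over the day: sums of intraday return products.
rv_mkt = np.sum(r_mkt ** 2)                 # realized variance of the market
rcov = np.sum(r_asset * r_mkt)              # realized covariance

# Realized beta: the high-frequency analogue of the CAPM beta.
realized_beta = rcov / rv_mkt
print(f"realized beta: {realized_beta:.3f}")
```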
By Vivek Bhattacharya
This paper uses high-frequency price data to study the relative contribution of jumps to the total volatility of an equity. In particular, it systematically compares the relative contribution of jumps across a panel of stocks from three different industries by computing the cross-correlation of this statistic for pairs of stocks. We identify a number of empirical regularities in this cross-correlation and compare these observations to predictions from a standard jump-diffusion model for the joint price process of two stocks. A main finding of this paper is that this jump-diffusion model, when calibrated to particular pairs of stocks in the data, cannot replicate some of the empirical patterns observed. The model predictions differ from the empirical observations systematically: predictions for pairs of stocks from the same industry are on the whole much less accurate than predictions for pairs of stocks from different industries. Some possible explanations for this discrepancy are discussed.
Advisor: George Tauchen | JEL Codes: C5, C52, C58 | Tagged:
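The relative contribution of jumps to total volatility can be illustrated with the standard realized-variance/bipower-variation decomposition; the sketch below uses simulated 1-minute returns with one injected jump, so all numbers are placeholders rather than results from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder intraday returns with an injected jump (the paper uses real data).
r = rng.normal(0, 0.001, 390)    # 1-minute returns over a 6.5-hour session
r[195] += 0.01                   # a single large jump at midday

# Realized variance captures both diffusive and jump variation.
rv = np.sum(r ** 2)

# Bipower variation is robust to jumps and estimates the diffusive part only
# (Barndorff-Nielsen and Shephard, 2004).
mu1 = np.sqrt(2 / np.pi)
bv = mu1 ** -2 * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))

# Relative jump contribution: the share of total variation attributable to jumps.
rj = max(rv - bv, 0) / rv
print(f"RV={rv:.6f}  BV={bv:.6f}  jump share={rj:.2%}")
```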
By Kyu Won Choi
This paper studies common intraday jumps and the relative contribution of these common jumps to the realized correlation between individual stocks and a market index, using high-frequency price data. We find that common jumps contribute significantly to realized correlation at different threshold cut-offs, and that both common jumps and realized correlation are relatively consistent across time periods, including the financial crisis. We also find a weak, positive relationship between the relative contribution of common jumps and realized correlation when we aggregate the high-frequency data by year. We also observe that the volatility index and market index reveal the strongest
Advisor: George Tauchen, Tim Bollerslev | JEL Codes: C40, C58, G10 | Tagged:
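A minimal sketch of the two quantities in this abstract, common (co-)jump detection and realized correlation, is given below; the threshold rule, the simulated returns, and the injected co-jump are illustrative assumptions, not the paper's actual detection procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder intraday returns for an index and a stock (real data in the paper).
n = 390
common = rng.normal(0, 0.001, n)
r_idx = common + rng.normal(0, 0.0005, n)
r_stk = 0.9 * common + rng.normal(0, 0.0008, n)
r_idx[100] += 0.008              # one co-jump hitting both series
r_stk[100] += 0.010

# Flag intraday jumps with a simple threshold rule: |r| > c * sample std dev.
c = 4.0
jumps_idx = np.abs(r_idx) > c * r_idx.std()
jumps_stk = np.abs(r_stk) > c * r_stk.std()
co_jumps = jumps_idx & jumps_stk             # common jumps hit both series

# Realized correlation over the day from the intraday cross-products.
rcorr = np.sum(r_idx * r_stk) / np.sqrt(np.sum(r_idx**2) * np.sum(r_stk**2))
print(f"co-jump intervals: {co_jumps.sum()}  realized correlation: {rcorr:.3f}")
```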
By Angela Ryu
Using high-frequency stock price data to estimate financial measures often causes serious distortion, owing to market microstructure noise: the lag of the observed price behind the underlying value that arises from market frictions. The adverse effect of the noise can be avoided by choosing an appropriate sampling frequency. In this study, using mean squared error as the measure of accuracy in beta estimation, the optimal pair of sampling frequency and trailing window was empirically found to be as short as 1 minute and 1 week, respectively. This surprising result may be due to the market's low noise, a consequence of its high liquidity, and the econometric properties of the errors-in-variables model. Moreover, the realized beta obtained from the optimal pair outperformed the constant beta from the CAPM when overnight returns were excluded. This comparison further strengthens the argument that the underlying beta is time-varying.
Advisor: George Tauchen | JEL Codes: C51, C58, G17 | Tagged:
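The trade-off behind the optimal sampling frequency can be illustrated by adding i.i.d. microstructure noise to simulated prices and estimating realized beta at several frequencies; the noise level, true beta, and horizon below are assumptions chosen to make the attenuation bias visible, not values calibrated to the paper's data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate one week of 1-second efficient log prices with a known beta, then
# add i.i.d. microstructure noise to the observed prices.
true_beta = 1.2
n_sec = int(5 * 6.5 * 3600)                  # five 6.5-hour trading days
r_mkt = rng.normal(0, 0.02 / np.sqrt(n_sec), n_sec)
r_eff = true_beta * r_mkt + rng.normal(0, 0.01 / np.sqrt(n_sec), n_sec)
p_mkt = np.cumsum(r_mkt) + rng.normal(0, 1e-4, n_sec)   # noisy observed
p_stk = np.cumsum(r_eff) + rng.normal(0, 1e-4, n_sec)   # log prices

def realized_beta(p_s, p_m, step):
    """Realized beta from prices sampled every `step` seconds."""
    rs, rm = np.diff(p_s[::step]), np.diff(p_m[::step])
    return np.sum(rs * rm) / np.sum(rm ** 2)

# Coarser sampling discards data but dampens the noise-induced bias.
for step in (1, 60, 300, 1800):
    b = realized_beta(p_stk, p_mkt, step)
    print(f"{step:>5}s sampling: beta = {b:.3f} (error {b - true_beta:+.3f})")
```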
By Kunal Jain
Conventional models of volatility estimation do not capture the persistence in high-frequency market data and cannot limit the impact of the market microstructure noise present at very finely sampled intervals. In an attempt to incorporate these two elements, we use the beta metric as a proxy for equity-specific volatility and use finely sampled, time-varying conditional forecasts estimated with the Heterogeneous Autoregressive (HAR) framework to form a predictive beta model. The findings suggest that this predictive beta is better able to capture persistence in financial data and to limit the effect of microstructure noise in high-frequency data than the existing benchmarks.
Advisor: George Tauchen | JEL Codes: C01, C13, C22, C29, C58 | Tagged:
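A minimal sketch of a HAR-type forecasting regression for beta is shown below; the simulated realized-beta series and the daily/weekly/monthly regressor structure follow the standard HAR recipe and are illustrative, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(4)

# Placeholder series of daily realized betas (the paper builds these from
# high-frequency data); a persistent AR(1) process stands in for real estimates.
T = 500
beta = np.empty(T)
beta[0] = 1.0
for t in range(1, T):
    beta[t] = 0.1 + 0.9 * beta[t - 1] + rng.normal(0, 0.05)

# HAR regressors: yesterday's beta, the last week's mean, the last month's mean.
d = beta[21:-1]
w = np.array([beta[t - 5:t].mean() for t in range(22, T)])
m = np.array([beta[t - 22:t].mean() for t in range(22, T)])
y = beta[22:]

# OLS fit of the HAR regression: beta_{t+1} = b0 + bd*d + bw*w + bm*m + e.
X = np.column_stack([np.ones_like(d), d, w, m])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
forecast = X[-1] @ coef          # one-step-ahead forecast for the next day
print("HAR coefficients:", np.round(coef, 3), " next-day forecast:", round(forecast, 3))
```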