The role of the efficient market hypothesis
In previous chapters, the focus was on the various methodologies for determining value, noting the disadvantages of some of the most important and widely used methods. They often involved complex mathematical models, deployed in an environment in which it was believed that certainty could be found. The efficient market hypothesis (EMH) prevailed: capital markets are efficient, because competition between profit-seeking market participants ensures that the prices of securities continuously adjust to reflect all publicly available information. Many argued that the dominance of the theory created the context in which the financial crisis occurred.
The theory influenced market participants, central bankers and regulators alike. Central bankers believed that market prices could be trusted and that bubbles either did not exist or could not be identified before they occurred, or even that they were beneficial for growth. Regulators seemed to accept the need for ‘light touch’ regulation, in which the view was taken that ‘bankers knew how to run their business’ and were best left to carry on with that. If the market is indeed efficient in incorporating and acting immediately on information about prices, then transparency is vital. Ensuring adequate, fair and prompt disclosure about a company's financial situation was one of the most important aims of financial regulation, especially in the early part of the last decade. Mark-to-market accounting can be seen as part of that approach to regulation, but does not depend on the efficient market hypothesis for its validity.
The Turner Review, the Financial Services Authority's analysis of the global financial crisis, issued in 2009, places the efficient market theory at the centre.1 The report's conclusions on market efficiency follow.
At the core of these assumptions has been the theory of efficient and rational markets. Five propositions with implications for regulatory approach have followed:
- Market prices are a good indication of rationally evaluated economic value.
- The development of securitised credit, based on the creation of new and more liquid markets, has improved both allocative efficiency and financial stability.
- The risk characteristics of financial markets can be inferred from mathematical analyses, delivering robust quantitative measures of trading risk.
- Market discipline can be used as an effective tool in constraining harmful risk taking.
- Financial innovation can be assumed to be beneficial since market competition would winnow out any innovations which did not deliver value-added.
Each of these assumptions is now subject to extensive challenge on both theoretical and empirical grounds, with potential implications for the appropriate design of regulation and for the role of regulatory authorities. Putting the blame on the efficient market hypothesis was a popular approach during and immediately after the crisis. The view has also been attributed to Alan Greenspan, Chairman of the Board of Governors of the Federal Reserve System from 1987 to January 2006, but his position is rather more nuanced than that.
In his book, The Age of Turbulence, Greenspan recalls how, as the newly appointed Chairman, he watched the stock markets very closely and asked,
How does one make sense of the unprecedented drop (involving the loss of more than a fifth of the total value of the Dow Jones Industrial Average) on October 19, 1987? What new piece of information surfaced between the market's close at the end of the previous trading day and its close on October 19th? I am aware of none.… No financial information was driving these prices.2
It was simply due to the ‘fear of the continuing loss of wealth’.
When markets are behaving rationally, as they do almost all of the time, they appear to engage in a ‘random walk’: the past gives no better indication than a coin flip of the future direction of the price of a stock. But sometimes that walk gives rise to a stampede. When gripped by fear, people rush to disengage from commitments, and stocks will plunge.
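Greenspan's 'random walk' can be illustrated with a short simulation. The return series and its parameters below are hypothetical, chosen only for illustration: if successive price changes are independent, yesterday's direction predicts today's no better than a coin flip.

```python
import random

# Illustrative sketch (not from the text): under a random walk, past
# returns carry no information about the direction of the next move.
random.seed(42)

# Simulated daily returns with an assumed 1 per cent volatility.
returns = [random.gauss(0, 0.01) for _ in range(100_000)]

# How often does yesterday's direction match today's?
hits = sum(
    1 for prev, nxt in zip(returns, returns[1:])
    if (prev > 0) == (nxt > 0)
)
hit_rate = hits / (len(returns) - 1)
print(f"direction 'hit rate': {hit_rate:.3f}")  # close to 0.5, i.e. a coin flip
```

The point of the sketch is only that knowing the last move confers no edge; a 'stampede', in Greenspan's sense, is precisely a period when this independence breaks down.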
In his testimony before the House Committee on Oversight and Government Reform, Greenspan presented his views on the sources of the crisis. He admitted that his view of the operations of the market had been shattered:
Those of us who have looked to the self-interest of lending institutions to protect shareholders' equity (myself especially) are in a state of shocked disbelief. Such counterparty surveillance is a central pillar of our financial markets' state of balance. If it fails, as occurred this year, market stability is undermined.3
His further criticisms were much more pointed:
It was the failure to properly price such risky assets [mortgage-backed securities and collateralized debt obligations] that precipitated the crisis. In recent decades, a vast risk management and pricing system has evolved, combining the best insights of mathematicians and finance experts supported by major advances in computer and communications technology. A Nobel prize was awarded for the pricing model that underpins much of the advance in derivative markets. This modern risk management paradigm held sway for decades. The whole intellectual edifice collapsed in the summer of last year because the data inputted into the risk management models generally covered only the past two decades, a period of euphoria. Had the models been fitted more appropriately to historic periods of stress, capital requirements would have been much higher and the financial world would be in much better shape today, in my judgment.4
Earlier, in an article for the Financial Times, he spelt out the ‘essential problem’ which is that ‘our models – both risk models and econometric models – as complex as they have become, are still too simple to capture the full array of governing variables that drive our global economic reality. A model is, of necessity, an abstraction from the full detail of the real world’.5
A glance at some of his earlier views shows the depths of his disillusionment with financial innovations, or perhaps with the way in which they had been used. He valued the technological innovations: the ‘development of paradigms for containing risk to those willing and presumably able to bear it; the ability of modern economics to absorb unanticipated shocks’; lenders becoming considerably more diversified; the growth of the secondary mortgage market; and the growth of financial derivatives.
Conceptual advances in pricing options and other complex financial products … have significantly lowered the costs of and expanded the opportunities for hedging risks. If risk is properly dispersed, shocks to the overall economic system will be better absorbed and less likely to create the cascading failures that could threaten financial stability.6
To be fair, the conversation was not without its warnings. Risk management capabilities had to be improved. The ‘underlying human traits which lead to excess are scarcely likely to be reformed’, and the role of central banks is in preventing major market disruptions through the ‘development and enforcement of prudent regulatory standards.’7 Hence Greenspan's position is not quite the simplistic view of capitalism and the capital markets sometimes ascribed to him, nor does he believe that the markets are quite as efficient as the EMH apparently portrays them as being.
Other voices were much more strident. Joseph Stiglitz, Professor of Economics at Columbia University and Nobel Prize winner in Economics, concluded in an interview: ‘The Chicago School bears the blame for providing a seeming intellectual foundation for the idea that markets are self-adjusting and the best role for governments is to do nothing’.8 George Soros stated bluntly: ‘On a deeper level, the demise of Lehman Brothers conclusively falsifies the efficient market hypothesis.’9
To consider its role in the financial crisis, the theory itself must be defined. It originated in the work of Paul Samuelson and Eugene Fama; Fama first set it out in 1965 in ‘Random Walks in Stock Market Prices’, and expanded and defended it in many subsequent articles.10 However, the most useful definition of the theory is in Fama's 1970 article:
An ‘efficient’ market is defined as a market where there are large numbers of rational, profit ‘maximisers’ actively competing with each other trying to predict future market values of individual securities, and where important current information is almost freely available to all participants. In an efficient market, competition among many intelligent participants leads to a situation where, at any point in time, actual prices of securities already reflect the effects of information based both on events that have already occurred and on events which, as of now, the market expects to take place in the future. In other words, in an efficient market at any point in time the actual price of a security will be a good estimate of its intrinsic value.11
He identified three distinct levels (or ‘strengths’) at which the market might actually be efficient:
- The weak form: current prices of securities already reflect past price and volume information.
- The semi-strong form: a security's current market price already incorporates not only past price and volume data but also all publicly available information about the company, such as its quarterly financial statements. No one should be able to outperform the market by using something that everyone else knows.
- The strong form: all information, both public and private, is incorporated in the price. Monopolistic access to information does not enable its possessor to profit from that knowledge in an efficient market. Such a view seems counter-intuitive, to say the least. Insider dealers would be disappointed to find that their careful acquisition of inside information provided no profit, unless they disposed of the securities quickly enough, before others acquired the information.
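The weak form lends itself to a simple empirical check: if past prices carry no usable information, successive returns should be close to serially uncorrelated. A minimal sketch on simulated data follows; the simulated series and its parameters are illustrative assumptions, not market data.

```python
import random

random.seed(0)
# Simulated daily returns standing in for real price data (an assumption
# for illustration; a real test would use observed market returns).
r = [random.gauss(0.0004, 0.01) for _ in range(50_000)]

def autocorr(x, lag=1):
    """Sample autocorrelation of a series at the given lag."""
    n = len(x) - lag
    mean = sum(x) / len(x)
    cov = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n)) / n
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return cov / var

# Under weak-form efficiency the lag-1 autocorrelation should be near zero.
print(f"lag-1 autocorrelation: {autocorr(r):.4f}")
```

A materially non-zero autocorrelation in real return data would mean that a mechanical trading rule on past prices could profit, contradicting even the weakest form of the hypothesis.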
However, in a recent article John Cochrane pointed out that the term ‘efficient’ has a precise meaning: ‘An “efficient” market is “informationally efficient” if prices at each moment incorporate all available information about future values’.12 Cochrane further argues that the principle of market efficiency, though a simple one, has generated countless papers testing it, and that the principle can only be tested in the context of an asset-pricing model that specifies equilibrium expected returns. In other words, ‘to fully test whether prices fully reflect available information, we must specify how the market is trying to compensate investors when it sets prices’. Cochrane thereby considerably narrows the claim of the EMH. The efficiency is only informational, not operational, and the hypothesis can only be tested by assuming that equilibrium exists at all times, that is, by assuming that the hypothesis is true.
Fama recognizes that there is a ‘joint hypothesis’ problem right from the start. This is because the hypothesis that markets are efficient, that is, the general statement that prices reflect all the available information, is not an empirical statement and is not itself testable. As he put it in an interview in the New York Times, ‘you can't test the hypothesis without also setting out what we call a “model of market equilibrium” ’. He explained that what the market was trying to do in setting prices was also ‘telling me something about how to measure risk, and then tell me, what is the relationship between the expected return on an asset and its risks?’ That is what Fama, right from the start, called the ‘joint hypothesis problem’: both hypotheses have to be tested together. Fama cheerfully added, ‘Testing all of that is where it gets tricky’. Many academics would certainly agree with that, since Fama's thesis has led to hundreds of articles seeking to analyse the ever-increasing and more extensive data on stock and bond market returns in attempts to prove or disprove the EMH.13
The efficient market theory and its critics
However, almost from the start, supporters have attempted to prove that the EMH holds good, yet there is still no consensus even amongst financial economists as to its validity. Extensive reviews of the empirical evidence led Martin Sewell to the conclusion that, given that an
efficient market will always ‘fully reflect’ available information, but in order to determine how the market should ‘fully reflect’ this information, we need to determine investors' risk preferences. Therefore, any test of the EMH is a test of both market efficiency and investors' risk preferences. For this reason, [the EMH] by itself is not a well-defined and empirically refutable hypothesis.14
The point about risk preferences is an important one. One of the assumptions underlying the hypothesis is that investors invest for the highest immediate returns in the form of dividends, whereas they may instead seek capital growth over a long period of time. This may well be the case if the investors are fund managers acting on behalf of pension funds. The reference to fund managers is important. Both Fama and Shiller see the market as being dominated by individuals, each investing on their own account and, in the nature of the case, having less access to all kinds of information, such as detailed analysts' reports on individual companies, industrial sectors and analyses of country risk. Markets outside the USA are dominated by institutional investors, with very few individual traders. The USA still has individual day traders, but even so its stock markets are dominated by mutual funds and 401(k) investments.
One of the most thorough, though not uncritical, exponents of Fama's hypothesis, Andrew Lo, concluded as far back as 1999 that
the Efficient Markets Hypothesis, by itself, is not a well-defined and empirically refutable hypothesis. To make it operational, one must specify additional structure, e.g. investor preferences, information structure, or business conditions. But then a test of the Efficient Markets Hypothesis becomes a test of several auxiliary hypotheses as well, and a rejection of such a joint hypothesis tells us little about which aspect of the joint hypothesis is inconsistent with the data. Are stock prices too volatile because markets are inefficient, or is it due to risk aversion, or dividend smoothing? All three inferences are consistent with the data. Moreover, new statistical tests designed to distinguish between them will no doubt require auxiliary hypotheses of their own which, in turn, may be questioned.15
Lo reiterated this view in his contribution to The New Palgrave: A Dictionary of Economics almost a decade later, but added that the EMH might be a way of gauging the efficiency of a particular market relative to other markets, such as futures vs. spot markets, or auction vs. dealer markets. He also pointed to ‘several new strands of literature’ based on ‘more realistic assumptions’ including ‘psychological approaches to risk-taking behaviour’.16 More of that later.
Although it would be many steps too far to suggest that the efficient market theory was in any way a major cause of the financial crisis, its neglect of the context in which markets operate and of the varying roles of the range of market participants may have encouraged regulators and policymakers to pay little attention to what was actually happening in the world outside the markets. The financial crisis did, however, cause some leading members of the Chicago School to abandon its main tenets; in the case of Judge Richard Posner, that meant turning to Keynes, and to Keynes' General Theory of Employment, Interest and Money in particular. Posner pointed out that the failure of the Chicago School to understand the magnitude of the crisis was because much of modern economics, by contrast with the work of Keynes, ‘is very mathematical, and, on the other hand, very … credulous about the self-regulating power of the markets. That combination is very dangerous.’17
Another Nobel prize winner, Professor Gary Becker, admitted that Chicago got it wrong, saying, ‘You take derivatives and do not fully understand how the aggregate risk of derivatives operated. Systemic risk: I don't think we understand that, either – at Chicago or anywhere else.’ Elsewhere, Becker admitted that ‘group rationality is questionable. When you look at what happened in housing and related credit markets, you cannot call those rationally functioning markets.’18 Fama was quite unrepentant:
I think it worked quite well in this episode. Stock prices typically decline prior to a recession and in a state of recession. This was a particularly severe recession. Prices started to decline in advance of when people recognized that it was a recession and then continued to decline. That is what you would expect if markets were efficient.
It would make more sense to say that stock prices declined before people knew what was happening; for example, Lehman's stock price declined because people did not know the value of the company's assets, not because they did. He went on to blame the Federal government and its instructions to Fannie Mae and Freddie Mac to buy subprime loans, which is certainly where it all started.19 Perhaps Warren Buffett has the best answer: ‘I'd be a bum on the street with a tin cup if markets were always efficient.’20 Buffett was then the third richest person in the world, with a net worth of $66.3bn in August 2014.
Irrational exuberance: introducing behavioural finance
At times, Fama seemed to contradict his own thesis that stock market prices are entirely unpredictable, but later he admitted that some factors may help to predict longer-term stock prices: when ‘the dividend stock-price ratio is high, expected stock returns tend to be high, and when it is low, expected returns tend to be low’. In a joint paper, Fama and Kenneth French argued that ‘for both bonds and stocks, there are several variables that affect stocks, all of which are highly related to business conditions. We concluded that it tells us that it is likely that the variation in expected returns is rational, and presumably predictable’.21 However, the variation in expected returns, if it is related to business conditions, can be rationally or irrationally related to those conditions. Fama's view is that ‘there is variation in the expected returns, which leads to some predictability … but there is nothing in the available evidence that allows one to settle whether it is rational or irrational in a convincing way’.22
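The kind of predictive relationship Fama and French describe is conventionally examined with a simple predictive regression of next-period returns on the current dividend-price ratio. The sketch below runs such a regression on simulated data; every number in it is an illustrative assumption, not an estimate from real markets.

```python
import random

random.seed(1)
# Simulated data (an assumption for illustration): dividend-price ratios
# and next-period returns built so that a high D/P predicts a high
# expected return, the pattern Fama and French report.
n = 5_000
dp = [random.gauss(0.04, 0.015) for _ in range(n)]          # D/P ratio
ret = [0.02 + 1.5 * x + random.gauss(0, 0.15) for x in dp]  # next-period return

# Ordinary least squares slope of ret on dp: cov(dp, ret) / var(dp).
mx = sum(dp) / n
my = sum(ret) / n
beta = (sum((x - mx) * (y - my) for x, y in zip(dp, ret))
        / sum((x - mx) ** 2 for x in dp))
print(f"estimated predictive slope: {beta:.2f}")  # should recover roughly the assumed 1.5
```

A positive estimated slope is what 'predictability' means here; the unresolved question in the text is whether such a slope reflects rational variation in required returns or irrational mispricing.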
Shiller's argument, in a paper written in 1981 on the basis of data from 1871 to 1979, is that stock prices move too much to be justified by subsequent changes in dividends. If stock market prices fully reflected all available information, the variability in prices would be less than, or at least not significantly greater than, the variability in the underlying fundamentals. Shiller concluded that ‘the failure of the efficient markets model is thus so dramatic that it would seem impossible to attribute the failure to such things as data errors, price index problems or changes in the tax laws’.23 This work was pursued by Shiller and others throughout the 1980s, leading them to the view that, since the EMH could not explain most of the volatility in the market, the basic underpinnings of the entire theory were called into question. The evidence suggested either that changes in prices occurred for no fundamental reason at all, or that the explanation should be sought elsewhere, perhaps in another test for expected volatility that modelled dividends and stock prices in a more general way. But as such tests were developed, according to Shiller, they simply showed that stock prices had more volatility than any version of the efficient market hypothesis could accommodate, at least as far as the market as a whole is concerned.
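Shiller's variance-bound argument can be stated compactly, in its simplest constant-discount-rate version. If the market price is the rational forecast of the ex-post ‘perfect foresight’ price (the discounted value of the dividends actually paid subsequently), then a forecast can never be more volatile than the quantity it forecasts:

```latex
p_t = \mathbb{E}_t\left[p^*_t\right], \qquad
p^*_t = \sum_{k=1}^{\infty} \frac{D_{t+k}}{(1+r)^k}
\quad\Longrightarrow\quad
\operatorname{Var}(p_t) \le \operatorname{Var}(p^*_t)
```

The inequality follows because $p^*_t = p_t + u_t$, where the forecast error $u_t$ is uncorrelated with the forecast $p_t$ itself. Shiller's finding was that in the 1871–1979 data the inequality is dramatically violated: observed prices are far more volatile than the bound allows.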
The 1980s were not all bad news for the EMH, as some of the research conducted then suggested that, even though the whole stock market appears to be highly inefficient, based on the indices, ‘individual stock prices do show some correspondence to efficient markets theory.’24 Shiller quotes Paul Samuelson's dictum that the
stock market is micro-efficient but macro-inefficient. That is, individual stock variations are dominated by actual new information about subsequent dividends, but aggregate stock market variations are dominated by bubbles. Modern markets show considerable micro-efficiency (for the reason that the minority who spot aberrations from micro-efficiency can make money from those occurrences and, in doing so, tend to wipe out any persistent inefficiencies). In no contradiction to the previous sentence, I had hypothesised considerable macro inefficiency, in the sense of long waves in the time series of aggregate indexes of security prices below and above the various definitions of fundamental values.25
Shiller hastens to point out that
this does not mean that there are not ‘substantial bubbles’ in individual stock prices, but that the predictable variation across firms in dividends has often been so large as to largely swamp out the effect of bubbles.… When it comes to individual stocks, such predictable variations, and their effects on price, are often far larger than the bubble component of stock prices.26
Perhaps because the work on EMH seemed to have reached an impasse, academic attention turned towards behavioural models and the financial markets. The foundation of that was laid in The Econometrics of Financial Markets, published in 1997.27
The behavioural theorists shift the emphasis away from examining trends in market data and developing models to explain them, towards the behaviour of investors in the market, or rather the factors influencing their behaviour. The work is based on Shiller's own observations and on the results of surveys of high-income individuals regarding their opinions of the stock market, conducted from 1996 onwards by the International Centre for Finance at Yale at Shiller's instigation. The survey material balances the impressionistic and anecdotal evidence which he sometimes cites in support of his views. The key question was whether or not respondents agreed with the following statement: ‘The stock market is the best investment for long-term holders, who can just buy and hold through the ups and downs of the market’. During the boom years of the 1990s and in the peak year of 2000, 97 per cent of the respondents agreed at least somewhat with the statement, falling to 83 per cent in 2004, with those who strongly agreed falling from 67 per cent to 42 per cent over the same period. Investors' decisions are driven by emotional reactions to stock market developments, including resentment of those who have invested well, and loss of respect if one's own investments have failed.
His theory focuses on a bubble and the behaviours which contribute to its formation. A ‘bubble’ is defined as
A situation in which news of price increases spurs investor enthusiasm which spreads by psychological contagion from person to person, in the process amplifying stories that might justify the price increase and bringing in a larger and larger class of investors, who, despite doubts about the real value of an investment, are drawn to it partly by envy of others' successes and partly through a gambler's excitement.
This is part of what he describes as the ‘feedback loops’ in which ‘the changes in thought patterns infect the entire culture, and how they operate not only directly from past increases but also from auxiliary cultural changes that past price increases helped generate.’28 These changes are brought about by the media reporting of the possibilities of wealth through the stock market, thus propagating speculative price movements.
In his later lecture following the award of the Nobel Prize, he added: ‘Bubbles are not, in my mind, about the craziness of investors. They are rather about how investors are buffeted en masse from one superficially plausible theory about conventional valuation to another’,29 apparently unable to think for themselves. Individual reactions to the rise and fall of the markets are accounted for by various psychological factors, such as ‘reasoning that is characterised by an inability to think through elementary conclusions one would draw in the future if hypothetical events were to occur’, called ‘non-consequentialist reasoning’. That failure is not enough to explain reactions to the stock market, so Shiller turns to the social basis of thinking, tendencies to herd behaviour and the contagion of ideas as leading to bubbles. Such attitudes to the stock market are reinforced by some basic tenets which people have ‘learned’, such as that stocks always go up again after they go down, and that stocks always outperform bonds over time. Since neither of these statements is true, this is not a new enlightenment; society needs to address these issues, and so his last chapter is a ‘Call to Action’.
Following on from Shiller's approach, many behavioural theorists continue to reject the traditional approach to understanding financial markets through models in which the agents are rational. Rational behaviour in buying or selling stocks is described as reasoning with conditional probabilities, of which the simplest example is that the chance of winning the lottery if you have not purchased a ticket is zero. Applied to buying stocks, such decisions are clearly much more complex: for example, weighing up the likely gains or losses on the purchase of shares in a major oil company if the oil price suddenly collapses, and, if they are purchased, whether to sell or hold. Such considerations are linked with weighing up the options and selecting the one that has the highest expected value for the decision-maker.30
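The expected-value reasoning sketched above can be made concrete. The probabilities and payoffs below are purely hypothetical, chosen only to illustrate how a ‘rational’ choice between holding and selling would be computed:

```python
# Hypothetical, illustrative numbers only: a stylised 'rational' choice
# between holding and selling an oil stock, given a believed probability
# that the oil price collapses.
p_collapse = 0.30          # assumed subjective probability of a collapse

# Assumed payoffs (per share) under each scenario:
payoffs = {
    "hold": {"collapse": -40.0, "no_collapse": 15.0},
    "sell": {"collapse": 0.0, "no_collapse": 0.0},  # locked in today's price
}

def expected_value(action):
    """Probability-weighted payoff of an action across the two scenarios."""
    s = payoffs[action]
    return p_collapse * s["collapse"] + (1 - p_collapse) * s["no_collapse"]

# The rational agent picks the action with the highest expected value.
best = max(payoffs, key=expected_value)
print(best, {a: round(expected_value(a), 2) for a in payoffs})
# With these assumed numbers, holding has expected value -1.5, so selling wins.
```

The behavioural critique is precisely that real investors rarely reason this way: they neglect the conditioning information, misjudge the probabilities, or never consider the full set of alternatives at all.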
Behavioural theorists proceed much as Shiller does, but with a wider range of examples of irrational behaviour, demonstrated by examining long-term trends in stock market prices and the volatility of price movements. There are two building blocks of behavioural finance. One is that, in an economy where rational and irrational traders interact, ‘irrationality can have a substantial and long-term impact on prices’.31 The second building block is psychology. For guidance on this, behavioural theorists turn to the experimental evidence on the way in which people form beliefs and on their preferences, that is, how they make decisions given their beliefs. These insights from cognitive psychology depend on psychometric tests: multiple-choice questionnaires in which one is asked to make a judgement or assign a probability to a certain event without a wider context. The conclusion is that people are over-confident in their judgements, poorly calibrated when estimating probabilities, and display unrealistically rosy views of their abilities and prospects.
There is also much evidence that, once people have formed an opinion, they cling to it too tightly and for far too long. This is because they are reluctant to search for evidence that contradicts their beliefs, and even if they find such evidence, they are likely to treat it with excessive scepticism, or may even misinterpret it as evidence for their beliefs. When forming estimates, people often start from some initial, possibly arbitrary value and then adjust away from it, but the adjustment is often insufficient. This is the kind of empirical evidence concerning human behaviour which the behavioural theorists apply to understanding and assessing the behaviour of individual investors in the stock market, who are assumed to make similarly irrational decisions about their investments. The irrational behaviour of individual investors is held to have long-term and apparently adverse effects on the markets. Certainly people do behave irrationally in these and other ways. The point, however, is that no research has been done to discover how investment decisions are actually made, which can only be discovered by examining the whole process of decision-making, in particular by fund managers. Such research has not yet been undertaken by behavioural theorists.
However, more recently the attention of behavioural finance theorists has focused on the selection of stocks by individual investors and what actually influences them. This is a step forward from trying to apply the general theories of cognitive psychology to investors. Alok Kumar and Charles M. Lee produced a study of retail investor sentiment based on the personal trading records of individual investors, using a database of over 1.85 million buy and sell transactions made by over 60,000 retail clients of a large discount brokerage firm between 1991 and 1996. Individual investors are described as ‘noise traders’ and institutional investors as ‘rational arbitrageurs’. The evidence, they say, is unsurprisingly that individual investors spend less time on investment analysis and rely (inevitably) on a different set of information sources from their professional counterparts. (It should be noted that far more information is freely available to individuals now than ever was available in the 1990s.) Their results show that retail trades are systematically correlated: when one group of stocks is being bought or sold by retail investors, other groups tend to be bought or sold with them. Retail investor sentiment has the greatest effect on small stocks, value stocks, stocks with low institutional ownership and stocks with lower prices; the prices of these shares are the most sensitive to changes in retail investor sentiment. The authors accept that they ‘need to better understand the processes by which individual investors formulate their trading decisions, including the identification of the information sources they use in decision-making’.32
What seems to underlie these theories is the notion that irrational choices of buying and selling stocks on the part of individual investors distort the price of stocks and shares, which would be different if prices were governed by the decisions made by the rational (institutional) investors.
Interviewing investors is the only way forward. It would also mean bringing the databases up to date: the start-date, 1991, is well over twenty years ago, and the study ends before the dot-com boom, itself an interesting subject for study. Since then, markets have become global, information is more widely disseminated, and the proportion of individuals trading on the New York Stock Exchange may well have declined.
A second paper, by Barber and Odean, covering the same data and the same time period, sets out a model of decision-making in which agents such as individual investors consider only those alternatives which attract their attention.33 The set of alternatives is limited to those stocks which have caught their attention, which is not surprising for individual investors, since their time and access to information are limited. Preferences come into play only after attention has limited the choice set. Barber and Odean tested for attention-driven buying by sorting stocks on events likely to have coincided with catching individual investors' attention, and checked these against abnormal trading volumes and extreme one-day returns. Individual investors were net buyers on high-volume days, when particular firms, both large and small, were in the news, whereas institutional investors were not.
Much of the work of behavioural finance is directed at a rejection of the efficient market hypothesis, which is ‘predicated on the notion that the current price of a stock closely reflects the present value of its future cash flows’.34
If that were indeed the case, it would be impossible to discern the true price, since that would require knowing the decision-making processes of market participants, and the true price (the fundamental or intrinsic price, as it is sometimes called) would depend on an assessment of those. Both behavioural finance and the Efficient Market Hypothesis involve the notion that there is an intrinsic, fundamental or true price: for the former, lurking behind the inefficiencies of the market; for the latter, delivered by the market itself, provided the market is interpreted and defined in the right way. It is impossible to find the true price behind the price people pay. That is to chase a chimera.
The price is simply the price that people will pay for a share at any one time. This is the essence of the mark-to-market definition of fair value as ‘the exchange price in an orderly transaction between market participants to sell the asset or transfer the liability in the market in which the reporting entity would transact for the asset or liability, that is, the principal or the most advantageous market for the asset or the liability’. (Summary of Statement No 157.)
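The fair-value definition quoted above turns on where the exit price is observed: the principal market for the asset or, failing that, the most advantageous one. As a minimal sketch of that selection rule (the market names and quotes are hypothetical, and I assume the sale of an asset, where the most advantageous market is the one quoting the highest price):

```python
def fair_value(quotes, principal_market=None):
    """Select the exit price per the quoted definition: use the principal
    market if one is identified, otherwise the most advantageous market
    (highest exit price, assuming we are selling an asset)."""
    if principal_market is not None and principal_market in quotes:
        return quotes[principal_market]
    return max(quotes.values())

# Hypothetical exit prices for the same security in three venues.
quotes = {"NYSE": 41.80, "LSE": 41.50, "Xetra": 41.60}
fv = fair_value(quotes)  # no principal market given: most advantageous, 41.80
```

The point of the sketch is that fair value here is simply an observed exchange price, not an estimate of any intrinsic worth.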
What is a market?
The market itself is neither efficient nor inefficient. To ascribe such epithets to the market is to fail to see that such references are to an abstraction: the market is composed of the various participants operating within it, and it operates only within a particular regulatory, legal, cultural and macroeconomic environment. Markets are dominated by the political structure and by local political decisions and policies, and global markets are increasingly influenced by geopolitical events. Views of the market as efficient or inefficient ignore this context. The market itself is neither rational nor irrational, although the actions of some market participants may be irrational at times. Even that assessment is more complex than it appears to be in the literature.
First of all, the market participants are not all individuals deciding to buy stocks and bonds for their own investments. Prices are not set by an army of private investors or the ‘representative household’ investing directly in equities, bonds and even across the spectrum of the derivatives markets. Most stock exchanges are dominated by institutional investors, and that has been the case ever since governments began encouraging individuals to invest in equities for pensions, savings and long-term security, especially from the 1980s onwards. Most individuals delegate those investment decisions to mutual funds, or such decisions are delegated for them when they save through pension schemes or purchase financial products such as life insurance or other packaged savings products. Saving through these indirect means is often incentivized by the provision of tax relief.
The ‘real world complication is that investors delegate virtually all their involvement in financial matters to professional intermediaries … who dominate the pricing process’.35 Vayanos and Woolley add that ‘delegation creates an agency problem. Agents have more and better information than the investors who appoint them, and the interests of the two are rarely aligned’. They argue, correctly, that principals (or, perhaps more clearly, consumers) cannot be certain of the competence or diligence of their appointed agents, which explains the need for regulation designed to protect the consumer, not all of which is simply reactive; that is, some of it is in place before disasters occur.
Introducing the agents certainly does bring greater realism to asset-pricing models and can be shown to transform the analysis and output. Importantly, this is achieved whilst maintaining the assumption of fully rational behaviour on the part of all concerned. Such models have more working parts and therefore a higher degree of complexity, but the effort is richly rewarded by the scope and relevance of the predictions.36
It may explain some behaviours, as illustrated below, but it fails to take into account that the actions of fund managers depend on the mandates for particular funds. A fund may be designed to provide capital growth over a period of time, for example, while avoiding the risks and costs involved in constantly churning securities or engaging in high-risk investments.
In other words, the objectives of the fund in which individuals invest have to be taken into account, but Woolley and Vayanos identify only two strategies:
fundamental investing, which uses estimates of cash flows to determine the worth of assets, whereas momentum investing disregards valuation and simply rides the trends usually over the short-to-medium term … Bizarrely and damagingly, the rise in momentum investing means that the bulk of equity investment is now conducted without regard to the value of the assets being traded.
Our new asset pricing model shows the a priori risk-adjusted returns from competing strategies and their variants, and demonstrates their suitability for different categories of investor. In particular, it shows momentum to be the strategy of choice only for investors with short horizons. Most large funds have long-term liabilities, and for them it pays to invest based predominantly on valuation.37
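The contrast the authors draw between the two strategies can be sketched in toy form: a momentum rule trades on recent price movements alone, while a fundamental rule compares the price with an estimate of value derived from cash flows. This is my own illustration of the distinction, not the authors' model, and the price series and value estimate are hypothetical.

```python
def momentum_signal(prices, lookback=3):
    """Buy (+1) if the trailing return over `lookback` periods is positive,
    sell (-1) if negative; valuation plays no role at all."""
    if len(prices) <= lookback:
        return 0  # not enough history to measure a trend
    return 1 if prices[-1] > prices[-1 - lookback] else -1

def fundamental_signal(price, estimated_value):
    """Buy (+1) when price is below the estimated fundamental value,
    sell (-1) when it is above; the trend plays no role at all."""
    return 1 if price < estimated_value else -1

prices = [100, 104, 109, 115]  # a rising, hypothetical price series
trend_view = momentum_signal(prices)                     # rides the trend
value_view = fundamental_signal(prices[-1],
                                estimated_value=95)      # sees it as overpriced
```

The two rules can disagree on the same data, which is the authors' point: the bulk of momentum-driven trading proceeds without regard to the value of the assets being traded.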
The significance of these articles lies in the fact that they point towards the complexity of the market by noting the major participants and their differing objectives, but this is still a long way from recognizing the full complexity of the market and its context. The notion of value involved is also interestingly limited: momentum investors disregard it, while long-term investors depend on valuations.
Looking at the market and the decisions taken in the wider context by various market participants leads to a fuller understanding of the information required. That will, or should, include a thorough knowledge of the company, the market in which the company operates, and all the factors which may affect its performance. These may include particular events (decisions made by the central bank, sanctions against Russia, unrest in the Middle East, or the assassination of a leading politician, to take a few recent examples at random) and the economic situation of the country, the region, the Eurozone, emerging markets or the global economy. Weighing all of these factors and determining their relevance and relative weight means that investment decisions are not simply reactive, not just based on all the relevant information, but are a matter of judgement. Taking long sets of data, however important they may be, out of context and basing models on such simplified approaches means that the models will always be inadequate. If regulators, policymakers and those assessing investment risks thought that such models were sufficient, then that could be part, but only part, of the explanation of what went wrong.
Trust and the market
Part of the context in which the market operates is the degree of confidence which market participants must have in each other. Once that confidence collapses, the market ceases to function. Hank Paulson explained at the time that the fall of Lehman Brothers led to a ‘system crisis. Credit markets froze and substantially reduced interbank lending. Confidence was severely compromised throughout our financial system. Our system was on the verge of collapse.’38 Bernanke made his now famous statement, ‘We may not have an economy on Monday.’39 It was indeed trust that was destroyed by the fall of Lehman, because no one could be sure about the quality of the assets held by other banks, and therefore no one was willing to lend.
Some see in this approach echoes of Walter Bagehot's Lombard Street, when he commented that we should not be
surprised at the sudden panics [in the banking system]. During the period of reaction and adversity, just even at the last instant of prosperity, the whole structure is delicate. The peculiar essence of our banking system is an unprecedented trust between man and man; and when that trust is much weakened by hidden cause, a small accident may greatly hurt it, and a great accident for a moment may almost destroy it.40
Swedberg, in his analysis of the collapse of Lehman Brothers and its effects on confidence, draws on Bagehot's emphasis on trust, but moves the argument in a different direction, in which actions depend on ‘proxy signs’: signs used by investors when direct information is not available to them and they want to invest in a firm or lend money. They play the role of stand-ins for information about the actual situation.
Ideally, a proxy sign can be assumed to be either aligned with the state of affairs or not … Confidence is maintained when a positive proxy sign signifies a positive state of affairs and a negative sign correctly indicates a negative state of affairs. If they are not aligned, and the proxy sign misrepresents the situation, then confidence suffers. When the proxy sign is positive and the state of affairs in the banking community is negative, we are then in Bagehot's dangerous situation, in which it is not known who has losses and who has not, and in which an accident may set off a general panic.41
The proxy signs may refer to the state of economic affairs, but the proxy does not refer to ‘some object and “true” reality but is a “social construction”.’42 The obvious problem here is: how do market participants know whether the proxy signs are aligned with the state of affairs, especially when the sign is negative but the state of affairs is in fact positive?
Swedberg refers to Professor Gorton's analysis of The Panic of 2007, in which his references to the relevance of the ABX.HE indices could be seen as just the kind of ‘proxy sign’ giving rise to the loss of trust, perhaps rather than the loss of confidence which Swedberg describes. Professor Gorton argued that the problems with subprime mortgages resulted in a systemic crisis because of the ‘loss of information about the location and size of risks of loss due to default on a number of interlinked securities, special purpose vehicles, and derivatives, all related to subprime mortgages.’43 The residential mortgage-backed securities (RMBSs), consisting of subprime mortgages, were placed in CDOs and commercial mortgage-backed securities (CMBSs), and ultimately into off-balance-sheet vehicles, with additional risk being created through credit default swaps. The latter were incorporated into hybrid or synthetic CDOs. This dizzying interlinking of securities, structures and derivatives ‘resulted in a loss of information and ultimately a loss of confidence’. The introduction, Gorton argues, of the ABX.HE (ABX) indices, which trade over the counter, enabled information about subprime values and risks to be aggregated and revealed for the first time.44
While the location of the risks was unknown, market participants could, for the first time, express views about the value of the subprime bonds, by buying or selling protection. In 2007 the ABX prices plummeted … The ABX information together with a lack of information about the location of the risks led to a loss of confidence on the part of banks in the ability of their counterparties to honour contractual obligations.
The ABX indices allowed investors to realize that the market was now lowering the price on securities based on subprime mortgages, but they did not allow investors to figure out which securities were of low quality and which were not.45
In the context of the collapse of Lehman Brothers, it is better to refer to market participants, since the lack of trust stemmed from the demands of Lehman's counterparties for increased collateral, which led to what amounted to a run on the bank. In the days before the final weekend, Lehman's clearing and settlement banks demanded increased collateral: JP Morgan, $5bn (which Lehman managed to find); Citigroup, $2bn for the trades it was settling; and $500m with Bank of America. Lehman also had $500m with HSBC and a further account with JP Morgan. Lehman counted all of these in its liquidity pool, allegedly $42bn, despite the fact that the withdrawal of any of this capital would have affected Citigroup's and JP Morgan's willingness to clear and settle its trades.
Close market participants such as JP Morgan and Citigroup did not have to rely on factors such as indices to assess the strength of Lehman. It was not just a question of acting as clearing banks for Lehman, but of being engaged in joint ventures such as the purchase of Archstone. Other investors had already delivered their verdict: Lehman's shares fell by 95 per cent between January and September 2008, given both the general environment (the collapse of Bear Stearns, the placing of Fannie Mae and Freddie Mac into conservatorship, falling house prices) and Lehman's announcement of losses in its June financial report. What was undermined was trust.
Reference has already been made to David Einhorn's work, but it is worth drawing attention again to some of the comments he made and hence to Swedberg's use of that assessment in what he has to say about confidence:
Lehman does not provide enough transparency for us even to hazard a guess as to how they have accounted for these items. Lehman responds to requests for improved transparency grudgingly. I suspect that greater transparency on these valuations would not inspire market confidence.46
The effect of Lehman's bankruptcy was also to be found in the
indirect effects or effects without direct interaction. This type of effect includes actions that were caused by the fear that was unleashed by Lehman's collapse, by rumours that began to circulate, and the like. Following Bagehot, we would assume that the indirect effects are more dangerous than the direct effects.47
He goes on to distinguish between the collapse of confidence, the ‘hidden losses’ that may emerge, and the ‘withdrawal of confidence’ as a result of calm and rational deliberation, such as occurred with the freezing up of the money market, the repo market and the interbank market. However, the decision not to engage in these markets was not just a matter of fear and belief in rumours, but a direct result of the revelations about the nature of the CDOs, which ultimately depended on the value of the underlying assets, in this case the mortgages. It continued until corrective actions were taken by the Federal government.
Swedberg's paper is important in that it points out that trust underlies the smooth, or one might say efficient, functioning of the market. For that, transparency is a necessary but not a sufficient condition. The requirement for transparency about financial accounts and about the nature of the various complex derivatives currently traded on the markets is essential, but it is equally essential to ensure that this takes place through clear regulations, and that company boards and regulators have the ability and competence to ensure that all the information provided to the market is timely, comprehensible and true, and to take the necessary enforcement actions if it is not. That will all seem obvious, and those involved in the market in whatever capacity will undoubtedly agree, but the point is to ensure that trust is well-founded. If that does not happen, then the market will simply not function.
The value of a security or a derivative is encapsulated at any one time in the price that investors or buyers are prepared to pay for it. The price is determined by a wide range of factors, and a security might be described as mispriced or over-priced if, as in the years described above, the price depends on information which is subsequently shown to be false or to have been misrepresented in some way. Then the price may change rapidly, and usually falls.
At one stage, Fama, in the definition quoted above, referred to the ‘intrinsic value’ of a security, stating that at any point in time the actual price of a security will be a good estimate of its intrinsic value: an equilibrium price depending on the security's earnings potential, including the quality of management, the outlook for the industry and the economy, and so on; that is, factors relating to the real economy. This version allows for the possibility that the value is the price that people are prepared to pay on the basis of the knowledge they have and the judgements they are able to make about the security and about general trends in the market at that particular point in time. The market operates efficiently, in the ordinary sense of the term, if dealers and fund managers can be reasonably confident of the reliability of the information provided, that the judicial system is free from corruption, that judges are experienced, knowledgeable and competent in cases involving stock markets and the financial services industry, and that accountants and other professionals are honest and competent. As capital markets have developed and banking systems have been established in emerging markets, it has become increasingly clear to policymakers that markets and banks cannot flourish without such a framework.
Prices fluctuate over time. Where the securities traded in the stock markets are shares and bonds issued by companies producing a range of goods and services, the value of the asset lies in its utility in the view of those managing the funds as agents for others, or of the ever-dwindling number of individuals investing in the markets on their own account. As long as its utility remains, and if it is not overtaken by other events such as technological developments, people are willing to pay something for it, or at least some people will be, even in the worst of times. Perceptions of utility will shift over time, often quite quickly. Hence the price people will pay varies over time and in differing circumstances. Since the term ‘utility’ is often confused with dismissing an object as purely ‘utilitarian’, that is, not decorative or extravagant, it should be noted that value as equivalent to the price of a security may apply to companies selling designer handbags and other luxury goods. To return to Lehman, however: apart from the derivative contracts in which a price was agreed between Lehman Brothers and the counterparty, the recoveries process rescued at least part of the original value of the investments in commercial and residential real estate. This is perhaps the clearest illustration of what is meant by utility as the key element in value, but value is not intrinsic to the physical object or the services offered. Objects and services retain value as long as they are useful and desirable in the context of markets in which complex factors all play their part, making it hard to predict future prices, not because prices are random but because of interrelated internal and external factors which make the construction of models difficult. That, however, is all that there is to value. It is represented by the price. The hunt for any intrinsic or enduring value is another instance of chasing a chimera.
This chapter has covered the leading theories of the markets. They are abstract theories; indeed, more so than they should be, since they lack a sound empirical base. The dominant theory of the Efficient Market Hypothesis distracted regulators, market participants and central bankers from paying attention to market prices as signals or from recognizing the existence of bubbles in the housing market, as Alan Greenspan admitted. The decline in house prices should have been a signal, and was indeed a real indicator of deeper troubles in the market. Instead of regarding the market as being efficient, attention should have been paid to the real world – not chasing chimeras.