Contrarian Investment Strategies


  An example is the aftermath of the 1987 crash. In four trading days the Dow fell 769 points, culminating with the 508-point decline on Black Monday, October 19, 1987. This wiped out almost $1 trillion worth of value. “Is This 1929?” asked the media in bold headlines. Many investors, taking this heuristic shortcut, fled to cash, caught up in a false parallel.

  At the time the situations seemed eerily similar. We had not had a stock market crash for fifty-eight years. Generations had grown up believing that because a depression followed the 1929 crash, one would always follow a crash. Many of Wall Street’s experts, much of the media, and the investing public agreed. Overlooked was the fact that the two crashes had only the remotest similarity. To start with, 1929 was a special case. The nation had weathered numerous panics and crashes in the nineteenth and early twentieth centuries without a depression. Crashes or no, the U.S. economy has always bounced back in short order. So crash and depression are not synonymous.

  More important, it was apparent even in the spring of 1988 that the economic and investment climate was entirely different. My Forbes column of May 2, 1988, noted some of the differences clearly visible at the time. The column stated that although market savants and the media were presenting charts showing the breathtaking similarity of postcrash stock movements after 1987 to those after 1929, there was far less to it than met the eye. In 1929, the market rallied smartly after the debacle before beginning a free fall in the spring of 1930, and many experts believed that history would repeat itself fifty-eight years later. However, a chart, unlike a picture, is not always worth a thousand words; sometimes it is just downright misleading.

  Bottom line: the economic and investment fundamentals of 1988 were worlds apart from those of 1930. After the 1987 crash, the economy was rolling along at a rate above most estimates precrash and sharply above the recession levels forecasted postcrash, accompanied by earnings far higher than projected in the weeks following the October 19 debacle. The P/E of the S&P 500 was a little over 13 times earnings, down sharply from 20 times just prior to the crash and below the long-term average of 15 to 16. The situation was diametrically different from that after the crash of 1929. Back then, corporate earnings, along with the financial system, collapsed, and unemployment soared.

  Thus 1987 was certainly not 1929, and investors who succumbed to the representativeness bias missed an enormous buying opportunity; by July 1997, the market had quadrupled from its low.

  A more recent example of the bias at work is the pricing of oil and commodities in the 2007–2008 crash and the early part of the Great Recession through 2010. From 1992 to early 2008, oil prices rose fairly steadily, from $20 a barrel to $100. The fundamentals for oil were very sound. World demand had outstripped new supply every year from 1982 through 2007. Even after the economy began to fall apart in late 2008, the demand for oil dropped only 1.1 percent in 2009. Moreover, the cost of finding oil was going up sharply, and the discovery of vast new fields was a dream of the past. The last million-barrel-a-day oil field was found in the late 1970s. Too, with the dramatic industrialization of China, along with that of other underdeveloped economies in the Far East and elsewhere, demand for oil expanded rapidly.

  The price of oil shot upward, breaking $100 a barrel in early 2008, and continued its sharp increase, reaching $145 a barrel by mid-2008. Investors had been desperately selling financial assets other than government bonds and piling into oil and other commodities from 2007 through the spring of 2008. Then panic took over. Untold numbers of comparisons were made between the 2008–2009 economy and that of the Great Depression. Nothing was safe, not even oil, regardless of its fundamentals. Within months, oil plummeted from its high of $145 a barrel to $35, a drop of 76 percent. The price had fallen well below the cost of new discoveries. As markets calmed by mid-2009, oil began to rise again; it reached $95 a barrel by June 2011, approximately 170 percent above its low.
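  For readers who want to see how the percentages above fall out of the round numbers cited, here is a minimal back-of-the-envelope check. The prices are the text’s approximate figures, not exact market quotes:

```python
# Back-of-the-envelope check of the oil-price moves cited above.
# The prices are the round figures quoted in the text, not exact market data.
peak, trough, rebound = 145.0, 35.0, 95.0     # dollars per barrel

drop = (peak - trough) / peak * 100           # decline from the mid-2008 high
recovery = (rebound - trough) / trough * 100  # rise from the low to June 2011

print(f"Decline from peak: {drop:.0f}%")      # ~76%
print(f"Rebound from low: {recovery:.0f}%")   # ~171%
```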

  You can see how the effects of cognitive biases combine with those of Affect in this case. The fear of losses led people to put undue weight on the supposed similarity between the 2008 collapse in oil prices and the free fall of stock prices after 1929, and to fail to recalculate rationally that the demand for oil had actually dropped very modestly and that usage would climb significantly with even a moderate economic recovery. When such emotion is mixed with the effects of our biases, it’s the equivalent of nuclear fusion in the marketplace.

  Was it possible to detect that some stocks and commodities were sharply undervalued, just as we saw they were greatly overvalued during the 1996–2000 bubble? My answer is yes. I wrote a column in early June 2009 stating that oil was enormously undervalued for the reasons noted above.19 Oil was then trading around $68, and it appreciated about 40 percent from then to June 2011.

  The representativeness heuristic can apply just as forcefully to a company or an industry as to the market as a whole. In 2007–2008, many stocks in a number of excellent industries were swept away by the powerful tides that panicked many people. Not only was the sky falling for banks and financial stocks, but many investors calculated that it was also likely to obliterate very strong industrials with worldwide demand for their products, such as Eaton Corporation and Emerson Electric, which dropped 69 percent and 56 percent, respectively, from their late-2007 highs. As the panic worsened, the more analytic investors sold in droves, too. The prevailing fear was that those companies and myriads of others would show minimal earnings for a decade and that many wouldn’t survive at all.

  By early March 2009, those dire predictions were seen as a lot of hogwash. Eaton and Emerson would rise 265 percent and 142 percent, respectively, by the end of June 2011, and dozens of stocks in more cyclical industries, from heavy equipment to mining to oil drilling, had similar bouncebacks. Freeport-McMoRan Copper & Gold shot up 315 percent and United Technologies Corporation 149 percent through the end of June 2011. That was the reality.

  Awareness of the representativeness bias leads to another helpful Psychological Guideline:

  PSYCHOLOGICAL GUIDELINE 3: Look beyond obvious similarities between a current investment situation and one that appears similar in the past. Consider other important factors that may result in a markedly different outcome.

  The Law of Small Numbers

  A particular flaw in thinking that falls under the rubric of representativeness is what Amos Tversky and Daniel Kahneman called the “law of small numbers.”20 Examining journals in psychology and education, they found that researchers systematically overstated the importance of findings taken from small samples. The statistically valid “law of large numbers” states that large samples will usually be highly representative of the population from which they are drawn; for example, public opinion polls are fairly accurate because they draw on large and representative groups. The smaller the sample used, however (or the shorter the record), the more likely the findings are to be chance rather than meaningful.
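  To see why small samples mislead so reliably, here is a minimal simulation sketch, not from the original text: it asks how often a manager with no skill at all, whose every call is a coin flip, nonetheless compiles a short record that looks impressively better than chance. The record lengths and the 70 percent “hot” threshold are assumptions for illustration.

```python
# Sketch (not from the text): why small samples exaggerate "findings".
# A no-skill manager's calls are fair coin flips; we ask how often a short
# record still looks impressively better than 50/50 purely by chance.
import random

random.seed(1)

def share_of_lucky_records(record_length, trials=20_000, threshold=0.7):
    """Fraction of no-skill managers whose hit rate reaches `threshold`
    over a record of `record_length` calls."""
    lucky = 0
    for _ in range(trials):
        hits = sum(random.random() < 0.5 for _ in range(record_length))
        if hits / record_length >= threshold:
            lucky += 1
    return lucky / trials

for n in (5, 10, 50, 200):
    print(f"record of {n:3d} calls: {share_of_lucky_records(n):.1%} look 'hot' by luck")
```

  With only five or ten calls, roughly one no-skill manager in six looks “hot”; with two hundred calls, essentially none do. That is the law of small numbers at work.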

  Yet the Tversky and Kahneman study showed that psychological or educational experimenters typically premise their research theories on samples so small that the results have a very high probability of being chance.21 The psychologists and educators are far too confident in the significance of results based on a few observations or a short period of time, even though they are trained in statistical techniques and are aware of the dangers. This is a very important cognitive error and is often repeated in markets in a wide range of circumstances.

  For example, investors regularly flock to mutual funds that have performed well for a year or a few years, even though financial researchers have shown that the “hot” funds in one time period are often the poorest performers in another. The final verdict on the most sizzling funds in the 1996–2000 dot-com bubble shows why this decision making can be disastrous. During the bubble, tens of billions of dollars flowed into the Janus Capital Group. Its short-term record was spectacular. The flagship Janus Fund surged 10.3 percent ahead of the rapidly rising S&P 500 in 1998 and 26.1 percent in 1999, just before the bubble burst. Still, over the ten years ending in 2003, despite its hot hand when dot-com stocks were flying, the Janus Fund averaged only an 8.7 percent return, which meant it underperformed the market by 22 percent over the entire time frame, including both the dot-com bubble and the crash afterward.

  Even this grim statistic doesn’t do justice to the damage wrought by Janus. The Janus Fund’s assets were only $9 billion in 1993 but reached $49 billion by March 31, 2000, only weeks after the dot-com market peaked. A large number of its customers arrived too late to enjoy the fabulous returns in the 1990s but were there to get bludgeoned in the ensuing bear market. Needless to say, that crowd received a lot less than the 8.7 percent Janus returned for the decade.
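  The gap between what a fund reports and what its late-arriving shareholders actually earn is the gap between a time-weighted return and a money-weighted return. The sketch below uses made-up numbers, not Janus’s actual cash flows: the hypothetical yearly returns are chosen so the published figure lands near the 8.7 percent cited above, while most of the money arrives just before the bad years.

```python
# Sketch with made-up numbers (not Janus's actual cash flows): why the average
# dollar in a hot fund can earn far less than the fund's published return.
yearly_returns = [0.20, 0.30, 0.40, 0.80, -0.40, -0.30]  # hypothetical fund years
contributions = [1.0, 1.0, 2.0, 20.0, 30.0, 10.0]        # $bn added at each year's start
n_years = len(yearly_returns)

# Time-weighted (published) annualized return: geometric mean of the yearly returns.
growth = 1.0
for r in yearly_returns:
    growth *= 1 + r
time_weighted = growth ** (1 / n_years) - 1

# Terminal wealth: each contribution rides the fund from the year it arrives onward.
end_value = 0.0
for i, c in enumerate(contributions):
    value = c
    for r in yearly_returns[i:]:
        value *= 1 + r
    end_value += value

# Money-weighted return (IRR): the single rate that, applied to each contribution
# for the years it was invested, reproduces the terminal value. Solved by bisection.
def money_weighted(contribs, final_value, years):
    def future_value(rate):
        return sum(c * (1 + rate) ** (years - i) for i, c in enumerate(contribs))
    lo, hi = -0.99, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if future_value(mid) > final_value:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

irr = money_weighted(contributions, end_value, n_years)
print(f"Fund's published (time-weighted) return: {time_weighted:.1%} a year")
print(f"What the average dollar earned (IRR):    {irr:.1%} a year")
```

  Under these assumed figures the fund can truthfully report roughly 8.7 percent a year while the average dollar invested in it loses money at a double-digit annual rate, because most of the dollars showed up only in time for the crash.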

  Chasing the hot hand ends in disaster more often than not. Some of Janus’s competitors in the hot-stock derby performed just as badly. The Fidelity Select Telecommunications Equipment fund beat the S&P by 12.4 percent in 1998 and 45.5 percent in 1999, while the AllianceBernstein Technology Fund bested the index by 34.6 percent and 50.7 percent, respectively, in those two years. In spite of the huge years in which the dot-com stocks trounced the performance of everything else, the Fidelity fund lagged behind the S&P 500 by 6 percent annually, and the AllianceBernstein fund was about flat annually, for the ten-year period ending December 31, 2003. During this bubble, investors lost many hundreds of billions of dollars in red-hot tech and Internet mutual funds as well as in the hot stocks themselves. The so-called hot funds, it turned out, could not hold a candle to the long-term records of many conservative blue-chip funds.22

  The Remarkable Success of Wannabe Gunslingers

  Another way in which investors regularly make decisions based on much too small a sample of results is putting way too much faith in “hot” analysts. Investors and the media are continually seduced by “hot” performance, even if it lasts for the briefest of periods. Money managers or analysts who have had one or two sensational stock calls, or technicians who call one major market move correctly, are believed to have established a credible record and can readily find a market following for all eternity.

  In fact, it doesn’t matter if the adviser was repeatedly wrong beforehand; the name of the game is to get a dramatic prediction out there. A well-timed call can bring huge rewards to a popular newsletter writer. Eugene Lerner, a former finance professor and a market letter writer who headed Disciplined Investment Advisors, speaking of what making a bearish call in a declining market can do, said, “If the market goes down for the next three years you’ll be as rich as Crassus. . . . The next time around, everyone will listen to you.”23 With hundreds and hundreds of advisory letters out there, someone has to be right. Again, it’s just the odds. We do have lottery winners, despite the fact that many millions of tickets may have to be sold to get one.
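  A rough calculation shows just how forgiving the odds are. The sketch below is an illustration with assumed numbers, not a claim about any real advisory population: it supposes each adviser’s directional call is a pure coin flip and asks how likely it is that at least one of them compiles a perfect streak anyway.

```python
# Rough illustration of "someone has to be right": if every adviser's market
# call were a coin flip, how likely is it that at least one of them racks up
# a perfect streak? The adviser count and streak lengths are assumptions.

def chance_someone_nails_it(n_advisers, streak_length, p_correct=0.5):
    p_streak = p_correct ** streak_length    # one adviser gets every call right
    return 1 - (1 - p_streak) ** n_advisers  # at least one of n advisers does

for streak in (3, 5, 8):
    p = chance_someone_nails_it(n_advisers=500, streak_length=streak)
    print(f"500 coin-flippers, streak of {streak}: {p:.0%} chance someone looks brilliant")
```

  With five hundred coin-flipping forecasters, a streak of five correct calls is a near certainty for someone, and even a streak of eight turns up most of the time. The streak tells you almost nothing about skill.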

  Elaine Garzarelli gained near-immortality when she purportedly “called” the 1987 crash. Although she was the market strategist for Shearson Lehman, her forecast was never published in a research report nor, indeed, communicated to the firm’s clients; she nonetheless received widespread recognition and publicity for the call, which was made in a short TV interview on CNBC.

  Since this “brilliant call,” her record, according to a fellow strategist, “has been somewhat mixed, like most of us.”24 Still, her remark on CNBC that the Dow could drop sharply from its then-5,300 level rocked an already nervous market on July 23, 1996. What had been a 57-point gain for the Dow turned into a 44-point loss, a good deal of which was attributed to her comments. Only a few days earlier, Ms. Garzarelli had predicted that the Dow would rise to 6,400 from its then value of 5,400. Even so, people widely followed her because of “the great call in 1987.”

  Jim Cramer, the popular CNBC commentator—and an ex–hedge fund manager—whom we met earlier, was one of the most vociferous cheerleaders of the high-tech dot-com bubble. In a December 27, 1999, missive he wrote, referring to money managers who refused to buy enormously overpriced dot-com stocks, “The losers better change or they will lose again next year”25—this less than three weeks before the dot-com market collapsed.

  Shortly thereafter, Don Phillips, then the president and CEO of Morningstar, blasted Cramer for outspokenly playing an active role in reinforcing this enormous bubble, noting in passing that his public recommendations were down 90 percent. Barron’s followed Cramer’s subsequent record and stated that he had underperformed the market for several years after the high-tech bubble. Although Cramer’s record is hardly sizzling, the law of averages dictates that some of his recommendations have to go up. Cramer is astute at blowing these recommendations out of proportion and is probably more popular today than ever.

  Make a few good calls, and you’re a hero for a while and the chips pile up in front of you. Being a wannabe gunslinger may not have kept you alive long in the Old West, but it works just fine with investors and the media today. A new Psychological Guideline is helpful here.

  PSYCHOLOGICAL GUIDELINE 4: Don’t be influenced by the short-term record or “great” market calls of a money manager, analyst, market timer, or economist, no matter how impressive they are; don’t accept cursory economic or investment news without significant substantiation.

  Beware of Instant Government Statistics

  Sometimes the evidence we accept because of the law of small numbers runs to the absurd. Another good example of the major overreaction this heuristic bias causes is the almost blind faith investors place in Federal Reserve or government economic releases on employment, industrial production, the health of the banking system, the consumer price index, inventories, and dozens of similar statistics.

  These reports frequently trigger major stock and bond market reactions, particularly if the news is bad. For example, if unemployment rises one-tenth of 1 percent in a month when it was expected to be unchanged or if industrial production falls slightly more than the experts expected, stock prices can fall, at times sharply. Should this happen? No. “Flash” statistics, more often than not, are nearly worthless. They are the archetypal case of mistaken decision making due to the law of small numbers. Initial economic and Fed figures are revised, often significantly, for weeks or months after their release, as new and more current information flows in. Thus a reported increase in employment, consumer purchasing, or factory orders can turn into a decrease, or even a large drop, as the revised figures appear. These revisions occur with such regularity that you would think investors, particularly the pros, would treat the initial numbers with the skepticism they deserve. Yet too many investors treat as Street gospel all authoritative-sounding releases that they think pinpoint the development of important trends.

  Just as irrational is the overreaction to every utterance by Fed Chairman Bernanke or his predecessor, Alan Greenspan, ignoring the fact that they entirely missed the subprime mortgage crisis from 2005 through mid-2007. Still, the market hangs on every new comment of Chairman Bernanke and the presidents of the twelve regional Federal Reserve Banks, from New York and Philadelphia to Dallas and San Francisco, as it does on those of other senior Fed or government officials, no matter how offhand or contradictory to one another the comments are, or how mediocre the speakers’ forecasting records have been overall.

  Like ancient priests examining chicken entrails to foretell events, many pros scrutinize every remark and act upon it immediately, even though they are often not sure what it is they are acting on. Remember the advice of a world-champion chess player who was asked how to avoid making a bad move. His answer: “Sit on your hands.”

  But too many investors don’t sit on their hands; they dance on tiptoe, ready to flit after the least particle of information as if it were a strongly documented trend. The law of averages indicates that dozens of experts will have excellent records—usually playing popular trends—often for months and sometimes for several years, only to stumble disastrously later. These are the lessons that investors have learned the hard way for centuries, and that have to be relearned with each new supposedly unbeatable market opportunity.

  The Disregard of Prior Probabilities

  One of the results of the tendency to see similarities between situations is a failure to appreciate the lessons of the past. We neglect to study the outcomes of very similar situations in prior years. These are called “prior probabilities,” and logically they should be consulted to help guide present decisions, but our ability to disregard them is truly astonishing.26 It is another major reason we so often overemphasize the case rate and pay little attention to the base rate.

  The tendency to underestimate or ignore prior probabilities in making a decision is undoubtedly one of the most significant problems of intuitive prediction in fields as diverse as financial analysis, accounting, geography, engineering, and military intelligence.27
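  To make the base-rate point concrete, here is a small worked example with invented numbers, not drawn from the text. Suppose an analyst’s favorite “takeover tell” shows up in 80 percent of companies that are later acquired, but also in 10 percent of companies that are not, and only 2 percent of all companies are acquired in a given year. The vivid 80 percent figure is the case rate; the 2 percent is the base rate, and ignoring it wildly inflates the forecast.

```python
# Worked Bayes example of ignoring prior probabilities. All numbers are
# invented for illustration, not taken from the text.
p_acquired    = 0.02   # prior probability: the base rate of takeovers
p_tell_if_yes = 0.80   # the "tell" appears in most eventual takeovers...
p_tell_if_no  = 0.10   # ...but also in plenty of companies never acquired

p_tell = p_tell_if_yes * p_acquired + p_tell_if_no * (1 - p_acquired)
p_acquired_given_tell = p_tell_if_yes * p_acquired / p_tell

print(f"P(acquired | tell present) = {p_acquired_given_tell:.1%}")
# Roughly 14% -- far below the 80% figure intuition latches on to,
# because the low base rate dominates the calculation.
```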

  Consider an interesting experiment conducted with a group of advanced psychology students.28 They were given a brief personality write-up of a graduate student that was intended to give them no truly relevant information with which to answer the question they were then going to be asked about him: what subject he was studying. The assessment was said to have been written by a psychologist who had conducted some tests on the subject several years earlier. The analysis not only was outdated but contained no indication of the subject’s academic preference. (Note that psychology students are specifically taught that profiles of this sort can be enormously inaccurate.)