The premise of a “hedged fund,” as a financial journalist first described the concept in 1949, was simple: A portfolio balanced between long and short positions could profit in nearly any market.
That idea may have taken a while to seep into the mainstream, but as it has over the past decade, the hedge fund industry has exploded, rocketing from $310 billion in assets under management in 2002 to more than $2 trillion today. Institutional investors and high-net-worth individuals flocked to these largely unregulated, nonpublic funds in no small part because they offered access to assets and trading strategies that are all but impossible to replicate.
But new research from Nicolas Bollen, the E. Bronson Ingram Professor of Finance, says those hedge funds that are hardest to imitate—something investors look for and for which they often pay a premium—are the ones most prone to failure.
In addition, Bollen finds that these types of funds contain a significant amount of volatility, indicating that they are vulnerable to the type of risks they are supposed to guard against. “This result suggests the presence of an omitted but potentially catastrophic risk factor in funds for which standard regression analysis fails,” Bollen writes in the study, forthcoming in the Journal of Financial and Quantitative Analysis.
Those previously undetected risks raise the annual probability of failure for hard-to-replicate funds from 10 percent to 12 percent. The findings have implications for investors who rely on statistical models to screen funds for heightened risk factors as part of their due diligence process.
Determining Hedge Fund Performance
The difficulty in assessing hedge fund performance lies in the industry’s opacity. Fund managers report returns publicly at their discretion, leaving wide gaps in data about holdings, accuracy, and even whether a fund is still operating. (In October 2011, hedge funds with more than $1.5 billion in assets under management were required to start disclosing fund details to U.S. regulators, but that information will not be made public.)
As hedge funds have grown, academic researchers have developed statistical models designed to correlate hedge fund returns with known investment strategies. Using these models, along with data on a broad cross section of funds from 1994 to 2008, Bollen found that more than one-third of all funds cannot be correlated to known style factors. The phenomenon becomes even more pronounced in funds with short histories.
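The style-factor approach can be illustrated with a simple regression: a fund’s monthly returns are regressed on the returns of known style factors, and a weak fit or insignificant loadings flag a fund that cannot be matched to familiar strategies. The sketch below is a minimal illustration with simulated data and generic factor names, not Bollen’s actual model or dataset.

```python
import numpy as np
import statsmodels.api as sm

# Simulated placeholders: 60 months of returns for one hedge fund and three
# generic style factors (e.g., equity market, credit, trend-following).
rng = np.random.default_rng(0)
n_months = 60
factors = rng.normal(0, 0.03, size=(n_months, 3))          # style-factor returns
fund = 0.002 + factors @ np.array([0.4, 0.2, 0.1]) \
       + rng.normal(0, 0.02, size=n_months)                 # fund returns plus noise

# Regress fund returns on the style factors (with an intercept, i.e., "alpha").
X = sm.add_constant(factors)
fit = sm.OLS(fund, X).fit()

print(fit.params)     # estimated alpha and factor loadings
print(fit.rsquared)   # a low R-squared would flag a hard-to-replicate fund
print(fit.pvalues)    # insignificant loadings suggest no match to known styles
```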
Bollen suggests those results indicate that hedge fund regression models are an even weaker tool than previously thought for learning about a fund manager’s trading style and selection of assets. Further, he says it may be a Sisyphean task to try to develop a complete set of risk factors, especially those representing catastrophic losses during rare events.
Where does that leave investors? For the time being, relying more heavily on qualitative judgments about things like a fund manager’s background and strategy mix than the quantitative analyses of econometricians.
A version of this article originally appeared in VB Intelligence.
Lowest prices of the year! Markdowns! Exclusive Dealer! Top Quality! We’ve all been exposed to them—the marketing strategies promising bargains or high value. Yet as alluring as those pitches can be, consumers draw very different—and sometimes contradictory—conclusions when it comes to sale prices or value. To make it even more challenging, consumers often fill in gaps in their knowledge by drawing inferences about products.
New research co-authored by Steve Posavac, the E. Bronson Ingram Professor of Marketing, finds that in some consumers’ minds, price denotes quality. Yet for others, low price leads a consumer to believe he or she is getting a good value.
“Consumers rarely have complete information and use various strategies to fill the gaps in their knowledge as they consider and choose products,” the researchers wrote in an article published in the April 2013 Journal of Consumer Research. “One of these strategies involves using naive theories: informal, common sense explanations that consumers use to make sense of their environment. For example, consumers may believe that popular products are high in quality while also believing that scarce products are high in quality.”
Posavac and collaborators Hélène Deval, Susan P. Mantel and Frank R. Kardes found that consumers use a series of theories when considering value and price. How they size up a possible purchase depends on what is on their mind when they’re thinking about a given product—something marketers need to take into account when crafting ads, marketing strategies and promotions.
Price vs. Quality Experiments
The researchers conducted eight experiments that tested marketing techniques that leaned toward price or quality. In one experiment, consumers were shown an ad for a bottle of wine with either a high or low price. When subtly reminded of quality, consumers evaluated the expensive wine more favorably than the cheap wine. However, when subtly reminded of value, they rated the cheap wine more favorably.
“In the case of price, most people simultaneously believe that low prices mean good value and that low prices mean low quality. But these two beliefs are not equally present in consumers’ minds all the time,” the authors wrote. In short, people can hold opposing beliefs about the same product.
When Product Marketing Backfires
Sales promotions succeed when consumers perceive that they are getting a good deal, but they can also backfire if consumers perceive that lower prices indicate poor quality. And if the company makes assumptions that one naive theory guides consumers, they run the risk that the strategy could actually cause a decrease in sales and perceived value. “For example, a marketer who feels that low prices signal value may go all in on a low-price strategy in an attempt to drive sales but may succeed only at reducing brand value and alienating consumers if a substantial percentage of the firm’s customers believe that low prices are commensurate with low quality,” they wrote.
Posavac and his fellow researchers cite retailer J.C. Penney as an example. The company abandoned sales events in favor of a new everyday-low-pricing strategy. However, J.C. Penney customers had been so conditioned to the naive theory that sales promotions signified good deals that the absence of such events was taken by many long-term customers to mean that there were no longer opportunities to get good deals—and sales dropped.
“[Companies] design a strategy by assuming that a certain naive theory is going to drive consumer evaluation and choice when, in fact, several naive theories are available to the consumer,” the authors conclude. So what’s the best strategy? The authors suggest that, in practice, marketing communications that set the stage by suggesting a given naive theory—quality, for example—and then make a product appeal in keeping with that theory will have the best results.
From research scientists working in drug discovery to portfolio managers waiting for the markets to bear out their investment theses, how do certain types of professionals sustain their energy and enthusiasm over long periods?
That’s the question undertaken in a new study co-authored by Bruce Barry, the Brownlee O. Currey Jr. Professor of Management.
“Why and how do people stay motivated in their work when goal accomplishment is at best many years off and may never occur at all?” Barry and co-author Thomas Bateman, of the University of Virginia’s McIntire School of Commerce, ask in a paper for the Journal of Organizational Behavior.
Professionals who are able to sustain the long-term pursuit of their work goals start like anyone setting out to accomplish a set of tasks: They focus on a specific goal, expend some initial effort and show some perseverance over the short term.
But then, these professionals enter “a complex set of cognitive and affective phenomena that implicate perceptions of self, the future, task activities, and a variety of other gratifications,” Barry and Bateman write.
To understand the psychological forces at play when pursuing long-term goals, the co-authors identified and conducted in-depth interviews with 25 professionals whose work goals included the following traits: Eventual success could take years, or perhaps generations; real progress comes very slowly; there is a significant chance of failure. While these conditions may define the most extreme cases of pursuing long-term goals, Barry and Bateman say the insights generated from the interviews have wide-reaching implications for both professionals and managers.
The researchers then distilled the key elements of the interviews into eight sources of motivation that provide “psychological sustenance” in the pursuit of long-term goals:
Allegory: figurative representations or abstractions that offer significant, consequential meaning (e.g., comparisons to the Wright Brothers or the moon landing)
Futurity: allusions to the long-term impact and possibilities associated with the ultimate outcomes that may result from the realization of a long-term goal (e.g., setting the stage for children and grandchildren)
Self: statements that invoke personal identity, reputation or personal belief systems (e.g., expressing personal creativity)
Singularity: references to the perceived uniqueness of the endeavor (e.g., the big exploration that nobody could have done before)
Knowledge: statements that refer to skill development, new understanding, acquiring truth and finding ways to control events (e.g., any knowledge that’s created is good)
The Work: allusions to the nature of the work, including challenges, methods, risks and uncertainties, as well as elements that are fun or surprising (e.g., like a puzzle that needs solving)
Embeddedness: ways in which individuals see their work situated within social contexts, as well as ways in which their work garners social legitimacy within their professions and in society (e.g., an enjoyment from disproving the skeptics)
Progress: statements that emphasize the notion of forward movement, often short-term, in the direction of long-term goal pursuit (e.g., advancements in tools and techniques that facilitate the work)
These motivational themes incorporate near-term (proximal) and long-term (distal) features that weave together immediate payoffs with a perception of doing important and lasting work. In addition, all the subjects interviewed by Barry and Bateman for this study mentioned the important role self-regulation plays.
“We saw [self-regulation] as an overarching process and set of strategies implicated in many of the motivating themes identified in our analysis,” they write. The co-authors highlight six forms of self-regulation that include maintaining focus on goal-directed actions, controlling emotions, and coping with failure—using it as a basis for improvement rather than an injurious setback.
While the sample size may have been limited, with no means to compare similar data sets, Barry and Bateman write that the study is meant to offer meaningful conceptual extensions to well-established theoretical areas, setting the stage for future investigations.
“Long-term goals arguably are at least as important as short-term goals in their ultimate consequences for individuals, organizations, and societies,” Barry and Bateman write. “Now, we believe, is the time to expand our field’s search for theories and strategies that can help people and organizations pursue and achieve important long-term goals.”
A version of this article originally appeared in VB Intelligence on July 25, 2012.
In a pivotal scene from the film The Social Network, the Hollywood retelling of Facebook’s founding, an exasperated Mark Zuckerberg exhorts business partner and Harvard classmate Eduardo Saverin to join him in Silicon Valley.
“You’ve gotta move here, Wardo. This is where it’s all happening,” the Zuckerberg character pleads.
Whether this piece of dialogue is true or not, the sentiment behind the statement is perhaps more accurate than Zuckerberg—or the movie’s writers—could have known.
In a recent paper examining performance differences within geographic business clusters, Assistant Professor of Strategic Management Brian McCann, MBA’04, finds that younger firms, as well as those with a “deeper knowledge stock” (i.e., more patents), gain the biggest boosts from geographic clusters.
By locating in a geographic cluster, “entrepreneurs may accrue particular benefits to agglomeration during the early phases of a firm,” McCann writes with co-author Timothy B. Folta of Purdue University’s Krannert School of Management. The paper was published recently in the Journal of Business Venturing.
While scholars such as Paul Krugman and Michael Porter have long written about the positive business effects of geographic clusters, this research is among the first to investigate which firms benefit the most from agglomeration.
McCann and Folta examined 806 biotechnology firms founded in the United States between 1973 and 1998. Although the firms were spread across 85 Metropolitan Statistical Areas, the co-authors found that the top 10 clusters in 1994 (the peak of the industry in terms of the number of companies operating during the study’s time frame) provided locations for nearly 75 percent of the firms in the industry.
McCann and Folta also measured the “knowledge richness” of each cluster by totaling the number of patents held by all the biotech firms in each cluster. Statistical analysis of these data sets yielded several findings of note:
Locating in a cluster is beneficial for firms: Firms in larger and more knowledge-rich clusters have a higher probability of patenting in any given year.
For each firm that’s added to a cluster, companies within that location see a 1 percent increase in the odds of patenting. While that figure may seem small, the effects add up quickly. Increasing a cluster size by 10 firms raises a firm’s patenting probability by nearly 10 percent.
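As a back-of-the-envelope check on that compounding (assuming, for illustration, that each added firm multiplies the odds of patenting by about 1.01):

```python
# Illustrative reading of the reported figures, not the authors' model:
# each additional firm in the cluster scales the odds of patenting by ~1.01.
odds_multiplier_per_firm = 1.01
ten_firm_effect = odds_multiplier_per_firm ** 10 - 1
print(f"{ten_firm_effect:.1%}")   # ~10.5%, in line with "nearly 10 percent"
```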
Within clusters, the increased patenting probability is even higher for firms that already have a richer knowledge base—that is, a higher number of patents. “Our conjecture is that such firms are better able to benefit from knowledge spillovers,” McCann and Folta write.
Younger firms enjoy a similarly higher patenting probability within clusters, according to the analysis. “This result is consistent with prior scholars who have speculated that young firms might more effectively draw benefits from their clustered regions,” the co-authors write. This is because younger firms tend to have more flexible organizations and a greater need to rely on outside knowledge sources because of their limited resources.
McCann and Folta write that these findings hold relevance for researchers, business practitioners and policymakers.
The evidence regarding what types of firms benefit most from clustering “should be of principal importance to scholars of entrepreneurship or strategic management wondering about the relationship between location and competitive advantage,” they write. “For those deciding whether to locate in an agglomeration or for those attempting to recruit firms to an agglomeration, it provides advice on those firms that are most likely to benefit.”
A version of this article originally appeared in VB Intelligence on July 25, 2012.
From Bob Hope’s hawking American Express Travelers Cheques in the 1950s to quirky actress Zooey Deschanel’s selling the latest iPhone today, celebrities have long served as the advertising industry’s not-so-secret weapon.
As consumers, we want the same services and products as the good-looking, glamorous set—or if nothing else, we tend to remember the famous associations these figures bring to whatever they’re endorsing.
And while these celebrities may be among the most familiar and well-liked people, the vast majority of the population knows very little about their political and religious attitudes and values.
In fact, the less that is known about a famous figure, the more the public views them in a favorable light, according to new research co-authored by Steve Posavac, the E. Bronson Ingram Professor of Marketing.
“As perceivers learn more, there is an increased likelihood that the evidence will indicate that celebrities have middling attributes” that show just as many flaws as other people, Posavac and his co-authors write in a paper for Basic and Applied Social Psychology. “Perhaps more significantly, evidence may reveal to perceivers that celebrities’ political views, religious practices and social attitudes are different from their own, leading to less liking.”
So carefully controlled are the images of most celebrities that the researchers in this study found it difficult to compile reliable information on possible test subjects in preparation for their experiments. For one of the main experiments, however, they did find two famous figures whose personal viewpoints are well-known and diametrically opposed: Tom Hanks and Mel Gibson.
The 131 undergraduates who participated in the study for extra credit were randomly given one of two descriptions of Hanks and Gibson—one innocuously detailing each actor’s film career, the other discussing their specific political and religious points of view.
When asked about the likability of each, “liberals and conservatives did not differ in their evaluations of Hanks and Gibson when information was not presented,” according to the study. “However, when descriptions of the practices and attitudes of the celebrities were provided, liberals and conservatives diverged in their evaluations of the actors, particularly Gibson.” (It should be noted that the study was conducted prior to widespread news coverage of Gibson’s domestic conflicts.)
In another experiment, student participants were chosen based on the results of a pretest in which they favorably rated six celebrities: Will Smith, Jennifer Aniston, George Clooney, Natalie Portman, Johnny Depp and Scarlett Johansson. They were then asked a series of questions about the celebrities’ political and religious views.
For the researchers, the questionnaire served as a mental prompt, allowing them to compare participants’ attitudes toward the celebrities before they had thought about how much they knew of them personally, versus after completing the questionnaire. What Posavac and his colleagues found is that “participants perceived the celebrities to have significantly less credibility” when they were made aware of how little they knew about them.
This dynamic can be seen in many real-world contexts. Tom Cruise, Lindsay Lohan and Tiger Woods, to name but a few, all experienced sharp declines in popularity (and celebrity endorsement deals) after personal shortcomings were revealed. Posavac and his colleagues also point to the case of Rashard Mendenhall, an NFL running back who posted unpopular views about the killing of Osama bin Laden on Twitter. The backlash led the apparel manufacturer Champion to end its endorsement deal with him and endangered his spot on the Pittsburgh Steelers.
“People appear to be taken by celebrities, in part, because they are highly familiar while being simultaneously unknown,” the researchers write. That is, in the absence of information, people fill in the personality blanks of celebrities with their own views and values.
What’s more, distinct groups differ in how they perceive celebrities once they have more information about their views. In the experiment with Hanks and Gibson, liberals and women tended to rate Gibson less favorably with more information. Similarly, likability ratings among conservatives and men dropped as they learned more about Hanks’ views.
“The findings reveal one of the important foundations underlying the adoration of celebrities: ignorance,” Posavac and his co-authors write. “Unless celebrities harbor mainstream attitudes that have widespread appeal, they are probably better off financially keeping their opinions and practices private.”
A version of this article originally appeared in VB Intelligence on July 25, 2012.
The Vanderbilt Health Care Conference and Career Fair hosted more than 500 participants and 35 companies at a one-day session in Nashville this past fall. It was the fourth year for the student-organized conference, which is designed for anyone interested in the intersection of business and health care.
Headlining the October event was Nancy-Ann DeParle, Deputy Chief of Staff to President Barack Obama. Drawing on her experience as Director of the White House Office of Health Reform, DeParle contrasted the national health care system prior to passage of the 2010 Patient Protection and Affordable Care Act with what it will look like once the plan is fully implemented.
Where We Were
Health insurance premiums doubled: Family premiums for employer coverage rose from nearly $6,000 to more than $13,000 between 1999 and 2009.
Insured Americans and businesses paid a hidden tax: Up to $1,000 of uncompensated care was shifted from the uninsured to already-insured families. In 2008 we spent $43 billion on uncompensated care.
Millions lacked quality, affordable health care: 50 million Americans were uninsured in 2009, and millions more lacked access to quality care, preventive services and catastrophic protection when ill or injured.
People with pre-existing conditions were locked out: As many as 129 million Americans have a pre-existing condition that could limit access to insurance.
“Even after spending almost twice as much per capita on health care as every other industrialized country in the world, we continue to rank near the bottom when it comes to health care outcomes,” DeParle said. “Those of you working in health care understand that this is bad for business. Imagine you’re selling cars. If cars become more expensive, but the quality stays the same, or even gets worse, you don’t need an MBA … to realize that you’re in trouble.
“Health care isn’t like most other industries. If people can’t afford insurance, they don’t stop coming to the hospital. They just stop paying for the care they receive. So to tweak the analogy that I just used, not only are customers not buying cars, but you have to hand them out for free. That’s not sustainable.”
What the Law Does
Allows young adults to stay on their parents’ policies: More than 1 million 18- to 26-year-olds have benefited.
Gives uninsured with pre-existing conditions affordable insurance: The Pre-existing Condition Insurance Plan has covered more than 30,000 people and is a bridge to 2014 when discriminating against anyone with a pre-existing condition will be illegal.
Protects retiree coverage: $5 billion is provided to keep coverage affordable for early retirees in more than 6,600 plans.
Expands community health centers and workforce: Clinics can serve nearly 20 million more Americans, adding 16,000 primary-care providers during the next five years.
Holds health insurers accountable: The law implements a patient’s bill of rights, eliminates double-digit rate hikes without review, guarantees that overhead expenses are held in check, and promotes pricing transparency among health plans on healthcare.gov.
Creates a competitive and affordable insurance marketplace: Starting in 2014, consumers will be offered the same health plan choices as members of Congress. Tax credits and Medicaid coverage will be made available to ensure that coverage is affordable for families and small businesses. The law also protects existing employer-based coverage while ensuring that all Americans who can afford it get health insurance, increasing the insurance purchasing pool, ending pre-existing condition exclusions, and eliminating the “hidden tax” of cost shifting.
Lowers cost and improves quality: Health care fraud prosecutions are up 85 percent, and billions have already been saved. The law promotes prevention and offers incentives to reduce hospital readmissions and conditions acquired in health care facilities. It also provides tax credits to small businesses and relief for seniors. There was record low growth in national health spending in 2009 and 2010.
“I’m not saying it’s going to be easy for us to make all of these changes,” DeParle said. “But what I’m saying is the framework is there and the incentives are there in this new law.
“Do we embrace this new law, this new world of health reform, as a first step and work together to make it better? Or do we fight to restore an unsustainable status quo that left millions of our neighbors on their own in their time of need?”
A version of this article originally appeared in VB Intelligence on Nov. 17, 2011.
Generation Y, the first group to come of age in the Internet era, is all grown up and ready to launch the next wave of multibillion-dollar tech companies. And investors are ready to help them do it.
“If you’re 20-something and have an idea of what you want to build, you can go out and build it,” Harj Taggar, a partner at the Silicon Valley incubator Y Combinator, told the Financial Times in a recent story, echoing the tech boom of the late 1990s.
But after a dizzying decade that ushered in everything from Google’s search engines to touch-screen tablets—and plenty of flops in between—how much more technology are consumers willing to adopt?
It’s a critical question that Mark Ratchford, Assistant Professor of Marketing at the Owen School, is helping companies explore with a new tool called the Technology Adoption Propensity (TAP) index.
“Effectively segmenting and targeting customers based on their likelihood to purchase and use new technologies could help firms better capitalize on their high-tech investments,” Ratchford writes in a recent paper for the Journal of Business Research that introduced the TAP index. The study was co-authored by Michelle Barnhart, Assistant Professor of Marketing at Oregon State University.
Similar psychological measurements have been developed previously to gauge a consumer’s willingness to use new technologies. For example, the Technology Acceptance Model (TAM) was introduced in 1986 to explore user acceptance of—or resistance to—various technology-based systems, including email, word processors and the Internet.
Another stream of technology-related marketing research led to the creation in 2000 of the Technology Readiness Index (TRI), which focused primarily on a person’s likelihood of adopting service-based technologies, often related to e-commerce.
The problem with the TRI, according to Ratchford, is that its questions depend on specific technologies, making it increasingly obsolete as this once-narrow area has grown to cover everything from social media to smartphones.
“References to specific technologies grounds the TRI in a particular technological era and limits its usefulness as a measure of overall technological readiness,” Ratchford writes. “Hence, a new scale that measures consumers’ attitudes toward a varied and flexible concept of technology that seamlessly incorporates the specific technologies of each new era would be useful to researchers and marketers.”
The research team developed an initial 47-item psychological battery, based on 17 items included in the TRI and 30 new ones. To make the TAP index shorter without compromising its effectiveness, Ratchford and his co-author winnowed the items down to 14. Those were then aligned with traits that contribute to technology adoption (“optimism” and “proficiency”) or that inhibit adoption (“dependence” and “vulnerability”).
To validate the TAP index, the study asked more than 1,300 survey respondents to answer a series of yes-or-no questions designed to assess their current use of technology products and services. The results were then matched against respondents’ TAP scores, showing that those who scored highly on the index were the same ones already using technology. Conversely, those with low TAP scores were not likely to be heavy technology users.
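As a rough illustration of how an index like TAP might be scored and validated (the published items and weights are not given in the article, so everything below, from the item counts to the scoring rule, is a hypothetical sketch), one could average Likert-style responses within each trait, net the inhibiting traits against the contributing ones, and correlate the resulting score with reported technology use:

```python
import numpy as np

# Hypothetical TAP-style scoring: four traits, a few 1-5 Likert items each.
def tap_score(responses):
    """responses: dict mapping trait name -> list of 1-5 item ratings."""
    contributing = ["optimism", "proficiency"]
    inhibiting = ["dependence", "vulnerability"]
    plus = np.mean([np.mean(responses[t]) for t in contributing])
    minus = np.mean([np.mean(responses[t]) for t in inhibiting])
    return plus - minus   # higher = greater propensity to adopt technology

respondent = {
    "optimism":      [5, 4, 4],
    "proficiency":   [4, 5, 3, 4],
    "dependence":    [2, 3, 2],
    "vulnerability": [1, 2, 2, 1],
}
print(tap_score(respondent))

# Validation in the spirit of the study: compare scores against yes/no usage
# answers across many respondents (hypothetical numbers below).
scores = np.array([1.8, 0.3, -0.5, 2.1, 0.9])        # TAP-style scores
techs_used = np.array([7, 3, 2, 8, 5])                # count of "yes" answers
print(np.corrcoef(scores, techs_used)[0, 1])          # positive correlation expected
```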
“We show that the TAP index can predict consumers’ technology usage behaviors across a range of high-tech products and services,” Ratchford writes. “We expect that, as a more succinct and timeless measurement tool than prior scales designed for a similar purpose, the TAP index will prove to be a robust and useful scale for academics and practitioners alike.”
A version of this article originally appeared in VB Intelligence on Sept. 30, 2011.
The trading volume of stock options has more than quintupled in the past decade, as banks, hedge funds and other traders have flocked to the investments. But retail options investors may be getting left out in the cold, unknowingly giving up as much as $1.9 billion in lost profits during that same time frame, according to new research from Kate Barraclough, Lecturer of Finance and Director of the Master of Finance program, and Bob Whaley, the Valere Blair Potter Professor of Finance and Co-director of the Financial Markets Research Center.
The problem uncovered by the Vanderbilt team happens with put options—contracts that allow owners to sell an underlying asset at a specific price and within a certain time frame. (Put-option holders make money when the underlying asset price declines.)
Because American-style put options can be exercised anytime before they expire—as opposed to European-style options that can be acted upon only at expiration—investors must find the optimal point at which to close their positions. Otherwise they will forgo interest income that’s, in some cases, greater than their expected profit.
In the study, which will be published in a forthcoming issue of the Journal of Finance, Barraclough and Whaley develop a model to test when it’s most advantageous for investors to close put-option positions that are deep in the money. In other words, for put options whose underlying asset has declined to such a level that a maximum profit is all but assured, at what point is it more advantageous to close the put-option position and instead collect the net interest income on the cash proceeds?
“A deep in-the-money put has no time value remaining and is priced at its floor value,” Barraclough and Whaley write. “The difference between forgone interest income and the value of future exercise opportunities determines whether the put should be exercised early or not.”
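The intuition behind that rule can be sketched with a simple comparison (a stylized illustration under simplifying assumptions, with made-up parameters rather than anything from the Barraclough and Whaley model): a put so deep in the money that it trades near its floor value of K − S frees up roughly K − S in cash when exercised, so the interest that cash would earn over the option’s remaining life can be weighed against whatever small time value remains.

```python
# Stylized early-exercise check for a deep in-the-money American put.
# Parameters are illustrative, not drawn from the study.
def early_exercise_gain(strike, spot, put_price, rate, days_left):
    intrinsic = strike - spot                       # floor value K - S
    time_value = put_price - intrinsic              # ~0 for a deep ITM put
    forgone_interest = intrinsic * rate * days_left / 365.0
    return forgone_interest - time_value            # > 0 favors exercising now

gain = early_exercise_gain(strike=100.0, spot=40.0, put_price=60.05,
                           rate=0.05, days_left=90)
print(f"Net benefit of exercising early: ${gain:.2f} per share")
```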
As it happens, professional investors appear to have realized that money is being left on the table. In response, they’ve developed an arbitrage strategy to capture the forgone interest of those who don’t exercise put options when it’s optimal to do so.
Barraclough and Whaley show that more than 3.96 million put options between January 1996 and September 2008—3.7 percent of all put options outstanding—were not exercised when they should have been. That cost long put-option holders more than $1.9 billion during that period.
In its simplest terms, when long put-option holders don’t exercise at the right time, short put-option holders can (and do) come in and snatch interest income.
Why do investors give up this money? One possible explanation lies in the additional trading costs for long put-options investors, according to Barraclough and Whaley. However, even when estimated trading costs are included, the Vanderbilt team still found nearly $1.82 billion in forgone net interest income.
Another reason is that retail put-option investors simply don’t know about—and don’t use—an appropriate early exercise decision rule.
“Both market makers and proprietary firms demonstrate that they know the early exercise decision rule and apply it in a timely and appropriate fashion,” Barraclough and Whaley write. “That is not to say that the nonprofessional traders are behaving irrationally. The costs of learning the early exercise decision rule and constantly monitoring open put-option positions may be too high relative to the perceived benefits.”
(In a similar 2007 study that Whaley co-authored, the researchers found that call option holders gave up an estimated $491 million during a 10-year period for failing to exercise the options on dividend-paying equities at the optimal time.)
Based on the findings of this most recent put-options study, Barraclough and Whaley say the bottom line is that long put-option investors are “implicitly paying a premium for the ability to early exercise that they rarely use.” In addition, market makers and proprietary firms are appropriating the potential gains of those in a short put-option position.
“Among other things, this raises fundamental concerns regarding contract design and market integrity,” they write. “If many option buyers pay for the right to early exercise but either cannot or do not take advantage of it as a result of exercise costs, unawareness of appropriate decision rules, inability to continually monitor open positions, or irrationality, would not the integrity of the market be better preserved with stock option contracts that are European-style?”
A version of this article originally appeared in VB Intelligence on Feb. 13, 2012.
For the past 24 years, the Financial Markets Research Center (FMRC) at the Owen School has hosted a spring research conference designed to facilitate discussion between academic researchers and business practitioners. Starting with the 1987 Wall Street crash, many of the best minds in finance have assembled at the annual event to analyze topics ranging from globalization to securitization.
This year was no exception. Brett Sweet, Vice Chancellor and Chief Financial Officer at Vanderbilt, chaired presentations on regulating risky banks, while Margaret Blair, the Milton R. Underwood Chair in Free Enterprise at Vanderbilt Law School, led a session about the federal rule-making process.
The primary focus of this year’s FMRC conference, held May 5–6, centered on implementation of the Dodd-Frank Wall Street Reform and Consumer Protection Act. Almost a year after the bill was signed, federal regulators continue to draft new rules overseeing hundreds of trillions of dollars’ worth of activity that touches everything from the credit-default swaps that played a role in the 2008 financial crisis to overhauling Fannie Mae and Freddie Mac. In fact, the task of implementing the law has proven so massive that regulators have pushed many of its deadlines back six months to Dec. 31, 2011.
Even as regulators finish their work, however, many questions remain (including from those within the government) about the law’s ultimate impact.
Mortgage Reform
Regarding home mortgages, Edward J. DeMarco, Acting Director of the Federal Housing Finance Agency (FHFA), said in his FMRC keynote speech that keeping Fannie and Freddie in an indefinite state of conservatorship—which has stretched on for more than three years—poses risks to an already fragile sector. Total taxpayer support of the companies could climb to $363 billion by 2013, according to FHFA estimates, and so far none of the reform proposals put forth by Congress and the White House have gained much political traction.
“The only thing Congress can agree on is not renewing their original charters,” he said.
Whatever plan does finally emerge for Fannie and Freddie, DeMarco indicated that there are at least three elements that any framework must include:
Uniform mortgage standards: From collecting borrower data to developing guidelines for home appraisals, he said the industry needs consistency and transparency throughout the life of a loan. Without these elements, the world of private capital won’t be able to price and evaluate risk correctly.
Diversity of product offerings: Lenders shouldn’t lock themselves into offering only traditional 30-year fixed-rate mortgages just because data standardization is needed. “This is a big country with lots of people in many different situations,” he said. “The mortgage market of the future really needs to be not just liquid and stable, but it needs to have an appropriate diversity of offerings.”
Clarity about the role of the taxpayer: To properly calibrate how risk is assigned, priced and managed, DeMarco said it’s imperative that investors fully understand the role of the taxpayer in any future mortgage finance system.
Derivatives Oversight
There’s a climactic scene in Michael Lewis’ bestselling book The Big Short in which Dr. Michael Burry, the neurologist-turned-investor, finally sees his $1.9 billion bet against subprime mortgages start to pay off—that is, until he contacts his counterparties and tries to collect.
As it happened, the very banks from which Burry purchased the products that, in theory, should have been making him rich were also the same institutions responsible for pricing his investments.
“It was determined by Goldman Sachs and Bank of America and Morgan Stanley, who decided each day whether Mike Burry’s credit-default swaps had made or lost money,” Lewis wrote.
Under the Dodd-Frank Act, many of those kinds of privately traded derivatives—worth as much as $600 trillion—will now be transparently priced and exchanged through a central clearinghouse. In addition, the Securities and Exchange Commission (SEC) will split oversight of these financial instruments with the Commodity Futures Trading Commission (CFTC).
Joanne Moffic-Silver, Executive Vice President and General Counsel for the Chicago Board Options Exchange, told FMRC conference participants that her company is interested to see how the two federal overseers handle these new regulations.
“One question with the Dodd-Frank Act is: Will having two agencies split jurisdiction over functionally equivalent products work?” Moffic-Silver said. “Ideally, and this is my personal opinion, there should be little or no difference between the SEC and CFTC on swaps rules.”
As written in the new law, the SEC will handle swaps that are backed by securities like a single or narrow group of stocks; the CFTC will manage the rest, including 22 listed categories that include interest-rate, credit-default and currency swaps.
But Moffic-Silver said Dodd-Frank includes a number of possible exceptions that would exclude various swaps from being traded through a clearinghouse. In addition, the new law allows for the creation of a new “Swap Execution Facility” (SEF) that would be an alternative to listing on an exchange. Current proposals from both the CFTC and SEC differ on some of the specifics of how these SEFs would operate, Moffic-Silver pointed out.
“The rule-making process has been interesting. The SEC and CFTC do talk, they do meet, and they have a current memorandum of understanding where they are supposed to coordinate regulation of similar products,” Moffic-Silver said. “But the proposals have differed in some very important areas.”
‘Bail-ins’ Instead of Bailouts
On Sept. 15, 2008, the world awoke to what Andrew Ross Sorkin, writing in The New York Times, called “one of the most dramatic days in Wall Street’s history.”
The storied brokerage firm Merrill Lynch was sold for $50 billion, just half of what it had been worth the previous year; and after failing to find a buyer, Lehman Brothers filed the largest bankruptcy on record, culminating in a painful $150 billion liquidation.
Several weeks after those catastrophic events, Congress approved a $700 billion bailout to help banks unload their “toxic” mortgage-related assets and prevent further shocks. Now, instead of once again using taxpayer money to help prevent a contagion risk that could bring down the banking world, there’s discussion about designing a “bail-in” mechanism.
Wilson Ervin, Managing Director at Credit Suisse, explained in a presentation at the FMRC conference that a “bail-in” would give regulatory officials the authority to impose a resolution designed like a prepackaged bankruptcy. “Think of a Chapter 11 bankruptcy on steroids,” he said.
“When an airline goes out of business, air traffic control doesn’t go haywire. When a phone company goes down, we can still make phone calls,” Ervin said. “But when banks go down, it’s different.”
Using the case of Lehman Brothers as an example, Ervin said that by writing down assets and converting a portion of the debt to new equity, the bank could have preserved a capital base of more than $40 billion, giving it some hope of raising additional investment from other financial services firms.
“The process would not be pretty, but overall investors should be relieved by the result,” Ervin wrote in an Economist essay he co-authored on the subject. “In [the Lehman] example the bail-in would have saved them over $100 billion in aggregate, and everybody—other than short-sellers in Lehman—would have been better off than today.”
New Vanderbilt Research
In addition to the speakers from government and private industry, several members of the Owen faculty presented new research, including Bob Whaley, the Valere Blair Potter Professor of Management, who discussed his collaboration with Jacob Sagi, the Vanderbilt Financial Markets Research Center Associate Professor of Finance, in launching NASDAQ’s Alpha Indexes. Also Nick Bollen, the E. Bronson Ingram Professor of Finance, shared results from a recent paper investigating hedge fund investment strategies, while Hans Stoll, the Anne Marie and Thomas B. Walker Jr. Professor of Finance and Director of the FMRC, presented new research with Thomas Ho, the FMRC Research Professor of Finance, examining the interaction between financial markets and regulations.
Enron. WorldCom. Tyco. These are among the most notorious names associated with a wave of accounting scandals that plagued the early 2000s and ultimately helped spur passage of the 2002 accounting reform law known as Sarbanes-Oxley.
While accounting restatements haven’t gone away entirely since then—there were 735 last year, down from a peak of 1,795 in 2006, according to Audit Analytics—they don’t always result in cataclysmic failure. In fact, the market can learn much about the future fate of a company based on the buying or selling of stock by the firm and its managers preceding an accounting restatement.
That’s according to new research from Nicole Thorne Jenkins, Associate Professor of Accounting, and co-authors Brad Badertscher of the University of Notre Dame and Paul Hribar of the University of Iowa. Their paper was published in The Accounting Review in September 2011.
“We predict and find evidence that when a firm restates its financial statements, the market uses the magnitude and direction of prior insider and corporate trades to help price the implications of the restatement,” the authors wrote.
Typically when a company issues an accounting restatement, it suffers an average loss of 10 percent in market value. That figure climbs to 20 percent or greater for firms whose restatements have been caused by “irregularities.” More than half the cases of restatements in the authors’ data occurred because of an issue with revenue recognition. Nearly 30 percent were due to things such as improperly recognizing expenses or wrongly capitalizing expenditures.
In the short run at least, Jenkins and her colleagues found that the negative impact of a restatement is softened “when there are net stock repurchases or insider purchases.” The opposite is also true—losses worsen—when “there are net equity issuances or insider selling,” they wrote.
The authors take the study a step further by demonstrating that the market is in fact using a company’s insider buying or selling behavior as a signal for how to price the restatement event. The positive (and negative) effect of buying (or selling) on share price is only found for those trades which have been disclosed publicly.
Preceding a restatement, “selling suggests more nefarious behavior on the part of management, and is likely to increase the information risk premium … while prior buying might help mitigate the uncertainty facing investors,” the authors wrote.
This study offers a “directional” hypothesis, rather than trying to determine the exact magnitude of the effect. In addition, where other research looks for reasons behind accounting restatements—fraud, for example—the authors here look only at how the market acts on public information about the buying or selling actions of management and the company.
Ultimately, the authors conclude, the evidence in this study “suggests that the market begins to look for corroborating or contradicting evidence regarding the future performance of the firm once the restatement is announced.”
New research by Bob Whaley, the Valere Blair Potter Professor of Management, and Jacob Sagi, the Vanderbilt Financial Markets Research Center Associate Professor of Finance, has led to the creation of a recently launched group of NASDAQ indexes. The NASDAQ OMX Alpha Indexes are designed to help investors measure performance between individual stocks and exchange-traded funds. In practice, this means that the returns of popular holdings such as Apple and Citigroup could be isolated from sharp swings in the market. Traditionally it has been difficult—if not impossible for some—to trade directly on the relative performance of one asset compared to another.
In a new research paper describing how relative performance indexes work, Sagi and Whaley use the example of Apple (AAPL) compared to the S&P 500. In September 2008, AAPL’s share price plummeted by 32 percent, more than three times the amount lost in the broader markets. Seeing such an outsized decline in AAPL’s share price compared to the market overall may have signaled a buying opportunity to some investors. But as the financial crisis worsened, AAPL’s share price fell by another 1.4 percent. During the same time, however, the broader market fell by about 16.6 percent.
“AAPL outperformed the market as the investor expected,” Sagi and Whaley write. But anyone who had purchased shares of AAPL, while beating the market, would still have suffered a loss.
If an investment tool based on a relative performance index had been available to capture AAPL’s performance against the market, however, it would have yielded a substantial gain.
To try to replicate that same trade using the tools available at the time, an investor would have had to take a long position in AAPL while shorting, or betting against, a product like an S&P 500 index fund. The central risk in that scenario is that an investor would be exposed to an unlimited loss in the short position. By contrast, a relative performance index would place at risk only the original amount of the investment. Further, the money and time spent rebalancing those trades to account for volatility in both the long and short positions would be too much for most investors to bear.
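A rough way to see the payoff described above (a simplified ratio-of-total-returns calculation, not NASDAQ’s actual index methodology) is to compare the two returns directly: an asset that falls 1.4 percent while the benchmark falls 16.6 percent has outperformed it by roughly 18 percent.

```python
# Simplified relative-performance calculation for the AAPL vs. S&P 500 example.
# The ratio-of-total-returns formula is an illustrative assumption, not the
# published NASDAQ OMX Alpha Index methodology.
def relative_performance(asset_return, benchmark_return):
    return (1 + asset_return) / (1 + benchmark_return) - 1

rel = relative_performance(asset_return=-0.014, benchmark_return=-0.166)
print(f"{rel:.1%}")   # ~18.2%: a relative gain, even though AAPL itself lost money
```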
While the derivative products on the indexes developed by Sagi and Whaley can be used to invest in the relative performance of any pair of securities or exchange-traded funds, NASDAQ OMX so far has unveiled 23 index options tracking “highly liquid” assets, including:
AAPL vs. SPY Index (symbol: AVSPY)
Gold (GLD) vs. SPY Index (symbol: GVSPY)
Twenty-plus Year Treasury Bonds (TLT) vs. SPY Index (symbol: TVSPY)
Citigroup (C) vs. Financial Sector (XLF) Index (symbol: CVXLF)
Emerging Markets (EEM) Index vs. SPY Index (symbol: EVSPY)
For now, Sagi and Whaley see the relative performance indexes as providing an easy and low-cost way to execute what traditionally has been a cumbersome trade. But as these indexes become more widely used, the authors say, they could introduce entirely “new return/risk management strategies to the investment arsenal.”
With millions of new patients coming into the U.S. health care system over the next decade, the term “operations” is taking on a whole new meaning in America’s hospitals. Starting in 2014, as many as 32 million additional people will be covered by health insurance under the federal reform law passed last year. That additional demand comes at a time when medical facilities are struggling to reduce costs, improve safety and provide higher patient satisfaction.
To address these issues in the past, many health care facilities looked to manufacturing and its specialized product lines as a model for how to treat patients most efficiently and effectively. But little research has been done on how well these new “focused delivery” units have worked. To investigate this question, Nancy Lea Hyer, Associate Professor of Operations Management, turned to a dedicated trauma unit at Vanderbilt University Medical Center for answers. The case study was published in the Journal of Operations Management in 2009, and last year won that publication’s Best Paper Award.
“The following two questions were used to guide our research,” writes Hyer, who collaborated on the project with Dr. John A. Morris Jr., Professor of Surgery and Director of the Division of Trauma and Surgical Critical Care at Vanderbilt University Medical Center, and Professor Urban Wemmerlöv from the University of Wisconsin–Madison. “(1) How does the concept of focus, as it is used in connection with manufacturing plants, transfer to the context of a critical care hospital setting?, and (2) How does focus affect operational, clinical, and financial outcomes?”
Since 1987 the Medical Center has served as Middle Tennessee’s only Level I trauma center—meaning it is equipped to handle the most severely injured patients—and the only one within a 150-mile radius. In 1993 executives at the Medical Center authorized the creation of a separate $5 million, 31-bed facility called the Vanderbilt Trauma Center. The dedicated facility opened in August 1998.
Administrators staffed the Trauma Center with a designated team of doctors, nurses, social workers, even security and cleaning personnel, and outfitted it with X-ray and lab equipment. Physicians work 12-hour shifts instead of the traditional 24-hour rotation, so they are in-house and immediately available during their time on duty. And the facility itself was designed with an open-bay concept to accommodate the center’s 14 intensive care unit beds. This layout permits cross-trained staff to assist each other as needed and makes it easy to rapidly reconfigure the unit to respond to changing circumstances, such as a sudden influx of critically injured patients.
Prior to the creation of the Trauma Center, patients were transferred from unit to unit throughout the hospital as their recovery progressed. In directing patient care, trauma physicians traveled all over the hospital seeing patients and interacting with a wide array of staff, who cared for both trauma and nontrauma patients. In the new critical care facility, the dedicated staff delivers care in a single location and manages patient care through much, if not all, of the patient’s hospital stay. This has allowed the unit to develop and hone a specialized set of treatment protocols.
The Trauma Center also has its own financial director, who tracks physician and staff performance in much the same way a private company would. And to further differentiate themselves as an independent entity in the hospital, nurses and other staff wear distinctive black uniforms.
Did the measures work? To answer that question, Hyer and her co-authors examined the dedicated center’s length-of-stay, mortality rates and financial metrics. To compare before and after performance, researchers used data from 1996–1998, the two years before the separate Trauma Center opened, and from 2000–2002, which allowed for a period of adjustment.
The length of stay for patients treated in the dedicated trauma center declined by an average of 6.5 percent overall, and by 15 percent for those with more severe injuries. However, researchers detected no change in the mortality rate.
On the financial front, trauma care operations showed losses of $2.2 million and $1.75 million in 1996 and 1997, respectively. In 1998 the Trauma Center saw a small surplus of $50,000. Between 2000 and 2002, however, the independent trauma facility showed surpluses ranging from $5.5 million to $7.89 million. Calculated on a per-patient basis, the unit turned an average loss of $578 per patient into a surplus of $2,493 per patient. Researchers also found similarly positive results when comparing the unit’s performance to benchmark peers.
“The discovery by the trauma unit’s managers that [Vanderbilt] discharged patients quicker, while charging comparatively less for its services, was both an affirmation that the focused hospital unit worked as intended and a sign that they needed to increase the charge levels and have [the Medical Center] translate these into better contracts with the payers,” the authors write.
Looking into some of the reasons for the improved performance on length-of-stay and financial measurements, researchers found ways the staff became more efficient treating patients. For example, before the creation of a separate trauma center, tracheotomies (a common procedure for patients who may be on a ventilator for an extended time) had typically been performed in an operating room, where it took an average of 80 minutes’ worth of physician time. In the new Trauma Center, doctors developed a bedside tracheotomy procedure, which takes only 15 minutes of a doctor’s time and is just as safe as one performed in a fully equipped operating room. Whether done in an operating room or at the bedside, however, the high reimbursement rates are the same, allowing the Trauma Center to capture the financial gains.
The finding prompted the unit to redouble its efforts to charge for procedures accurately and, most important, not to overlook them, as had sometimes happened when patients were shuttled between departments.
But the authors acknowledge that one case study is not enough to suggest a need for broader changes within the health care system. Further, they write that nothing in their research suggests that the creation of a dedicated unit in itself is “a sufficient condition for success.” Rather, such initiatives should be aligned with other changes to factors such as hospital infrastructure, management and culture.
Nevertheless the case does indicate a need to further explore reasons why a dedicated unit can help bolster hospital performance. “Hopefully,” the authors write, “future studies will determine with greater precision what factors need to coexist with [focused hospital units] to create better performing health care delivery organizations.”
The 2010 U.S. Congressional elections saw an unprecedented boom in campaign spending—$4 billion in all, with about $1.12 billion coming in the form of individual contributions to candidates, according to the Center for Responsive Politics. While political pundits continue to debate what impact this money has on election outcomes, new research from the Owen School points to some clear winners: the individuals who donate and the corporations they support.
Using innovative techniques to match geographic areas that are most affected by government policy with “economically relevant” politicians, the husband-and-wife team of Alexei Ovtchinnikov, Assistant Professor of Finance, and Eva Pantaleoni, Researcher at the Vanderbilt Kennedy Center for Research on Human Development, analyzed nearly 5 million campaign donations between 1991 and 2008.
What they describe in a new research paper is strong evidence that individuals who make political donations—whether at the behest of firms or not—directly benefit companies in their communities.
“The reason we look at individual contributions is because it accounts for about two-thirds of all the money given directly to politicians,” Ovtchinnikov says, noting that only about 10 percent of firms are actively involved in campaign finance. “Individuals are the big players in this game.”
But it’s companies that are reaping the most recognizable benefits. Ovtchinnikov says firms located in areas that most intensely target “economically relevant” politicians see positive changes in return on assets (ROA) and market-to-book ratios. The bottom-line boost that comes from campaign donations is similar to investing in a new research-and-development or capital-expenditure project.
Further, the economic benefit to firms strengthens when donations come from areas that have high unemployment rates—even if the politicians on the receiving end do not live in that district.
The new study also finds that political contributions flow disproportionately from companies’ home districts to key members of Congressional committees with jurisdiction over their industry. “What you’re seeing is an ability for people to reach politicians with dollars when they can’t reach them with votes,” Ovtchinnikov says.
The net result is that a significant share of political donations comes from narrow geographic clusters. Between 1991 and 2008, for example, three small areas around New York, Chicago and Washington, D.C., accounted for 11.7 percent of all campaign contributions—$425.9 million—even though they represented less than 2 percent of the population.
While the most recent study examined individual donations, a previous study by Ovtchinnikov and others published last year in The Journal of Finance shows a correlation between corporate political donations and higher stock returns.
“Our results … suggest an extremely high rate of return for firms participating in the political contribution process,” Ovtchinnikov and his co-authors write. “Alternatively, it is possible that politicians find it most beneficial to grant favors to large firms because those are the firms that generate the largest amount of tax revenues and jobs.”
In a similar study, David Parsley, the E. Bronson Ingram Professor in Economics and Finance, has found that corporate lobbying is “positively related” to a firm’s financial performance. In that study—where the top five firms in the U.S. accounted for 42 percent, or $160 million, of the total amount spent on lobbying in 2005—Parsley says that portfolios of companies engaged in the most intense lobbying efforts significantly outperformed their benchmark peers.
“Firms in this category earned an excess return of 5.5 percent over the three years following portfolio formation, while the rest of the firms earned essentially a zero excess return,” Parsley and his co-authors write. They do note, however, that the study’s results indicate that the gains were achieved through defensive lobbying, suggesting that simply spending the most on lobbying does not necessarily lead to better financial performance.
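For readers unfamiliar with the term, “excess return” here simply means the portfolio’s return beyond its benchmark over the same window. A minimal sketch of that comparison follows, using invented figures rather than anything from the study; the paper’s actual portfolio-formation and risk-adjustment rules are far more careful.

```python
# Minimal sketch of an "excess return" comparison. The annual returns below
# are invented for illustration; they do not come from the lobbying study.
def cumulative_return(annual_returns):
    """Compound a list of annual returns into one cumulative figure."""
    total = 1.0
    for r in annual_returns:
        total *= (1 + r)
    return total - 1

lobbying_portfolio = [0.09, 0.07, 0.08]   # hypothetical three-year return path
benchmark          = [0.07, 0.05, 0.06]   # hypothetical benchmark return path

excess = cumulative_return(lobbying_portfolio) - cumulative_return(benchmark)
print(f"Excess return over three years: {excess:.1%}")
```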
For his part, Ovtchinnikov says these studies, by demonstrating that campaign and lobbying expenditures have positive effects on corporate performance, open up intriguing new lines of inquiry for researchers.
“We have shown that firms are benefiting,” he says. “Now we need to begin asking why they benefit.”
Try asking any Monday morning quarterback about blown fourth-down play calls in the NFL and you are guaranteed passionate opinions. In most fourth-down plays, an NFL team will punt or try for a field goal. But occasionally teams decide to do something that is viewed as risky—attempt a fourth-down conversion, or “go for it.”
Associate Professor of Management Ranga Ramanujam, David Lehman from the National University of Singapore and other researchers studied 22,603 fourth-down decisions over five NFL seasons to understand when teams were more likely to attempt a seemingly risky fourth-down conversion. The goal is to apply this research to organizations. These findings are reported in a new study in Organization Science. The paper is titled “The Dynamics of the Performance-Risk Relationship within a Performance Period: The Moderating Role of Deadline Proximity.”
Ramanujam and his co-authors report that trailing teams generally were more likely to go for it on fourth down, and the further behind they were, the more likely they were to try it. This effect grew stronger as the game progressed: a trailing team was more likely to go for it later in the game than earlier. Interestingly, this held only for teams trailing by a margin of about three touchdowns or less. Teams trailing by wider margins actually became less and less likely to go for it as the game progressed.
“The idea here is that as the deadline approaches, the time available begins to factor into the decision to try something different in response to underperformance,” Ramanujam says.
The results suggest that as the game clock runs down, teams within striking distance of their opponent grow more and more eager to try risky plays that might help them win the game. However, teams outside that striking distance grow increasingly concerned with “saving face” or avoiding a risky move that might backfire and make them look stupid.
“We argue that this same tension between chasing organizational goals and avoiding reputational threats can help us understand risk-taking behaviors in other types of organizations,” Lehman says.
The goal of the study was to understand how risky organizational decisions might be shaped by performance feedback (i.e., the extent to which current performance is above or below the aspired level of performance) and deadline proximity (i.e., time remaining before an important deadline such as an earnings report date).
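One way to picture the kind of analysis this implies is a logistic model in which the effect of the performance gap on the odds of going for it is allowed to change with the time remaining. The sketch below uses simulated data and hypothetical variable names; it is not the authors’ actual specification, which among other things captures the nonlinear reversal for teams far behind.

```python
# Illustrative sketch only: a logistic model with a performance-gap x
# deadline-proximity interaction, in the spirit of the study's question.
# The data are simulated and the variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000

# Hypothetical fourth-down situations:
#   score_gap      -- own score minus opponent's (negative = trailing)
#   time_remaining -- fraction of the game left (1.0 = kickoff, 0.0 = final gun)
score_gap = rng.integers(-28, 29, n)
time_remaining = rng.uniform(0, 1, n)

# Simulate a "go for it" tendency that rises when trailing and as the clock
# runs down -- purely to give the model something to recover.
logit_p = -2.5 - 0.05 * score_gap - 1.0 * (1 - time_remaining) * (score_gap < 0)
go_for_it = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

df = pd.DataFrame({"go_for_it": go_for_it,
                   "score_gap": score_gap,
                   "time_remaining": time_remaining})

# The interaction term asks: does the effect of being behind on the odds of
# going for it change as the deadline approaches?
model = smf.logit("go_for_it ~ score_gap * time_remaining", data=df).fit()
print(model.summary())
```

The interaction coefficient plays the role of the deadline-proximity effect described above; capturing the reversal the study found for teams trailing by wider margins would require additional nonlinear terms.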
“We know that deadlines play an important role in organizational life. Part of what we’re trying to understand is how deadlines affect the well-known relationship between underperformance and risk taking,” Ramanujam says. “In other words, when are organizations more likely to deviate from routines? This is an important question for understanding a variety of important organizational outcomes such as innovation, change and fraud.”
Ramanujam, though, is quick to acknowledge that “although many managers have a natural talent for finding a football analogy for every business situation, football games are very different from the operations of business organizations. However, they are sufficiently similar in some key respects to make these findings potentially relevant to business organizations.”
For instance, fourth-down decisions typically are organizational decisions: they draw on input from various people on and off the field, and they are made on the basis of performance relative to a target and in reference to a deadline.
Ramanujam also notes that unlike prior studies that analyzed whether fourth-down conversions are as risky as they are made out to be, this study was about understanding when teams were more likely to go for it.
“What is especially relevant to our study is that teams treat this as a nonroutine choice,” Ramanujam says. “In more than 80 percent of fourth-down plays, the teams punted the ball.”
Generations of MBA graduates have mastered pricing models designed to evaluate companies based on capital assets like equipment, land and raw materials. But as the world economy shifts to one that increasingly places a premium on brainpower instead of horsepower, there are few, if any, reliable methods for analyzing the financial value of human capital.
To help bridge that gap, Vanderbilt’s Financial Markets Research Center brought together researchers last October to look at the value and risk of human capital and how it affects a firm’s business and financial strategy. For the event’s organizer, Miguel Palacios, Assistant Professor of Finance, the pursuit of this new model is more than just an interesting theoretical exercise—it’s personal.
The Colombia native is co-founder of Lumni Inc., a Miami-based company that has developed investment products to help send promising students to school who otherwise would be shut out of higher education by soaring costs. Operating in four countries, including the U.S., Lumni was cited by Businessweek in 2009 as one of America’s most promising social ventures. More recently The Economist highlighted the company in a piece about innovative new microfinance ventures designed to help students pay for higher education.
With co-founder and fellow Colombian Felipe Vergara, a former McKinsey consultant, Lumni has financed education opportunities for 1,800 students using more than $10 million. The company raises money from investors such as the Inter-American Development Bank, foundations, universities and wealthy donors. Students commit to paying a fixed percentage of their income—never more than 15 percent—into a Lumni-created fund typically for five years after graduation. In turn, that fund pays out proceeds to investors.
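To make the contract mechanics concrete, here is a minimal sketch of how one graduate’s payments into such a fund might be projected. Only the roughly five-year horizon and the 15 percent ceiling come from the article; the 8 percent income share and the salary path are invented for illustration, and Lumni’s actual terms vary by student.

```python
# Sketch of an income-share contract like the one described above.
# The 8 percent share and the salary figures are hypothetical.
def projected_repayments(annual_incomes, income_share=0.08, years=5, cap=0.15):
    """Year-by-year payments into the fund for one hypothetical graduate."""
    share = min(income_share, cap)   # the article notes a 15 percent ceiling
    return [income * share for income in annual_incomes[:years]]

# Invented salary path over the five post-graduation years
incomes = [24_000, 26_000, 28_000, 30_000, 33_000]
payments = projected_repayments(incomes)
print([round(p) for p in payments])   # annual payments into the fund
print(round(sum(payments)))           # total returned to investors over five years
```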
As The Boston Globe has described it, “These contracts, proponents say, would allow more kids to finish college. They would free graduates from crushing debt. And they could liberate youngsters to pursue socially valuable but low-paying work such as teaching.”
But it is not just save-the-world types who would benefit. Palacios and others point out that this model would also help fund the next generation of doctors and lawyers while spreading investor risk across a broad pool of students.
“The best analogy is insurance,” Palacios says. “Not everybody crashes. If you pool everybody together, you are in a much better position.”
The analogy may end there, however, for unlike insurance there is no set of accepted standards for quantitatively measuring risk or reward when it comes to education. How, for example, does the value of human capital change when a person adds a medical degree versus a bachelor’s in philosophy?
“This is the largest asset that most people have,” says Palacios, who estimates that human capital accounts for about 90 percent of aggregate wealth.
Palacios says human capital poses a particular challenge for researchers because, while it represents the single largest asset class in the world’s economy, one cannot directly observe its value or dynamics. “We merely observe wages, human capital’s dividends,” Palacios wrote in a 2009 paper. “Thus, we need a framework to determine human capital’s value.”
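One very simplified way to read that framing is to treat a person’s expected future wages like a dividend stream and discount it back to today. The growth and discount rates and the wage figures below are hypothetical, and Palacios’ actual framework is considerably richer, but the sketch shows why a degree that shifts the wage path can change the asset’s value substantially.

```python
# Simplified reading of "wages are human capital's dividends": discount an
# expected wage stream back to the present. All inputs are hypothetical.
def human_capital_value(starting_wage, growth=0.03, discount=0.06, years=40):
    """Present value of a wage stream growing at `growth`, discounted at `discount`."""
    value, wage = 0.0, starting_wage
    for t in range(1, years + 1):
        wage *= (1 + growth)                 # next year's expected wage
        value += wage / (1 + discount) ** t  # discount back to today
    return value

# Two invented wage paths, echoing the degree comparison above
print(round(human_capital_value(45_000)))             # e.g. a liberal-arts graduate
print(round(human_capital_value(160_000, years=35)))  # e.g. a newly minted physician
```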
Researchers continue to look for a breakthrough model in the field. So far Palacios’ work indicates that human capital is less prone to economic shocks than equities, a finding that brings a new level of empirical rigor to the conversation among academic colleagues and potential investors alike.
Yet there are other, more practical concerns for the student-investment concept beyond technical valuation, namely how to guard against a student putting off a career because there is no pressure to repay loans. To address this, Lumni draws on psychologists to help screen its applicant pools and designs the student contracts in such a way as to deter abuse of the system.
There is also a risk that potentially high earners—medical school students, for example—would avoid signing up because they could end up paying out more to Lumni’s investors than they would to a private student loan company. In such a case, the company offers better terms such as a lower percentage of income to be repaid.
Lumni is not the first company to attempt this invest-in-students model; its conceptual roots stretch back to the 1940s and 1950s, when economist Milton Friedman first proposed the idea. More recently, two companies launched versions of the idea: MyRichUncle, a U.S. student financing company that is now out of business, and the German firm CareerConcept. MyRichUncle began experimenting with these types of student contracts in 2001. CareerConcept, however, has seen success since it began offering student investment funds in 2002. It has sent thousands of students to school across more than 20 countries, mostly in Europe, through eight funds totaling 40 million euros.
For now, Lumni is trying to establish itself in the Americas, with a goal of financing 1 million students over the next 12 years. Vergara told Businessweek, “My vision is to create a revolution in investing in human capital to show it’s possible to receive an education despite low income.”
But companies like Lumni can only execute on that vision if they have the analytical tools being developed by Palacios and others to correctly value the contracts they sign.