Financial analysts, particularly high-status ones, tend to favor deceptive CEOs, according to new research published in the Strategic Management Journal. Using an advanced machine learning model, the researchers measured the likelihood of CEO deception more accurately than previous methods allowed. They also found, however, that CEO deception becomes less effective among analysts who are repeatedly exposed to it.
The researchers — led by Steven J. Hyde of Boise State University and including Eric Bachura of University of Texas at San Antonio, Jonathan Bundy of Arizona State University, Richard T. Gretz of University of Texas at San Antonio, and Wm. Gerard Sanders of the University of Nevada, Las Vegas — began their study with two core questions: Do analysts detect when CEOs lie? And in what context are analysts more or less likely to pick it up?
The team looked at linguistic patterns found to be indicative of lying among CEOs: distancing language, for example, tends to be associated with deception. Previous studies of the topic used regression analyses, which detected deception with an accuracy rate of about 65%. The researchers instead opted for a machine learning approach, identifying instances in which CEOs committed very serious fraud and building a matched sample of similar CEOs to train the model.
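To make the idea concrete, here is a minimal sketch (not the authors' model) of scoring one linguistic cue the study mentions: distancing language. Deceptive speakers tend to use fewer first-person singular pronouns, so this toy example flags transcripts whose first-person rate falls below a threshold. The word list, threshold, and sample sentences are all invented for illustration.

```python
# Toy illustration of a "distancing language" cue, one of the linguistic
# patterns associated with deception. Not the researchers' actual model:
# the pronoun list and threshold below are arbitrary assumptions.

def first_person_rate(text: str) -> float:
    """Fraction of words that are first-person singular pronouns."""
    words = text.lower().split()
    if not words:
        return 0.0
    first_person = {"i", "me", "my", "mine", "myself"}
    return sum(w.strip(".,!?") in first_person for w in words) / len(words)

def distancing_flag(text: str, threshold: float = 0.02) -> bool:
    """Flag a transcript as 'distancing' if first-person usage is very low."""
    return first_person_rate(text) < threshold

candid = "I reviewed the numbers myself and I stand behind my guidance."
evasive = "The company delivered results consistent with the plan that was communicated."

print(distancing_flag(candid))   # False: first-person rate well above threshold
print(distancing_flag(evasive))  # True: no first-person pronouns at all
```

A real classifier would combine many such cues as features; this shows only the flavor of one of them.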
“We ran a host of robustness checks, and it would be surprising at times how accurate it was,” Hyde says. “My goal was to get an accuracy of about 70%, but we got 85%. The model exceeded our expectations.”
After the machine learning platform was designed, the researchers connected its findings with how analysts reacted. The study showed that analysts by and large reward CEOs for deception. But the researchers also discovered a "boy who cried wolf" effect: analysts catch on over time if they are repeatedly lied to. Lastly, they found that analysts with the highest reputations are the slowest to pick up on deception — suggesting that investors who suspect a company is lying should look to analysts who are less well known, rather than those considered all-stars.
“All of these effects are being driven by our general human tendency to assume that people are being honest with us,” Hyde says. “(Star analysts’) prestige exaggerates the bias. They assume even more that people won’t be lying to them; there’s a level of ego that comes in.”
The researchers also created a second model to measure how suspicious CEOs' words sound, finding that the relationship between sounding suspicious and actually lying is weak. According to Hyde, this demonstrates the power of machine learning models and what they can capture within linguistic patterns that human intuition misses.
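The kind of check behind that second finding can be sketched as a simple correlation between a suspiciousness score and whether the CEO actually lied. The scores and labels below are hypothetical data invented for illustration; a near-zero correlation is the pattern the study describes.

```python
# Illustrative sketch: how weakly a "suspiciousness" score can track actual
# deception. All data below is made up for demonstration purposes.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: suspicion scores (0-1) vs. whether the CEO lied (1/0).
suspicion = [0.9, 0.2, 0.7, 0.4, 0.8, 0.3, 0.6, 0.5]
lied      = [1,   0,   0,   1,   0,   1,   1,   0]

r = pearson(suspicion, lied)
print(round(r, 2))  # near zero in this toy data: sounding suspicious != lying
```

In the study's framing, a weak correlation like this is why human-style suspicion is a poor substitute for a trained model.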
Hyde says he expects more analysts to incorporate linguistic analysis into their reports to provide a more accurate picture of a firm, helping to curb deception by people in power and the billions of dollars lost to corporate fraud every year. He cautioned, however, that the machine learning models are not perfect and can produce false positives or false negatives. Another warning: leaders could learn to run their speeches through these platforms and adjust their language to better hide their deceit.
“You still need a human component here,” Hyde says. “You still need to investigate what’s actually been said, and not just take that algorithm at face value.”