
Q&A with an Expert: The SEC is Developing Tools That Use XBRL Data to Discover Accounting Anomalies and Improve Financial Disclosures

Merrill Disclosure Solutions | April 09, 2013

An interview with Craig M. Lewis, SEC Division of Risk, Strategy, and Financial Innovation

Craig M. Lewis is Director and Chief Economist of the Division of Risk, Strategy, and Financial Innovation (the SEC's think tank, also known as RiskFin or RSFI) and is on leave as a professor of finance at Vanderbilt University. In a speech he gave in December 2012, Risk Modeling At The SEC: The Accounting Quality Model, Dr. Lewis stated that the RSFI Office of Quantitative Research is “developing cutting-edge ways to integrate data analysis into risk monitoring.” The mining of XBRL data is a key part of this work. He explained that his SEC team's early success has “fed our ambition” about what the SEC can do to create technologically advanced, data-driven monitoring programs.


Dimensions spoke by telephone with Dr. Lewis to learn more about the accounting quality model, or AQM (a set of quantitative analytics modeling tools that the SEC is designing to review filings), and about the role of XBRL in this application. The accounting quality model will search for financial statements that “appear anomalous” and will automatically flag them for review by an examiner. The model is not expected to be fully implemented until the end of 2013.

What is this new automated tool that the SEC is developing for monitoring risk and discovering accounting anomalies? How do you expect it to work?

It is a predictive model that attempts to identify firms that have made unusual accounting choices relative to their peer group. A firm has a significant amount of discretion in the way it chooses to report elements for financial statement purposes.


The degree to which, let's say, a CFO uses this discretion can have an impact on the numbers that are actually reported. For example, consider somebody who wants to smooth earnings. If a firm is having a down year and feels that its actual numbers are lower than its peer group's, it may look for ways to boost income, perhaps by not recording as much bad debt expense.

There is a significant amount of discretion around how someone could choose to accrue for bad debt expense. One way to do that is to recognize the type of year a firm is having. Suppose it is a bad year. A manager may simply say: “Well, it's a bad year; let's take something out of the accrual bank.” To do this, one would then say: “These credits look solid to us; we don't think we're going to lose much.” In a good year, you look at the exact same set of accounts and you say: “You know something? A lot of these credits are likely to be unable to pay us, so we want to take a little more bad debt expense.” This allows you to make a deposit to the accrual bank.

So there is a mechanism for raising income in bad years and lowering income in good years. The reason why firms are interested in doing both is that all accounting entries eventually reverse. To be able to over-report income in a particular year, you actually have to have something in the bank that you can take out and use when you need it, and firms can use the way they accrue for certain liabilities to accomplish this, or the way they recognize revenue, for that matter.

So it is very much a peer-comparison type of tool.

Yes, the way you would identify unusual accounting choices is to compare them to those of your peers, because firms that operate in the same line of business tend to have very similar accounting reporting issues and make similar choices about how they report elements. If you are an oil and gas producer, there are a lot of accounting rules about how oil and gas producers have to book income, account for reserves, etc. If you are a software manufacturer, those same rules would not apply to you, so you would not want to compare a software manufacturer against an oil and gas company.

The tool has been referred to as “RoboCop.” Does that make it sound too automated?

It is an automated process. But the RoboCop reference, I thought, seems to be based more on the idea of a fraud-detection model (the robot police coming out and busting the fraudsters) as opposed to what I was hoping it would do, which is to simply be a tool to improve the quality of financial reports. But it is a fully automated system that effectively takes a firm's filing the day it comes in, processes it, and then keeps it in the database so that somebody who is interested in looking at a report on that company would be able to do so within 24 hours of the filing being posted on EDGAR.

Would you be able to do this if companies were not tagging their financials with XBRL?


It would not be as useful a tool as it otherwise would be. My reasoning is that the tool could be developed using commercial databases; actually, the prototype was developed around commercial databases because many companies were not required to make XBRL filings until last year. I believe that is an issue, because the commercial databases contain only a subset of the filers. To be a useful tool for the SEC, it has to be something that can be applied broadly to the entire filer space. So XBRL is critical to the development of the tool simply because it allows us to have complete coverage.

How will this monitoring tool parse the XBRL data to select the companies whose financials need extra review?

The tool itself tries to model what is known in the financial-accounting literature as discretionary accruals. It is a predictive model that estimates how much of the total accruals that a company reports are discretionary. (Total accruals are the difference between cash flows and net income.)
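The accruals idea Dr. Lewis describes can be sketched in a few lines. This is only an illustrative toy that uses a peer-median benchmark rather than the SEC's actual regression-based factor model; all firm names and figures are hypothetical:

```python
# Hypothetical sketch of peer-relative discretionary accruals.
# The real AQM is a richer predictive model; this only shows the
# shape of the computation: total accruals vs. a peer benchmark.
from statistics import median

# (net_income, operating_cash_flow, total_assets) per filer
peer_group = {
    "OilCoA": (120.0, 150.0, 1000.0),
    "OilCoB": (80.0, 110.0, 900.0),
    "OilCoC": (95.0, 70.0, 800.0),   # accrues far more than peers
}

def total_accruals(ni, cfo, assets):
    # Total accruals: net income minus operating cash flow,
    # scaled by assets so firms of different sizes compare.
    return (ni - cfo) / assets

ratios = {f: total_accruals(*v) for f, v in peer_group.items()}
benchmark = median(ratios.values())

# The "discretionary" part is the deviation from the peer norm.
discretionary = {f: r - benchmark for f, r in ratios.items()}
flagged = max(discretionary, key=lambda f: abs(discretionary[f]))
print(flagged)  # the filer most anomalous relative to its peers
```

In this toy example the firm whose accruals sit farthest from the peer median is the one surfaced for review, which mirrors the peer-comparison logic described above.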

Those are actual XBRL tags?

Those would be consolidated tags. Net income would be an XBRL tag. You also could obtain cash flows from the XBRL filing. What we are doing is building this model from factors. One of the exercises we need to go through is to take the taxonomy and synthesize it so that we can compress the actual taxonomy choices that companies make, and the way they use the taxonomy, into high-level financial statements. By converting all those choices that firms make about how to tag elements, we are going to look at more of a high-level financial statement presentation. The factors we develop will be based on this stylized set of financial statements, and that gives us the ability to really compare firms.

Companies develop their own XBRL extensions. Does that cause a problem in your system?

To the extent that firms use unique extensions, we have to make decisions about how to collapse those extensions into the way we represent these stylized financial statements. Is it a problem? No. Is it something we are addressing in the way we actually build the model out? Yes.

One of the things we have noticed is that the longer a firm is actually making XBRL filings, the fewer unique extensions they tend to choose. So there is a learning curve that seems to be going on, where filers may begin by using unique extensions, but over time, as they become more comfortable with using the taxonomy, the number of those unique extensions tends to collapse.
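The collapsing of unique extensions into a stylized presentation can be pictured as a mapping from filer-specific tags to standard line items. The tag names and mapping below are invented for illustration and are not the SEC's actual method:

```python
# Hypothetical sketch: collapse filer-specific XBRL extension tags
# into stylized, comparable line items. "acme:" tags stand in for
# a filer's unique extensions; the mapping itself is invented.
STANDARD_LINE_ITEMS = {
    "us-gaap:Revenues": "Revenue",
    "acme:SubscriptionAndServicesRevenue": "Revenue",  # unique extension
    "us-gaap:NetIncomeLoss": "NetIncome",
    "us-gaap:CostOfRevenue": "CostOfRevenue",
}

def collapse(tag):
    # Extensions with no known mapping would need an analyst decision.
    return STANDARD_LINE_ITEMS.get(tag, "NeedsReview")

print(collapse("acme:SubscriptionAndServicesRevenue"))  # Revenue
print(collapse("acme:SomeNovelConcept"))  # NeedsReview
```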

While developing this tool, have you had any concerns about the quality and consistency of XBRL tagging beyond the issue of extensions? Are there other issues with the quality of XBRL filings that are making it harder to develop this tool and the analytics around it?


What you will find is that anyone who uses structured financial statement data will be required to come up with a rule-based approach to dealing with outliers. Whether there are errors in the taxonomy itself, whether there are errors in the way the XBRL data is being tagged, we have to come up with an approach that will allow us to identify unusual elements. Even if you use the commercial databases, you make these same choices. There are ways of dealing with outliers in the data that are fairly standard among people who do empirical corporate finance. We are taking a similar approach: using our expertise as financial economists to create similar rules for the XBRL data.

To take it a step farther, though, with respect to the quality of the data, now that there is an actual liability associated with inaccurate XBRL statements, I fully expect quality to improve. My view is that the real solution to this is inline XBRL: creating a document where the tags are embedded directly into your filing so that you do not have to have two separate documents. This seems to be where the industry is moving, and I fully support that.

If the industry does move to inline XBRL, will that make your model easier or harder to use?

As long as it is tagged, it is structurally the same data. The only thing that inline XBRL would do for us is to [reduce] the potential error rate that you might see in tagged data. So any time you can remove a step in the process where there is an opportunity for additional errors, it will improve the quality of the data you get.

How can companies be sure their XBRL filings are not automatically flagged in the monitoring system you are developing?

I would say, check your work. If you make a mistake in how you record an element, that would affect the score you get from the model and might make you more likely to be pulled up for a review. I would argue, correctly so. The model will tell the reviewer which factor was contributing to the score, and if one factor comes out and has a large impact on the score and can be traced back to a recording error in the XBRL data, you will be flagged because you made a mistake in providing us your XBRL data. I do not view that as a problem.

Some companies look at the resources involved in preparing XBRL documents and claim it is not worth the expense and time involved because nobody is using XBRL data. What is the message you want to get out about how the SEC is and will be using XBRL?

Let me preface my remark with some observations about the data. I do not think the data has been around long enough to actually be an incredibly useful tool for financial statement analysis. Anybody who actually wants to analyze financial statements needs a time series, and a few years is an insufficient time series to do meaningful analysis. The lack of uptake by people outside the SEC is simply because XBRL is still in the development phase. Once you get a long enough time series, you will find people will start to use this tool. It is a chicken-and-egg problem. You need sufficient data before you can find it useful. When people say it is not being used, they are missing the point. In its current form, it is not as useful as it will be five years from now.


What I like to say is that the SEC is using this data. It seems natural to me that we would want to use this data. But just like the observation I made about utility for individual investors, the same concern is there for us.

There is a learning curve when companies start to use this data. The early data will have errors. Over time, as those filers become more experienced with XBRL, their error rate goes down, and their data becomes significantly more useful. The SEC was really just allowing firms to have a window to figure out how to tag data. Now that the window has shut, we are just going to start to use the data. I view it as the natural outcome of giving filers the opportunity to figure out what they are doing with XBRL.

One of the interesting things is that new filers have significantly lower error rates than the original filers, and that is because so many of them use third-party vendors to help them with their filings.

Once a filing has been flagged by the tool, what happens next?

The tool can be used in a number of different ways. One way is to possibly assist in the scheduling of firms to examine. There is a requirement under the Sarbanes-Oxley Act that the SEC needs to examine every 10-K filing at least once every three years. By risk-scoring the filers, you can deploy resources within the SEC efficiently by directing staff to filers that might benefit from immediate attention. So that would require you to essentially develop a database of scores that you would rank.
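The scheduling use described here amounts to ranking a database of risk scores so the highest-scoring filers are reviewed first. A minimal sketch with hypothetical filers and scores:

```python
# Hypothetical sketch: rank filers by risk score to prioritize
# review scheduling. Names and scores are invented.
risk_scores = {"FilerA": 0.42, "FilerB": 0.91, "FilerC": 0.17}

# Highest-scoring filers get attention first; every filer is still
# reviewed at least once within the statutory three-year cycle.
schedule = sorted(risk_scores, key=risk_scores.get, reverse=True)
print(schedule)  # ['FilerB', 'FilerA', 'FilerC']
```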

Once the schedule is set, we will generate customized, company-specific reports. The report will hopefully identify areas where we think it would be most natural to focus review time-because something unusual might be happening with respect to particular accounting choices. So [the Division of] Corporation Finance may have one use for it, which is to improve the quality of the corporate disclosures; while the Division of Enforcement may have an independent need for the tool. I also see an interplay between the two. There are a lot of ways internally in which the tool can be used.

What is your vision for how the system will improve financial disclosures and prevent fraud?

It is a tool, not the solution. It may be used by a particular team in Corporation Finance to identify areas that warrant further attention. That could be done for all filers. If a problem turns out to be actionable, and possibly something fraudulent, I see it being referred to Enforcement for additional investigation.

NOTE: The views expressed here are entirely those of Dr. Lewis and do not necessarily reflect those of the Securities and Exchange Commission (SEC) or any other organization.

