
Interview With An Expert: XBRL’s Potential As A Revolutionary Tool

Merrill Disclosure Solutions | October 15, 2013

An interview with Prof. Dhananjay Gode, Leonard N. Stern School of Business, NYU


At the XBRL and Financial Analysis Conference held earlier in 2013 at Baruch College's Zicklin School of Business, Prof. Dhananjay (Dan) Gode gave one of the keynote addresses. [For a conference summary, see the April 2013 issue of Dimensions.] In his talk, Dr. Gode praised the purpose and concept of XBRL, but he also emphasized that the SEC and America's companies, in their respective ways, still have much to do to improve the accuracy of XBRL data in financial communications.

Dr. Gode is a Clinical Associate Professor of Accounting at New York University's Stern School of Business. He teaches courses in corporate financial accounting and pursues research interests in financial analysis, legal liability of firms, valuation, managerial accounting, and performance measurement. In a recent telephone interview with Dimensions, Dr. Gode elaborated on his view of XBRL data: both the current progress in its use and its potential as a revolutionary tool.

Dr. Gode, do you think XBRL tagging by companies is getting better? Are companies making fewer mistakes?

It is getting better. XBRL is the way of the future. It is not going to die or go away. But it is a matter of the SEC making the commitment and saying to companies: “You can't make mistakes in your XBRL.” There must be consequences for using the wrong tags. If the SEC writes a comment letter that ends up on the CFO's desk, of course it will get attention. It is a matter of how important you make it. Is this a side thing that companies should do, or is this something that the SEC will penalize you for if you do not?

XBRL is the way of the future

The first stage is telling companies: “Please provide XBRL data. Do the best you can.” The second stage is saying: “You are required to do this, and you are required to do this accurately. And if you do not do it, that's a serious problem.” The SEC has not gone into the second stage yet because, I assume, they wanted to tread carefully and to make sure that they were not asking companies to do something that companies were not technologically ready to do.

What common problems do you see in XBRL data? Have you noticed trends?
[One problem is] companies using the wrong tag or coming up with too many custom tags, so you do not know which tag to use. The user interface on tag selection, tag standardization, and enforcement of that whole discipline is simply not there. If I want to get, let's say, product-warranty data on companies, there are different tags, and I do not know how to search for the tags that I am supposed to pick up. Suppose I just want to find out which companies have the most accrued warranties as liabilities on the balance sheet. I just don't know which tags to go and look at. That makes it extremely hard. That data is not available from many of the vendors, because data is available only if they think it is important and choose to include it in their standard set of data points. The appeal of XBRL is that you will get far more granular data. But that has not translated into something that is directly usable.
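To see the tag-selection problem concretely, here is a minimal sketch (assuming a locally downloaded XBRL instance document; the file name is hypothetical) that lists every reported fact whose element name mentions "warranty," which is roughly what a user must sift through before knowing which tag to rely on:

import xml.etree.ElementTree as ET

def warranty_facts(instance_path):
    """List (element name, context, value) for every fact whose name mentions 'warranty'."""
    facts = []
    for elem in ET.parse(instance_path).getroot().iter():
        # In an XBRL instance, reported facts carry a contextRef attribute.
        if "contextRef" not in elem.attrib:
            continue
        local_name = elem.tag.split("}")[-1]  # strip the "{namespace}" prefix
        if "warranty" in local_name.lower():
            facts.append((local_name, elem.attrib["contextRef"], (elem.text or "").strip()))
    return facts

for name, context, value in warranty_facts("example-10k-instance.xml"):  # hypothetical file
    print(f"{name:55s} {context:25s} {value}")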

Why do you have concerns about XBRL accuracy?
Financial statement data is audited by reputable auditors; XBRL accuracy is not audited. Companies are supposed to follow proven practices, but the accuracy is simply not there, and it will never get there unless it becomes part of what auditors certify is done properly. It is always given to somebody or outsourced or something. And even if the data is 95% accurate, maybe you can use it in a classroom; but if you are doing any professional work, 95% accuracy is not good enough; even 98% is not good enough. If you are always second-guessing the data you are looking at, it is not particularly efficient. The promise is that it will be a lot cheaper than using some of these [data-providing] vendors. But right now it's not there yet. It's not audited; that's the problem.

So XBRL is revolutionary in that sense

How extensive are the XBRL tagging inaccuracies that you see? Do you see them in only a small percentage of filings, or only in particular topic areas?
I have tried to get, for example, unremitted foreign earnings data from companies. One company's 10-K, which I had read, said $17 billion, but the XBRL tag came out with $4.3 billion. Now, I do not analyze the whole database; I'm just a user. If I see something like that, I get annoyed. I do not know how many more mistakes there are in the whole database. Even if you try to do four things and you find mistakes in each of those four queries, it leaves you quite a bit unsettled. That's [a] problem.

The plus is that I could not have gotten this unremitted foreign earnings data by looking at the regular fact set from data providers. With XBRL, you get much, much more granular data. So once the accuracy comes in, there will be a tipping point. Right now the tipping point has not been reached because of concerns about accuracy. But this is the way of the future. This is going to lead to all kinds of granular analysis that we simply could not do earlier.

Suppose I wanted to study unremitted foreign earnings among hundreds of companies. The only way I could have done this in the past [was to] wait for some investment bank to hire a bunch of analysts and have them pore through the 10-Ks, collect all the data, and put it in a spreadsheet. That's the only way this could have been analyzed: a lot of manual work. Or I would have to wait for data-providers to put it in their database. Without those two ways, I would not have had the time to go through hundreds of financial statements. So XBRL is revolutionary in that sense.
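As a rough sketch of that kind of cross-company screen (the folder layout and the concept name below are illustrative assumptions, not verified tags), one could loop over a set of downloaded XBRL instance documents and pull every fact reported under a single element name:

import glob
import xml.etree.ElementTree as ET

CONCEPT = "UndistributedEarningsOfForeignSubsidiaries"  # illustrative element name, not a verified tag

def facts_for_concept(instance_path, concept):
    """Return (value, contextRef) pairs for every fact tagged with the given element name."""
    hits = []
    for elem in ET.parse(instance_path).getroot().iter():
        if elem.tag.split("}")[-1] == concept and "contextRef" in elem.attrib:
            hits.append(((elem.text or "").strip(), elem.attrib["contextRef"]))
    return hits

for path in glob.glob("filings/*-instance.xml"):  # hypothetical local folder of downloaded instances
    for value, context in facts_for_concept(path, CONCEPT):
        print(f"{path}: {value} ({context})")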

What do you think are the design limitations of the current XBRL taxonomy and document-creation process?
Let's say I am looking at an XBRL download for unremitted foreign earnings. Suppose I doubt that the company is showing the right number; say I think that $4.3 billion is incorrect. I should be able to click on that XBRL number and go right to the 10-K text description so that I can read the actual 10-K and see whether the number is accurate. I could then get some comfort from the context. Also, many times you get to that data point and want to read the rest of what is disclosed with it. That does not exist right now. There is no link back to the text in which the data point was embedded.

Now [the SEC is] discussing inline XBRL. If the 10-K and the XBRL document were linked so that I could click between them, that would be great. But it is not there right now.

What can companies do to improve the quality of their XBRL tagging?
Companies talk to their large investors, and large investors have resources and access to data-providers, so companies do not really hear from smaller outfits or academics who could benefit from more accurate data. I'm not sure companies are hearing the concerns that people have with XBRL as much as they might. Once XBRL tools become better and people really start using XBRL, then companies will hear from users: “Hey, your XBRL tag is wrong, and that misled us.” Right now, I doubt that companies get any feedback from end users about XBRL display or choice of tags and inaccuracies in data.

In fact, I remember talking to a CFO who said they put up an XBRL document online, and nobody downloaded it. No small user is going to write their own code to parse XBRL. This has to be done by intermediaries. Those intermediaries may notice some mistakes and get back to companies, but I doubt that the companies really listen. So I do not think this is on the radar screen of CFOs-that their XBRL data has errors. It's something that lower-level bureaucracy deals with. It does not percolate to the senior management.

Do the data-providers always have accurate data? Are there mistakes in their data?
Fewer mistakes. Of course, they have mistakes. That's why investment banks, if they are doing a deal, will go to the actual 10-K or 10-Q and double-check data. But for running an initial screen, what the data-providers provide is good enough.

XBRL is not human-readable data

Is this why financial data aggregators and intermediaries play such a crucial role with XBRL data?
They are absolutely essential. XBRL in its raw form requires somebody to write a parser to make sense of it. It is not human-readable data. You can download an XBRL document, but you cannot really read it. It has to be made more user friendly by somebody who provides a front end.
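A minimal sketch of the kind of front end he means (assuming a standard xbrli instance document; the file name is hypothetical): resolve each fact's contextRef to its reporting period so that the raw facts read as a flat, human-readable listing.

import xml.etree.ElementTree as ET

XBRLI = "{http://www.xbrl.org/2003/instance}"  # standard XBRL instance namespace

def readable_facts(instance_path):
    """Return (concept, period, value) rows with each contextRef resolved to its period."""
    root = ET.parse(instance_path).getroot()

    # Map each context id to a readable period string (an instant or a start/end range).
    periods = {}
    for ctx in root.iter(XBRLI + "context"):
        period = ctx.find(XBRLI + "period")
        if period is None:
            continue
        instant = period.find(XBRLI + "instant")
        if instant is not None:
            periods[ctx.get("id")] = instant.text
        else:
            start, end = period.find(XBRLI + "startDate"), period.find(XBRLI + "endDate")
            if start is not None and end is not None:
                periods[ctx.get("id")] = f"{start.text} to {end.text}"

    rows = []
    for elem in root.iter():
        ref = elem.attrib.get("contextRef")
        if ref:
            rows.append((elem.tag.split("}")[-1], periods.get(ref, ref), (elem.text or "").strip()))
    return rows

for concept, period, value in readable_facts("example-10q-instance.xml"):  # hypothetical file
    print(f"{concept:55s} {period:28s} {value}")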

How can the SEC better encourage companies to produce reliable XBRL data? What further incentives can it provide?
The SEC wields a stick, not a carrot. They do not pay companies; they penalize them. You just have to penalize companies, not financially, but through a letter that embarrasses those with XBRL mistakes in their financial statements. If the board or the CFO of the company hears about it, they will provide appropriate resources. It can be done. But it has not received the priority it should.

It's a chicken-and-egg problem. If you ask companies to spend lots of money on XBRL, they will say, “Well, who uses it?” But the users say, “Well, I'm not going to really use it unless I know the data is accurate.”

Maybe the push will come from data-providers who will point out to the companies that there are errors in their XBRL data. But I do not see that happening. The SEC has to take leadership and make us believe that this will be done properly, and perhaps even provide their own interface.

I use EDGAR all the time to download documents, and I find it very fast and easy to use. If the SEC itself provided an XBRL front end, that would be remarkable and would really make using XBRL much easier. That would really help people. I think XBRL would be heavily used if the SEC did that.

What progress do you foresee in the XBRL taxonomy and the use of XBRL data during the next ten years?
It depends on what leadership the SEC decides to take. I think this is the way of the future. XBRL is much more efficient. It provides a lot more granular data. If the SEC decides to provide a front end, it will be a lot cheaper. They can do some really good stuff if they decide to do [so].

XBRL is much more efficient

I see XBRL getting better. Eventually companies will standardize XBRL. There will be pressure to make it a lot less labor intensive. I think the adoption has been slower than anybody would have predicted six years ago. I do not think it will be a revolution unless the SEC decides to make it happen. But XBRL will get better. The errors keep getting pointed out, and companies will keep fixing them. I expect XBRL to be much more widespread in 10 years than it is now. In the Accounting Standards Codification, there is a subsection on XBRL, so the standard-setters are also paying attention. I see more improvement all along.

I think the problems with XBRL are not structural but more procedural. XBRL is a promising technology. The procedures need to be fixed (tagging, consistency, interfaces), but XBRL data is highly useful. You just have to do it right.

The views expressed here are entirely Prof. Gode's and do not necessarily reflect those of Stern School of Business or any other organization.


