574 Shields Against Validity Challenges in Plato's Cave
An Appeal for Replication and Other Commentaries/Dialogs in an Electronic Journal: Supplemental Commentaries and Replication Abstracts
Bob Jensen at Trinity University

With a Rejoinder from the 2010 Senior Editor of The Accounting Review (TAR), Steven J. Kachelmeier

Accountics is the mathematical science of values.
Charles Sprague [1887] as quoted by McMillan [1998, p. 1]
http://www.trinity.edu/rjensen/395wpTAR/Web/TAR395wp.htm#_msocom_1

Tom Lehrer on Mathematical Models and Statistics ---
http://www.youtube.com/watch?v=gfZWyUXn3So
You must watch this to the ending to appreciate it.

"David Ginsberg, chief data scientist at SAP, said communication skills are critically important in the field, and that a key player on his big-data team is a “guy who can translate Ph.D. to English. Those are the hardest people to find.”
James Willhite

The second is the comment that Joan Robinson made about American Keynesians: that their theories were so flimsy that they had to put math into them. In accounting academia, the shortest path to respectability seems to be to use math (and statistics), whether meaningful or not.
Professor Jagdish Gangolly, SUNY Albany

David Johnstone asked me to write a paper on the following:
"A Scrapbook on What's Wrong with the Past, Present and Future of Accountics Science"
Bob Jensen
February 19, 2014
SSRN Download:  http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2398296 

Abstract

For operational convenience I define accountics science as research that features equations and/or statistical inference. Historically, there was a heated debate in the 1920s as to whether the main research journal of academic accounting, The Accounting Review (TAR), which commenced in 1926, should be an accountics journal with articles that mostly featured equations. Practitioners and teachers of college accounting won that debate.

TAR articles and accountancy doctoral dissertations prior to the 1970s seldom had equations.  For reasons summarized below, doctoral programs and TAR evolved to the point where, by the 1990s, having equations became virtually a necessary condition for a doctoral dissertation and for acceptance of a TAR article. Qualitative, normative, and case-method methodologies disappeared from doctoral programs.

What’s really meant by “featured equations” in doctoral programs is merely symbolic of the fact that North American accounting doctoral programs pushed out most of the accounting to make way for the econometrics and statistics that are now the keys to the kingdom for promotion and tenure in accounting schools ---
http://www.trinity.edu/rjensen/Theory01.htm#DoctoralPrograms

The purpose of this paper is to make a case that the accountics science monopoly over our doctoral programs and published research is seriously flawed, especially in its lack of concern about replication and its focus on simplified artificial worlds that differ too much from reality to yield findings of much relevance to teachers and practitioners of accounting. Accountics scientists themselves became a Cargo Cult.

Why Do Accountics Scientists Get Along So Well?
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm

Why Pick on TAR and the Cargo Cult?

Real-Science Versus Pseudo Science

Why the “Maximizing Shareholder Value” Theory of Corporate Governance is Bogus

Purpose of Theory:  Prediction Versus Explanation

TAR versus AMR and AMJ and Footnotes of the American Sociological Association

Introduction to Replication Commentaries

A May 2012 Commentary in TAR 

Over Reliance on Public Databases and Failure to Error Check

Consensus Seeking in Real Science Versus Accountics Science  

Are accountics scientists more honest and ethical than real scientists?

TAR Versus JEC

Robustness Issues 

Accounting Research Versus Social Science Research

The Cult of Statistical Significance: How Standard Error Costs Us Jobs, Justice, and Lives ---
http://www.cs.trinity.edu/~rjensen/temp/DeirdreMcCloskey/StatisticalSignificance01.htm

Mathematical Analytics in Plato's Cave
TAR Researchers Playing by Themselves in an Isolated Dark Cave That the Sunlight Cannot Reach

Thank You Dana Hermanson for Putting Accounting Horizons Back on Track

Increasing Complexity of the World and Its Mathematical Models

Is Anecdotal Evidence Irrelevant?

Statistical Inference vs Substantive Inference

High Hopes Dashed for a Change in Policy of TAR Regarding Commentaries on Previously Published Research

Low Hopes for Less Inbreeding in the Stable of TAR Referees

Rejoinder from the Current Senior Editor of TAR, Steven J. Kachelmeier

Do financial incentives improve manuscript quality and manuscript reviews?

Case Research in Accounting

The Sad State of Accounting Doctoral Programs in North America

Simpson's Paradox and Cross-Validation
What happened to cross-validation in accountics science research?

Citation Fraud:  Why are accountics science journal articles cited in other accountics science research papers so often?

Common Accountics Science and Econometric Science Statistical Mistakes ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsScienceStatisticalMistakes.htm


Strategies to Avoid Data Collection Drudgery and Responsibilities for Errors in the Data

Obsession With R-Squared

Drawing Inferences From Very Large Data-Sets

The Insignificance of Testing the Null

Zero Testing for Beta Error

Scientific Irreproducibility

Can You Really Test for Multicollinearity?  

Models That aren't Robust

Simpson's Paradox and Cross-Validation

Reverse Regression

David Giles' Top Five Econometrics Blog Postings for 2013

David Giles Blog

A Cautionary Bedtime Story

574 Shields Against Validity Challenges in Plato's Cave ---
http://www.trinity.edu/rjensen/TheoryTAR.htm

Real Science versus Pseudo Science ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm#Pseudo-Science

How Accountics Scientists Should Change: 
"Frankly, Scarlett, after I get a hit for my resume in The Accounting Review I just don't give a damn"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm
One more mission in what's left of my life will be to try to change this
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm

"How Non-Scientific Granulation Can Improve Scientific Accountics"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsGranulationCurrentDraft.pdf

Gaming for Tenure as an Accounting Professor ---
http://www.trinity.edu/rjensen/TheoryTenure.htm
(with a reply about tenure publication point systems from Linda Kidwell)

The AAA's Pathways Commission Accounting Education Initiatives Make National News
Accountics Scientists Should Especially Note the First Recommendation

Conclusion and Recommendation for a  Journal Named Supplemental Commentaries and Replication Abstracts

Appendix 1:  Business Firms and Business School Teachers Largely Ignore TAR Research Articles

Appendix 2:  Integrating Academic Research Into Undergraduate Accounting Courses

Appendix 3:  Audit Pricing in the Real World

Appendix 4:  Replies from Jagdish Gangolly and Paul Williams 

Appendix 5:  Steve Supports My Idea and Then Douses it in Cold Water

Appendix 6:  And to Captain John Harry Evans III,  I salute and say “Welcome Aboard.”

Appendix 7:  Science Warriors' Ego Trips

Appendix 8:  Publish Poop or Perish
                      We Must Stop the Avalanche of Low-Quality Research

Appendix 9:  Econtics:  How Scientists Helped Cause Financial Crises (across 800 years)

Appendix 10:  Academic Worlds (TAR) vs. Practitioner Worlds (AH)

Appendix 11:  Insignificance of Testing the Null

Appendix 12:  The BYU Study of Accounting Programs Ranked by Research Publications

Appendix 13:  What is "the" major difference between medical research and accounting research published in top research journals?

Appendix 14:  What are two of the most  Freakonomish and Simkinish processes in accounting research and practice?

Appendix 15:  Essays on the State of Accounting Scholarship  

Appendix 16:  Gasp! How could an accountics scientist question such things? This is sacrilege!

Appendix 17:  A Scrapbook on What's Wrong with the Past, Present and Future of Accountics Science

Acceptance Speech for the August 15, 2002 American Accounting Association's Outstanding Educator Award --- http://www.trinity.edu/rjensen/000aaa/AAAaward_files/AAAaward02.htm


Essays on the State of Accounting Scholarship ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm#Essays

The Sad State of Economic Theory and Research ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm#EconomicResearch 

The Cult of Statistical Significance:  How Standard Error Costs Us Jobs, Justice, and Lives, by Stephen T. Ziliak and Deirdre N. McCloskey (Ann Arbor:  University of Michigan Press, ISBN-13: 978-0-472-05007-9, 2007)
http://www.cs.trinity.edu/~rjensen/temp/DeirdreMcCloskey/StatisticalSignificance01.htm

Page 206
Like scientists today in medical and economic and other sizeless sciences, Pearson mistook a large sample size for the definite, substantive significance---evidence, as Hayek put it, of "wholes." But it was, as Hayek said, "just an illusion." Pearson's columns of sparkling asterisks, though quantitative in appearance and as appealing as is the simple truth of the sky, signified nothing.

In Accountics Science R2 = 0.0004 = (-.02)(-.02) Can Be Deemed a Statistically Significant Linear Relationship ---
http://www.cs.trinity.edu/~rjensen/temp/DeirdreMcCloskey/StatisticalSignificance01.htm
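To make the arithmetic concrete, here is a minimal simulated sketch (not drawn from any accounting dataset; the sample size and slope below are made up for illustration) of how a correlation as weak as r = -0.02, i.e., R² = 0.0004, becomes "statistically significant" once the sample is large enough:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# A huge sample with an essentially meaningless linear relationship:
# the true slope is tiny, so r is about -0.02 and R^2 is about 0.0004.
n = 1_000_000
x = rng.normal(size=n)
y = -0.02 * x + rng.normal(size=n)   # the "signal" explains roughly 0.04% of the variance

result = stats.linregress(x, y)
print(f"r = {result.rvalue:.3f}, R^2 = {result.rvalue**2:.4f}, p-value = {result.pvalue:.1e}")
# With n this large the p-value is astronomically small, so the slope is
# "statistically significant" even though it explains almost nothing.
```

The point is the Ziliak and McCloskey point quoted above: a sparkling p-value driven mostly by sample size says nothing about substantive significance.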

 

"So you want to get a Ph.D.?" by David Wood, BYU ---
http://www.byuaccounting.net/mediawiki/index.php?title=So_you_want_to_get_a_Ph.D.%3F

Do You Want to Teach? ---
http://financialexecutives.blogspot.com/2009/05/do-you-want-to-teach.html

Jensen Comment
Here are some added positives and negatives to consider, especially if you are currently a practicing accountant considering becoming a professor.

Accountancy Doctoral Program Information from Jim Hasselback ---
http://www.jrhasselback.com/AtgDoctInfo.html 

Why must all accounting doctoral programs be social science (particularly econometrics) "accountics" doctoral programs?
http://www.trinity.edu/rjensen/theory01.htm#DoctoralPrograms

What went wrong in accounting/accountics research?
http://www.trinity.edu/rjensen/theory01.htm#WhatWentWrong

Bob Jensen's Codec Saga: How I Lost a Big Part of My Life's Work
Until My Friend Rick Lillie Solved My Problem
http://www.cs.trinity.edu/~rjensen/video/VideoCodecProblems.htm

One of the most popular Excel spreadsheets that Bob Jensen ever provided to his students ---
www.cs.trinity.edu/~rjensen/Excel/wtdcase2a.xls

 


I think a PhD seminar should focus on the dogged tradition in other disciplines of replicating original research findings. We usually think of the physical sciences for replication examples, although the social science research journals are getting more and more concerned about replication and validity. Interestingly, some areas of the humanities are dogged about replication, particularly historians. Much of historical research is devoted to validating historical claims. For example, see http://hnn.us/articles/568.html


 

How Accountics Scientists Should Change: 
"Frankly, Scarlett, after I get a hit for my resume in The Accounting Review I just don't give a damn"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm
One more mission in what's left of my life will be to try to change this
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm  

 

"The Absence of Dissent," by Joni J. Young, Accounting and the Public Interest 9 (1), 1 (2009); doi: 10.2308/api.2009.9.1.1 ---
Click Here

ABSTRACT:
The persistent malaise in accounting research continues to resist remedy. Hopwood (2007) argues that revitalizing academic accounting cannot be accomplished by simply working more diligently within current paradigms. Based on an analysis of articles published in Auditing: A Journal of Practice & Theory, I show that this paradigm block is not confined to financial accounting research but extends beyond the work appearing in the so-called premier U.S. journals. Based on this demonstration I argue that accounting academics must tolerate (and even encourage) dissent for accounting to enjoy a vital research academy. ©2009 American Accounting Association

June 15, 2010 reply from Paul Williams [Paul_Williams@NCSU.EDU]

Bob,
Thank you advertising the availability of this paper in API, the on line journal of the AAA Public Interest Section (which I just stepped down from editing after my 3+ years stint). Joni is one of the most (incisively) thoughtful people in our discipline (her paper in AOS, "Making Up Users" is a must read). The absence of dissent is evident from even casual perusal of the so-called premier journals. Every paper is erected on the same premises -- assumptions about human decision making (i.e., rational decision theory), "free markets," economic naturalism, etc. There is a metronomic repetition of the same meta-narrative about the "way the world is" buttressed by exercises in statistical causal analysis (the method of agricultural research, but without any of the controls). There is a growing body of evidence that these premises are myths -- the so-called rigorous research valorized in the "top" journals is built on an ideological foundation of sand.

Paul Williams paul_williams@ncsu.edu
 (919)515-4436

A Must Read Document
The Pathways Commission Implementing Recommendations for the Future of Accounting Education: The First Year Update
American Accounting Association
August 2013
http://commons.aaahq.org/files/3026eae0b3/Pathways_Update_FIN.pdf

Draft: August 3, 2010
http://commons.aaahq.org/files/8273566240/Overview_8_03_10.pdf

I hope some creative AECM and CPA-L threads emerge on this topic. In particular, I hope this document stimulates academic accounting research that is more focused on the needs of the business world and the profession (which was the main theme of Bob Kaplan’s outstanding plenary session on August 4 in San Francisco).

Note that to watch the entire Kaplan video ---
http://commons.aaahq.org/hives/531d5280c3/posts?postTypeName=session+video
I think the video is only available to AAA members.

Also note the AAA’s new Issues and Resources page ---
http://aaahq.org/resources.cfm

September 9, 2011 reply from Paul Williams

Bob,
I have avoided chiming in on this thread; have gone down this same road and it is a cul-de-sac.  But I want to say that this line of argument is a clever one.  The answer to your rhetorical question is, No, they aren't more ethical than other "scientists."   As you tout the Kaplan speech I would add the caution that before he raised the issue of practice, he still had to praise the accomplishments of "accountics" research by claiming numerous times that this research has led us to greater understanding about analysts, markets, info. content, contracting, etc.  However, none of that is actually true.  As a panelist at the AAA meeting I juxtaposed Kaplan's praise for what accountics research has taught us with Paul Krugman's observations about Larry Summer's 1999 observation that GAAP is what makes US capital markets so stable and efficient.  Of course, as Krugman noted, none of that turned out to be true.  And if that isn't true, then Kaplan's assessment of accountics research isn't credible, either.  If we actually did understand what he claimed we now understand much better than we did before, the financial crisis of 2008 (still ongoing) would not have happened.  The title of my talk was (the panel was organized by Cheryl McWatters) "The Epistemology of Ignorance."  An obsessive preoccupation with method could be a choice not to understand certain things-- a choice to rigorously understand things as you already think they are or want so desperately to continue to believe for reasons other than scientific ones. 

Paul

 


"Social Media Lure Academics Frustrated by Journals," by Jennifer Howard, Chronicle of Higher Education, February 22, 2011 ---
http://chronicle.com/article/Social-Media-Lure-Academics/126426/

Social media have become serious academic tools for many scholars, who use them for collaborative writing, conferencing, sharing images, and other research-related activities. So says a study just posted online called "Social Media and Research Workflow." Among its findings: Social scientists are now more likely to use social-media tools in their research than are their counterparts in the biological sciences. And researchers prefer popular applications like Twitter to those made for academic users.

The survey, conducted late last year, is the work of Ciber, as the Centre for Information Behaviour and the Evaluation of Research is known. Ciber is an interdisciplinary research center based in University College London's department of information studies. It takes on research projects for various clients. This one was paid for by the Emerald Publishing Group Ltd. The idea for the survey came from the Charleston Observatory, the research arm of the annual Charleston Conference of librarians, publishers, and vendors.

An online questionnaire went to researchers and editors as well as publishers, administrators, and librarians on cross-disciplinary e-mail lists maintained by five participating publishers—Cambridge University Press; Emerald; Kluwer; Taylor & Francis; and Wiley. Responses came from 2,414 researchers in 215 countries and "every discipline under the sun," according to David Nicholas, one of the lead researchers on the study. He directs the department of information studies at University College London.

Continued in article

Bob Jensen's threads on social networking are at
http://www.trinity.edu/rjensen/ListservRoles.htm


The videos of the three plenary speakers at the 2010 Annual Meetings in San Francisco are now linked at
http://commons.aaahq.org/hives/1f77f8e656/summary

Although all three speakers provided inspirational presentations, Steve Zeff and I both concluded that Bob Kaplan’s presentation was possibly the best that we had ever viewed among all past AAA plenary sessions. And we’ve seen a lot of plenary sessions in our long professional careers.

Now that Kaplan’s video is available I cannot overstress the importance that accounting educators and researchers watch the video of Bob Kaplan's August 4, 2010 plenary presentation
Note that to watch the entire Kaplan video ---
http://commons.aaahq.org/hives/531d5280c3/posts?postTypeName=session+video
I think the video is only available to AAA members.

Also see (slow loading)
http://www.trinity.edu/rjensen/theory01.htm#WhatWentWrong

Trivia Questions
1.  Why did Bob wish he’d worn a different color suit?

2.  What does JAE stand for besides the Journal of Accounting and Economics?

 



 

TAR versus AMR and AMJ and Footnotes of the American Sociological Association

Introduction

Accountics Scientists Seeking Truth: 
"Frankly, Scarlett, after I get a hit for my resume in The Accounting Review I just don't give a damn"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm
One more mission in what's left of my life will be to try to change this
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm 

Hi Roger,

Although I agree with you that the AAA journals do not have a means of publishing "short research articles quickly," Accounting Horizons (certainly not TAR) now has a Commentaries section. I don't know if the time between submission and publication of an AH Commentary is faster on average than for mainline AH research articles, but my priors are that AH Commentaries get published on a more timely basis.


The disappointing aspect of the published AH Commentaries to date is that they do not directly  focus on controversies of published research articles. Nor are they a vehicle for publishing abstracts of attempted replications of published accounting research. I don't know if this is AH policy or just the lack of replication in accountics science. In real science journals there are generally alternatives for publishing abstracts of replication outcomes and commentaries on published science articles. The AH Commentaries do tend to provide literature reviews on narrow topics.


The American Sociological Association has a journal called Footnotes ---
http://www.asanet.org/journals/footnotes.cfm
 

Article Submissions are limited to 1,100 words and must have journalistic value (e.g., timeliness, significant impact, general interest) rather than be research-oriented or scholarly in nature. Submissions are reviewed by the editorial board for possible publication.

ASA Forum (including letters to the editor) - 400-600-word limit.

Obituaries - 700-word limit.

Announcements - 150-word limit.

All submissions should include a contact name and an email address. ASA reserves the right to edit for style and length all material published.

Deadline for all materials is the first of the month preceding publication (e.g., February 1 for March issue).

Send communications on materials, subscriptions, and advertising to:

American Sociological Association
1430 K Street, NW - Suite 600
Washington, DC 20005-4701

 

The American Accounting Association journals do not have something comparable to Footnotes or the ASA Forum, although the AAA does have both the AAA Commons and the AECM, where non-refereed "publishing" is common for gadflies like Bob Jensen. The Commons is still restricted to AAA members and as such does not get covered by search crawlers like Google. The AECM is not restricted to AAA members, but since it requires a (free) subscription it does not get crawled by Google, Yahoo, Bing, etc.

 


Hi Zane,

I, along with others, have been trying to make TAR and other AAA journals more responsible about publishing commentaries on previously published research papers, including commentaries on successful or failed replication efforts.


TAR is particularly troublesome in this regard. Former TAR Senior Editor Steve Kachelmeier insists that the problem does not lie with TAR editors. Literally every submitted commentary, including short reports of replication efforts, has been rejected by TAR referees for decades.


So I looked into how other research journals met their responsibilities for publishing these commentaries. They do it in a variety of ways, but my preferred model is the Dialogue section of The Academy of Management Journal (AMJ) --- in part because the AMJ has been somewhat successful in engaging practitioner commentaries. I wrote the following:


The Dialogue section of the AMJ invites reader comments challenging the validity of assumptions in theory and, where applicable, the assumptions of an analytics paper. The AMJ takes a slightly different tack for challenging validity in what is called an “Editors’ Forum,” examples of which are listed in the index at
http://journals.aomonline.org/amj/amj_index_2007.pdf
 


 

One index had some academic-versus-practice Editors' Forum articles that especially caught my eye, since they might be extrapolated to the schism between academic accounting research and practitioner needs for applied research:

Bartunek, Jean M. Editors’ forum (AMJ turns 50! Looking back and looking ahead)—Academic-practitioner collaboration need not require joint or relevant research: Toward a relational

Cohen, Debra J. Editors’ forum (Research-practice gap in human resource management)—The very separate worlds of academic and practitioner publications in human resource management: Reasons for the divide and concrete solutions for bridging the gap. 50(5): 1013–10

Guest, David E. Editors’ forum (Research-practice gap in human resource management)—Don’t shoot the messenger: A wake-up call for academics. 50(5): 1020–1026.

Hambrick, Donald C. Editors’ forum (AMJ turns 50! Looking back and looking ahead)—The field of management’s devotion to theory: Too much of a good thing? 50(6): 1346–1352.

Latham, Gary P. Editors’ forum (Research-practice gap in human resource management)—A speculative perspective on the transfer of behavioral science findings to the workplace: “The times they are a-changin’.” 50(5): 1027–1032.

Lawler, Edward E, III. Editors’ forum (Research-practice gap in human resource management)—Why HR practices are not evidence-based. 50(5): 1033–1036.

Markides, Costas. Editors’ forum (Research with relevance to practice)—In search of ambidextrous professors. 50(4): 762–768.

McGahan, Anita M. Editors’ forum (Research with relevance to practice)—Academic research that matters to managers: On zebras, dogs, lemmings,

Rousseau, Denise M. Editors’ forum (Research-practice gap in human resource management)—A sticky, leveraging, and scalable strategy for high-quality connections between organizational practice and science. 50(5): 1037–1042.

Rynes, Sara L. Editors’ forum (Research with relevance to practice)—Editor’s foreword—Carrying Sumantra Ghoshal’s torch: Creating more positive, relevant, and ecologically valid research. 50(4): 745–747.

Rynes, Sara L. Editors’ forum (Research-practice gap in human resource management)—Editor’s afterword— Let’s create a tipping point: What academics and practitioners can do, alone and together. 50(5): 1046–1054.

Rynes, Sara L., Tamara L. Giluk, and Kenneth G. Brown. Editors’ forum (Research-practice gap in human resource management)—The very separate worlds of academic and practitioner periodicals in human resource management: Implications

More at http://journals.aomonline.org/amj/amj_index_2007.pdf

Also see the index sites for earlier years --- http://journals.aomonline.org/amj/article_index.htm


My appeal for an AMJ model as a way to meet TAR responsibilities for reporting replications and commentaries  fell on deaf ears in the AECM.


So now I'm working on another tack. The AAA Commons now publishes TAR tables of contents. But the accountics science authors have never made an effort to explain their research on the Commons. And members of the AAA have never taken the initiative to comment on those articles or to report successful or failed replication efforts.


I think the problem is that a spark has to ignite both the TAR authors and the AAA membership to commence dialogs on TAR articles as well as articles published by other AAA journals.


To this extent I have the start of a working paper on these issues at
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm 


My purpose in starting the above very unfinished working paper is twofold.


Firstly, it is to show how the very best of the AAA's accountics scientists up to now just don't give a damn about supporting the AAA Commons. My mission for the rest of my life will be to change this.


Secondly, it is to show that the AAA membership has shown no genuine interest in discussing research published in the AAA journals. My mission for the rest of my life will be to change this as well. Julie Smith David, bless her heart, is now working at my behest to provide me with data regarding who has been the most supportive of the AAA Commons since it was formed in 2008. From this I hope to learn more about what active contributors truly want from their Commons. To date my own efforts have simply been to add honey-soaked tidbits to help attract the public to the AAA Commons. I would most certainly like more active contributors to relieve me of this chore in my life.


My impossible dream is to draw accounting teachers, students, and practitioners into public hives of discussion of AAA journal research.


Maybe I'm just a dreamer. But at least I'm still  trying after every other initiative I've attempted to draw accountics researchers onto the Commons has failed. I know we have some accountics scientist lurkers on the AECM, but aside from Steve Kachelmeier they do not submit posts regarding their work in progress or their published works.


Thank you Steve for providing value added in your AECM debates with me and some others like Paul Williams even if that debate did boil over.


Respectfully,
Bob Jensen

Hi Marc,

Paul Williams has addressed your questions about the power of accountics scientists much better than I have in both an AOS article and in AECM messaging ---
http://www.trinity.edu/rjensen/TheoryTAR.htm#Comments


Williams, P. F., Jenkins, J. G., and Ingraham, L. (2006). The Winnowing Away of Behavioral Accounting Research in the U.S.: The Process of Anointing Academic Elites. Accounting, Organizations and Society/Elsevier, 31, 783-818.


Williams, P.F. “Reshaping Accounting Research: Living in the World in Which We Live,” Accounting Forum, 33, 2009: 274 – 279.


Schwartz, B., Williams, S. and Williams, P.F., “U.S. Doctoral Students Familiarity with Accounting Journals: Insights into the Structure of the U.S. Academy,” Critical Perspectives on Accounting, 16(2),April 2005: 327-348.


Williams, Paul F., “A Reply to the Commentaries on: Recovering Accounting as a Worthy Endeavor,” Critical Perspectives on Accounting, 15(4/5), 2004: 551-556.
Jensen Note:  This journal prints Commentaries on previous published articles, something that TAR referees just will not allow.


Williams, Paul and Lee, Tom, “Accounting from the Inside: Legitimizing the Accounting Academic Elite,” Critical Perspectives on Accounting (forthcoming).


Jensen Comment
As far as accountics science power in the AAA is concerned, I think we will one day look back at years 2011-2012 as a time of monumental shifts in power, not the least of which is the democratization of the AAA. Changes will take time in both the AAA and in the AACSB's accountancy doctoral programs, where accountics scientists are still firmly entrenched.


But accountics scientist political power will wane. Changes will begin with the AAA Publications Committee and then with key editorships, notably the editorship of TAR.


If I have any influence in any of this it will be to motivate our leading accountics scientists to at last start making contributions to the AAA Commons.


I know that making accountics scientists feel guilty of negligence on the AAA Commons is not the best motivator as a rule, but what other choice have I got at this juncture?
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm 



Respectfully,
Bob Jensen


Calvin Ball

Accountics science is defined at http://www.trinity.edu/rjensen/395wpTAR/Web/TAR395wp.htm
One of the main reasons Bob Jensen contends that accountics science is not yet a real science is the lack of exacting replications of accountics science findings. By exacting replications he means reproducibility as defined in the IUPAC Gold Book ---
http://en.wikipedia.org/wiki/IUPAC_Gold_Book

The leading accountics science journals (and indeed the leading academic accounting research journals) are The Accounting Review (TAR), the Journal of Accounting Research (JAR), and the Journal of Accounting and Economics (JAE). Publishing accountics science in these journals is a necessary condition for nearly all accounting researchers at top R1 research universities in North America.

On the AECM listserv, Bob Jensen and former TAR Senior Editor Steven Kachelmeier have had an ongoing debate about accountics science relevance and replication for well over a year in what Steve now calls a game of CalvinBall. When Bob Jensen noted the lack of exacting replication in accountics science, Steve's CalvinBall reply was that replication is the name of the game in accountics science:

The answer to your question, "Do you really think accounting researchers have the hots for replicating their own findings?" is unequivocally YES, though I am not sure about the word "hots." Still, replications in the sense of replicating prior findings and then extending (or refuting) those findings in different settings happen all the time, and they get published regularly. I gave you four examples from one TAR issue alone (July 2011). You seem to disqualify and ignore these kinds of replications because they dare to also go beyond the original study. Or maybe they don't count for you because they look at their own watches to replicate the time instead of asking to borrow the original researcher's watch. But they count for me.

To which my CalvinBall reply to Steve is --- "WOW!" In the past four decades of all this unequivocal replication in accountics science there's not been a single scandal. Out of the thousands of accountics science papers published in TAR, JAR, and JAE there's not been a single paper withdrawn after publication, to my knowledge, because of a replication study discovery. Sure there have been some quibbles about details in the findings and some improvements in statistical significance by tweaking the regression models, but there's not been a replication finding serious enough to force a publication retraction or serious enough to force the resignation of an accountics scientist.

In real science, where more exacting replications really are the name of the game, there have been many scandals over the past four decades. Nearly all top science journals have retracted articles because independent researchers could not exactly replicate the reported findings. And it's not all that rare to force a real scientist to resign due to scandalous findings in replication efforts.

The most serious scandals entail faked data or even faked studies. These types of scandals apparently have never been detected among thousands of accountics science publications. The implication is that accountics scientists are more honest as a group than real scientists. I guess that's either good news or bad replicating.

Given the pressures brought to bear on accounting faculty to publish accountics science articles, the accountics science scandal may be that accountics science replications have never revealed a scandal --- to my knowledge at least.


One of the most recent scandals involved a very well-known psychologist named Marc Hauser.
"Author on leave after Harvard inquiry Investigation of scientist’s work finds evidence of misconduct, prompts retraction by journal," by Carolyn Y. Johnson, The Boston Globe, August 10, 2010 ---
http://www.boston.com/news/education/higher/articles/2010/08/10/author_on_leave_after_harvard_inquiry/

Harvard University psychologist Marc Hauser — a well-known scientist and author of the book “Moral Minds’’ — is taking a year-long leave after a lengthy internal investigation found evidence of scientific misconduct in his laboratory.

The findings have resulted in the retraction of an influential study that he led. “MH accepts responsibility for the error,’’ says the retraction of the study on whether monkeys learn rules, which was published in 2002 in the journal Cognition.

Two other journals say they have been notified of concerns in papers on which Hauser is listed as one of the main authors.

It is unusual for a scientist as prominent as Hauser — a popular professor and eloquent communicator of science whose work has often been featured on television and in newspapers — to be named in an investigation of scientific misconduct. His research focuses on the evolutionary roots of the human mind.

In a letter Hauser wrote this year to some Harvard colleagues, he described the inquiry as painful. The letter, which was shown to the Globe, said that his lab has been under investigation for three years by a Harvard committee, and that evidence of misconduct was found. He alluded to unspecified mistakes and oversights that he had made, and said he will be on leave for the upcoming academic year.

Continued in article

Update:  Hauser resigned from Harvard in 2011 after the published research in question was retracted by the journals.

Not only have there been no similar accountics science scandals called to my attention, I'm also not aware of any investigations of impropriety among all those "replications" claimed by Steve.

Below is a link to a long article about scientific misconduct and the difficulties of investigating such misconduct. The conclusion seems to rest mostly upon what insiders apparently knew but were unwilling to testify about in public. Marc Hauser eventually resigned from Harvard. The most aggressive investigator in this instance appears to be Harvard University itself.

"Disgrace: On Marc Hauser," by Mark Gross, The Nation, January 9, 2012 ---
http://www.thenation.com/article/165313/disgrace-marc-hauser?page=0,2

. . .

Although some of my knowledge of the Hauser case is based on conversations with sources who have preferred to remain unnamed, there seems to me to be little doubt that Hauser is guilty of scientific misconduct, though to what extent and severity remains to be revealed. Regardless of the final outcome of the investigation of Hauser by the federal Office of Research Integrity, irreversible damage has been done to the field of animal cognition, to Harvard University and most of all to Marc Hauser.


"Dutch University Suspends Prominent Social Psychologist," Inside Higher Ed, September 12, 2011 ---
http://www.insidehighered.com/news/2011/09/12/qt#270113

Tilburg University, in the Netherlands, announced last week that it was suspending D.A. Stapel from his positions as professor of cognitive social psychology and dean of the School of Social and Behavioral Sciences because he "has committed a serious breach of scientific integrity by using fictitious data in his publications." The university has convened a panel to determine which of Stapel's papers were based on false data. Science noted that Stapel's work -- in that publication and elsewhere -- was known for attracting attention. Science reported that Philip Eijlander, Tilburg's rector, told a Dutch television station that Stapel had admitted to the fabrications. Eijlander said that junior researchers in Stapel's lab came forward with concerns about the honesty of his data, setting off an investigation by the university.

Jensen Comment
Actually I'm being somewhat unfair here. It was not exacting replication studies that upended Professor Stapel in this instance. There are, of course, other means of testing internal controls in scientific research. But the most common tool is replication of reproducible experiments.

Replication researchers did upend Marc Hauser at Harvard ---
http://www.trinity.edu/rjensen/TheoryTAR.htm



Bob Jensen's threads on the lack of validity testing and investigations of misconduct in accountics science ---
http://www.trinity.edu/rjensen/TheoryTAR.htm

 

 

"Bad science: The psychology behind exaggerated & false research [infographic]," Holykaw, December 21, 2011 ---
http://holykaw.alltop.com/bad-science-the-psychology-behind-exaggerated

One in three scientists admits to using shady research practices.
Bravo:  Zero accountics scientists admit to using shady research practices.

One in 50 scientists admit to falsifying data outright.
Bravo:  Zero accountics scientists admit to falsifying data in the history of accountics science.

Reports of colleague misconduct are even more common.
Bravo:  But not in accountics science

Misconduct rates are highest among clinical, medical, and pharmacological researchers
Bravo:  Such reports are lowest (zero) among accountics scientists

Four ways to make research more honest

  1. Make all raw data available to other scientists
     
  2. Hold journalists accountable
     
  3. Introduce anonymous publication
     
  4. Change from real science into accountics science where research is unlikely to be validated/replicated except on rare occasions where no errors are ever found

"Fraud Scandal Fuels Debate Over Practices of Social Psychology:  Even legitimate researchers cut corners, some admit," by Christopher Shea, Chronicle of Higher Education, November 13, 2011 ---
http://chronicle.com/article/As-Dutch-Research-Scandal/129746/

Jensen Comment
This leads me to wonder why, in its entire history, there has never been a reported scandal or evidence of data massaging in accountics (accounting) science. One possible explanation is that academic accounting researchers are more careful and honest than academic social psychologists. Another explanation is that accountics science researchers rely less on teams of student assistants who might blow the whistle, which is how Professor Diederik A. Stapel got caught in social psychology.

But there's also a third possible reason there have been no scandals in the last 40 years of accountics research. That reason is that the leading accountics research journal referees discourage validity testing of accountics research findings ---
http://www.trinity.edu/rjensen/TheoryTAR.htm

Yet a fourth and more probable explanation is that there's just not enough interest in most accountics science findings to inspire replications and active debate/commentaries in either the academic journals or the practicing profession's journals.

There is also the Steve Kachelmeier argument that indirect replications are taking place that do not meet scientific standards for replication but nevertheless point to consistencies in some of the capital markets studies (rarely the behavioral accounting studies). This does not answer the question of why these indirect replications so rarely point to inconsistencies. It follows, apparently, that accountics science researchers are just more accurate and honest than their social science colleagues.

Yeah Right!
Accountics scientists "never cut corners" except where fully disclosed in their research reports.
We just know what's most important in legitimate science.
Why can't real scientists be more like us --- ever honest and ever true?


What is an Exacting Replication?
I define an exacting replication as one in which the findings are reproducible by independent researchers using the IUPAC Gold Book standards for reproducibility. Steve makes a big deal of time extensions, as if they make such exacting replications almost impossible in accountics science. He states:

By "exacting replication," you appear to mean doing exactly what the original researcher did -- no more and no less. So if one wishes to replicate a study conducted with data from 2000 to 2008, one had better not extend it to 2009, as that clearly would not be "exacting." Or, to borrow a metaphor I've used earlier, if you'd like to replicate my assertion that it is currently 8:54 a.m., ask to borrow my watch -- you can't look at your watch because that wouldn't be an "exacting" replication.

That's CalvinBall bull since in many of these time extensions it's also possible to conduct an exacting replication. Richard Sansing pointed out how he conducted an exacting replication of the findings in Dhaliwal, Li and R. Trezevant (2003), "Is a dividend tax penalty incorporated into the return on a firm’s common stock?," Journal of Accounting and Economics 35: 155-178. Although Richard and his coauthor extend the Dhaliwal findings, they first conducted an exacting replication in their paper published in The Accounting Review 85 (May 2010): 849-875.

My quibble with Richard is mostly that conducting an exacting replication of the Dhaliwal et al. paper was not exactly a burning (hot) issue if nobody bothered to replicate that award winning JAE paper for seven years.

This raises the question of why there are not more frequent and timely exacting replications conducted in accountics science when the databases themselves are commercially available, like the CRSP, Compustat, and AuditAnalytics databases. Exacting replications from these databases are relatively easy and cheap to conduct (see the sketch below). My contention here is that there's no incentive to rush to conduct exacting replications if the accountics journals will not even publish commentaries about published studies. Steve and I've played CalvinBall with the commentaries issue before. He contends that TAR editors do not prevent commentaries from being published in TAR. The barriers to validity-questioning commentaries in TAR are the 574 referees who won't accept submitted commentaries ---
http://www.trinity.edu/rjensen/TheoryTAR.htm#ColdWater
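As a purely hypothetical sketch of what such a cheap archival replication check could look like (the file name, column names, and "reported" coefficients below are invented for illustration), one could simply re-estimate a published specification from the shared data and compare the coefficients to the reported ones within a tolerance:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical inputs: a dataset shared by the original authors and the
# coefficient estimates reported in the published paper (all names and
# numbers here are invented for illustration).
data = pd.read_csv("shared_study_data.csv")
reported = {"const": 0.012, "dividend_yield": -0.35}

# Re-estimate the published model specification on the shared data.
X = sm.add_constant(data[["dividend_yield"]])
fit = sm.OLS(data["excess_return"], X).fit()

# Compare the re-estimated coefficients to the reported ones within a tolerance.
for name, reported_value in reported.items():
    replicated_value = fit.params[name]
    match = abs(replicated_value - reported_value) < 0.005
    print(f"{name}: reported {reported_value}, replicated {replicated_value:.3f}, match={match}")
```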

Exacting replications of behavioral experiments in accountics science are more difficult and costly because the replicators must conduct their own experiments by collecting their own data. But it seems to me that this is no more difficult in accountics science than in performing the exacting replications that are reported in the research literature of psychology. However, psychologists often have more incentives to conduct exacting replications for the following reasons that I surmise:

  1. Practicing psychologists are more demanding of validity tests of research findings. Practicing accountants seem to pretty much ignore behavioral experiments published in TAR, JAR, and JAE such that there's not as much pressure brought to bear on validity testing of accountics science findings. One test of practitioner lack of interest is the lack of citation of accountics science in practitioner journals.
     
  2. Psychology researchers have more incentives to replicate experiments of others since there are more outlets for publication credits of replication studies, especially in psychology journals that encourage commentaries on published research ---
    http://www.trinity.edu/rjensen/TheoryTAR.htm#TARversusJEC

My opinion remains that accountics science will never be a real science until exacting replication of research findings becomes the name of the game in accountics science. This includes exacting replications of behavioral experiments as well as analyses of public data from CRSP, Compustat, AuditAnalytics, and other commercial databases. Note that the willingness of accountics science authors to share their private data for replication purposes is a very good thing (I fought for this when I was on the AAA Executive Committee), but conducting replication studies of such data does not hold up well under the IUPAC Gold Book.

Note, however, that lack of exacting replication and other validity testing in general are only part of the huge problems with accountics science. The biggest problem, in my judgment, is the way accountics scientists have established monopoly powers over accounting doctoral programs, faculty hiring criteria, faculty performance criteria, and pay scales. Accounting researchers using other methodologies like case and field research become second class faculty.

Since the odds of getting a case or field study published are so low, very few of such studies are even submitted for publication in TAR in recent years. Replication of these is a non-issue in TAR.

"Annual Report and Editorial Commentary for The Accounting Review," by Steven J. Kachelmeier The University of Texas at Austin, The Accounting Review, November 2009, Page 2056.

[Table not reproduced here]

There's not much hope for case, field, survey, and other non-accountics researchers to publish in the leading research journal of the American Accounting Association.

What went wrong with accountics research?
http://www.trinity.edu/rjensen/theory01.htm#WhatWentWrong

"We fervently hope that the research pendulum will soon swing back from the narrow lines of inquiry that dominate today's leading journals to a rediscovery of the richness of what accounting research can be. For that to occur, deans and the current generation of academic accountants must give it a push."
Granof and Zeff --- http://www.trinity.edu/rjensen/TheoryTAR.htm#Appendix01
Michael H. Granof is a professor of accounting at the McCombs School of Business at the University of Texas at Austin. Stephen A. Zeff is a professor of accounting at the Jesse H. Jones Graduate School of Management at Rice University.

I admit that I'm just one of those professors heeding the Granof and Zeff call to "give it a push," but it's hard to get accountics professors to give up their monopoly on TAR, JAR, JAE, and in recent years Accounting Horizons. It's even harder to get them to give up their iron monopoly clasp on North American Accountancy Doctoral Programs ---
http://www.trinity.edu/rjensen/Theory01.htm#DoctoralPrograms 

September 9, 2011 reply from Paul Williams

Bob,
I have avoided chiming in on this thread; have gone down this same road and it is a cul-de-sac.  But I want to say that this line of argument is a clever one.  The answer to your rhetorical question is, No, they aren't more ethical than other "scientists."   As you tout the Kaplan speech I would add the caution that before he raised the issue of practice, he still had to praise the accomplishments of "accountics" research by claiming numerous times that this research has led us to greater understanding about analysts, markets, info. content, contracting, etc.  However, none of that is actually true.  As a panelist at the AAA meeting I juxtaposed Kaplan's praise for what accountics research has taught us with Paul Krugman's observations about Larry Summer's 1999 observation that GAAP is what makes US capital markets so stable and efficient.  Of course, as Krugman noted, none of that turned out to be true.  And if that isn't true, then Kaplan's assessment of accountics research isn't credible, either.  If we actually did understand what he claimed we now understand much better than we did before, the financial crisis of 2008 (still ongoing) would not have happened.  The title of my talk was (the panel was organized by Cheryl McWatters) "The Epistemology of Ignorance."  An obsessive preoccupation with method could be a choice not to understand certain things-- a choice to rigorously understand things as you already think they are or want so desperately to continue to believe for reasons other than scientific ones. 

Paul


September 10, 2011 reply from Bob Jensen (known on the AECM as Calvin of Calvin and Hobbes)
This is a reply to Steve Kachelmeier, former Senior Editor of The Accounting Review (TAR)

I agree Steve and will not bait you further in a game of Calvin Ball.

It is, however, strange to me that exacting replication (reproducibility)  is such a necessary condition to almost all of real science empiricism and such a small part of accountics science empiricism.

My only answer to this is that the findings themselves in science seem to have greater importance to both the scientists interested in the findings and the outside worlds affected by those findings.
It seems to me that empirical findings that are not replicated with as much exactness as possible are little more than theories that have been tested only once, where we can never be sure that the tests were not faked or do not contain serious undetected errors for other reasons.
It is sad that the accountics science system really is not designed to guard against fakers and careless researchers, who in a few instances probably get great performance evaluations for their hits in TAR, JAR, and JAE. It is doubly sad since internal controls play such an enormous role in our profession but not in our accountics science.

I thank you for being a noted accountics scientist who was willing to play Calvin Ball with me for a while. I want to stress that this is not really a game with me. I do want to make a difference in the maturation of accountics science into real science. Exacting replications in accountics science would be an enormous giant step in the real-science direction.

Allowing validity-questioning commentaries in TAR would be a smaller start in that direction but nevertheless a start. Yes I know that it was your 574 TAR referees who blocked the few commentaries that were submitted to TAR about validity questions. But the AAA Publications Committees and you as Senior Editor could've done more to encourage both submissions of more commentaries and submissions of more non-accountics research papers to TAR --- cases, field studies, history studies, AIS studies, and (horrors) normative research.

I would also like to bust the monopoly that accountics scientists have on accountancy doctoral programs. But I've repeated my arguments here far too often ---
http://www.trinity.edu/rjensen/Theory01.htm#DoctoralPrograms

In any case thanks for playing Calvin Ball with me. Paul Williams and Jagdish Gangolly played Calvin Ball with me for a while on an entirely different issue --- capitalism versus socialism versus bastardized versions of both that evolve in the real world.

Hopefully there's been some value added on the AECM in my games of Calvin Ball.

Even though my Calvin Ball opponents have walked off the field, I will continue to invite others to play against me on the AECM.

And I'm certain this will not be the end to my saying that accountics farmers are more interested in their tractors than their harvests. This may one day be my epitaph.

Respectfully,
Calvin


November 22, 2011 reply from Steve Kachelmeier

First, Table 3 in the 2011 Annual Report (submissions and acceptances by area) only includes manuscripts that went through the regular blind reviewing process. That is, it excludes invited presidential scholar lectures, editorials, book reviews, etc. So "other" means "other regular submissions."

Second, you are correct Bob that "other" continues to represent a small percentage of the total acceptances. But "other" is also a very small percentage of the total submissions. As I state explicitly in the report, Table 3 does not prove that TAR is sufficiently diverse. It does, however, provide evidence that TAR acceptances by topical area (or by method) are nearly identically proportional to TAR submissions by topical area (or by method).

Third, for a great example of a recently published TAR study with substantial historical content, see Madsen's analysis of the historical development of standardization in accounting that we published in the September 2011 issue. I conditionally accepted Madsen's submission in the first round, backed by favorable reports from two reviewers with expertise in accounting history and standardization.

Take care,

Steve

November 23, 2011 reply from Bob Jensen

Hi Steve,

Thank you for the clarification.

Interestingly, Madsen's September 2011 historical study (which came out after your report's May 2011 cutoff date) is a heavy accountics science paper with a historical focus.

It would be interesting to know whether such a paper would've been accepted by TAR referees without the factor (actually principal components) analysis. Personally, I doubt any history paper would be accepted without equations and quantitative analysis. Once again I suspect that accountics science farmers are more interested in their tractors than in their harvests.

In the case of Madsen's paper, if I were a referee I would probably challenge the robustness of the principal components and loadings ---
http://en.wikipedia.org/wiki/Principle_components_analysis 
Actually, factor analysis in general, like nonlinear multiple regression and adaptive versions thereof, suffers greatly from lack of robustness. Sometimes quantitative models gild the lily to a fault.
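To illustrate what such a robustness challenge might look like, here is a minimal sketch in Python. The data are simulated stand-ins, not Madsen's variables; the point is only the procedure a referee or replicator could ask for: re-estimate the first principal component on bootstrap resamples and see how far the loadings rotate.

# Hypothetical robustness probe for principal-component loadings.
# Simulated data, not Madsen's variables.
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 6
latent = rng.normal(size=(n, 1))                       # one underlying factor
X = latent @ rng.normal(size=(1, k)) + rng.normal(size=(n, k))

def first_component(data):
    # Loading vector of the first principal component via SVD.
    centered = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    v = vt[0]
    return v if v[0] >= 0 else -v                      # fix sign for comparability

baseline = first_component(X)

# Bootstrap resamples: how far do the loadings rotate away from the baseline?
rotations = []
for _ in range(500):
    idx = rng.integers(0, n, size=n)
    v = first_component(X[idx])
    cos = abs(np.clip(v @ baseline, -1.0, 1.0))
    rotations.append(np.degrees(np.arccos(cos)))

print("median loading rotation: %.1f degrees" % np.median(rotations))
print("95th percentile rotation: %.1f degrees" % np.percentile(rotations, 95))

If the loadings rotate substantially across resamples, any factor interpretation (or index built from those loadings) is fragile, which is exactly the sort of finding a commentary or replication abstract could report.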

Bob Kaplan's Presidential Scholar historical study was published, but this was not subjected to the usual TAR refereeing process.

The History of The Accounting Review paper written by Jean Heck and Bob Jensen, which won a best paper award from the Accounting Historians Journal, was initially flatly rejected by TAR. I was never quite certain whether the main reason was that it did not contain equations or that it was critical of TAR editorship and refereeing. In any case it was flatly rejected by TAR, including a rejection by one referee who refused to put reasons in writing for feedback to Jean and me.

“An Analysis of the Evolution of Research Contributions by The Accounting Review: 1926-2005,” (with Jean Heck), Accounting Historians Journal, Volume 34, No. 2, December 2007, pp. 109-142.

I would argue that accounting history papers, normative methods papers, and scholarly commentary papers (like Bob Kaplan's plenary address) are not submitted to TAR because of the general perception among the AAA membership that such submissions do not have a snowball's chance in Hell of being accepted unless they are also accountics science papers.

It's a waste of time and money to submit papers to TAR that are not accountics science papers.

In spite of differences of opinion, I do thank you for the years of blood, sweat, and tears that you gave us as Senior Editor of TAR.

And I wish you and all U.S. subscribers to the AECM a very Happy Thanksgiving. Special thanks to Barry and Julie and the AAA staff for keeping the AECM listserv up and running.

Respectfully,
Bob Jensen

 


In only one way do I want to detract from the quality and quantity of effort of TAR Senior Editor Steve Kachelmeier. The job of TAR's Senior Editor is overwhelming given the greatly increased number of submissions to TAR while he's been our Senior Editor. Steve's worked long and hard assembling a superb team of associate editors and reviewers for over 600 annual submissions. He's had to resolve many conflicts between reviewers and deal personally with often angry and frustrated authors. He's helped to re-write a lot of badly written papers reporting solid research. He's also suggested countless ways to improve the research itself. And in terms of communications with me (I can be a pain in the butt), Steve has been willing to take time from his busy schedule to debate with me in private email conversations.

The most discouraging aspect of Steve's editorship is, in my viewpoint, his failure to encourage readers to submit discussions, comments, replication abstracts, or commentaries on previously published articles in TAR. He says that readers are free to submit most anything to him, but that if a submission does not "extend" the research in what is essentially a new research paper, his teams of referees are likely to reject it.

While Steve has been Senior Editor of TAR, I do not know of any submitted discussion or comment on a previously published paper that simply raised questions about a published paper but did not actually conduct research needed to submit an entirely new research product.  Hence, if readers want to comment on a TAR article they should, according to Steve, submit a full research paper for review that extends that research in a significant way or find some other outlet for commentary such as the AECM listserv that only reaches a relatively small subset of all accountants, accounting teachers, and accounting researchers in the world.

Steve replied by stating that, during his term as Senior Editor, he sent out only one comment submission; it was resoundingly rejected by his referees but was later accepted after the author conducted empirical research and extended the original study in a significant way. However, he and I differ with respect to what I call a "commentary" for purposes of this document. For this document I am limiting the term "commentary" to a comment or discussion of a previously published paper that does not extend the research in a significant way. I consider a "commentary" here to be more like a discussant's comments when the paper is presented at a conference. Without actually conducting additional empirical research a discussant can criticize or praise a paper and suggest ways that the research can be improved. The discussant does not actually have to conduct the suggested research extensions that Steve tells me are a requisite for his allowing TAR to publish a comment.

I also allow, in this document, the term "commentary" to include a brief abstract of an attempt to exactly replicate the research reported in a previously-published TAR paper. The replication report can be more of a summary than a complete research paper. It might simply report on how a replication succeeded or failed. I elaborate on the term "replication" below. I do not know of a single exact replication report ever published in TAR regarding a lab experiment. I'm hoping that someone will point out where TAR published a report of an exact replication of a lab experiment. Of course, some empirical study replications are more complex, and I discuss this below.

In fairness, I was wrong to have asserted that Steve will not send a "commentary" as defined above out for review. His reply to me was as follows:

No, no, no! Once again, your characterization makes me out to be the dictator who decides the standards of when a comment gets in and when it doesn’t. The last sentence is especially bothersome regarding what “Steve tells me is a requisite for his allowing TAR to publish a comment.” I never said that, so please don’t put words in my mouth.

If I were to receive a comment of the “discussant” variety, as you describe, I would send it out for review to two reviewers in a manner 100% consistent with our stated policy on p. 388 of the January 2010 issue (have you read that policy?). If both reviewers or even the one independent reviewer returned favorable assessments, I would then strongly consider publishing it and would most likely do so. My observation, however, which you keep wanting to personalize as “my policy,” is that most peer reviewers, in my experience, want to see a meaningful incremental contribution. (Sorry for all the comma delimited clauses, but I need this to be precise.) Bottom line: Please don’t make it out to be the editor’s “policy” if it is a broader phenomenon of what the peer community wants to see. And the “peer community,” by the way, are regular professors from all varieties of backgrounds. I name 574 of them in the November 2009 issue.

Steve reports that readers of TAR just do not submit the "discussant" variety to him for consideration for publication in TAR. My retort is that, unlike the AMR discussed below, Steve has not encouraged TAR readers to send in such commentaries about papers published in TAR. To the contrary, in meetings and elsewhere he's consistently stated that his referees are likely to reject any commentaries that simply question underlying assumptions, model structures, or data in a previously published paper. Hence, I contend that there are 574 Shields Against Validity Challenges in Plato's Cave.

An illustration of a commentary that two of the 574 guards would resoundingly reject appears at 
http://www.trinity.edu/rjensen/TheoryTAR.htm#Analytics
However, I think this commentary might be of value to accounting students, faculty, and practitioners. Students could write similar commentaries about other selected TAR articles and then meet in chat rooms or class to search for common themes or patterns in their commentaries.

Most papers published in TAR simply accept external validity of underlying assumptions. Normative arguments to the contrary are not likely to be published in TAR.
"Deductive reasoning,"  Phil Johnson-Laird, Wiley Interscience, ,2009 ---
http://www3.interscience.wiley.com/cgi-bin/fulltext/123228371/PDFSTART?CRETRY=1&SRETRY=0

This article begins with an account of logic, and of how logicians formulate formal rules of inference for the sentential calculus, which hinges on analogs of negation and the connectives if, or, and and. It considers the various ways in which computer scientists have written programs to prove the validity of inferences in this and other domains. Finally, it outlines the principal psychological theories of how human reasoners carry out deductions.  2009 John Wiley & Sons, Ltd. WIREs Cogn Sci 2010 1 8–1

 

Accountics is the mathematical science of values.
Charles Sprague [1887] as quoted by McMillan [1998, p. 1]
http://www.trinity.edu/rjensen/395wpTAR/Web/TAR395wp.htm#_msocom_1

By far the most important recommendation that I make below in this message is for the American Accounting Association to create an electronic journal for purposes of commentaries and replication abstracts that follow up on previously published articles in AAA research journals, particularly TAR. In that context, my recommendation is an extension of the Dialogue section of the Academy of Management Review.

Nearly all the articles published in TAR over the past several decades are limited to accountics studies that, in my viewpoint, have questionable internal and external validity due to missing variables, measurement errors, and simplistic mathematical structures. If accountants grounded in the real world were allowed to challenge the external validity of accountics studies it is possible that accountics researchers would pay greater attention to external validity --- http://en.wikipedia.org/wiki/External_Validity

Similarly, if accountants grounded in the real world were allowed to challenge the internal validity of accountics studies, it is possible that accountics researchers would pay greater attention to internal validity --- http://en.wikipedia.org/wiki/Internal_Validity

An illustration of a commentary that the 574 guards would refuse to send out for review appears at
http://www.trinity.edu/rjensen/TheoryTAR.htm#Analytics
However, I think this commentary might be of value to accounting students, faculty, and practitioners. Students could write similar commentaries about other selected TAR articles and then meet in chat rooms or class to search for common themes or patterns in their commentaries.

I should note that the above commentary is linked at the AAA Commons. Perhaps the AAA Commons should start a special hive for commentaries about TAR articles, including student commentaries submitted by their instructors to the Commons --- http://commons.aaahq.org/pages/home

In the practitioner literature readers have to be a little careful about the definition of "analytics." Practitioners often define analytics in terms of micro-level use of data for decisions such as decisions to adopt a new product or launch a promotion campaign.

See Analytics at Work: Smarter Decisions, Better Results, by Tom Davenport (Babson College) --- ISBN-13: 9781422177693, February 2010

Listen to Tom Davenport being interviewed about his book ---
 http://blogs.hbr.org/ideacast/2010/01/better-decisions-through-analy.html?cm_mmc=npv-_-DAILY_ALERT-_-AWEBER-_-DATE

The book does not, in general, find a niche for analytics in huge decisions such as mergers, although it does review an application by Chevron.

The problem with "big decisions" is that the analytical models generally cannot mathematically model or get good data on some of the most relevant variables. In academe, professors often simply assume the real world away and derive elegant solutions to fantasy-land problems in Plato's Cave. This is all well and good, but these academic researchers generally ignore validity tests of their harvests inside Plato's Cave.


June 30, 2012
Hi again Steve and David,


I think most of the problem of relevance of academic accounting research to the accounting profession commenced with the development of the giant commercial databases like CRSP, Compustat, and AuditAnalytics. To a certain extent it hurt sociology research to have giant government databases like the giant census databases. This gave rise to accountics researchers and sociometrics researchers who commenced to treat their campuses like historic castles with moats. The researchers no longer mingled with the outside world due, to a great extent, to a reduced need to collect their own data from the riff raff.



The focus of our best researchers turned toward increasing creativity of mathematical and statistical models and reduced creativity in collecting data. If data for certain variables cannot be found in a commercial database then our accounting professors and doctoral students merely assume away the importance of those variables --- retreating more and more into Plato's Cave.


I think the difference between accountics and sociometrics researchers, however, is that sociometrics researchers often did not get as far removed from database building as accountics researchers. They are more inclined toward field research. One of my close sociometric scientist friends is Mike Kearl. The reason his Website is one of the most popular Websites in Sociology is Mike's dogged effort to make privately collected databases available to other researchers ---

Mike Kearl's great social theory site
Go to http://www.trinity.edu/rjensen/theory02.htm#Kearl


I cannot find a single accountics researcher counterpart to Mike Kearl.


Meanwhile in accounting research, the gap between accountics researchers in their campus castles and the practicing profession became separated by widening moats.


 

In the first 50 years of the American Accounting Association over half the membership was made up of practitioners, and practitioners took part in committee projects, submitted articles to TAR, and in various instances were genuine scholarly leaders in the AAA. All this changed when accountics researchers evolved who had less and less interest in close interactions with the practitioner world.


 

“An Analysis of the Evolution of Research Contributions by The Accounting Review: 1926-2005,” (with Jean Heck), Accounting Historians Journal, Volume 34, No. 2, December 2007, pp. 109-142.

. . .

Practitioner membership in the AAA faded along with their interest in journals published by the AAA [Bricker and Previts, 1990]. The exodus of practitioners became even more pronounced in the 1990s when leadership in the large accounting firms was changing toward professional managers overseeing global operations. Rayburn [2006, p. 4] notes that practitioner membership is now less than 10 percent of AAA members, and many practitioner members join more for public relations and student recruitment reasons rather than interest in AAA research. Practitioner authorship in TAR plunged to nearly zero over recent decades, as reflected in Figure 2.

 

I think that much good could come from providing serious incentives to accountics researchers to row across the mile-wide moats. Accountics leaders could do much to help. For example, they could commence to communicate in English on the AAA Commons ---
How Accountics Scientists Should Change: 
"Frankly, Scarlett, after I get a hit for my resume in The Accounting Review I just don't give a damn"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm
One more mission in what's left of my life will be to try to change this
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm 

 

Secondly, I think TAR editors and associate editors could do a great deal by giving priority to publishing more applied research in TAR so that accountics researchers might think more about the practicing profession. For example, incentives might be given to accountics researchers to actually collect their own data on the other side of the moat --- much like sociologists and medical researchers get academic achievement rewards for collecting their own data.


 

Put in another way, it would be terrific if accountics researchers got off their butts and ventured out into the professional world on the other side of their moats.


 

Harvard still has some (older) case researchers like Bob Kaplan who  interact extensively on the other side of the Charles River. But Bob complains that journals like TAR discourage rather than encourage such interactions.

Accounting Scholarship that Advances Professional Knowledge and Practice
Robert S. Kaplan
The Accounting Review, March 2011, Volume 86, Issue 2, 


 

Recent accounting scholarship has used statistical analysis on asset prices, financial reports and disclosures, laboratory experiments, and surveys of practice. The research has studied the interface among accounting information, capital markets, standard setters, and financial analysts and how managers make accounting choices. But as accounting scholars have focused on understanding how markets and users process accounting data, they have distanced themselves from the accounting process itself. Accounting scholarship has failed to address important measurement and valuation issues that have arisen in the past 40 years of practice. This gap is illustrated with missed opportunities in risk measurement and management and the estimation of the fair value of complex financial securities. This commentary encourages accounting scholars to devote more resources to obtaining a fundamental understanding of contemporary and future practice and how analytic tools and contemporary advances in accounting and related disciplines can be deployed to improve the professional practice of accounting. ©2010 AAA

 

It's high time that the leaders of accountics science make monumental efforts to communicate with the teachers of accounting and the practicing profession. I have enormous optimism regarding our forthcoming fabulous accountics scientist Mary Barth when she becomes President of the AAA.
 

I'm really, really hoping that Mary will commence the bridge building across moats ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm

 

 

The American Sociological Association has a journal called the American Sociological Review (ASR) that is to the ASA much of what TAR is to the AAA.


The ASR like TAR publishes mostly statistical studies. But there are some differences that I might note. Firstly, ASR authors are more prone to gathering their own data off campus rather than only dealing with data they can purchase or behavioral experimental data derived from students on campus.


Another thing I've noticed is that the ASR papers are more readable and many have no complicated equations. For example, pick any recent TAR paper at random and then compare it with the write up at
http://www.asanet.org/images/journals/docs/pdf/asr/Aug11ASRFeature.pdf 


Then compare the randomly chosen TAR paper with a randomly chosen ASR paper at
http://www.asanet.org/journals/asr/index.cfm#articles 


Hi Roger,

Although I agree with you that the AAA journals do not have a means of publishing "short research articles quickly," Accounting Horizons (certainly not TAR) now has a Commentaries section. I don't know if the time between submission and publication of an AH Commentary is faster on average than for mainline AH research articles, but my priors are that AH Commentaries get published on a more timely basis.


The disappointing aspect of the published AH Commentaries to date is that they do not directly focus on controversies surrounding published research articles. Nor are they a vehicle for publishing abstracts of attempted replications of published accounting research. I don't know if this is AH policy or just the lack of replication in accountics science. In real science journals there are generally outlets for publishing abstracts of replication outcomes and commentaries on published science articles. The AH Commentaries do tend to provide literature reviews on narrow topics.


The American Sociological Association has a journal called Footnotes ---
http://www.asanet.org/journals/footnotes.cfm
 

Article Submissions are limited to 1,100 words and must have journalistic value (e.g., timeliness, significant impact, general interest) rather than be research-oriented or scholarly in nature. Submissions are reviewed by the editorial board for possible publication.

ASA Forum (including letters to the editor) - 400-600-word limit.

Obituaries - 700-word limit.

Announcements - 150-word limit.

All submissions should include a contact name and an email address. ASA reserves the right to edit for style and length all material published.

Deadline for all materials is the first of the month preceding publication (e.g., February 1 for March issue).

Send communications on materials, subscriptions, and advertising to:

American Sociological Association
1430 K Street, NW - Suite 600
Washington, DC 20005-4701

 

The American Accounting Association journals do not have anything comparable to Footnotes or the ASA Forum, although the AAA does have both the AAA Commons and the AECM where non-refereed "publishing" is common for gadflies like Bob Jensen. The Commons is still restricted to AAA members and as such does not get covered by search crawlers like Google. The AECM is not restricted to AAA members, but since it requires a free subscription it does not get crawled by Google, Yahoo, Bing, etc.

 


Accountics Scientists Seeking Truth: 
"Frankly, Scarlett, after I get a hit for my resume in The Accounting Review I just don't give a damn"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm
One more mission in what's left of my life will be to try to change this
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm 

Introduction to Replication Commentaries
In this message I will define a research "replication" as an experiment that exactly and independently reproduces hypothesis testing of an original scientific experiment. The replication must be done by "independent" researchers using the same hypotheses and models that test those hypotheses such as multivariate statistical models. Researchers must be sufficiently independent such that the replication is not performed by the same scientists or students/colleagues of those scientists. Experimental data sets may be identical in original studies and replications, although if replications generate different data sets the replications also test for errors in data collection and recording. When identical data sets are used, replicators are mainly checking analysis errors apart from data errors.
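As a minimal sketch of what an exact replication on an identical data set amounts to in practice, consider the following Python fragment. The file name, variable names, and "reported" coefficients are hypothetical placeholders rather than any particular TAR study; an independent team would substitute the authors' archived data and published model and check whether the reported coefficients are reproduced to within rounding.

# Minimal sketch of an exact-replication check on an identical data set.
# File name, variable names, and "reported" coefficients are hypothetical.
import numpy as np
import pandas as pd

REPORTED = {"const": 0.012, "earnings": 0.850, "leverage": -0.240}   # from the published table
TOLERANCE = 0.005                                                    # rounding tolerance of that table

df = pd.read_csv("archived_study_data.csv")            # the original authors' archived data
y = df["returns"].to_numpy()
X = np.column_stack([np.ones(len(df)),                 # intercept
                     df["earnings"].to_numpy(),
                     df["leverage"].to_numpy()])

coef, *_ = np.linalg.lstsq(X, y, rcond=None)           # re-estimate the same OLS model
estimated = dict(zip(REPORTED.keys(), coef))

for name, reported in REPORTED.items():
    ok = abs(estimated[name] - reported) <= TOLERANCE
    print(f"{name:>9}: reported {reported:+.3f}  replicated {estimated[name]:+.3f}  "
          f"{'OK' if ok else 'DISCREPANCY'}")

When the same archived data are used, a discrepancy points to an analysis or reporting error; when fresh data are collected, a discrepancy may also point to data errors or to findings that simply do not generalize.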

Presumably a successful replication "reproduces" exactly the same outcomes and authenticates/verifies the original research. In scientific research, such authentication is considered extremely important. The IUPAC Gold Book makes a distinction between reproducibility and repeatability at
http://goldbook.iupac.org/
For purposes of this message, replication, reproducibility, and repeatability will be viewed as synonyms.

It would be neat if replication clearly marked the difference between the real sciences and the pseudo sciences, but this demarcation is not so clear cut since pseudo scientists sometimes (though not as often) replicate research findings. A more clear cut demarcation is the obsession with finding causes that cannot be discovered in models estimated from big data like census databases, financial statement databases (e.g., Compustat and EDGAR), and economic statistics generated by governments and the United Nations. Real scientists slave away to go beyond discovered big-data correlations in search of causality ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsGranulationCurrentDraft.pdf
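A toy Python sketch (simulated data, nothing from an accounting database) of that correlation-versus-causality point: two variables can be strongly correlated purely because both are driven by a third, and the apparent relationship vanishes once the confounder is partialled out.

# Toy illustration: a strong correlation with no causal link, driven by a confounder.
# All data are simulated.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
size = rng.normal(size=n)                      # confounder, e.g., "firm size"
x = 2.0 * size + rng.normal(size=n)            # driven by the confounder, not by y
y = 3.0 * size + rng.normal(size=n)            # driven by the confounder, not by x

print("raw correlation of x and y: %.2f" % np.corrcoef(x, y)[0, 1])

# Partial out the confounder from both variables; the "relationship" disappears.
x_resid = x - np.polyval(np.polyfit(size, x, 1), size)
y_resid = y - np.polyval(np.polyfit(size, y, 1), size)
print("correlation after controlling for size: %.2f" % np.corrcoef(x_resid, y_resid)[0, 1])

Big commercial databases make it easy to find the first number and tempting to stop there; real science is the slog toward the second.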

Real Science versus Pseudo Science ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm#Pseudo-Science

Having said this, scientists, especially real scientists, are obsessed with replication.


Allowance should be made for "conceptual replications" apart from "exact replications" ---
http://www.jasnh.com/pdf/Vol6-No2.pdf

"Scientists Fail to Identify Their Tools, Study Finds, and May Hurt Replication," by Paul Voosen, Chronicle of Higher Education, September 5, 2013 ---
http://chronicle.com/article/Scientists-Fail-to-Identify/141389/?cid=at

Define your terms. It's one of the oldest rules of writing. Yet when it comes to defining the exact resources used to conduct their research, many scientists fail to do exactly that. At least that's the conclusion of a new study, published on Thursday in the journal PeerJ.

Looking at 238 recently published papers, pulled from five fields of biomedicine, a team of scientists found that they could uniquely identify only 54 percent of the research materials, from lab mice to antibodies, used in the work. The rest disappeared into the terse fuzz and clipped descriptions of the methods section, the journal standard that ostensibly allows any scientist to reproduce a study.

"Our hope would be that 100 percent of materials would be identifiable," said Nicole A. Vasilevsky, a project manager at Oregon Health & Science University, who led the investigation.

The group quantified a finding already well known to scientists: No one seems to know how to write a proper methods section, especially when different journals have such varied requirements. Those flaws, by extension, may make reproducing a study more difficult, a problem that has prompted, most recently, the journal Nature to impose more rigorous standards for reporting research.

"As researchers, we don't entirely know what to put into our methods section," said Shreejoy J. Tripathy, a doctoral student in neurobiology at Carnegie Mellon University, whose laboratory served as a case study for the research team. "You're supposed to write down everything you need to do. But it's not exactly clear what we need to write down."

Ms. Vasilevsky's study offers no grand solution. Indeed, despite its rhetoric, which centers on the hot topic of reproducibility, it provides no direct evidence that poorly labeled materials have hindered reproduction. That finding tends to rest on anecdote. Stories abound of dissertations diverted for years as students struggled to find the genetic strain or antibody used in a study they were recreating.

A Red Herring?

Here's what the study does show: In neuroscience, in immunology, and in developmental, molecular, and general biology, catalog codes exist to uniquely identify research materials, and they are often not used. (The team studied five biomedical resources in all: antibody proteins, model organisms, cell lines, DNA constructs, and gene-silencing chemicals.) Without such specificity, it can be difficult, for example, to distinguish multiple antibodies from the same vendor. That finding held true across the journals, publishers, and reporting methods surveyed—including, surprisingly, the few journals considered to have strict reporting requirements.

This goes back to anecdote, but the interior rigor of the lab also wasn't reflected in its published results. Ms. Vasilevsky found that she could identify about half of the antibodies and organisms used by the Nathan N. Urban lab at Carnegie Mellon, where Mr. Tripathy works. The lab's interior Excel spreadsheets were meticulous, but somewhere along the route to publication, that information disappeared.

How deep and broad a problem is this? It's difficult to say. Ms. Vasilevsky wouldn't be surprised to see a similar trend in other sciences. But for every graduate student reluctant to ask professors about their methods, for fear of sounding critical, other scientists will give them a ring straightaway. Given the shoddy state of the methods section, such calls will remain a staple even if 100 percent of materials are perfectly labeled, Ms. Vasilevsky added. And that's not necessarily a problem.

Continued in article

This message does have a very long quotation from a study by Watson et al. (2008) that does elaborate on quasi-replication and partial-replication. That quotation also elaborates on concepts of external versus internal validity grounded in the book:
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design & analysis issues for field settings. Boston: Houghton Mifflin Company.

I define an "extended study" as one which may have similar hypotheses but uses non-similar data sets and/or non-similar models. For example, study of female in place of male test subjects is an extended study with different data sets. An extended study may vary the variables under investigation or change the testing model structure such as changing to a logit model as an extension of a more traditional regression model.

Extended studies that create new knowledge are not replications in terms of the above definitions, although an extended study may start with an exact replication.


A paper can become highly cited because it is good science – or because it is eye-catching, provocative or wrong. Luxury-journal editors know this, so they accept papers that will make waves because they explore sexy subjects or make challenging claims. This influences the science that scientists do. It builds bubbles in fashionable fields where researchers can make the bold claims these journals want, while discouraging other important work, such as replication studies.

"How journals like Nature, Cell and Science are damaging science:  The incentives offered by top journals distort science, just as big bonuses distort banking," Randy Schekman, The Guardian, December 9, 2013 ---
http://www.theguardian.com/commentisfree/2013/dec/09/how-journals-nature-science-cell-damage-science

I am a scientist. Mine is a professional world that achieves great things for humanity. But it is disfigured by inappropriate incentives. The prevailing structures of personal reputation and career advancement mean the biggest rewards often follow the flashiest work, not the best. Those of us who follow these incentives are being entirely rational – I have followed them myself – but we do not always best serve our profession's interests, let alone those of humanity and society.

We all know what distorting incentives have done to finance and banking. The incentives my colleagues face are not huge bonuses, but the professional rewards that accompany publication in prestigious journals – chiefly Nature, Cell and Science.

These luxury journals are supposed to be the epitome of quality, publishing only the best research. Because funding and appointment panels often use place of publication as a proxy for quality of science, appearing in these titles often leads to grants and professorships. But the big journals' reputations are only partly warranted. While they publish many outstanding papers, they do not publish only outstanding papers. Neither are they the only publishers of outstanding research.

These journals aggressively curate their brands, in ways more conducive to selling subscriptions than to stimulating the most important research. Like fashion designers who create limited-edition handbags or suits, they know scarcity stokes demand, so they artificially restrict the number of papers they accept. The exclusive brands are then marketed with a gimmick called "impact factor" – a score for each journal, measuring the number of times its papers are cited by subsequent research. Better papers, the theory goes, are cited more often, so better journals boast higher scores. Yet it is a deeply flawed measure, pursuing which has become an end in itself – and is as damaging to science as the bonus culture is to banking.

It is common, and encouraged by many journals, for research to be judged by the impact factor of the journal that publishes it. But as a journal's score is an average, it says little about the quality of any individual piece of research. What is more, citation is sometimes, but not always, linked to quality. A paper can become highly cited because it is good science – or because it is eye-catching, provocative or wrong. Luxury-journal editors know this, so they accept papers that will make waves because they explore sexy subjects or make challenging claims. This influences the science that scientists do. It builds bubbles in fashionable fields where researchers can make the bold claims these journals want, while discouraging other important work, such as replication studies.

In extreme cases, the lure of the luxury journal can encourage the cutting of corners, and contribute to the escalating number of papers that are retracted as flawed or fraudulent. Science alone has recently retracted high-profile papers reporting cloned human embryos, links between littering and violence, and the genetic profiles of centenarians. Perhaps worse, it has not retracted claims that a microbe is able to use arsenic in its DNA instead of phosphorus, despite overwhelming scientific criticism.

There is a better way, through the new breed of open-access journals that are free for anybody to read, and have no expensive subscriptions to promote. Born on the web, they can accept all papers that meet quality standards, with no artificial caps. Many are edited by working scientists, who can assess the worth of papers without regard for citations. As I know from my editorship of eLife, an open access journal funded by the Wellcome Trust, the Howard Hughes Medical Institute and the Max Planck Society, they are publishing world-class science every week.

Funders and universities, too, have a role to play. They must tell the committees that decide on grants and positions not to judge papers by where they are published. It is the quality of the science, not the journal's brand, that matters. Most importantly of all, we scientists need to take action. Like many successful researchers, I have published in the big brands, including the papers that won me the Nobel prize for medicine, which I will be honoured to collect tomorrow. But no longer. I have now committed my lab to avoiding luxury journals, and I encourage others to do likewise.

Continued in article

Bob Jensen's threads on how prestigious journals in academic accounting research have badly damaged academic accounting research, especially in the accountics science takeover of doctoral programs where dissertation research no longer is accepted unless it features equations ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm

Lack of Replication in Accountics Science:
574 Shields Against Validity Challenges in Plato's Cave ---
http://www.trinity.edu/rjensen/TheoryTAR.htm

 


A validity-testing illustration of why research needs to be replicated.
GM is also the company that bought the patent rights to the doomed Wankel Engine ---
http://en.wikipedia.org/wiki/Wankel_Engine

"The Sad Story of the Battery Breakthrough that Proved Too Good to Be True," by Kevin Bullis, MIT's Technology Review, December 6, 2013 ---
http://www.technologyreview.com/view/522361/the-sad-story-of-the-battery-breakthrough-that-proved-too-good-to-be-true/?utm_campaign=newsletters&utm_source=newsletter-daily-all&utm_medium=email&utm_content=20131209

Two lurkers on the AECM listserv forwarded the link below:
"The Replication Myth: Shedding Light on One of Science’s Dirty Little Secrets
," by Jared Horvath, Scientific American, December 4, 2013 ---
http://blogs.scientificamerican.com/guest-blog/2013/12/04/the-replication-myth-shedding-light-on-one-of-sciences-dirty-little-secrets/

In a series of recent articles published in The Economist (Unreliable Research: Trouble at the Lab and Problems with Scientific Research: How Science Goes Wrong), authors warned of a growing trend in unreliable scientific research. These authors (and certainly many scientists) view this pattern as a detrimental byproduct of the cutthroat ‘publish-or-perish’ world of contemporary science.

In actuality, unreliable research and irreproducible data have been the status quo since the inception of modern science. Far from being ruinous, this unique feature of research is integral to the evolution of science.

At the turn of the 17th century, Galileo rolled a brass ball down a wooden board and concluded that the acceleration he observed confirmed his theory of the law of the motion of falling bodies. Several years later, Marin Mersenne attempted the same experiment and failed to achieve similar precision, causing him to suspect that Galileo fabricated his experiment.

Early in the 19th century, after mixing oxygen with nitrogen, John Dalton concluded that the combinatorial ratio of the elements proved his theory of the law of multiple proportions. Over a century later, J. R. Parington tried to replicate the test and concluded that “…it is almost impossible to get these simple ratios in mixing nitric oxide and air over water.”

At the beginning of the 20th century, Robert Millikan suspended drops of oil in an electric field, concluding that electrons have a single charge. Shortly afterwards, Felix Ehrenhaft attempted the same experiment and not only failed to arrive at an identical value, but also observed enough variability to support his own theory of fractional charges.

Other scientific luminaries have similar stories, including Mendel, Darwin and Einstein. Irreproducibility is not a novel scientific reality. As noted by contemporary journalists William Broad and Nicholas Wade, “If even history’s most successful scientists resort to misrepresenting their findings in various ways, how extensive may have been the deceits of those whose work is now rightly forgotten?”

There is a larger lesson to be gleaned from this brief history. If replication were the gold standard of scientific progress, we would still be banging our heads against our benches trying to arrive at the precise values that Galileo reported. Clearly this isn’t the case.

The 1980’s saw a major upswing in the use of nitrates to treat cardiovascular conditions. With prolonged use, however, many patients develop a nitrate tolerance. With this in mind, a group of drug developers at Pfizer set to creating Sildenafil, a pill that would deliver similar therapeutic benefits as nitrates without declining efficacy. Despite its early success, a number of unanticipated drug interactions and side-effects—including penile erections—caused doctors to shelve Sildenafil. Instead, the drug was re-trialed, re-packaged and re-named Viagra. The rest is history.

This tale illustrates the true path by which science evolves. Despite a failure to achieve initial success, the results generated during Sildenafil experimentation were still wholly useful and applicable to several different lines of scientific work. Had the initial researchers been able to massage their data to a point where they were able to publish results that were later found to be irreproducible, this would not have changed the utility of a sub-set of their results for the field of male potency.

Many are taught that science moves forward in discrete, cumulative steps; that truth builds upon truth as the tapestry of the universe slowly unfolds. Under this ideal, when scientific intentions (hypotheses) fail to manifest, scientists must tinker until their work is replicable everywhere at anytime. In other words, results that aren’t valid are useless.

In reality, science progresses in subtle degrees, half-truths and chance. An article that is 100 percent valid has never been published. While direct replication may be a myth, there may be information or bits of data that are useful among the noise. It is these bits of data that allow science to evolve. In order for utility to emerge, we must be okay with publishing imperfect and potentially fruitless data. If scientists were to maintain the ideal, the small percentage of useful data would never emerge; we’d all be waiting to achieve perfection before reporting our work.

This is why Galileo, Dalton and Millikan are held aloft as scientific paragons, despite strong evidence that their results are irreproducible. Each of these researchers presented novel methodologies, ideas and theories that led to the generation of many useful questions, concepts and hypotheses. Their work, if ultimately invalid, proved useful.

Doesn’t this state-of-affairs lead to dead ends, misused time and wasted money? Absolutely. It is here where I believe the majority of current frustration and anger resides. However, it is important to remember two things: first, nowhere is it written that all science can and must succeed. It is only through failure that the limits of utility can be determined. And second, if the history of science has taught us anything, it is that with enough time all scientific wells run dry. Whether due to the achievement of absolute theoretical completion (a myth) or, more likely, the evolution of more useful theories, all concepts will reach a scientific end.

Two reasons are typically given for not wanting to openly discuss the true nature of scientific progress and the importance of publishing data that may not be perfectly replicable: public faith and funding. Perhaps these fears are justified. It is a possibility that public faith will dwindle if it becomes common knowledge that scientists are too-often incorrect and that science evolves through a morass of noise. However, it is equally possible that public faith will decline each time this little secret leaks out in the popular press. It is a possibility that funding would dry up if, in our grant proposals, we openly acknowledge the large chance of failure, if we replace gratuitous theories with simple unknowns. However, it is equally possible that funding will diminish each time a researcher fails to deliver on grandiose (and ultimately unjustified) claims of efficacy and translatability.

Continued in article

Jensen Comment
I had to chuckle that, in an article belittling the role of reproducibility in science, the author leads out with an illustration of how Marin Mersenne's inability to reproduce one of Galileo's experiments led to suspicions that the experiment was faked by Galileo. It seems to me that this illustration reinforces the importance of reproducibility/replication in science.

I totally disagree that "unreliable research and irreproducible data have been the status quo since the inception of modern science." If it really were the "status quo" then all science would be pseudo science. Real scientists are obsessed with replication to a point that modern science findings in experiments are not considered new knowledge until they have been independently validated. That of course does not mean that it's always easy or sometimes even possible to validate findings in modern science. Much of the spending in real science is devoted to validating earlier discoveries and databases to be shared with other scientists.

Real scientists are generally required by top journals and funding sources to maintain detailed lab books of steps performed in laboratories. Data collected for use by other scientists (such as ocean temperature data) is generally subjected to validation tests such that research outcomes are less likely to be based upon flawed data. There are many examples of where reputations of scientists were badly tarnished due to inability of other scientists to validate findings ---
http://www.trinity.edu/rjensen/Plagiarism.htm#ProfessorsWhoPlagiarize

Nearly all real science journals have illustrations where journal articles are later retracted because the findings could not be validated.

What the article does point out is that real scientists do not always validate findings independently. What this is saying is that real science is often imperfect. But this does not necessarily make validation, reproduction, and replication of original discoveries less important. It only says that the scientists themselves often deviate from their own standards of validation.

The article above does not change my opinion that reproducibility is the holy grail of real science. If findings are not validated, what you have is imperfect implementation of a scientific process rather than imperfect standards.

Accountics science is defined at http://www.trinity.edu/rjensen/395wpTAR/Web/TAR395wp.htm
In short, an accountics science study is any accounting research study that features equations and/or statistical inference.
One of the main reasons Bob Jensen contends that accountics science is not yet a real science is the lack of exacting replications of accountics science findings. By exacting replications he means reproducibility as defined in the IUPAC Gold Book ---
http://en.wikipedia.org/wiki/IUPAC_Gold_Book

 

My study of the 2013 articles in The Accounting Review suggests that over 90% of the articles rely upon purchased public databases such as Compustat, CRSP, Datastream, and Audit Analytics. The reasons I think accountics scientists are not usually real scientists include the following:

Audit Fees By Industry, As Presented By Audit Analytics ---
http://goingconcern.com/post/audit-fees-industry-presented-audit-analytics

Jensen Comment
In auditing courses, students might do some research on misleading aspects of the above data, apart from its being self-reported. For example, some clients save on audit fees by spending more on internal audit activities. Audit fees may also vary depending upon the quality of internal controls or lack thereof.

Audit fees may differ for two clients in the same industry where one client is in great financial shape and the other client's employees are wearing waders. There may also be differences between what different audit firms charge for similar services. Aggregations of apples and oranges can be somewhat misleading.

Accountics scientists prefer purchased data such as data from Audit Analytics so that they are not responsible for collecting the data or for errors in that data. My research of TAR suggests that accountics science research uses purchased databases over 90% of the time. Audit Analytics is a popular database purchased by accountics scientists even though it is probably more prone to error than most of the other purchased databases. A huge problem is its reliance on self-reporting by auditors and clients.

 

These and my other complaints about the lack of replications in accountics science can be found at
http://www.trinity.edu/rjensen/TheoryTAR.htm

 

The source of these oddities is Brian Dillon's intriguing Curiosity: Art and the Pleasures of Knowing (Hayward Publishing), a new volume of essays, excerpts, descriptions, and photographs that accompanies his exhibit of the same name, touring Britain and the Netherlands during 2013-14. But what does it mean to be curious?

"Triumph of the Strange," by James Delbourgo, Chronicle of Higher Education, December 8, 2013 ---
http://chronicle.com/article/Triumph-of-the-Strange/143365/?cid=cr&utm_source=cr&utm_medium=en

Bob Jensen's threads on Real Science versus Pseudo Science ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm

 


Replication Research May Take Years to Resolve
Purdue University is investigating “extremely serious” concerns about the research of Rusi Taleyarkhan, a professor of nuclear engineering who has published articles saying that he had produced nuclear fusion in a tabletop experiment, The New York Times reported. While the research was published in Science in 2002, the findings have faced increasing skepticism because other scientists have been unable to replicate them. Taleyarkhan did not respond to inquiries from The Times about the investigation.
Inside Higher Ed, March 08, 2006 --- http://www.insidehighered.com/index.php/news/2006/03/08/qt
The New York Times March 9 report is at http://www.nytimes.com/2006/03/08/science/08fusion.html?_r=1&oref=slogin 

"Climategate's Phil Jones Confesses to Climate Fraud," by Marc Sheppard, American Thinker, February 14, 2010 ---
http://www.americanthinker.com/2010/02/climategates_phil_jones_confes.html

Interesting Video
"The Placebo Effect,"  by Gwen Sharp, Sociological Images, March 10, 2011 --- Click Here
http://thesocietypages.org/socimages/2011/03/10/the-placebo-effect/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+SociologicalImagesSeeingIsBelieving+%28Sociological+Images%3A+Seeing+Is+Believing%29

A good example of replication in econometrics is the inability of obscure graduate students and an economics professor at the University of Massachusetts to replicate the important findings of two famous Harvard monetary economics scientists, Carmen Reinhart and Kenneth Rogoff ---
http://en.wikipedia.org/wiki/Carmen_Reinhart#Research_and_publication

In 2013, Reinhart and Rogoff were in the spotlight after researchers discovered that their 2010 paper "Growth in a Time of Debt" in the American Economic Review Papers and Proceedings had a computational error. The work argued that debt above 90% of GDP was particularly harmful to economic growth, while corrections have shown that the negative correlation between debt and growth does not increase above 90%. A separate and previous criticism is that the negative correlation between debt and growth need not be causal. Rogoff and Reinhart claimed that their fundamental conclusions were accurate, despite the errors.

A review by Herndon, Ash and Pollin of [Reinhart's] widely cited paper with Rogoff, "Growth in a time of debt", argued that "coding errors, selective exclusion of available data, and unconventional weighting of summary statistics lead to serious errors that inaccurately represent the relationship between public debt and GDP growth among 20 advanced economies in the post-war period."

Their error detection, which received worldwide attention, demonstrates that high-debt countries grew at 2.2 percent, rather than the −0.1 percent figure claimed by Reinhart and Rogoff.
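To see why the corrections mattered so much, here is a hedged illustration with made-up numbers (these are not the Reinhart-Rogoff data; they only mimic the type of error Herndon, Ash, and Pollin found): dropping a few rows and weighting each country equally instead of weighting by country-years can flip the sign of the average growth rate.

# Made-up numbers illustrating how exclusion and weighting choices move an average.
# These are NOT the Reinhart-Rogoff data; they only mimic the kind of error found.
import numpy as np

# (country, mean growth in its high-debt years, number of high-debt years) -- all hypothetical
episodes = [
    ("A", -7.6,  1),     # one bad year
    ("B", -2.0,  1),
    ("C",  2.4, 10),
    ("D",  2.5, 19),
    ("E",  2.6,  5),
]

growth = np.array([g for _, g, _ in episodes])
years  = np.array([t for _, _, t in episodes])

# Year-weighted mean over all countries (each country-year counts once).
full = np.average(growth, weights=years)

# "Spreadsheet" variant: drop the last rows and weight each remaining country equally.
dropped = growth[:3].mean()

print(f"all data, year-weighted : {full:+.2f}%")     # comes out positive
print(f"rows dropped, unweighted: {dropped:+.2f}%")  # comes out negative

A replicator working from the authors' own spreadsheet can detect this kind of thing in an afternoon; without access to the data and code, nobody can.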

I'm critical of this replication example in one respect. Why did it take over two years? In chemistry such an important finding would've most likely been replicated in weeks or months rather than years.

Thus we do often have a difference between the natural sciences and the social sciences with respect to how quickly replications transpire. In the natural sciences it is common for journals to not even publish findings before they've been replicated. The social sciences, also known as the softer sciences, are frequently softer with respect to the timing of replications.


DATABASE BIASES AND ERRORS
My casual studies of accountics science articles suggest that over 90% of those studies rely exclusively on one or more public databases whenever the studies use data. I find little accountics science research into the biases and errors of those databases. Here's a short listing of research into these biases and errors, some of which was published by accountics scientists ---
 

DATABASE BIASES AND ERRORS ---
http://www.kellogg.northwestern.edu/rc/crsp-cstat-references.htm

This page provides references for articles that study specific aspects of CRSP, Compustat and other popular sources of data used by researchers at Kellogg. If you know of any additional references, please e-mail researchcomputing-help@kellogg.northwestern.edu.
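As a sketch of the kind of database-error auditing I am asking for, the Python fragment below merges the same firm-year item from two hypothetical vendor files and flags material discrepancies rather than trusting either source. File and column names are placeholders; in practice the merge keys would be something like CUSIP and fiscal year.

# Sketch of a cross-database consistency check; file and column names are hypothetical.
import pandas as pd

a = pd.read_csv("vendor_a_fundamentals.csv")    # e.g., total assets from one vendor
b = pd.read_csv("vendor_b_fundamentals.csv")    # the same item from a second vendor

merged = a.merge(b, on=["firm_id", "fiscal_year"], suffixes=("_a", "_b"))

# Relative difference in the reported item between the two vendors.
merged["rel_diff"] = ((merged["total_assets_a"] - merged["total_assets_b"]).abs()
                      / merged[["total_assets_a", "total_assets_b"]].abs().max(axis=1))

flagged = merged[merged["rel_diff"] > 0.01]     # more than 1% apart
print(f"{len(flagged)} of {len(merged)} firm-years differ by more than 1%")
print(flagged[["firm_id", "fiscal_year", "total_assets_a", "total_assets_b", "rel_diff"]].head())

Coverage gaps (firm-years present in one file but missing from the other) are themselves evidence of the database biases catalogued above and can be tallied with an outer join.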

What went wrong with accountics science?
http://www.trinity.edu/rjensen/Theory01.htm#WhatWentWrong

 


October 21, 2013 message from Dan Stone

A recent article in "The Economist" decries the absence of replication in
science.

short url:
http://tinyurl.com/lepu6zz

http://www.economist.com/news/leaders/21588069-scientific-research-has-changed-world-now-it-needs-change-itself-how-science-goes-wrong


 

October 21, 2013 reply from Bob Jensen

I read The Economist every week and usually respect it sufficiently to quote it a lot. But sometimes articles disappoint me as an academic in search of evidence for controversial assertions like the one you link to about declining replication in the sciences.

Dartmouth Professor Nyhan paints a somewhat similar picture in which some of the leading medical journals now "tend to fail to replicate." However, other journals that he mentions are requiring replication archives and replication audits. It seems to me that some top science journals are becoming more concerned about the validity of research findings while perhaps others have become more lax.

"Academic reforms: A four-part proposal," by Brendon Nyhan, April 16, 2013 ---
http://www.brendan-nyhan.com/blog/2012/04/academic-reforms-a-four-part-proposal.html

The "collaborative replication" idea has become a big deal. I have a former psychology colleague at Trinity who has a stellar reputation for empirical brain research in memory. She tells me that she does not submit articles any more until they have been independently replicated by other experts.

It may well be true that natural science journals have become negligent in requiring replication and in providing incentives to replicate. However, perhaps, because the social science journals have a harder time being believed, I think that some of their top journals have become more obsessed with replication.

In any case I don't know of any science that is less concerned with lack of replication than accountics science. TAR has a policy of not publishing replications or replication abstracts unless the replication is only incidental to extending the findings with new research findings. TAR also has a recent reputation of not encouraging commentaries on the papers it publishes.

Has TAR even published a commentary on any paper it published in recent years?

Have you encountered any recent investigations into errors in our most popular public databases in accountics science?

Thanks,
Bob Jensen

 

 


November 11, 2012
Before reading Sudipta's posting of a comment to one of my earlier postings on the AAA Commons, I would like to call your attention to the following two links:


How Accountics Scientists Should Change: 
"Frankly, Scarlett, after I get a hit for my resume in The Accounting Review I just don't give a damn"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm

Sudipta Basu has posted a new comment in Research Tools, on the post titled "Gaming Publications and Presentations at Academic...".

To view the comment (and 3 other comment(s) in the thread), or to post your own, visit: http://commons.aaahq.org/comment/19181

posted 05:13 PM EST by Sudipta Basu
Comment: You will probably love the new issue of Perspectives on Psychological Science (November 2012) which is entirely devoted to (lack of) Replication and other Research (mal)Practice issues in psychology (behavioral research). I think there is lots of thought-provoking material with implications for accounting research (not only of the accountics variety). The link for the current issue is (will change once the next issue is uploaded):

http://pps.sagepub.com/content/current

One website that provides useful documentation on errors in standard accountics databases, differences between databases, and their implications for previously published research is (even as I agree that many researchers pay little attention to these documented problems):

http://www.kellogg.northwestern.edu/rc/crsp-cstat-references.htm

I note that several accounting researchers appear as authors in the website above, although likely fewer than desired (possible biases in database coverage...)

 


Some Comments About Accountics Science Versus Real Science

This is the lead article in the May 2013 edition of The Accounting Review:
"On Estimating Conditional Conservatism"
Authors

Ray Ball (The University of Chicago)
S. P. Kothari (Massachusetts Institute of Technology)
Valeri V. Nikolaev (The University of Chicago)

The Accounting Review, Volume 88, No. 3, May 2013, pp. 755-788

The concept of conditional conservatism (asymmetric earnings timeliness) has provided new insight into financial reporting and stimulated considerable research since Basu (1997). Patatoukas and Thomas (2011) report bias in firm-level cross-sectional asymmetry estimates that they attribute to scale effects. We do not agree with their advice that researchers should avoid conditional conservatism estimates and inferences from research based on such estimates. Our theoretical and empirical analyses suggest the explanation is a correlated omitted variables problem that can be addressed in a straightforward fashion, including fixed-effects regression. Correlation between the expected components of earnings and returns biases estimates of how earnings incorporate the information contained in returns. Further, the correlation varies with returns, biasing asymmetric timeliness estimates. When firm-specific effects are taken into account, estimates do not exhibit the bias, are statistically and economically significant, are consistent with priors, and behave as a predictable function of book-to-market, size, and leverage.

. . .

We build on and provide a different interpretation of the anomalous evidence reported by PT. We begin by replicating their [Basu (1997). Patatoukas and Thomas (2011)] results. We then provide evidence that scale-related effects are not the explanation. We control for scale by sorting observations into relatively narrow portfolios based on price, such that within each portfolio approximately 99 percent of the cross-sectional variation in scale is eliminated. If scale effects explain the anomalous evidence, then it would disappear within these portfolios, but the estimated asymmetric timeliness remains considerable. We conclude that the data do not support the scale-related explanation.4 It thus becomes necessary to look for a better explanation.

Continued in article

Jensen Comment
The good news is that the earlier findings were replicated. This is not common in accountics science research. The bad news is that the replications took 16 years (for Basu 1997) and two years (for Patatoukas and Thomas 2011), respectively. And the probability that TAR will publish one or more commentaries on these findings is virtually zero.

How does this differ from real science?
In real science most findings are replicated before, or very quickly after, publication of scientific findings. And interest lies in the reproducible results themselves, without also requiring an extension of the research as a condition for publishing the replication outcomes.

In accountics science there is little incentive to perform exact replications since top accountics science journals neither demand such replications nor will they publish (even in commentaries) replication outcomes. A necessary condition for publishing replication outcomes in accountics science is to extend the research into new frontiers.

How long will it take for somebody to replicate these May 2013 findings of Ball, Kothari, and Nikolaev? If the past is any indicator of the future, the BKN findings will never be replicated. If they are replicated, it will most likely take years before we receive notice of such a replication in an extension of the BKN research published in 2013.


Epistemologists present several challenges to Popper's arguments
"Separating the Pseudo From Science," by Michael D. Gordon, Chronicle of Higher Education, September 17, 2012 ---
http://chronicle.com/article/Separating-the-Pseudo-From/134412/


Bridging the Gap Between Academic Accounting Research and Audit Practice
"Highlights of audit research:  Studies examine auditors' industry specialization, auditor-client negotiations, and executive confidence regarding earnings management,". By Cynthia E. Bolt-Lee and D. Scott Showalter, Journal of Accountancy, August 2012 ---
http://www.journalofaccountancy.com/Issues/2012/Jul/20125104.htm

Jensen Comment
This is a nice service of the AICPA in attempting to find accountics science articles most relevant to the practitioner world and to translate (in summary form) these articles for a practitioner readership.

Sadly, the service does not stress that research is of only limited relevance until it is validated in some way, at a minimum by encouraging critical commentaries and at a maximum by multiple, independent replications conducted to scientific standards ---
http://www.trinity.edu/rjensen/TheoryTAR.htm


Unlike real scientists, accountics scientists seldom replicate published accountics science research by the exacting standards of real science ---
http://www.trinity.edu/rjensen/TheoryTAR.htm#Replication

Multicollinearity --- http://en.wikipedia.org/wiki/Multicollinearity

Robust Statistics --- http://en.wikipedia.org/wiki/Robust_statistics

Robust statistics are statistics with good performance for data drawn from a wide range of probability distributions, especially for distributions that are not normally distributed. Robust statistical methods have been developed for many common problems, such as estimating location, scale and regression parameters. One motivation is to produce statistical methods that are not unduly affected by outliers. Another motivation is to provide methods with good performance when there are small departures from parametric distributions. For example, robust methods work well for mixtures of two normal distributions with different standard-deviations, for example, one and three; under this model, non-robust methods like a t-test work badly.

Continued in article
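To make the Wikipedia example above concrete, here is a minimal sketch of my own (the 10 percent contamination rate, effect size, and sample size are illustrative assumptions, not taken from the article) comparing the two-sample t-test with a rank-based alternative when the data come from a mixture of normal distributions with standard deviations of one and three:

```python
# Minimal sketch: rejection rates of a t-test versus a rank-based test when data come
# from a mixture of N(mu, 1) and N(mu, 3), as in the Wikipedia passage above.
# The contamination rate, effect size, and sample sizes are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def mixture_sample(n, mu, p_wide=0.1):
    """Draw n observations: with probability p_wide from N(mu, 3), else from N(mu, 1)."""
    wide = rng.random(n) < p_wide
    return np.where(wide, rng.normal(mu, 3.0, n), rng.normal(mu, 1.0, n))

reps, n, effect = 2000, 40, 0.5
reject_t = reject_rank = 0
for _ in range(reps):
    x = mixture_sample(n, 0.0)
    y = mixture_sample(n, effect)
    reject_t += stats.ttest_ind(x, y).pvalue < 0.05
    reject_rank += stats.mannwhitneyu(x, y, alternative="two-sided").pvalue < 0.05

print(f"t-test rejection rate:    {reject_t / reps:.2f}")
print(f"rank-test rejection rate: {reject_rank / reps:.2f}")
```

Under this kind of contamination the ordinary t-test typically loses power relative to robust or rank-based procedures, which is the point of the passage above.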

Jensen Comment
To this might be added that models that grow adaptively by adding components in sequence are not robust if the mere order in which components are added changes the outcome of the ultimate model.

David Johnstone wrote the following:

Indeed if you hold H0 the same and keep changing the model, you will eventually (generally soon) get a significant result, allowing “rejection of H0 at 5%”, not because H0 is necessarily false but because you have built upon a false model (of which there are zillions, obviously).

Jensen Comment
I spent a goodly part of two think-tank years trying in vain to invent robust adaptive regression and clustering models where I tried to adaptively reduce modeling error by adding missing variables and covariance components. To my great frustration I found that adaptive regression and cluster analysis seems to almost always suffer from lack of robustness. Different outcomes can be obtained simply because of the order in which new components are added to the model, i.e., ordering of inputs changes the model solutions.

Accountics scientists who declare they have "significant results" may also have non-robust results that they fail to analyze.

When you combine issues of non-robustness with the impossibility of testing for covariance, you have a real mess in accountics science and in econometrics generally.
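A minimal sketch of the order-dependence problem described above (a toy example of my own, not Jensen's original think-tank models): a greedy procedure adds each candidate regressor, in the order offered, whenever it is significant given the variables already included. With collinear candidates, simply permuting the candidate order changes which variables end up in the final model.

```python
# Minimal sketch: a greedy build-up procedure whose final "model" depends on the order
# in which collinear candidate regressors are offered. All data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
z = rng.normal(size=n)
X = np.column_stack([z + 0.3 * rng.normal(size=n) for _ in range(4)])  # four collinear proxies for z
y = z + rng.normal(size=n)

def greedy_build(order, alpha=0.05):
    chosen = []
    for j in order:
        cols = chosen + [j]
        fit = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
        if fit.pvalues[-1] < alpha:      # keep j only if significant given the current model
            chosen = cols
    return chosen

print("candidate order 0,1,2,3 -> variables kept:", greedy_build([0, 1, 2, 3]))
print("candidate order 3,2,1,0 -> variables kept:", greedy_build([3, 2, 1, 0]))
```

Reversing the candidate order typically leaves a different set of "significant" regressors in the final model even though the data are identical, which is the lack of robustness Jensen describes.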

It's relatively uncommon for accountics scientists to criticize each other's published works. A notable exception is as follows:
"Selection Models in Accounting Research," by Clive S. Lennox, Jere R. Francis, and Zitian Wang,  The Accounting Review, March 2012, Vol. 87, No. 2, pp. 589-616.

This study explains the challenges associated with the Heckman (1979) procedure to control for selection bias, assesses the quality of its application in accounting research, and offers guidance for better implementation of selection models. A survey of 75 recent accounting articles in leading journals reveals that many researchers implement the technique in a mechanical way with relatively little appreciation of important econometric issues and problems surrounding its use. Using empirical examples motivated by prior research, we illustrate that selection models are fragile and can yield quite literally any possible outcome in response to fairly minor changes in model specification. We conclude with guidance on how researchers can better implement selection models that will provide more convincing evidence on potential selection bias, including the need to justify model specifications and careful sensitivity analyses with respect to robustness and multicollinearity.

. . .

CONCLUSIONS

Our review of the accounting literature indicates that some studies have implemented the selection model in a questionable manner. Accounting researchers often impose ad hoc exclusion restrictions or no exclusion restrictions whatsoever. Using empirical examples and a replication of a published study, we demonstrate that such practices can yield results that are too fragile to be considered reliable. In our empirical examples, a researcher could obtain quite literally any outcome by making relatively minor and apparently innocuous changes to the set of exclusionary variables, including choosing a null set. One set of exclusion restrictions would lead the researcher to conclude that selection bias is a significant problem, while an alternative set involving rather minor changes would give the opposite conclusion. Thus, claims about the existence and direction of selection bias can be sensitive to the researcher's set of exclusion restrictions.

Our examples also illustrate that the selection model is vulnerable to high levels of multicollinearity, which can exacerbate the bias that arises when a model is misspecified (Thursby 1988). Moreover, the potential for misspecification is high in the selection model because inferences about the existence and direction of selection bias depend entirely on the researcher's assumptions about the appropriate functional form and exclusion restrictions. In addition, high multicollinearity means that the statistical insignificance of the inverse Mills' ratio is not a reliable guide as to the absence of selection bias. Even when the inverse Mills' ratio is statistically insignificant, inferences from the selection model can be different from those obtained without the inverse Mills' ratio. In this situation, the selection model indicates that it is legitimate to omit the inverse Mills' ratio, and yet, omitting the inverse Mills' ratio gives different inferences for the treatment variable because multicollinearity is then much lower.

In short, researchers are faced with the following trade-off. On the one hand, selection models can be fragile and suffer from multicollinearity problems, which hinder their reliability. On the other hand, the selection model potentially provides more reliable inferences by controlling for endogeneity bias if the researcher can find good exclusion restrictions, and if the models are found to be robust to minor specification changes. The importance of these advantages and disadvantages depends on the specific empirical setting, so it would be inappropriate for us to make a general statement about when the selection model should be used. Instead, researchers need to critically appraise the quality of their exclusion restrictions and assess whether there are problems of fragility and multicollinearity in their specific empirical setting that might limit the effectiveness of selection models relative to OLS.

Another way to control for unobservable factors that are correlated with the endogenous regressor (D) is to use panel data. Though it may be true that many unobservable factors impact the choice of D, as long as those unobservable characteristics remain constant during the period of study, they can be controlled for using a fixed effects research design. In this case, panel data tests that control for unobserved differences between the treatment group (D = 1) and the control group (D = 0) will eliminate the potential bias caused by endogeneity as long as the unobserved source of the endogeneity is time-invariant (e.g., Baltagi 1995; Meyer 1995; Bertrand et al. 2004). The advantages of such a difference-in-differences research design are well recognized by accounting researchers (e.g., Altamuro et al. 2005; Desai et al. 2006; Hail and Leuz 2009; Hanlon et al. 2008). As a caveat, however, we note that the time-invariance of unobservables is a strong assumption that cannot be empirically validated. Moreover, the standard errors in such panel data tests need to be corrected for serial correlation because otherwise there is a danger of over-rejecting the null hypothesis that D has no effect on Y (Bertrand et al. 2004).10

Finally, we note that there is a recent trend in the accounting literature to use samples that are matched based on their propensity scores (e.g., Armstrong et al. 2010; Lawrence et al. 2011). An advantage of propensity score matching (PSM) is that there is no MILLS variable and so the researcher is not required to find valid Z variables (Heckman et al. 1997; Heckman and Navarro-Lozano 2004). However, such matching has two important limitations. First, selection is assumed to occur only on observable characteristics. That is, the error term in the first stage model is correlated with the independent variables in the second stage (i.e., u is correlated with X and/or Z), but there is no selection on unobservables (i.e., u and υ are uncorrelated). In contrast, the purpose of the selection model is to control for endogeneity that arises from unobservables (i.e., the correlation between u and υ). Therefore, propensity score matching should not be viewed as a replacement for the selection model (Tucker 2010).

A second limitation arises if the treatment variable affects the company's matching attributes. For example, suppose that a company's choice of auditor affects its subsequent ability to raise external capital. This would mean that companies with higher quality auditors would grow faster. Suppose also that the company's characteristics at the time the auditor is first chosen cannot be observed. Instead, we match at some stacked calendar time where some companies have been using the same auditor for 20 years and others for not very long. Then, if we matched on company size, we would be throwing out the companies that have become large because they have benefited from high-quality audits. Such companies do not look like suitable “matches,” insofar as they are much larger than the companies in the control group that have low-quality auditors. In this situation, propensity matching could bias toward a non-result because the treatment variable (auditor choice) affects the company's matching attributes (e.g., its size). It is beyond the scope of this study to provide a more thorough assessment of the advantages and disadvantages of propensity score matching in accounting applications, so we leave this important issue to future research.

Jensen Comment
To this we might add that it's impossible in these linear models to test for multicollinearity.
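For readers who have not worked through the Heckman (1979) two-step procedure that Lennox, Francis, and Wang examine, here is a minimal sketch on simulated data (all parameter values are my own assumptions). It runs the first-stage probit, forms the inverse Mills ratio, and estimates the second-stage OLS twice, once with an exclusion restriction (a variable Z that appears only in the selection equation) and once without, to show how sensitive the selection-bias inference can be to that single choice.

```python
# Minimal sketch of the Heckman two-step estimator on simulated data.
# The error correlation (0.6) and coefficients are illustrative assumptions.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 5000
x = rng.normal(size=n)
z = rng.normal(size=n)                                   # candidate exclusion restriction
u, v = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], n).T   # correlated errors
select = (0.5 * x + 1.0 * z + u) > 0                     # selection equation
y = 1.0 + 2.0 * x + v                                    # outcome, observed only when selected

def heckman_two_step(first_stage_vars):
    W = sm.add_constant(np.column_stack(first_stage_vars))
    probit = sm.Probit(select.astype(float), W).fit(disp=0)
    xb = W @ probit.params                               # first-stage linear index
    mills = norm.pdf(xb) / norm.cdf(xb)                  # inverse Mills ratio
    X2 = sm.add_constant(np.column_stack([x[select], mills[select]]))
    return sm.OLS(y[select], X2).fit()

with_excl = heckman_two_step([x, z])                     # Z excluded from the outcome equation
without_excl = heckman_two_step([x])                     # no exclusion restriction at all
print("IMR coefficient with exclusion restriction:    %.3f (p = %.3f)"
      % (with_excl.params[-1], with_excl.pvalues[-1]))
print("IMR coefficient without exclusion restriction: %.3f (p = %.3f)"
      % (without_excl.params[-1], without_excl.pvalues[-1]))
```

With the exclusion restriction the inverse Mills ratio coefficient should land near the value implied by the simulated error correlation; without it, the ratio is nearly collinear with the included regressor, so the estimate and its inference typically shift noticeably. That is the fragility and multicollinearity problem the article documents.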


David Johnstone posted the following message on the AECM Listserv on November 19, 2013:

An interesting aspect of all this is that there is a widespread a priori or learned belief in empirical research that all and only what you have to do to get meaningful results is to get data and run statistics packages, and that the more advanced the stats the better. It's then just a matter of turning the handle. Admittedly it takes a lot of effort to get very proficient at this kind of work, but the presumption that it will naturally lead to reliable knowledge is an act of faith, like a religious tenet. What needs to be taken into account is that the human systems (markets, accounting reporting, asset pricing etc.) are madly complicated and likely changing structurally continuously. So even with the best intents and best methods, there is no guarantee of reliable or lasting findings a priori, no matter what “rigor” has gone in.

 

Part and parcel of the presumption that empirical research methods are automatically “it” is the even stronger position that no other type of work is research. I come across this a lot. I just had a 4th year Hons student do his thesis, he was particularly involved in the superannuation/pension fund industry, and he did a lot of good practical stuff, thinking about risks that different fund allocations present, actuarial life expectancies etc. The two young guys (late 20s) grading this thesis, both excellent thinkers and not zealots about anything, both commented to me that the thesis was weird and was not really a thesis like they would have assumed necessary (electronic data bases with regressions etc.). They were still generous in their grading, and the student did well, and it was only their obvious astonishment that there is any kind of worthy work other than the formulaic-empirical that astonished me. This represents a real narrowing of mind in academe, almost like a tendency to dark age, and cannot be good for us long term. In Australia the new push is for research “impact”, which seems to include industry relevance, so that presents a hope for a cultural widening.

 

I have been doing some work with a lawyer-PhD student on valuation in law cases/principles, and this has caused similar raised eyebrows and genuine intrigue with young colleagues – they just have never heard of such stuff, and only read the journals/specific papers that do what they do. I can sense their interest, and almost envy of such freedom, as they are all worrying about how to compete and make a long term career as an academic in the new academic world.

 

 


"Good Old R-Squared," by David Giles, Econometrics Beat:  Dave Giles’ Blog, University of Victoria, June 24, 2013 ---
http://davegiles.blogspot.com/2013/05/good-old-r-squared.html 

My students are often horrified when I tell them, truthfully, that one of the last pieces of information that I look at when evaluating the results of an OLS regression, is the coefficient of determination (R2), or its "adjusted" counterpart. Fortunately, it doesn't take long to change their perspective!

After all, we all know that with time-series data, it's really easy to get a "high" R2 value, because of the trend components in the data. With cross-section data, really low R2 values are really common. For most of us, the signs, magnitudes, and significance of the estimated parameters are of primary interest. Then we worry about testing the assumptions underlying our analysis. R2 is at the bottom of the list of priorities.

Continued in article

Also see http://davegiles.blogspot.com/2013/07/the-adjusted-r-squared-again.html

Bob Jensen's threads on validity testing in accountics science ---
http://www.trinity.edu/rjensen/TheoryTAR.htm
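A minimal sketch of Giles's point about trending data (my own simulation, not from his post): regressing one independent random walk on another routinely produces a large R-squared, which is why the statistic tells you so little by itself.

```python
# Minimal sketch: spuriously high R-squared from regressing one independent random walk
# on another. The series lengths and number of repetitions are arbitrary choices.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
r2 = []
for _ in range(500):
    y = np.cumsum(rng.normal(size=200))       # one random walk
    x = np.cumsum(rng.normal(size=200))       # an independent random walk
    r2.append(sm.OLS(y, sm.add_constant(x)).fit().rsquared)

print("median R-squared across 500 regressions of one independent random walk on another: %.2f"
      % np.median(r2))
```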


"Can You Actually TEST for Multicollinearity?" by David Giles, Econometrics Beat:  Dave Giles’ Blog, University of Victoria, June 24, 2013 ---
http://davegiles.blogspot.com/2013/06/can-you-actually-test-for.html

. . .

Now, let's return to the "problem" of multicollinearity.

 
What do we mean by this term, anyway? This turns out to be the key question!

 
Multicollinearity is a phenomenon associated with our particular sample of data when we're trying to estimate a regression model. Essentially, it's a situation where there is insufficient information in the sample of data to enable us to draw "reliable" inferences about the individual parameters of the underlying (population) model.


I'll be elaborating more on the "informational content" aspect of this phenomenon in a follow-up post. Yes, there are various sample measures that we can compute and report, to help us gauge how severe this data "problem" may be. But they're not statistical tests, in any sense of the word.

 

Because multicollinearity is a characteristic of the sample, and not a characteristic of the population, you should immediately be suspicious when someone starts talking about "testing for multicollinearity". Right?


Apparently not everyone gets it!


There's an old paper by Farrar and Glauber (1967) which, on the face of it might seem to take a different stance. In fact, if you were around when this paper was published (or if you've bothered to actually read it carefully), you'll know that this paper makes two contributions. First, it provides a very sensible discussion of what multicollinearity is all about. Second, the authors take some well known results from the statistics literature (notably, by Wishart, 1928; Wilks, 1932; and Bartlett, 1950) and use them to give "tests" of the hypothesis that the regressor matrix, X, is orthogonal.


How can this be? Well, there's a simple explanation if you read the Farrar and Glauber paper carefully, and note what assumptions are made when they "borrow" the old statistics results. Specifically, there's an explicit (and necessary) assumption that in the population the X matrix is random, and that it follows a multivariate normal distribution.


This assumption is, of course totally at odds with what is usually assumed in the linear regression model! The "tests" that Farrar and Glauber gave us aren't really tests of multicollinearity in the sample. Unfortunately, this point wasn't fully appreciated by everyone.


There are some sound suggestions in this paper, including looking at the sample multiple correlations between each regressor, and all of the other regressors. These, and other sample measures such as variance inflation factors, are useful from a diagnostic viewpoint, but they don't constitute tests of "zero multicollinearity".


So, why am I even mentioning the Farrar and Glauber paper now?


Well, I was intrigued to come across some STATA code (Shehata, 2012) that allows one to implement the Farrar and Glauber "tests". I'm not sure that this is really very helpful. Indeed, this seems to me to be a great example of applying someone's results without understanding (bothering to read?) the assumptions on which they're based!


Be careful out there - and be highly suspicious of strangers bearing gifts!


 
References

 
Bartlett, M. S., 1950. Tests of significance in factor analysis. British Journal of Psychology, Statistical Section, 3, 77-85.

 
Farrar, D. E. and R. R. Glauber, 1967. Multicollinearity in regression analysis: The problem revisited.  Review of Economics and Statistics, 49, 92-107.

 
Shehata, E. A. E., 2012. FGTEST: Stata module to compute Farrar-Glauber Multicollinearity Chi2, F, t tests.

Wilks, S. S., 1932. Certain generalizations in the analysis of variance. Biometrika, 24, 477-494.

Wishart, J., 1928. The generalized product moment distribution in samples from a multivariate normal population. Biometrika, 20A, 32-52.

Bob Jensen's threads on validity testing in accountics science ---
http://www.trinity.edu/rjensen/TheoryTAR.htm
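As a companion to Giles's warning, here is a minimal sketch (illustrative data of my own) that computes the diagnostic measures he does endorse, variance inflation factors and each regressor's multiple correlation with the other regressors. These describe the sample at hand; they are not hypothesis tests of "zero multicollinearity."

```python
# Minimal sketch: variance inflation factors and implied multiple R-squared for each
# regressor against the others. The near-collinearity between x1 and x2 is simulated.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(4)
n = 500
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + 0.05 * rng.normal(size=n)    # nearly collinear with x1
x3 = rng.normal(size=n)
X = sm.add_constant(np.column_stack([x1, x2, x3]))

for j, name in enumerate(["x1", "x2", "x3"], start=1):    # skip the constant in column 0
    vif = variance_inflation_factor(X, j)
    r2_j = 1.0 - 1.0 / vif                                # multiple R2 of this regressor on the rest
    print(f"{name}: VIF = {vif:8.1f}, R2 against the other regressors = {r2_j:.3f}")
```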


"Statistical Significance - Again " by David Giles, Econometrics Beat:  Dave Giles’ Blog, University of Victoria, December 28, 2013 ---
http://davegiles.blogspot.com/2013/12/statistical-significance-again.html

Statistical Significance - Again

 
With all of this emphasis on "Big Data", I was pleased to see this post on the Big Data Econometrics blog, today.

 
When you have a sample that runs to the thousands (billions?), the conventional significance levels of 10%, 5%, 1% are completely inappropriate. You need to be thinking in terms of tiny significance levels.

 
I discussed this in some detail back in April of 2011, in a post titled, "Drawing Inferences From Very Large Data-Sets". If you're one of those (many) applied researchers who use large cross-sections of data, and then sprinkle the results tables with asterisks to signal "significance" at the 5%, 10% levels, etc., then I urge you to read that earlier post.

 
It's sad to encounter so many papers and seminar presentations in which the results, in reality, are totally insignificant!

Also see
"Drawing Inferences From Very Large Data-Sets,"   by David Giles, Econometrics Beat:  Dave Giles’ Blog, University of Victoria, April 26, 2013 ---
http://davegiles.blogspot.ca/2011/04/drawing-inferences-from-very-large-data.html

. . .

Granger (1998; 2003) has reminded us that if the sample size is sufficiently large, then it's virtually impossible not to reject almost any hypothesis. So, if the sample is very large and the p-values associated with the estimated coefficients in a regression model are of the order of, say, 0.10 or even 0.05, then this is really bad news. Much, much smaller p-values are needed before we get all excited about 'statistically significant' results when the sample size is in the thousands, or even bigger. So, the p-values reported above are mostly pretty marginal, as far as significance is concerned. When you work out the p-values for the other 6 models I mentioned, they range from 0.005 to 0.460. I've been generous in the models I selected.

Here's another set of  results taken from a second, really nice, paper by
Ciecieriski et al. (2011) in the same issue of Health Economics:

Continued in article

Jensen Comment
My research suggests that over 90% of the recent papers published in TAR use purchased databases that provide enormous sample sizes. Their accountics science authors keep reporting those meaningless levels of statistical significance.

What is even worse is when meaningless statistical significance tests are used to support decisions.

Bob Jensen's threads on the way analysts, particularly accountics scientists, cheer for the statistical significance of large-sample outcomes even when the results are substantively insignificant, such as R2 values of .0001 ---
The Cult of Statistical Significance: How Standard Error Costs Us Jobs, Justice, and Lives ---
http://www.cs.trinity.edu/~rjensen/temp/DeirdreMcCloskey/StatisticalSignificance01.htm
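A minimal sketch of the problem Jensen and Giles describe (simulated data, my own parameter choices): with a sample in the hundreds of thousands, a regressor that explains almost nothing of the variation, an R-squared on the order of .0001, still clears the conventional 5 percent hurdle by a wide margin.

```python
# Minimal sketch: a trivially small effect is "statistically significant" at huge n.
# Sample size and slope are illustrative choices giving a true R-squared near 0.0001.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 500_000
x = rng.normal(size=n)
y = 0.01 * x + rng.normal(size=n)             # true R-squared is about 0.0001

fit = sm.OLS(y, sm.add_constant(x)).fit()
print(f"R-squared = {fit.rsquared:.5f}, slope p-value = {fit.pvalues[1]:.2e}")
```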


"Solution to Regression Problem," by David Giles, Econometrics Beat:  Dave Giles’ Blog, University of Victoria, December 26, 2013 ---
http://davegiles.blogspot.com/2013/12/solution-to-regression-problem.html

O.K. - you've had long enough to think about that little regression problem I posed the other day. It's time to put you out of your misery!

 
Here's the problem again, with a solution.


Problem:
Suppose that we estimate the following regression model by OLS:

 
                     yi = α + β xi + εi .

 
The model has a single regressor, x, and the point estimate of β turns out to be 10.0.

 
Now consider the "reverse regression", based on exactly the same data:

 
                    xi = a + b yi + ui .

 
What can we say about the value of the OLS point estimate of b?
 
Solution:

Continued in article
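For readers who do not click through, here is a sketch of the standard result behind the puzzle (my own note, not Giles's wording): the two OLS slope estimates satisfy beta-hat times b-hat equals the squared sample correlation, so with beta-hat = 10 the reverse-regression slope is r-squared divided by 10, at most 0.1 and of the same sign as beta-hat.

```python
# Minimal sketch verifying the identity beta_hat * b_hat = r^2 on simulated data.
# The true slope of 10 and the noise level are illustrative choices.
import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(size=1000)
y = 10.0 * x + rng.normal(scale=5.0, size=1000)

beta_hat = np.cov(x, y)[0, 1] / np.var(x, ddof=1)     # OLS slope of y on x
b_hat = np.cov(x, y)[0, 1] / np.var(y, ddof=1)        # OLS slope of the reverse regression
r2 = np.corrcoef(x, y)[0, 1] ** 2

print(f"beta_hat = {beta_hat:.3f}, b_hat = {b_hat:.4f}, "
      f"beta_hat * b_hat = {beta_hat * b_hat:.4f}, r^2 = {r2:.4f}")
```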


David Giles' Top Five Econometrics Blog Postings for 2013 ---
Econometrics Beat:  Dave Giles’ Blog, University of Victoria, December 31, 2013 ---
http://davegiles.blogspot.com/2013/12/my-top-5-for-2013.html

Everyone seems to be doing it at this time of the year. So, here are the five most popular new posts on this blog in 2013:
  1. Econometrics and "Big Data"
  2. Ten Things for Applied Econometricians to Keep in Mind
  3. ARDL Models - Part II - Bounds Tests
  4. The Bootstrap - A Non-Technical Introduction
  5. ARDL Models - Part I

Thanks for reading, and for your comments.

Happy New Year!

Jensen Comment
I really like the way David Giles thinks and writes about econometrics. He does not pull his punches about validity testing.
Bob Jensen's threads on validity testing in accountics science ---
http://www.trinity.edu/rjensen/TheoryTAR.htm


The Insignificance of Testing the Null

"Statistics: reasoning on uncertainty, and the insignificance of testing null," by Esa Läärä
Ann. Zool. Fennici 46: 138–157
ISSN 0003-455X (print), ISSN 1797-2450 (online)
Helsinki 30 April 2009 © Finnish Zoological and Botanical Publishing Board 2009
http://www.sekj.org/PDF/anz46-free/anz46-138.pdf

The practice of statistical analysis and inference in ecology is critically reviewed. The dominant doctrine of null hypothesis significance testing (NHST) continues to be applied ritualistically and mindlessly. This dogma is based on superficial understanding of elementary notions of frequentist statistics in the 1930s, and is widely disseminated by influential textbooks targeted at biologists. It is characterized by silly null hypotheses and mechanical dichotomous division of results being “significant” (P < 0.05) or not. Simple examples are given to demonstrate how distant the prevalent NHST malpractice is from the current mainstream practice of professional statisticians. Masses of trivial and meaningless “results” are being reported, which are not providing adequate quantitative information of scientific interest. The NHST dogma also retards progress in the understanding of ecological systems and the effects of management programmes, which may at worst contribute to damaging decisions in conservation biology. In the beginning of this millennium, critical discussion and debate on the problems and shortcomings of NHST has intensified in ecological journals. Alternative approaches, like basic point and interval estimation of effect sizes, likelihood-based and information theoretic methods, and the Bayesian inferential paradigm, have started to receive attention. Much is still to be done in efforts to improve statistical thinking and reasoning of ecologists and in training them to utilize appropriately the expanded statistical toolbox. Ecologists should finally abandon the false doctrines and textbooks of their previous statistical gurus. Instead they should more carefully learn what leading statisticians write and say, collaborate with statisticians in teaching, research, and editorial work in journals.

 

Jensen Comment
And to think Alpha (Type 1) error is the easy part. Does anybody ever test for the more important Beta (Type 2) error? I think some engineers test for Type 2 error with Operating Characteristic (OC) curves, but these are generally applied in tightly controlled experiments such as quality control testing.

Beta Error --- http://en.wikipedia.org/wiki/Beta_error#Type_II_error
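A minimal sketch of the Type II error calculation Jensen is asking about (normal-approximation formulas and illustrative numbers of my own): the chance of failing to detect a modest effect at alpha = 0.05 in a two-sample comparison, across several sample sizes.

```python
# Minimal sketch: power and Type II error of a two-sample z-test by normal approximation.
# The effect size of 0.2 standard deviations and the sample sizes are illustrative.
from scipy.stats import norm

alpha, effect = 0.05, 0.2
z_crit = norm.ppf(1 - alpha / 2)

for n_per_group in (25, 100, 400, 1600):
    noncentrality = effect * (n_per_group / 2) ** 0.5
    power = 1 - norm.cdf(z_crit - noncentrality) + norm.cdf(-z_crit - noncentrality)
    print(f"n per group = {n_per_group:5d}: power = {power:.2f}, Type II error = {1 - power:.2f}")
```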

The Cult of Statistical Significance

The Cult of Statistical Significance:  How Standard Error Costs Us Jobs, Justice, and Lives, by Stephen T. Ziliak and Deirdre N. McCloskey (Ann Arbor:  University of Michigan Press, ISBN-13: 978-0-472-05007-9, 2007)
http://www.cs.trinity.edu/~rjensen/temp/DeirdreMcCloskey/StatisticalSignificance01.htm

Page 206
Like scientists today in medical and economic and other sizeless sciences, Pearson mistook a large sample size for the definite, substantive significance---evidence, as Hayek put it, of "wholes." But it was, as Hayek said, "just an illusion." Pearson's columns of sparkling asterisks, though quantitative in appearance and as appealing as the simple truth of the sky, signified nothing.

pp. 250-251
The textbooks are wrong. The teaching is wrong. The seminar you just attended is wrong. The most prestigious journal in your scientific field is wrong.

You are searching, we know, for ways to avoid being wrong. Science, as Jeffreys said, is mainly a series of approximations to discovering the sources of error. Science is a systematic way of reducing wrongs, or can be. Perhaps you feel frustrated by the random epistemology of the mainstream and don't know what to do. Perhaps you've been sedated by significance and lulled into silence. Perhaps you sense that the power of a Rothamsted test against a plausible Dublin alternative is statistically speaking low, but you feel oppressed by the instrumental variable one should dare not to wield. Perhaps you feel frazzled by what Morris Altman (2004) called the "social psychology rhetoric of fear," the deeply embedded path dependency that keeps the abuse of significance in circulation. You want to come out of it. But perhaps you are cowed by the prestige of Fisherian dogma. Or, worse thought, perhaps you are cynically willing to be corrupted if it will keep a nice job.

 

 

 


Thank you, Jagdish, for adding another doubt about the validity of more than four decades of accountics science worship.
"Weak statistical standards implicated in scientific irreproducibility: One-quarter of studies that meet commonly used statistical cutoff may be false." by Erika Check Hayden, Nature, November 11, 2013 ---
http://www.nature.com/news/weak-statistical-standards-implicated-in-scientific-irreproducibility-1.14131

 The plague of non-reproducibility in science may be mostly due to scientists’ use of weak statistical tests, as shown by an innovative method developed by statistician Valen Johnson, at Texas A&M University in College Station.

Johnson compared the strength of two types of tests: frequentist tests, which measure how unlikely a finding is to occur by chance, and Bayesian tests, which measure the likelihood that a particular hypothesis is correct given data collected in the study. The strength of the results given by these two types of tests had not been compared before, because they ask slightly different types of questions.

So Johnson developed a method that makes the results given by the tests — the P value in the frequentist paradigm, and the Bayes factor in the Bayesian paradigm — directly comparable. Unlike frequentist tests, which use objective calculations to reject a null hypothesis, Bayesian tests require the tester to define an alternative hypothesis to be tested — a subjective process. But Johnson developed a 'uniformly most powerful' Bayesian test that defines the alternative hypothesis in a standard way, so that it “maximizes the probability that the Bayes factor in favor of the alternate hypothesis exceeds a specified threshold,” he writes in his paper. This threshold can be chosen so that Bayesian tests and frequentist tests will both reject the null hypothesis for the same test results.

Johnson then used these uniformly most powerful tests to compare P values to Bayes factors. When he did so, he found that a P value of 0.05 or less (commonly considered evidence in support of a hypothesis in fields such as social science, in which non-reproducibility has become a serious issue) corresponds to Bayes factors of between 3 and 5, which are considered weak evidence to support a finding.

False positives

Indeed, as many as 17–25% of such findings are probably false, Johnson calculates1. He advocates for scientists to use more stringent P values of 0.005 or less to support their findings, and thinks that the use of the 0.05 standard might account for most of the problem of non-reproducibility in science — even more than other issues, such as biases and scientific misconduct.

“Very few studies that fail to replicate are based on P values of 0.005 or smaller,” Johnson says.

Some other mathematicians said that though there have been many calls for researchers to use more stringent tests2, the new paper makes an important contribution by laying bare exactly how lax the 0.05 standard is.

“It shows once more that standards of evidence that are in common use throughout the empirical sciences are dangerously lenient,” says mathematical psychologist Eric-Jan Wagenmakers of the University of Amsterdam. “Previous arguments centered on ‘P-hacking’, that is, abusing standard statistical procedures to obtain the desired results. The Johnson paper shows that there is something wrong with the P value itself.”

Other researchers, though, said it would be difficult to change the mindset of scientists who have become wedded to the 0.05 cutoff. One implication of the work, for instance, is that studies will have to include more subjects to reach these more stringent cutoffs, which will require more time and money.

“The family of Bayesian methods has been well developed over many decades now, but somehow we are stuck to using frequentist approaches,” says physician John Ioannidis of Stanford University in California, who studies the causes of non-reproducibility. “I hope this paper has better luck in changing the world.”
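Johnson's own calculation is not reproduced here, but a related and widely cited calibration, the Sellke-Bayarri-Berger bound, makes the same qualitative point with one line of arithmetic: for p < 1/e the Bayes factor in favor of the alternative can be no larger than 1/(-e*p*ln(p)). A minimal sketch:

```python
# Minimal sketch: the Sellke-Bayarri-Berger upper bound on the Bayes factor for the
# alternative hypothesis implied by a given p-value (valid for p < 1/e).
import math

for p in (0.05, 0.01, 0.005):
    max_bf = 1.0 / (-math.e * p * math.log(p))
    print(f"p = {p:.3f}: maximum Bayes factor in favor of H1 is about {max_bf:.1f}")
```

At p = 0.05 the bound is roughly 2.5, weak evidence on the usual Bayes-factor scales, while the 0.005 threshold Johnson recommends corresponds to a bound near 14.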

574 Shields Against Validity Challenges in Plato's Cave
An Appeal for Replication and Commentaries in Accountics Science
http://www.trinity.edu/rjensen/TheoryTAR.htm


 

April 11, 2012 reply by Steve Kachelmeier

Thank you for acknowledging this, Bob. I've tried to offer other examples of critical replications before, so it is refreshing to see you identify one. I agree that the Lennox et al. (2012) article is a great example of the type of thing for which you have long been calling, and I was proud to have been the accepting editor on their article.

Steve Kachelmeier

April 11, 2012 reply by Bob Jensen

Hi Steve

I really do hate to be negative so often, but even in the excellent Lennox et al. study I have one complaint to raise about the purpose of the replication. In real science, the purpose of most replications is driven by interest in the conclusions (findings) more than in the methods or techniques. The main purpose of the Lennox et al. study was more one of validating model robustness rather than the findings themselves, which are validated more or less incidentally to the main purpose.

Respectfully,
Bob Jensen

April 12, 2012 reply by Steve Kachelmeier

Fair enough Bob. But those other examples exist also, and one immediately came to mind as I read your reply. Perhaps at some point you really ought to take a look at Shaw and Zhang, "Is CEO Cash Compensation Punished for Poor Firm Performance?" The Accounting Review, May 2010. It's an example I've raised before. Perhaps there are not as many of these as there should be, but they do exist, and in greater frequency than you acknowledge.

Best,
Steve

April 12, 2012 reply by Bob Jensen

Firstly, I might note that in the past you and I have differed as to what constitutes "replication research" in science. I stick by my definitions ---
http://www.trinity.edu/rjensen/TheoryTAR.htm#Replication

In your previous reply you drew our attention to the following article:
"Is CEO Cash Compensation Punished for Poor Firm Performance?" by Kenneth W. Shaw and May H. Zhang, The Accounting Review, May 2010 ---
http://aaajournals.org/doi/pdf/10.2308/accr.2010.85.3.1065

ABSTRACT:
Leone et al. (2006) conclude that CEO cash compensation is more sensitive to negative stock returns than to positive stock returns, due to Boards of Directors enforcing an ex post settling up on CEOs. Dechow (2006) conjectures that Leone et al.’s 2006 results might be due to the sign of stock returns misclassifying firm performance. Using three-way performance partitions, we find no asymmetry in CEO cash compensation for firms with low stock returns. Further, we find that CEO cash compensation is less sensitive to poor earnings performance than it is to better earnings performance. Thus, we find no evidence consistent with ex post settling up for poor firm performance, even among the very worst performing firms with strong corporate governance. We find similar results when examining changes in CEO bonus pay and when partitioning firm performance using earnings-based measures. In sum, our results suggest that CEO cash compensation is not punished for poor firm performance.

The above Shaw and Zhang study does indeed replicate an earlier study and is critical of that earlier study. Shaw and Zhang then extend that earlier research. As such it is a great step in the right direction since there are so few similar replications in accountics science research.

My criticisms of TAR and accountics science, however, still are valid.
Note that it took four years before the Leone (2006) study was replicated. In real science the replication research commences on the date studies are published or even before. Richard Sansing provided me with his own accountics science replication effort, but that one took seven years after the study being replicated was published.

Secondly, replications are not even mentioned in TAR unless these replications significantly extend or correct the original publications in what are literally new studies being published. In real science, journals have outlets for mentioning replication research that simply validates the original research without having to significantly extend or correct that research.

What TAR needs to do to encourage more replication efforts in accountics science is to provide an outlet for commentaries on published studies, possibly in a manner styled after the Journal of Electroanalytical Chemistry (JEC) that publishes short versions of replication studies. I mention this journal because one of its famous published studies on cold fusion in 1989 could not (at least not yet) be replicated. The inability of any researchers worldwide to replicate that study destroyed the stellar reputations of the original authors Stanley Pons and Martin Fleischmann.

Others who were loose with their facts:  former Harvard researcher John Darsee (faked cardiac research); radiologist Robert Slutsky (altered data; lied); obstetrician William McBride (changed data, ruined stellar reputation), and physicist J. Hendrik Schon (faked breakthroughs in molecular electronics).
Discover Magazine, December 2010, Page 43

See http://www.trinity.edu/rjensen/TheoryTAR.htm#TARversusJEC

In any case, I hope you will continue to provide the AECM illustrations of replication efforts in accountics science. Maybe one day accountics science will grow into real science and, hopefully, also become more of interest to the outside world.

Respectfully,
Bob Jensen

 


Replication Paranoia:  Can you imagine anything like this happening in accountics science?

"Is Psychology About to Come Undone?" by Tom Bartlett, Chronicle of Higher Education, April 17, 2012 --- Click Here
http://chronicle.com/blogs/percolator/is-psychology-about-to-come-undone/29045?sid=at&utm_source=at&utm_medium=en

If you’re a psychologist, the news has to make you a little nervous—particularly if you’re a psychologist who published an article in 2008 in any of these three journals: Psychological Science, the Journal of Personality and Social Psychology, or the Journal of Experimental Psychology: Learning, Memory, and Cognition.

Because, if you did, someone is going to check your work. A group of researchers have already begun what they’ve dubbed the Reproducibility Project, which aims to replicate every study from those three journals for that one year. The project is part of Open Science Framework, a group interested in scientific values, and its stated mission is to “estimate the reproducibility of a sample of studies from the scientific literature.” This is a more polite way of saying “We want to see how much of what gets published turns out to be bunk.”

For decades, literally, there has been talk about whether what makes it into the pages of psychology journals—or the journals of other disciplines, for that matter—is actually, you know, true. Researchers anxious for novel, significant, career-making findings have an incentive to publish their successes while neglecting to mention their failures. It’s what the psychologist Robert Rosenthal named “the file drawer effect.” So if an experiment is run ten times but pans out only once you trumpet the exception rather than the rule. Or perhaps a researcher is unconsciously biasing a study somehow. Or maybe he or she is flat-out faking results, which is not unheard of. Diederik Stapel, we’re looking at you.

So why not check? Well, for a lot of reasons. It’s time-consuming and doesn’t do much for your career to replicate other researchers’ findings. Journal editors aren’t exactly jazzed about publishing replications. And potentially undermining someone else’s research is not a good way to make friends.

Brian Nosek knows all that and he’s doing it anyway. Nosek, a professor of psychology at the University of Virginia, is one of the coordinators of the project. He’s careful not to make it sound as if he’s attacking his own field. “The project does not aim to single out anybody,” he says. He notes that being unable to replicate a finding is not the same as discovering that the finding is false. It’s not always possible to match research methods precisely, and researchers performing replications can make mistakes, too.

But still. If it turns out that a sizable percentage (a quarter? half?) of the results published in these three top psychology journals can’t be replicated, it’s not going to reflect well on the field or on the researchers whose papers didn’t pass the test. In the long run, coming to grips with the scope of the problem is almost certainly beneficial for everyone. In the short run, it might get ugly.

Nosek told Science that a senior colleague warned him not to take this on “because psychology is under threat and this could make us look bad.” In a Google discussion group, one of the researchers involved in the project wrote that it was important to stay “on message” and portray the effort to the news media as “protecting our science, not tearing it down.”

The researchers point out, fairly, that it’s not just social psychology that has to deal with this issue. Recently, a scientist named C. Glenn Begley attempted to replicate 53 cancer studies he deemed landmark publications. He could only replicate six. Six! Last December I interviewed Christopher Chabris about his paper titled “Most Reported Genetic Associations with General Intelligence Are Probably False Positives.” Most!

A related new endeavour called Psych File Drawer allows psychologists to upload their attempts to replicate studies. So far nine studies have been uploaded and only three of them were successes.

Both Psych File Drawer and the Reproducibility Project were started in part because it’s hard to get a replication published even when a study cries out for one. For instance, Daryl J. Bem’s 2011 study that seemed to prove that extra-sensory perception is real — that subjects could, in a limited sense, predict the future — got no shortage of attention and seemed to turn everything we know about the world upside-down.

Yet when Stuart Ritchie, a doctoral student in psychology at the University of Edinburgh, and two colleagues failed to replicate his findings, they had a heck of a time getting the results into print (they finally did, just recently, after months of trying). It may not be a coincidence that the journal that published Bem’s findings, the Journal of Personality and Social Psychology, is one of the three selected for scrutiny.

Continued in article

Jensen Comment

Scale Risk
In accountics science such a "Reproducibility Project" would be much more problematic except in behavioral accounting research. This is because accountics scientists generally buy rather than generate their own data (Zoe-Vonna Palmrose is an exception). The problem with purchased data, such as CRSP, Compustat, and AuditAnalytics data, is that it's virtually impossible to generate alternate data sets, and if there are hidden serious errors in the data they can unknowingly wipe out thousands of accountics science publications all at once --- what we might call a "scale risk."

Assumptions Risk
A second problem in accounting and finance research is that researchers tend to rely upon the same models over and over again. When serious flaws were discovered in a model like CAPM, it not only raised doubts about thousands of past studies, it also forced accountics and finance researchers to choose whether or not to change their CAPM habits in the future. Accountics researchers who generally look for an easy way out blindly continued to use CAPM, in conspiracy with journal referees and editors who silently agreed to ignore CAPM problems and the limitations of its assumptions about efficiency in capital markets ---
http://www.trinity.edu/rjensen/Theory01.htm#EMH
We might call this an "assumptions risk."

Hence I do not anticipate that there will ever be a Reproducibility Project in accountics science. Horrors. Accountics scientists might not continue to be the highest paid faculty on their respective campuses, and accounting doctoral programs would not know how to proceed if they had to start focusing on accounting rather than econometrics.

Bob Jensen's threads on replication and other forms of validity checking ---
http://www.trinity.edu/rjensen/TheoryTAR.htm


Thomas Kuhn --- http://en.wikipedia.org/wiki/Thomas_Kuhn

On its 50th anniversary, Thomas Kuhn’s "The Structure of Scientific Revolutions" remains not only revolutionary but controversial.
"Shift Happens," David Weinberger, The Chronicle Review, April 22, 2012 ---
http://chronicle.com/article/Shift-Happens/131580/

April 24, 2012 reply from Jagdish Gangolly

Bob,

A more thoughtful analysis of Kuhn is at the Stanford Encyclopedia of Philosophy. This is one of the best resources apart from the Principia Cybernetika ( http://pespmc1.vub.ac.be/ ).

http://plato.stanford.edu/entries/thomas-kuhn/ 

Regards,

Jagdish

 

April 24, 2012

Excellent article. It omits one aspect of Kuhn's personal life (probably because the author thought it inconsequential). Apparently Kuhn liked to relax by riding roller coasters. In a way, that's a neat metaphor for the impact of his work.

Thanks Bob.

Roger

Roger Collins
TRU School of Business & Economics

April 24, 2012 message from Zane Swanson

One of the unintended consequences of a paradigm shift may have meaning for the replication discussion which has occurred on this list.  Consider the relevance of the replications when a paradigm shifts.  The change permits an examination of replications pre and post the paradigm shift of key attributes.  In accounting, one paradigm shift is arguably the change from historical to fair value.  For those looking for a replication reason of being, it might be worthwhile to compare replication contributions before and after the historical to fair value changes.

  In other words, when the prevailing view was that “the world is flat” … the replication “evidence” appeared to support it. But, when the paradigm shifted to “the world is round”, the replication evidence changed also.  So, what is the value of replications and do they matter?  Perhaps, the replications have to be novel in some way to be meaningful.

Zane Swanson

www.askaref.com accounting dictionary for mobile devices

 

April 25, 2012 reply from Bob Jensen

Kuhn wrote of science that "In a science, on the other hand, a paradigm is rarely an object for replication. Instead like a judicial decision in the common law, it is an object for further articulation and specification under new and more stringent conditions." This is the key to Kuhn's importance in the development of law and science for children's law. He did seek links between the two fields of knowledge and he by this insight suggested how the fields might work together ...
Michael Edmund Donnelly, ISBN 978-0-8204-1385 --- Click Here
http://books.google.com/books?id=rGKEN11r-9UC&pg=PA23&lpg=PA23&dq=%22Kuhn%22+AND+%22Replication%22+AND+%22Revolution%22&source=bl&ots=RDDBr9VBWt&sig=htGlcxqtX9muYqrn3D4ajnE0jF0&hl=en&sa=X&ei=F9WXT7rFGYiAgweKoLnrBg&ved=0CCoQ6AEwAg#v=onepage&q=%22Kuhn%22%20AND%20%22Replication%22%20AND%20%22Revolution%22&f=false

My question, Zane, is whether historical cost (HC) accounting versus fair value (FV) accounting is truly a paradigm shift. For centuries the two paradigms have worked in tandem for different purposes, where FV is used by the law for personal estates and non-going concerns, and HC accounting has never been a pure paradigm for any accounting in the real world. Due to conservatism and other factors, going-concern accounting has always been a mixed model of historical cost modified in selected instances for fair value, as in the case of lower-of-cost-or-market (LCM) inventories.

I think Kuhn was thinking more in terms of monumental paradigm "revolutions" of a kind we have not really witnessed in accounting standards, which are more evolutionary than revolutionary.

My writings are at
574 Shields Against Validity Challenges in Plato's Cave ---
http://www.trinity.edu/rjensen/TheoryTAR.htm

Respectfully,
Bob Jensen


Biography of an Experiment --- http://www.haverford.edu/kinsc/boe/

Questions

  1. Apart from accountics science journals are there real science journals that refuse to publish replications?
  2. What are upwardly biased positive effects?
  3. What is the "decline" effect as research on a topic progresses?
  4. Why is scientific endeavor sometimes a victim of its own success?
  5. What is the “statistically significant but not clinically significant” problem?
    Jensen note: 
    I think this is a serious drawback of many accountics science published papers.

    In the past when invited to be a discussant, this is the first problem I look for in the paper assigned for me to discuss.
    This is a particular problem in capital markets events studies having very, very large sample sizes. Statistical significance is almost always assured when sample sizes are huge even when the clinical significance of small differences may be completely insignificant.

    An example:
    "Discussion of Foreign Currency Exposure of Multinational Firms: Accounting Measures and Market Valuation," by Robert E. Jensen,  Rutgers University at Camden, Camden, New Jersey, May 31, 1997. Research Conference on International Accounting and Related Issues,

 

"The Value of Replication," by Steven Novella, Science-Based Medicine, June 15, 2011 ---
http://www.sciencebasedmedicine.org/index.php/the-value-of-replication/

Daryl Bem is a respected psychology researcher who decided to try his hand at parapsychology. Last year he published a series of studies in which he claimed evidence for precognition — for test subjects being influenced in their choices by future events. The studies were published in a peer-reviewed psychology journal, the Journal of Personality and Social Psychology. This created somewhat of a controversy, and was deemed by some to be a failure of peer-review.

While the study designs were clever (he simply reversed the direction of some standard psychology experiments, putting the influencing factor after the effect it was supposed to have), and the studies looked fine on paper, the research raised many red flags — particularly in Bem’s conclusions.

The episode has created the opportunity to debate some important aspects of the scientific literature. Eric-Jan Wagenmakers and others questioned the p-value approach to statistical analysis, arguing that it tends to over-call a positive result. They argue for a Bayesian analysis, and in their re-analysis of the Bem data they found the evidence for psi to be “weak to non-existent.” This is essentially the same approach to the data that we support as science-based medicine, and the Bem study is a good example of why. If the standard techniques are finding evidence for the impossible, then it is more likely that the techniques are flawed rather than the entire body of physical science is wrong.

Now another debate has been spawned by the same Bem research — that involving the role and value of exact replication. There have already been several attempts to replicate Bem’s research, with negative results: Galak and Nelson, Hadlaczky, and Circee, for example. Others, such as psychologist Richard Wiseman, have also replicated Bem’s research with negative results, but are running into trouble getting their studies published — and this is the crux of the new debate.

According to Wiseman, (as reported by The Psychologist, and discussed by Ben Goldacre) the Journal of Personality and Social Psychology turned down Wiseman’s submission on the grounds that they don’t publish replications, only “theory-advancing research.” In other words — strict replications are not of sufficient scientific value and interest to warrant space in their journal. Meanwhile other journals are reluctant to publish the replication because they feel the study should go in the journal that published the original research, which makes sense.

This episode illustrates potential problems with the  scientific literature. We often advocate at SBM that individual studies can never be that reliable — rather, we need to look at the pattern of research in the entire literature. That means, however, understanding how the scientific literature operates and how that may create spurious artifactual patterns.

For example, I recently wrote about the so-called “decline effect” — a tendency for effect sizes to shrink or “decline” as research on a phenomenon progresses. In fact, this was first observed in the psi research, as the effect is very dramatic there — so far, all psi effects have declined to non-existence. The decline effect is likely a result of artifacts in the literature. Journals are more inclined to publish dramatic positive studies (“theory-advancing research”), and are less interested in boring replications, or in initially negative research. A journal is unlikely to put out a press release that says, “We had this idea, and it turned out to be wrong, so never-mind.” Also, as research techniques and questions are honed, research results are likely to become closer to actual effect sizes, which means the effect of researcher bias will be diminished.

If the literature itself is biased toward positive studies, and dramatic studies, then this would further tend to exaggerate apparent phenomena — whether it is the effectiveness of a new drug or the existence of anomalous cognition. If journals are reluctant to publish replications, that might “hide the decline” (to borrow an inflammatory phrase) — meaning that perhaps there is even more of a decline effect if we consider unpublished negative replications. In medicine this would be critical to know — are we basing some treatments on a spurious signal in the noise of research.

There have already been proposals to create a registry of studies, before they are even conducted (specifically for human research), so that the totality of evidence will be transparent and known — not just the headline-grabbing positive studies, or the ones that meet the desires of the researchers or those funding the research. This proposal is primarily to deal with the issue of publication bias — the tendency not to publish negative studies.

Wiseman now makes the same call for a registry of trials before they even begin to avoid the bias of not publishing replications. In fact, he has taken it upon himself to create a registry of attempted replications of Bem’s research.

While this may be a specific fix for replications for Bem’s psi research — the bigger issues remain. Goldacre argues that there are systemic problems with how information filters down to professionals and the public. Reporting is highly biased toward dramatic positive studies, while retractions, corrections, and failed replications are quiet voices lost in the wilderness of information.

Most readers will already understand the critical value of replication to the process of science. Individual studies are plagued by flaws and biases. Most preliminary studies turn out to be wrong in the long run. We can really only arrive at a confident conclusion when a research paradigm produces reliable results in different labs with different researchers. Replication allows for biases and systematic errors to average out. Only if a phenomenon is real should it reliably replicate.

Further — the excuse by journals that they don’t have the space now seems quaint and obsolete, in the age of digital publishing. The scientific publishing industry needs a bit of an overhaul, to fully adapt to the possibilities of the digital age and to use this as an opportunity to fix some endemic problems. For example, journals can publish just abstracts of certain papers with the full articles available only online. Journals can use the extra space made available by online publishing (whether online only or partially in print) to make dedicated room for negative studies and for exact replications (replications that also expand the research are easier to publish). Databases and reviews of such studies can also make it as easy to find and access negative studies and replications as it is the more dramatic studies that tend to grab headlines.

Conclusion

The scientific endeavor is now a victim of its own success, in that research is producing a tsunami of information. The modern challenge is to sort through this information in a systematic way so that we can find the real patterns in the evidence and reach reliable conclusions on specific questions. The present system has not fully adapted to this volume of information, and there remain obsolete practices that produce spurious apparent patterns in the research. These fake patterns of evidence tend to be biased toward the false positive — falsely concluding that there is an effect when there really isn’t — or at least in exaggerating effects.

These artifactual problems with the literature as a whole combine with the statistical flaws in relying on the p-value, which tends to over-call positive results as well. This problem can be fixed by moving to a more Bayesian approach (considering prior probability).

All of this is happening at a time when prior probability (scientific plausibility) is being given less attention than it should, in that highly implausible notions are being seriously entertained in the peer-reviewed literature. Bem’s psi research is an excellent example, but we deal with many other examples frequently at SBM, such as homeopathy and acupuncture. Current statistical methods and publication biases are not equipped to deal with the results of research into highly implausible claims. The result is an excess of false-positive studies in the literature — a residue that is then used to justify still more research into highly implausible ideas. These ideas can never quite reach the critical mass of evidence to be generally accepted as real, but they do generate enough noise to confuse the public and regulators, and to create an endless treadmill of still more research.

The bright spot is that highly implausible research has helped to highlight some of these flaws in the literature. Now all we have to do is fix them.
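Jensen note:
The back-of-the-envelope calculation below (a Python sketch; the power, false-positive rate, and prior probabilities are hypothetical, not estimates from the Bem studies or from any published re-analysis) illustrates Novella's point about prior probability. Treat a single "statistically significant" study as a diagnostic test with 80% power and a 5% false-positive rate, and ask how probable the hypothesis is after one positive study.

    # Hypothetical numbers for illustration only.
    def posterior_probability(prior, power=0.80, alpha=0.05):
        # Bayes' rule: P(hypothesis true | one "significant" study)
        true_positive = power * prior
        false_positive = alpha * (1.0 - prior)
        return true_positive / (true_positive + false_positive)

    for prior in (0.5, 0.05, 0.001):  # from plausible claim to highly implausible claim
        print(f"prior = {prior:5.3f}  ->  posterior after one positive study = "
              f"{posterior_probability(prior):.3f}")

A claim that was a coin flip beforehand ends up about 94% probable after one positive study, but a one-in-a-thousand long shot such as precognition ends up at roughly 2%, even though the study was "significant" at p < .05.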

Jensen Recommendation
Read all or at least some of the 58 comments following this article

daedalus2u comments:
Sorry if this sounds harsh, it is meant to be harsh. What this episode shows is that the journal JPSP is not a serious scientific journal. It is fluff, it is pseudoscience and entertainment, not a journal worth publishing in, and not a journal worth reading, not a journal that has scientific or intellectual integrity.

“Professor Eliot Smith, the editor of JPSP (Attitudes and Social Cognition section) told us that the journal has a long-standing policy of not publishing simple replications. ‘This policy is not new and is not unique to this journal,’ he said. ‘The policy applies whether the replication is successful or unsuccessful; indeed, I have rejected a paper reporting a successful replication of Bem’s work [as well as the negative replication by Ritchie et al].’ Smith added that it would be impractical to suspend the journal’s long-standing policy precisely because of the media attention that Bem’s work had attracted. ‘We would be flooded with such manuscripts and would not have page space for anything else,’ he said.”

Scientific journals have an obligation to the scientific community that sends papers to them to publish to be honest and fair brokers of science. Arbitrarily rejecting studies that directly bear on extremely controversial prior work they have published, simply because it is a “replication”, is an abdication of their responsibility to be a fair broker of science and an honest record of the scientific literature. It conveniently lets them publish crap with poor peer review and then never allow the crap work to be responded to.

If the editor considers it impractical to publish any work that is a replication because they would then have no space for anything else, then they are receiving too many manuscripts. If the editor needs to apply a mindless triage of “no replications”, then the editor is in over his head and is overwhelmed. The journal should either revise the policy and replace the overwhelmed editor, or real scientists should stop considering the journal a suitable place to publish.

. . .

Harriet Hall comments
A close relative of the “significant but trivial” problem is the “statistically significant but not clinically significant” problem. Vitamin B supplements lower blood homocysteine levels by a statistically significant amount, but they don’t decrease the incidence of heart attacks. We must ask if a statistically significant finding actually represents a clinical benefit for patient outcome, if it is POEMS – patient-oriented evidence that matters.

 

"Alternative Treatments for ADHD Alternative Treatments for ADHD: The Scientific Status," David Rabiner, Attention Deficit Disorder Resources, 1998 ---
http://www.addresources.org/?q=node/279 

Based on his review of the existing research literature, Dr. Arnold rated the alternative treatments presented on a 0-6 scale. It is important to understand this scale before presenting the treatments. (Note: this is one person's opinion based on the existing data; other experts could certainly disagree.) The scale he used is presented below:

Only one treatment reviewed received a rating of 5. Dr. Arnold concluded that there is convincing scientific evidence that some children who display

Continued in article

"If you can write it up and get it published you're not even thinking of reproducibility," said Ken Kaitin, director of the Tufts Center for the Study of Drug Development. "You make an observation and move on. There is no incentive to find out it was wrong."
April 14, 2012 reply from Richard Sansing

Inability to replicate may be a problem in other fields as well.

http://www.vision.org/visionmedia/article.aspx?id=54180

Richard Sansing

 

Bob Jensen's threads on replication in accountics science ---
http://www.trinity.edu/rjensen/TheoryTAR.htm


"The Baloney Detection Kit: A 10-Point Checklist for Science Literacy," by Maria Popova, Brain Pickings, March 16, 2012 --- Click Here
http://www.brainpickings.org/index.php/2012/03/16/baloney-detection-kit/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+brainpickings%2Frss+%28Brain+Pickings%29&utm_content=Google+Reader

Video Not Included Here

The above sentiment in particular echoes this beautiful definition of science as “systematic wonder” driven by an osmosis of empirical rigor and imaginative whimsy.

The complete checklist:

  1. How reliable is the source of the claim?
  2. Does the source make similar claims?
  3. Have the claims been verified by somebody else?
  4. Does this fit with the way the world works?
  5. Has anyone tried to disprove the claim?
  6. Where does the preponderance of evidence point?
  7. Is the claimant playing by the rules of science?
  8. Is the claimant providing positive evidence?
  9. Does the new theory account for as many phenomena as the old theory?
  10. Are personal beliefs driving the claim?

The charming animation comes from UK studio Pew 36. The Richard Dawkins Foundation has a free iTunes podcast, covering topics as diverse as theory of mind, insurance policy, and Socrates’ “unconsidered life.”


Possibly the Worst Academic Scandal in Past 100 Years:  Deception at Duke
The Loose Ethics of Co-authorship of Research in Academe

In general we don't allow faculty to have publications ghost written for tenure and performance evaluations. However, the rules are very loose regarding co-author division of duties. A faculty member can do all of the research but pass along all the writing to a co-author except when co-authoring is not allowed such as in the writing of dissertations.

In my opinion the rules are too loose regarding co-authorship. Probably the most common abuse in the current "publish or perish" environment in academe is the partnering of two or more researchers to share co-authorships when their actual participation rate in the research and writing of most of the manuscripts is very small, maybe less than 10%. The typical partnering arrangement is for an author to take the lead on one research project while playing only a small role in the other research projects.
Gaming for Tenure as an Accounting Professor ---
http://www.trinity.edu/rjensen/TheoryTenure.htm
(with a reply about tenure publication point systems from Linda Kidwell)

Another common abuse, in my opinion, is where a senior faculty member with a stellar reputation lends his/her name to an article written and researched almost entirely by a lesser-known colleague or graduate student. The main author may agree to this "co-authorship" when the senior co-author's name on the paper improves the chances for publication in a prestigious book or journal.

This is what happened in a sense in what is becoming the most notorious academic fraud in the history of the world. At Duke University a famous cancer researcher co-authored research that was published in the most prestigious science and medicine journals in the world. The senior faculty member of high repute is now apologizing to the world for being a part of a fraud where his colleague fabricated a significant portion of the data to make it "come out right" instead of the way it actually turned out.

What is interesting is to learn how super-knowledgeable researchers at the M.D. Anderson Cancer Center in Houston detected this fraud and notified the Duke University science researchers of their questions about the data. Duke appears to have resisted coming out with the truth way too long by science ethics standards and even continued to promise miraculous cures to 100 Stage Four cancer patients who underwent the "Duke University" cancer treatments that turned out not to be miraculous at all. Now Duke University is exposed to quack-medicine lawsuits filed by families of the deceased cancer patients who were promised phony 80% cure rates.

The above Duke University scandal was the headline module in the February 12, 2012 edition of CBS Sixty Minutes. What an eye-opening show about science research standards and frauds ---
Deception at Duke (Sixty Minutes Video) --- http://www.cbsnews.com/8301-18560_162-57376073/deception-at-duke/

Next comes the question of whether college administrators operate under different publishing and speaking ethics vis-à-vis their faculty
"Faking It for the Dean," by Carl Elliott, Chronicle of Higher Education, February 7, 2012 ---
http://chronicle.com/blogs/brainstorm/says-who/43843?sid=cr&utm_source=cr&utm_medium=en

Added Jensen Comment
I've no objection to "ghost writing" of interview remarks as long as the ghost writer is given full credit for doing the writing itself.

I also think there is a difference between speeches versus publications with respect to citations. How awkward it would be if every commencement speaker had to read the reference citation for each remark in the speech. On the other hand, I think the speaker should announce at the beginning and end that some of the points made in the speech originated from other sources and that references will be provided in writing upon request.

Bob Jensen's threads on professors who let students cheat ---
http://www.trinity.edu/rjensen/Plagiarism.htm#RebeccaHoward

Bob Jensen's threads on professors who cheat
http://www.trinity.edu/rjensen/Plagiarism.htm#ProfessorsWhoPlagiarize


Steven J. Kachelmeier's July 2011 Editorial as Departing Senior Editor of The Accounting Review (TAR)

"Introduction to a Forum on Internal Control Reporting and Corporate Debt," by Steven J. Kachelmeier, The Accounting Review, Vol. 86, No. 4, July 2011 pp. 1129–113 (not free online) ---
http://aaapubs.aip.org/getpdf/servlet/GetPDFServlet?filetype=pdf&id=ACRVAS000086000004001129000001&idtype=cvips&prog=normal

One of the more surprising things I have learned from my experience as Senior Editor of The Accounting Review is just how often a ‘‘hot topic’’ generates multiple submissions that pursue similar research objectives. Though one might view such situations as enhancing the credibility of research findings through the independent efforts of multiple research teams, they often result in unfavorable reactions from reviewers who question the incremental contribution of a subsequent study that does not materially advance the findings already documented in a previous study, even if the two (or more) efforts were initiated independently and pursued more or less concurrently. I understand the reason for a high incremental contribution standard in a top-tier journal that faces capacity constraints and deals with about 500 new submissions per year. Nevertheless, I must admit that I sometimes feel bad writing a rejection letter on a good study, just because some other research team beat the authors to press with similar conclusions documented a few months earlier. Research, it seems, operates in a highly competitive arena.

Fortunately, from time to time, we receive related but still distinct submissions that, in combination, capture synergies (and reviewer support) by viewing a broad research question from different perspectives. The two articles comprising this issue’s forum are a classic case in point. Though both studies reach the same basic conclusion that material weaknesses in internal controls over financial reporting result in negative repercussions for the cost of debt financing, Dhaliwal et al. (2011) do so by examining the public market for corporate debt instruments, whereas Kim et al. (2011) examine private debt contracting with financial institutions. These different perspectives enable the two research teams to pursue different secondary analyses, such as Dhaliwal et al.’s examination of the sensitivity of the reported findings to bank monitoring and Kim et al.’s examination of debt covenants.

Both studies also overlap with yet a third recent effort in this arena, recently published in the Journal of Accounting Research by Costello and Wittenberg-Moerman (2011). Although the overall ‘‘punch line’’ is similar in all three studies (material internal control weaknesses result in a higher cost of debt), I am intrigued by a ‘‘mini-debate’’ of sorts on the different conclusions reached by Costello and Wittenberg-Moerman (2011) and by Kim et al. (2011) for the effect of material weaknesses on debt covenants. Specifically, Costello and Wittenberg-Moerman (2011, 116) find that ‘‘serious, fraud-related weaknesses result in a significant decrease in financial covenants,’’ presumably because banks substitute more direct protections in such instances, whereas Kim et al. (2011) assert from their cross-sectional design that company-level material weaknesses are associated with more financial covenants in debt contracting.

In reconciling these conflicting findings, Costello and Wittenberg-Moerman (2011, 116) attribute the Kim et al. (2011) result to underlying ‘‘differences in more fundamental firm characteristics, such as riskiness and information opacity,’’ given that, cross-sectionally, material weakness firms have a greater number of financial covenants than do non-material weakness firms even before the disclosure of the material weakness in internal controls. Kim et al. (2011) counter that they control for risk and opacity characteristics, and that advance leakage of internal control problems could still result in a debt covenant effect due to internal controls rather than underlying firm characteristics. Kim et al. (2011) also report from a supplemental change analysis that, comparing the pre- and post-SOX 404 periods, the number of debt covenants falls for companies both with and without material weaknesses in internal controls, raising the question of whether the Costello and Wittenberg-Moerman (2011) finding reflects a reaction to the disclosures or simply a more general trend of a declining number of debt covenants affecting all firms around that time period. I urge readers to take a look at both articles, along with Dhaliwal et al. (2011), and draw their own conclusions. Indeed, I believe that these sorts . . .

Continued in article

Jensen Comment
Without admitting to it, I think Steve has been embarrassed, along with many other accountics researchers, about the virtual absence of validation and replication of accounting science (accountics) research studies over the past five decades. For the most part, accountics articles are either ignored or accepted as truth without validation. Behavioral and capital markets empirical studies are rarely (ever?) replicated. Analytical studies make tremendous leaps of faith in terms of underlying assumptions that are rarely challenged (such as the assumption of equations depicting utility functions of corporations).

Accounting science thereby has become a pseudo science where highly paid accountics professor referees are protecting each others' butts ---
"574 Shields Against Validity Challenges in Plato's Cave" --- http://www.trinity.edu/rjensen/TheoryTAR.htm
The above link contains Steve's rejoinders on the replication debate.

In the above editorial he's telling us that there is a middle ground for validation of accountics studies. When researchers independently come to similar conclusions using different data sets and different quantitative analyses they are in a sense validating each others' work without truly replicating each others' work.

I agree with Steve on this, but I would also argue that this type of "validation" is too little too late relative to genuine science, where replication and true validation are essential to the very definition of science. The type of independent but related research that Steve is discussing above is too infrequent and haphazard to fall into the realm of validation and replication.

When's the last time you witnessed a TAR author criticizing the research of another TAR author (TAR does not publish critical commentaries)?
Are TAR articles really all that above criticism?
Even though I admire Steve's scholarship, dedication, and sacrifice, I hope future TAR editors will work harder at turning accountics research into real science!

What Went Wrong With Accountics Research? --- http://www.trinity.edu/rjensen/theory01.htm#WhatWentWrong

September 10, 2011 reply from Bob Jensen (known on the AECM as Calvin of Calvin and Hobbes)
This is a reply to Steve Kachelmeier, former Senior Editor of The Accounting Review (TAR)

I agree Steve and will not bait you further in a game of Calvin Ball.

It is, however, strange to me that exacting replication (reproducibility)  is such a necessary condition to almost all of real science empiricism and such a small part of accountics science empiricism.

My only answer to this is that the findings themselves in science seem to have greater importance to both the scientists interested in the findings and the outside world affected by those findings.
It seems to me that empirical findings that are not replicated with as much exactness as possible are little more than theories that have only been tested once, and we can never be sure that those tests were not faked or do not contain serious undetected errors for other reasons.
It is sad that the accountics science system really is not designed to guard against fakers and careless researchers who in a few instances probably get great performance evaluations for their hits in TAR, JAR, and JAE. It is doubly sad since internal controls play such an enormous role in our profession but not in our accountics science.

I thank you for being a noted accountics scientist who was willing to play Calvin Ball with me for a while. I want to stress that this is not really a game with me. I do want to make a difference in the maturation of accountics science into real science. Exacting replications in accountics science would be an enormous giant step in the real-science direction.

Allowing validity-questioning commentaries in TAR would be a smaller start in that direction but nevertheless a start. Yes I know that it was your 574 TAR referees who blocked the few commentaries that were submitted to TAR about validity questions. But the AAA Publications Committees and you as Senior Editor could've done more to encourage both submissions of more commentaries and submissions of more non-accountics research papers to TAR --- cases, field studies, history studies, AIS studies, and (horrors) normative research.

I would also like to bust the monopoly that accountics scientists have on accountancy doctoral programs. But I've repeated my arguments here far too often ---
http://www.trinity.edu/rjensen/Theory01.htm#DoctoralPrograms

In any case thanks for playing Calvin Ball with me. Paul Williams and Jagdish Gangolly played Calvin Ball with me for a while on an entirely different issue --- capitalism versus socialism versus bastardized versions of both that evolve in the real world.

Hopefully there's been some value added on the AECM in my games of Calvin Ball.

Even though my Calvin Ball opponents have walked off the field, I will continue to invite others to play against me on the AECM.

And I'm certain this will not be the end to my saying that accountics farmers are more interested in their tractors than their harvests. This may one day be my epitaph.

Respectfully,
Calvin

"574 Shields Against Validity Challenges in Plato's Cave" --- See Below


"Psychology’s Treacherous Trio: Confirmation Bias, Cognitive Dissonance, and Motivated Reasoning," by sammcnerney, Why We Reason, September 7, 2011 --- Click Here
http://whywereason.wordpress.com/2011/09/07/psychologys-treacherous-trio-confirmation-bias-cognitive-dissonance-and-motivated-reasoning/


Regression Towards the Mean --- http://en.wikipedia.org/wiki/Regression_to_the_mean

"The Truth Wears Off Is there something wrong with the scientific method?"  by Johah Lehrer, The New Yorker, December 12, 2010 ---
http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer

Jensen Comment
This article deals with instances where scientists honestly cannot replicate earlier experiments including their own experiments.
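Jensen note:
The minimal simulation below (in Python, using only the NumPy library; the numbers are hypothetical and it is not a model of any particular literature) shows how publication filtering alone can manufacture a "decline effect." Many labs study an effect whose true size is zero, journals "publish" only the results that clear the conventional significance threshold, and the published studies are then replicated exactly with fresh data.

    import numpy as np

    rng = np.random.default_rng(0)

    true_effect = 0.0          # hypothetical: the phenomenon does not exist at all
    n, n_studies = 50, 10_000  # 50 observations per study, 10,000 independent studies

    # Each original study estimates the effect as the mean of n noisy observations
    original = rng.normal(true_effect, 1.0, size=(n_studies, n)).mean(axis=1)
    se = 1.0 / np.sqrt(n)

    # Journals "publish" only the dramatic positive results (z > 1.96)
    published = original[original / se > 1.96]

    # Exact replications of the published studies: same design, fresh data
    replications = rng.normal(true_effect, 1.0, size=(len(published), n)).mean(axis=1)

    print(f"mean effect in published studies:   {published.mean():.3f}")
    print(f"mean effect in exact replications:  {replications.mean():.3f}")

The published studies average an apparent effect of roughly a third of a standard deviation; the exact replications average essentially zero. Nothing "wore off"; the literature was simply filtered, which is one candidate explanation for the decline effect Lehrer describes.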


"Milgram's obedience studies - not about obedience after all?" Research Digest, February 2011 --- Click Here
http://bps-research-digest.blogspot.com/2011/02/milgrams-obedience-studies-not-about.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+BpsResearchDigest+%28BPS+Research+Digest%29


"Success Comes From Better Data, Not Better Analysis," by Daryl Morey, Harvard Business Review Blog, August 8, 2011 --- Click Here
http://blogs.hbr.org/cs/2011/08/success_comes_from_better_data.html?referral=00563&cm_mmc=email-_-newsletter-_-daily_alert-_-alert_date&utm_source=newsletter_daily_alert&utm_medium=email&utm_campaign=alert_date

Jensen Comment
I think accountics researchers often use purchased databases (e.g., Compustat, AuditAnalytics, and CRSP) without questioning the possibility of data errors and limitations. For example, we recently took a look at the accounting litigation database of AuditAnalytics and found many serious omissions.

These databases are used by multiple accountics researchers, thereby compounding the felony.

Bob Jensen's threads on what went wrong with accountics research are at
http://www.trinity.edu/rjensen/theory01.htm#WhatWentWrong


A Mutation in the Evolution of Accountics Science Toward Real Science:  A Commentary Published in TAR in May 2012

The publication of the Moser and Martin commentary in the May 2012 edition of TAR is a mutation of progress in accountics science evolution. We owe a big thank you to both TAR Senior Editors Steve Kachelmeier and Harry Evans.

Accountics is the mathematical science of values.
Charles Sprague [1887] as quoted by McMillan [1998, p. 1]
http://www.trinity.edu/rjensen/395wpTAR/Web/TAR395wp.htm#_msocom_1

 

A small step for accountics science, A giant step for accounting

Accountics science made a giant step in its evolution toward becoming a real science when it published a commentary in The Accounting Review (TAR) in the May 2012 edition.

""A Broader Perspective on Corporate Social Responsibility Research in Accounting," by Donald V. Moser and Patrick R. Martin, The Accounting Review, Vol. 87, May 2012, pp. 797-806 ---
http://aaajournals.org/doi/full/10.2308/accr-10257

We appreciate the helpful comments of Ramji Balakrishnan, Harry Evans, Lynn Hannan, Steve Kachelmeier, Geoff Sprinkle, Greg Waymire, Michael Williamson, and the authors of the two Forum papers on earlier versions of this commentary. Although we have benefited significantly from such comments, the views expressed are our own and do not necessarily represent the views of others who have kindly shared their insights with us.

. . .

In this commentary we suggest that CSR research in accounting could benefit significantly if accounting researchers were more open to (1) the possibility that CSR activities and related disclosures are driven by both shareholders and non-shareholder constituents, and (2) the use of experiments to answer important CSR questions that are difficult to answer with currently available archival data. We believe that adopting these suggestions will help accounting researchers obtain a more complete understanding of the motivations for corporate investments in CSR and the increasing prevalence of related disclosures.

Our two suggestions are closely related. Viewing CSR more broadly as being motivated by both shareholders and a broader group of stakeholders raises new and important questions that are unlikely to be studied by accounting researchers who maintain the traditional perspective that firms only engage in CSR activities that maximize shareholder value. As discussed in this commentary, one example is that if CSR activities actually respond to the needs or demands of a broader set of stakeholders, it is more likely that some CSR investments are made at the expense of shareholders. Data limitations make it very difficult to address this and related issues in archival studies. In contrast, such issues can be addressed directly and effectively in experiments. Consequently, we believe that CSR research is an area in which integrating the findings from archival and experimental studies can be especially fruitful. The combination of findings from such studies is likely to provide a more complete understanding of the drivers and consequences of CSR activities and related disclosures. Providing such insights will help accounting researchers become more prominent players in CSR research. Our hope is that the current growing interest in CSR issues, as reflected in the two papers included in this Forum, represents a renewed effort to substantially advance CSR research in accounting.

 

Jensen Comment
There are still two disappointments for me in the evolution of accountics science into real science.


It's somewhat revealing to track how this Moser and Martin commentary found its way into TAR. You might begin by noting the reason former Senior Editor Steve Kachelmeier gave for the absence of commentaries in TAR (since 1998). In fairness, I was wrong to have asserted that Steve will not send a "commentary" out to TAR referees. His reply to me was as follows ---
http://www.trinity.edu/rjensen/TheoryTAR.htm

No, no, no! Once again, your characterization makes me out to be the dictator who decides the standards of when a comment gets in and when it doesn’t. The last sentence is especially bothersome regarding what “Steve tells me is a requisite for his allowing TAR to publish a comment.” I never said that, so please don’t put words in my mouth.

If I were to receive a comment of the “discussant” variety, as you describe, I would send it out for review to two reviewers in a manner 100% consistent with our stated policy on p. 388 of the January 2010 issue (have you read that policy?). If both reviewers or even the one independent reviewer returned favorable assessments, I would then strongly consider publishing it and would most likely do so. My observation, however, which you keep wanting to personalize as “my policy,” is that most peer reviewers, in my experience, want to see a meaningful incremental contribution. (Sorry for all the comma delimited clauses, but I need this to be precise.) Bottom line: Please don’t make it out to be the editor’s “policy” if it is a broader phenomenon of what the peer community wants to see. And the “peer community,” by the way, are regular professors from all varieties of backgrounds. I name 574 of them in the November 2009 issue.

Thus the reason given by Steve for no commentary having been published in TAR since 1998 is that the TAR referees rejected each and every commentary submitted since then. In the back of my mind, however, I always thought the Senior and Associate Editors of TAR could do more to encourage the publication of commentaries in TAR.

Thus it's interesting to track the evolution of the May 2012 Moser and Martin commentary published in TAR.

"A FORUM ON CORPORATE SOCIAL RESPONSIBILITY RESEARCH IN ACCOUNTING  Introduction," by John Harry Evans III (incoming Senior Editor of TAR),  The Accounting Review, Vol. 87, May 2012, pp. 721-722 ---
http://aaajournals.org/doi/full/10.2308/accr-10279

In July 2011, shortly after I began my term as Senior Editor of The Accounting Review, outgoing editor Steve Kachelmeier alerted me to an excellent opportunity. He and his co-editors (in particular, Jim Hunton) had conditionally accepted two manuscripts on the topic of corporate social responsibility (CSR), and the articles were scheduled to appear in the May 2012 issue of TAR. Steve suggested that I consider bundling the two articles as a “forum on corporate social responsibility research in accounting,” potentially with an introductory editorial or commentary.

Although I had never worked in the area of CSR research, I was aware of a long history of interest in CSR research among accounting scholars. In discussions with my colleague, Don Moser, who was conducting experiments on CSR topics with his doctoral student, Patrick Martin, I was struck by the potential for synergy in a forum that combined the two archival articles with a commentary by experimentalists (Don and Patrick). Because archival and experimental researchers face different constraints in terms of what they can observe and control, they tend to address different, but related, questions. The distinctive questions and answers in each approach can then provide useful challenges to researchers in the other, complementary camp. A commentary by Moser and Martin also offered the very practical advantage that, with Don and Patrick down the hall from me, it might be feasible to satisfy a very tight schedule calling for completing the commentary and coordinating it with the authors of the archival articles within two to three months.

The Moser and Martin (2012) commentary offers potential insight concerning how experiments can complement archival research such as the two fine studies in the forum by Dhaliwal et al. (2012) and by Kim et al. (2012). The two forum archival studies document that shareholders have reason to care about CSR disclosure because of its association with lower analyst forecast errors and reduced earnings management. These are important findings about what drives firms' CSR activities and disclosures, and these results have natural ties to traditional financial accounting archival research issues.

Like the two archival studies, the Moser and Martin (2012) commentary focuses on the positive question of what drives CSR activities and disclosures in practice as opposed to normative or legal questions about what should drive these decisions. However, the Moser and Martin approach to addressing the positive question begins by taking a broader perspective that allows for the possibility that firms may potentially consider the demands of stakeholders other than shareholders in making decisions about CSR activities and disclosures. They then argue that experiments have certain advantages in understanding CSR phenomena given this broader environment. For example, in a tightly controlled environment in which future economic returns are known for certain and individual reputation can play no role, would managers engage in CSR activities that do not maximize profits and what information would they disclose about such activities? Also, how would investors respond to such disclosures?

 

Jensen Comment
And thus we have a mutation in the evolution of "positive" commentaries in TAR with the Senior TAR editor being a driving force in that mutation. However, in accountics science we have a long way to go in terms of publishing critical commentaries and performing replications of accountics science research ---
http://www.trinity.edu/rjensen/TheoryTAR.htm#Replication
As Joni Young stated, there's still "an absence of dissent" in accountics science.

We also have a long way to go in the evolution of accountics science in that accountics scientists do very little to communicate with accounting teachers and practitioners ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm 

But the publication of the Moser and Martin commentary in the May 2012 edition of TAR is a mutation of progress in accountics science evolution. We owe a big thank you to both TAR Senior Editors Steve Kachelmeier and Harry Evans.

 

Bob Jensen's threads on Corporate Social Responsibility research and Triple-Bottom (Social, Environmental, Human Resource) Reporting ---
http://www.trinity.edu/rjensen/Theory02.htm#TripleBottom


Fortunately this sort of public dispute has never happened in accountics science where professors just don't steal each others' ideas or insultingly review each others' work in public. Accountics science is a polite science ---
http://www.trinity.edu/rjensen/TheoryTAR.htm

"Publicizing (Alleged) Plagiarism," by Alexandra Tilsley, Inside Higher Ed, October 22, 2012 ---
http://www.insidehighered.com/news/2012/10/22/berkeley-launches-plagiarism-investigation-light-public-nature-complaints

The varied effects of the Internet age on the world of academic research are well-documented, but a website devoted solely to highlighting one researcher’s alleged plagiarism has put a new spin on the matter.

The University of California at Berkeley has begun an investigation into allegations of plagiarism in professor Terrence Deacon’s book, Incomplete Nature: How Mind Emerged from Matter, largely in response to the website created about the supposed problems with Deacon’s book. In Incomplete Nature, Deacon, the chair of Berkeley's anthropology department, melds science and philosophy to explain how mental processes, the stuff that makes us human, emerged from the physical world.

The allegations are not of direct, copy-and-paste plagiarism, but of using ideas without proper citation. In a June review in The New York Review of Books, Colin McGinn, a professor of philosophy at the University of Miami, writes that ideas in Deacon’s book draw heavily on ideas in works by Alicia Juarrero, professor emerita of philosophy at Prince George’s Community College who earned her Ph.D. at Miami, and Evan Thompson, a philosophy professor at the University of Toronto, though neither scholar is cited, as Thompson also notes in his own review in Nature.

McGinn writes: “I have no way of knowing whether Deacon was aware of these books when he was writing his: if he was, he should have cited them; if he was not, a simple literature search would have easily turned them up (both appear from prominent presses).”

That is an argument Juarrero and her colleagues Carl Rubino and Michael Lissack have pursued forcefully and publicly. Rubino, a classics professor at Hamilton College, published a book with Juarrero that he claims Deacon misappropriated, and that book was published by Lissack’s Institute for the Study of Coherence and Emergence. Juarrero, who declined to comment for this article because of the continuing investigation, is also a fellow of the institute.

Continued in article

Bob Jensen's threads on professors who cheat  ---
http://www.trinity.edu/rjensen/Plagiarism.htm#ProfessorsWhoPlagiarize

 


Consensus Seeking in Real Science Versus Accountics Science

Question
Are there any illustrations of consensus seeking in accountics like consensus seeking in the real sciences, e.g., consensus seeking on climate change, consensus seeking on pollution impacts, and consensus seeking on the implosion of the Twin Towers on 9/11 (whether the towers had to be laced with explosives in advance to bring them down)?

For example, some scientists predicted environmental disaster when Saddam set virtually all the oil wells ablaze near the end of the Gulf War. But there was no consensus among the experts, and those that made dire predictions ultimately turned out wrong.

Noam Chomsky Schools 9/11 Truther; Explains the Science of Making Credible Claims ---
http://www.openculture.com/2013/10/noam-chomsky-derides-911-truthers.html

Jensen Comment
I can't recall any instances where high numbers of accountics scientists were polled with respect to any of their research findings. Are there any good illustrations that I missed?

In the real sciences consensus is sometimes sought when scientists cannot agree on replication outcomes or where replication is impractical or impossible based upon theory that has not yet been convincingly tested. I suspect consensus seeking is more common in the natural sciences than in the social sciences, with economics being somewhat of an exception. Polls among economists are somewhat common, especially regarding economic forecasts.

The closest thing to accounting consensus seeking might take place among expert witnesses in court, but this is a poor example since consensus may only be sought among a handful of experts. In science and engineering consensus seeking takes place among hundreds or even thousands of experts.


Over Reliance Upon Public Databases and Failure to Error Check

DATABASE BIASES AND ERRORS
My casual studies of accountics science articles suggest that over 90% of those studies rely exclusively on one or more public databases whenever the studies use data. I find little accountics science research into the biases and errors of those databases. Here's a short listing of research into these biases and errors, some of which was published by accountics scientists ---
 

DATABASE BIASES AND ERRORS ---
http://www.kellogg.northwestern.edu/rc/crsp-cstat-references.htm

This page provides references for articles that study specific aspects of CRSP, Compustat and other popular sources of data used by researchers at Kellogg. If you know of any additional references, please e-mail researchcomputing-help@kellogg.northwestern.edu.

What went wrong with accountics science?
http://www.trinity.edu/rjensen/Theory01.htm#WhatWentWrong

 

In 2013 I scanned all six issues of The Accounting Review (TAR) published in 2013 to detect what public databases were used (usually at relatively heavy fees for a system of databases) in the 72 articles published January-November 2013 in TAR. The outcomes were as follows:

  42   35.3%   Miscellaneous public databases used infrequently
  33   27.7%   Compustat --- http://en.wikipedia.org/wiki/Compustat
  21   17.6%   CRSP --- http://en.wikipedia.org/wiki/Center_for_Research_in_Security_Prices
  17   14.3%   Datastream --- http://en.wikipedia.org/wiki/Thomson_Financial
   6    5.0%   Audit Analytics --- http://www.auditanalytics.com/
 119  100.0%   Total purchased public database uses
  10           Non-public databases (usually experiments) and mathematical analysis studies with no data
Note that there are subsets of databases within databases like Compustat, CRSP, and Datastream.

Many of these 72 articles used more than one public database, and when the Compustat and CRSP joint database was used I counted one for the Compustat Database and one for the CRSP Database. Most of the non-public databases are behavioral experiments using students as surrogates for real-world decision makers.

My opinion is that 2013 is a typical year in which over 92% of the articles published in TAR used purchased public databases.

My theory is that accountics science gained dominance in accounting research, especially in North American accounting Ph.D. programs, because it abdicated responsibility:

1.     Most accountics scientists buy data, thereby avoiding the greater cost and drudgery of collecting data.

 

2.     By relying so heavily on purchased data, accountics scientists abdicate responsibility for errors in the data.

 

3.     Since adding missing variable data to the public database is generally not at all practical in purchased databases, accountics scientists have an excuse for not collecting missing variable data.

The small subset of accountics scientists who do conduct behavioral experiments generally use students as surrogates for real-world decision makers. In addition, the tasks are hypothetical and artificial, such that extrapolations concerning real-world behavior are dubious to say the least.

 

The good news is that most of these public databases are enormous, thereby allowing for huge samples for which statistical inference is probably superfluous. For very large samples even minuscule differences are statistically significant in hypothesis testing, making statistical inference nearly superfluous:

The Cult of Statistical Significance: How Standard Error Costs Us Jobs, Justice, and Lives ---
http://www.cs.trinity.edu/~rjensen/temp/DeirdreMcCloskey/StatisticalSignificance01.htm

Association is Not Causation
The bad news is that the accountics scientists who rely only on public databases are limited to what is available in those databases. It is much more common in the real sciences for scientists to collect their own data in labs and field studies. Accountics scientists tend to model data but not collect their own data (with some exceptions, especially in behavioral experiments and simulation games). As a result real scientists can often make causal inferences whereas accountics scientists can only make correlation or other types of association inferences leaving causal analysis to speculation.

Of course real scientists many times are forced to work with public databases like climate and census databases. But they are more obsessed with collecting their own data that go deeper into root causes. This also leads to more risk of data fabrication and the need for independent replication efforts (often before the original results are even published) ---
http://www.trinity.edu/rjensen/Plagiarism.htm#ProfessorsWhoPlagiarize

Note the quotation below from veteran accountics science researchers:
Title:  "Fair Value Accounting for Financial Instruments: Does It Improve the Association between Bank Leverage and Credit Risk?"
Authors:  Elizabeth Blankespoor, Thomas J. Linsmeier, Kathy R. Petroni and Catherine Shakespeare
Source:  The Accounting Review, July 2013, pp. 1143-1178
http://aaajournals.org/doi/full/10.2308/accr-50419

"We test for association, not causation."

Bob Jensen discusses the inability to search for causes in the following reference
"How Non-Scientific Granulation Can Improve Scientific Accountics"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsGranulationCurrentDraft.pdf:

Potential Database Errors
Inability to search for causes is only one of the problems of total reliance on public databases rather than databases collected by researchers themselves. The other potentially huge problem is failure to test for errors in the public databases. This is an enormous problem because accountics science public databases are exceptionally large with tens of thousands of companies from which thousands of companies are sampled by accountics scientists. It's sometimes possible to randomly test for database errors but doing so is tedious and not likely to end up with corrections that are very useful for large samples.

What I note is that accountics scientists these days overlook potential problems of errors in their databases. In the past there were some efforts to check for errors, but I don't know of recent attempts. This is why I'm asking AECMers to cite where accountics scientists recently tested for errors in their public databases.
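For what it's worth, here is a sketch (in Python with the pandas library) of the kind of random spot check I have in mind. It assumes the researcher has an independently hand-collected benchmark, for example values keyed directly from the original 10-K filings, to compare against; the file names and column names below are hypothetical.

    import pandas as pd

    # Hypothetical files and column names, for illustration only.
    purchased = pd.read_csv("compustat_slice.csv")     # firm_id, year, total_assets
    benchmark = pd.read_csv("hand_collected_10k.csv")  # same fields, keyed from the 10-K filings

    # Draw a random sample of firm-years from the purchased database to audit
    sample = purchased.sample(n=200, random_state=42)

    merged = sample.merge(benchmark, on=["firm_id", "year"], suffixes=("_db", "_10k"))

    # Flag discrepancies larger than a 1 percent relative tolerance
    merged["rel_diff"] = (merged["total_assets_db"] - merged["total_assets_10k"]).abs() \
                         / merged["total_assets_10k"].abs()
    errors = merged[merged["rel_diff"] > 0.01]

    print(f"firm-years checked:  {len(merged)}")
    print(f"discrepancies > 1%:  {len(errors)}  ({len(errors) / len(merged):.1%})")

Even a few hundred sampled firm-years yield a rough error rate and a list of suspect records, though, as noted above, such a check rarely produces corrections that help much with samples of tens of thousands of observations.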

The Audit Analytics database is purportedly especially prone to errors and biases, but I've not seen much in the way of  published studies on these potential problems. This database is critically analyzed with several others in the following reference:

A Critical Analysis of Databases Used in Financial Misconduct Research
 by Jonathan M. Karpoff , Allison Koester, D. Scott Lee, and Gerald S. Martin
July 20, 2012
http://www.efa2012.org/papers/s1a1.pdf
Also see
http://www.fesreg.com/index.php/research/financial-misconduct/88-a-critical-analysis-of-databases-used-in-financial-misconduct-research

"Error Rates in CRSP and Compustat Data Bases and Their Implications"
Barr Rosenberg and Michel Houglet
The Journal of Finance, Volume 29, Issue 4, pages 1303–1310, September 1974

"Higgledy Piggledy Bankruptcy"
Douglas Wood and Jenifer Piesse
Manchester Business School Working Paper No. 148, 1987
http://books.google.com/books/about/Higgledy_piggledy_bankruptcy.html?id=bZBXAAAAMAAJ

"The Market Reaction to 10-K and 10-Q Filings and to Subsequent The Wall Street Journal Earnings Announcements"
E. K. Stice
The Accounting Review, 1991

"On The Operating Performance of REITs Following Seasoned Equity Offerings: Anomaly Revisited"
C. Ghosh, S. Roark, and C. F. Sirmans
The Journal of Real Estate Finance and …, 2013 (Springer)

"A Further Examination of Income Shifting Through Transfer Pricing Considering Firm Size and/or Distress"
T. L. Conover and N. B. Nichols
The International Journal of Accounting, 2000
(Kinney and Swanson [1993] specifically addressed COMPUSTAT errors and omissions involving the tax fields.)

"On Alternative Measures of Accruals"
L. Shi and H. Zhang
Accounting Horizons, 2011
(From the paper: "The main explanation for this type of non-articulation is Compustat errors, to which five out of the six observations can be attributed.")

 

Questions (actually a favor request)
Are there some current references on the data errors in public databases that are mostly used in accountics science studies?


For example, how reliable are the Datastream databases?
I have not seen much published about Datastream errors and biases.

October 21, 2013 reply from Dan Stone

A recent article in "The Economist" decries the absence of replication in
science.

short url:
http://tinyurl.com/lepu6zz

http://www.economist.com/news/leaders/21588069-scientific-research-has-changed-world-now-it-needs-change-itself-how-science-goes-wrong


 

October 21, 2013 reply from Bob Jensen

I read The Economist every week and usually respect it sufficiently to quote it a lot. But sometimes articles disappoint me as an academic in search of evidence for controversial assertions like the one you link to about declining replication in the sciences.

Dartmouth Professor Nyhan paints a somewhat similar picture in which some of the leading medical journals now "tend to fail to replicate." However, other journals that he mentions are requiring replication archives and replication audits. It seems to me that some top science journals are becoming more concerned about the validity of research findings while perhaps others have become more lax.

"Academic reforms: A four-part proposal," by Brendon Nyhan, April 16, 2013 ---
http://www.brendan-nyhan.com/blog/2012/04/academic-reforms-a-four-part-proposal.html

The "collaborative replication" idea has become a big deal. I have a former psychology colleague at Trinity who has a stellar reputation for empirical brain research in memory. She tells me that she does not submit articles any more until they have been independently replicated by other experts.

It may well be true that natural science journals have become negligent in requiring replication and in providing incentives to replicate. However, perhaps, because the social science journals have a harder time being believed, I think that some of their top journals have become more obsessed with replication.

In any case I don't know of any science that is less concerned with lack of replication than accountics science. TAR has a policy of not publishing replications or replication abstracts unless the replication is only incidental to extending the findings with new research findings. TAR also has a recent reputation of not encouraging commentaries on the papers it publishes.

Has TAR even published a commentary on any paper it published in recent years?

Have you encountered any recent investigations into errors in our most popular public databases in accountics science?

Thanks,
Bob Jensen

 

October 22, 2013 reply from Roman Chychyla

Hello Professor Jensen,

My name is Roman Chychyla and I am a 5th year PhD student in AIS at Rutgers business school. I have seen your post at AECM regarding errors in accounting databases. I find this issue quite interesting. As a matter of fact, it is a part of my dissertation. I have recently put on SSRN a working paper that I wrote with my adviser, Alex Kogan, that compares annual numbers in Compustat to numbers in 10-K filings on a large-scale basis using the means of XBRL technology: http://ssrn.com/abstract=2304473

My impression from working on that paper is that the volume of errors in Compustat is relatively low (probably by now Compustat has a decent data verification process in place). However, the Compustat adjustments designed to standardize variables may be a serious issue. These adjustments sometimes result in both economically and statistically significant differences between Compustat and 10-K concepts that change the distribution of underlying variables. This, in turn, may affect the outcome of empirical models that rely on Compustat data.

Arguably, the adjustments may be a good thing (although an opposite argument is that companies themselves are in the best position to present their numbers adequately). But it may well be the case that accounting researches are not fully aware of these adjustments and do not take them into account. For example, a number of archival accounting studies implicitly assume that market participants operate based on Compustat numbers at the times of financial reports being released, while what market participants really see are the unmodified numbers in financial reports. Moreover, Compustat does not provide original numbers from financial reports, and it was unknown how large the differences are. In our paper, we study the amount and magnitude of these differences and document them.

Hope you find this information interesting. Please feel free to contact me any time. Thanks.

All the best,
Roman

October 22, 2013 reply from Bob Jensen

Hi Roman,

Thank you so much for your reply. I realize that Compustat and CRSP have been around long enough to program in some error controls. However, you are on a tack that I never thought of taking.

My interest is more with the newer Datastream database and with Audit Analytics, which I still don't fully trust.

May I share your reply with the AECM?

Thanks,
Bob

 

October 23, 2013  reply from Roman Chychyla

I agree, new databases are more prone to errors. There were a lot of errors in early versions of Compustat and CRSP as Rosenberg and Houglet showed. On the other hand, the technology now is better and the error-verification processes should be more advanced and less costly.

Of course, feel free to share our correspondence with the AECM.

Thanks!

Best,
Roman

October 21, 2013 reply from Dan Stone

A recent article in "The Economist" decries the absence of replication in
science.

short url:
http://tinyurl.com/lepu6zz

http://www.economist.com/news/leaders/21588069-scientific-research-
has-changed-world-now-it-needs-change-itself-how-science-goes-wrong


 

October 21, 2013 reply from Bob Jensen

I read The Economist every week and usually respect it sufficiently to quote it a lot. But sometimes articles disappoint me as an academic in search of evidence for controversial assertions like the one you link to about declining replication in the sciences.

Dartmouth Professor Nyhan paints a somewhat similar picture in which some of the leading medical journals now "tend to fail to replicate." However, other journals that he mentions are requiring replication archives and replication audits. It seems to me that some top science journals are becoming more concerned about the validity of research findings while perhaps others have become more lax.

"Academic reforms: A four-part proposal," by Brendon Nyhan, April 16, 2013 ---
http://www.brendan-nyhan.com/blog/2012/04/academic-reforms-a-four-part-proposal.html

The "collaborative replication" idea has become a big deal. I have a former psychology colleague at Trinity who has a stellar reputation for empirical brain research in memory. She tells me that she does not submit articles any more until they have been independently replicated by other experts.

It may well be true that natural science journals have become negligent in requiring replication and in providing incentives to replicate. However, perhaps because the social science journals have a harder time being believed, I think that some of their top journals have become more obsessed with replication.

In any case I don't know of any science that is less concerned with lack of replication than accountics science. TAR has a policy of not publishing replications or replication abstracts unless the replication is only incidental to extending the findings with new research findings. TAR also has a recent reputation of not encouraging commentaries on the papers it publishes.

Has TAR even published a commentary on any paper it published in recent years?

Have you encountered any recent investigations into errors in our most popular public databases in accountics science?

Thanks,
Bob Jensen

 

 


Are accountics scientists more honest and ethical than real scientists?

Accountics science is defined at http://www.trinity.edu/rjensen/395wpTAR/Web/TAR395wp.htm
One of the main reasons Bob Jensen contends that accountics science is not yet a real science is the lack of exacting replications of accountics science findings. By exacting replications he means reproducibility as defined in the IUPAC Gold Book ---
http://www.trinity.edu/rjensen/TheoryTAR.htm#Replication

The leading accountics science journals (and indeed the leading academic accounting research journals) are The Accounting Review (TAR), the Journal of Accounting Research (JAR), and the Journal of Accounting and Economics (JAE). Publishing accountics science in these journals is a necessary condition for nearly all accounting researchers at top R1 research universities in North America.

On the AECM listserv, Bob Jensen and former TAR Senior Editor Steven Kachelmeier have had an ongoing debate about accountics science relevance and replication for well over a year in what Steve now calls a game of CalvinBall. When Bob Jensen noted the lack of exacting replication in accountics science, Steve's CalvinBall reply was that replication is the name of the game in accountics science:

The answer to your question, "Do you really think accounting researchers have the hots for replicating their own findings?" is unequivocally YES, though I am not sure about the word "hots." Still, replications in the sense of replicating prior findings and then extending (or refuting) those findings in different settings happen all the time, and they get published regularly. I gave you four examples from one TAR issue alone (July 2011). You seem to disqualify and ignore these kinds of replications because they dare to also go beyond the original study. Or maybe they don't count for you because they look at their own watches to replicate the time instead of asking to borrow the original researcher's watch. But they count for me.

To which my CalvinBall reply to Steve is --- "WOW!" In the past four decades of all this unequivocal replication in accountics science there's not been a single scandal. Out of the thousands of accountics science papers published in TAR, JAR, and JAE there's not been a single paper withdrawn after publication, to my knowledge, because of a replication study discovery. Sure there have been some quibbles about details in the findings and some improvements in statistical significance by tweaking the regression models, but there's not been a replication finding serious enough to force a publication retraction or serious enough to force the resignation of an accountics scientist.

In real science, where more exacting replications really are the name of the game, there have been many scandals over the past four decades. Nearly all top science journals have retracted articles because independent researchers could not exactly replicate the reported findings. And it's not all that rare to force a real scientist to resign due to scandalous findings in replication efforts.

The most serious scandals entail faked data or even faked studies. These types of scandals apparently have never been detected among thousands of accountics science publications. The implication is that accountics scientists are more honest as a group than real scientists. I guess that's either good news or bad replicating.

Given the pressures brought to bear on accounting faculty to publish accountics science articles, the accountics science scandal may be that accountics science replications have never revealed a scandal --- to my knowledge at least.

One of the most recent scandals involved a very well-known psychologist named Marc Hauser.
"Author on leave after Harvard inquiry Investigation of scientist’s work finds evidence of misconduct, prompts retraction by journal," by Carolyn Y. Johnson, The Boston Globe, August 10, 2010 ---
http://www.boston.com/news/education/higher/articles/2010/08/10/author_on_leave_after_harvard_inquiry/

Harvard University psychologist Marc Hauser — a well-known scientist and author of the book “Moral Minds’’ — is taking a year-long leave after a lengthy internal investigation found evidence of scientific misconduct in his laboratory.

The findings have resulted in the retraction of an influential study that he led. “MH accepts responsibility for the error,’’ says the retraction of the study on whether monkeys learn rules, which was published in 2002 in the journal Cognition.

Two other journals say they have been notified of concerns in papers on which Hauser is listed as one of the main authors.

It is unusual for a scientist as prominent as Hauser — a popular professor and eloquent communicator of science whose work has often been featured on television and in newspapers — to be named in an investigation of scientific misconduct. His research focuses on the evolutionary roots of the human mind.

In a letter Hauser wrote this year to some Harvard colleagues, he described the inquiry as painful. The letter, which was shown to the Globe, said that his lab has been under investigation for three years by a Harvard committee, and that evidence of misconduct was found. He alluded to unspecified mistakes and oversights that he had made, and said he will be on leave for the upcoming academic year.

Continued in article

Update:  Hauser resigned from Harvard in 2011 after the published research in question was retracted by the journals.

Not only have there been no similar reported accountics science scandals called to my attention, I'm not aware of any investigations of impropriety that were discovered among all those "replications" claimed by Steve.

What is an Exacting Replication?
I define an exacting replication as one in which the findings are reproducible by independent researchers using the IUPAC Gold Book standards for reproducibility. Steve makes a big deal about time extensions, as if they make such exacting replications almost impossible in accountics science. He states:

By "exacting replication," you appear to mean doing exactly what the original researcher did -- no more and no less. So if one wishes to replicate a study conducted with data from 2000 to 2008, one had better not extend it to 2009, as that clearly would not be "exacting." Or, to borrow a metaphor I've used earlier, if you'd like to replicate my assertion that it is currently 8:54 a.m., ask to borrow my watch -- you can't look at your watch because that wouldn't be an "exacting" replication.

That's CalvinBall bull since in many of these time extensions it's also possible to conduct an exacting replication. Richard Sansing pointed out how he conducted an exacting replication of the findings in Dhaliwal, Li and R. Trezevant (2003), "Is a dividend tax penalty incorporated into the return on a firm's common stock?," Journal of Accounting and Economics 35: 155-178. Although Richard and his coauthor extend the Dhaliwal findings, they first conducted an exacting replication in their paper published in The Accounting Review 85 (May 2010): 849-875.

My quibble with Richard is mostly that conducting an exacting replication of the Dhaliwal et al. paper was not exactly a burning (hot) issue if nobody bothered to replicate that award-winning JAE paper for seven years.

This raises the question of why there are not more frequent and timely exacting replications conducted in accountics science when the databases themselves are commercially available, like the CRSP, Compustat, and AuditAnalytics databases. Exacting replications from these databases are relatively easy and cheap to conduct. My contention here is that there's no incentive to eagerly conduct exacting replications if the accountics journals will not even publish commentaries about published studies. Steve and I've played CalvinBall with the commentaries issue before. He contends that TAR editors do not prevent commentaries from being published in TAR. The barriers to validity-questioning commentaries in TAR are the 574 referees who won't accept submitted commentaries ---
http://www.trinity.edu/rjensen/TheoryTAR.htm#ColdWater
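
To make "relatively easy and cheap" concrete, here is a minimal sketch of what an exacting replication check against such a database extract might look like. Everything in it is hypothetical (the file name, the variable names, and the "published" coefficients); it only illustrates the mechanics of re-estimating a reported model on the same sample window and comparing coefficients.

```python
# A minimal, hypothetical sketch of an exacting replication check: re-estimate a
# published OLS specification on the same sample extract and compare the estimated
# coefficients with the reported ones.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("compustat_crsp_2000_2008.csv")             # hypothetical extract of the original sample

model = smf.ols("ret ~ earnings + accruals", data=data).fit()  # hypothetical published specification

published = {"Intercept": 0.012, "earnings": 0.85, "accruals": -0.32}  # hypothetical reported values
tolerance = 0.01
for term, reported in published.items():
    replicated = model.params[term]
    verdict = "matches" if abs(replicated - reported) <= tolerance else "DIFFERS"
    print(f"{term}: reported {reported:+.3f}, replicated {replicated:+.3f} -> {verdict}")
```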

Exacting replications of behavioral experiments in accountics science are more difficult and costly because the replicators must conduct their own experiments by collecting their own data. But it seems to me that this is no more difficult in accountics science than in performing the exacting replications that are reported in the research literature of psychology. However, psychologists often have more incentives to conduct exacting replications for the following reasons that I surmise:

  1. Practicing psychologists are more demanding of validity tests of research findings. Practicing accountants seem to pretty much ignore behavioral experiments published in TAR, JAR, and JAE such that there's not as much pressure brought to bear on validity testing of accountics science findings. One test of practitioner lack of interest is the lack of citation of accountics science in practitioner journals.
     
  2. Psychology researchers have more incentives to replicate experiments of others since there are more outlets for publication credits of replication studies, especially in psychology journals that encourage commentaries on published research ---
    http://www.trinity.edu/rjensen/TheoryTAR.htm#TARversusJEC

My opinion remains that accountics science will never be a real science until exacting replication of research findings becomes the name of the game in accountics science. This includes exacting replications of behavioral experiments as well as analysis of public data from CRSP, Compustat, AuditAnalytics, and other commercial databases. Note that the willingness of accountics science authors to share their private data for replication purposes is a very good thing (I fought for this when I was on the AAA Executive Committee), but conducting replication studies of such data does not hold up well under the IUPAC Gold Book.

Note, however, that lack of exacting replication and other validity testing in general are only part of the huge problems with accountics science. The biggest problem, in my judgment, is the way accountics scientists have established monopoly powers over accounting doctoral programs, faculty hiring criteria, faculty performance criteria, and pay scales. Accounting researchers using other methodologies like case and field research become second class faculty.

Since the odds of getting a case or field study published are so low, very few of such studies are even submitted for publication in TAR in recent years. Replication of these is a non-issue in TAR.

"Annual Report and Editorial Commentary for The Accounting Review," by Steven J. Kachelmeier The University of Texas at Austin, The Accounting Review, November 2009, Page 2056.

There's not much hope for case, field, survey, and other non-accountics researchers to publish in the leading research journal of the American Accounting Association.

What went wrong with accountics research?
http://www.trinity.edu/rjensen/theory01.htm#WhatWentWrong

"We fervently hope that the research pendulum will soon swing back from the narrow lines of inquiry that dominate today's leading journals to a rediscovery of the richness of what accounting research can be. For that to occur, deans and the current generation of academic accountants must give it a push."
Granof and Zeff --- http://www.trinity.edu/rjensen/TheoryTAR.htm#Appendix01

Michael H. Granof is a professor of accounting at the McCombs School of Business at the University of Texas at Austin. Stephen A. Zeff is a professor of accounting at the Jesse H. Jones Graduate School of Management at Rice University.

I admit that I'm just one of those professors heeding the Granof and Zeff call to "give it a push," but it's hard to get accountics professors to give up their monopoly on TAR, JAR, JAE, and in recent years Accounting Horizons. It's even harder to get them to give up their iron monopoly clasp on North American Accountancy Doctoral Programs ---
http://www.trinity.edu/rjensen/Theory01.htm#DoctoralPrograms 

September 10, 2011 message from Bob Jensen

Hi Raza,
 

Please don't get me wrong. As an old accountics researcher myself, I'm all in favor of continuing accountics research full speed ahead. The younger mathematicians like Richard Sansing are doing it better these days. What I'm upset about is the way the accountics science quants took over TAR, AH, accounting faculty performance standards in R1 universities, and virtually all accounting doctoral programs in North America.

Monopolies are not all bad --- they generally do great good for mankind. The problem is that monopolies shut out the competition. In the case of accountics science, the accountics scientists have shut out competing research methods to a point where accounting doctoral students must write accountics science dissertations, and TAR referees will not open the door to case studies, field studies, accounting history studies, or commentaries critical of accountics science findings in TAR.

The sad thing is that even if we open up our doctoral programs to other research methodologies, the students themselves may prefer accountics science research. It's generally easier to apply regression models to CRSP, Compustat, and Audit Analytics databases than to go off campus to collect data and come up with clever ideas to improve accounting practice in ways that will amaze practitioners.

Another problem with accountics science is that this monopoly has not created incentives for validity checking of accountics science findings. This has prevented accountics science from being real science where validity checking is a necessary condition for research and publication. If TAR invited commentaries on validity testing of TAR publications, I think there would be more replication efforts.

If TAR commenced a practitioners' forum where practitioners were "assigned" to discuss TAR articles, perhaps there would be more published insights into possible relevance of accountics science to the practice of accountancy. I put "assign" in quotations since practitioners may have to be nudged in some ways to get them to critique accountics science articles.

There are some technical areas where practitioners have more expertise than accountics scientists, particularly in the areas of insurance accounting, pension accounting, goodwill impairment testing, accounting for derivative financial instruments, hedge accounting, etc. Perhaps these practitioner experts might even publish a "research needs" forum in TAR such that our very bright accountics scientists would be inspired to focus their many talents on some accountancy practice technical problems.

My main criticism of accountics scientists is that the 600+ TAR referees have shut down critical commentaries of their works and the recent editors of TAR have been unimaginative in thinking of ways to motivate replication research, TAR article commentaries, and a focus of accountics scientists on professional practice problems.

Some ideas for improving TAR are provided at
http://www.trinity.edu/rjensen/TheoryTAR.htm

Particularly note the module on
TAR versus AMR and AMJ

 


Accountics Scientists Seeking Truth: 
"Frankly, Scarlett, after I get a hit for my resume in The Accounting Review I just don't give a damn"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm
One more mission in what's left of my life will be to try to change this
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm 

Science Warriors' Ego Trips (Accountics)
It is the mark of an educated mind to be able to entertain a thought without accepting it.
Aristotle

"Science Warriors' Ego Trips," by Carlin Romano, Chronicle of Higher Education's The Chronicle Review, April 25, 2010 ---
http://chronicle.com/article/Science-Warriors-Ego-Trips/65186/

Standing up for science excites some intellectuals the way beautiful actresses arouse Warren Beatty, or career liberals boil the blood of Glenn Beck and Rush Limbaugh. It's visceral. The thinker of this ilk looks in the mirror and sees Galileo bravely muttering "Eppure si muove!" ("And yet, it moves!") while Vatican guards drag him away. Sometimes the hero in the reflection is Voltaire sticking it to the clerics, or Darwin triumphing against both Church and Church-going wife. A brave champion of beleaguered science in the modern age of pseudoscience, this Ayn Rand protagonist sarcastically derides the benighted irrationalists and glows with a self-anointed superiority. Who wouldn't want to feel that sense of power and rightness?

You hear the voice regularly—along with far more sensible stuff—in the latest of a now common genre of science patriotism, Nonsense on Stilts: How to Tell Science From Bunk (University of Chicago Press), by Massimo Pigliucci, a philosophy professor at the City University of New York. Like such not-so-distant books as Idiot America, by Charles P. Pierce (Doubleday, 2009), The Age of American Unreason, by Susan Jacoby (Pantheon, 2008), and Denialism, by Michael Specter (Penguin Press, 2009), it mixes eminent common sense and frequent good reporting with a cocksure hubris utterly inappropriate to the practice it apotheosizes.

According to Pigliucci, both Freudian psychoanalysis and Marxist theory of history "are too broad, too flexible with regard to observations, to actually tell us anything interesting." (That's right—not one "interesting" thing.) The idea of intelligent design in biology "has made no progress since its last serious articulation by natural theologian William Paley in 1802," and the empirical evidence for evolution is like that for "an open-and-shut murder case."

Pigliucci offers more hero sandwiches spiced with derision and certainty. Media coverage of science is "characterized by allegedly serious journalists who behave like comedians." Commenting on the highly publicized Dover, Pa., court case in which U.S. District Judge John E. Jones III ruled that intelligent-design theory is not science, Pigliucci labels the need for that judgment a "bizarre" consequence of the local school board's "inane" resolution. Noting the complaint of intelligent-design advocate William Buckingham that an approved science textbook didn't give creationism a fair shake, Pigliucci writes, "This is like complaining that a textbook in astronomy is too focused on the Copernican theory of the structure of the solar system and unfairly neglects the possibility that the Flying Spaghetti Monster is really pulling each planet's strings, unseen by the deluded scientists."

Is it really? Or is it possible that the alternate view unfairly neglected could be more like that of Harvard scientist Owen Gingerich, who contends in God's Universe (Harvard University Press, 2006) that it is partly statistical arguments—the extraordinary unlikelihood eons ago of the physical conditions necessary for self-conscious life—that support his belief in a universe "congenially designed for the existence of intelligent, self-reflective life"? Even if we agree that capital "I" and "D" intelligent-design of the scriptural sort—what Gingerich himself calls "primitive scriptural literalism"—is not scientifically credible, does that make Gingerich's assertion, "I believe in intelligent design, lowercase i and lowercase d," equivalent to Flying-Spaghetti-Monsterism?

Tone matters. And sarcasm is not science.

The problem with polemicists like Pigliucci is that a chasm has opened up between two groups that might loosely be distinguished as "philosophers of science" and "science warriors." Philosophers of science, often operating under the aegis of Thomas Kuhn, recognize that science is a diverse, social enterprise that has changed over time, developed different methodologies in different subsciences, and often advanced by taking putative pseudoscience seriously, as in debunking cold fusion. The science warriors, by contrast, often write as if our science of the moment is isomorphic with knowledge of an objective world-in-itself—Kant be damned!—and any form of inquiry that doesn't fit the writer's criteria of proper science must be banished as "bunk." Pigliucci, typically, hasn't much sympathy for radical philosophies of science. He calls the work of Paul Feyerabend "lunacy," deems Bruno Latour "a fool," and observes that "the great pronouncements of feminist science have fallen as flat as the similarly empty utterances of supporters of intelligent design."

It doesn't have to be this way. The noble enterprise of submitting nonscientific knowledge claims to critical scrutiny—an activity continuous with both philosophy and science—took off in an admirable way in the late 20th century when Paul Kurtz, of the University at Buffalo, established the Committee for the Scientific Investigation of Claims of the Paranormal (Csicop) in May 1976. Csicop soon after launched the marvelous journal Skeptical Inquirer, edited for more than 30 years by Kendrick Frazier.

Although Pigliucci himself publishes in Skeptical Inquirer, his contributions there exhibit his signature smugness. For an antidote to Pigliucci's overweening scientism 'tude, it's refreshing to consult Kurtz's curtain-raising essay, "Science and the Public," in Science Under Siege (Prometheus Books, 2009, edited by Frazier), which gathers 30 years of the best of Skeptical Inquirer.

Kurtz's commandment might be stated, "Don't mock or ridicule—investigate and explain." He writes: "We attempted to make it clear that we were interested in fair and impartial inquiry, that we were not dogmatic or closed-minded, and that skepticism did not imply a priori rejection of any reasonable claim. Indeed, I insisted that our skepticism was not totalistic or nihilistic about paranormal claims."

Kurtz combines the ethos of both critical investigator and philosopher of science. Describing modern science as a practice in which "hypotheses and theories are based upon rigorous methods of empirical investigation, experimental confirmation, and replication," he notes: "One must be prepared to overthrow an entire theoretical framework—and this has happened often in the history of science ... skeptical doubt is an integral part of the method of science, and scientists should be prepared to question received scientific doctrines and reject them in the light of new evidence."

Considering the dodgy matters Skeptical Inquirer specializes in, Kurtz's methodological fairness looks even more impressive. Here's part of his own wonderful, detailed list: "Psychic claims and predictions; parapsychology (psi, ESP, clairvoyance, telepathy, precognition, psychokinesis); UFO visitations and abductions by extraterrestrials (Roswell, cattle mutilations, crop circles); monsters of the deep (the Loch Ness monster) and of the forests and mountains (Sasquatch, or Bigfoot); mysteries of the oceans (the Bermuda Triangle, Atlantis); cryptozoology (the search for unknown species); ghosts, apparitions, and haunted houses (the Amityville horror); astrology and horoscopes (Jeanne Dixon, the "Mars effect," the "Jupiter effect"); spoon bending (Uri Geller). ... "

Even when investigating miracles, Kurtz explains, Csicop's intrepid senior researcher Joe Nickell "refuses to declare a priori that any miracle claim is false." Instead, he conducts "an on-site inquest into the facts surrounding the case." That is, instead of declaring, "Nonsense on stilts!" he gets cracking.

Pigliucci, alas, allows his animus against the nonscientific to pull him away from sensitive distinctions among various sciences to sloppy arguments one didn't see in such earlier works of science patriotism as Carl Sagan's The Demon-Haunted World: Science as a Candle in the Dark (Random House, 1995). Indeed, he probably sets a world record for misuse of the word "fallacy."

To his credit, Pigliucci at times acknowledges the nondogmatic spine of science. He concedes that "science is characterized by a fuzzy borderline with other types of inquiry that may or may not one day become sciences." Science, he admits, "actually refers to a rather heterogeneous family of activities, not to a single and universal method." He rightly warns that some pseudoscience—for example, denial of HIV-AIDS causation—is dangerous and terrible.

But at other points, Pigliucci ferociously attacks opponents like the most unreflective science fanatic, as if he belongs to some Tea Party offshoot of the Royal Society. He dismisses Feyerabend's view that "science is a religion" as simply "preposterous," even though he elsewhere admits that "methodological naturalism"—the commitment of all scientists to reject "supernatural" explanations—is itself not an empirically verifiable principle or fact, but rather an almost Kantian precondition of scientific knowledge. An article of faith, some cold-eyed Feyerabend fans might say.

In an even greater disservice, Pigliucci repeatedly suggests that intelligent-design thinkers must want "supernatural explanations reintroduced into science," when that's not logically required. He writes, "ID is not a scientific theory at all because there is no empirical observation that can possibly contradict it. Anything we observe in nature could, in principle, be attributed to an unspecified intelligent designer who works in mysterious ways." But earlier in the book, he correctly argues against Karl Popper that susceptibility to falsification cannot be the sole criterion of science, because science also confirms. It is, in principle, possible that an empirical observation could confirm intelligent design—i.e., that magic moment when the ultimate UFO lands with representatives of the intergalactic society that planted early life here, and we accept their evidence that they did it. The point is not that this is remotely likely. It's that the possibility is not irrational, just as provocative science fiction is not irrational.

Pigliucci similarly derides religious explanations on logical grounds when he should be content with rejecting such explanations as unproven. "As long as we do not venture to make hypotheses about who the designer is and why and how she operates," he writes, "there are no empirical constraints on the 'theory' at all. Anything goes, and therefore nothing holds, because a theory that 'explains' everything really explains nothing."

Here, Pigliucci again mixes up what's likely or provable with what's logically possible or rational. The creation stories of traditional religions and scriptures do, in effect, offer hypotheses, or claims, about who the designer is—e.g., see the Bible. And believers sometimes put forth the existence of scriptures (think of them as "reports") and a centuries-long chain of believers in them as a form of empirical evidence. Far from explaining nothing because it explains everything, such an explanation explains a lot by explaining everything. It just doesn't explain it convincingly to a scientist with other evidentiary standards.

A sensible person can side with scientists on what's true, but not with Pigliucci on what's rational and possible. Pigliucci occasionally recognizes that. Late in his book, he concedes that "nonscientific claims may be true and still not qualify as science." But if that's so, and we care about truth, why exalt science to the degree he does? If there's really a heaven, and science can't (yet?) detect it, so much the worse for science.

As an epigram to his chapter titled "From Superstition to Natural Philosophy," Pigliucci quotes a line from Aristotle: "It is the mark of an educated mind to be able to entertain a thought without accepting it." Science warriors such as Pigliucci, or Michael Ruse in his recent clash with other philosophers in these pages, should reflect on a related modern sense of "entertain." One does not entertain a guest by mocking, deriding, and abusing the guest. Similarly, one does not entertain a thought or approach to knowledge by ridiculing it.

Long live Skeptical Inquirer! But can we deep-six the egomania and unearned arrogance of the science patriots? As Descartes, that immortal hero of scientists and skeptics everywhere, pointed out, true skepticism, like true charity, begins at home.

Carlin Romano, critic at large for The Chronicle Review, teaches philosophy and media theory at the University of Pennsylvania.

Jensen Comment
One way to distinguish my conceptualization of science from pseudo science is that science relentlessly seeks to replicate and validate purported discoveries, especially after the discoveries have been made public in scientific journals ---
http://www.trinity.edu/rjensen/TheoryTar.htm
Science encourages conjecture but doggedly seeks truth about that conjecture. Pseudo science is less concerned about validating purported discoveries than it is about publishing new conjectures that are largely ignored by other pseudo scientists.

 


 


TAR Versus JEC
Nearly all lab experiments or other empirical studies published in the Journal of Electroanalytical Chemistry (JEC) are replicated.  I mention this journal because one of its famous published studies on cold fusion in 1989 could not (at least not yet) be replicated. The inability of any researchers worldwide to replicate that study destroyed the stellar reputations of the original authors Stanley Pons and Martin Fleischmann.

Others who were loose with their facts: former Harvard researcher John Darsee (faked cardiac research); radiologist Robert Slutsky (altered data; lied); obstetrician William McBride (changed data, ruined stellar reputation); and physicist Jan Hendrik Schön (faked breakthroughs in molecular electronics).
Discover Magazine, December 2010, Page 43


Question
Has an accountics researcher ever retracted a claim?
Among the thousands of published accountics studies some author must be aware, maybe in retrospect, of a false claim?
Perhaps we'll never know!
http://www.trinity.edu/rjensen/TheoryTAR.htm

It's been a bad year for Harvard University science retractions
"3 Harvard Researchers Retract a Claim on the Aging of Stem Cells," by Nicolas Wade, The New York Times, October 14, 2010 ---
http://www.nytimes.com/2010/10/15/science/15retract.html?hpw

Harvard researchers have retracted a far-reaching claim they made in January that the aging of stem cells might be reversible.

The retraction was published in Thursday’s issue of Nature and is signed by the senior author, Amy J. Wagers, and two others. They say that serious concerns, which they did not specify, have undermined their confidence in the original report.

A fourth author, Shane R. Mayack, maintained that the results were still valid and refused to sign the retraction. All four scientists are affiliated with Harvard University and the Joslin Diabetes Center, a Harvard affiliate.

The original article, published by Nature in January, asserted that there was a rejuvenating factor in the blood of young mice that could reverse symptoms of aging in the blood-forming stem cells of elderly mice. The therapeutic use of such a factor would be “to extend the youthful function of the aging blood system,” Dr. Wagers and her colleagues wrote.

The article states that Dr. Wagers designed and interpreted the experiments and that Dr. Mayack, a post-doctoral student, performed and analyzed them.

Dr. Wagers issued a statement saying that she had immediately brought the disturbing information to the attention of Nature and the Harvard Medical School, and that she was working to repeat the experiments. She said by e-mail that the information came to light in the course of studies in her laboratory, prompting her to re-examine the reported data.

Press officers at Harvard Medical School, Joslin and the Harvard Stem Cell Institute said the matter was being reviewed but declined to comment further. Rachel Twinn, a Nature press officer, said she could not comment.

Dr. Wagers has expressed her doubts about a second paper co-authored with Dr. Mayack and published in the journal Blood in August 2008. In a statement issued today, the journal said it was posting a “Notice of Concern” about the paper pending further review.

Continued in article


Natural scientists in general are motivated to conduct replication studies in large measure because their commentaries or abstracts on their research, including results of replication testing, are widely published in top science journals. Replication publications, however, may be limited to short commentaries or published abstracts that are refereed. In any case, replicators get publication credits in the academy. Natural scientists deem integrity and accuracy too important to play down by not providing some sort of publication outlet.

There are virtually no published reports of replications of experiments published in The Accounting Review (TAR), although nearly all of TAR's articles in the last 25 years, aside from strictly mathematics analytical papers, are lab experiments or other empirical studies. There are occasional extensions of capital markets (archival database) empiricism, but it's not common in those studies to report independent replication outcomes per se. Since the odds of getting a case or field study published are so low, very few of such studies are even submitted for publication in TAR in recent years. Replication of these is a non-issue in TAR.

"Annual Report and Editorial Commentary for The Accounting Review," by Steven J. Kachelmeier The University of Texas at Austin, The Accounting Review, November 2009, Page 2056.

Table 4 in Heck and Jensen (2007) identifies Cornell's Mark W. Nelson as the accounting scientist having the highest number (eight) of studies published in TAR in the period 1986-2005 --- 
“An Analysis of the Evolution of Research Contributions by The Accounting Review: 1926-2005,” (with Jean Heck), Accounting Historians Journal, Volume 34, No. 2, December 2007, pp. 109-142.

Mark Nelson tends to publish excellent accountancy lab experiments, but I do not know of any of his experiments or other TAR-reported studies of his that have ever been independently replicated. I suspect he wishes that all of his experiments were replicated because, like any researcher, he's fallible on occasion. Replication would also draw greater attention to his fine work. The current TAR editor will not publish commentaries, including abstracts reporting successful replication studies. My contention is that accounting science researchers have been discouraged from conducting replication studies of TAR research because TAR will not publish commentaries/dialogs about papers published in TAR. They may also be discouraged from replication because the hypotheses themselves are uninspiring and uninteresting, but I will not go into that in this message.

 

 

November 22, 2011 reply from Steve Kachelmeier

First, Table 3 in the 2011 Annual Report (submissions and acceptances by area) only includes manuscripts that went through the regular blind reviewing process. That is, it excludes invited presidential scholar lectures, editorials, book reviews, etc. So "other" means "other regular submissions."

Second, you are correct Bob that "other" continues to represent a small percentage of the total acceptances. But "other" is also a very small percentage of the total submissions. As I state explicitly in the report, Table 3 does not prove that TAR is sufficiently diverse. It does, however, provide evidence that TAR acceptances by topical area (or by method) are nearly identically proportional to TAR submissions by topical area (or by method).

Third, for a great example of a recently published TAR study with substantial historical content, see Madsen's analysis of the historical development of standardization in accounting that we published in the September 2011 issue. I conditionally accepted Madsen's submission in the first round, backed by favorable reports from two reviewers with expertise in accounting history and standardization.

Take care,

Steve

 

 

November 23, 2011 reply from Bob Jensen

Hi Steve,

Thank you for the clarification.

Interestingly, Madsen's September 2011 historical study (which came out after your report's May 2011 cutoff date) is a heavy accountics science paper with a historical focus.

It would be interesting to know whether such a paper would've been accepted by TAR referees without the factor analysis (actually principal components analysis). Personally, I doubt any history paper would be accepted without equations and quantitative analysis. In the case of Madsen's paper, if I were a referee I would probably challenge the robustness of the principal components and loadings ---
http://en.wikipedia.org/wiki/Principle_components_analysis 
Actually, factor analysis in general, like nonlinear multiple regression and adaptive versions thereof, suffers greatly from lack of robustness. Sometimes quantitative models gild the lily to a fault.
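
If I were that referee, one simple robustness probe would be to re-estimate the first principal component on bootstrap resamples and see how much its loadings move. The sketch below is my own illustration with simulated data, not Madsen's variables:

```python
# A minimal sketch (simulated data, not Madsen's) of probing the robustness of
# principal-component loadings with a bootstrap: unstable loadings are a warning
# that the components may be artifacts of the particular sample.
import numpy as np

rng = np.random.default_rng(42)
n, p = 200, 6
X = rng.normal(size=(n, p))
X[:, 1] = 0.7 * X[:, 0] + 0.3 * rng.normal(size=n)    # build in some correlation

def first_pc_loadings(data):
    """Loadings of the first principal component of the correlation matrix."""
    eigvals, eigvecs = np.linalg.eigh(np.corrcoef(data, rowvar=False))
    v = eigvecs[:, -1]                                  # eigenvector for the largest eigenvalue
    return v if v[0] >= 0 else -v                       # fix the sign so resamples are comparable

baseline = first_pc_loadings(X)
resamples = np.array([first_pc_loadings(X[rng.integers(0, n, n)]) for _ in range(500)])
print("baseline loadings:        ", np.round(baseline, 2))
print("bootstrap std. of loadings:", np.round(resamples.std(axis=0), 2))
```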

Bob Kaplan's Presidential Scholar historical study was published, but this was not subjected to the usual TAR refereeing process.

The History of The Accounting Review paper written by Jean Heck and Bob Jensen, which won a best paper award from the Accounting Historians Journal, was initially flatly rejected by TAR. I was never quite certain whether the main reason was that it did not contain equations or that it was critical of TAR editorship and refereeing. In any case it was flatly rejected by TAR, including a rejection by one referee who refused to put reasons in writing for feedback to Jean and me.

“An Analysis of the Evolution of Research Contributions by The Accounting Review: 1926-2005,” (with Jean Heck), Accounting Historians Journal, Volume 34, No. 2, December 2007, pp. 109-142.

I would argue that accounting history papers, normative methods papers, and scholarly commentary papers (like Bob Kaplan's plenary address) are not submitted to TAR because of the general perception among the AAA membership that such submissions do not have a snowball's chance in Hell of being accepted unless they are also accountics science papers.

It's a waste of time and money to submit papers to TAR that are not accountics science papers.

In spite of differences of opinion, I do thank you for the years of blood, sweat, and tears that you gave us as Senior Editor of TAR.

And I wish you and all U.S. subscribers to the AECM a very Happy Thanksgiving. Special thanks to Barry and Julie and the AAA staff for keeping the AECM listserv up and running.

Respectfully,
Bob Jensen

 

 

 

Linda Bamber is a former editor of TAR and was greatly aided in this effort by her husband.

The BAMBERs Illustration

Years back I was responsible for an afternoon workshop and enjoyed the privilege of sitting in on the tail end of the morning workshop on journal editing conducted by Linda and Mike Bamber. At the time Linda was Senior Editor of The Accounting Review.

I have great respect for both Linda and Mike, and my criticism here applies to the editorial policies of the American Accounting Association and other publishers of top accounting research journals. In no way am I criticizing Linda and Mike for the huge volunteer effort that both of them are giving to The Accounting Review (TAR).

Mike's presentation focused upon a recent publication in TAR based upon a behavioral experiment using 25 auditors. Mike greatly praised the research and the article's write-up. My question afterwards was whether TAR would accept a replication study or publish an abstract of a replication that confirmed the outcomes of the original TAR publication. The answer was absolutely NO! One subsequent TAR editor even told me it would be confusing if the replication contradicted the original study.

Now think of the absurdity of the above policy on publishing at least commentary abstracts of replications. Scientists would shake their heads and snicker at accounting research. No scientific experiment is considered worthy until it has been independently replicated multiple times. Science professors thus have an advantage over accounting professors in playing the “journal hits” game for promotion and tenure, because their top journals will publish replications. Scientists are constantly seeking truth and challenging whether it’s really the truth.

Thus I come to my main point that is far beyond the co-authorship issue that stimulated this message. My main point is that in academic accounting research publishing, we are more concerned with the cleverness of the research than in the “truth” of the findings themselves.

Have I become too much of a cynic in my old age? Except in a limited number of capital markets events studies, have accounting researchers published replications due to genuine interest by the public in whether the earlier findings hold true? Or do we hold the findings as self-evident on the basis of one published study with as few as 25 experimental participants? Or is there any interest in the findings themselves to the general public apart from interest in the methods and techniques of interest to researchers themselves?

 


Accounting Research Versus Social Science Research
It is more common in the social sciences, relative to the natural sciences, to publish studies that are unreplicated. However, lack of replication is often addressed more openly in the articles themselves and stated as a limitation, relative to business and accounting empirical research.

"New Center Hopes to Clean Up Sloppy Science and Bogus Research," by Tom Bartlett, Chronicle of Higher Education, March 6, 2013 ---
http://chronicle.com/article/New-Center-Hopes-to-Clean-Up/137683/

Something is wrong with science, or at least with how science is often done. Flashy research in prestigious journals later proves to be bogus. Researchers have built careers on findings that are dubious or even turn out to be fraudulent. Much of the conversation about that trend has focused on flaws in social psychology, but the problem is not confined to a single field. If you keep up with the latest retractions and scandals, it's hard not to wonder how much research is trustworthy.

But Tuesday might just be a turning point. A new organization, called the Center for Open Science, is opening its doors in an attempt to harness and focus a growing movement to clean up science. The center's organizers don't put it quite like that; they say the center aims to "build tools to improve the scientific process and promote accurate, transparent findings in scientific research." Now, anybody with an idea and some chutzpah can start a center. But what makes this effort promising is that it has some real money behind it: The center has been given $5.25-million by the Laura and John Arnold Foundation to help get started.

It's also promising because a co-director of the center is Brian Nosek, an associate professor of psychology at the University of Virginia (the other director is a Virginia graduate student, Jeffrey Spies). Mr. Nosek is the force behind the Reproducibility Project, an effort to replicate every study from three psychology journals published in 2008, in an attempt to gauge how much published research might actually be baseless.

Mr. Nosek is one of a number of strong voices in psychology arguing for more transparency and accountability. But up until now there hasn't been an organization solely devoted to solving those problems. "This gives real backing to show that this is serious and that we can really put the resources behind it to do it right," Mr. Nosek said. "This whole movement, if it is a movement, has gathered sufficient steam to actually come to this."

'Rejigger Those Incentives'

So what exactly will the center do? Some of that grant money will go to finance the Reproducibility Project and to further develop the Open Science Framework, which already allows scientists to share and store findings and hypotheses. More openness is intended to combat, among other things, the so-called file-drawer effect, in which scientists publish their successful experiments while neglecting to mention their multiple flubbed attempts, giving a false impression of a finding's robustness.

The center hopes to encourage scientists to "register" their hypotheses before they carry out experiments, a procedure that should help keep them honest. And the center is working with journals, like Perspectives on Psychological Science, to publish the results of experiments even if they don't pan out the way the researchers hoped. Scientists are "reinforced for publishing, not for getting it right in the current incentives," Mr. Nosek said. "We're working to rejigger those incentives."

Mr. Nosek and his compatriots didn't solicit funds for the center. Foundations have been knocking on their door. The Arnold Foundation sought out Mr. Nosek because of a concern about whether the research that's used to make policy decisions is really reliable.

"It doesn't benefit anyone if the publications that get out there are in any way skewed toward the sexy results that might be a fluke, as opposed to the rigorous replication and testing of ideas," said Stuart Buck, the foundation's director of research.

Other foundations have been calling too. With more grants likely to be on the way, Mr. Nosek thinks the center will have $8-million to $10-million in commitments before writing a grant proposal. The goal is an annual budget of $3-million. "There are other possibilities that we might be able to grow more dramatically than that," Mr. Nosek said. "It feels like it's raining money. It's just ridiculous how much interest there is in these issues."

Continued in article

Jensen Comment
Accountics scientists set a high bar because they replicate virtually all their published research.

Yeah Right!
Accountics science journals like The Accounting Review have referees that discourage replications by refusing to publish them. They won't even publish commentaries that question the outcomes ---
http://www.trinity.edu/rjensen/TheoryTAR.htm

Accountics science researchers won't even discuss their work on the AAA Commons ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm


Robustness Issues

Robust Statistics --- http://en.wikipedia.org/wiki/Robust_statistics

"ECONOMICS AS ROBUSTNESS ANALYSIS," by Jaakko Kuorikoski, Aki Lehtinen and Caterina Marchionn, he University of Pittsburgh, 2007 ---
http://philsci-archive.pitt.edu/3550/1/econrobu.pdf

ECONOMICS AS ROBUSTNESS ANALYSIS
Jaakko Kuorikoski, Aki Lehtinen and Caterina Marchionni
25.9. 2007
1. Introduction
2. Making sense of robustness
3. Robustness in economics
4. The epistemic import of robustness analysis
5. An illustration: geographical economics models
6. Independence of derivations
7. Economics as a Babylonian science
8. Conclusions
 

1.Introduction
Modern economic analysis consists largely in building abstract mathematical models and deriving familiar results from ever sparser modeling assumptions is considered as a theoretical contribution. Why do economists spend so much time and effort in deriving same old results from slightly different assumptions rather than trying to come up with new and exciting hypotheses? We claim that this is because the process of refining economic models is essentially a form of robustness analysis. The robustness of modeling results with respect to particular modeling assumptions, parameter values or initial conditions plays a crucial role for modeling in economics for two reasons. First, economic models are difficult to subject to straightforward empirical tests for various reasons. Second, the very nature of economic phenomena provides little hope of ever making the modeling assumptions completely realistic. Robustness analysis is therefore a natural methodological strategy for economists because economic models are based on various idealizations and abstractions which make at least some of their assumptions unrealistic (Wimsatt 1987; 1994a; 1994b; Mäki 2000; Weisberg 2006b). The importance of robustness considerations in economics ultimately forces us to reconsider many commonly held views on the function and logical structure of economic theory.

Given that much of economic research praxis can be characterized as robustness analysis, it is somewhat surprising that philosophers of economics have only recently become interested in robustness. William Wimsatt has extensively discussed robustness analysis, which he considers in general terms as triangulation via independent ways of determination . According to Wimsatt, fairly varied processes or activities count as ways of determination: measurement, observation, experimentation, mathematical derivation etc. all qualify. Many ostensibly different epistemic activities are thus classified as robustness analysis. In a recent paper, James Woodward (2006) distinguishes four notions of robustness. The first three are all species of robustness as similarity of the result under different forms of determination. Inferential robustness refers to the idea that there are different degrees to which inference from some given data may depend on various auxiliary assumptions, and derivational robustness to whether a given theoretical result depends on the different modelling assumptions. The difference between the two is that the former concerns derivation from data, and the latter derivation from a set of theoretical assumptions. Measurement robustness means triangulation of a quantity or a value by (causally) different means of measurement. Inferential, derivational and measurement robustness differ with respect to the method of determination and the goals of the corresponding robustness analysis. Causal robustness, on the other hand, is a categorically different notion because it concerns causal dependencies in the world, and it should not be confused with the epistemic notion of robustness under different ways of determination.

In Woodward’s typology, the kind of theoretical model-refinement that is so common in economics constitutes a form of derivational robustness analysis. However, if Woodward (2006) and Nancy Cartwright (1991) are right in claiming that derivational robustness does not provide any epistemic credence to the conclusions, much of theoretical model- building in economics should be regarded as epistemically worthless. We take issue with this position by developing Wimsatt’s (1981) account of robustness analysis as triangulation via independent ways of determination. Obviously, derivational robustness in economic models cannot be a matter of entirely independent ways of derivation, because the different models used to assess robustness usually share many assumptions. Independence of a result with respect to modelling assumptions nonetheless carries epistemic weight by supplying evidence that the result is not an artefact of particular idealizing modelling assumptions. We will argue that although robustness analysis, understood as systematic examination of derivational robustness, is not an empirical confirmation procedure in any straightforward sense, demonstrating that a modelling result is robust does carry epistemic weight by guarding against error and by helping to assess the relative importance of various parts of theoretical models (cf. Weisberg 2006b). While we agree with Woodward (2006) that arguments presented in favour of one kind of robustness do not automatically apply to other kinds of robustness, we think that the epistemic gain from robustness derives from similar considerations in many instances of different kinds of robustness.

In contrast to physics, economic theory itself does not tell which idealizations are truly fatal or crucial for the modeling result and which are not. Economists often proceed on a preliminary hypothesis or an intuitive hunch that there is some core causal mechanism that ought to be modeled realistically. Turning such intuitions into a tractable model requires making various unrealistic assumptions concerning other issues. Some of these assumptions are considered or hoped to be unimportant, again on intuitive grounds. Such assumptions have been examined in economic methodology using various closely related terms such as Musgrave’s (1981) heuristic assumptions, Mäki’s (2000) early step assumptions, Hindriks’ (2006) tractability assumptions and Alexandrova’s (2006) derivational facilitators. We will examine the relationship between such assumptions and robustness in economic model-building by way of discussing a case: geographical economics. We will show that an important way in which economists try to guard against errors in modeling is to see whether the model’s conclusions remain the same if some auxiliary assumptions, which are hoped not to affect those conclusions, are changed. The case also demonstrates that although the epistemological functions of guarding against error and securing claims concerning the relative importance of various assumptions are somewhat different, they are often closely intertwined in the process of analyzing the robustness of some modeling result.

. . .

8. Conclusions
The practice of economic theorizing largely consists of building models with slightly different assumptions yielding familiar results. We have argued that this practice makes sense when seen as derivational robustness analysis. Robustness analysis is a sensible epistemic strategy in situations where we know that our assumptions and inferences are fallible, but not in what situations and in what way. Derivational robustness analysis guards against errors in theorizing when the problematic parts of the ways of determination, i.e. models, are independent of each other. In economics in particular, proving robust theorems from different models with diverse unrealistic assumptions helps us to evaluate what results correspond to important economic phenomena and what are merely artefacts of particular auxiliary assumptions. We have addressed Orzack and Sober’s criticism against robustness as an epistemically relevant feature by showing that their formulation of the epistemic situation in which robustness analysis is useful is misleading. We have also shown that their argument actually shows how robustness considerations are necessary for evaluating what a given piece of data can support. We have also responded to Cartwright’s criticism by showing that it relies on an untenable hope of a completely true economic model.

Viewing economic model building as robustness analysis also helps to make sense of the role of the rationality axioms that apparently provide the basis of the whole enterprise. Instead of the traditional Euclidean view of the structure of economic theory, we propose that economics should be approached as a Babylonian science, where the epistemically secure parts are the robust theorems and the axioms only form what Boyd and Richerson call a generalized sample theory, whose role is to help organize further modelling work and facilitate communication between specialists.

 

Jensen Comment
As I've mentioned before, I spent a goodly proportion of my time for two years in a think tank trying to invent adaptive regression and cluster analysis models. In every case the main reason for my failures was lack of robustness. In particular, any two models fed predictor variables w, x, y, and z could generate different outcomes that were not robust to the time ordering of the variables feeding into the algorithms. This made the results dependent on dynamic programming, which has rarely been noted for computing practicality ---
http://en.wikipedia.org/wiki/Dynamic_programming
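
Here is a small Python illustration of the order-dependence problem described above. It is a toy construction of my own, not the original think-tank models: a sequential selection rule admits each predictor as soon as it improves the fit by more than a fixed threshold, and with highly correlated predictors standing in for w, x, y, and z it settles on different models depending on the order in which the variables are presented. Exhaustively optimizing over orderings (the dynamic-programming-style remedy) quickly becomes impractical as the number of predictors grows.

import itertools
import numpy as np

# Synthetic predictors (illustrative construction): x nearly duplicates w,
# z nearly duplicates y, and the target truly depends on w and y.
rng = np.random.default_rng(0)
n = 200
w = rng.normal(size=n)
x = 0.9 * w + 0.1 * rng.normal(size=n)
y = rng.normal(size=n)
z = 0.9 * y + 0.1 * rng.normal(size=n)
target = w + y + 0.5 * rng.normal(size=n)
predictors = {"w": w, "x": x, "y": y, "z": z}

def rss(cols):
    """Residual sum of squares from an OLS fit of target on the named predictors."""
    X = np.column_stack([np.ones(n)] + [predictors[c] for c in cols])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    return float(resid @ resid)

def sequential_select(order, threshold=10.0):
    """Admit each candidate, in the order offered, if it cuts the RSS by more than the threshold."""
    chosen = []
    for name in order:
        if rss(chosen) - rss(chosen + [name]) > threshold:
            chosen.append(name)
    return chosen

for order in itertools.permutations("wxyz"):
    print("".join(order), "->", sequential_select(list(order)))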

 


Appeal for a "Daisy Chain of Replication"
"Nobel laureate challenges psychologists to clean up their act: Social-priming research needs “daisy chain” of replication," by Ed Yong, Nature, October 3, 2012 ---
http://www.nature.com/news/nobel-laureate-challenges-psychologists-to-clean-up-their-act-1.11535

Nobel prize-winner Daniel Kahneman has issued a strongly worded call to one group of psychologists to restore the credibility of their field by creating a replication ring to check each others’ results.

Kahneman, a psychologist at Princeton University in New Jersey, addressed his open e-mail to researchers who work on social priming, the study of how subtle cues can unconsciously influence our thoughts or behaviour. For example, volunteers might walk more slowly down a corridor after seeing words related to old age, or fare better in general-knowledge tests after writing down the attributes of a typical professor.

Such tests are widely used in psychology, and Kahneman counts himself as a “general believer” in priming effects. But in his e-mail, seen by Nature, he writes that there is a “train wreck looming” for the field, due to a “storm of doubt” about the robustness of priming results.

Under fire

This scepticism has been fed by failed attempts to replicate classic priming studies, increasing concerns about replicability in psychology more broadly (see 'Bad Copy'), and the exposure of fraudulent social psychologists such as Diederik Stapel, Dirk Smeesters and Lawrence Sanna, who used priming techniques in their work.

“For all these reasons, right or wrong, your field is now the poster child for doubts about the integrity of psychological research,” Kahneman writes. “I believe that you should collectively do something about this mess.”

Kahneman’s chief concern is that graduate students who have conducted priming research may find it difficult to get jobs after being associated with a field that is being visibly questioned.

“Kahneman is a hard man to ignore. I suspect that everybody who got a message from him read it immediately,” says Brian Nosek, a social psychologist at the University of Virginia in Charlottesville.

David Funder, at the University of California, Riverside, and president-elect of the Society for Personality and Social Psychology, worries that the debate about priming has descended into angry defensiveness rather than a scientific discussion about data. “I think the e-mail hits exactly the right tone,” he says. “If this doesn’t work, I don’t know what will.”

Hal Pashler, a cognitive psychologist at the University of California, San Diego, says that several groups, including his own, have already tried to replicate well-known social-priming findings, but have not been able to reproduce any of the effects. “These are quite simple experiments and the replication attempts are well powered, so it is all very puzzling. The field needs to get to the bottom of this, and the quicker the better.”

Chain of replication

To address this problem, Kahneman recommends that established social psychologists set up a “daisy chain” of replications. Each lab would try to repeat a priming effect demonstrated by its neighbour, supervised by someone from the replicated lab. Both parties would record every detail of the methods, commit beforehand to publish the results, and make all data openly available.

Kahneman thinks that such collaborations are necessary because priming effects are subtle, and could be undermined by small experimental changes.

Norbert Schwarz, a social psychologist at the University of Michigan in Ann Arbor who received the e-mail, says that priming studies attract sceptical attention because their results are often surprising, not necessarily because they are scientifically flawed. “There is no empirical evidence that work in this area is more or less replicable than work in other areas,” he says, although the “iconic status” of individual findings has distracted from a larger body of supportive evidence.

“You can think of this as psychology’s version of the climate-change debate,” says Schwarz. “The consensus of the vast majority of psychologists closely familiar with work in this area gets drowned out by claims of a few persistent priming sceptics.”

Still, Schwarz broadly supports Kahneman’s suggestion. “I will participate in such a daisy-chain if the field decides that it is something that should be implemented,” says Schwarz, but not if it is “merely directed at one single area of research”.

Continued in article

 

 

The lack of validation is an enormous problem in accountics science, but the saving grace is that nobody much cares
574 Shields Against Validity Challenges in Plato's Cave --- See Below


Why Even Renowned Scientists Need to Have Their Research Independently Replicated

"Author on leave after Harvard inquiry Investigation of scientist’s work finds evidence of misconduct, prompts retraction by journal," by Carolyn Y. Johnson, The Boston Globe, August 10, 2010 ---
http://www.boston.com/news/education/higher/articles/2010/08/10/author_on_leave_after_harvard_inquiry/

Harvard University psychologist Marc Hauser — a well-known scientist and author of the book “Moral Minds’’ — is taking a year-long leave after a lengthy internal investigation found evidence of scientific misconduct in his laboratory.

The findings have resulted in the retraction of an influential study that he led. “MH accepts responsibility for the error,’’ says the retraction of the study on whether monkeys learn rules, which was published in 2002 in the journal Cognition.

Two other journals say they have been notified of concerns in papers on which Hauser is listed as one of the main authors.

It is unusual for a scientist as prominent as Hauser — a popular professor and eloquent communicator of science whose work has often been featured on television and in newspapers — to be named in an investigation of scientific misconduct. His research focuses on the evolutionary roots of the human mind.

In a letter Hauser wrote this year to some Harvard colleagues, he described the inquiry as painful. The letter, which was shown to the Globe, said that his lab has been under investigation for three years by a Harvard committee, and that evidence of misconduct was found. He alluded to unspecified mistakes and oversights that he had made, and said he will be on leave for the upcoming academic year.

In an e-mail yesterday, Hauser, 50, referred questions to Harvard. Harvard spokesman Jeff Neal declined to comment on Hauser’s case, saying in an e-mail, “Reviews of faculty conduct are considered confidential.’’

“Speaking in general,’’ he wrote, “we follow a well defined and extensive review process. In cases where we find misconduct has occurred, we report, as appropriate, to external agencies (e.g., government funding agencies) and correct any affected scholarly record.’’

Much remains unclear, including why the investigation took so long, the specifics of the misconduct, and whether Hauser’s leave is a punishment for his actions.

The retraction, submitted by Hauser and two co-authors, is to be published in a future issue of Cognition, according to the editor. It says that, “An internal examination at Harvard University . . . found that the data do not support the reported findings. We therefore are retracting this article.’’

The paper tested cotton-top tamarin monkeys’ ability to learn generalized patterns, an ability that human infants had been found to have, and that may be critical for learning language. The paper found that the monkeys were able to learn patterns, suggesting that this was not the critical cognitive building block that explains humans’ ability to learn language. In doing such experiments, researchers videotape the animals to analyze each trial and provide a record of their raw data.

The work was funded by Harvard’s Mind, Brain, and Behavior program, the National Science Foundation, and the National Institutes of Health. Government spokeswomen said they could not confirm or deny whether an investigation was underway.

Gary Marcus, a psychology professor at New York University and one of the co-authors of the paper, said he drafted the introduction and conclusions of the paper, based on data that Hauser collected and analyzed.

“Professor Hauser alerted me that he was concerned about the nature of the data, and suggested that there were problems with the videotape record of the study,’’ Marcus wrote in an e-mail. “I never actually saw the raw data, just his summaries, so I can’t speak to the exact nature of what went wrong.’’

The investigation also raised questions about two other papers co-authored by Hauser. The journal Proceedings of the Royal Society B published a correction last month to a 2007 study. The correction, published after the British journal was notified of the Harvard investigation, said video records and field notes of one of the co-authors were incomplete. Hauser and a colleague redid the three main experiments and the new findings were the same as in the original paper.

Science, a top journal, was notified of the Harvard investigation in late June and told that questions about record-keeping had been raised about a 2007 paper in which Hauser is the senior author, according to Ginger Pinholster, a journal spokeswoman. She said Science has requested Harvard’s report of its investigation and will “move with utmost efficiency in light of the seriousness of issues of this type.’’

Colleagues of Hauser’s at Harvard and other universities have been aware for some time that questions had been raised about some of his research, and they say they are troubled by the investigation and forthcoming retraction in Cognition.

“This retraction creates a quandary for those of us in the field about whether other results are to be trusted as well, especially since there are other papers currently being reconsidered by other journals as well,’’ Michael Tomasello, co-director of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, said in an e-mail. “If scientists can’t trust published papers, the whole process breaks down.’’

This isn’t the first time Hauser’s work has been challenged.

In 1995, he was the lead author of a paper in the Proceedings of the National Academy of Sciences that looked at whether cotton-top tamarins are able to recognize themselves in a mirror. Self-recognition was something that set humans and other primates, such as chimpanzees and orangutans, apart from other animals, and no one had shown that monkeys had this ability.

Gordon G. Gallup Jr., a professor of psychology at State University of New York at Albany, questioned the results and requested videotapes that Hauser had made of the experiment.

“When I played the videotapes, there was not a thread of compelling evidence — scientific or otherwise — that any of the tamarins had learned to correctly decipher mirrored information about themselves,’’ Gallup said in an interview.

In 1997, he co-authored a critique of the original paper, and Hauser and a co-author responded with a defense of the work.

In 2001, in a study in the American Journal of Primatology, Hauser and colleagues reported that they had failed to replicate the results of the previous study. The original paper has never been retracted or corrected.

Continued in article

“There is a difference between breaking the rules and breaking the most sacred of all rules,” said Jonathan Haidt, a moral psychologist at the University of Virginia. The failure to have performed a reported control experiment would be “a very serious and perhaps unforgivable offense,” Dr. Haidt said.

"Harvard Researcher May Have Fabricated Data," by Nicholas Wace, The New York Times, August 27, 2010 ---
http://www.nytimes.com/2010/08/28/science/28harvard.html?_r=1&hpw

Harvard authorities have made available information suggesting that Marc Hauser, a star researcher who was put on leave this month, may have fabricated data in a 2002 paper.

“Given the published design of the experiment, my conclusion is that the control condition was fabricated,” said Gerry Altmann, the editor of the journal Cognition, in which the experiment was published.

Dr. Hauser said he expected to have a statement about the Cognition paper available soon. He issued a statement last week saying he was “deeply sorry” and acknowledged having made “significant mistakes” but did not admit to any scientific misconduct.

Dr. Hauser is a leading expert in comparing animal and human mental processes and recently wrote a well-received book, “Moral Minds,” in which he explored the evolutionary basis of morality. An inquiry into his Harvard lab was opened in 2007 after students felt they were being pushed to reach a particular conclusion that they thought was incorrect. Though the inquiry was completed in January this year, Harvard announced only last week that Dr. Hauser had been required to retract the Cognition article, and it supplied no details about the episode.

On Friday, Dr. Altmann said Michael D. Smith, dean of the Faculty of Arts and Sciences, had given him a summary of the part of the confidential faculty inquiry related to the 2002 experiment, a test of whether monkeys could distinguish algebraic rules.

The summary included a description of a videotape recording the monkeys’ reaction to a test stimulus. Standard practice is to alternate a stimulus with a control condition, but no tests of the control condition are present on the videotape. Dr. Altmann, a psychologist at the University of York in England, said it seemed that the control experiments reported in the article were not performed.

Some forms of scientific error, like poor record keeping or even mistaken results, are forgivable, but fabrication of data, if such a charge were to be proved against Dr. Hauser, is usually followed by expulsion from the scientific community.

“There is a difference between breaking the rules and breaking the most sacred of all rules,” said Jonathan Haidt, a moral psychologist at the University of Virginia. The failure to have performed a reported control experiment would be “a very serious and perhaps unforgivable offense,” Dr. Haidt said.

Dr. Hauser’s case is unusual, however, because of his substantial contributions to the fields of animal cognition and the basis of morality. Dr. Altmann held out the possibility of redemption. “If he were to give a full and frank account of the errors he made, then the process can start of repatriating him into the community in some form,” he said.

Dr. Hauser’s fall from grace, if it occurs, could cast a shadow over several fields of research until Harvard makes clear the exact nature of the problems found in his lab. Last week, Dr. Smith, the Harvard dean, wrote in a letter to the faculty that he had found Dr. Hauser responsible for eight counts of scientific misconduct. He described these in general terms but did not specify fabrication. An oblique sentence in his letter said that the Cognition paper had been retracted because “the data produced in the published experiments did not support the published findings.”

Scientists trying to assess Dr. Hauser’s oeuvre are likely to take into account another issue besides the eight counts of misconduct. In 1995, Dr. Hauser published that cotton-top tamarins, the monkey species he worked with, could recognize themselves in a mirror. The finding was challenged by the psychologist Gordon Gallup, who asked for the videotapes and has said that he could see no evidence in the monkey’s reactions for what Dr. Hauser had reported. Dr. Hauser later wrote in another paper that he could not repeat the finding.

The small size of the field in which Dr. Hauser worked has contributed to the uncertainty. Only a handful of laboratories have primate colonies available for studying cognition, so few if any researchers could check Dr. Hauser’s claims.

“Marc was the only person working on cotton-top tamarins so far as I know,” said Alison Gopnik, a psychologist who studies infant cognition at the University of California, Berkeley. “It’s always a problem in science when we have to depend on one person.”

Many of Dr. Hauser’s experiments involved taking methods used to explore what infants are thinking and applying them to monkeys. In general, he found that the monkeys could do many of the same things as infants. If a substantial part of his work is challenged or doubted, monkeys may turn out to be less smart than recently portrayed.

But his work on morality involved humans and is therefore easier for others to repeat. And much of Dr. Hauser’s morality research has checked out just fine, Dr. Haidt said.

“Hauser has been particularly creative in studying moral psychology in diverse populations, including small-scale societies, patients with brain damage, psychopaths and people with rare genetic disorders that affect their judgments,” he said.

Criticisms of the Doubters: Missing Data is Not Necessarily Scientific Misconduct
"Difficulties in Defining Errors in Case Against Harvard Researcher," by Nicholas Wade, The New York Times, October 25, 2010 ---
http://www.nytimes.com/2010/10/26/science/26hauser.html?_r=1&hpw 

Jensen Comment
Hauser's accusers backed off slightly. It would seem that the best scientific evidence would be for independent researchers to collect new data and try to replicate Hauser's claims.

We must keep in mind that Hauser himself retracted one of his own scientific journal articles.

Why did Harvard take three years on this one?
http://chronicle.com/blogPost/HauserHarvard/26308/

Bob Jensen's threads on Professors Who Cheat are at
http://www.trinity.edu/rjensen/Plagiarism.htm#ProfessorsWhoPlagiarize

 Also see
http://www.trinity.edu/rjensen/TheoryTAR.htm#SocialScience

August 21, 2010 reply from Orenstein, Edith [eorenstein@FINANCIALEXECUTIVES.ORG]

I believe a broad lesson arises from the tale of Professor Hauser's monkey-business:

"It is unusual for a scientist as prominent as Hauser­ - a popular professor and eloquent communicator of science whose work has often been featured on television and in newspapers ­- to be named in an investigation of scientific misconduct."

Disclaimer: this is my personal opinion only, and I believe these lessons apply to all professions, but since this is an accounting listserv, lesson 1 with respect to accounting/auditing research is:

1. even the most prominent, popular, and eloquent communicator professors' research, including but not limited to the field of accounting, and including for purposes of standard-setting, rule-making, et al, should not be above third party review and questioning (that may be the layman's term; the technical term I assume is 'replication'). Although it can be difficult for less prominent, popular, eloquent communicators to raise such challenges, without fear of reprisal, it is important to get as close to the 'truth' or 'truths' as may (or may not) exist. This point applies not only to formal, refereed journals, but non-refereed published research in any form as well.   

 

And, from the world of accounting & auditing practice, (or any job, really), the lesson is the same:

2. even the most prominent, popular, and eloquent communicator(s) - e.g. audit clients....should not be above third party review and questioning; once again, it can be difficult for less prominent, popular, and eloquent communicators (internal or external audit staff, whether junior or senior staff) to raise challenges in the practice of auditing in the field (which is why staffing decisions, supervision, and backbone are so important). And we have seen examples where such challenges were met with reprisal or challenge (e.g. Cynthia Cooper challenging WorldCom's accounting; HealthSouth's Richard Scrushy, the Enron - Andersen saga, etc.)

Additionally, another lesson here, (I repeat this is my personal opinion only) is that in the field of standard-setting or rulemaking, testimony of 'prominent' experts and 'eloquent communicators' should be judged on the basis of substance vs. form, and others (i.e. those who may feel less 'prominent' or 'eloquent') should step up to the plate to offer concurring or counterarguments in verbal or written form (including comment letters) if their experience or thought process leads them to the same conclusion as the more 'prominent' or 'eloquent' speakers/writers - or in particular, if it leads them to another view.

I wonder sometimes, particularly in public hearings, if individuals testifying believe there is implied pressure to say what one thinks the sponsor of the hearing expects or wants to hear, vs. challenging the status quo, particular proposed changes, etc., particularly if they may fear reprisal. Once again, it is important to provide the facts as one sees them, and it is about substance vs. form; sometimes difficult to achieve.

Edith Orenstein
www.financialexecutives.org/blog   

"Harvard Clarifies Wrongdoing by Professor," Inside Higher Ed, August 23, 2010 ---
http://www.insidehighered.com/news/2010/08/23/qt#236200

Harvard University announced Friday that its investigations had found eight incidents of scientific misconduct by Marc Hauser, a prominent psychology professor who recently started a leave, The Boston Globe reported. The university also indicated that sanctions had been imposed, and that Hauser would be teaching again after a year. Since the Globe reported on Hauser's leave and the inquiry into his work, many scientists have called for a statement by the university on what happened, and Friday's announcement goes much further than earlier statements. In a statement sent to colleagues on Friday, Hauser said: "I am deeply sorry for the problems this case has caused to my students, my colleagues, and my university. I acknowledge that I made some significant mistakes and I am deeply disappointed that this has led to a retraction and two corrections. I also feel terrible about the concerns regarding the other five cases."

Why did Harvard take three years on this one?
http://chronicle.com/blogPost/HauserHarvard/26308/

Bob Jensen's threads on this cheating scandal are at
http://www.trinity.edu/rjensen/TheoryTAR.htm#SocialScience

Bob Jensen's threads on Professors Who Cheat are at
http://www.trinity.edu/rjensen/Plagiarism.htm#ProfessorsWhoPlagiarize

 


Fabricated Data at Least 145 times
"UConn Investigation Finds That Health Researcher Fabricated Data." by Tom Bartlett, Inside Higher Ed, January 11, 2012 ---
http://chronicle.com/blogs/percolator/uconn-investigation-finds-that-health-researcher-fabricated-data/28291

Jensen Comment
I knew of a few instances of plagiarism, but not once has it been discovered that an accountics scientist fabricated data. This could, however, be due to accountics scientists shielding each other from validity testing ---
http://www.trinity.edu/rjensen/TheoryTAR.htm


National Center for Case Study Teaching in Science --- http://sciencecases.lib.buffalo.edu/cs/


August 10, 2010 reply from Jagdish Gangolly [gangolly@CSC.ALBANY.EDU]

Bob,

This is a classic example that shows how difficult it is to escape accountability in science. When Gordon Gallup, a colleague in our Bio-Psychology department in Albany, questioned the results, Hauser at first tried to get away with a reply because Albany is not Harvard. But then, when Hauser could not replicate the experiment, he had no choice but to confess, unless he was willing to be caught some time in the future with his pants down.

However, in a sneaky way, the confession was sent by Hauser to a different journal. But Hauser at least had the gumption to confess.

The lesson I learn from this episode is to do something like what lawyers always do in research. They call it Shepardizing. It is important not to take any journal article at its face value, even if the thing is in a journal as well known as PNAS and by a person from a school as well known as Harvard. The other lesson is not to ignore a work or criticism even if it appears in a lesser known journal and is by an author from a lesser known school (as in Albany in this case).

Jagdish

Jagdish Gangolly
(gangolly@albany.edu)
Department of Informatics College of Computing & Information
State University of New York at Albany 7A, Harriman Campus Road, Suite 220 Albany, NY 12206

August 10, 2010 message from Paul Williams [Paul_Williams@NCSU.EDU]

Bob and Jagdish,
This also illustrates the necessity of keeping records of experiments. How odd that accounting researchers cannot see the necessity of "keeping a journal!!!"

"Document Sheds Light on Investigation at Harvard," by Tom Bartlett, Chronicle of Higher Education, August 19, 2010 ---
http://chronicle.com/article/Document-Sheds-Light-on/123988/

Ever since word got out that a prominent Harvard University researcher was on leave after an investigation into academic wrongdoing, a key question has remained unanswered: What, exactly, did he do?

The researcher himself, Marc D. Hauser, isn't talking. The usually quotable Mr. Hauser, a psychology professor and director of Harvard's Cognitive Evolution Laboratory, is the author of Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong (Ecco, 2006) and is at work on a forthcoming book titled "Evilicious: Why We Evolved a Taste for Being Bad." He has been voted one of the university's most popular professors.

Harvard has also been taciturn. The public-affairs office did issue a brief written statement last week saying that the university "has taken steps to ensure that the scientific record is corrected in relation to three articles co-authored by Dr. Hauser." So far, Harvard officials haven't provided details about the problems with those papers. Were they merely errors or something worse?

An internal document, however, sheds light on what was going on in Mr. Hauser's lab. It tells the story of how research assistants became convinced that the professor was reporting bogus data and how he aggressively pushed back against those who questioned his findings or asked for verification.

A copy of the document was provided to The Chronicle by a former research assistant in the lab who has since left psychology. The document is the statement he gave to Harvard investigators in 2007.

The former research assistant, who provided the document on condition of anonymity, said his motivation in coming forward was to make it clear that it was solely Mr. Hauser who was responsible for the problems he observed. The former research assistant also hoped that more information might help other researchers make sense of the allegations.

It was one experiment in particular that led members of Mr. Hauser's lab to become suspicious of his research and, in the end, to report their concerns about the professor to Harvard administrators.

The experiment tested the ability of rhesus monkeys to recognize sound patterns. Researchers played a series of three tones (in a pattern like A-B-A) over a sound system. After establishing the pattern, they would vary it (for instance, A-B-B) and see whether the monkeys were aware of the change. If a monkey looked at the speaker, this was taken as an indication that a difference was noticed.

The method has been used in experiments on primates and human infants. Mr. Hauser has long worked on studies that seemed to show that primates, like rhesus monkeys or cotton-top tamarins, can recognize patterns as well as human infants do. Such pattern recognition is thought to be a component of language acquisition.

Researchers watched videotapes of the experiments and "coded" the results, meaning that they wrote down how the monkeys reacted. As was common practice, two researchers independently coded the results so that their findings could later be compared to eliminate errors or bias.

According to the document that was provided to The Chronicle, the experiment in question was coded by Mr. Hauser and a research assistant in his laboratory. A second research assistant was asked by Mr. Hauser to analyze the results. When the second research assistant analyzed the first research assistant's codes, he found that the monkeys didn't seem to notice the change in pattern. In fact, they looked at the speaker more often when the pattern was the same. In other words, the experiment was a bust.
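
For readers unfamiliar with how such trial-by-trial codings are normally compared, the sketch below (with made-up codes, not the Hauser lab's data) computes raw percent agreement and Cohen's kappa, the usual chance-corrected measure of inter-coder reliability. A discrepancy like the one described above would show up as a low kappa and would ordinarily call for exactly the third coder the research assistants requested.

from collections import Counter

# Hypothetical trial-by-trial codes (1 = "the monkey looked toward the speaker").
coder_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
coder_b = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1]

def cohens_kappa(a, b):
    """Return (raw percent agreement, Cohen's kappa) for two equal-length code lists."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    counts_a, counts_b = Counter(a), Counter(b)
    # Agreement expected by chance if each coder assigned labels independently.
    expected = sum((counts_a[k] / n) * (counts_b[k] / n) for k in set(a) | set(b))
    return observed, (observed - expected) / (1 - expected)

agreement, kappa = cohens_kappa(coder_a, coder_b)
print(f"percent agreement = {agreement:.2f}, Cohen's kappa = {kappa:.2f}")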

But Mr. Hauser's coding showed something else entirely: He found that the monkeys did notice the change in pattern—and, according to his numbers, the results were statistically significant. If his coding was right, the experiment was a big success.

The second research assistant was bothered by the discrepancy. How could two researchers watching the same videotapes arrive at such different conclusions? He suggested to Mr. Hauser that a third researcher should code the results. In an e-mail message to Mr. Hauser, a copy of which was provided to The Chronicle, the research assistant who analyzed the numbers explained his concern. "I don't feel comfortable analyzing results/publishing data with that kind of skew until we can verify that with a third coder," he wrote.

A graduate student agreed with the research assistant and joined him in pressing Mr. Hauser to allow the results to be checked, the document given to The Chronicle indicates. But Mr. Hauser resisted, repeatedly arguing against having a third researcher code the videotapes and writing that they should simply go with the data as he had already coded it. After several back-and-forths, it became plain that the professor was annoyed.

"i am getting a bit pissed here," Mr. Hauser wrote in an e-mail to one research assistant. "there were no inconsistencies! let me repeat what happened. i coded everything. then [a research assistant] coded all the trials highlighted in yellow. we only had one trial that didn't agree. i then mistakenly told [another research assistant] to look at column B when he should have looked at column D. ... we need to resolve this because i am not sure why we are going in circles."

The research assistant who analyzed the data and the graduate student decided to review the tapes themselves, without Mr. Hauser's permission, the document says. They each coded the results independently. Their findings concurred with the conclusion that the experiment had failed: The monkeys didn't appear to react to the change in patterns.

They then reviewed Mr. Hauser's coding and, according to the research assistant's statement, discovered that what he had written down bore little relation to what they had actually observed on the videotapes. He would, for instance, mark that a monkey had turned its head when the monkey didn't so much as flinch. It wasn't simply a case of differing interpretations, they believed: His data were just completely wrong.

As word of the problem with the experiment spread, several other lab members revealed they had had similar run-ins with Mr. Hauser, the former research assistant says. This wasn't the first time something like this had happened. There was, several researchers in the lab believed, a pattern in which Mr. Hauser reported false data and then insisted that it be used.

They brought their evidence to the university's ombudsman and, later, to the dean's office. This set in motion an investigation that would lead to Mr. Hauser's lab being raided by the university in the fall of 2007 to collect evidence. It wasn't until this year, however, that the investigation was completed. It found problems with at least three papers. Because Mr. Hauser has received federal grant money, the report has most likely been turned over to the Office of Research Integrity at the U.S. Department of Health and Human Services.

The research that was the catalyst for the inquiry ended up being tabled, but only after additional problems were found with the data. In a statement to Harvard officials in 2007, the research assistant who instigated what became a revolt among junior members of the lab, outlined his larger concerns: "The most disconcerting part of the whole experience to me was the feeling that Marc was using his position of authority to force us to accept sloppy (at best) science."

Also see http://chronicle.com/blogPost/Harvard-Confirms-Hausergate/26198/


The Insignificance of Testing the Null

October 1, 2010 message from Amy Dunbar

Nick Cox posted a link to a statistics paper on statalist:

2009. Statistics: reasoning on uncertainty, and the insignificance of testing null. Annales Zoologici Fennici 46: 138-157.

http://www.sekj.org/PDF/anz46-free/anz46-138.pdf

Cox commented that the paper touches provocatively on several topics often aired on statalist including the uselessness of dynamite or detonator plots, displays for comparing group means and especially the over-use of null hypothesis testing. The main target audience is ecologists but most of the issues cut across statistical science.

Dunbar comment: The paper would be a great addition to any PhD research seminar. The author also has some suggestions for journal editors. I included some responses to Nick's original post below.

"Statistics: reasoning on uncertainty, and the insignificance of testing null," by Esa Läärä
Ann. Zool. Fennici 46: 138–157
ISSN 0003-455X (print), ISSN 1797-2450 (online)
Helsinki 30 April 2009 © Finnish Zoological and Botanical Publishing Board 2009
http://www.sekj.org/PDF/anz46-free/anz46-138.pdf

The practice of statistical analysis and inference in ecology is critically reviewed. The dominant doctrine of null hypothesis significance testing (NHST) continues to be applied ritualistically and mindlessly. This dogma is based on superficial understanding of elementary notions of frequentist statistics in the 1930s, and is widely disseminated by influential textbooks targeted at biologists. It is characterized by silly null hypotheses and mechanical dichotomous division of results being “significant” (P < 0.05) or not. Simple examples are given to demonstrate how distant the prevalent NHST malpractice is from the current mainstream practice of professional statisticians. Masses of trivial and meaningless “results” are being reported, which are not providing adequate quantitative information of scientific interest. The NHST dogma also retards progress in the understanding of ecological systems and the effects of management programmes, which may at worst contribute to damaging decisions in conservation biology. In the beginning of this millennium, critical discussion and debate on the problems and shortcomings of NHST has intensified in ecological journals. Alternative approaches, like basic point and interval estimation of effect sizes, likelihood-based and information theoretic methods, and the Bayesian inferential paradigm, have started to receive attention. Much is still to be done in efforts to improve statistical thinking and reasoning of ecologists and in training them to utilize appropriately the expanded statistical toolbox. Ecologists should finally abandon the false doctrines and textbooks of their previous statistical gurus. Instead they should more carefully learn what leading statisticians write and say, collaborate with statisticians in teaching, research, and editorial work in journals.

 

Jensen Comment
And to think Alpha (Type 1) error is the easy part. Does anybody ever test for the more important Beta (Type 2) error? I think some engineers test for Type 2 error with Operating Characteristic (OC) curves, but these are generally applied where controlled experiments are super controlled such as in quality control testing.

Beta Error --- http://en.wikipedia.org/wiki/Beta_error#Type_II_error
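
For what it is worth, a Beta (Type 2) error rate is easy to estimate by simulation rather than ignore. The sketch below uses illustrative numbers (effect size 0.4, thirty subjects per group, alpha of .05), not values from any accounting study: it draws repeated samples in which a real effect exists, runs an ordinary two-sample t-test, and counts how often the test fails to detect the effect. Power is simply one minus that miss rate.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha, effect_size, n_per_group, reps = 0.05, 0.4, 30, 5000   # illustrative choices

misses = 0
for _ in range(reps):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(effect_size, 1.0, n_per_group)   # a real effect is present
    _, p_value = stats.ttest_ind(treatment, control)
    if p_value >= alpha:                                     # the test missed the real effect
        misses += 1

beta = misses / reps
print(f"estimated Beta (Type 2 error) = {beta:.2f}, power = {1 - beta:.2f}")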

 


THE GENERAL SOCIAL SURVEY --- http://www.sociology.ohio-state.edu/dbd/Weakley.html

The creator of the General Social Survey (GSS), the National Opinion Research Center (NORC), was established in 1941. It is the oldest national research facility in the nation that is neither for-profit nor university-affiliated. The NORC uses a national probability sample by using government census information. The GSS was first administered in 1972, and uses personal interview information of US households. As stated on the GSS webpage, "The mission of the GSS is to make timely, high-quality, scientifically relevant data available to the social science research community" (Internet, 2000).

The NORC prides itself on the GSS’s broad coverage, its use of replication, its cross-national perspective, and its attention to data quality. The survey is, as its name explicitly states, general. The multitude of topics and interests makes the GSS a fine tool for the diversity of contemporary social science research. Replication is an important component of the GSS. With the repetition of items and item sequences over time, research can be accomplished that analyzes changes or stability over time. Since 1982, NORC has had international collaborations with other research groups. Through the insight of leading specialists and a "rotating committee of distinguished social scientists," the GSS attempts to follow the highest survey standards in design, sampling, interviewing, processing, and documentation.

Continued in article

"Using Replication to Help Inform Decisions about Scale-up: Three Quasi-experiments on a Middle School Unit on Motion and Forces," by Bill Watson,  Curtis Pyke, Sharon Lynch, and Rob Ochsendorf,  The George Washington University, 2008 ---
http://www.gwu.edu/~scale-up/documents/NARST 2007 - Using Replication to Inform Decisions about S..pdf

Research programs that include experiments are becoming increasingly important in science education as a means through which to develop a sound and convincing empirical basis for understanding the effects of interventions and making evidence-based decisions about their scale-up in diverse settings. True experiments, which are characterized by the random assignment of members of a population to a treatment or a control group, are considered the “gold standard” in education research because they reduce the differences between groups to only random variation and the presence (or absence) of the treatment (Subotnik & Walberg, 2006).

For researchers, these conditions increase the likelihood that two samples drawn from the same population are comparable to each other and to the population, thereby increasing confidence in causal inferences about effectiveness (Cook & Campbell, 1979). For practitioners, those making decisions about curriculum and instruction in schools, the Institute for Educational Sciences at the US Department of Education (USDOE) suggests that only studies with randomization be considered as “strong evidence” or “possible evidence” of an intervention’s effectiveness (Institute for Educational Sciences, 2006).

Quasi-experiments are also a practical and valid means for the evaluation of interventions when a true experiment is impractical due to the presence of natural groups, such as classes and schools, within which students are clustered (Subotnik & Walberg, 2006). In these circumstances, a quasi-experiment that includes careful sampling (e.g., random selection of schools), a priori assignment of matched pairs to a treatment or control group, and/or a pretest used to control for any remaining group differences can often come close to providing the rigor of a true experiment (Subotnik & Walberg, 2006). However, there are inherent threats to internal validity in quasi-experimental designs that the researcher must take care to address with supplemental data. Systematic variation introduced through the clustering of subjects that occurs in quasi-experiments can compete with the intervention studied as a cause of differences observed.
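
The role of the pretest mentioned above can be seen in a small synthetic example (the numbers below are invented for illustration, not data from the studies reviewed here): when the treated group starts out stronger, the raw posttest difference overstates the intervention's effect, whereas regressing the posttest on a treatment indicator plus the pretest recovers something much closer to the effect built into the data.

import numpy as np

rng = np.random.default_rng(7)
n = 150
true_effect = 3.0   # the effect built into the simulated data

# Non-random assignment: the treated group starts out stronger on the pretest.
pre_treat = rng.normal(55, 10, n)
pre_control = rng.normal(50, 10, n)
post_treat = 5 + 0.9 * pre_treat + true_effect + rng.normal(0, 5, n)
post_control = 5 + 0.9 * pre_control + rng.normal(0, 5, n)

raw_gap = post_treat.mean() - post_control.mean()

# ANCOVA-style adjustment: regress the posttest on a treatment indicator plus the pretest.
post = np.concatenate([post_treat, post_control])
pre = np.concatenate([pre_treat, pre_control])
treated = np.concatenate([np.ones(n), np.zeros(n)])
X = np.column_stack([np.ones(2 * n), treated, pre])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)

print(f"raw posttest gap: {raw_gap:.1f} (inflated by the pre-existing difference)")
print(f"pretest-adjusted estimate: {beta[1]:.1f} (true effect = {true_effect})")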

Replications of quasi-experiments can provide opportunities to adjust procedures to address some threats to the internal validity of quasi-experiments and can study new samples to address external validity concerns. Replications can take many forms and serve a multitude of purposes (e.g., Hendrick, 1990; Kline, 2003). Intuitively, a thoughtful choice of replication of a quasi-experimental design can produce a new and improved result or increase the confidence researchers have in the presence of a treatment effect found in an initial study. Therefore, replication can be important in establishing the effectiveness of an intervention when it fosters a sense of robustness in results or enhances the generalizability of findings from stand-alone studies (Cohen, 1994; Robinson & Levin, 1997).

This paper presents data to show the utility of combining a high-quality quasi-experimental design with multiple replications in school-based scale-up research. Scale-up research is research charged with producing evidence to inform scale-up decisions: decisions regarding which innovations can be expected to be effective for all students in a range of school contexts and settings – “what works best, for whom, and under what conditions” (Brown, McDonald, & Schneider, 2006, p. 1). Scaling-up by definition is the introduction of interventions whose efficacy has been established in one context into new settings, with the goal of producing similarly positive impacts in larger, frequently more diverse, populations (Brown et al., 2006).

Using Replication
Our work shows that a good first step in scaling-up an intervention is a series of experiments or quasi-experiments at small scale.

Replication in Educational Research
Quasi-experiments are often the most practical research design for an educational field study, including scale-up studies used to evaluate whether or not an intervention is worth taking to scale. However, because they are not true experiments and therefore do not achieve true randomization, the possibility for systematic error to occur is always present, and, with it, the risk of threats to the internal and external validity of the study. For the purposes of this discussion, we consider internal validity to be “the validity with which statements can be made about whether there is a causal relationship from one variable to another in the form in which the variables were manipulated or measured” (Cook & Campbell, 1979, p. 38).

External validity refers to “the approximate validity with which conclusions are drawn about the generalizability of a causal relationship to and across populations of persons, settings, and times” (Cook & Campbell, 1979). Unlike replications with experimental designs, which almost always add to the efficacy of a sound result, the replication of a quasi-experiment may not have an inherent value if the potential threats to validity found in the initial study are not addressed.

Replication: Frameworks
In social science research, replication of research has traditionally been understood to be a process in which different researchers repeat a study’s methods independently with different subjects in different sites and at different times with the goal of achieving the same results and increasing the generalizability of findings (Meline & Paradiso, 2003; Thompson, 1996).

However, the process of replication in social science research in field settings is considerably more nuanced than this definition might suggest. In field settings, both the intervention and experimental procedures can be influenced by the local context and sample in ways that change the nature of the intervention or the experiment, or both from one experiment to another. Before conducting a replication, an astute researcher must therefore ask: In what context, with what kinds of subjects, and by which researchers will the replication be conducted? (Rosenthal, 1990).

The purpose of the replication must also be considered: Is the researcher interested in making adjustments to the study procedures or intervention to increase the internal validity of findings or will the sampling be adjusted to enhance the external validity of initial results?

A broader view of replication of field-based quasi-experiments might enable classification of different types according the multiple purposes for replication when conducting research in schools. Hendrick (1990) proposed four kinds of replication that take into account the procedural variables associated with a study and contextual variables (e.g., subject characteristics, physical setting). Hendrick’s taxonomy proposes that an exact replication adheres as closely as possible to the original variables and processes in order to replicate results.

A partial replication varies some aspects of either the contextual or procedural variables, and a conceptual replication radically departs from one or more of the procedural variables. Hendrick argued for a fourth type of replication, systematic replication, which includes first a strict replication and then either a partial or conceptual replication to isolate the original effect and explore the intervention when new variables are considered.

Rosenthal (1990) referred to such a succession of replications as a replication battery: "The simplest form of replication battery requires two replications of the original study: one of these replications is as similar as we can make it to the original study, the other is at least moderately dissimilar to the original study" (p. 6). Rosenthal (1990) argued that if the same results were obtained with similar but not exact quasi-experimental procedures, internal validity would be increased because differences between groups could more likely be attributed to the intervention of interest and not to experimental procedures. Further, even if one of the replications is of poorer quality than the others, Rosenthal argued for its consideration in determining the overall effect of the intervention, albeit with less weight than more rigorous (presumably internally valid) replications. More recently, Kline (2003) also distinguished among several types of replication according to the different research purposes they address. For example, Kline’s operational replications are like Hendrick’s (1990) exact replication: the sampling and experimental methods of the original study are repeated to test whether results can be duplicated. Balanced replications are akin to partial and conceptual replications in that they appear to address the limitations of quasi-experiments by manipulating additional variables to rule out competing explanations for results.

In a recent call for replication of studies in educational research, Schneider (2004) also suggested a degree of flexibility in replication, describing the process as "conducting an investigation repeatedly with comparable subjects and conditions" (p. 1473) while also suggesting that it might include making "controllable changes" to an intervention as part of its replication. Schneider’s (2004) notion of controllable changes, Kline’s (2003) description of balanced replication, Hendrick’s (1990) systematic replication, and Rosenthal’s (1990) argument in favor of the replication battery all suggest that a series of replications taken together can provide important information about an intervention’s effectiveness beyond a single quasi-experiment.

Replication: Addressing Threats to Internal Validity
When multiple quasi-experiments (i.e., replications) are conducted with adjustments, the threats to internal validity inherent in quasi-experimentation might be more fully addressed (Cook & Campbell, 1979). Although changing quasi-experiments in the process of replicating them might decrease confidence in the external validity of an initial study finding, when a replication battery is considered, a set of studies might provide externally valid data to contribute to decision making within and beyond a particular school district. The particular threats to internal validity germane to the studies reported in this paper are those associated with the untreated control group design with pretest and posttest (Cook & Campbell, 1979). This classic and widely implemented quasi-experimental design features an observation of participants in two non-randomly assigned groups before and after one of the groups receives treatment with an intervention of interest.

The internal validity of a study or set of studies ultimately depends on the confidence that the researcher has that differences between groups are caused by the intervention of interest (Cook & Campbell, 1979). Cook and Campbell (1979) provided considerable detail about threats to internal validity in quasi-experimentation that could reduce confidence in claims of causality (p. 37-94). However, they concluded that the untreated control group design with pretest and posttest usually controls for all but four threats to internal validity: selection-maturation, instrumentation, differential regression to the mean, and local history. Table 1 briefly describes each of these threats. In addition, they are not mutually exclusive. In a study of the effectiveness of curriculum materials, for example, the extent to which the researchers are confident differential regression to the mean is not a threat relies upon their confidence that sampling methods have produced two samples similar on performance and demographic variables (selection-maturation) and that the assessment instrument has similar characteristics for all subjects (instrumentation). Cook and Campbell (1979) suggest that replication plays a role in establishing external validity by presenting the simplest case: an exact replication (Hendrick, 1990) of a quasi-experiment in which results are corroborated and confidence in internal validity is high.

However, we argue that the relationship between replication and validity is more complex, given the multiple combinations of outcomes that are possible when different kinds of replications are conducted. Two dimensions of replication seem particularly important. The first is the consistency of results across replication. The second is whether a replication addresses internal validity threats that were not addressed in a previous study (i.e., it improves upon the study) or informs the interpretation of the presence or absence of threats in a prior study (i.e., it enhances interpretation of the study).

In an exact replication, results can either be the same as or different from results in the original quasi-experiment. If results are different, it seems reasonable to suggest that some element of the local history - perhaps schools, teachers, or a cohort of students - could have an effect on the outcomes, in addition to (or instead of) the effect of an intervention. A partial replication therefore seems warranted to adjust the quasi-experimental procedures to address the threats. A partial replication would also be appropriate if the results are the same, but the researchers do not have confidence that threats to internal validity have been adequately addressed. Indeed, conducting partial replications in either of these scenarios is consistent with the recommendation of Hendrick (1990) to consider results from a set of replications when attempting to determine the effectiveness of an intervention.

Addressing threats to validity with partial replication is, in turn, not a straightforward process. What if results of a partial replication of a quasi-experiment are not the same as those found in either the original quasi-experiment or its exact replication? If the partial replication addresses a threat to internal validity where the original quasi-experiment or its exact replication did not, then the partial replication improves upon the study, and its results might be considered the most robust. If threats to internal validity are still not adequately addressed in the partial replication, the researcher must explore relationships between all combinations of the quasi-experiments.

Alternatively, if the partial replication provides data that help to address threats to the internal validity of the original quasi-experiment or its exact replication, then the partial replication enhances interpretation of the original study, and its results might be considered with the results of the previous study.

Figure 1 provides a possible decision tree for researchers faced with data from a quasi-experiment and an exact replication. Because multiple replications of quasi-experiments in educational research are rare, Figure 1 is more an exercise in logic than a decision matrix supported by data produced in a series of actual replication batteries. However, the procedures and results described in this paper will provide data generated from a series of quasi-experiments with practical consequences for the scale-up of a set of curriculum materials in a large, suburban school district. We hope to support the logic of Figure 1 by applying it to the example to which we now turn.

Continued in article

 

"Internal and External Validity in Economics Research: Tradeoffs between Experiments, Field Experiments, Natural Experiments and Field Data," by Brian E. Roe and David R. Just, 2009 Proceedings Issue, American Journal of Agricultural Economics --- http://aede.osu.edu/people/roe.30/Roe_Just_AJAE09.pdf

Abstract: In the realm of empirical research, investigators are first and foremost concerned with the validity of their results, but validity is a multi-dimensional ideal. In this article we discuss two key dimensions of validity – internal and external validity – and underscore the natural tension that arises in choosing a research approach to maximize both types of validity. We propose that the most common approaches to empirical research – the use of naturally-occurring field/market data and the use of laboratory experiments – fall on the ends of a spectrum of research approaches, and that the interior of this spectrum includes intermediary approaches such as field experiments and natural experiments. Furthermore, we argue that choosing between lab experiments and field data usually requires a tradeoff between the pursuit of internal and external validity. Movements toward the interior of the spectrum can often ease the tension between internal and external validity but are also accompanied by other important limitations, such as less control over subject matter or topic areas and a reduced ability for others to replicate research. Finally, we highlight recent attempts to modify and mix research approaches in a way that eases the natural conflict between internal and external validity and discuss if employing multiple methods leads to economies of scope in research costs.

 

"What is the value of replicating other studies?" Park, C. L., Evaluation Research,13, 3, 2004. 189-195 ---
http://auspace.athabascau.ca:8080/dspace/handle/2149/1327

In response to a question on the value of replication in social science research, the author undertook a search of the literature for expert advice on the value of such an activity. Using the information gleaned and the personal experience of attempting to replicate the research of a colleague, the conclusion was drawn that replication has great value but little ‘real life’ application in the true sense. The activity itself, regardless of the degree of precision of the replication, can have great merit in extending understanding about a method or a concept.
URI: http://hdl.handle.net/2149/1327 

Sometimes experimental outcomes impounded for years in textbooks become viewed as "laws" by students, professors, and consultants. One example is the Hawthorne Effect, impounded into psychology and management textbooks for more than 50 years --- http://en.wikipedia.org/wiki/Hawthorne_Effect

But Steven Levitt and John List, two economists at the University of Chicago, discovered that the data had survived the decades in two archives in Milwaukee and Boston, and decided to subject them to econometric analysis. The Hawthorne experiments had another surprise in store for them. Contrary to the descriptions in the literature, they found no systematic evidence that levels of productivity in the factory rose whenever changes in lighting were implemented.
"Light work," The Economist, June 4, 2009, Page 74 ---
http://www.economist.com/finance/displaystory.cfm?story_id=13788427
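A toy sketch of the kind of test involved (mine, with simulated data; not Levitt and List's actual specification): regress weekly output on an indicator for the week following a lighting change and ask whether the estimated jump differs from zero.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_weeks = 200
lighting_change = rng.binomial(1, 0.1, n_weeks)   # weeks in which the lighting was altered
post_change = np.roll(lighting_change, 1)          # indicator for the week just after a change
post_change[0] = 0
output = 100 + rng.normal(0, 5, n_weeks)           # simulated output with no true Hawthorne effect

X = sm.add_constant(post_change)
fit = sm.OLS(output, X).fit()
print(fit.params, fit.pvalues)   # the coefficient on post_change should be indistinguishable from zero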

 

Revisiting a Research Study After 70 Years
"Thurstone's Crime Scale Re-Visited." by Mark H. Stone, Popular Measurement, Spring 2000 ---
http://www.rasch.org/pm/pm3-53.pdf


A new one from my old behavioral accounting friend Jake
"Is Neuroaccounting Waiting in the Wings?" Jacob G. Birnberg and Ananda R. Ganguly, SSRN, February 10 ,2011 ---
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1759460

Abstract:
This paper reviews a recently published handbook on neuroeconomics (Glimcher et al. 2009H) and extends the discussion to reasons why this newly emerging discipline should be of interest to behavioral accounting researchers. We evaluate the achieved and potential contribution of neuroeconomics to the study of human economic behavior, and examine what behavioral accountants can learn from neuroeconomics and whether we should expect to see a similar sub-field emerge within behavioral accounting in the near future. We conclude that while a separate sub-field within behavioral accounting is not likely in the near future due mostly to practical reasons, the behavioral accounting researcher would do well to follow this discipline closely, and behavioral accountants are likely to collaborate with neuroeconomists when feasible to examine questions of mutual interest.

Keywords: Neuroeconomics, Neuroaccounting, Behavioral Accounting

Jensen Comment
This ties in somewhat with the work of John Dickhaut ---
http://www.neuroeconomics.org/dickhaut-memorial/in-memory-of-john-dickhaut

The lead article in the November 2009 issue of The Accounting Review is like a blue plate special that differs greatly from the usual accountics offerings on the TAR menu over the past four decades. TAR does not usually publish case studies, field studies, or theory papers or commentaries or conjectures that do not qualify as research on testable hypotheses or analytical mathematics. But the November 2009 lead article by John Dickhaut is an exception.

Before reading the TAR tidbit below you should perhaps read a bit about John Dickhaut at the University of Minnesota, apart from the fact that he's an old guy of my vintage with new ideas that somehow leapt out of the accountics publishing shackles that typically restrain creative ideas and "search" apart from "research."

"Gambling on Trust:  John Dickhaut uses "neuroeconomics" to study how people make decisions," OVPR, University of Minnesota --- 

On the surface, it's obvious that trust makes the economic world go round. A worker trusts that he or she will get paid at the end of the week. Investors trust that earnings reports are based on fact, not fiction. Back in the mid-1700s, Adam Smith-the father of economics-built portions of his theories on this principle, which he termed "sympathy." In the years since then, economists and other thinkers have developed hundreds of further insights into the ways that people and economies function. But what if Adam Smith was wrong about sympathy?

Professor John Dickhaut of the Carlson School of Management's accounting department is one of a growing number of researchers who uses verifiable laboratory techniques to put principles like this one to the test. "I'm interested in how people make choices and how these choices affect the economy," says Dickhaut. A decade ago, he and his colleagues developed the trust game, an experiment that tracks trust levels in financial situations between strangers. "The trust game mimics real-world situations," he says.
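For readers unfamiliar with it, the mechanics of one round of the Berg-Dickhaut-McCabe trust game are simple enough to sketch (the tripling of the amount sent follows the original design; the specific dollar amounts below are purely illustrative):

# Sketch of one round of the trust game: the investor sends some portion of an
# endowment, the amount sent is multiplied (tripled in the original design), and
# the trustee decides how much of the multiplied amount to return.
def trust_game_round(endowment, amount_sent, amount_returned, multiplier=3):
    assert 0 <= amount_sent <= endowment
    pot = multiplier * amount_sent
    assert 0 <= amount_returned <= pot
    investor_payoff = endowment - amount_sent + amount_returned
    trustee_payoff = pot - amount_returned
    return investor_payoff, trustee_payoff

# A purely self-interested trustee returns nothing, so a purely self-interested
# investor should send nothing; in the lab, most people send and return real money.
print(trust_game_round(endowment=10, amount_sent=5, amount_returned=6))   # (11, 9)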

Luckily for modern economics-and for anyone planning an investment-Dickhaut's modern-day scientific methods verify Adam Smith's insight. People tend to err on the side of trust rather than mistrust-they are more likely to be a little generous than a little bit stingy. In fact, a basic tendency to be trusting and to reward trustworthy behavior may be a norm of human behavior, upon which the laws of society are built. And that's just the beginning of what the trust game and the field of experimental economics can teach us.

Trust around the world

Since Dickhaut and his co-authors first published the results of their research, the trust game has traveled from the Carlson School at the University of Minnesota all the way to Russia, China, and France. It's tested gender differences and other variations.

"It's an experiment that bred a cottage industry," says Dickhaut. Because the trust game has proved so reliable, researchers now use it to explore new areas. George Mason University's Vernon Smith, 2002 Nobel Laureate for his work in experimental economics, used the trust game in some of his path-breaking work. University of Minnesota researcher and Dickhaut co-author Aldo Rustichini is discovering that people's moods can be altered in the trust games so that participants become increasingly organized in their behavior, as if this can impact the outcome. This happens after the participants are repeatedly put in situations where their trust has been violated.

Although it's too soon to be certain, such research could reveal why people respond to troubled times by tightening up regulations or imposing new ones, such as Sarbanes-Oxley. This new research suggests that calls for tighter rules may reveal more about the brain than reduce chaos in the world of finance.

Researchers who study the brain during economic transactions, or neuroeconomists, scanned the brains of trust game players in labs across the country to discover the parts of the brain that "light up" during decision-making. Already, neuroeconomists have discovered that the section of the brain investors use when making a risky investment, like in the New York Stock Exchange, is different than the one used when they invest in a less risky alternative, like a U.S. Treasury bill.

"People don't lay out a complete decision tree every time they make a choice," Dickhaut says. Understanding the part of the brain accessed during various situations may help to uncover the regulatory structures that would be most effective-since people think of different types of investments so differently, they might react to rules in different ways as well. Such knowledge might also point to why behaviors differ when faced with long- or short-term gains.

Dickhaut's original paper, "Trust, Reciprocity, and Social History," is still a hit. Despite an original publication date of 1995, the paper recently ranked first in ScienceDirect's top 25 downloads from the journal Games and Economic Behavior.

Risky business

Dickhaut hasn't spent the past 10 years resting on his laurels. Instead, he's challenged long-held beliefs with startling new data. In his latest research, Dickhaut and his coauthors create lab tests that mimic eBay-style auctions, bidding contests for major public works projects, and other types of auctions. The results may be surprising.

"People don't appear to take risks based on some general assessment of whether they're risk-seeking or risk-averse," says Dickhaut. In other words, it's easy to make faulty assumptions about how a person will respond to risk. Even people who test as risk-averse might be willing to make a risky gamble in a certain type of auction.

This research could turn the evaluation of risk aversion upside down. Insurance company questionnaires are meant to evaluate how risky a prospective client's behavior might be. In fact, the questionnaires could simply reveal how a person answers a certain kind of question, not how he or she would behave when faced with a risky proposition.

Bubble and bust, laboratory style

In related research, Dickhaut and his students seek that most elusive of explanations: what produces a stock-market collapse? His students have successfully created models that explain market crash situations in the lab. In these crashes, brokers try to hold off selling until the last possible moment, hoping that they'll get out at the peak. Buyers try to wait until the prices are the lowest they're going to get. It's a complicated setting that happens every day-and infrequently leads to a bubble and a crash.

"It must be more than price alone," says Dickhaut. "Traditional economics tells us that people are price takers who don't see that their actions influence prices. Stock buyers don't expect their purchases to impact a stock's prices. Instead, they think of themselves as taking advantages of outcomes."

He urges thinkers to take into account that people are always trying to manipulate the market. "This is almost always going to happen," he says. "One person will always think he knows more than the other."

Transparency-giving a buyer all of the information about a company-is often suggested as the answer to avoiding inflated prices that can lead to a crash. Common sense says that the more knowledge a buyer has, the less likely he or she is to pay more than a stock is worth. Surprisingly, Dickhaut's findings refute this seemingly logical answer. His lab tests prove that transparency can cause worse outcomes than in a market with poorer information. In other words, transparent doesn't equal clearly understood. "People fail to coordinate understanding," explains Dickhaut. "They don't communicate their expectations, and they might think that they understand more than they do about a company."

Do stock prices balloon and crash because of genuine misunderstandings? Can better communication about a stock's value really be the key to avoiding future market crashes? "I wish you could say for sure," says Dickhaut. "That's one of the things we want to find out."

Experimental economics is still a young discipline, and it seems to raise new questions even as it answers old ones. Even so, the contributions are real. In 2005 John Dickhaut was awarded the Carlson School's first career research award, a signal that his research has been of significant value in his field. "It's fun," he says with a grin. "There's a lot out there to learn."

Reprinted with permission from the July 2005 edition of Insights@Carlson School, a publication of the Carlson School of Management.

 

"The Brain as the Original Accounting Institution"
John Dickhaut
The Accounting Review 84(6), 1703 (2009) (10 pages)
TAR is not a free online journal, although articles can be purchased --- http://aaahq.org/pubs.cfm

ABSTRACT:
The evolved brain neuronally processed information on human interaction long before the development of formal accounting institutions. Could the neuronal processes represent the underpinnings of the accounting principles that exist today? This question is pursued several ways: first as an examination of parallel structures that exist between the brain and accounting principles, second as an explanation of why such parallels might exist, and third as an explicit description of a paradigm that shows how the benefits of an accounting procedure can emerge in an experiment.

The following are noteworthy in terms of this being a blue plate special apart from the usual accountics fare at the TAR Restaurant:

John was saved from the wrath of the AAA Accountics Tribunal by also having an accountics paper (with complicated equations) published in the same November 2009 edition of TAR.
"Market Efficiencies and Drift: A Computational Model"
John Dickhaut and Baohua Xin
The Accounting Review 84(6), 1805 (2009) (27 pages)

Whew!
Good work John!
John died in April 2010 at the age of 68.


The day Arthur Andersen loses the public's trust is the day we are out of business.  
Steve Samek, Country Managing Partner, United States, on Andersen's Independence and Ethical Standards CD-Rom, 1999

Mathematical Analytics in Plato's Cave
TAR Researchers Playing by Themselves in Isolated Dark Caves That the Sunlight Cannot Reach

"In Plato's Cave:  Mathematical models are a powerful way of predicting financial markets. But they are fallible" The Economist, January 24, 2009, pp. 10-14 --- http://www.trinity.edu/rjensen/2008Bailout.htm#Bailout

Plato's Allegory of the Cave --- http://en.wikipedia.org/wiki/Allegory_of_the_Cave

Two Animations of Plato’s Allegory of the Cave: One Narrated by Orson Welles, Another Made with Clay ---
http://www.openculture.com/2014/02/two-animations-of-platos-allegory-of-the-cave.html

Mathematical analytics should not be immune from validity tests even though replication of analytical derivations differs from replication of experiments. Mathematical models published in TAR all require underlying assumptions, such that the robustness of the analytics is generally only as good as the assumptions. Critical analyses of such results thereby usually focus on the realism and validity of the assumptions regarding such things as utility functions and the decision behavior of persons assumed in the models. For example, it's extremely common in TAR analytics to assume that business firms are operating in a steady state equilibrium when in the real world such assumed conditions rarely, if ever, apply. And the studies themselves rarely, if ever, test the sensitivity of the conclusions to departures from steady state equilibrium.

Until the giant leap from the analytical conclusions to reality can be demonstrated, it does not take a rocket scientist to figure out why business firms and most accounting teachers simply ignore the gaming going on in TAR analytics. It's amazing to me how such analytics researchers perform such talented and sophisticated mathematical analysis and then lightly brush over their assumptions as "being reasonable" without any test of reasonableness. Without validation of the enormous assumptions, we should not simply agree on faith that these assumptions are indeed "reasonable."
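A purely hypothetical sketch of the kind of sensitivity test I have in mind (the functional form below is invented for illustration and is not taken from any TAR paper): perturb the steady-state assumption and see whether the sign of the model's conclusion survives.

def model_effect(x, drift=0.0):
    """Toy outcome as a function of a policy parameter x.
    drift = 0 is the steady-state world the analytics assume; drift > 0 is a departure."""
    return x * (1.0 - 2.0 * drift) - 0.5 * x ** 2   # invented functional form, illustration only

x = 0.3
for drift in (0.0, 0.1, 0.3, 0.5):
    marginal = model_effect(x + 0.1, drift) - model_effect(x, drift)
    sign = "positive" if marginal > 0 else "negative"
    print(f"drift = {drift:.1f}: raising x has a {sign} effect ({marginal:+.3f})")
# If the sign flips for plausible departures from drift = 0 (here it does at 0.5),
# then the "reasonable" steady-state assumption is doing most of the work.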

At a minimum it would help greatly if TAR accepted commentaries where scholars could debate the "reasonableness" of assumptions in the analytics. Perhaps authors fear this might happen if the TAR editor invited commentaries.

In most instances the defense of underlying assumptions is based upon assumptions passed down from previous analytical studies rather than empirical or even case study evidence. An example is the following conclusion:

We find that audit quality and audit fees both increase with the auditor’s expected litigation losses from audit failures. However, when considering the auditor’s acceptance decision, we show that it is important to carefully identify the component of the litigation environment that is being investigated. We decompose the liability environment into three components: (1) the strictness of the legal regime, defined as the probability that the auditor is sued and found liable in case of an audit failure, (2) potential damage payments from the auditor to investors and (3) other litigation costs incurred by the auditor, labeled litigation frictions, such as attorneys’ fees or loss of reputation. We show that, in equilibrium, an increase in the potential damage payment actually leads to a reduction in the client rejection rate. This effect arises because the resulting higher audit quality increases the value of the entrepreneur’s investment opportunity, which makes it optimal for the entrepreneur to increase the audit fee by an amount that is larger than the increase in the auditor’s expected damage payment. However, for this result to hold, it is crucial that damage payments be fully recovered by the investors. We show that an increase in litigation frictions leads to the opposite result—client rejection rates increase. Finally, since a shift in the strength of the legal regime affects both the expected damage payments to investors as well as litigation frictions, the relationship between the legal regime and rejection rates is nonmonotonic. Specifically, we show that the relationship is U-shaped, which implies that for both weak and strong legal liability regimes, rejection rates are higher than those characterizing more moderate legal liability regimes.
Volker Laux  and D. Paul Newman, "Auditor Liability and Client Acceptance Decisions," The Accounting Review, Vol. 85, No. 1, 2010 pp. 261–285

This analytical conclusion rests upon crucial underlying assumptions that are mostly justified by reference to previous analytical studies that made similar simplifying assumptions. For example, "the assumption that 'the entrepreneur has no private information' is common in the auditing literature; see, for example, Dye (1993, 1995), Schwartz (1997), Chan and Pae (1998), and Chan and Wong (2002)." This assumption is crucial and highly dubious in many real-world settings. Further reading of footnotes piles assumption upon assumption.

Laux and Newman contend their underlying assumptions are "reasonable." I will argue that they are overly simplistic and thereby unreasonable. I instead contend that risky clients must instead be pooled and that decisions regarding fees and acceptances of risky clients must be made dynamically over time with respect to the entire pool. In addition the current reputation losses have to be factored in on a continuing basis.

Laux and Newman assume away the pooled and varying and interactive externality costs of adverse publicity of litigation when clients fail. Such costs are not as independent as assumed in the Laux and Newman audit pricing model for a single risky client. Their model ignores the interactive covariances.

Even if the audit firm conducts a good audit, it usually finds itself drawn into litigation as a deep pockets participant in the affairs of a failed client. If the audit firm has had recent embarrassments for bad audits, the firm might decide to drop a risky client no matter what the client might pay for an audit fee. I contend the friction costs are disjointed and do not fit the Laux and Newman model in a reasonable way. For example, after Deloitte, KPMG, and Ernst & Young had their hands slapped by the PCAOB for some bad auditing, it becomes even more imperative for these firms to reconsider their risky client pool that could result in further damage to their reputations. Laux and Newman vaguely bundle the reputation loss among what they call "frictions" but then assume that the audit fee of a pending risky client can be adjusted to overcome such "frictions." I would instead contend that the adverse publicity costs are interdependent upon the entire subset of an audit firm's risky clients. Audit firms must instead base audit pricing upon an analysis of their entire risk pool and seriously consider dropping some current clients irrespective of audit fees. Also the friction cost of Client A is likely to be impacted by a decision to drop Clients B, C, and D. Hence, friction costs are in reality joint costs, and managers who make independent product pricing decisions amidst joint products do so at great peril.
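A hypothetical sketch of the joint-cost point (all numbers invented for illustration): pricing risky clients one at a time implicitly keeps only the diagonal of the covariance matrix of adverse-publicity losses, while the firm's real exposure depends on the whole pool.

import numpy as np

stand_alone_loss = np.array([10.0, 8.0, 6.0])   # expected reputation loss per risky client, invented
cov = np.array([[25.0, 15.0, 10.0],             # invented covariance of adverse-publicity losses
                [15.0, 20.0, 12.0],
                [10.0, 12.0, 16.0]])

independent_var = cov.diagonal().sum()   # what client-by-client pricing implicitly assumes
pooled_var = cov.sum()                   # variance of the total loss across the whole pool

print("Expected total loss:", stand_alone_loss.sum())
print("Variance treating clients independently:", independent_var)
print("Variance of the pooled exposure:", pooled_var)
# The off-diagonal covariances (135 versus 61 here) are exactly what a single-client
# pricing model leaves out.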

Laux and Newman assume possible reputation losses and other frictions can be measured on a ratio scale. I consider this assumption entirely unrealistic. The decision to take on a risky client depends greatly on the publicity losses that have recently transpired combined with the potential losses due to adverse publicity in the entire existing pool of risky clients. Andersen did not fail because of Enron. Enron was merely the straw that broke the camel's back.


More importantly, it was found in the case of Andersen that accepting or keeping risky Client A may impact the cost of capital of Clients B, C, D, E, etc.

Loss of Reputation was the Kiss of Death for Andersen
Andersen Audits Increased Clients' Cost of Capital Relative to Clients of Other Auditing Firms

"The Demise of Arthur Andersen," by Clifford F. Thies, Ludwig Von Mises Institute, April 12, 2002 --- http://www.mises.org/fullstory.asp?control=932&FS=The+Demise+of+Arthur+Andersen

From Yahoo.com, Andrew and I downloaded the daily adjusted closing prices of the stocks of these companies (the adjustment taking into account splits and dividends). I then constructed portfolios based on an equal dollar investment in the stocks of each of the companies and tracked the performance of the two portfolios from August 1, 2001, to March 1, 2002. Indexes of the values of these portfolios are juxtaposed in Figure 1.

From August 1, 2001, to November 30, 2001, the values of the two portfolios are very highly correlated. In particular, the values of the two portfolios fell following the September 11 terrorist attack on our country and then quickly recovered. You would expect a very high correlation in the values of truly matched portfolios. Then, two deviations stand out.

In early December 2001, a wedge temporarily opened up between the values of the two portfolios. This followed the SEC subpoena. Then, in early February, a second and persistent wedge opened. This followed the news of the coming DOJ indictment. It appears that an Andersen signature (relative to a "Final Four" signature) costs a company 6 percent of its market capitalization. No wonder corporate clients--including several of the companies that were in the Andersen-audited portfolio Andrew and I constructed--are leaving Andersen.

Prior to the demise of Arthur Andersen, the Big 5 firms seemed to have a "lock" on reputation. It is possible that these firms may have felt free to trade on their names in search of additional sources of revenue. If that is what happened at Andersen, it was a big mistake. In a free market, nobody has a lock on anything. Every day that you don’t earn your reputation afresh by serving your customers well is a day you risk losing your reputation. And, in a service-oriented economy, losing your reputation is the kiss of death.
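A rough sketch of the equal-dollar portfolio comparison Thies describes (the data loading, tickers, and cutoff date below are placeholders, not his actual sample):

import pandas as pd

def equal_dollar_index(adj_close: pd.DataFrame) -> pd.Series:
    """Index (base 100) of an equal dollar investment in each column's stock on day one."""
    growth_of_one_dollar = adj_close / adj_close.iloc[0]
    return 100.0 * growth_of_one_dollar.mean(axis=1)

# Hypothetical usage with two DataFrames of daily adjusted closes (tickers are placeholders):
# andersen_idx  = equal_dollar_index(andersen_client_prices)
# benchmark_idx = equal_dollar_index(matched_client_prices)
# wedge = (benchmark_idx - andersen_idx) / benchmark_idx
# print(wedge.loc["2002-02-01":].mean())   # Thies reports a persistent gap of roughly 6 percent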


 

"Is mathematics an effective way to describe the world?" by Lisa Zyga, Physorg, September 3, 2013 ---
http://phys.org/news/2013-09-mathematics-effective-world.html

Mathematics has been called the language of the universe. Scientists and engineers often speak of the elegance of mathematics when describing physical reality, citing examples such as π, E=mc2, and even something as simple as using abstract integers to count real-world objects. Yet while these examples demonstrate how useful math can be for us, does it mean that the physical world naturally follows the rules of mathematics as its "mother tongue," and that this mathematics has its own existence that is out there waiting to be discovered? This point of view on the nature of the relationship between mathematics and the physical world is called Platonism, but not everyone agrees with it.

Derek Abbott, Professor of Electrical and Electronics Engineering at The University of Adelaide in Australia, has written a perspective piece to be published in the Proceedings of the IEEE in which he argues that mathematical Platonism is an inaccurate view of reality. Instead, he argues for the opposing viewpoint, the non-Platonist notion that mathematics is a product of the human imagination that we tailor to describe reality.

This argument is not new. In fact, Abbott estimates (through his own experiences, in an admittedly non-scientific survey) that while 80% of mathematicians lean toward a Platonist view, engineers by and large are non-Platonist. Physicists tend to be "closeted non-Platonists," he says, meaning they often appear Platonist in public. But when pressed in private, he says he can "often extract a non-Platonist confession."

So if mathematicians, engineers, and physicists can all manage to perform their work despite differences in opinion on this philosophical subject, why does the true nature of mathematics in its relation to the physical world really matter?

The reason, Abbott says, is that when you recognize that math is just a mental construct—just an approximation of reality that has its frailties and limitations and that will break down at some point because perfect mathematical forms do not exist in the physical universe—then you can see how ineffective math is.

And that is Abbott's main point (and most controversial one): that mathematics is not exceptionally good at describing reality, and definitely not the "miracle" that some scientists have marveled at. Einstein, a mathematical non-Platonist, was one scientist who marveled at the power of mathematics. He asked, "How can it be that mathematics, being after all a product of human thought which is independent of experience, is so admirably appropriate to the objects of reality?"

In 1959, the physicist and mathematician Eugene Wigner described this problem as "the unreasonable effectiveness of mathematics." In response, Abbott's paper is called "The Reasonable Ineffectiveness of Mathematics." Both viewpoints are based on the non-Platonist idea that math is a human invention. But whereas Wigner and Einstein might be considered mathematical optimists who noticed all the ways that mathematics closely describes reality, Abbott pessimistically points out that these mathematical models almost always fall short.

What exactly does "effective mathematics" look like? Abbott explains that effective mathematics provides compact, idealized representations of the inherently noisy physical world.

"Analytical mathematical expressions are a way making compact descriptions of our observations," he told Phys.org. "As humans, we search for this 'compression' that math gives us because we have limited brain power. Maths is effective when it delivers simple, compact expressions that we can apply with regularity to many situations. It is ineffective when it fails to deliver that elegant compactness. It is that compactness that makes it useful/practical ... if we can get that compression without sacrificing too much precision.

"I argue that there are many more cases where math is ineffective (non-compact) than when it is effective (compact). Math only has the illusion of being effective when we focus on the successful examples. But our successful examples perhaps only apply to a tiny portion of all the possible questions we could ask about the universe."

Some of the arguments in Abbott's paper are based on the ideas of the mathematician Richard W. Hamming, who in 1980 identified four reasons why mathematics should not be as effective as it seems. Although Hamming resigned himself to the idea that mathematics is unreasonably effective, Abbott shows that Hamming's reasons actually support non-Platonism given a reduced level of mathematical effectiveness.

Here are a few of Abbott's reasons for why mathematics is reasonably ineffective, which are largely based on the non-Platonist viewpoint that math is a human invention:

• Mathematics appears to be successful because we cherry-pick the problems for which we have found a way to apply mathematics. There have likely been millions of failed mathematical models, but nobody pays attention to them. ("A genius," Abbott writes, "is merely one who has a great idea, but has the common sense to keep quiet about his other thousand insane thoughts.")

• Our application of mathematics changes at different scales. For example, in the 1970s when transistor lengths were on the order of micrometers, engineers could describe transistor behavior using elegant equations. Today's submicrometer transistors involve complicated effects that the earlier models neglected, so engineers have turned to computer simulation software to model smaller transistors. A more effective formula would describe transistors at all scales, but such a compact formula does not exist.

• Although our models appear to apply to all timescales, we perhaps create descriptions biased by the length of our human lifespans. For example, we see the Sun as an energy source for our planet, but if the human lifespan were as long as the universe, perhaps the Sun would appear to be a short-lived fluctuation that rapidly brings our planet into thermal equilibrium with itself as it "blasts" into a red giant. From this perspective, the Earth is not extracting useful net energy from the Sun.

• Even counting has its limits. When counting bananas, for example, at some point the number of bananas will be so large that the gravitational pull of all the bananas draws them into a black hole. At some point, we can no longer rely on numbers to count.

• And what about the concept of integers in the first place? That is, where does one banana end and the next begin? While we think we know visually, we do not have a formal mathematical definition. To take this to its logical extreme, if humans were not solid but gaseous and lived in the clouds, counting discrete objects would not be so obvious. Thus axioms based on the notion of simple counting are not innate to our universe, but are a human construct. There is then no guarantee that the mathematical descriptions we create will be universally applicable.

For Abbott, these points and many others that he makes in his paper show that mathematics is not a miraculous discovery that fits reality with incomprehensible regularity. In the end, mathematics is a human invention that is useful, limited, and works about as well as expected.

Continued in article

574 Shields Against Validity Challenges in Plato's Cave ---
http://www.trinity.edu/rjensen/TheoryTAR.htm

Real Science versus Pseudo Science ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm#Pseudo-Science

How Accountics Scientists Should Change: 
"Frankly, Scarlett, after I get a hit for my resume in The Accounting Review I just don't give a damn"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm
One more mission in what's left of my life will be to try to change this
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm

"How Non-Scientific Granulation Can Improve Scientific Accountics"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsGranulationCurrentDraft.pdf

 

A Mathematical Way To Think About Biology --- http://qbio.lookatphysics.com/
"Do Biologists Avoid Math-Heavy Papers?" Inside Higher Ed, June 27, 2012 ---
http://www.insidehighered.com/quicktakes/2012/06/27/do-biologists-avoid-math-heavy-papers

New research by professors at the University of Bristol suggests that biologists may be avoiding scientific papers that have extensive mathematical detail, Times Higher Education  reported. The Bristol researchers studied the number of citations to 600 evolutionary biology papers published in 1998. They found that the most "maths-heavy" papers were cited by others half as much as other papers. Each additional math equation appears to reduce the odds of a paper being cited. Tim Fawcett, a co-author of the paper, told Times Higher Education, "I think this is potentially something that could be a problem for all areas of science where there is a tight link between the theoretical mathematical models and experiment."

"Maths-heavy papers put biologists off," by Elizabeth Gibney, Times Higher Education, June 26, 2012 ---
http://www.timeshighereducation.co.uk/story.asp?sectioncode=26&storycode=420388&c=1

The study, published in the Proceedings of the National Academy of Sciences USA, suggests that scientists pay less attention to theories that are dense with mathematical detail.

Researchers in Bristol’s School of Biological Sciences compared citation data with the number of equations per page in more than 600 evolutionary biology papers in 1998.

They found that the most maths-heavy articles were referenced 50 per cent less often than those with little or no maths. Each additional equation per page reduced a paper’s citation success by 28 per cent.

The size of the effect was striking, Tim Fawcett, research fellow and the paper’s co-author, told Times Higher Education.

“I think this is potentially something that could be a problem for all areas of science where there is a tight link between the theoretical mathematical models and experiment,” he said.

The research stemmed from a suspicion that papers full of equations and technical detail could be putting off researchers who do not necessarily have much mathematical training, said Dr Fawcett.

“Even Stephen Hawking worried that each equation he added to A Brief History of Time would reduce sales. So this idea has been out there for a while, but no one’s really looked at it until we did this study,” he added.

Andrew Higginson, Dr Fawcett’s co-author and a research associate in the School of Biological Sciences, said that scientists need to think more carefully about how they present the mathematical details of their work.

“The ideal solution is not to hide the maths away, but to add more explanatory text to take the reader carefully through the assumptions and implications of the theory,” he said.

But the authors say they fear that this approach will be resisted by some journals that favour concise papers and where space is in short supply.

An alternative solution is to put much of the mathematical details in an appendix, which tends to be published online.

“Our analysis seems to show that for equations put in an appendix there isn’t such an effect,” said Dr Fawcett.

“But there’s a big risk that in doing that you are potentially hiding the maths away, so it's important to state clearly the assumptions and implications in the main text for everyone to see.”

Although the issue is likely to extend beyond evolutionary biology, it may not be such a problem in other branches of science where students and researchers tend to be trained in maths to a greater degree, he added.

Continued in article
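The "28 per cent per equation" figure above is the kind of estimate a count-data regression produces. A hedged sketch (my own illustration with simulated data, not the Bristol authors' code):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 600                                                # roughly the size of the Bristol sample
eq_per_page = rng.gamma(shape=2.0, scale=1.0, size=n)  # simulated equation density per paper
true_effect = -0.33                                    # exp(-0.33) ~ 0.72, i.e. ~28% fewer citations
citations = rng.poisson(np.exp(2.5 + true_effect * eq_per_page))

fit = sm.GLM(citations, sm.add_constant(eq_per_page),
             family=sm.families.Poisson()).fit()
pct = 100.0 * (np.exp(fit.params[1]) - 1.0)
print(f"Estimated change in citations per extra equation per page: {pct:.1f}%")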

Jensen Comment
The causes of this asserted avoidance are no doubt very complicated and vary among individual instances. Some biologists might avoid biology quant papers because they themselves are not sufficiently quant to comprehend the mathematics. It would seem, however, that even quant biology papers have some non-mathematical summaries that might be of interest to the non-quant biologists.

I would be inclined to believe that biologists avoid quant papers for other reasons, especially some of the same reasons that accounting teachers and practitioners most often avoid accountics research studies (which are quant by definition). I think the main reason for this avoidance is that biology quants and accountics quants typically do their research in Plato's Cave with "convenient" assumptions that are too far removed from the real and much more complicated world. For example, the real world is seldom in a state of equilibrium or a "steady state" needed to greatly simplify the mathematical derivations.

Bob Jensen's threads and illustrations of simplifying assumptions are at
Mathematical Analytics in Plato's Cave --- See Below

 


An Excellent Presentation on the Flaws of Finance, Particularly the Flaws of Financial Theorists

A recent topic on the AECM listserv concerns the limitations of accounting standard setters and researchers when it comes to understanding investors. One point that was not raised in the thread to date is that a lot can be learned about investors from the top financial analysts of the world --- their writings and their conferences.

A Plenary Session Speech at a Chartered Financial Analysts Conference
Video: James Montier’s 2012 Chicago CFA Speech The Flaws of Finance ---
http://cfapodcast.smartpros.com/web/live_events/Annual/Montier/index.html
Note that it takes over 15 minutes before James Montier begins

Major Themes

  1. The difference between physics versus finance models is that physicists know the limitations of their models.
     
  2. Another difference is that components (e.g., atoms) of a physics model are not trying to game the system.
     
  3. The more complicated the model in finance the more the analyst is trying to substitute theory for experience.
     
  4. There's a lot wrong with Value at Risk (VaR) models that regulators ignored (see the sketch following this list).
     
  5. The assumption of market efficiency among regulators (such as Alan Greenspan) was a huge mistake that led to excessively low interest rates and bad behavior by banks and credit rating agencies.
     
  6. Auditors succumbed to self-serving biases of favoring their clients over public investors.
     
  7. Banks were making huge gambles on other peoples' money.
     
  8. Investors themselves ignored risk such as poisoned CDO risks when they should've known better. I love his analogy of black swans on a turkey farm.
     
  9. Why don't we see surprises coming (five excellent reasons given here)?
     
  10. The only group of people who view the world realistically are the clinically depressed.
     
  11. Model builders should stop substituting elegance for reality.
     
  12. All financial theorists should be forced to interact with practitioners.
     
  13. Practitioners need to abandon the myth of optimality before the fact.
    Jensen Note
    This also applies to abandoning the myth that we can set optimal accounting standards.
     
  14. In the long term fundamentals matter.
     
  15. Don't get too bogged down in details at the expense of the big picture.
     
  16. Max Planck said science advances one funeral at a time.
     
  17. The speaker then entertains questions from the audience (some are very good).
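On the VaR point in item 4, here is a minimal sketch of parametric (variance-covariance) Value at Risk, purely for illustration: the normality assumption that makes the formula so convenient is also what makes it silent about fat-tailed losses beyond the chosen quantile.

from scipy.stats import norm

def parametric_var(portfolio_value, mu, sigma, confidence=0.99):
    """One-period Value at Risk assuming normally distributed returns."""
    worst_return = mu + sigma * norm.ppf(1.0 - confidence)   # e.g. the 1st-percentile return
    return -portfolio_value * worst_return                   # loss reported as a positive number

# Hypothetical book: $100 million, zero mean daily return, 2% daily volatility.
print(f"99% one-day VaR: ${parametric_var(100e6, 0.0, 0.02):,.0f}")
# The number is silent about how bad the worst 1% of days can get, and it collapses
# entirely if returns are fat-tailed or correlations jump -- which is Montier's point.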

 

James Montier is a very good speaker from England!

Mr. Montier is a member of GMO’s asset allocation team. Prior to joining GMO in 2009, he was co-head of Global Strategy at Société Générale. Mr. Montier is the author of several books including Behavioural Investing: A Practitioner’s Guide to Applying Behavioural Finance; Value Investing: Tools and Techniques for Intelligent Investment; and The Little Book of Behavioural Investing. Mr. Montier is a visiting fellow at the University of Durham and a fellow of the Royal Society of Arts. He holds a B.A. in Economics from Portsmouth University and an M.Sc. in Economics from Warwick University.
http://www.gmo.com/america/about/people/_departments/assetallocation.htm

There's a lot of useful information in this talk for accountics scientists.

Bob Jensen's threads on what went wrong with accountics research are at
http://www.trinity.edu/rjensen/theory01.htm#WhatWentWrong


How will World War III be fought to bring down the USA?
Target Breach Malware Partly Written in Russian

From the CFO Journal's Morning Ledger on January 17, 2014

Target breach was part of broad attack
The holiday data breach at Target appears to be part of a broad and sophisticated international hacking campaign against multiple retailers, the WSJ’s Danny Yadron reports. Parts of the malicious computer code used against Target’s credit-card readers had been on the Internet’s black market since last spring and were partly written in Russian. Both details suggest the attack may have ties to organized crime in the former Soviet Union.


 

"Economics has met the enemy, and it is economics," by Ira Basen, Globe and Mail, October 15, 2011 ---
http://www.theglobeandmail.com/news/politics/economics-has-met-the-enemy-and-it-is-economics/article2202027/page1/ 
Thank you Jerry Trites for the heads up.

After Thomas Sargent learned on Monday morning that he and colleague Christopher Sims had been awarded the Nobel Prize in Economics for 2011, the 68-year-old New York University professor struck an aw-shucks tone with an interviewer from the official Nobel website: “We're just bookish types that look at numbers and try to figure out what's going on.”

But no one who'd followed Prof. Sargent's long, distinguished career would have been fooled by his attempt at modesty. He'd won for his part in developing one of economists' main models of cause and effect: How can we expect people to respond to changes in prices, for example, or interest rates? According to the laureates' theories, they'll do whatever's most beneficial to them, and they'll do it every time. They don't need governments to instruct them; they figure it out for themselves. Economists call this the “rational expectations” model. And it's not just an abstraction: Bankers and policy-makers apply these formulae in the real world, so bad models lead to bad policy.

Which is perhaps why, by the end of that interview on Monday, Prof. Sargent was adopting a more realistic tone: “We experiment with our models,” he explained, “before we wreck the world.”

Rational-expectations theory and its corollary, the efficient-market hypothesis, have been central to mainstream economics for more than 40 years. And while they may not have “wrecked the world,” some critics argue these models have blinded economists to reality: Certain the universe was unfolding as it should, they failed both to anticipate the financial crisis of 2008 and to chart an effective path to recovery.

The economic crisis has produced a crisis in the study of economics – a growing realization that if the field is going to offer meaningful solutions, greater attention must be paid to what is happening in university lecture halls and seminar rooms.

While the protesters occupying Wall Street are not carrying signs denouncing rational-expectations and efficient-market modelling, perhaps they should be.

They wouldn't be the first young dissenters to call economics to account. In June of 2000, a small group of elite graduate students at some of France's most prestigious universities declared war on the economic establishment. This was an unlikely group of student radicals, whose degrees could be expected to lead them to lucrative careers in finance, business or government if they didn't rock the boat. Instead, they protested – not about tuition or workloads, but that too much of what they studied bore no relation to what was happening outside the classroom walls.

They launched an online petition demanding greater realism in economics teaching, less reliance on mathematics “as an end in itself” and more space for approaches beyond the dominant neoclassical model, including input from other disciplines, such as psychology, history and sociology. Their conclusion was that economics had become an “autistic science,” lost in “imaginary worlds.” They called their movement Autisme-economie.

The students' timing is notable: It was the spring of 2000, when the world was still basking in the glow of “the Great Moderation,” when for most of a decade Western economies had been enjoying a prolonged period of moderate but fairly steady growth.

Some economists were daring to think the unthinkable – that their understanding of how advanced capitalist economies worked had become so sophisticated that they might finally have succeeded in smoothing out the destructive gyrations of capitalism's boom-and-bust cycle. (“The central problem of depression prevention has been solved,” declared another Nobel laureate, Robert Lucas of the University of Chicago, in 2003 – five years before the greatest economic collapse in more than half a century.)

The students' petition sparked a lively debate. The French minister of education established a committee on economic education. Economics students across Europe and North America began meeting and circulating petitions of their own, even as defenders of the status quo denounced the movement as a Trotskyite conspiracy. By September, the first issue of the Post-Autistic Economic Newsletter was published in Britain.

As The Independent summarized the students' message: “If there is a daily prayer for the global economy, it should be, ‘Deliver us from abstraction.'”

It seems that entreaty went unheard through most of the discipline before the economic crisis, not to mention in the offices of hedge funds and the Stockholm Nobel selection committee. But is it ringing louder now? And how did economics become so abstract in the first place?

The great classical economists of the late 18th and early 19th centuries had no problem connecting to the real world – the Industrial Revolution had unleashed profound social and economic changes, and they were trying to make sense of what they were seeing. Yet Adam Smith, who is considered the founding father of modern economics, would have had trouble understanding the meaning of the word “economist.”

What is today known as economics arose out of two larger intellectual traditions that have since been largely abandoned. One is political economy, which is based on the simple idea that economic outcomes are often determined largely by political factors (as well as vice versa). But when political-economy courses first started appearing in Canadian universities in the 1870s, it was still viewed as a small offshoot of a far more important topic: moral philosophy.

In The Wealth of Nations (1776), Adam Smith famously argued that the pursuit of enlightened self-interest by individuals and companies could benefit society as a whole. His notion of the market's “invisible hand” laid the groundwork for much of modern neoclassical and neo-liberal, laissez-faire economics. But unlike today's free marketers, Smith didn't believe that the morality of the market was appropriate for society at large. Honesty, discipline, thrift and co-operation, not consumption and unbridled self-interest, were the keys to happiness and social cohesion. Smith's vision was a capitalist economy in a society governed by non-capitalist morality.

But by the end of the 19th century, the new field of economics no longer concerned itself with moral philosophy, and less and less with political economy. What was coming to dominate was a conviction that markets could be trusted to produce the most efficient allocation of scarce resources, that individuals would always seek to maximize their utility in an economically rational way, and that all of this would ultimately lead to some kind of overall equilibrium of prices, wages, supply and demand.

Political economy was less vital because government intervention disrupted the path to equilibrium and should therefore be avoided except in exceptional circumstances. And as for morality, economics would concern itself with the behaviour of rational, self-interested, utility-maximizing Homo economicus. What he did outside the confines of the marketplace would be someone else's field of study.

As those notions took hold, a new idea emerged that would have surprised and probably horrified Adam Smith – that economics, divorced from the study of morality and politics, could be considered a science. By the beginning of the 20th century, economists were looking for theorems and models that could help to explain the universe. One historian described them as suffering from “physics envy.” Although they were dealing with the behaviour of humans, not atoms and particles, they came to believe they could accurately predict the trajectory of human decision-making in the marketplace.

In their desire to have their field be recognized as a science, economists increasingly decided to speak the language of science. From Smith's innovations through John Maynard Keynes's work in the 1930s, economics was argued in words. Now, it would go by the numbers.

Continued in a long article


On July 14, 2006, Greg Wilson inquired about the implications of poor auditing for investors and clients.

July 14, 2006 reply from Bob Jensen

Empirical evidence suggests that when an auditing firm begins to get a reputation for incompetence and/or lack of independence its clients’ cost of capital rises. This in fact was the case for the Arthur Andersen firm even before it imploded. The firm’s reputation for bad audits and lack of independence from Andersen Consulting, especially after the Waste Management auditing scandal, was becoming so well known that some of its major clients had already changed to another auditing firm in order to lower their cost of capital.

Bob Jensen

July 14, 2006 reply from Ed Scribner [escribne@NMSU.EDU]

I think the conventional wisdom is that poor audits reduce the ability of information to reduce uncertainty, so investors charge companies for this in the form of lower security prices.

In a footnote on p. 276 of the Watts and Zimmerman "Market for Excuses" paper in the April 79 Accounting Review, WZ asserted the following:

***
Share prices are unbiased estimates of the extent to which the auditor monitors management and reduces agency costs... . The larger the reduction in agency costs effected by an auditor (net of the auditor's fees), the higher the value of the corporation's shares and bonds and, ceteris paribus, the greater the demand for that auditor's services. If the market observes the auditor failing to monitor management, it will adjust downwards the share price of all firms who engage this auditor... .
***

Sometime in the 1980s, Mike Kennelley tested this assertion on the then-recent SEC censure of Peat Marwick. (I think his article appeared in the Journal of Accounting and Economics, but I can't find it at the moment.) The Watts/Zimmerman footnote suggests a negative effect on all of Peat Marwick's client stock prices, but Mike, as I recall, found a small positive effect.

Because agency theory seems to permit arguing any side of any argument, a possible explanation was that the market interpreted this adverse publicity as a wakeup call for Peat Marwick, causing it to clean up its act so that its audits would be impeccable.

A couple of other examples of the empirical research:

(1) Journal of Empirical Legal Studies Volume 1 Page 263 - July 2004 doi:10.1111/j.1740-1461.2004.00008.x Volume 1 Issue 2

"Was Arthur Andersen Different? An Empirical Examination of Major Accounting Firm Audits of Large Clients," by Theodore Eisenberg and Jonathan R. Macey

Enron and other corporate financial scandals focused attention on the accounting industry in general and on Arthur Andersen in particular. Part of the policy response to Enron, the criminal prosecution of Andersen eliminated one of the few major audit firms capable of auditing many large public corporations. This article explores whether Andersen's performance, as measured by frequency of financial restatements, measurably differed from that of other large auditors. Financial restatements trigger significant negative market reactions and their frequency can be viewed as a measure of accounting performance. We analyze the financial restatement activity of approximately 1,000 large public firms from 1997 through 2001. After controlling for client size, region, time, and industry, we find no evidence that Andersen's performance significantly differed from that of other large accounting firms.

... Hiring an auditor, at least in theory, allows the client company to "rent" the reputation of the accounting firm, which rents its reputation for care, honesty, and integrity to its clients.

... From the perspective of audit firms' clients, good audits are good investments because they reduce the cost of capital and increase shareholder wealth. Good audits also increase management's credibility among the investment community. In theory, the capital markets audit the auditors.

------------------------------------
(2) Journal of Accounting Research Volume 40 Page 1221 - September 2002 doi:10.1111/1475-679X.00087 Volume 40 Issue 4

Corporate Financial Reporting and the Market for Independent Auditing: Contemporary Research
"Shredded Reputation: The Cost of Audit Failure," by Paul K. Chaney and Kirk L. Philipich

In this article we investigate the impact of the Enron audit failure on auditor reputation. Specifically, we examine Arthur Andersen's clients' stock market impact surrounding various dates on which Andersen's audit procedures and independence were under severe scrutiny. On the three days following Andersen's admission that a significant number of documents had been shredded, we find that Andersen's other clients experienced a statistically negative market reaction, suggesting that investors downgraded the quality of the audits performed by Andersen. We also find that audits performed by Andersen's Houston office suffered a more severe decline in abnormal returns on this date. We are not able to show that Andersen's independence was questioned by the amount of non-audit fees charged to its clients.

Ed Scribner
New Mexico State University, USA
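
The market-reaction tests Ed mentions (Kennelley's and Chaney-Philipich's) are event studies. A minimal market-model sketch, with hypothetical data handles rather than their actual samples:

import numpy as np
import statsmodels.api as sm

def cumulative_abnormal_return(stock_ret, market_ret, est_slice, event_slice):
    """CAR over event_slice using a market model estimated on est_slice."""
    fit = sm.OLS(stock_ret[est_slice], sm.add_constant(market_ret[est_slice])).fit()
    alpha, beta = fit.params
    abnormal = stock_ret[event_slice] - (alpha + beta * market_ret[event_slice])
    return float(np.sum(abnormal))

# Hypothetical usage with daily return arrays for one Andersen client and the market:
# car = cumulative_abnormal_return(client_ret, market_ret,
#                                  est_slice=slice(0, 250),      # estimation window
#                                  event_slice=slice(250, 253))  # e.g. three days after the news
# print(f"Three-day CAR: {car:.2%}")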

Bob Jensen's threads on fraudulent and incompetent auditing are at http://www.trinity.edu/rjensen/fraud001.htm

Why smart people can be so stupid Or Rationality, Intelligence, and Levels of Analysis in Cognitive Science:
Is Dysrationalia Possible?

The sure-thing principle is not the only rule of rational thinking that humans have been shown to violate. A substantial research literature–one comprising literally hundreds of empirical studies conducted over nearly four decades–has firmly established that people’s responses often deviate from the performance considered normative on many reasoning tasks. For example, people assess probabilities incorrectly, they display confirmation bias, they test hypotheses inefficiently, they violate the axioms of utility theory, they do not properly calibrate degrees of belief, they overproject their own opinions onto others, they display illogical framing effects, they uneconomically honor sunk costs, they allow prior knowledge to become implicated in deductive reasoning, and they display numerous other information processing biases.
Keith E. Stanovich, In R. J. Sternberg (Ed.), Why smart people can be so stupid (pp. 124-158). New Haven, CT: Yale University Press, ISBN-13: 9780300101706, September 2009
Jensen Comment
And all of these real-world complications are usually brushed aside by analytical accountics researchers, because real people mess up the mathematics.

 


Volker Laux  and D. Paul Newman, "Auditor Liability and Client Acceptance Decisions," The Accounting Review, Vol. 85, No. 1, 2010 pp. 261–285

One of the dubious assumptions of the entire Laux and Newman analysis is the assumed equilibrium of an audit firm's litigation payout for a particular client that has a higher likelihood of failing. If a client has a higher than average likelihood of failing, then it most likely is not in an equilibrium state.

Another leap of faith is continuity in the payout and risk functions to a point where second derivatives of such functions can be calculated. In reality such functions are likely to be highly discontinuous and subject to serious break points. It is not clear how such a model could ever be applied to a real-world audit client.

Another assumption is that the audit firm's ex ante utility function and a client firm's utility function are respectively as follows:

[The paper's assumed utility function equations, including Equation 20, are not reproduced here.]

Yeah right. Have these utility functions ever been validated for any real world client and auditor? As a matter of fact, what is the utility function of any corporation that according to agency theory is a nexus of contracts? My feeble mind cannot even imagine what a realistic utility function looks like for a nexus of contracts.

I would instead contend that there is no audit firm utility function apart from the interactions of the utilities of the major players in client acceptance/retention and audit pricing decisions. For example, before David Duncan was fired by Andersen, the decision to keep Enron as a client depended upon the interacting utility functions of David Duncan versus Carl Bass versus Joseph Berardino. None of them worked from a simplistic Andersen utility function such as the one shown in the paper's Equation 20. Each worked interactively with the others in a very complicated way that ended with Bass being released from the Enron audit and Berardino burying his head in the sands of Lake Michigan.

The audit firm utility function, if it exists, is based on the nexus of people rather than the nexus of contracts that we call a "corporation."

The Laux and Newman paper also fails to include the role of outside players in some decisions regarding risky clients. A huge outside player is the SEC, which is often brought into the arena. The SEC is currently playing a role in the "merry-go-round of auditors" at a corporation called Overstock.com, which is working with the SEC to find an auditor. See "Auditor Merry Go Round at Overstock.com," Big Four Blog, January 8, 2010 --- http://www.bigfouralumni.blogspot.com/ 

Another leap of faith in the Laux and Newman paper is that the auditor "liability environment" can be decomposed into "three components: (1) the strictness of the legal regime, defined as the probability that the auditor is sued and found liable in case of an audit failure, (2) potential damage payments from the auditor to investors and (3) other litigation costs incurred by the auditor, labeled litigation frictions, such as attorneys’ fees or loss of reputation." It would seem that these three components cannot be decomposed in real life without also accounting for their nonlinear and possibly huge covariances.
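To make the covariance point concrete, here is a minimal sketch in notation assumed only for illustration (it is not Laux and Newman's):

```latex
% A sketch of why the three "components" do not separate cleanly.
% Let q be the probability the auditor is sued and found liable,
% D the potential damage payment to investors, and F the litigation
% frictions (attorneys' fees, loss of reputation).
% The auditor's expected litigation cost is then
\[
  \mathbb{E}\bigl[\,q\,(D+F)\,\bigr]
  = \mathbb{E}[q]\,\mathbb{E}[D+F] + \operatorname{Cov}\bigl(q,\,D+F\bigr),
\]
% so analyzing the legal regime, the damage payments, and the frictions
% one at a time is harmless only when the covariance term is negligible.
```

A stricter legal regime that also drives up expected damages is precisely the situation in which that covariance term is unlikely to be negligible.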

A possible test of this study might be reference to one case illustration demonstrating that in at least one real-world instance "an increase in the potential damage payment actually leads to a reduction in the client rejection rate." In the absence of such real-world partial validation of the analytical results, we are asked to accept, on unsupported faith, a huge number of untested assumptions inside Plato's Cave.


In mathematical finance, a model derivation is occasionally put to the test. A real-world example of assumptions breaking down is the analytical model suspected of having contributed greatly to the recent economic crisis.

Can the 2008 investment banking failure be traced to a math error?
Recipe for Disaster:  The Formula That Killed Wall Street --- http://www.wired.com/techbiz/it/magazine/17-03/wp_quant?currentPage=all
Link forwarded by Jim Mahar ---
http://financeprofessorblog.blogspot.com/2009/03/recipe-for-disaster-formula-that-killed.html 

Some highlights:

"For five years, Li's formula, known as a Gaussian copula function, looked like an unambiguously positive breakthrough, a piece of financial technology that allowed hugely complex risks to be modeled with more ease and accuracy than ever before. With his brilliant spark of mathematical legerdemain, Li made it possible for traders to sell vast quantities of new securities, expanding financial markets to unimaginable levels.

His method was adopted by everybody from bond investors and Wall Street banks to ratings agencies and regulators. And it became so deeply entrenched—and was making people so much money—that warnings about its limitations were largely ignored.

Then the model fell apart." The article goes on to show that correlations are at the heart of the problem.

"The reason that ratings agencies and investors felt so safe with the triple-A tranches was that they believed there was no way hundreds of homeowners would all default on their loans at the same time. One person might lose his job, another might fall ill. But those are individual calamities that don't affect the mortgage pool much as a whole: Everybody else is still making their payments on time.

But not all calamities are individual, and tranching still hadn't solved all the problems of mortgage-pool risk. Some things, like falling house prices, affect a large number of people at once. If home values in your neighborhood decline and you lose some of your equity, there's a good chance your neighbors will lose theirs as well. If, as a result, you default on your mortgage, there's a higher probability they will default, too. That's called correlation—the degree to which one variable moves in line with another—and measuring it is an important part of determining how risky mortgage bonds are."

I would highly recommend reading the entire thing that gets much more involved with the actual formula etc.
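For readers who want to see the correlation point in miniature, here is a minimal sketch (the default probability and correlations are assumptions chosen for illustration, not Li's calibration) of how a Gaussian copula converts marginal default probabilities plus a correlation into a joint default probability:

```python
# A minimal Gaussian copula sketch for two mortgages (illustrative numbers only).
from scipy.stats import norm, multivariate_normal

p = 0.05                     # assumed marginal probability that each borrower defaults
threshold = norm.ppf(p)      # default occurs when the latent normal factor falls below this

for rho in (0.0, 0.3, 0.9):  # assumed correlations between the two borrowers
    joint = multivariate_normal(mean=[0.0, 0.0],
                                cov=[[1.0, rho], [rho, 1.0]]).cdf([threshold, threshold])
    print(f"rho = {rho:.1f}: P(both default) = {joint:.4f}")

# With rho = 0 the joint probability is p*p = 0.0025; as rho rises toward 1 it climbs
# toward p itself. "Individual calamities" and "falling house prices everywhere" are
# therefore radically different risks for the supposedly safe senior tranches.
```

The copula mathematics itself is not the error; what failed was the assumption that the correlation input would stay low when house prices fell everywhere at once.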

The “math error” might truly have been an error, or it might simply have been a gamble with what was perceived as minuscule odds of total market failure. Something similar happened in the disastrous 1998 collapse of Long-Term Capital Management, a fund formed by Nobel Prize-winning economists and their doctoral students who took similar gambles that ignored the “minuscule odds” of world market collapse ---
http://www.trinity.edu/rjensen/FraudRotten.htm#LTCM  

The rhetorical question is whether the failure lies in ignorance in model building or in risk taking using the model.

Also see
"In Plato's Cave:  Mathematical models are a powerful way of predicting financial markets. But they are fallible" The Economist, January 24, 2009, pp. 10-14 --- http://www.trinity.edu/rjensen/2008Bailout.htm#Bailout

Wall Street’s Math Wizards Forgot a Few Variables
“What wasn’t recognized was the importance of a different species of risk — liquidity risk,” Stephen Figlewski, a professor of finance at the Leonard N. Stern School of Business at New York University, told The Times. “When trust in counterparties is lost, and markets freeze up so there are no prices,” he said, it “really showed how different the real world was from our models.”
DealBook, The New York Times, September 14, 2009 ---
http://dealbook.blogs.nytimes.com/2009/09/14/wall-streets-math-wizards-forgot-a-few-variables/

Bottom Line
My conclusion is that the mathematical analytics papers in TAR are generally not adequately put to the test when the Senior Editor refuses to put commentaries on published papers out for review. This policy discourages independent researchers from even bothering to write commentaries on the published papers.

"Deductive reasoning,"  Phil Johnson-Laird, Wiley Interscience, ,2009 ---
http://www3.interscience.wiley.com/cgi-bin/fulltext/123228371/PDFSTART?CRETRY=1&SRETRY=0

This article begins with an account of logic, and of how logicians formulate formal rules of inference for the sentential calculus, which hinges on analogs of negation and the connectives if, or, and and. It considers the various ways in which computer scientists have written programs to prove the validity of inferences in this and other domains. Finally, it outlines the principal psychological theories of how human reasoners carry out deductions. © 2009 John Wiley & Sons, Ltd. WIREs Cogn Sci 2010 1 8–1

 

Audit Pricing in the Real World --- See Appendix 3


Warnings from a Theoretical Physicist With an Interest in Economics and Finance
"Beware of Economists (and accoutnics scientists) Peddling Elegant Models," by Mark Buchanan, Bloomberg, April 7, 2013 ---
http://www.bloomberg.com/news/2013-04-07/beware-of-economists-peddling-elegant-models.html 

. . .

In one very practical and consequential area, though, the allure of elegance has exercised a perverse and lasting influence. For several decades, economists have sought to express the way millions of people and companies interact in a handful of pretty equations.

The resulting mathematical structures, known as dynamic stochastic general equilibrium models, seek to reflect our messy reality without making too much actual contact with it. They assume that economic trends emerge from the decisions of only a few “representative” agents -- one for households, one for firms, and so on. The agents are supposed to plan and act in a rational way, considering the probabilities of all possible futures and responding in an optimal way to unexpected shocks.

Surreal Models

Surreal as such models might seem, they have played a significant role in informing policy at the world’s largest central banks. Unfortunately, they don’t work very well, and they proved spectacularly incapable of accommodating the way markets and the economy acted before, during and after the recent crisis.

Now, some economists are beginning to pursue a rather obvious, but uglier, alternative. Recognizing that an economy consists of the actions of millions of individuals and firms thinking, planning and perceiving things differently, they are trying to model all this messy behavior in considerable detail. Known as agent-based computational economics, the approach is showing promise.

Take, for example, a 2012 (and still somewhat preliminary) study by a group of economists, social scientists, mathematicians and physicists examining the causes of the housing boom and subsequent collapse from 2000 to 2006. Starting with data for the Washington D.C. area, the study’s authors built up a computational model mimicking the behavior of more than two million potential homeowners over more than a decade. The model included detail on each individual at the level of race, income, wealth, age and marital status, and on how these characteristics correlate with home buying behavior.

Led by further empirical data, the model makes some simple, yet plausible, assumptions about the way people behave. For example, homebuyers try to spend about a third of their annual income on housing, and treat any expected house-price appreciation as income. Within those constraints, they borrow as much money as lenders’ credit standards allow, and bid on the highest-value houses they can. Sellers put their houses on the market at about 10 percent above fair market value, and reduce the price gradually until they find a buyer.

The model captures things that dynamic stochastic general equilibrium models do not, such as how rising prices and the possibility of refinancing entice some people to speculate, buying more-expensive houses than they otherwise would. The model accurately fits data on the housing market over the period from 1997 to 2010 (not surprisingly, as it was designed to do so). More interesting, it can be used to probe the deeper causes of what happened.

Consider, for example, the assertion of some prominent economists, such as Stanford University’s John Taylor, that the low-interest-rate policies of the Federal Reserve were to blame for the housing bubble. Some dynamic stochastic general equilibrium models can be used to support this view. The agent-based model, however, suggests that interest rates weren’t the primary driver: If you keep rates at higher levels, the boom and bust do become smaller, but only marginally.

Leverage Boom

A much more important driver might have been leverage -- that is, the amount of money a homebuyer could borrow for a given down payment. In the heady days of the housing boom, people were able to borrow as much as 100 percent of the value of a house -- a form of easy credit that had a big effect on housing demand. In the model, freezing leverage at historically normal levels completely eliminates both the housing boom and the subsequent bust.

Does this mean leverage was the culprit behind the subprime debacle and the related global financial crisis? Not necessarily. The model is only a start and might turn out to be wrong in important ways. That said, it makes the most convincing case to date (see my blog for more detail), and it seems likely that any stronger case will have to be based on an even deeper plunge into the messy details of how people behaved. It will entail more data, more agents, more computation and less elegance.

If economists jettisoned elegance and got to work developing more realistic models, we might gain a better understanding of how crises happen, and learn how to anticipate similarly unstable episodes in the future. The theories won’t be pretty, and probably won’t show off any clever mathematics. But we ought to prefer ugly realism to beautiful fantasy.

(Mark Buchanan, a theoretical physicist and the author of “The Social Atom: Why the Rich Get Richer, Cheaters Get Caught and Your Neighbor Usually Looks Like You,” is a Bloomberg View columnist. The opinions expressed are his own.)
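The behavioral rules Buchanan describes (spend about a third of income on housing, treat expected appreciation as income, borrow up to the lender's loan-to-value limit) can be reduced to a toy bidding rule. The sketch below is an illustration under assumed numbers and a hypothetical max_bid function; it is not the study's actual agent-based model:

```python
# A toy homebuyer bidding rule (assumptions only, not the study's model).
def max_bid(annual_income, expected_appreciation_rate, mortgage_rate,
            down_payment, max_loan_to_value):
    housing_budget = annual_income / 3.0                      # spend ~1/3 of income on housing
    # Expected appreciation is treated as income, so the effective annual cost
    # per dollar of house is the mortgage rate minus the expected gain (floored).
    net_cost_rate = max(mortgage_rate - expected_appreciation_rate, 0.01)
    affordability_cap = housing_budget / net_cost_rate
    # The lender's loan-to-value ceiling caps the bid given the buyer's down payment.
    leverage_cap = (down_payment / (1.0 - max_loan_to_value)
                    if max_loan_to_value < 1.0 else float("inf"))
    return min(affordability_cap, leverage_cap)

print(max_bid(100_000, 0.00, 0.06, 20_000, 0.80))  # normal credit, no expected gains
print(max_bid(100_000, 0.05, 0.06, 20_000, 1.00))  # boom-era credit and bubble expectations
```

With 100 percent loan-to-value lending the leverage cap disappears and the bid is driven almost entirely by appreciation expectations, which is the dynamic the agent-based study isolates when it reports that freezing leverage at historical levels eliminates the boom and bust.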

Jensen Comment
Bob Jensen's threads on the mathematical formula that probably led to the economic collapse after mortgage lenders peddled all those poisoned mortgages ---
 


"What use is game theory?" by Steve Hsu, Information Processing, May 4, 2011 ---
http://infoproc.blogspot.com/2011/05/what-use-is-game-theory.html

Fantastic interview with game theorist Ariel Rubinstein on Econtalk. I agree with Rubinstein that game theory has little predictive power in the real world, despite the pretty mathematics. Experiments at RAND (see, e.g., Mirowski's Machine Dreams) showed early game theorists, including Nash, that people don't conform to the idealizations in their models. But this wasn't emphasized (Mirowski would claim it was deliberately hushed up) until more and more experiments showed similar results. (Who woulda thought -- people are "irrational"! :-)

Perhaps the most useful thing about game theory is that it requires you to think carefully about decision problems. The discipline of this kind of analysis is valuable, even if the models have limited applicability to real situations.

Rubinstein discusses a number of topics, including raw intelligence vs psychological insight and its importance in economics
(see also here). He has, in my opinion, a very developed and mature view of what social scientists actually do, as opposed to what they claim to do.

Continued in article


The problem is that when the model created to represent reality takes on a life of its own, completely detached from the reality it is supposed to model, nonsense can easily ensue.

Was it Mark Twain who wrote: "The criterion of understanding is a simple explanation."?
As quoted by Martin Weiss in a comment to the article below.

But a lie gets halfway around the world while the truth is still tying its shoes
Mark Twain as quoted by PKB (in Mankato, MN) in a comment to the article below.

"US Net Investment Income," by Paul Krugman, The New York Times, December 31, 2011 ---
http://krugman.blogs.nytimes.com/2011/12/31/us-net-investment-income/
Especially note the cute picture.

December 31, 2011 Comment by Wendell Murray
http://krugman.blogs.nytimes.com/2011/12/31/i-like-math/#postComment

Mathematics, like word-oriented languages, uses symbols to represent concepts, so it is essentially the same as word-oriented languages that everyone is comfortable with.

Because mathematics is much more precise and in most ways much simpler than word-oriented languages, it is useful for modeling (abstraction from) of the messiness of the real world.

The problem, as Prof. Krugman notes, is when the model created to represent reality takes on a life of its own completely detached from the reality that it is supposed to model that nonsense can easily ensue.

This is what has happened in the absurd conclusions often reached by those who blindly believe in the infallibility of hypotheses such as the rational expectations theory or even worse the completely peripheral concept of so-called Ricardian equivalence. These abstractions from reality have value only to the extent that they capture the key features of reality. Otherwise they are worse than useless.

I think some academics and/or knowledgeless distorters of academic theories in fact just like to use terms such as "Ricardian equivalence theorem" because that term, for example, sounds so esoteric whereas the theorem itself is not much of anything.

Ricardian Equivalence --- http://en.wikipedia.org/wiki/Ricardian_equivalence

Jensen Comment
One of the saddest flaws of accountics science archival studies is the repeated acceptance of CAPM mathematics, allowing the CAPM to take on a life of its own detached from reality, when in fact the CAPM is a seriously flawed representation of investing reality ---
http://www.trinity.edu/rjensen/theory01.htm#AccentuateTheObvious

At the same time one of the things I dislike about the exceedingly left-wing biased, albeit brilliant, Paul Krugman is his playing down of trillion dollar deficit spending and his flippant lack of concern about $80 trillion in unfunded entitlements. He just turns a blind eye toward risks of Zimbabwe-like inflation. As noted below, he has a Nobel Prize in Economics but "doesn't command respect in the profession". Put another way, he's more of a liberal preacher than an economics teacher.

Paul Krugman --- http://en.wikipedia.org/wiki/Paul_Krugman

Economics and policy recommendations

Economist and former United States Secretary of the Treasury Larry Summers has stated Krugman has a tendency to favor more extreme policy recommendations because "it’s much more interesting than agreement when you’re involved in commenting on rather than making policy."

According to Harvard professor of economics Robert Barro, Krugman "has never done any work in Keynesian macroeconomics" and makes arguments that are politically convenient for him. Nobel laureate Edward Prescott has charged that Krugman "doesn't command respect in the profession", as "no respectable macroeconomist" believes that economic stimulus works, though the number of economists who support such stimulus is "probably a majority".

Bob Jensen's critique of analytical models in accountics science (Plato's Cave) can be found at
http://www.trinity.edu/rjensen/TheoryTAR.htm#Analytics


Why Do Accountics Scientists Get Along So Well?

To a fault I've argued that accountics scientists do not challenge each other or do replications and other validity tests of their published research ---
See below.

By comparison the real science game is much more a hard ball game of replication, critical commentary, and other validity checking. Accountics scientists have a long way to go in their quest to become more like real scientists.

 

"Casualty of the Math Wars," by Scott Jaschik, Inside Higher Ed, October 15, 2012 ---
http://www.insidehighered.com/news/2012/10/15/stanford-professor-goes-public-attacks-over-her-math-education-research

. . .

The "math wars" have raged since the 1990s. A series of reform efforts (of which Boaler's work is a part) have won support from many scholars and a growing number of school districts. But a traditionalist school (of which Milgram and Bishop are part) has pushed back, arguing that rigor and standards are being sacrificed. Both sides accuse the other of oversimplifying the other's arguments, and studies and op-eds from proponents of the various positions appear regularly in education journals and the popular press. Several mathematics education experts interviewed for this article who are supportive of Boaler and her views stressed that they did not view all, or even most, criticism from the "traditionalist" camp as irresponsible.

The essay Boaler published Friday night noted that there has been "spirited academic debate" about her ideas and those of others in mathematics education, and she says that there is nothing wrong with that.

"Milgram and Bishop have gone beyond the bounds of reasoned discourse in a campaign to systematically suppress empirical evidence that contradicts their stance," Boaler wrote. "Academic disagreement is an inevitable consequence of academic freedom, and I welcome it. However, responsible disagreement and academic bullying are not the same thing. Milgram and Bishop have engaged in a range of tactics to discredit me and damage my work which I have now decided to make public."

Some experts who have been watching the debate say that the reason this dispute is important is because Boaler's work is not based simply on a critique of traditional methods of teaching math, but because she has data to back up her views.

Keith Devlin, director of the Human Sciences and Technologies Advanced Research Institute at Stanford, said that he has "enormous respect" for Boaler, although he characterized himself as someone who doesn't know her well, but has read her work and is sympathetic to it. He said that he shares her views, but that he does so "based on my own experience and from reading the work of others," not from his own research. So he said that while he has also faced "unprofessional" attacks when he has expressed those views, he hasn't attracted the same level of criticism as has Boaler.

Of her critics, Devlin said that "I suspect they fear her because she brings hard data that threatens their view of how children should be taught mathematics." He said that the criticisms of Boaler reach "the point of character assassination."

Debating the Data

The Milgram/Bishop essay that Boaler said has unfairly damaged her reputation is called "A Close Examination of Jo Boaler's Railside Report," and appears on Milgram's Stanford website. ("Railside" refers to one of the schools Boaler studied.) The piece says that Boaler's claims are "grossly exaggerated," and yet expresses fear that they could be influential and so need to be rebutted. Under federal privacy protection requirements for work involving schoolchildren, Boaler agreed to keep confidential the schools she studied and, by extension, information about teachers and students. The Milgram/Bishop essay claims to have identified some of those schools and says this is why they were able to challenge her data.

Boaler said -- in her essay and in an interview -- that this puts her in a bind. She cannot reveal more about the schools without violating confidentiality pledges, even though she is being accused of distorting data. While the essay by Milgram and Bishop looks like a journal article, Boaler notes that it has in fact never been published, in contrast to her work, which has been subjected to peer review in multiple journals and by various funding agencies.

Further, she notes that Milgram's and Bishop's accusations were investigated by Stanford when Milgram in 2006 made a formal charge of research misconduct against her, questioning the validity of her data collection. She notes in her new essay that the charges "could have destroyed my career." Boaler said that her final copy of the initial investigation was deemed confidential by the university, but she provided a copy of the conclusions, which rejected the idea that there had been any misconduct.

Here is the conclusion of that report: "We understand that there is a currently ongoing (and apparently passionate) debate in the mathematics education field concerning the best approaches and methods to be applied in teaching mathematics. It is not our task under Stanford's policy to determine who is 'right' and who is 'wrong' in this academic debate. We do note that Dr. Boaler's responses to the questions put to her related to her report were thorough, thoughtful, and offered her scientific rationale for each of the questions underlying the allegations. We found no evidence of scientific misconduct or fraudulent behavior related to the content of the report in question. In short, we find that the allegations (such as they are) of scientific misconduct do not have substance."

Even though the only body to examine the accusations made by Milgram rejected them, and even though the Milgram/Bishop essay has never been published beyond Milgram's website, the accusations in the essay have followed Boaler all over as supporters of Milgram and Bishop cite the essay to question Boaler's ethics. For example, an article she and a co-author wrote about her research that was published in a leading journal in education research, Teachers College Record, attracted a comment that said the findings were "imaginative" and asked if they were "a prime example of data cooking." The only evidence offered: a link to the Milgram/Bishop essay.

In an interview, Boaler said that, for many years, she has simply tried to ignore what she considers to be unprofessional, unfair criticism. But she said she was prompted to speak out after thinking about the fallout from an experience this year when Irish educational authorities brought her in to consult on math education. When she wrote an op-ed in The Irish Times, a commenter suggested that her ideas be treated with "great skepticism" because they had been challenged by prominent professors, including one at her own university. Again, the evidence offered was a link to the Stanford URL of the Milgram/Bishop essay.

"This guy Milgram has this on a webpage. He has it on a Stanford site. They have a campaign that everywhere I publish, somebody puts up a link to that saying 'she makes up data,' " Boaler said. "They are stopping me from being able to do my job."

She said one reason she decided to go public is that doing so gives her a link she can use whenever she sees a link to the essay attacking her work.

Bishop did not respond to e-mail messages requesting comment about Boaler's essay. Milgram via e-mail answered a few questions about Boaler's essay. He said she inaccurately characterized a meeting they had after she arrived at Stanford. (She said he discouraged her from writing about math education.) Milgram denied engaging in "academic bullying."

He said via e-mail that the essay was prepared for publication in a journal and was scheduled to be published, but "the HR person at Stanford has some reservations because it turned out that it was too easy to do a Google search on some of the quotes in the paper and thereby identify the schools involved. At that point I had so many other things that I had to attend to that I didn't bother to make the corrections." He also said that he has heard more from the school since he wrote the essay, and that these additional discussions confirm his criticism of Boaler's work.

In an interview Sunday afternoon, Milgram said that by "HR" in the above quote, he meant "human research," referring to the office at Stanford that works to protect human subjects in research. He also said that since it was only those issues that prevented publication, his critique was in fact peer-reviewed, just not published.

Further, he said that Stanford's investigation of Boaler was not handled well, and that those on the committee considered the issue "too delicate and too hot a potato." He said he stood behind everything in the paper. As to Boaler's overall criticism of him, he said that he would "have discussions with legal people, and I'll see if there is an appropriate action to be taken, but my own inclination is to ignore it."

Milgram also rejected the idea that it was not appropriate for him to speak out on these issues as he has. He said he first got involved in raising questions about research on math education at the request of an assistant in the office of Rod Paige, who held the job of U.S. education secretary during the first term of President George W. Bush.

Ze'ev Wurman, a supporter of Milgram and Bishop, and one who has posted the link to their article elsewhere, said he wasn't bothered by its never having been published. "She is basically using the fact that it was not published to undermine its worth rather than argue the specific charges leveled there by serious academics," he said.

Critiques 'Without Merit'

E-mail requests for comment from several leading figures in mathematics education resulted in strong endorsements of Boaler's work and frustration at how she has been treated over the years.

Jeremy Kilpatrick, a professor of mathematics education at the University of Georgia who has chaired commissions on the subject for the National Research Council and the Rand Corporation, said that "I have long had great respect for Jo Boaler and her work, and I have been very disturbed that it has been attacked as faulty or disingenuous. I have been receiving multiple e-mails from people who are disconcerted at the way she has been treated by Wayne Bishop and Jim Milgram. The critiques by Bishop and Milgram of her work are totally without merit and unprofessional. I'm pleased that she has come forward at last to give her side of the story, and I hope that others will see and understand how badly she has been treated."

Alan H. Schoenfeld is the Elizabeth and Edward Conner Professor of Education at the University of California at Berkeley, and a past president of the American Educational Research Association and past vice president of the National Academy of Education. He was reached in Sweden, where he said his e-mail has been full of commentary about Boaler's Friday post. "Boaler is a very solid researcher. You don't get to be a professor at Stanford, or the Marie Curie Professor of Mathematics Education at the University of Sussex [the position she held previously], unless you do consistently high quality, peer-reviewed research."

Schoenfeld said that the discussion of Boaler's work "fits into the context of the math wars, which have sometimes been argued on principle, but in the hands of a few partisans, been vicious and vitriolic." He said that he is on a number of informal mathematics education networks, and that the response to Boaler's essay "has been swift and, most generally, one of shock and support for Boaler." One question being asked, he said, is why Boaler was investigated and no university has investigated the way Milgram and Bishop have treated her.

A spokeswoman for Stanford said the following via e-mail: "Dr. Boaler is a nationally respected scholar in the field of math education. Since her arrival more than a decade ago, Stanford has provided extensive support for Dr. Boaler as she has engaged in scholarship in this field, which is one in which there is wide-ranging academic opinion. At the same time, Stanford has carefully respected the fundamental principle of academic freedom: the merits of a position are to be determined by scholarly debate, rather than by having the university arbitrate or interfere in the academic discourse."

Boaler in Her Own Words

Here is a YouTube video of Boaler discussing and demonstrating her ideas about math education with a group of high school students in Britain.

Continued in article

How Accountics Scientists Should Change: 
"Frankly, Scarlett, after I get a hit for my resume in The Accounting Review I just don't give a damn"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm
One more mission in what's left of my life will be to try to change this so that we don't get along so well
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm

 

 


Why Pick on The Accounting Review (TAR)?

The Accounting Review (TAR) since 1926 ---
http://www.trinity.edu/rjensen/395wpTAR/Web/TAR395wp.htm

Jensen Comment
Occasionally I receive messages questioning why I pick on TAR when in fact my complaints are really with accountics scientists and accountics science in general.

Accountics is the mathematical science of values.
Charles Sprague [1887] as quoted by McMillan [1998, p. 1]

http://www.trinity.edu/rjensen/395wpTAR/Web/TAR395wp.htm 

 


David Johnstone asked me to write a paper on the following:
"A Scrapbook on What's Wrong with the Past, Present and Future of Accountics Science"
Bob Jensen
February 19, 2014
SSRN Download:  http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2398296 

Abstract

For operational convenience I define accountics science as research that features equations and/or statistical inference. Historically, there was a heated debate in the 1920s as to whether the main research journal of academic accounting, The Accounting Review (TAR) that commenced in 1926, should be an accountics journal with articles that mostly featured equations. Practitioners and teachers of college accounting won that debate.

TAR articles and accountancy doctoral dissertations prior to the 1970s seldom had equations. For reasons summarized below, doctoral programs and TAR evolved to the point where, by the 1990s, having equations became virtually a necessary condition for a doctoral dissertation and acceptance of a TAR article. Qualitative normative and case-method methodologies disappeared from doctoral programs.

What’s really meant by “featured equations” in doctoral programs is merely symbolic of the fact that North American accounting doctoral programs pushed out most of the accounting to make way for econometrics and statistics that are now keys to the kingdom for promotion and tenure in accounting schools ---
http://www.trinity.edu/rjensen/Theory01.htm#DoctoralPrograms

The purpose of this paper is to make a case that the accountics science monopoly of our doctoral programs and published research is seriously flawed, especially its lack of concern about replication and focus on simplified artificial worlds that differ too much from reality to creatively discover findings of greater relevance to teachers of accounting and practitioners of accounting. Accountics scientists themselves became a Cargo Cult.

 

 


June 5, 2013 reply to a long thread by Bob Jensen

Hi Steve,

As usual, these AECM threads between you, me, and Paul Williams resolve nothing to date. TAR still has zero articles without equations unless such articles are forced upon the editors, as the Kaplan article was forced upon you as Senior Editor. TAR still has no commentaries about the papers it publishes, and the authors make no attempt to communicate and have dialog about their research on the AECM or the AAA Commons.

I do hope that our AECM threads will continue and lead one day to when the top academic research journals do more to both encourage (1) validation (usually by speedy replication), (2) alternate methodologies, (3) more innovative research, and (4) more interactive commentaries.

I remind you that Professor Basu's essay is only one of four essays bundled together in Accounting Horizons on the topic of how to make accounting research, especially the so-called Accounting Science or Accountics Science or Cargo Cult science, more innovative.

The four essays in this bundle are summarized and extensively quoted at http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm#Essays 

I will try to keep drawing attention to these important essays and spend the rest of my professional life trying to bring accounting research closer to the accounting profession.

I also want to dispel the myth that accountics research is harder than making research discoveries without equations. The hardest research I can imagine (and where I failed) is to make a discovery that has a noteworthy impact on the accounting profession. I always look but never find such discoveries reported in TAR.

The easiest research is to purchase a database and beat it with an econometric stick until something falls out of the clouds. I've searched for years and find very little that has a noteworthy impact on the accounting profession. Quite often there is a noteworthy impact on other members of the Cargo Cult and doctoral students seeking to beat the same data with their sticks. But try to find a practitioner with an interest in these academic accounting discoveries?

Our latest thread leads me to such questions as:

  1. Is accounting research of inferior quality relative to other disciplines like engineering and finance?

  2. Are there serious innovation gaps in academic accounting research?

  3. Is accounting research stagnant?

  4. How can accounting researchers be more innovative?

  5. Is there an "absence of dissent" in academic accounting research?

  6. Is there an absence of diversity in our top academic accounting research journals and doctoral programs?

  7. Is there a serious disinterest (except among the Cargo Cult) and lack of validation in findings reported in our academic accounting research journals, especially TAR?

  8. Is there a huge communications gap between academic accounting researchers and those who toil teaching accounting and practicing accounting?

  9. Why do our accountics scientists virtually ignore the AECM and the AAA Commons and the Pathways Commission Report?
     http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm

One fall out of this thread is that I've been privately asked to write a paper about such matters. I hope that others will compete with me in thinking and writing about these serious challenges to academic accounting research that never seem to get resolved.

Thank you Steve for sometimes responding in my threads on such issues in the AECM.

Respectfully,
Bob Jensen


Rise in Research Cheating
"A Sharp Rise in Retractions Prompts Calls for Reform," by Carl Zimmer, The New York Times, April 16, 2012 ---
http://www.nytimes.com/2012/04/17/science/rise-in-scientific-journal-retractions-prompts-calls-for-reform.html?_r=2&

In the fall of 2010, Dr. Ferric C. Fang made an unsettling discovery. Dr. Fang, who is editor in chief of the journal Infection and Immunity, found that one of his authors had doctored several papers.

It was a new experience for him. “Prior to that time,” he said in an interview, “Infection and Immunity had only retracted nine articles over a 40-year period.”

The journal wound up retracting six of the papers from the author, Naoki Mori of the University of the Ryukyus in Japan. And it soon became clear that Infection and Immunity was hardly the only victim of Dr. Mori’s misconduct. Since then, other scientific journals have retracted two dozen of his papers, according to the watchdog blog Retraction Watch.

“Nobody had noticed the whole thing was rotten,” said Dr. Fang, who is a professor at the University of Washington School of Medicine.

Dr. Fang became curious how far the rot extended. To find out, he teamed up with a fellow editor at the journal, Dr. Arturo Casadevall of the Albert Einstein College of Medicine in New York. And before long they reached a troubling conclusion: not only that retractions were rising at an alarming rate, but that retractions were just a manifestation of a much more profound problem — “a symptom of a dysfunctional scientific climate,” as Dr. Fang put it.

Dr. Casadevall, now editor in chief of the journal mBio, said he feared that science had turned into a winner-take-all game with perverse incentives that lead scientists to cut corners and, in some cases, commit acts of misconduct.

“This is a tremendous threat,” he said.

Last month, in a pair of editorials in Infection and Immunity, the two editors issued a plea for fundamental reforms. They also presented their concerns at the March 27 meeting of the National Academies of Sciences committee on science, technology and the law.

Members of the committee agreed with their assessment. “I think this is really coming to a head,” said Dr. Roberta B. Ness, dean of the University of Texas School of Public Health. And Dr. David Korn of Harvard Medical School agreed that “there are problems all through the system.”

No one claims that science was ever free of misconduct or bad research. Indeed, the scientific method itself is intended to overcome mistakes and misdeeds. When scientists make a new discovery, others review the research skeptically before it is published. And once it is, the scientific community can try to replicate the results to see if they hold up.

But critics like Dr. Fang and Dr. Casadevall argue that science has changed in some worrying ways in recent decades — especially biomedical research, which consumes a larger and larger share of government science spending.

In October 2011, for example, the journal Nature reported that published retractions had increased tenfold over the past decade, while the number of published papers had increased by just 44 percent. In 2010 The Journal of Medical Ethics published a study finding the new raft of recent retractions was a mix of misconduct and honest scientific mistakes.

Several factors are at play here, scientists say. One may be that because journals are now online, bad papers are simply reaching a wider audience, making it more likely that errors will be spotted. “You can sit at your laptop and pull a lot of different papers together,” Dr. Fang said.

But other forces are more pernicious. To survive professionally, scientists feel the need to publish as many papers as possible, and to get them into high-profile journals. And sometimes they cut corners or even commit misconduct to get there.

To measure this claim, Dr. Fang and Dr. Casadevall looked at the rate of retractions in 17 journals from 2001 to 2010 and compared it with the journals’ “impact factor,” a score based on how often their papers are cited by scientists. The higher a journal’s impact factor, the two editors found, the higher its retraction rate.

The highest “retraction index” in the study went to one of the world’s leading medical journals, The New England Journal of Medicine. In a statement for this article, it questioned the study’s methodology, noting that it considered only papers with abstracts, which are included in a small fraction of studies published in each issue. “Because our denominator was low, the index was high,” the statement said.

Continued in article

Bob Jensen's threads on cheating by faculty are at
http://www.trinity.edu/rjensen/Plagiarism.htm#ProfessorsWhoPlagiarize

August 14, 2013 reply from Dennis Huber

Hmmmm. I wonder. Does accounting research culture also need to be reformed?

August 14, 2013 reply from Bob Jensen

Hi Dennis,

Academics have debated the need for reform in academic accounting research for decades. There are five primary areas of recommended reform, but those areas overlap a great deal.

One area of suggested reform is to make it less easy to cheat and commit undetected errors in academic accounting research by forcing/encouraging replication, which is part and parcel of quality control in real science ---
http://www.trinity.edu/rjensen/TheoryTAR.htm 

A second area of improvement would turn accountics science from a pseudoscience into a real science. Real science does not settle for inferring causality from correlation when the causal data needed is not contained in the databases studied empirically with econometric models.

Real scientists granulate deeper and deeper for causal factors to test whether correlations are spurious. Accountics scientists seldom granulate beyond their purchased databases ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsGranulationCurrentDraft.pdf 

A third area of improvement would arise if accountics scientists were forced to communicate their research findings better with accounting teachers and practitioners. Accountics scientists just do not care about such communications and should be forced to communicate in other venues such as having publication in a Tech Corner of the AAA Commons ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm#Commons

A fourth area of improvement would be to expand the research methods of accountics science to take on more interesting topics that are not so amenable to traditional quantitative and statistical modeling. See the Cargo Cult mentality criticisms of accountics scientists at
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm#Essays


It might be argued that accountics scientists don't replicate their findings because nobody gives a damn about their findings ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm#CargoCult
That's taking the criticisms too far. I find lots of accountics science findings interesting. It's just that accountics scientists ignore topics that I find more interesting --- particularly topics of interest to accounting practitioners.

A fifth and related problem is that academic accounting inventors are rare in comparison with academic inventors in science and engineering ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm#Inventors

I summarize how academic accounting researchers should change at
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm

 

 


Shame on you Richard. You claimed a totally incorrect reason for not having any interest in the Pathways Commission Report. It is totally incorrect to assume that the PC Report resolutions apply only to the CPA profession.

Did you ever read the PC  Report?
http://commons.aaahq.org/files/0b14318188/Pathways_Commission_Final_Report_Complete.pdf
 

Perhaps you just never read as far as Page 109 of the PC Report quoted below:

Accounting Profession

1. The need to enhance the bilateral relationship between the practice community and academe.

From the perspective of the profession, one impediment to change has been the lack of a consistent relationship between a broadly defined profession (i.e., public, private, government) and a broadly defined academy—large and small public and private institutions. This impediment can be broken down into three subparts. First, the Commission recommends the organizations and individuals in the practice community work with accounting educators to provide access to their internal training seminars, so faculty can remain current with the workings of the profession. These organizations also need to develop internship-type opportunities for interested faculty. Second, the practice community and regulators need to reduce the barriers academics have in obtaining research data. All stakeholders must work together to determine how to overcome the privacy, confidentiality, and regulatory issues that impede a greater number of researchers from obtaining robust data needed for many of these research projects. Having access to this data could be instrumental in helping the academy provide timely answers to the profession on the impact of policy decisions on business practice. Third, the profession and the academy need to share pedagogy best practices and resources, especially with respect to rapidly changing educational delivery models as both are essential segments of the lifelong educational pathway of accounting professionals.

Conversely, academia is not without fault in the development of this relationship. The Commission recommends that more institutions, possibly through new accreditation standards, engage more practitioners as executives in residence in the classroom. These individuals can provide a different perspective on various topics and thus might better explain what they do, how they do it, and why they do it. Additionally, the Commission recommends institutions utilize accounting professionals through department advisory boards that can assist the department in the development of its curriculum.



Jensen Comment
I contend that you are simply another accountics scientist member of the Cargo Cult looking for feeble luddite excuses to run for cover from the Pathways Commission resolutions, especially resolutions to conduct more clinical research and add diversity to the curricula of accounting doctoral programs.


Thank you for this honesty. But have you ever looked at the Pathways Commission Report?


Have you ever looked at the varied professionals who generated this report and support its resolutions? In addition to CPA firms and universities, many of the Commissioners come from major employers of Tuck School graduates, including large and small corporations and consulting firms.
The Report is located at
http://commons.aaahq.org/files/0b14318188/Pathways_Commission_Final_Report_Complete.pdf


The Pathways Commission was made up of representatives of all segments of accounting academe, industrial accounting, and not-for-profit accounting. This Commission never intended its resolutions to apply only to public accounting, which by the way includes tax accounting where you do most of your research. You're grasping at straws here Richard!


Most accountics Cargo Cult scientists are silent and smug with respect to the Pathways Commission Report, especially its advocacy of clinical research and research methods extending beyond GLM data mining of commercial databases, which the AAA leadership itself is admitting has grown stale and lacks innovation ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm#Essays



This is a perfect opportunity for me to recall the cargo plane scene from a movie called Mondo Cane ---
http://en.wikipedia.org/wiki/Mondo_cane


 Sudipta Basu
picked up on the Cargo Cult analogy in describing the stagnation of accountics science research over the past few decades.

 

"How Can Accounting Researchers Become More Innovative? by Sudipta Basu, Accounting Horizons, December 2012, Vol. 26, No. 4, pp. 851-87 ---
http://aaajournals.org/doi/full/10.2308/acch-10311 


 

We fervently hope that the research pendulum will soon swing back from the narrow lines of inquiry that dominate today's leading journals to a rediscovery of the richness of what accounting research can be. For that to occur, deans and the current generation of academic accountants must give it a push.
Michael H. Granof and Stephen A. Zeff (2008)


 

Rather than clinging to the projects of the past, it is time to explore questions and engage with ideas that transgress the current accounting research boundaries. Allow your values to guide the formation of your research agenda. The passion will inevitably follow.
Joni J. Young (2009)

. . .

Is Academic Accounting a “Cargo Cult Science”?

In a commencement address at Caltech titled “Cargo Cult Science,” Richard Feynman (1974) discussed “science, pseudoscience, and learning how not to fool yourself.” He argued that despite great efforts at scientific research, little progress was apparent in school education. Reading and mathematics scores kept declining, despite schools adopting the recommendations of experts. Feynman (1974, 11) dubbed fields like these “Cargo Cult Sciences,” explaining the term as follows:

In the South Seas there is a Cargo Cult of people. During the war they saw airplanes land with lots of good materials, and they want the same things to happen now. So they've arranged to make things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas—he's the controller—and they wait for the airplanes to land. They're doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn't work. No airplanes land. So I call these things Cargo Cult Science, because they follow all the apparent precepts and forms of scientific investigation, but they're missing something essential, because the planes don't land.

Feynman (1974) argued that the key distinction between a science and a Cargo Cult Science is scientific integrity: “[T]he idea is to give all of the information to help others judge the value of your contribution; not just the information that leads to judgment in one particular direction or another.” In other words, papers should not be written to provide evidence for one's hypothesis, but rather to “report everything that you think might make it invalid.” Furthermore, “you should not fool the layman when you're talking as a scientist.”

Even though more and more detailed rules are constantly being written by the SEC, FASB, IASB, PCAOB, AICPA, and other accounting experts (e.g., Benston et al. 2006), the number and severity of accounting scandals are not declining, which is Feynman's (1969) hallmark of a pseudoscience. Because accounting standards often reflect standard-setters' ideology more than research into the effectiveness of different alternatives, it is hardly surprising that accounting quality has not improved. Even preliminary research findings can be transformed journalistically into irrefutable scientific results by the political process of accounting standard-setting. For example, the working paper results of Frankel et al. (2002) were used to justify the SEC's longstanding desire to ban non-audit services in the Sarbanes-Oxley Act of 2002, even though the majority of contemporary and subsequent studies found different results (Romano 2005). Unfortunately, the ability to bestow status by invitation to select conferences and citation in official documents (e.g., White 2005) may let standard-setters set our research and teaching agendas (Zeff 1989).

Academic Accounting and the “Cult of Statistical Significance”

Ziliak and McCloskey (2008) argue that, in trying to mimic physicists, many biologists and social scientists have become devotees of statistical significance, even though most articles in physics journals do not report statistical significance. They argue that statistical tests are typically used to infer whether a particular effect exists, rather than to measure the magnitude of the effect, which usually has more practical import. While early empirical accounting researchers such as Ball and Brown (1968) and Beaver (1968) went to great lengths to estimate how much extra information reached the stock market in the earnings announcement month or week, subsequent researchers limited themselves to answering whether other factors moderated these effects. Because accounting theories rarely provide quantitative predictions (e.g., Kinney 1986), accounting researchers perform nil hypothesis significance testing rituals, i.e., test unrealistic and atheoretical null hypotheses that a particular coefficient is exactly zero.15 While physicists devise experiments to measure the mass of an electron to the accuracy of tens of decimal places, accounting researchers are still testing the equivalent of whether electrons have mass. Indeed, McCloskey (2002) argues that the “secret sins of economics” are that economics researchers use quantitative methods to produce qualitative research outcomes such as (non-)existence theorems and statistically significant signs, rather than to predict and measure quantitative (how much) outcomes.

Practitioners are more interested in magnitudes than existence proofs, because the former are more relevant in decision making. Paradoxically, accounting research became less useful in the real world by trying to become more scientific (Granof and Zeff 2008). Although every empirical article in accounting journals touts the statistical significance of the results, practical significance is rarely considered or discussed (e.g., Lev 1989). Empirical articles do not often discuss the meaning of a regression coefficient with respect to real-world decision variables and their outcomes. Thus, accounting research results rarely have practical implications, and this tendency is likely worst in fields with the strongest reliance on statistical significance such as financial reporting research.
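A minimal simulation (assumed effect size and sample size, chosen only for illustration) shows how easily statistical significance can coexist with practical insignificance:

```python
# Statistically significant, practically negligible: a simulated illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.normal(size=n)
y = 0.01 * x + rng.normal(size=n)   # assumed true effect of 0.01 standard deviations

r, p_value = stats.pearsonr(x, y)
print(f"correlation = {r:.4f}, p-value = {p_value:.1e}, variance explained = {r**2:.4%}")
# The p-value is astronomically small, yet x explains roughly 0.01% of the
# variation in y -- a "finding" with no practical import for a decision maker.
```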

Ziliak and McCloskey (2008) highlight a deeper concern about over-reliance on statistical significance—that it does not even provide evidence about whether a hypothesis is true or false. Carver (1978) provides a memorable example of drawing the wrong inference from statistical significance:

What is the probability of obtaining a dead person (label this part D) given that the person was hanged (label this part H); this is, in symbol form, what is P(D|H)? Obviously, it will be very high, perhaps 0.97 or higher. Now, let us reverse the question. What is the probability that a person has been hanged (H), given that the person is dead (D); that is, what is P(H|D)? This time the probability will undoubtedly be very low, perhaps 0.01 or lower. No one would be likely to make the mistake of substituting the first estimate (0.97) for the second (0.01); that is, to accept 0.97 as the probability that a person has been hanged given that the person is dead. Even though this seems to be an unlikely mistake, it is exactly the kind of mistake that is made with interpretations of statistical significance testing—by analogy, calculated estimates of P(D|H) are interpreted as if they were estimates of P(H|D), when they clearly are not the same.

As Cohen (1994) succinctly explains, statistical tests assess the probability of observing a sample moment as extreme as observed conditional on the null hypothesis being true, or P(D|H0), where D represents data and H0 represents the null hypothesis. However, researchers want to know whether the null hypothesis is true, conditional on the sample, or P(H0|D). We can calculate P(H0|D) from P(D|H0) by applying Bayes' theorem, but that requires knowledge of P(H0), which is what researchers want to discover in the first place. Although Ziliak and McCloskey (2008) quote many eminent statisticians who have repeatedly pointed out this basic logic, the essential point has not entered the published accounting literature.
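A minimal numerical sketch of Cohen's point (the prior, power, and significance level below are assumptions chosen only for illustration):

```python
# P(data | H0) is not P(H0 | data): a Bayes' theorem illustration with assumed inputs.
def prob_effect_is_real(prior, power, alpha):
    # Two states of the world: a real effect (probability = prior) or no effect.
    p_signif_given_real = power   # chance a real effect produces a "significant" result
    p_signif_given_null = alpha   # chance a null effect produces a "significant" result
    p_signif = p_signif_given_real * prior + p_signif_given_null * (1 - prior)
    return p_signif_given_real * prior / p_signif

# In a field where only 1 in 10 tested hypotheses is true, a result "significant at
# the 5 percent level" from a test with 80 percent power is real only about 64% of
# the time -- a far cry from the 95% confidence many readers assume.
print(prob_effect_is_real(prior=0.10, power=0.80, alpha=0.05))  # ~0.64
```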

In my view, restoring relevance to mathematically guided accounting research requires changing our role model from applied science to engineering (Colander 2011).16 While science aims at finding truth through application of institutionalized best practices with little regard for time or cost, engineering seeks to solve a specific problem using available resources, and the engineering method is “the strategy for causing the best change in a poorly understood or uncertain situation within the available resources” (Koen 2003). We should move to an experimental approach that simulates real-world applications or field tests new accounting methods in particular countries or industries, as would likely happen by default if accounting were not monopolized by the IASB (Dye and Sunder 2001). The inductive approach to standard-setting advocated by Littleton (1953) is likely to provide workable solutions to existing problems and be more useful than an axiomatic approach that starts from overly simplistic first principles.

To reduce the gap between academe and practice and stimulate new inquiry, AAA should partner with the FEI or Business Roundtable to create summer, semester, or annual research internships for accounting professors and Ph.D. students at corporations and audit firms.17 Accounting professors who have served as visiting scholars at the SEC and FASB have reported positively about their experience (e.g., Jorgensen et al. 2007), and I believe that such practice internships would provide opportunities for valuable fieldwork that supplements our experimental and archival analyses. Practice internships could be an especially fruitful way for accounting researchers to spend their sabbaticals.

Another useful initiative would be to revive the tradition of The Accounting Review publishing papers that do not rely on statistical significance or mathematical notation, such as case studies, field studies, and historical studies, similar to the Journal of Financial Economics (Jensen et al. 1989).18 A separate editor, similar to the book reviews editor, could ensure that appropriate criteria are used to evaluate qualitative research submissions (Chapman 2012). A co-editor from practice could help ensure that the topics covered are current and relevant, and help reverse the steep decline in AAA professional membership. Encouraging diversity in research methods and topics is more likely to attract new scholars who are passionate and intrinsically care about their research, rather than attracting only those who imitate current research fads for purely instrumental career reasons.

The relevance of accounting journals can be enhanced by inviting accomplished guest authors from outside accounting. The excellent April 1983 issue of The Accounting Review contains a section entitled “Research Perspectives from Related Disciplines,” which includes essays by Robert Wilson (Decision Sciences), Michael Jensen and Stephen Ross (Finance and Economics), and Karl Weick (Organizational Behavior) that were based on invited presentations at the 1982 AAA Annual Meeting. The thought-provoking essays were discussed by prominent accounting academics (Robert Kaplan, Joel Demski, Robert Libby, and Nils Hakansson); I still use Jensen (1983) to start each of my Ph.D. courses. Academic outsiders bring new perspectives to familiar problems and can often reframe them in ways that enable solutions (Tullock 1966).

I still lament that no accounting journal editor invited the plenary speakers—Joe Henrich, Denise Schmandt-Besserat, Michael Hechter, Eric Posner, Robert Lucas, and Vernon Smith—at the 2007 AAA Annual Meeting to write up their presentations for publication in accounting journals. It is rare that Nobel Laureates and U.S. Presidential Early Career Award winners address AAA annual meetings.20 I strongly urge that AAA annual meetings institute a named lecture given by a distinguished researcher from a different discipline, with the address published in The Accounting Review. This would enable cross-fertilization of ideas between accounting and other disciplines. Several highly cited papers published in the Journal of Accounting and Economics were written by economists (Watts 1998), so this initiative could increase citation flows from accounting journals to other disciplines.

HOW CAN WE MAKE U.S. ACCOUNTING JOURNALS MORE READABLE AND INTERESTING?

Even the greatest discovery will have little impact if other people cannot understand it or are unwilling to make the effort. Zeff (1978) says, “Scholarly writing need not be abstruse. It can and should be vital and relevant. Research can succeed in illuminating the dark areas of knowledge and facilitating the resolution of vexing problems—but only if the report of research findings is communicated to those who can carry the findings further and, in the end, initiate change.” If our journals put off readers, then our research will not stimulate our students or induce change in practice (Dyckman 1989).

Michael Jensen (1983, 333–334) addressed the 1982 AAA Annual Meeting saying:

Unfortunately, there exists in the profession an unwarranted bias toward the use of mathematics even in situations where it is unproductive or useless. One manifestation of this is the common use of the terms “rigorous” or “analytical” or even “theoretical” as identical with “mathematical.” None of these links is, of course, correct. Mathematical is not the same as rigorous, nor is it the same as analytical or theoretical. Propositions can be logically rigorous without being mathematical, and analysis does not have to take the form of symbols and equations. The English sentence and paragraph will do quite well for many analytical purposes. In addition, the use of mathematics does not prevent the commission of errors—even egregious ones.

Unfortunately, the top accounting journals demonstrate an increased “tyranny of formalism” that “develops when mathematically inclined scholars take the attitude that if the analytical language is not mathematics, it is not rigorous, and if a problem cannot be solved with the use of mathematics, the effort should be abandoned” (Jensen 1983, 335). Sorter (1979) acidly described the transition from normative to quantitative research: “the golden age of empty blindness gave way in the sixties to bloated blindness calculated to cause indigestion. In the sixties, the wonders of methodology burst upon the minds of accounting researchers. We entered what Maslow described as a mean-oriented age. Accountants felt it was their absolute duty to regress, regress and regress.” Accounting research increasingly relies on mathematical and statistical models with highly stylized and unrealistic assumptions. As Young (2006) demonstrates, the financial statement “user” in accounting research and regulation bears little resemblance to flesh-and-blood individuals, and hence our research outputs often have little relevance to the real world.

Figure 1 compares how frequently accountants and members of ten other professions are cited in The New York Times in the late 1990s (Ellenberg 2000). These data are juxtaposed with the numbers employed in each profession during 1996 using U.S. census data. Accountants are cited less frequently relative to their numbers than any profession except computer programmers. One possibility is that journalists cannot detect anything interesting in accounting journals. Another possibility is that university public relations staffs are consistently unable to find an interesting angle in published accounting papers that they can pitch to reporters. I have little doubt that the obscurantist tendencies in accounting papers make it harder for most outsiders to understand what accounting researchers are saying or find interesting.

Accounting articles have also become much longer over time, and I am regularly asked to review articles with introductions that are six to eight pages long, with many of the paragraphs cut-and-pasted from later sections. In contrast, it took Watson and Crick (1953) just one journal page to report the double-helix structure of DNA. Einstein (1905) took only three journal pages to derive his iconic equation E = mc². Since even the best accounting papers are far less important than these classics of 20th century science, readers waste time wading through academic bloat (Sorter 1979). Because the top general science journals like Science and Nature place strict word limits on articles that differ by the expected incremental contribution, longer scientific papers signal better quality.21 Unfortunately, accounting journals do not restrict length, which encourages bloated papers. Another driver of length is the aforementioned trend toward greater rigor in the review process (Ellison 2002).

My first suggestion for making published accounting articles less tedious and boring is to impose strict word limits and to revive the “Notes” sections for shorter contributions. Word limits force authors to think much harder about how to communicate their essential ideas succinctly and greatly improve writing. Similarly, I would encourage accounting journals to follow Nature and provide guidelines for informative abstracts.22 A related suggestion is to follow the science journals, and more recently, The American Economic Review, by introducing online-only appendices to report the lengthy robustness sections that are demanded by persnickety reviewers.23 In addition, I strongly encourage AAA journals to require authors to post online with each journal article the data sets and working computer code used to produce all tables as a condition for publication, so that other independent researchers can validate and replicate their studies (Bernanke 2004; McCullough and McKitrick 2009).24 This is important because recent surveys of science and management researchers reveal that data fabrication, data falsification, and other violations in published studies are far from rare (Martinson et al. 2005; Bedeian et al. 2010).

I also urge that authors report results graphically rather than in tables, as recommended by numerous statistical experts (e.g., Tukey 1977; Chambers et al. 1983; Wainer 2009). For example, Figure 2 shows how the data in Figure 1 can be displayed more effectively without taking up more page space (Gelman et al. 2002). Scientific papers routinely display results in figures with confidence intervals rather than tables with standard errors and p-values, and accounting journals should adopt these practices to improve understandability. Soyer and Hogarth (2012) show experimentally that even well-trained econometricians forecast more slowly and inaccurately when given tables of statistical results than when given equivalent scatter plots. Most accounting researchers cannot recognize the main tables of Ball and Brown (1968) or Beaver (1968) on sight, but their iconic figures are etched in our memories. The figures in Burgstahler and Dichev (1997) convey their results far more effectively than tables would. Indeed, the finance professoriate was convinced that financial markets are efficient by the graphs in Fama et al. (1969), a highly influential paper that does not contain a single statistical test! Easton (1999) argues that the 1990s non-linear earnings-return relation literature would likely have been developed much earlier if accounting researchers routinely plotted their data. Since it is not always straightforward to convert tables into graphs (Gelman et al. 2002), I recommend that AAA pay for new editors of AAA journals to take courses in graphical presentation.
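
The kind of display being recommended is easy to produce. The sketch below uses Python's matplotlib with made-up coefficient estimates and standard errors (all variable names and numbers are hypothetical):

import matplotlib.pyplot as plt

# Hypothetical regression output: estimates and standard errors for four predictors
names = ["Size", "Leverage", "Book-to-market", "Accruals"]
coefs = [0.042, -0.015, 0.088, -0.031]
ses = [0.010, 0.012, 0.020, 0.009]
ci95 = [1.96 * s for s in ses]  # half-width of a 95 percent confidence interval

fig, ax = plt.subplots(figsize=(5, 3))
ax.errorbar(coefs, range(len(names)), xerr=ci95, fmt="o", capsize=4)
ax.axvline(0, linestyle="--", linewidth=1)  # reference line at zero
ax.set_yticks(range(len(names)))
ax.set_yticklabels(names)
ax.set_xlabel("Coefficient estimate with 95% confidence interval")
fig.tight_layout()
plt.show()

Each point is an estimate and each horizontal bar a 95 percent confidence interval, so sign, magnitude, and precision are visible at a glance instead of being buried in a table of standard errors and p-values.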

I would also recommend that AAA award an annual prize for the best figure or graphic in an accounting journal each year. In addition to making research articles easier to follow, figures ease the introduction of new ideas into accounting textbooks. Economics is routinely taught with diagrams and figures to aid intuition—demand and supply curves, IS-LM analysis, Edgeworth boxes, etc. (Blaug and Lloyd 2010). Accounting teachers would benefit if accounting researchers produced similar education tools. Good figures could also be used to adorn the cover pages of our journals similar to the best science journals; in many disciplines, authors of lead articles are invited to provide an illustration for the cover page. JAMA (Journal of the American Medical Association) reproduces paintings depicting doctors on its cover (Southgate 1996); AAA could print paintings of accountants and accounting on the cover of The Accounting Review, perhaps starting with those collected in Yamey (1989). If color printing costs are prohibitive, we could imitate the Journal of Political Economy back cover and print passages from literature where accounting and accountants play an important role, or even start a new format by reproducing cartoons illustrating accounting issues. The key point is to induce accountants to pick up each issue of the journal, irrespective of the research content.

I think that we need an accounting journal to “fill a gap between the general-interest press and most other academic journals,” similar to the Journal of Economic Perspectives (JEP).25 Unlike other economics journals, JEP editors and associate editors solicit articles from experts with the goal of conveying state-of-the-art economic thinking to non-specialists, including students, the lay public, and economists from other specialties.26 The journal explicitly eschews mathematical notation or regression results and requires that results be presented either graphically or as a table of means. In response to the question “List the three economics journals (broadly defined) that you read most avidly when a new issue appears,” a recent survey of U.S. economics professors found that the Journal of Economic Perspectives was their second favorite economics journal (Davis et al. 2011), which suggests that an unclaimed niche exists in accounting. Although Accounting Horizons could be restructured along these lines to better reach practitioners, it might make sense to start a new association-wide journal under the AAA aegis.

 

CONCLUSION

I believe that accounting is one of the most important human innovations. The invention of accounting records was likely indispensable to the emergence of agriculture, and ultimately, civilization (e.g., Basu and Waymire 2006). Many eminent historians view double-entry bookkeeping as indispensable for the Renaissance and the emergence of capitalism (e.g., Sombart 1919; Mises 1949; Weber 1927), possibly via stimulating the development of algebra (Heeffer 2011). Sadly, accounting textbooks and the top U.S. accounting journals seem uninterested in whether and how accounting innovations changed history, or indeed in understanding the history of our current practices (Zeff 1989).

In short, the accounting academy embodies a “tragedy of the commons” (Hardin 1968) where strong extrinsic incentives to publish in “top” journals have led to misdirected research efforts. As Zeff (1983) explains, “When modeling problems, researchers seem to be more affected by technical developments in the literature than by their potential to explain phenomena. So often it seems that manuscripts are the result of methods in search of questions rather than questions in search of methods.” Solving common problems requires strong collective action by the social network of accounting researchers using self-governing mechanisms (e.g., Ostrom 1990, 2005). Such initiatives should occur at multiple levels (e.g., school, association, section, region, and individual) to have any chance of success.

While accounting research has made advances in recent decades, our collective progress seems slow, relative to the hard work put in by so many talented researchers. Instead of letting financial economics and psychology researchers and accounting standard-setters choose our research methods and questions, we should return our focus to addressing fundamental issues in accounting. As important, junior researchers should be encouraged to take risks and question conventional academic wisdom, rather than blindly conform to the party line. For example, the current FASB–IASB conceptual framework “remains irreparably flawed” (Demski 2007), and accounting researchers should take the lead in developing alternative conceptual frameworks that better fit what accounting does (e.g., Ijiri 1983; Ball 1989; Dickhaut et al. 2010). This will entail deep historical and cross-cultural analyses rather than regression analyses on machine-readable data. Deliberately attacking the “fundamental and frequently asked questions” in accounting will require innovations in research outlooks and methods, as well as training in the history of accounting thought. It is shameful that we still cannot answer basic questions like “Why did anyone invent recordkeeping?” or “Why is double-entry bookkeeping beautiful?”


Bravo to Professor Basu for having the guts to address the Cargo Cult in this manner!


Respectfully,
Bob Jensen

 

Major problems in accountics science:

Problem 1 --- Control Over Research Methods Allowed in Doctoral Programs and Leading Academic Accounting Research Journals
Accountics scientists control the leading accounting research journals and only allow archival (data mining), experimental, and analytical research methods into those journals. Their referees shun other methods like case method research, field studies, accounting history studies, commentaries, and criticisms of accountics science.
This is the major theme of Anthony Hopwood, Paul Williams, Bob Sterling, Bob Kaplan, Steve Zeff, Dan Stone, and others ---
http://www.trinity.edu/rjensen/TheoryTAR.htm#Appendix01

Since there are so many other accounting research journals in academe and in the practitioner profession, why single out TAR and the other "top" journals just because they refuse to publish articles without equations and/or statistical inference tables? Accounting researchers have hundreds of other alternatives for publishing their research.

I'm critical of TAR referees because they're symbolic of today's many problems with the way the accountics scientists have taken over the research arm of accounting higher education. Over the past five decades they've taken over all AACSB doctoral programs with a philosophy that "it's our way or the highway" for students seeking PhD or DBA degrees ---
http://www.trinity.edu/rjensen/Theory01.htm#DoctoralPrograms

In the United States, following the Gordon/Howell and Pierson reports in the 1950s, our accounting doctoral programs and leading academic journals bet the farm on the social sciences without taking the due cautions of realizing why the social sciences are called "soft sciences." They're soft because "not everything that can be counted, counts. And not everything that counts can be counted."

Be Careful What You Wish For
Academic accountants wanted to become more respectable on their campuses by creating accountics scientists in literally all North American accounting doctoral programs. Accountics scientists were virtually all that our PhD and DBA programs graduated over the ensuing decades, and they took on an elitist attitude that it really did not matter if their research was ignored by practitioners and by professors who merely taught accounting.

One of my complaints with accountics scientists is that they appear to be unconcerned that they are not real scientists. In real science the primary concern is validity, especially validation by replication. In accountics science validation and replication are seldom of concern. Real scientists react to their critics. Accountics scientists ignore their critics.

Another complaint is that accountics scientists only take on research that they can model. They ignore the many problems, particularly problems faced by the accountancy profession, that they cannot attack with equations and statistical inference.

"Research on Accounting Should Learn From the Past" by Michael H. Granof and Stephen A. Zeff, Chronicle of Higher Education, March 21, 2008

The unintended consequence has been that interesting and researchable questions in accounting are essentially being ignored. By confining the major thrust in research to phenomena that can be mathematically modeled or derived from electronic databases, academic accountants have failed to advance the profession in ways that are expected of them and of which they are capable.

Academic research has unquestionably broadened the views of standards setters as to the role of accounting information and how it affects the decisions of individual investors as well as the capital markets. Nevertheless, it has had scant influence on the standards themselves.

Continued in article

 

Problem 2 --- Paranoia Regarding Validity Testing and Commentaries on their Research
This is the major theme of Bob Jensen, Paul Williams, Joni Young and others
574 Shields Against Validity Challenges in Plato's Cave ---
http://www.trinity.edu/rjensen/TheoryTAR.htm

 

Problem 3 --- Lack of Concern over Being Ignored by Accountancy Teachers and Practitioners
Accountics scientists only communicate through their research journals that are virtually ignored by most accountancy teachers and practitioners. Thus they are mostly gaming in Plato's Cave and having little impact on the outside world, which is a major criticism raised by then-AAA President Judy Rayburn, Roger Hermanson, and others
http://www.trinity.edu/rjensen/395wpTAR/Web/TAR395wp.htm
Also see
http://www.trinity.edu/rjensen/theory01.htm#WhatWentWrong

Some accountics scientists have even warned against doing research for the practicing profession, dismissing it as a "vocational virus."

Joel Demski steers us away from the clinical side of the accountancy profession by saying we should avoid that pesky “vocational virus.” (See below).

The (Random House) dictionary defines "academic" as "pertaining to areas of study that are not primarily vocational or applied, as the humanities or pure mathematics." Clearly, the short answer to the question is no, accounting is not an academic discipline.
Joel Demski, "Is Accounting an Academic Discipline?" Accounting Horizons, June 2007, pp. 153-157

 

Statistically there are a few youngsters who came to academia for the joy of learning, who are yet relatively untainted by the vocational virus. I urge you to nurture your taste for learning, to follow your joy. That is the path of scholarship, and it is the only one with any possibility of turning us back toward the academy.
Joel Demski, "Is Accounting an Academic Discipline? American Accounting Association Plenary Session" August 9, 2006 ---
http://www.trinity.edu/rjensen//theory/00overview/theory01.htm

Too many accountancy doctoral programs have immunized themselves against the “vocational virus.” The problem lies not in requiring doctoral degrees in our leading colleges and universities. The problem is that we’ve been neglecting the clinical needs of our profession. Perhaps the real underlying reason is that our clinical problems are so immense that academic accountants quake in fear of having to make contributions to the clinical side of accountancy as opposed to the clinical side of finance, economics, and psychology.

 

Problem 4 --- Ignoring Critics: The Accountics Science Wall of Silence
Leading scholars critical of accountics science included Bob Anthony, Charles Christenson, Anthony Hopwood, Paul Williams, Roger Hermanson, Bob Sterling, Jane Mutchler, Judy Rayburn, Bob Kaplan, Steve Zeff, Joni Young, Dan Stone, Bob Jensen, and many others. The most frustrating thing for these critics is that accountics scientists are content with being the highest paid faculty on their campuses and with their monopoly control of accounting PhD programs (limiting the output of graduates), to the point where they literally ignore their critics and rarely, if ever, respond to criticisms.
See http://www.trinity.edu/rjensen/395wpTAR/Web/TAR395wp.htm  

 

"Frankly, Scarlett, after I get a hit for my resume in The Accounting Review I just don't give a damn"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm


Hi David,
 
Separately and independently, both Steve Kachelmeier (Texas) and Bob Kaplan (Harvard) singled out the Hunton  and Gold (2010) TAR article as being an excellent paradigm shift model in the sense that the data supposedly was captured by practitioners with the intent of jointly working with academic experts in collecting and analyzing the data ---
 
If that data had not subsequently been challenged for integrity (by whom is secret), the Hunton and Gold (2010) research is the type of thing we definitely would like to see more of in accountics research.
 
Unfortunately, this excellent example may have been a bit like Lance Armstrong being such a winner because he did not play within the rules.
 

For Jim Hunton maybe the world did end on December 21, 2012

"Following Retraction, Bentley Professor Resigns," Inside Higher Ed, December 21, 2012 ---
http://www.insidehighered.com/quicktakes/2012/12/21/following-retraction-bentley-professor-resigns

James E. Hunton, a prominent accounting professor at Bentley University, has resigned amid an investigation of the retraction of an article of which he was the co-author, The Boston Globe reported. A spokeswoman cited "family and health reasons" for the departure, but it follows the retraction of an article he co-wrote in the journal Accounting Review. The university is investigating the circumstances that led to the journal's decision to retract the piece.
 

An Accounting Review Article is Retracted

One of the articles that Dan mentions has been retracted, according to
http://aaajournals.org/doi/abs/10.2308/accr-10326?af=R 

Retraction: A Field Experiment Comparing the Outcomes of Three Fraud Brainstorming Procedures: Nominal Group, Round Robin, and Open Discussion

James E. Hunton (Bentley University) and Anna Gold (Erasmus University). This article was originally published in 2010 in The Accounting Review 85 (3): 911–935; DOI: 10.2308/accr.2010.85.3.911

The authors confirmed a misstatement in the article and were unable to provide supporting information requested by the editor and publisher. Accordingly, the article has been retracted.

Jensen Comment
The TAR article retraction in no way detracts from this study being a model to shoot for in order to get accountics researchers more involved with the accounting profession and using their comparative advantages to analyze real-world data that is more granulated than the usual practice of beating purchased databases like Compustat with econometric sticks and settling for correlations rather than causes.
 
Respectfully,
 
Bob Jensen

 


Some Comments About Accountics Science Versus Real Science

This is the lead article in the May 2013 edition of The Accounting Review
"On Estimating Conditional Conservatism
Authors

Ray Ball (The University of Chicago)
S. P. Kothari (Massachusetts Institute of Technology)
Valeri V. Nikolaev (The University of Chicago)

The Accounting Review, Volume 88, No. 3, May 2013, pp. 755-788

The concept of conditional conservatism (asymmetric earnings timeliness) has provided new insight into financial reporting and stimulated considerable research since Basu (1997). Patatoukas and Thomas (2011) report bias in firm-level cross-sectional asymmetry estimates that they attribute to scale effects. We do not agree with their advice that researchers should avoid conditional conservatism estimates and inferences from research based on such estimates. Our theoretical and empirical analyses suggest the explanation is a correlated omitted variables problem that can be addressed in a straightforward fashion, including fixed-effects regression. Correlation between the expected components of earnings and returns biases estimates of how earnings incorporate the information contained in returns. Further, the correlation varies with returns, biasing asymmetric timeliness estimates. When firm-specific effects are taken into account, estimates do not exhibit the bias, are statistically and economically significant, are consistent with priors, and behave as a predictable function of book-to-market, size, and leverage.

. . .

We build on and provide a different interpretation of the anomalous evidence reported by PT. We begin by replicating their [Basu (1997). Patatoukas and Thomas (2011)] results. We then provide evidence that scale-related effects are not the explanation. We control for scale by sorting observations into relatively narrow portfolios based on price, such that within each portfolio approximately 99 percent of the cross-sectional variation in scale is eliminated. If scale effects explain the anomalous evidence, then it would disappear within these portfolios, but the estimated asymmetric timeliness remains considerable. We conclude that the data do not support the scale-related explanation.4 It thus becomes necessary to look for a better explanation.

Continued in article
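
As an aside for readers outside this literature, the asymmetric timeliness (Basu 1997) regression being debated can be sketched in a few lines. The Python below uses simulated data and statsmodels purely as an illustration of where firm fixed effects enter the specification; it is not the authors' code or data:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_firms, n_years = 200, 10
firm = np.repeat(np.arange(n_firms), n_years)
ret = rng.normal(0.05, 0.25, n_firms * n_years)      # annual stock returns
neg = (ret < 0).astype(int)                          # bad-news indicator
firm_effect = rng.normal(0.0, 0.03, n_firms)[firm]   # firm-specific earnings level
# Conservatism built into the simulated data: earnings reflect bad news more strongly
earn = firm_effect + 0.05 + 0.10 * ret + 0.25 * neg * ret + rng.normal(0, 0.05, n_firms * n_years)
df = pd.DataFrame({"earn": earn, "ret": ret, "neg": neg, "firm": firm})

# Pooled Basu-style regression versus the same regression with firm fixed effects
pooled = smf.ols("earn ~ neg + ret + neg:ret", data=df).fit()
fixed = smf.ols("earn ~ neg + ret + neg:ret + C(firm)", data=df).fit()
print(pooled.params[["ret", "neg:ret"]])
print(fixed.params[["ret", "neg:ret"]])

The coefficient on neg:ret is the asymmetric timeliness estimate; the only point of the sketch is to show where the firm-specific effects enter the specification, not to reproduce the Ball, Kothari, and Nikolaev results.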

Jensen Comment
The good news is that the earlier findings were replicated. This is not common in accountics science research. The bad news is that such replications took 16 years and two years respectively. And the probability that TAR will publish one or more commentaries on these findings is virtually zero.

How does this differ from real science?
In real science most findings are replicated before or very quickly after publication. And interest lies in the reproducible results themselves, without also requiring an extension of the research before replication outcomes can be published.

In accountics science there is little incentive to perform exact replications since top accountics science journals neither demand such replications nor will they publish (even in commentaries) replication outcomes. A necessary condition for publishing replication outcomes in accountics science is to extend the research into new frontiers.

How long will it take for somebody to replicate these May 2013 findings of Ball, Kothari, and Nikolaev? If the past is any indicator of the future the BKN findings will never be replicated. If they are replicated it will most likely take years before we receive notice of such replication in an extension of the BKN research published in 2013.


 

CONCLUSION from
http://www.trinity.edu/rjensen/395wpTAR/Web/TAR395wp.htm

In the first 40 years of TAR, an accounting “scholar” was first and foremost an expert on accounting. After 1960, following the Gordon and Howell Report, the perception of what it took to be a “scholar” changed to quantitative modeling. It became advantageous for an “accounting” researcher to have a degree in mathematics, management science, mathematical economics, psychometrics, or econometrics. Being a mere accountant no longer was sufficient credentials to be deemed a scholarly researcher. Many doctoral programs stripped much of the accounting content out of the curriculum and sent students to mathematics and social science departments for courses. Scholarship on accounting standards became too much of a time diversion for faculty who were “leading scholars.” Particularly relevant in this regard is Dennis Beresford’s address to the AAA membership at the 2005 Annual AAA Meetings in San Francisco:

In my eight years in teaching I’ve concluded that way too many of us don’t stay relatively up to date on professional issues. Most of us have some experience as an auditor, corporate accountant, or in some similar type of work. That’s great, but things change quickly these days.
Beresford [2005]

 

Jane Mutchler made a similar appeal for accounting professors to become more involved in the accounting profession when she was President of the AAA [Mutchler, 2004, p. 3].

In the last 40 years, TAR’s publication preferences shifted toward problems amenable to scientific research, with esoteric models requiring accountics skills in place of accounting expertise. When Professor Beresford attempted to publish his remarks, an Accounting Horizons referee’s report to him contained the following revealing reply about “leading scholars” in accounting research:

1. The paper provides specific recommendations for things that accounting academics should be doing to make the accounting profession better. However (unless the author believes that academics' time is a free good) this would presumably take academics' time away from what they are currently doing. While following the author's advice might make the accounting profession better, what is being made worse? In other words, suppose I stop reading current academic research and start reading news about current developments in accounting standards. Who is made better off and who is made worse off by this reallocation of my time? Presumably my students are marginally better off, because I can tell them some new stuff in class about current accounting standards, and this might possibly have some limited benefit on their careers. But haven't I made my colleagues in my department worse off if they depend on me for research advice, and haven't I made my university worse off if its academic reputation suffers because I'm no longer considered a leading scholar? Why does making the accounting profession better take precedence over everything else an academic does with their time?
As quoted in Jensen [2006a]

 

The above quotation illustrates the consequences of editorial policies of TAR and several other leading accounting research journals. To be considered a “leading scholar” in accountancy, one’s research must employ mathematically-based economic/behavioral theory and quantitative modeling. Most TAR articles published in the past two decades support this contention. But according to AAA President Judy Rayburn and other recent AAA presidents, this scientific focus may not be in the best interests of accountancy academicians or the accountancy profession.

In terms of citations, TAR fails on two accounts. Citation rates are low in practitioner journals because the scientific paradigm is too narrow, thereby discouraging researchers from focusing on problems of great interest to practitioners that seemingly just do not fit the scientific paradigm due to lack of quality data, too many missing variables, and suspected non-stationarities. TAR editors are loath to open TAR up to non-scientific methods, with the result that really interesting accounting problems are neglected in TAR. Those non-scientific methods include case method studies, traditional historical method investigations, and normative deductions.

On the second account, TAR citation rates are low in academic journals outside accounting because the methods and techniques being used (like CAPM and options pricing models) were discovered elsewhere, and accounting researchers are not sought out for discoveries of scientific methods and models. The models and topics that do appear in TAR are seemingly borrowed models applied to topics of little interest outside the academic discipline of accounting.

We close with a quotation from Scott McLemee demonstrating that what happened among accountancy academics over the past four decades is not unlike what happened in other academic disciplines that developed “internal dynamics of esoteric disciplines,” communicating among themselves in loops detached from their underlying professions. McLemee’s [2006] article stems from Bender [1993].

 “Knowledge and competence increasingly developed out of the internal dynamics of esoteric disciplines rather than within the context of shared perceptions of public needs,” writes Bender. “This is not to say that professionalized disciplines or the modern service professions that imitated them became socially irresponsible. But their contributions to society began to flow from their own self-definitions rather than from a reciprocal engagement with general public discourse.”

 

Now, there is a definite note of sadness in Bender’s narrative – as there always tends to be in accounts of the shift from Gemeinschaft to Gesellschaft. Yet it is also clear that the transformation from civic to disciplinary professionalism was necessary.

 

“The new disciplines offered relatively precise subject matter and procedures,” Bender concedes, “at a time when both were greatly confused. The new professionalism also promised guarantees of competence — certification — in an era when criteria of intellectual authority were vague and professional performance was unreliable.”

But in the epilogue to Intellect and Public Life, Bender suggests that the process eventually went too far. “The risk now is precisely the opposite,” he writes. “Academe is threatened by the twin dangers of fossilization and scholasticism (of three types: tedium, high tech, and radical chic). The agenda for the next decade, at least as I see it, ought to be the opening up of the disciplines, the ventilating of professional communities that have come to share too much and that have become too self-referential.”

For the good of the AAA membership and the profession of accountancy in general, one hopes that the changes in publication and editorial policies at TAR proposed by President Rayburn [2005, p. 4] will result in the “opening up” of topics and research methods produced by “leading scholars.”

 

The purpose of this document is to focus on Problem 2 above. Picking on TAR is merely symbolic of what I view as the much larger problems caused by the takeover of the research arm of academic accountancy.

Epistemologists present several challenges to Popper's arguments
"Separating the Pseudo From Science," by Michael D. Gordon, Chronicle of Higher Education, September 17, 2012 ---
http://chronicle.com/article/Separating-the-Pseudo-From/134412/


Hi Pat,

Certainly expertise and dedication to students rather than any college degree is what's important in teaching.


However, I would not go so far as to detract from the research (discovery of new knowledge) mission of the university by taking all differential pay incentives away from researchers who, in addition to teaching, are taking on the drudge work and stress of research and refereed publication.


Having said that, I'm no longer in favor of the tenure system since in most instances it's more dysfunctional than functional for long-term research and teaching dedication. In fact, it's become more of an exclusive club that gets away with most anything short of murder.


My concern with accounting and business is how we define "research." Empirical and analytical research that has zero to say about causality is given too much priority in pay, release time, and back slapping.

"How Non-Scientific Granulation Can Improve Scientific Accountics"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsGranulationCurrentDraft.pdf
By Bob Jensen
This essay takes off from the following quotation:

A recent accountics science study suggests that an audit firm's scandal with respect to someone else's audit may be a reason clients change auditors.
"Audit Quality and Auditor Reputation: Evidence from Japan," by Douglas J. Skinner and Suraj Srinivasan, The Accounting Review, September 2012, Vol. 87, No. 5, pp. 1737-1765.

Our conclusions are subject to two caveats. First, we find that clients switched away from ChuoAoyama in large numbers in Spring 2006, just after Japanese regulators announced the two-month suspension and PwC formed Aarata. While we interpret these events as being a clear and undeniable signal of audit-quality problems at ChuoAoyama, we cannot know for sure what drove these switches (emphasis added). It is possible that the suspension caused firms to switch auditors for reasons unrelated to audit quality. Second, our analysis presumes that audit quality is important to Japanese companies. While we believe this to be the case, especially over the past two decades as Japanese capital markets have evolved to be more like their Western counterparts, it is possible that audit quality is, in general, less important in Japan (emphasis added) .


"In Japan, Research Scandal Prompts Questions," by David McNeill, Chronicle of Higher Education, June 30, 2014 ---
http://chronicle.com/article/In-Japan-Research-Scandal/147417/?cid=at&utm_source=at&utm_medium=en

. . .

Ms. Obokata’s actions "lead us to the conclusion that she sorely lacks, not only a sense of research ethics, but also integrity and humility as a scientific researcher," a damning report concluded. The release of the report sent Ms. Obokata, who admits mistakes but not ill intent, to the hospital in shock for a week. Riken has dismissed all her appeals, clearing the way for disciplinary action, which she has pledged to fight.

In June the embattled researcher agreed to retract both Nature papers—under duress, said her lawyer. On July 2, Nature released a statement from her and the other authors officially retracting the papers.

The seismic waves from Ms. Obokata’s rise and vertiginous fall continue to reverberate. Japan’s top universities are rushing to install antiplagiarism software and are combing through old doctoral theses amid accusations that they are honeycombed with similar problems.

The affair has sucked in some of Japan’s most revered professors, including Riken’s president, Ryoji Noyori, a Nobel laureate, and Shinya Yamanaka, credited with creating induced pluripotent stem cells. Mr. Yamanaka, a professor at Kyoto University who is also a Nobel laureate, in April denied claims that he too had manipulated images in a 2000 research paper on embryonic mouse stem cells, but he was forced to admit that, like Ms. Obokata, he could not find lab notes to support his denial.

The scandal has triggered questions about the quality of science in a country that still punches below its international weight in cutting-edge research. Critics say Japan’s best universities have churned out hundreds of poor-quality Ph.D.’s. Young researchers are not taught how to keep detailed lab notes, properly cite data, or question assumptions, said Sukeyasu Yamamoto, a former physicist at the University of Massachusetts at Amherst and now an adviser to Riken. "The problems we see in this episode are all too common," he said.

Hung Out to Dry?

Ironically, Riken was known as a positive discriminator in a country where just one in seven university researchers are women—the lowest share in the developed world. The organization was striving to push young women into positions of responsibility, say other professors there. "The flip side is that they overreacted and maybe went a little too fast," said Kathleen S. Rockland, a neurobiologist who once worked at Riken’s Brain Science Institute. "That’s a pity because they were doing a very good job."

Many professors, however, accuse the institute of hanging Ms. Obokata out to dry since the problems in her papers were exposed. Riken was under intense pressure to justify its budget with high-profile results. Japan’s news media have focused on the role of Yoshiki Sasai, deputy director of the Riken Center and Ms. Obokata’s supervisor, who initially promoted her, then insisted he had no knowledge of the details of her research once the problems were exposed.

Critics noted that even the head of the inquiry into Ms. Obokata’s alleged misconduct was forced to admit in April that he had posted "problematic" images in a 2007 paper published in Oncogene. Shunsuke Ishii, a molecular geneticist, quit the investigative committee.

Continued in article

Bob Jensen's threads on the need for independent replication and other validity studies in research (except in accountancy, where accountics researchers are not encouraged by journals to do validity checks) ---
http://www.trinity.edu/rjensen/TheoryTAR.htm

Bob Jensen's threads on professors who cheat ---
http://www.trinity.edu/rjensen/Plagiarism.htm#ProfessorsWhoPlagiarize

 


The limits of mathematical and statistical analysis of big data
From the CFO Journal's Morning Ledger on April 18, 2014

The limits of social engineering
Writing in MIT Technology Review, tech reporter Nicholas Carr pulls from a new book by one of MIT’s noted data scientists to explain why he thinks Big Data has its limits, especially when applied to understanding society. Alex ‘Sandy’ Pentland, in his book “Social Physics: How Good Ideas Spread – The Lessons from a New Science,” sees a mathematical modeling of society made possible by new technologies and sensors and Big Data processing power. Once data measurement confirms “the innate tractability of human beings,” scientists may be able to develop models to predict a person’s behavior. Mr. Carr sees overreach on the part of Mr. Pentland. “Politics is messy because society is messy, not the other way around,” Mr. Carr writes, and any statistical model likely to come from such research would ignore the history, politics, class and messy parts associated with humanity. “What big data can’t account for is what’s most unpredictable, and most interesting, about us,” he concludes.

Jensen Comment
The sad state of accountancy doctoral programs in the 21st Century is that virtually all of them in North America teach only the methodology and technique of analyzing big data with statistical tools or the analytical modeling of artificial worlds based on dubious assumptions to simplify reality ---
http://www.trinity.edu/rjensen/Theory01.htm#DoctoralPrograms

The Pathways Commission sponsored by the American Accounting Association strongly proposes adding non-quantitative alternatives to doctoral programs but I see zero evidence of any progress in that direction. The main problem is that it's just much easier to avoid having to collect data by beating purchased databases with econometric sticks until something, usually an irrelevant something, falls out of the big data piñata.

"A Scrapbook on What's Wrong with the Past, Present and Future of Accountics Science"
Bob Jensen
February 19, 2014
SSRN Download:  http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2398296 

 


From the Stanford University Encyclopedia of Philosophy
Science and Pseudo-Science --- http://plato.stanford.edu/entries/pseudo-science/

The demarcation between science and pseudoscience is part of the larger task to determine which beliefs are epistemically warranted. The entry clarifies the specific nature of pseudoscience in relation to other forms of non-scientific doctrines and practices. The major proposed demarcation criteria are discussed and some of their weaknesses are pointed out. In conclusion, it is emphasized that there is much more agreement in particular issues of demarcation than on the general criteria that such judgments should be based upon. This is an indication that there is still much important philosophical work to be done on the demarcation between science and pseudoscience.

1. The purpose of demarcations
2. The “science” of pseudoscience
3. The “pseudo” of pseudoscience

3.1 Non-, un-, and pseudoscience
3.2 Non-science posing as science
3.3 The doctrinal component
3.4 A wider sense of pseudoscience
3.5 The objects of demarcation
3.6 A time-bound demarcation

4. Alternative demarcation criteria

4.1 The logical positivists
4.2 Falsificationism
4.3 The criterion of puzzle-solving
4.4 Criteria based on scientific progress
4.5 Epistemic norms
4.6 Multi-criterial approaches

5. Unity in diversity

Bibliography
Cited Works
Bibliography of philosophically informed literature on pseudosciences and contested doctrines

Other Internet resources
Related Entries

Paul Feyerabend --- http://plato.stanford.edu/entries/feyerabend/

William Thomas Ziemba --- http://www.williamtziemba.com/WilliamZiemba-ShortCV.pdf

Thomas M. Cover --- http://en.wikipedia.org/wiki/Thomas_M._Cover

On June 15, 2013 David Johnstone wrote the following:

Dear all,
I worked on the logic and philosophy of hypothesis tests in the early 1980s and discovered a very large literature critical of standard forms of testing, a little of which was written by philosophers of science (see the more recent book by Howson and Urbach) and much of which was written by statisticians. At this point philosophy of science was warming up on significance tests and much has been written since. Something I have mentioned to a few philosophers however is how far behind the pace philosophy of science is in regard to all the new finance and decision theory developed in finance (e.g. options logic, mean-variance as an expression of expected utility). I think that philosophers would get a rude shock on just how clever and rigorous all this thinking work in “business” fields is. There is also wonderfully insightful work on betting-like decisions done by mathematicians, such as Ziemba and Cover, that has I think rarely if ever surfaced in the philosophy of science (“Kelly betting” is a good example). So although I believe modern accounting researchers should have far more time and respect for ideas from the philosophy of science, the argument runs both ways.

Jensen Comment
Note that in the above "cited works" there are no cited references in statistics such as Ziemba and Cover or the better known statistical theory and statistical science references.

This suggests somewhat the divergence of statistical theory from philosophical theory with respect to probability and hypothesis testing. Of course probability and hypothesis testing are part and parcel of both science and pseudo-science. Statistical theory may accordingly be a subject that divides pseudo-science from real science.

Etymology provides us with an obvious starting-point for clarifying what characteristics pseudoscience has in addition to being merely non- or un-scientific. “Pseudo-” (ψευδο-) means false. In accordance with this, the Oxford English Dictionary (OED) defines pseudoscience as follows:

“A pretended or spurious science; a collection of related beliefs about the world mistakenly regarded as being based on scientific method or as having the status that scientific truths now have.”

June 16, 2013 reply from Marc Dupree

Let me try again, better organized this time.

You (Bob) have referenced sources that include falsification and demarcation. A good idea. Also, AECM participants discuss hypothesis testing and Phi-Sci topics from time to time.

I didn't make my purpose clear. My purpose is to offer that falsification and demarcation are still relevant to empirical research, any empirical research.

So,

What is falsification in mathematical form?

Why does falsification not demarcate science from non-science?

And for fun: Did Popper know falsification didn't demarcate science from non-science?

Marc

June 17, 2013 reply from Bob Jensen

Hi Marc,

Falsification in science generally requires explanation. You really have not falsified a theory or proven a theory if all you can do is demonstrate an unexplained correlation. In pseudo-science empiricism a huge problem is that virtually all of our databases are not granulated sufficiently to explain the discovered correlations or predictability ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsGranulationCurrentDraft.pdf
 
Mathematics is beautiful in many instances because theories are formulated in such a way that finding a counterexample ipso facto destroys the theory. This is not generally the case in the empirical sciences, where exceptions (often outliers) arise even when causal mechanisms have been discovered. In genetics those exceptions are often mutations that infrequently but persistently arise in nature.
 
The key difference between pseudo-science and real-science, as I pointed out earlier in this thread, lies in explanation versus prediction (the F-twist) or causation versus correlation. When a research study concludes there is a correlation that cannot be explained we are departing from a scientific discovery.  For an example, see

Researchers pinpoint how smoking causes osteoporosis ---
http://medicalxpress.com/news/2013-06-osteoporosis.html

Data mining research in particular suffers from inability to find causes if the granulation needed for discovery of causation just is not contained in the databases. I've hammered on this one with a data mining accountics research illustration from TAR (the Japanese auditor-switching study quoted above) ----
"How Non-Scientific Granulation Can Improve Scientific Accountics"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsGranulationCurrentDraft.pdf

 
Another huge problem in accountics science and empirical finance is statistical significance testing of correlation coefficients with enormous data mining samples. For example, R-squared coefficients of 0.001 are deemed statistically significant if the sample sizes are large enough:
My threads on Deirdre McCloskey (the Cult of Statistical Significance) and my own talk are at
http://www.cs.trinity.edu/~rjensen/temp/DeirdreMcCloskey/StatisticalSignificance01.htm
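
A back-of-the-envelope check in Python (the sample size and R-squared are hypothetical, and SciPy is assumed for the p-value) shows how empty "statistical significance" becomes at data-mining sample sizes:

import math
from scipy import stats

n = 1_000_000        # a typical archival "big data" sample size (hypothetical)
r_squared = 0.001    # the effect explains one tenth of one percent of the variance
r = math.sqrt(r_squared)

# t-statistic for testing H0: correlation = 0
t = r * math.sqrt((n - 2) / (1 - r_squared))
p_value = 2 * stats.t.sf(t, df=n - 2)
print(f"r = {r:.4f}, t = {t:.1f}, two-sided p = {p_value:.2e}")
# With n = 1,000,000 the t-statistic is about 31.6 and the p-value is essentially
# zero, even though the "effect" is economically negligible.

The test screams significance while the effect size remains trivial, which is precisely McCloskey's point about confusing statistical significance with substantive importance.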

 
A problem with real-science is that there's a distinction between the evolution of a theory and the ultimate discovery of the causal mechanisms. In the evolution of a theory there may be unexplained correlations or explanations that have not yet been validated (usually by replication). But genuine scientific discoveries entail explanation of phenomena. We like to think of physics and chemistry as real-sciences. In fact they deal a lot with unexplained correlations before theories can finally be explained.
 
Perhaps a difference between a pseudo-science (like accountics science) and a real-science (like chemistry) is that real scientists are never satisfied until they can explain causality to the satisfaction of their peers. Accountics scientists are generally satisfied with correlations and statistical inference tests that cannot explain root causes:
http://www.cs.trinity.edu/~rjensen/temp/AccounticsGranulationCurrentDraft.pdf
 
Of course science is replete with examples of causal explanations that are later falsified or demonstrated to be incomplete. But the focus is on the causal mechanisms and not mere correlations.

In Search of the Theory of Everything
 "Physics’s pangolin:  Trying to resolve the stubborn paradoxes of their field, physicists craft ever more mind-boggling visions of reality," by Margaret Wertheim, AEON Magazine, June 2013 ---
 http://www.aeonmagazine.com/world-views/margaret-wertheim-the-limits-of-physics/

Of course social scientists complain that the problem in social science research is that the physicists stole all the easy problems.

Respectfully,
 

Bob Jensen

"Is Economics a Science," by Robert Shiller, QFinance, November 8, 2013 --- Click Here
http://www.qfinance.com/blogs/robert-j.shiller/2013/11/08/nobel-is-economics-a-science?utm_source=November+2013+email&utm_medium=Email&utm_content=Blog2&utm_campaign=EmailNov13

NEW HAVEN – I am one of the winners of this year’s Nobel Memorial Prize in Economic Sciences, which makes me acutely aware of criticism of the prize by those who claim that economics – unlike chemistry, physics, or medicine, for which Nobel Prizes are also awarded – is not a science. Are they right?

One problem with economics is that it is necessarily focused on policy, rather than discovery of fundamentals. Nobody really cares much about economic data except as a guide to policy: economic phenomena do not have the same intrinsic fascination for us as the internal resonances of the atom or the functioning of the vesicles and other organelles of a living cell. We judge economics by what it can produce. As such, economics is rather more like engineering than physics, more practical than spiritual.

There is no Nobel Prize for engineering, though there should be. True, the chemistry prize this year looks a bit like an engineering prize, because it was given to three researchers – Martin Karplus, Michael Levitt, and Arieh Warshel – “for the development of multiscale models of complex chemical systems” that underlie the computer programs that make nuclear magnetic resonance hardware work. But the Nobel Foundation is forced to look at much more such practical, applied material when it considers the economics prize.

The problem is that, once we focus on economic policy, much that is not science comes into play. Politics becomes involved, and political posturing is amply rewarded by public attention. The Nobel Prize is designed to reward those who do not play tricks for attention, and who, in their sincere pursuit of the truth, might otherwise be slighted.
 

The pursuit of truth


Why is it called a prize in “economic sciences”, rather than just “economics”? The other prizes are not awarded in the “chemical sciences” or the “physical sciences”.

 

Fields of endeavor that use “science” in their titles tend to be those that get masses of people emotionally involved and in which crackpots seem to have some purchase on public opinion. These fields have “science” in their names to distinguish them from their disreputable cousins.

The term political science first became popular in the late eighteenth century to distinguish it from all the partisan tracts whose purpose was to gain votes and influence rather than pursue the truth. Astronomical science was a common term in the late nineteenth century, to distinguish it from astrology and the study of ancient myths about the constellations. Hypnotic science was also used in the nineteenth century to distinguish the scientific study of hypnotism from witchcraft or religious transcendentalism.
 

Crackpot counterparts


There was a need for such terms back then, because their crackpot counterparts held much greater sway in general discourse. Scientists had to announce themselves as scientists.

 

In fact, even the term chemical science enjoyed some popularity in the nineteenth century – a time when the field sought to distinguish itself from alchemy and the promotion of quack nostrums. But the need to use that term to distinguish true science from the practice of impostors was already fading by the time the Nobel Prizes were launched in 1901.

Similarly, the terms astronomical science and hypnotic science mostly died out as the twentieth century progressed, perhaps because belief in the occult waned in respectable society. Yes, horoscopes still persist in popular newspapers, but they are there only for the severely scientifically challenged, or for entertainment; the idea that the stars determine our fate has lost all intellectual currency. Hence there is no longer any need for the term “astronomical science.”
 

Pseudoscience?


Critics of “economic sciences” sometimes refer to the development of a “pseudoscience” of economics, arguing that it uses the trappings of science, like dense mathematics, but only for show. For example, in his 2004 book, Fooled by Randomness, Nassim Nicholas Taleb said of economic sciences:
 
“You can disguise charlatanism under the weight of equations, and nobody can catch you since there is no such thing as a controlled experiment.”

But physics is not without such critics, too. In his 2004 book, The Trouble with Physics: The Rise of String Theory, The Fall of a Science, and What Comes Next, Lee Smolin reproached the physics profession for being seduced by beautiful and elegant theories (notably string theory) rather than those that can be tested by experimentation. Similarly, in his 2007 book, Not Even Wrong: The Failure of String Theory and the Search for Unity in Physical Law, Peter Woit accused physicists of much the same sin as mathematical economists are said to commit.


 

Exposing the charlatans


My belief is that economics is somewhat more vulnerable than the physical sciences to models whose validity will never be clear, because the necessity for approximation is much stronger than in the physical sciences, especially given that the models describe people rather than magnetic resonances or fundamental particles. People can just change their minds and behave completely differently. They even have neuroses and identity problems - complex phenomena that the field of behavioral economics is finding relevant to understanding economic outcomes.

 

But all the mathematics in economics is not, as Taleb suggests, charlatanism. Economics has an important quantitative side, which cannot be escaped. The challenge has been to combine its mathematical insights with the kinds of adjustments that are needed to make its models fit the economy’s irreducibly human element.

The advance of behavioral economics is not fundamentally in conflict with mathematical economics, as some seem to think, though it may well be in conflict with some currently fashionable mathematical economic models. And, while economics presents its own methodological problems, the basic challenges facing researchers are not fundamentally different from those faced by researchers in other fields. As economics develops, it will broaden its repertory of methods and sources of evidence, the science will become stronger, and the charlatans will be exposed.

 

Bob Jensen's threads on Real Science versus Pseudo Science ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm


"A Pragmatist Defence of Classical Financial Accounting Research," by Brian A. Rutherford, Abacus, Volume 49, Issue 2, pages 197–218, June 2013 ---
http://onlinelibrary.wiley.com/doi/10.1111/abac.12003/abstract

The reason for the disdain in which classical financial accounting research has come to be held by many in the scholarly community is its allegedly insufficiently scientific nature. While many have defended classical research or provided critiques of post-classical paradigms, the motivation for this paper is different. It offers an epistemologically robust underpinning for the approaches and methods of classical financial accounting research that restores its claim to legitimacy as a rigorous, systematic and empirically grounded means of acquiring knowledge. This underpinning is derived from classical philosophical pragmatism and, principally, from the writings of John Dewey. The objective is to show that classical approaches are capable of yielding serviceable, theoretically based solutions to problems in accounting practice.

Jensen Comment
When it comes to the "insufficiently scientific nature" of classical accounting research, I should note yet once again that accountics science never attained the status of real science, where the main criteria are scientific searches for causes and an obsession with replication (reproducibility) of findings.

Accountics science is overrated because it only achieved the status of a pseudo science ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm#Pseudo-Science

"Research on Accounting Should Learn From the Past" by Michael H. Granof and Stephen A. Zeff, Chronicle of Higher Education, March 21, 2008

The unintended consequence has been that interesting and researchable questions in accounting are essentially being ignored. By confining the major thrust in research to phenomena that can be mathematically modeled or derived from electronic databases, academic accountants have failed to advance the profession in ways that are expected of them and of which they are capable.

Academic research has unquestionably broadened the views of standards setters as to the role of accounting information and how it affects the decisions of individual investors as well as the capital markets. Nevertheless, it has had scant influence on the standards themselves.

Continued in article

"Research on Accounting Should Learn From the Past," by Michael H. Granof and
 Stephen A. Zeff, Chronicle of Higher Education, March 21, 2008

. . .

The narrow focus of today's research has also resulted in a disconnect between research and teaching. Because of the difficulty of conducting publishable research in certain areas — such as taxation, managerial accounting, government accounting, and auditing — Ph.D. candidates avoid choosing them as specialties. Thus, even though those areas are central to any degree program in accounting, there is a shortage of faculty members sufficiently knowledgeable to teach them.

To be sure, some accounting research, particularly that pertaining to the efficiency of capital markets, has found its way into both the classroom and textbooks — but mainly in select M.B.A. programs and the textbooks used in those courses. There is little evidence that the research has had more than a marginal influence on what is taught in mainstream accounting courses.

What needs to be done? First, and most significantly, journal editors, department chairs, business-school deans, and promotion-and-tenure committees need to rethink the criteria for what constitutes appropriate accounting research. That is not to suggest that they should diminish the importance of the currently accepted modes or that they should lower their standards. But they need to expand the set of research methods to encompass those that, in other disciplines, are respected for their scientific standing. The methods include historical and field studies, policy analysis, surveys, and international comparisons when, as with empirical and analytical research, they otherwise meet the tests of sound scholarship.

Second, chairmen, deans, and promotion and merit-review committees must expand the criteria they use in assessing the research component of faculty performance. They must have the courage to establish criteria for what constitutes meritorious research that are consistent with their own institutions' unique characters and comparative advantages, rather than imitating the norms believed to be used in schools ranked higher in magazine and newspaper polls. In this regard, they must acknowledge that accounting departments, unlike other business disciplines such as finance and marketing, are associated with a well-defined and recognized profession. Accounting faculties, therefore, have a special obligation to conduct research that is of interest and relevance to the profession. The current accounting model was designed mainly for the industrial era, when property, plant, and equipment were companies' major assets. Today, intangibles such as brand values and intellectual capital are of overwhelming importance as assets, yet they are largely absent from company balance sheets. Academics must play a role in reforming the accounting model to fit the new postindustrial environment.

Third, Ph.D. programs must ensure that young accounting researchers are conversant with the fundamental issues that have arisen in the accounting discipline and with a broad range of research methodologies. The accounting literature did not begin in the second half of the 1960s. The books and articles written by accounting scholars from the 1920s through the 1960s can help to frame and put into perspective the questions that researchers are now studying.

Continued in article

How accountics scientists should change ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm


June 5, 2013 reply to a long thread by Bob Jensen

Hi Steve,

As usual, these AECM threads between you, me, and Paul Williams resolve nothing to date. TAR still has zero articles without equations unless such articles are forced upon editors like the Kaplan article was forced upon you as Senior Editor. TAR still has no commentaries about the papers it publishes and the authors make no attempt to communicate and have dialog about their research on the AECM or the AAA Commons.

I do hope that our AECM threads will continue and lead one day to when the top academic research journals do more to both encourage (1) validation (usually by speedy replication), (2) alternate methodologies, (3) more innovative research, and (4) more interactive commentaries.

I remind you that Professor Basu's essay is only one of four essays bundled together in Accounting Horizons on the topic of how to make accounting research, especially the so-called Accounting Science or Accountics Science or Cargo Cult science, more innovative.

The four essays in this bundle are summarized and extensively quoted at http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm#Essays 

I will try to keep drawing attention to these important essays and spend the rest of my professional life trying to bring accounting research closer to the accounting profession.

I also want to dispel the myth that accountics research is harder than making research discoveries without equations. The hardest research I can imagine (and where I failed) is to make a discovery that has a noteworthy impact on the accounting profession. I always look but never find such discoveries reported in TAR.

The easiest research is to purchase a database and beat it with an econometric stick until something falls out of the clouds. I've searched for years and find very little that has a noteworthy impact on the accounting profession. Quite often there is a noteworthy impact on other members of the Cargo Cult and doctoral students seeking to beat the same data with their sticks. But try to find a practitioner with an interest in these academic accounting discoveries?

Our latest thread leads me to such questions as:

  1. Is accounting research of inferior quality relative to other disciplines like engineering and finance?

     
  2. Are there serious innovation gaps in academic accounting research?

     
  3. Is accounting research stagnant?

     
  4. How can accounting researchers be more innovative?

     
  5. Is there an "absence of dissent" in academic accounting research?

     
  6. Is there an absence of diversity in our top academic accounting research journals and doctoral programs?

     
  7. Is there a serious disinterest (except among the Cargo Cult) and lack of validation in findings reported in our academic accounting research journals, especially TAR?

     
  8. Is there a huge communications gap between academic accounting researchers and those who toil teaching accounting and practicing accounting?

     
  9. Why do our accountics scientists virtually ignore the AECM and the AAA Commons and the Pathways Commission Report?
    http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm

One fallout from this thread is that I've been privately asked to write a paper about such matters. I hope that others will compete with me in thinking and writing about these serious challenges to academic accounting research that never seem to get resolved.

Thank you Steve for sometimes responding in my threads on such issues in the AECM.

Respectfully,
Bob Jensen

 

June 16, 2013 message from Bob Jensen

Hi Marc,

The mathematics of falsification is essentially the same as the mathematics of proof negation.
 
If mathematics is a science it's largely a science of counter examples.
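A small worked illustration, in standard logical notation, of why falsification and proof negation share the same form: a theory asserting a universal law is refuted by a single counterexample (modus tollens).

% Falsification as negation of a universal claim by one counterexample
\[
T:\ \forall x\, P(x), \qquad \text{observation: } \neg P(x_0) \text{ for some } x_0
\;\Longrightarrow\; \neg T .
\]
\[
\bigl(T \Rightarrow P\bigr) \wedge \neg P \;\vdash\; \neg T \qquad \text{(modus tollens)}
\]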
 
Regarding real science versus pseudo-science, one criterion is that of explanation (not just prediction) that satisfies a community of scholars. One of the best examples of this is the exchanges between two Nobel economists --- Milton Friedman versus Herb Simon.
 

From
http://www.cs.trinity.edu/~rjensen/temp/DeirdreMcCloskey/StatisticalSignificance01.htm

Jensen Comment
Interestingly, two Nobel economists slugged out the very essence of theory some years back. Herb Simon insisted that the purpose of theory was to explain. Milton Friedman went off on the F-Twist tangent saying that it was enough if a theory merely predicted. I lost some (certainly not all) respect for Friedman over this. Deirdre, who knew Milton, claims that deep in his heart, Milton did not ultimately believe this to the degree that it is attributed to him. Of course Deirdre herself is not a great admirer of Neyman, Savage, or Fisher.

Friedman's essay "The Methodology of Positive Economics" (1953) provided the epistemological pattern for his own subsequent research and to a degree that of the Chicago School. There he argued that economics as science should be free of value judgments for it to be objective. Moreover, a useful economic theory should be judged not by its descriptive realism but by its simplicity and fruitfulness as an engine of prediction. That is, students should measure the accuracy of its predictions, rather than the 'soundness of its assumptions'. His argument was part of an ongoing debate among such statisticians as Jerzy Neyman, Leonard Savage, and Ronald Fisher.

Stanley Wong, 1973. "The 'F-Twist' and the Methodology of Paul Samuelson," American Economic Review, 63(3) pp. 312-325. Reprinted in J.C. Wood & R.N. Woods, ed., 1990, Milton Friedman: Critical Assessments, v. II, pp. 224- 43.
http://www.jstor.org/discover/10.2307/1914363?uid=3739712&uid=2129&uid=2&uid=70&uid=4&uid=3739256&sid=21102409988857
 

Respectfully,
 
Bob Jensen

June 18, 2013 reply to David Johnstone by Jagdish Gangolly

David,

Your call for a dialogue between statistics and philosophy of science is very timely, and extremely important considering the importance that statistics, both in its probabilistic and non-probabilistic incarnations, has gained ever since the computational advances of the past three decades or so. Let me share a few of my conjectures regarding the cause of this schism between statistics and philosophy, and consider a few areas where they can share in mutual reflection. However, reflection in statistics, like in accounting of late and unlike in philosophy, has been in short supply for quite a while. And it is always easier to pick the low hanging fruit. Albert Einstein once remarked, "I have little patience with scientists who take a board of wood, look for the thinnest part and drill a great number of holes where drilling is easy."

1.

Early statisticians were practitioners of the art, most serving as consultants of sorts. Gosset worked for Guinness, GEP Box did most of his early work for Imperial Chemical Industries (ICI), Fisher worked at Rothamsted Experimental Station, Loeve was an actuary at University of Lyon... As practitioners, statisticians almost always had their feet in one of the domains in science: Fisher was a biologist, Gosset was a chemist, Box was a chemist, ... Their research was down to earth, and while statistics was always regarded as the turf of mathematicians, their status within mathematics was the same as that of accountants in liberal arts colleges today, slightly above that of athletics. Of course, the individuals with stature were expected to be mathematicians in their own right.

All that changed with the work of Kolmogorov (1933, Moscow State, http://www.socsci.uci.edu/~bskyrms/bio/readings/kolmogorov_theory_of_probability_small.pdf), Loeve (1960, Berkeley), Doob(1953, Illinois), and Dynkin(1963, Moscow State and Cornell). They provided mathematical foundations for earlier work of practitioners, and especially Kolmogorov provided axiomatic foundations for probability theory. In the process, their work unified statistics into a coherent mass of knowledge. (Perhaps there is a lesson here for us accountants). A collateral effect was the schism in the field between the theoreticians and the practitioners (of which we accountants must be wary) that has continued to this date. We can see a parallel between accounting and statistics here too.

2.

Early controversies in statistics had to do with embedding statistical methods in decision theory (Fisher was against, Neyman and Pearson were for it), and whether the foundations for statistics had to be deductive or inductive (frequentists were for the former, Bayesians were for the latter). These debates were not just technical, and had underpinnings in philosophy, especially philosophy of mathematics (after all, the early contributors to the field were mathematicians: Gauss, Fermat, Pascal, Laplace, deMoivre, ...). For example, when the Fisher-Neyman/Pearson debates had raged, Neyman was invited by the philosopher Jaakko Hintikka to write a paper for the journal Synthese ("Frequentist probability and Frequentist statistics", 1977).

3.

Since the early statisticians were practitioners, their orientation was usually normative: in sample theory, regression, design of experiments, .... The mathematisation of statistics and later work of people like Tukey raised the prominence of descriptive (especially axiomatic) work in the field. However, the recent developments in data mining have swung the balance again in favour of the normative.

4. Foundational issues in statistics have always been philosophical. And treatment of probability has been profoundly philosophical (see for example http://en.wikipedia.org/wiki/Probability_interpretations).

Regards,

Jagdish

June 18, 2013 reply from David Johnstone

Dear Jagdish, as usual your knowledge and perspectives are great to read.

In reply to your points: (1) the early development of statistics by Gosset and Fisher was as a means to an end, i.e. to design and interpret experiments that helped to resolve practical issues, like whether fertilizers were effective and different genetic strains of crops were superior. This left results testable in the real world laboratory, by the farmers, so the pressure to get it right rather than just publish was on. Gosset, by the way, was an old fashioned English scholar who spent as much time fishing and working in his workshop as doing mathematics. This practical bent comes out in his work.

(2) Neyman’s effort to make statistics “deductive” was always his weak point, and he went to great lengths to evade this issue. I wrote a paper on Neyman’s interpretations of tests, as in trying to understand him I got frustrated by his inconsistency and evasiveness over his many papers. In more than one place, he wrote that to “accept” the null is to “act as if it is true”, and to reject it is to “act as if it is false”. This is ridiculous in scientific contexts, since to act as if something is decided 100% you would never draw another sample - your work would be done on that hypothesis.

(3) On the issue of normative versus descriptive, as in accounting research, Harold Jeffreys had a great line in his book: he said that if we observe a child add 2 and 2 to get 5, we don’t change the laws of arithmetic. He was very anti learning about the world by watching people rather than doing abstract theory. BTW I own his personal copy of his 3rd edition. A few years ago I went to buy this book on Bookfinder, and found it available in a secondhand bookshop in Cambridge. I rang them instantly when I saw that they said whose book it was, and they told me that Mrs Jeffreys had just died and Harold’s books had come in, and that the 1st edition was sold the day before.

(4) I adore your line that “Foundational issues in statistics have always been philosophical”. .... So must they be in accounting, in relation to how to construct income and net assets measures that are sound and meaningful. Note however that just because we accept something needs philosophical footing doesn’t mean that we will find or agree on that footing. I recently received a comment on a paper of mine from an accounting referee. The comment was basically that the effect of information on the cost of capital “could not be revealed by philosophy” (i.e. by probability theory etc.). Rather, this is an empirical issue. Apart from ignoring all the existing theory on this matter in accounting and finance, the comment is symptomatic of the way that “empirical findings” have been elevated to the top shelf, and theory, or worse, “thought pieces”, are not really science. There is so much wrong with this extreme but common view, including of course that every empirical finding stands on a model or a priori view. Indeed, remember that every null hypothesis that was ever rejected might have been rejected because the model (not the hypothesis) was wrong. People naively believe that a bad model or bad experimental design just reduces power (makes it harder to reject the null) but the mathematical fact is that it can go either way, and error in the model or sample design can make rejection of the null almost certain.

Thank you for your interesting thoughts Jagdish,

David
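The mathematical fact Johnstone describes is easy to see in a minimal simulation sketch (Python, with entirely made-up numbers and an assumed omitted-confounder setup; nothing here comes from his paper): when the fitted model leaves out a confounder, a true null hypothesis about x is rejected almost every time.

# Sketch: an omitted confounder makes rejection of a true null almost certain.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps, rejections = 200, 1000, 0

for _ in range(reps):
    z = rng.normal(size=n)              # confounder, omitted from the fitted model
    x = 0.8 * z + rng.normal(size=n)    # x is correlated with z ...
    y = 1.0 * z + rng.normal(size=n)    # ... but x has NO causal effect on y

    # Fit the misspecified model y = a + b*x + e and test H0: b = 0
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - 2)
    se_b = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    t = beta[1] / se_b
    p = 2 * (1 - stats.t.cdf(abs(t), df=n - 2))
    rejections += p < 0.05

print(f"True effect of x is zero, yet H0 is rejected in {rejections / reps:.0%} of samples.")

With these arbitrary parameters the rejection rate comes out near 100 percent; the significance test is rejecting the misspecified model, not the substantive hypothesis about x.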

From Bob Jensen's threads on the Cult of Statistical Significance ---
http://www.cs.trinity.edu/~rjensen/temp/DeirdreMcCloskey/StatisticalSignificance01.htm
 

The Cult of Statistical Significance: How Standard Error Costs Us Jobs, Justice, and Lives ---
http://www.cs.trinity.edu/~rjensen/temp/DeirdreMcCloskey/StatisticalSignificance01.htm

Page 15
The doctor who cannot distinguish statistical significance from substantive significance, an F-statistic from a heart attack, is like an economist who ignores opportunity cost---what statistical theorists call the loss function. The doctors of "significance" in medicine and economy are merely "deciding what to say rather than what to do" (Savage 1954, 159). In the 1950s Ronald Fisher published an article and a book that intended to rid decision from the vocabulary of working statisticians (1955, 1956). He was annoyed by the rising authority in highbrow circles of those he called "the Neymanites."

Continued on Page 15


pp. 28-31
An example is provided regarding how Merck manipulated statistical inference to keep its killing pain killer Vioxx from being pulled from the market.

Page 31
Another story. The Japanese government in June 2005 increased the limit on the number of whales that may be annually killed in the Antarctic---from around 440 annually to over 1,000 annually. Deputy Commissioner Akira Nakamae explained why: "We will implement JARPA-2 [the plan for the higher killing] according to the schedule, because the sample size is determined in order to get statistically significant results" (Black 2005). The Japanese hunt for the whales, they claim, in order to collect scientific data on them. That and whale steaks. The commissioner is right: increasing sample size, other things equal, does increase the statistical significance of the result. It is, after all, a mathematical fact that statistical significance increases, other things equal, as sample size increases. Thus the theoretical standard error of JARPA-2, s/√(440+560) [given for example the simple mean formula], yields more sampling precision than the standard error of JARPA-1, s/√440. In fact it raises the significance level to Fisher's 5 percent cutoff. So the Japanese government has found a formula for killing more whales, annually some 560 additional victims, under the cover of getting the conventional level of Fisherian statistical significance for their "scientific" studies.
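A back-of-the-envelope Python sketch of the arithmetic above (the effect size and standard deviation are made up, since the excerpt reports none): holding everything else fixed, moving from 440 to 1,000 observations shrinks the standard error s/√n and can push an unchanged, substantively trivial effect across the conventional 5 percent cutoff.

# Sketch: same effect, bigger n, smaller standard error, "significant" result.
from math import sqrt
from scipy import stats

effect = 0.08   # hypothetical observed mean difference (arbitrary units)
s = 1.0         # hypothetical sample standard deviation

for n in (440, 1000):                  # JARPA-1 vs JARPA-2 sample sizes
    se = s / sqrt(n)                   # standard error s/sqrt(n)
    z = effect / se                    # test statistic grows mechanically with n
    p = 2 * (1 - stats.norm.cdf(z))    # two-sided p-value
    print(f"n={n:4d}  SE={se:.4f}  z={z:.2f}  p={p:.3f}")

# With these made-up numbers: n=440 gives p of about 0.09 (not "significant"),
# while n=1000 gives p of about 0.01 -- the effect itself never changed, only n did.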


pp. 250-251
The textbooks are wrong. The teaching is wrong. The seminar you just attended is wrong. The most prestigious journal in your scientific field is wrong.

You are searching, we know, for ways to avoid being wrong. Science, as Jeffreys said, is mainly a series of approximations to discovering the sources of error. Science is a systematic way of reducing wrongs or can be. Perhaps you feel frustrated by the random epistemology of the mainstream and don't know what to do. Perhaps you've been sedated by significance and lulled into silence. Perhaps you sense that the power of a Rothamsted test against a plausible Dublin alternative is statistically speaking low but you feel oppressed by the instrumental variable one should dare not to wield. Perhaps you feel frazzled by what Morris Altman (2004) called the "social psychology rhetoric of fear," the deeply embedded path dependency that keeps the abuse of significance in circulation. You want to come out of it. But perhaps you are cowed by the prestige of Fisherian dogma. Or, worse thought, perhaps you are cynically willing to be corrupted if it will keep a nice job.

 

June 25, 2013 reply from Marc Dupree

With regard to the article Scott recommended, "The Flawed Probabilistic Foundation of Law and Economics,"  (https://law.northwestern.edu/journals/lawreview/v105/n1/199/LR105n1Stein.pdf), there may be more interest in the discussion of research methods than answering the question, "Is following the law the same as being ethical?"

 
An excerpt:
Evidential Variety as a Basis for Inference

The logical composition of the two systems of probability—mathematical, on the one hand, and causative, on the other—reveals the systems’ relative strengths and weaknesses. The mathematical system is most suitable for decisions that implicate averages. Gambling is a paradigmatic example of those decisions. At the same time, this system employs relatively lax standards for identifying causes and effects. Moreover, it weakens the reasoner’s epistemic grasp of her individual case by requiring her to abstract away from the case’s specifics. This requirement is imposed by the system’s epistemically unfounded rules that make individual cases look similar to each other despite the uniqueness of each case. On the positive side, however, the mathematical system allows a person to conceptualize her probabilistic assessments in the parsimonious and standardized language of numbers. This conceptual framework enables people to form and communicate their assessments of probabilities with great precision.

The causative system of probability is not suitable for gambling. It associates probability with the scope, or variety, of the evidence that confirms the underlying individual occurrence. The causative system also employs rigid standards for establishing causation. Correspondingly, it disavows instantial multiplicity as a basis for inferences and bans all other factual assumptions that do not have epistemic credentials. These features improve people’s epistemic grasps of their individual cases. The causative system has a shortcoming: its unstructured and “noisy” taxonomy. This system instructs people to conceptualize their probability assessments in the ordinary day-to-day language. This conceptual apparatus is notoriously imprecise. The causative system therefore has developed no uniform metric for gradation of probabilities.

On balance, the causative system outperforms mathematical probability in every area of fact-finding for which it was designed. This system enables people to perform an epistemically superior causation analysis in both scientific and daily affairs. Application of the causative system also improves people’s ability to predict and reconstruct specific events. The mathematical system, in contrast, is a great tool for understanding averages and distributions of multiple events. However, when it comes to an assessment of an individual event, the precision of its estimates of probability becomes illusory. The causative system consequently becomes decisively superior.

Marc 

 


I hope Jim K will comment on how "research in business schools is becoming increasingly distanced from the reality of business"
"In 2008 Hopwood commented on a number of issues," by Jim Martin, MAAW Blog, June 26, 2013 ---
http://maaw.blogspot.com/2013/06/in-2008-hopwood-commented-on-number-of.html

The first issue below is related to the one addressed by Bennis and O'Toole. According to Hopwood, research in business schools is becoming increasingly distanced from the reality of business. The worlds of practice and research have become ever more separated. More and more accounting and finance researchers know less and less about accounting and finance practice. Other professions such as medicine have avoided this problem so it is not an inevitable development.

Another issue has to do with the status of management accounting. Hopwood tells us that the term management accountant is no longer popular and virtually no one in the U.S. refers to themselves as a management accountant. The body of knowledge formally associated with the term is now linked to a variety of other concepts and job titles. In addition, management accounting is no longer an attractive subject to students in business schools. This is in spite of the fact that many students will be working in positions where a knowledge of management control and systems design issues will be needed. Unfortunately, the present positioning and image of management accounting does not make this known.

Continued in article

June 29, 2013 reply from Zane Swanson

Hi Bob,

A key word, incentive, comes up as it relates to practitioner motivation regarding the nature of accounting and finance research. The AICPA does give an educator award at the AAA convention, so it isn't as though the practitioners don't care about accounting professorship activity.

Maybe, the "right"' type of incentive needs to be designed. For example, it was not so many years ago that firms developed stock options to align interests of management and investors. Perhaps, a similar option oriented award could be designed to align the interests of research professors and practitioners. Theoretically, practitioners could vest a set of professors for research publications in a pool for a particular year and then grant the exercise of the option several years later with the attainment of a practitioner selected goal level (like HR performance share awards). This approach could meet your calls to get researchers to write "real world" papers and to have follow up replications to prove the point.

However, there are 2 road blocks to this approach. 1 is money for the awards. 2 is determining what the practitioner performance features would be.

You probably would have to determine what practitioners want in terms of research or this whole line of discussion is moot.

The point of this post is: Determining research demand solely by professors' choices does not look like it is addressing your "real world" complaints.

Respectfully,
Zane

June 29, 2013 reply from Bob Jensen

Hi Zane,

I had a very close friend (now dead) in the Engineering Sciences Department at Trinity University. I asked him why engineering professors seemed to be much closer to their profession than many other departments in the University. He said he thought it was primarily that doctoral students chose engineering because they perhaps were more interested in being problem solvers --- and their profession provided them with an unlimited number of professional problems to be solved. Indeed the majority of Ph.D. graduates in engineering do not even join our Academy. The ones that do are not a whole lot different from the Ph.D. engineers who chose to go into industry except that engineering professors do more teaching.

When they take up research projects, engineering professors tend to be working with government (e.g., the EPA) and industry (e.g., Boeing) to help solve problems. In many instances they work on grants, but many engineering professors are working on industry problems without grants.

In contrast, accounting faculty don't like to work with practitioners to solve problems. In fact accounting faculty don't like to leave the campus to explore new problems and collect data. The capital markets accounting researchers purchase their databases and then mine the data. The behavioral accounting researchers study their students as surrogates for real world decision makers knowing full well that students are almost always poor surrogates. The analytical accounting researchers simply assume the world away. They don't set foot off campus except to go home at night. I know because I was one of them for nearly all of my career.

Academic accounting researchers submit very little original research work to journals that practitioners read. Even worse, a hit in an accounting practitioner journal counts very little for promotion and tenure, especially when the submission itself may be too technical to interest any of our AAA journal editors. For example, an editor told me that the AAA membership was just not interested in technical articles on valuing interest rate swaps; I had to get two very technical papers on accounting for derivative financial instruments published in a practitioner journal (Derivatives Reports) because I was told that these papers were just too technical for AAA journal readers.

Our leading accountics science researchers have one goal in mind --- getting a hit in TAR, JAR, or JAE or one of the secondary academic accounting research journals that will publish accountics research. They give little or no priority to finding and helping to solve problems that practitioners want solved. They have little interest in leaving the ivory tower to collect their own messy real-world data.

Awards and even research grants aren't the answer to making accounting professors more like engineering, medical, and law professors. We need to change the priorities of TAR, JAR, JAE, and other top academic accounting research journals so that referees ask hard questions about how the practice of the profession is really helped by the research findings of virtually all submitted articles.

In short, we need to become better problem solvers in a way like engineering, medical, and law professors are problem solvers on the major problems of their professions. A great start would be to change the admissions criteria of our top accounting research journals.

Respectfully,
Bob Jensen

 

Avoiding applied research for practitioners and failure to attract practitioner interest in academic research journals ---
"Why business ignores the business schools," by Michael Skapinker
Some ideas for applied research ---
http://www.trinity.edu/rjensen/theory01.htm#AcademicsVersusProfession

Essays on the (mostly sad) State of Accounting Scholarship ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm#Essays

Sue Haka, former AAA President, commenced a thread on the AAA Commons entitled
"Saving Management Accounting in the Academy,"
--- http://commons.aaahq.org/posts/98949b972d
A succession of comments followed.

The latest comment (from James Gong) may be of special interest to some of you.
Ken Merchant is a former faculty member from Harvard University who for many years now has been on the faculty at the University of Southern California.

Here are my two cents. First, on the teaching side, the management accounting textbooks fail to cover new topics or issues. For instance, few textbooks cover real options based capital budgeting, product life cycle management, risk management, and revenue driver analysis. While other disciplines invade management accounting, we need to invade their domains too. About five or six years ago, Ken Merchant had written a few critical comments on the Garrison/Noreen textbook for its lack of breadth. Ken's comments are still valid. Second, on the research and publication side, management accounting researchers are at a disadvantage in getting data and publishing papers compared with their financial peers. Again, Ken Merchant gave an excellent discussion of this topic at an AAA annual conference.

Bob Jensen's threads on what went wrong in the Accounting Academy
How did academic accounting research become a pseudo science?
http://www.trinity.edu/rjensen/theory01.htm#WhatWentWrong ---
 

June 30, 2013 reply from Zane Swanson

Hi Bob,
  You have expressed your concerns articulately and passionately. However, in terms of creating value for society in general, your "action plan" of getting the "top" of the profession (editors) to take steps appears unlikely to succeed. As you pointed out, the professors who create articles do it with resources immediately under their control, in the most expeditious fashion, in order to get tenure, promotion, and annual raises. The editors take what submissions are given. Thus, it is an endless cycle (a closed loop, a complete circle). As you noted, the engineering profession has a different culture with a real-world "make it happen" objective. In comparison with accounting, the prospect of "only" accounting editors from the top dictating research seems questionable. Your critique suggests that the "entire" accounting research culture needs a paradigm shift toward real-world action consequences in order to do what you want. The required shift is probably huge, which is a reason that I suggested starting an option-based mechanism to align the interests of practitioners and researchers.
 

Respectfully,
Zane

 

June 30, 2013 reply from Bob Jensen

Hi Zane,

 

You may be correct that a paradigm shift in accountics research is just not feasible given the generations of econometrics, psychometrics, and mathematical accountics researchers that virtually all of the North American doctoral programs have produced.
 
I think Anthony Hopwood, Paul Williams, and others agree with you that it will take a paradigm shift that just is not going to happen in our leading journals like TAR, JAR, JAE, CAR, etc. Paul, however, thinks we are making some traction, especially since virtually all AAA presidents since Judy Rayburn have made appeals for a paradigm shift, plus the strong conclusions of the Pathways Commission Report. However, that report seems to have fallen on deaf ears as far as accountics scientists are concerned.
 
Other scholars with a sense of the field's history, like Steve Zeff, Mike Granof, Bob Kaplan, Judy Rayburn, and Sudipta Basu, think that we can wedge these top journals to just be a bit more open to alternative research methods like those used in the past when practitioners took a keen interest in TAR and even submitted papers to be published in TAR --- alternative methods like case studies, field studies, and normative studies without equations.
 

"We fervently hope that the research pendulum will soon swing back from the narrow lines of inquiry that dominate today's leading journals to a rediscovery of the richness of what accounting research can be. For that to occur, deans and the current generation of academic accountants must give it a push."
Granof and Zeff --- http://www.trinity.edu/rjensen/TheoryTAR.htm#Appendix01

Michael H. Granof is a professor of accounting at the McCombs School of Business at the University of Texas at Austin. Stephen A. Zeff is a professor of accounting at the Jesse H. Jones Graduate School of Management at Rice University.

Accounting Scholarship that Advances Professional Knowledge and Practice
Robert S. Kaplan
The Accounting Review, March 2011, Volume 86, Issue 2, 

Recent accounting scholarship has used statistical analysis on asset prices, financial reports and disclosures, laboratory experiments, and surveys of practice. The research has studied the interface among accounting information, capital markets, standard setters, and financial analysts and how managers make accounting choices. But as accounting scholars have focused on understanding how markets and users process accounting data, they have distanced themselves from the accounting process itself. Accounting scholarship has failed to address important measurement and valuation issues that have arisen in the past 40 years of practice. This gap is illustrated with missed opportunities in risk measurement and management and the estimation of the fair value of complex financial securities. This commentary encourages accounting scholars to devote more resources to obtaining a fundamental understanding of contemporary and future practice and how analytic tools and contemporary advances in accounting and related disciplines can be deployed to improve the professional practice of accounting. ©2010 AAA

The videos of the three plenary speakers at the 2010 Annual Meetings in San Francisco are now linked at
http://commons.aaahq.org/hives/531d5280c3/posts?postTypeName=session+video
I think the video is only available to AAA members.

Hi David,
 
Separately and independently, both Steve Kachelmeier (Texas) and Bob Kaplan (Harvard) singled out the Hunton  and Gold (2010) TAR article as being an excellent paradigm shift model in the sense that the data supposedly was captured by practitioners with the intent of jointly working with academic experts in collecting and analyzing the data ---
 
If that data had not subsequently been challenged for integrity (by whom is secret), the Hunton and Gold (2010) research is the type of thing we definitely would like to see more of in accountics research.
 
Unfortunately, this excellent example may have been a bit like Lance Armstrong being such a winner because he did not play within the rules.
 

For Jim Hunton maybe the world did end on December 21, 2012

"Following Retraction, Bentley Professor Resigns," Inside Higher Ed, December 21, 2012 ---
http://www.insidehighered.com/quicktakes/2012/12/21/following-retraction-bentley-professor-resigns

James E. Hunton, a prominent accounting professor at Bentley University, has resigned amid an investigation of the retraction of an article of which he was the co-author, The Boston Globe reported. A spokeswoman cited "family and health reasons" for the departure, but it follows the retraction of an article he co-wrote in the journal Accounting Review. The university is investigating the circumstances that led to the journal's decision to retract the piece.
 

An Accounting Review Article is Retracted

One of the articles that Dan mentions has been retracted, according to
http://aaajournals.org/doi/abs/10.2308/accr-10326?af=R 

Retraction: A Field Experiment Comparing the Outcomes of Three Fraud Brainstorming Procedures: Nominal Group, Round Robin, and Open Discussion

James E. Hunton (Bentley University) and Anna Gold (Erasmus University). This article was originally published in 2010 in The Accounting Review 85 (3): 911–935; DOI: 10.2308/accr.2010.85.3.911

The authors confirmed a misstatement in the article and were unable to provide supporting information requested by the editor and publisher. Accordingly, the article has been retracted.

Jensen Comment
The TAR article retraction in no way detracts from this study being a model to shoot for in order to get accountics researchers more involved with the accounting profession and using their comparative advantages to analyze real-world data that is more granular than what comes from the usual practice of beating purchased databases like Compustat with econometric sticks and settling for correlations rather than causes.
 
Respectfully,
 
Bob Jensen

"Why the “Maximizing Shareholder Value” Theory of Corporate Governance is Bogus," Naked Capitalism, October 21, 2013 ---
http://www.nakedcapitalism.com/2013/10/why-the-maximizing-shareholder-value-theory-of-corporate-governance-is-bogus.html

. . .

So how did this “the last shall come first” thinking become established? You can blame it all on economists, specifically Harvard Business School’s Michael Jensen. In other words, this idea did not come out of legal analysis, changes in regulation, or court decisions. It was simply an academic theory that went mainstream. And to add insult to injury, the version of the Jensen formula that became popular was its worst possible embodiment.

A terrific 2010 paper by Frank Dobbin and Jiwook Jung, “The Misapplication of Mr. Michael Jensen: How Agency Theory Brought Down the Economy and Might Do It Again,” explains how this line of thinking went mainstream. I strongly suggest you read it in full, but I’ll give a brief recap for the time-pressed.

In the 1970s, there was a great deal of hand-wringing in America as Japanese and German manufacturers were eating American’s lunch. That led to renewed examination of how US companies were managed, with lots of theorizing about what went wrong and what the remedies might be. In 1976, Jensen and William Meckling asserted that the problem was that corporate executives served their own interests rather than those of shareholders, in other words, that there was an agency problem. Executives wanted to build empires while shareholders wanted profits to be maximized.

I strongly suspect that if Jensen and Meckling had not come out with this line of thinking, you would have gotten something similar to justify the actions of the leveraged buyout kings, who were just getting started in the 1970s and were reshaping the corporate landscape by the mid-1980s. They were doing many of the things Jensen and Meckling recommended: breaking up multi-business companies, thinning out corporate centers, and selling corporate assets (some of which were clearly excess, like corporate art and jet collection, while other sales were simply to increase leverage, like selling corporate office buildings and leasing them back). In other words, a likely reason that Jensen and Meckling’s theory gained traction was it appeared to validate a fundamental challenge to incumbent managements. (Dobbin and Jung attribute this trend, as pretty much everyone does, to Jensen because he continued to develop it. What really put it on the map was a 1990 Harvard Business Review article, “It’s Not What You Pay CEOs, but How,” that led to an explosion in the use of option-based pay and resulted in a huge increase in CEO pay relative to that of average workers.)

To forestall takeovers, many companies implemented the measures an LBO artist might take before his invading army arrived: sell off non-core divisions, borrow more, shed staff.

The problem was to the extent that the Jensen/Meckling prescription had merit, only the parts that helped company executives were adopted. Jensen didn’t just call on executives to become less ministerial and more entrepreneurial; they also called for more independent and engaged boards to oversee and discipline top managers, and more equity-driven pay, both options and other equity-linked compensation, to make management more sensitive to both upside and downside risks.

Over the next two decades, companies levered up, became more short-term oriented, and executive pay levels exploded. As Dobbin and Jung put it, “The result of the changes promoted by agency theory was that by the late 1990s, corporate America’s leaders were drag racing without the brakes.”

The paper proceeds to analyze in considerable detail how three of the major prescriptions of “agency theory” aka “executives and boards should maximize value,” namely, pay for (mythical) performance, dediversification, and greater reliance on debt all increased risk. And the authors also detail how efforts to improve oversight were ineffective.

But the paper also makes clear that this vision of how companies should be run was simply a new management fashion, as opposed to any sort of legal requirement:

Organizational institutionalists have long argued that new management practices diffuse through networks of firms like fads spread through high schools….In their models, new paradigms are socially constructed as appropriate solutions to perceived problems or crises….Expert groups that stand to gain from having their preferred strategies adopted by firms then enter the void, competing to have their model adopted….

And as Dobbin and Jung point out, the parts of the Jensen formula that got adopted were the one that had constituents. The ones that promoted looting and short-termism had obvious followings. The ones for prudent management didn’t.

And consider the implications of Jensen’s prescriptions, of pushing companies to favor shareholders, when they actually stand at the back of the line from a legal perspective. The result is that various agents (board compensation consultants, management consultants, and cronyistic boards themselves) have put incentives in place for CEOs to favor shareholders over parties that otherwise should get better treatment. So is it any surprise that companies treat employees like toilet paper, squeeze vendors, lobby hard for tax breaks and to weaken regulations, and worse, like fudge their financial reports? Jensen himself, in 2005, repudiated his earlier prescription precisely because it led to fraud. From an interview with the New York Times:

Q. So the maximum stock price is the holy grail?

A. Absolutely not. Warren Buffett says he worries as much when one of his companies becomes overvalued as undervalued. I agree. Overvalued equity is managerial heroin – it feels really great when you start out; you’re feted on television; investment bankers vie to float new issues.

But it doesn’t take long before the elation and ecstasy turn into enormous pain. The market starts demanding increased earnings and revenues, and the managers begin to say: “Holy Moley! How are we going to generate the returns?” They look for legal loopholes in the accounting, and when those don’t work, even basically honest people move around the corner to outright fraud.

If they hold a lot of stock or options themselves, it is like pouring gasoline on a fire. They fudge the numbers and hope they can sell the stock or exercise the options before anything hits the fan.

Q. Are you suggesting that executives be rewarded for driving down the price of the stock?

A. I’m saying they should be rewarded for being honest. A C.E.O. should be able to tell investors, “Listen, this company isn’t worth its $70 billion market cap; it’s really worth $30 billion, and here’s why.”

But the board would fire that executive immediately. I guess it has to be preventative – if executives would present the market with realistic numbers rather than overoptimistic expectations, the stock price would stay realistic. But I admit, we scholars don’t yet know the real answer to how to make this happen.

So having led Corporate America in the wrong direction, Jensen ‘fesses up no one knows the way out. But if executives weren’t incentivized to take such a topsy-turvey shareholder-driven view of the world, they’d weigh their obligations to other constituencies, including the community at large, along with earning shareholders a decent return. But it’s now become so institutionalized it’s hard to see how to move to a more sensible regime. For instance, analysts regularly try pressuring Costco to pay its workers less, wanting fatter margins. But the comparatively high wages are an integral part of Costco’s formula: it reduces costly staff turnover and employee pilferage. And Costco’s upscale members report they prefer to patronize a store they know treats workers better than Walmart and other discounters. If managers with an established, successful formulas still encounter pressure from the Street to strip mine their companies, imagine how hard it is for struggling companies or less secure top executives to implement strategies that will take a while to reap rewards. I’ve been getting reports from McKinsey from the better part of a decade that they simply can’t get their clients to implement new initiatives if they’ll dent quarterly returns.

This governance system is actually in crisis, but the extraordinary profit share that companies have managed to achieve by squeezing workers and the asset-goosing success of post-crisis financial policies have produced an illusion of health. But porcine maquillage only improves appearances; it doesn’t mask the stench of gangrene. Nevertheless, executives have successfully hidden the generally unhealthy state of their companies. As long as they have cheerleading analysts, complacent boards and the Fed protecting their back, they can likely continue to inflict more damage, using “maximizing shareholder value” canard as the cover for continuing rent extraction.


Read more at http://www.nakedcapitalism.com/2013/10/why-the-maximizing-shareholder-value-theory-of-corporate-governance-is-bogus.html#ehj10weqAL2vdXkh.99

Jensen Comment
Mike Jensen was the headliner at the 2013 American Accounting Association Annual Meetings. AAA members can watch various videos by him and about him at the AAA Commons Website.

Actually Al Rappaport at Northwestern may have been more influential in spreading the word about creating shareholder value ---
Rappaport, Alfred (1998). Creating Shareholder Value: A guide for managers and investors. New York: The Free Press. pp. 13–29.

It would be interesting if Mike Jensen and/or Al Rappaport wrote rebuttals to this article.

Bob Jensen's threads on triple-bottom reporting ---
http://www.trinity.edu/rjensen/Theory02.htm#TripleBottom

Bob Jensen's threads on theory are at
http://www.trinity.edu/rjensen/Theory01.htm

 

 


Purpose of Theory:  Prediction Versus Explanation

Hi Steve and Jagdish,

Buried in the 2011 Denver presentation by Greg Waymire is a lament about two of my hot buttons. Greg mentions the lack of replication (shall we call them reproductions?) of findings (harvests) published in academic accounting research journals. Secondly, he mentions the lack of commentary and debate concerning these findings. It seems that there's not a whole lot of interest (debate) about those findings among practitioners or in our academy ---
http://commons.aaahq.org/hives/629d926370/summary 


At long last we are making progress in getting the attention of the American Accounting Association leaders regarding how to broaden research methods and topics of study (beyond financial reporting) in academic accounting research. The AAA Executive Committee now has annual retreats devoted to this most serious hole that accountics researchers have dug us into over the past four decades (Steve calls it a "dig" in the message from Jagdish).


Change in academic accounting research will come very slowly. Paul Williams blames the slowness of change on the accountics scientist-conspired monopoly. I'm less inclined to blame a conspiracy. I think the biggest problem is that accountics research in capital markets studies is so much easier since the data are provided like manna from heaven by CRSP, Compustat, AuditAnalytics, etc. No added scientific effort to collect data is required of accountics scientists. At CERN, however, physicists had to collect new data to cast doubt on the prevailing speed-of-light theory.


Two years ago, at a meeting, I encountered one of my former students who eventually entered a leading accounting PhD program and was completing his dissertation. When I asked him why he was doing a traditional accountics-science dissertation, he admitted that it was much easier than having to collect his own data.


Now, more to the point of the messages from Jagdish and Steve, here is my message from earlier this week about the physics of economics in general.

Purpose of Theory:  Prediction Versus Explanation

"Milton Friedman's grand illusion," by Mark Buchanan, The Physics of Finance: A look at economics and finance through the lens of physics, September 16, 2011 ---
http://physicsoffinance.blogspot.com/2011/09/milton-friedmans-grand-illusion.html

Three years ago I wrote an Op-Ed for the New York Times on the need for radical change in the way economists model whole economies. Today's General Equilibrium models -- and their slightly more sophisticated cousins, Dynamic Stochastic General Equilibrium models -- make assumptions with no basis in reality. For example, there is no financial sector in these model economies. They generally assume that the diversity of behaviour of all an economy's many firms and consumers can be ignored and simply included as the average behaviour of a few "representative" agents.

I argued then that it was about time economists started using far more sophisticated modeling tools, including agent based models, in which the diversity of interactions among economic agents can be included along with a financial sector. The idea is to model the simpler behaviours of agents as well as you can and let the macro-scale complex behaviour of the economy emerge naturally out of them, without making any restrictive assumptions about what kinds of things can or cannot happen in the larger economy. This kind of work is going forward rapidly. For some detail, I recommend this talk earlier this month by Doyne Farmer.
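An aside not in Buchanan's post: for readers unfamiliar with the approach, here is a deliberately tiny sketch of the agent-based idea he describes. The behavioral rules and parameters are invented for illustration only. Heterogeneous agents each blend a noisy private view with trend-chasing, and the aggregate price emerges from their net orders rather than from any assumed equilibrium.

    import random

    random.seed(0)
    N = 1000                                            # heterogeneous agents
    price, last_ret = 100.0, 0.0
    trend_weight = [random.random() for _ in range(N)]  # how strongly each agent chases the last move

    for t in range(51):
        net_demand = 0
        for w in trend_weight:
            # each agent blends the recent trend with a noisy private view
            signal = w * last_ret + (1 - w) * random.gauss(0.0, 0.01)
            net_demand += 1 if signal > 0 else -1
        last_ret = 0.0001 * net_demand                  # simple price-impact rule
        price *= 1.0 + last_ret
        if t % 10 == 0:
            print(f"t={t:2d}  price={price:7.2f}  net demand={net_demand:+d}")

Nothing in the sketch assumes equilibrium; whatever booms, slumps, or volatility clustering appear are emergent properties of the interacting agents, which is the point Buchanan is making about this style of modeling.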

After that Op-Ed I received quite a number of emails from economists defending the General Equilibrium approach. Several of them mentioned Milton Friedman in their defense, saying that he had shown long ago that one shouldn't worry about the realism of the assumptions in a theory, but only about the accuracy of its predictions. I eventually found the paper to which they were referring, a classic in economic history which has exerted a huge influence over economists over the past half century. I recently re-read the paper and wanted to make a few comments on Friedman's main argument. It rests entirely, I think, on a devious or slippery use of words which makes it possible to give a sensible sounding argument for what is actually a ridiculous proposition. 

The paper is entitled The Methodology of Positive Economics and was first published in 1953. It's an interesting paper and enjoyable to read. Essentially, it seems, Friedman's aim is to argue for scientific standards for economics akin to those used in physics. He begins by making a clear definition of what he means by "positive economics," which aims to be free from any particular ethical position or normative judgments. As he wrote, positive economics deals with...
 
"what is," not with "what ought to be." Its task is to provide a system of generalizations that can be used to make correct predictions about the consequences of any change in circumstances. Its performance is to be judged by the precision, scope, and conformity with experience of the predictions it yields.
Friedman then asks how one should judge the validity of a hypothesis, and asserts that...
 
...the only relevant test of the validity of a hypothesis is comparison of its predictions with experience. The hypothesis is rejected if its predictions are contradicted ("frequently" or more often than predictions from an alternative hypothesis); it is accepted if its predictions are not contradicted; great confidence is attached to it if it has survived many opportunities for contradiction. Factual evidence can never "prove" a hypothesis; it can only fail to disprove it, which is what we generally mean when we say, somewhat inexactly, that the hypothesis has been "confirmed" by experience."

So far so good. I think most scientists would see the above as conforming fairly closely to their own conception of how science should work (and of course this view is closely linked to views made famous by Karl Popper).

Next step: Friedman goes on to ask how one chooses between several hypotheses if they are all equally consistent with the available evidence. Here too his initial observations seem quite sensible:

 
...there is general agreement that relevant considerations are suggested by the criteria "simplicity" and "fruitfulness," themselves notions that defy completely objective specification. A theory is "simpler" the less the initial knowledge needed to make a prediction within a given field of phenomena; it is more "fruitful" the more precise the resulting prediction, the wider the area within which the theory yields predictions, and the more additional lines for further research it suggests.
Again, right in tune I think with the practice and views of most scientists. I especially like the final point that part of the value of a hypothesis also comes from how well it stimulates creative thinking about further hypotheses and theories. This point is often overlooked.

Friedman's essay then shifts direction. He argues that the processes and practices involved in the initial formation of a hypothesis, and in the testing of that hypothesis, are not as distinct as people often think. Indeed, this is obviously so. Many scientists form a hypothesis and try to test it, then adjust the hypothesis slightly in view of the data. There's an ongoing evolution of the hypothesis in correspondence with the data and the kinds of experiments or observations which seem interesting.

To this point, Friedman's essay says nothing that wouldn't fit into any standard discussion of the generally accepted philosophy of science from the 1950s. But this is where it suddenly veers off wildly and attempts to support a view that is indeed quite radical. Friedman mentions the difficulty in the social sciences of getting new evidence with which to test an hypothesis by looking at its implications. This difficulty, he suggests,

 
... makes it tempting to suppose that other, more readily available, evidence is equally relevant to the validity of the hypothesis-to suppose that hypotheses have not only "implications" but also "assumptions" and that the conformity of these "assumptions" to "reality" is a test of the validity of the hypothesis different from or additional to the test by implications. This widely held view is fundamentally wrong and productive of much mischief.
Having raised this idea that assumptions are not part of what should be tested, Friedman then goes on to attack very strongly the idea that a theory should strive at all to have realistic assumptions. Indeed, he suggests, a theory is actually superior insofar as its assumptions are unrealistic:
 
In so far as a theory can be said to have "assumptions" at all, and in so far as their "realism" can be judged independently of the validity of predictions, the relation between the significance of a theory and the "realism" of its "assumptions" is almost the opposite of that suggested by the view under criticism. Truly important and significant hypotheses will be found to have "assumptions" that are wildly inaccurate descriptive representations of reality, and, in general, the more significant the theory, the more unrealistic the assumptions... The reason is simple. A hypothesis is important if it "explains" much by little,...   To be important, therefore, a hypothesis must be descriptively false in its assumptions...
This is the statement that the economists who wrote to me used to defend unrealistic assumptions in General Equilibrium theories. Their point was that having unrealistic assumptions isn't just not a problem, but is a positive strength for a theory. The more unrealistic the better, as Friedman argued (and apparently proved, in the eyes of some economists).

Now, what is wrong with Friedman's argument, if anything?  I think the key issue is his use of the provocative terms such as "unrealistic" and "false" and "inaccurate" in places where he actually means "simplified," "approximate" or "incomplete."  He switches without warning between these two different meanings in order to make the conclusion seem unavoidable, and profound, when in fact it is simply not true, or something we already believe and hardly profound at all.

To see the problem, take a simple example in physics. Newtonian dynamics describes the motions of the planets quite accurately (in many cases) even if the planets are treated as point masses having no extension, no rotation, no oceans and tides, mountains, trees and so on. The great triumph of Newtonian dynamics (including his law of gravitational attraction) is its simplicity -- it asserts that out of all the many details that could conceivably influence planetary motion, two (mass and distance) matter most by far. The atmosphere of the planet doesn't matter much, nor does the amount of sunlight it reflects. The theory of course goes further to describe how other details do matter if one considers planetary motion in more detail -- rotation does matter, for example, because it generates tides which dissipate energy, taking energy slowly away from orbital motion.

But I don't think anyone would be tempted to say that Newtonian dynamics is a powerful theory because it is descriptively false in its assumptions. Its assumptions are actually descriptively simple -- that planets and the Sun have mass, and that a force acts between any two masses in proportion to the product of their masses and in inverse proportion to the square of the distance between them. From these assumptions one can work out predictions for details of planetary motion, and those details turn out to be close to what we see. The assumptions are simple and plausible, and this is what makes the theory so powerful when it turns out to make accurate predictions.
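An aside not in Buchanan's post: written out in symbols, the law he is paraphrasing is the familiar

    \[ F = \frac{G\, m_1 m_2}{r^2} \]

where m_1 and m_2 are the two masses, r is the distance between them, and G is the gravitational constant: two quantities per body and one universal constant, which is exactly the "descriptively simple" character Buchanan is pointing to.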

Indeed, if those same predictions came out of a theory with obviously false assumptions -- all planets are perfect cubes, etc. -- it would be less powerful by far because it would be less believable. Its ability to make predictions would be as big a mystery as the original phenomenon of planetary motion itself -- how can a theory that is so obviously not in tune with reality still make such accurate predictions?

So whenever Friedman says "descriptively false" I think you can instead write "descriptively simple", and clarify the meaning by adding a phrase of the sort "which identify the key factors which matter most." Do that replacement in Friedman's most provocative phrase from above and you have something far more sensible:

 
A hypothesis is important if it "explains" much by little,...   To be important, therefore, a hypothesis must be descriptively simple in its assumptions. It must identify the key factors which matter most...

That's not quite so bold, however, and it doesn't create a license for theorists to make any assumptions they want without being criticized if those assumptions stray very far from reality.

Continued in article

Jensen Comment
Especially note the comments at the end of this article.

My favorite is the following:

Herbert Simon (1963) countered Friedman by stating the purpose of scientific theories is not to make predictions, but to explain things - predictions are then tests of whether the explanations are correct.

Both Friedman and Simon's views are better directed to a field other than economics. The data at some point will always expose the frailest of assumptions; while the lack of repeatable results supports futility in the explanation of heterogeneous agents.

That's perceptive. Scientists should just steer clear of economics. Economics is so complex it is better suited to astrologers.


"How Non-Scientific Granulation Can Improve Scientific Accountics"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsGranulationCurrentDraft.pdf
By Bob Jensen
This essay takes off from the following quotation:

A recent accountics science study suggests that an audit firm's scandal with respect to someone else's audit may be a reason for clients changing auditors.
"Audit Quality and Auditor Reputation: Evidence from Japan," by Douglas J. Skinner and Suraj Srinivasan, The Accounting Review, September 2012, Vol. 87, No. 5, pp. 1737-1765.

Our conclusions are subject to two caveats. First, we find that clients switched away from ChuoAoyama in large numbers in Spring 2006, just after Japanese regulators announced the two-month suspension and PwC formed Aarata. While we interpret these events as being a clear and undeniable signal of audit-quality problems at ChuoAoyama, we cannot know for sure what drove these switches (emphasis added). It is possible that the suspension caused firms to switch auditors for reasons unrelated to audit quality. Second, our analysis presumes that audit quality is important to Japanese companies. While we believe this to be the case, especially over the past two decades as Japanese capital markets have evolved to be more like their Western counterparts, it is possible that audit quality is, in general, less important in Japan (emphasis added).

 

 


Monty Hall Paradox Video ---
http://www.youtube.com/watch?v=mhlc7peGlGg

Monty Hall Paradox Explanation ---
http://en.wikipedia.org/wiki/Monte_Hall_paradox

Jensen Comment
Of course the complication in real-life decision making, which takes it out of the realm of the Monty Hall solutions and game theory in general, is that in the real world the probabilities of finding what's behind the closed doors are unknown.

An alternate solution when the probabilities for paths leading to closed doors are unknown is the Robert Frost solution: choose the door least opened ---
http://www.trinity.edu/rjensen/tidbits/2007/tidbits070905.htm

What the Monty Hall Paradox teaches us, at least symbolically, is that sometimes the most obvious common-sense solutions to problems are not optimal. The geniuses in life discover better solutions that most of us would consider absurd at the time --- such as that time is relative and not absolute ---
http://en.wikipedia.org/wiki/Theory_of_relativity
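Readers who distrust the arithmetic can check it by brute force. Here is a minimal simulation sketch (my own illustration, not from any of the sources linked above); with enough trials the stay strategy wins about one-third of the time and the switch strategy about two-thirds:

    import random

    def play(switch):
        """One Monty Hall game; returns True if the contestant wins the car."""
        doors = [0, 1, 2]
        car = random.choice(doors)                 # door hiding the car
        pick = random.choice(doors)                # contestant's first choice
        # Monty opens a door that is neither the contestant's pick nor the car
        opened = random.choice([d for d in doors if d != pick and d != car])
        if switch:
            pick = next(d for d in doors if d != pick and d != opened)
        return pick == car

    trials = 100_000
    print("Stay wins:  ", sum(play(False) for _ in range(trials)) / trials)   # about 0.333
    print("Switch wins:", sum(play(True)  for _ in range(trials)) / trials)   # about 0.667

The simulation only works, of course, because Monty's behavior and the prior probabilities are fully specified, which is precisely the information that is missing in most real-world decisions.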

Richard Sansing forwarded the link
http://en.wikipedia.org/wiki/Principle_of_restricted_choice_(bridge)


Thank You Dana Hermanson
I think Dana Hermanson should be applauded for adding diversity to research methods during his service as Senior Editor of Accounting Horizons. Before Dana took over, Accounting Horizons (AH) had succumbed to being a clone of The Accounting Review (TAR) in a manner totally inconsistent with its original charter.

There's nothing wrong with equations per se, and they serve a vital function in research.
But must having them be a necessary condition?
How long has it been since a mainline TAR paper was published without equations?
How long will it take for a mainline TAR paper to be published that does not have equations?

Fortunately, thanks to Dana, some papers can be once again published in AH that are not replete with equations.

Steve Zeff had the guts to admit the divergence of Accounting Horizons from its original charter in his excellent presentation in San Francisco on August 4, 2010 following a plenary session at the AAA Annual Meetings.

Steve compared the missions of Accounting Horizons with its performance since AH was inaugurated. Bob Mautz faced the daunting tasks of being the first Senior Editor of AH and of setting that journal's missions for the future in the spirit dictated by the AAA Executive Committee at the time and by Jerry Searfoss (Deloitte) and the others who provided seed funding for starting up AH.

Steve Zeff first put up a list of the AH missions as laid out by Bob Mautz in the first issues of AH:

Mautz, R. K. 1987. Editorial. Accounting Horizons (September): 109-111.

Mautz, R. K. 1987. Editorial: Expectations: Reasonable or ridiculous? Accounting Horizons (December): 117-120.

Steve Zeff then discussed the early successes of AH in meeting these missions, followed by mostly years of failure to meet the original missions laid out by Bob Mautz ---
http://fisher.osu.edu/departments/accounting-and-mis/the-accounting-hall-of-fame/membership-in-hall/robert-kuhn-mautz/

Steve's PowerPoint slides are at
http://www.cs.trinity.edu/~rjensen/temp/ZeffCommentOnAccountingHorizons.ppt 

Steve’s conclusion was that AH became more like TAR than the practitioner-academy marriage journal that was originally intended. And yes, Steve did analyze the AH Commentaries as well as the mainline articles in reaching this conclusion.

 

In my view, Steve's 2010 worry about Accounting Horizons was largely remedied by Dana Hermanson.
First, Dana promoted normative commentaries that, in my opinion, would never have been accepted for publication in The Accounting Review. Examples are provided at
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm#Essays

Second, I will point to a recent Accounting Horizons paper (see below) that, in my opinion, would have had zero chance of being published in The Accounting Review. This is because it uses a normative research methodology that is not acceptable to the TAR Team unless the normative logic is dressed up as an analytical research paper complete with equations and proofs. For an example of one such normative paper all dressed up with equations and proofs, see the Laux and Newman paper discussed at
http://www.trinity.edu/rjensen/TheoryTAR.htm#Analytics

An Example of an Excellent Normative-Method Research Paper That's Not Dressed Up in Equations and Proofs
The excellent paper that would have to be dressed up with equations and proofs for publication in TAR is the following paper accepted by Dana Hermanson for Accounting Horizons. I should note that what makes analytical papers generally normative is that they are usually built upon hypothetical, untested, and often unrealistic assumptions that serve as starting points in the analysis. The analytical conclusions, like normative conclusions in general, all hinge on the starting-point assumptions, axioms, and postulates. For example, it is extremely common to assume equilibrium conditions that do not really exist in the real world. And analytical researchers assume such things as utility functions conjured from thin air. Analytical conclusions, as well as normative conclusions in general, can be of great interest and relevance in spite of the limitations of their assumptions. Robustness, however, depends upon the sensitivity of those conclusions to the underlying assumptions. This also applies to the paper below.

"Should Repurchase Transactions be Accounted for as Sales or Loans?" by  Justin Chircop , Paraskevi Vicky Kiosse , and Ken Peasnell, Accounting Horizons, December 2012, Vol. 26, No. 4, pp. 657-679. 
http://aaajournals.org/doi/full/10.2308/acch-50176

SYNOPSIS:

In this paper, we discuss the accounting for repurchase transactions, drawing on how repurchase agreements are characterized under U.S. bankruptcy law, and in light of the recent developments in the U.S. repo market. We conclude that the current accounting rules, which require the recording of most such transactions as collateralized loans, can give rise to opaqueness in a firm's financial statements because they incorrectly characterize the economic substance of repurchase agreements. Accounting for repurchase transactions as sales and the concurrent recognition of a forward, as “Repo 105” transactions were accounted for by Lehman Brothers, has, furthermore, overlooked merits. In particular, such a method provides a more comprehensive and transparent picture of the economic substance of such transactions.

. . .

CONCLUSION

This paper suggests that the current method of accounting for repos is deficient in the sense of ignoring key aspects of the economics of such transactions. Moreover, as shown in the case of Lehman Brothers, under current regulations it may be relatively easy for a firm to design a repo in such a way to accomplish a preferred accounting treatment. For example, a firm wishing to account for a repo as a sale may easily design a bilateral repo with the option not to repurchase the assets should a particular highly unlikely event occur. Such an option would make the repo eligible for sale accounting under SFAS140. In this regard, a standard uniform method of accounting for all repos would reduce the risk of such accounting arbitrage.

Various factors not considered in this paper have probably played a part in the current position adopted by the standard setters regarding repos, including the drive for convergence in accounting standards and the fact that participants in the repo market may be “unaccustomed to treating [repurchase] transactions as sales, and a change to sale treatment would have a substantial impact on their reported financial position” (FASB 2000). It would be a pity if the concerns associated with the circumstances surrounding Lehman's use of Repo 105 prevented proper consideration being given to the possibility of treating all repos in the same manner, one that will reflect the key economic and legal features of repurchase agreements. As lawyers say, hard cases make bad law. But in this case, Lehman's accounting for its Repo 105 transactions does substantially reflect the economics and legal considerations involved, that is, a sale of an asset with an associated obligation to return a substantially similar asset at the end of the agreement. An alternative approach would be to stick with the current measurement rules but provide additional disclosures. We have offered some tentative suggestions as to what kinds of additional disclosures are needed.

 

Jensen Comment
Thank you, Dana Hermanson, for resetting Accounting Horizons on a course consistent with its original charter. We can only hope the new AH editors, Paul Griffin and Arnold Wright, will carry on with this change of course that's consistent with the resolutions of the Pathways Commission Report ---
http://commons.aaahq.org/files/0b14318188/Pathways_Commission_Final_Report_Complete.pdf

By the way, the above AH paper changed my thinking about repo accounting; until now, I'd been entirely negative about recording Repo 105/109 transactions as sales ---
http://www.trinity.edu/rjensen/ecommerce/eitf01.htm#Repo
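To see in numbers why the sale-versus-loan classification matters, here is a small sketch with invented figures (my own illustration, not from the Chircop, Kiosse, and Peasnell paper). It shows the mechanics Lehman exploited: when a repo is booked as a sale and the cash is used to pay down other debt just before the reporting date, the balance sheet shrinks and the reported leverage ratio falls, even though the firm is obligated to buy the securities back days later.

    # Invented balance sheet, $ billions (illustrative only)
    assets, liabilities = 700.0, 675.0
    equity = assets - liabilities                 # 25
    repo = 50.0                                   # securities repo-ed out for cash,
                                                  # cash used to retire other short-term debt

    # Collateralized-loan treatment: the securities stay on the books and the
    # repo liability simply replaces the debt that was paid down.
    loan_assets = assets                          # 700
    loan_liabilities = liabilities - repo + repo  # 675
    print("Loan treatment leverage:", loan_assets / equity)                    # 28x

    # Sale treatment (Repo 105 style): the securities are derecognized, so both
    # sides of the balance sheet shrink by the repo amount; equity is unchanged.
    sale_assets = assets - repo                   # 650
    sale_liabilities = liabilities - repo         # 625
    sale_equity = sale_assets - sale_liabilities  # still 25
    print("Sale treatment leverage:", sale_assets / sale_equity)               # 26x

The forward repurchase obligation the authors discuss is what would bring that hidden exposure back into view if it were recognized alongside the sale.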

January 24, 2013 reply from Dana Hermanson

Bob,

I hope all is well. A colleague forwarded the material below to me.

I greatly appreciate the kind words. I should point out, though, that my co-editor, Terry Shevlin, deserves a great deal of the credit. Terry handled all of the papers on the financial side of the house at Horizons, and he was extremely open to a variety of contributions. I believe that Terry fully embraced the mission of Horizons.

Thanks again, and please feel free to share this email with others.

Dana

Dana Hermanson
Sent from my iPhone

 

 

 

 

Increasing Complexity of the World and Its Mathematical Models

Growing Knowledge: The Evolution of Research --- http://www.growingknowledge.bl.uk/
Note the link to "New Ways of doing research"

Accountics Worshippers Please Take Note
"A Nobel Lesson: Economics is Getting Messier," by Justin Fox, Harvard Business Review Blog, October 11, 2010 --- Click Here
http://blogs.hbr.org/fox/2010/10/nobel-lesson-economics-messier.html?referral=00563&cm_mmc=email-_-newsletter-_-daily_alert-_-alert_date&utm_source=newsletter_daily_alert&utm_medium=email&utm_campaign=alert_date

When Peter Diamond was a graduate student at MIT in the early 1960s, he spent much of his time studying the elegant new models of perfectly functioning markets that were all the rage in those days. Most important of all was the general equilibrium model assembled in the 1950s by Kenneth Arrow and Gerard Debreu, often referred to as the mathematical proof of the existence of Adam Smith's "invisible hand." Working through the Arrow-Debreu proofs was a major part of the MIT grad student experience. At least, that's what Diamond told me a few years ago. (If I ever find the notes of that conversation, I'll offer up some quotes.)

Diamond certainly learned well. In a long career spent almost entirely at MIT, he became known for work of staggering theoretical sophistication. As economist Steven Levitt put it today:

He wrote the kind of papers that I would have to read four or five times to get a handle on what he was doing, and even then, I couldn't understand it all.

But Diamond wasn't out to further prove the perfection of markets. He was trying instead to show how, with the injection of the tiniest bit of reality, the perfect-market models he'd learned so well in grad school began to break down. Today he won a third of the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel (it's not technically a "Nobel Prize"), mainly for a paper he wrote in 1971 that explored how, with the injection of friction between buyers and sellers in the form of what he called "search costs," prices would end up at a level far removed from what a perfect competition model would predict. The two economists who shared the prize with him, Dale Mortensen of Northwestern University and Christopher Pissarides of the London School of Economics, later elaborated on this insight with regard to job markets (as did Diamond).

The exact practical implications of this work can be a little hard to define — although Catherine Rampell makes a valiant and mostly successful effort in The New York Times. What this year's prize does clearly indicate is that the Nobel committee believes economic theory is messy and getting messier (no, I didn't come up with this insight on my own; my colleague Tim Sullivan had to nudge me). The last Nobel awarded for an all-encompassing mathematical theory of how the economic world fits together was to Robert Lucas in 1995 for his work on rational expectations. Since then (with the arguable exceptions of the prizes awarded to Robert Merton and Myron Scholes in 1997 for options-pricing and to Finn Kydland and Edward Prescott in 2004 for real-business-cycle theory) the Nobel crew has chosen to honor either interesting economic side projects or work that muddies the elegance of those grand postwar theories of rational actors buying and selling under conditions of perfect competition. The 2001 prize for work exploring the impact on markets of asymmetric information, awarded to George Akerlof, Michael Spence and Joseph Stiglitz, was probably most similar to this year's model (and, not coincidentally, Akerlof and Stiglitz were also MIT grad students in the 1960s).

The implications of messier economics are interesting to contemplate. The core insight of mainstream economics — that incentives matter — continues to hold up well. And on the whole, markets appear to do a better job of channeling those incentives to useful ends than any other form of economic organization. But beyond that, the answers one can derive from economic theory — especially answers that address the functioning of the entire economy — are complicated and often contradictory. Meaning that sometimes we non-economists are just going to have to figure things out for ourselves.

Jensen Comment
Not mentioned but certainly implied is the increased complexity of replicating and validating empirical models in terms of assumptions, missing variables, and data error. Increasing complexity will affect accountics researchers less, since replicating and validating are of less concern among accountics researchers ---
http://www.trinity.edu/rjensen/TheoryTAR.htm


"Is Modern Portfolio Theory Dead? Come On," by Paul Pfleiderer, TechCrunch, August 11, 2012 ---
http://techcrunch.com/2012/08/11/is-modern-portfolio-theory-dead-come-on/

A few weeks ago, TechCrunch published a piece arguing software is better at investing than 99% of human investment advisors. That post, titled Thankfully, Software Is Eating The Personal Investing World, pointed out the advantages of engineering-driven software solutions versus emotionally driven human judgment. Perhaps not surprisingly, some commenters (including some financial advisors) seized the moment to call into question one of the foundations of software-based investing, Modern Portfolio Theory.

Given the doubts raised by a small but vocal chorus, it’s worth spending some time to ask if we need a new investing paradigm and if so, what it should be. Answering that question helps show why MPT still is the best investment methodology out there; it enables the automated, low-cost investment management offered by a new wave of Internet startups including Wealthfront (which I advise), Personal Capital, Future Advisor and SigFig.

The basic questions being raised about MPT run something like this:

Let’s begin by briefly laying out the key insights of MPT.

MPT is based in part on the assumption that most investors don't like risk and need to be compensated for bearing it. That compensation comes in the form of higher average returns. Historical data strongly supports this assumption. For example, from 1926 to 2011 the average (geometric) return on U.S. Treasury Bills was 3.6%. Over the same period the average return on large company stocks was 9.8%; that on small company stocks was 11.2% (see 2012 Ibbotson Stocks, Bonds, Bills and Inflation (SBBI) Valuation Yearbook, Morningstar, Inc., page 23). Stocks, of course, are much riskier than Treasuries, so we expect them to have higher average returns — and they do.
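A back-of-the-envelope compounding check, using only the geometric averages quoted above over the 86 calendar years 1926-2011 (the exact SBBI dollar figures will differ somewhat; this is my own illustration, not from the article):

    # Growth of $1 compounded at the quoted geometric average returns, 1926-2011
    for name, r in [("T-bills", 0.036), ("Large stocks", 0.098), ("Small stocks", 0.112)]:
        print(f"{name:12s} $1 grows to about ${(1 + r) ** 86:,.0f}")
    # roughly $21, $3,100, and $9,200 respectively -- the scale of the reward for bearing risk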

One of MPT’s key insights is that while investors need to be compensated to bear risk, not all risks are rewarded. The market does not reward risks that can be “diversified away” by holding a bundle of investments, instead of a single investment. By recognizing that not all risks are rewarded, MPT helped establish the idea that a diversified portfolio can help investors earn a higher return for the same amount of risk.

To understand which risks can be diversified away, and why, consider Zynga. Zynga hit $14.69 in March and has since dropped to less than $2 per share. Based on what’s happened over the past few months, the major risks associated with Zynga’s stock are things such as delays in new game development, the fickle taste of consumers and changes on Facebook that affect users’ engagement with Zynga’s games.

For company insiders, who have much of their wealth tied up in the company, Zynga is clearly a risky investment. Although those insiders are exposed to huge risks, they aren’t the investors who determine the “risk premium” for Zynga. (A stock’s risk premium is the extra return the stock is expected to earn that compensates for the stock’s risk.)

Rather, institutional funds and other large investors establish the risk premium by deciding what price they’re willing to pay to hold Zynga in their diversified portfolios. If a Zynga game is delayed, and Zynga’s stock price drops, that decline has a miniscule effect on a diversified shareholder’s portfolio returns. Because of this, the market does not price in that particular risk. Even the overall turbulence in many Internet stocks won’t be problematic for investors who are well diversified in their portfolios.

Modern Portfolio Theory focuses on constructing portfolios that avoid exposing the investor to those kinds of unrewarded risks. The main lesson is that investors should choose portfolios that lie on the Efficient Frontier, the mathematically defined curve that describes the relationship between risk and reward. To be on the frontier, a portfolio must provide the highest expected return (largest reward) among all portfolios having the same level of risk. The Internet startups construct well-diversified portfolios designed to be efficient with the right combination of risk and return for their clients.
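A minimal two-asset sketch in Python of the risk-return arithmetic behind the efficient frontier; the expected returns, volatilities, and correlation below are purely illustrative assumptions, not figures from the article:

import numpy as np

# Two hypothetical assets ("stocks" and "bonds"); all numbers are illustrative.
mu = np.array([0.10, 0.04])      # expected returns
sigma = np.array([0.18, 0.06])   # volatilities (standard deviations)
rho = 0.2                        # assumed correlation between the two assets

cov = np.array([[sigma[0]**2,               rho * sigma[0] * sigma[1]],
                [rho * sigma[0] * sigma[1], sigma[1]**2]])

# Sweep the weight placed on the first asset and record portfolio risk/return.
for w in np.linspace(0.0, 1.0, 11):
    wv = np.array([w, 1.0 - w])
    port_mu = wv @ mu                    # portfolio expected return
    port_sd = np.sqrt(wv @ cov @ wv)     # portfolio standard deviation
    print(f"w_stocks={w:.1f}  E[r]={port_mu:.3f}  sd={port_sd:.3f}")

# Because the correlation is below 1, some mixes carry less risk than either
# asset alone would suggest. Plotting the (sd, E[r]) pairs traces the curve
# whose upper edge is the efficient frontier the article describes.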

Now let’s ask if anything in the past five years casts doubt on these basic tenets of Modern Portfolio Theory. The answer is clearly, “No.” First and foremost, nothing has changed the fact that there are many unrewarded risks, and that investors should avoid these risks. The major risks of Zynga stock remain diversifiable risks, and unless you’re willing to trade illegally on inside information about, say, upcoming changes to Facebook’s gaming policies, you should avoid holding a concentrated position in Zynga.

The efficient frontier is still the desirable place to be, and it makes no sense to follow a policy that puts you in a position well below that frontier.

Most of the people who say that “diversification failed” in the financial crisis have in mind not the diversification gains associated with avoiding concentrated investments in companies like Zynga, but the diversification gains that come from investing across many different asset classes, such as domestic stocks, foreign stocks, real estate and bonds. Those critics aren’t challenging the idea of diversification in general – probably because such an effort would be nonsensical.

True, diversification across asset classes didn’t shelter investors from 2008’s turmoil. In that year, the S&P 500 index fell 37%, the MSCI EAFE index (the index of developed markets outside North America) fell by 43%, the MSCI Emerging Market index fell by 53%, the Dow Jones Commodities Index fell by 35%, and the Lehman High Yield Bond Index fell by 26%. The historical record shows that in times of economic distress, asset class returns tend to move in the same direction and be more highly correlated. These increased correlations are no doubt due to the increased importance of macro factors driving corporate cash flows. The increased correlations limit, but do not eliminate, diversification’s value. It would be foolish to conclude from this that you should be undiversified. If a seat belt doesn’t provide perfect protection, it still makes sense to wear one. Statistics show it’s better to wear a seatbelt than to not wear one.  Similarly, statistics show diversification reduces risk, and that you are better off diversifying than not.

Timing the market

The obvious question to ask anyone who insists diversification across asset classes is not effective is: What is the alternative? Some say “Time the market.” Make sure you hold an asset class when it is earning good returns, but sell as soon as things are about to go south. Even better, take short positions when the outlook is negative. With a trustworthy crystal ball, this is a winning strategy. The potential gains are huge. If you had perfect foresight and could time the S&P 500 on a daily basis, you could have turned $1,000 on Jan. 1, 2000, into $120,975,000 on Dec. 31, 2009, just by going in and out of the market. If you could also short the market when appropriate, the gains would have been even more spectacular!
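To see how a perfect-foresight number of that magnitude arises, here is a sketch of the arithmetic using randomly generated stand-in returns; it does not use actual S&P 500 data, so the dollar figures it prints are only directional:

import numpy as np

# Perfect daily timing: hold the index only on up days, sit in cash (assumed
# to earn 0%) on down days. The daily returns below are made-up stand-ins.
rng = np.random.default_rng(0)
daily_returns = rng.normal(loc=0.0003, scale=0.012, size=2500)   # ~10 years

buy_and_hold = 1000 * np.prod(1 + daily_returns)
perfect_timing = 1000 * np.prod(1 + np.where(daily_returns > 0, daily_returns, 0.0))

print(f"Buy and hold:   ${buy_and_hold:,.0f}")
print(f"Perfect timing: ${perfect_timing:,.0f}")

# Skipping every losing day compounds into an astronomically larger ending
# value, which is the point: any strategy implicitly claiming that kind of
# foresight deserves deep suspicion.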

Sometimes, it seems someone may have a fairly reliable crystal ball. Consider John Paulson, who in 2007 and 2008 seemed so prescient in profiting from the subprime market’s collapse. It appears, however, that Mr. Paulson’s crystal ball became less reliable after his stunning success in 2007. His Advantage Plus fund experienced more than a 50% loss in 2011. Separating luck from skill is often difficult.

Some people try to come up with a way to time the market based on historical data. In fact a large number of strategies will work well “in the back test.” The question is whether any system is reliable enough to use for future investing.

There are at least three reasons to be cautious about substituting a timing system for diversification.

Black Swans

What about those Black Swans? Doesn’t MPT ignore the possibility that we can be surprised by the unexpected? Isn’t it impossible to measure risk when there are unknown unknowns?

Most people recognize that financial markets are not like simple games of chance where risk can be quantified precisely. As we’ve seen (e.g., the “Black Monday” stock market crash of 1987 and the “flash crash” of 2010), the markets can produce extreme events that hardly anyone contemplated as a possibility. As opposed to poker, where we always draw from the same 52-card deck, in financial markets, asset returns are drawn from changing distributions as the world economy and financial relationships change.

Some Black Swan events turned out to have limited effects on investors over the long term. Although the market dropped precipitously in October 1987, it was close to fully recovered in June 1988. The flash crash was confined to a single day.
This is not to say that all “surprise” events are transitory. The Great Depression followed the stock market crash of 1929, and the effects of the financial crisis in 2007 and 2008 linger on five years later.

The question is, how should we respond to uncertainties and Black Swans? One sensible way is to be more diligent in quantifying the risks we can see. For example, since extreme events don’t happen often, we’re likely to be misled if we base our risk assessment on what has occurred over short time periods. We shouldn’t conclude that just because housing prices haven’t gone down over 20 years that a housing decline is not a meaningful risk. In the case of natural disasters like earthquakes, tsunamis, asteroid strikes and solar storms, the long run could be very long indeed. While we can’t capture all risks by looking far back in time, taking into account long-term data means we’re less likely to be surprised.

Some people suggest you should respond to the risk of unknown unknowns by investing very conservatively. This means allocating most of the portfolio to “safe assets” and significantly reducing exposure to risky assets, which are likely to be affected by Black Swan surprises. This response is consistent with MPT. If you worry about Black Swans, you are, for all intents and purposes, a very risk-averse investor. The MPT portfolio position for very risk-averse investors is a position on the efficient frontier that has little risk.

The cost of investing in a low-risk position is a lower expected return (recall that historically the average return on stocks was about three times that on U.S. Treasuries), but maybe you think that’s a price worth paying. Can everyone take extremely conservative positions to avoid Black Swan risk? This clearly won’t work, because some investors must hold risky assets. If all investors try to avoid Black Swan events, the prices of those risky assets will fall to a point where the forecasted returns become too large to ignore.

Continued in article

Jensen Comment
All quant theories and strategies in finance are based upon some foundational assumptions that in rare instances turn into the Achilles' heel of the entire superstructure. The classic example is the wonderful theory and arbitrage strategy of Long Term Capital Management (LTCM) formed by the best quants in finance (two with Nobel Prizes in economics). After remarkable successes one nickel at a time in a secret global arbitrage strategy based heavily on the Black-Scholes Model, LTCM placed a trillion dollar bet that failed dramatically and became the only hedge fund that nearly imploded all of Wall Street. At a heavy cost, Wall Street investment bankers pooled billions of dollars to quietly shut down LTCM ---
http://www.trinity.edu/rjensen/FraudRotten.htm#LTCM

So what was the Achilles' heel of LTCM's arbitrage strategy? It was an assumption that a huge portion of the global financial market would not collapse all at once. Lo and behold, the Asian financial markets collapsed all at once and left LTCM naked and dangling from a speculative cliff.

There is a tremendous PBS Nova video called "Trillion Dollar Bet" (one of the best videos I've ever seen on the Black-Scholes Model) explaining why LTCM collapsed.  Go to http://www.pbs.org/wgbh/nova/stockmarket/ 
This video is in the media libraries on most college campuses.  I highly recommend showing this video to students.  It is extremely well done and exciting to watch.

One of the more interesting summaries is the Report of The President’s Working Group on Financial Markets, April 1999 --- http://www.ustreas.gov/press/releases/reports/hedgfund.pdf 

The principal policy issue arising out of the events surrounding the near collapse of LTCM is how to constrain excessive leverage. By increasing the chance that problems at one financial institution could be transmitted to other institutions, excessive leverage can increase the likelihood of a general breakdown in the functioning of financial markets. This issue is not limited to hedge funds; other financial institutions are often larger and more highly leveraged than most hedge funds.

What went wrong at Long Term Capital Management? --- http://www.killer-essays.com/Economics/euz220.shtml 

The video and above reports, however, do not delve into the tax shelter pushed by Myron Scholes and his other LTCM partners. A nice summary of the tax shelter case with links to other documents can be found at http://www.cambridgefinance.com/CFP-LTCM.pdf 

The above August 27, 2004 ruling by Judge Janet Bond Arterton rounds out the "Trillion Dollar Bet."

The classic and enormous scandal was Long Term Capital Management, led by Nobel Prize winners Merton and Scholes (actually the blame is shared with their devoted doctoral students).  There is a tremendous PBS Nova video ("Trillion Dollar Bet"), one of the best videos I've ever seen on the Black-Scholes Model, explaining why LTCM collapsed.  Go to http://www.pbs.org/wgbh/nova/stockmarket/ 

Another illustration of the Achilles' heel of a popular mathematical theory and strategy is the 2008 collapse of mortgage-backed CDO (collateralized debt obligation) bonds whose risk was priced with David Li's Gaussian copula function for risk diversification in portfolios. The Achilles' heel was the assumption that the real estate bubble would not burst to a point where millions of subprime mortgages would all go into default at roughly the same time.

Can the 2008 investment banking failure be traced to a math error?
Recipe for Disaster:  The Formula That Killed Wall Street --- http://www.wired.com/techbiz/it/magazine/17-03/wp_quant?currentPage=all
Link forwarded by Jim Mahar ---
http://financeprofessorblog.blogspot.com/2009/03/recipe-for-disaster-formula-that-killed.html 

Some highlights:

"For five years, Li's formula, known as a Gaussian copula function, looked like an unambiguously positive breakthrough, a piece of financial technology that allowed hugely complex risks to be modeled with more ease and accuracy than ever before. With his brilliant spark of mathematical legerdemain, Li made it possible for traders to sell vast quantities of new securities, expanding financial markets to unimaginable levels.

His method was adopted by everybody from bond investors and Wall Street banks to ratings agencies and regulators. And it became so deeply entrenched—and was making people so much money—that warnings about its limitations were largely ignored.

Then the model fell apart." The article goes on to show that correlations are at the heart of the problem.

"The reason that ratings agencies and investors felt so safe with the triple-A tranches was that they believed there was no way hundreds of homeowners would all default on their loans at the same time. One person might lose his job, another might fall ill. But those are individual calamities that don't affect the mortgage pool much as a whole: Everybody else is still making their payments on time.

But not all calamities are individual, and tranching still hadn't solved all the problems of mortgage-pool risk. Some things, like falling house prices, affect a large number of people at once. If home values in your neighborhood decline and you lose some of your equity, there's a good chance your neighbors will lose theirs as well. If, as a result, you default on your mortgage, there's a higher probability they will default, too. That's called correlation—the degree to which one variable moves in line with another—and measuring it is an important part of determining how risky mortgage bonds are."

I would highly recommend reading the entire thing that gets much more involved with the actual formula etc.

The “math error” might truly have been an error, or it might simply have been a gamble with what was perceived as minuscule odds of total market failure. Something similar happened in the disastrous 1998 collapse of Long Term Capital Management, formed by Nobel Prize-winning economists and their doctoral students who took similar gambles that ignored the “minuscule odds” of world market collapse ---
http://www.trinity.edu/rjensen/FraudRotten.htm#LTCM  

The rhetorical question is whether the failure lies in ignorance in building the model or in risk taking using the model.
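For readers who want to see how a single correlation parameter can do so much damage, here is a minimal one-factor Gaussian copula sketch in Python; the default probability and correlation values are hypothetical, chosen only to illustrate the mechanism:

import numpy as np
from scipy.stats import norm

# One-factor Gaussian copula: borrower i defaults when a latent variable
#   X_i = sqrt(rho) * M + sqrt(1 - rho) * Z_i
# (M = common macro factor, Z_i = idiosyncratic factor, both standard normal)
# falls below the threshold implied by its default probability p.
def pool_default_rates(n_loans=1000, p=0.05, rho=0.3, n_sims=5000, seed=1):
    rng = np.random.default_rng(seed)
    threshold = norm.ppf(p)
    M = rng.standard_normal((n_sims, 1))
    Z = rng.standard_normal((n_sims, n_loans))
    X = np.sqrt(rho) * M + np.sqrt(1 - rho) * Z
    return (X < threshold).mean(axis=1)   # realized default rate per scenario

for rho in (0.0, 0.3):
    rates = pool_default_rates(rho=rho)
    print(f"rho={rho}: mean default rate {rates.mean():.3f}, "
          f"99th percentile {np.percentile(rates, 99):.3f}")

# With rho = 0 the pool's default rate hardly strays from 5%, so senior
# tranches look bulletproof. Raise rho -- the equivalent of a housing collapse
# hitting everyone at once -- and the 99th-percentile default rate explodes,
# which is exactly the scenario the AAA ratings implicitly assumed away.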

"In Plato's Cave:  Mathematical models are a powerful way of predicting financial markets. But they are fallible" The Economist, January 24, 2009, pp. 10-14 ---
http://www.economist.com/specialreports/displaystory.cfm?story_id=12957753

ROBERT RUBIN was Bill Clinton’s treasury secretary. He has worked at the top of Goldman Sachs and Citigroup. But he made arguably the single most influential decision of his long career in 1983, when as head of risk arbitrage at Goldman he went to the MIT Sloan School of Management in Cambridge, Massachusetts, to hire an economist called Fischer Black.

A decade earlier Myron Scholes, Robert Merton and Black had explained how to use share prices to calculate the value of derivatives. The Black-Scholes options-pricing model was more than a piece of geeky mathematics. It was a manifesto, part of a revolution that put an end to the anti-intellectualism of American finance and transformed financial markets from bull rings into today’s quantitative powerhouses. Yet, in a roundabout way, Black’s approach also led to some of the late boom’s most disastrous lapses.

Derivatives markets are not new, nor are they an exclusively Western phenomenon. Mr Merton has described how Osaka’s Dojima rice market offered forward contracts in the 17th century and organised futures trading by the 18th century. However, the growth of derivatives in the 36 years since Black’s formula was published has taken them from the periphery of financial services to the core.

In “The Partnership”, a history of Goldman Sachs, Charles Ellis records how the derivatives markets took off. The International Monetary Market opened in 1972; Congress allowed trade in commodity options in 1976; S&P 500 futures launched in 1982, and options on those futures a year later. The Chicago Board Options Exchange traded 911 contracts on April 26th 1973, its first day (and only one month before Black-Scholes appeared in print). In 2007 the CBOE’s volume of contracts reached almost 1 trillion.

Trading has exploded partly because derivatives are useful. After America came off the gold standard in 1971, businesses wanted a way of protecting themselves against the movements in exchange rates, just as they sought protection against swings in interest rates after Paul Volcker, Mr Greenspan’s predecessor as chairman of the Fed, tackled inflation in the 1980s. Equity options enabled investors to lay off general risk so that they could concentrate on the specific types of corporate risk they wanted to trade.

The other force behind the explosion in derivatives trading was the combination of mathematics and computing. Before Black-Scholes, option prices had been little more than educated guesses. The new model showed how to work out an option price from the known price-behaviour of a share and a bond. It is as if you had a formula for working out the price of a fruit salad from the prices of the apples and oranges that went into it, explains Emanuel Derman, a physicist who later took Black’s job at Goldman. Confidence in pricing gave buyers and sellers the courage to pile into derivatives. The better that real prices correlate with the unknown option price, the more confidently you can take on any level of risk. “In a thirsty world filled with hydrogen and oxygen,” Mr Derman has written, “someone had finally worked out how to synthesise H2O.”
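As an illustration (not part of the quoted article), the formula being described is compact enough to state in a few lines of Python; the inputs shown are arbitrary:

import math
from scipy.stats import norm

def black_scholes_call(S, K, T, r, sigma):
    # Standard Black-Scholes price of a European call option.
    # S: spot price, K: strike, T: years to expiry,
    # r: continuously compounded risk-free rate, sigma: volatility.
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm.cdf(d1) - K * math.exp(-r * T) * norm.cdf(d2)

# Arbitrary illustrative inputs: at-the-money call, one year, 5% rate, 20% vol.
print(round(black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2), 2))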

Poetry in Brownian motion

Black-Scholes is just a model, not a complete description of the world. Every model makes simplifications, but some of the simplifications in Black-Scholes looked as if they would matter. For instance, the maths it uses to describe how share prices move comes from the equations in physics that describe the diffusion of heat. The idea is that share prices follow some gentle random walk away from an equilibrium, rather like motes of dust jiggling around in Brownian motion. In fact, share-price movements are more violent than that.

Over the years the “quants” have found ways to cope with this—better ways to deal with, as it were, quirks in the prices of fruit and fruit salad. For a start, you can concentrate on the short-run volatility of prices, which in some ways tends to behave more like the Brownian motion that Black imagined. The quants can introduce sudden jumps or tweak their models to match actual share-price movements more closely. Mr Derman, who is now a professor at New York’s Columbia University and a partner at Prisma Capital Partners, a fund of hedge funds, did some of his best-known work modelling what is called the “volatility smile”—an anomaly in options markets that first appeared after the 1987 stockmarket crash when investors would pay extra for protection against another imminent fall in share prices.

The fixes can make models complex and unwieldy, confusing traders or deterring them from taking up new ideas. There is a constant danger that behaviour in the market changes, as it did after the 1987 crash, or that liquidity suddenly dries up, as it has done in this crisis. But the quants are usually pragmatic enough to cope. They are not seeking truth or elegance, just a way of capturing the behaviour of a market and of linking an unobservable or illiquid price to prices in traded markets. The limit to the quants’ tinkering has been not mathematics but the speed, power and cost of computers. Nobody has any use for a model which takes so long to compute that the markets leave it behind.

The idea behind quantitative finance is to manage risk. You make money by taking known risks and hedging the rest. And in this crash foreign-exchange, interest-rate and equity derivatives models have so far behaved roughly as they should.

A muddle of mortgages

Yet the idea behind modelling got garbled when pools of mortgages were bundled up into collateralised-debt obligations (CDOs). The principle is simple enough. Imagine a waterfall of mortgage payments: the AAA investors at the top catch their share, the next in line take their share from what remains, and so on. At the bottom are the “equity investors” who get nothing if people default on their mortgage payments and the money runs out.
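A toy version of that waterfall (with made-up tranche sizes, not part of the quoted article) shows the mechanics:

def cdo_waterfall(cash_available, tranches):
    # Toy CDO waterfall: pay tranches in order of seniority until cash runs out.
    # tranches: list of (name, amount_owed) from most senior down to equity.
    payouts = {}
    for name, owed in tranches:
        paid = min(owed, cash_available)
        payouts[name] = paid
        cash_available -= paid
    return payouts

# Hypothetical pool owing 100 in total, but defaults leave only 80 of cash:
print(cdo_waterfall(80, [("AAA", 70), ("BBB", 20), ("Equity", 10)]))
# AAA is paid in full, BBB takes a partial loss, and the equity tranche gets
# nothing -- the "waterfall" the article describes.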

Despite the theory, CDOs were hopeless, at least with hindsight (doesn’t that phrase come easily?). The cash flowing from mortgage payments into a single CDO had to filter up through several layers. Assets were bundled into a pool, securitised, stuffed into a CDO, bits of that plugged into the next CDO and so on and on. Each source of a CDO had interminable pages of its own documentation and conditions, and a typical CDO might receive income from several hundred sources. It was a lawyer’s paradise.

This baffling complexity could hardly be more different from an equity or an interest rate. It made CDOs impossible to model in anything but the most rudimentary way—all the more so because each one contained a unique combination of underlying assets. Each CDO would be sold on the basis of its own scenario, using central assumptions about the future of interest rates and defaults to “demonstrate” the payouts over, say, the next 30 years. This central scenario would then be “stress-tested” to show that the CDO was robust—though oddly the tests did not include a 20% fall in house prices.

This was modelling at its most feeble. Derivatives model an unknown price from today’s known market prices. By contrast, modelling from history is dangerous. There was no guarantee that the future would be like the past, if only because the American housing market had never before been buoyed up by a frenzy of CDOs. In any case, there are not enough past housing data to form a rich statistical picture of the market—especially if you decide not to include the 1930s nationwide fall in house prices in your sample.

Neither could the models take account of falling mortgage-underwriting standards. Mr Rajan of the University of Chicago says academic research suggests mortgage originators, keen to automate their procedures, stopped giving potential borrowers lengthy interviews because they could not easily quantify the firmness of someone’s handshake or the fixity of their gaze. Such things turned out to be better predictors of default than credit scores or loan-to-value ratios, but the investors at the end of a long chain of securities could not monitor lending decisions.

The issuers of CDOs asked rating agencies to assess their quality. Although the agencies insist that they did a thorough job, a senior quant at a large bank says that the agencies’ models were even less sophisticated than the issuers’. For instance, a BBB tranche in a CDO might pay out in full if the defaults remained below 6%, and not at all once they went above 6.5%. That is an all-or-nothing sort of return, quite different from a BBB corporate bond, say. And yet, because both shared the same BBB rating, they would be modelled in the same way.

Issuers like to have an edge over the rating agencies. By paying one for rating the CDOs, some may have laid themselves open to a conflict of interest. With help from companies like Codefarm, an outfit from Brighton in Britain that knew the agencies’ models for corporate CDOs, issuers could build securities with any risk profile they chose, including those made up from lower-quality ingredients that would nevertheless win AAA ratings. Codefarm has recently applied for administration.

There is a saying on Wall Street that the test of a product is whether clients will buy it. Would they have bought into CDOs had it not been for the dazzling performance of the quants in foreign-exchange, interest-rate and equity derivatives? There is every sign that the issuing banks believed their own sales patter. The banks so liked CDOs that they held on to a lot of their own issues, even when the idea behind the business had been to sell them on. They also lent buyers much of the money to bid for CDOs, certain that the securities were a sound investment. With CDOs in deep trouble, the lenders are now suffering.

Modern finance is supposed to be all about measuring risks, yet corporate and mortgage-backed CDOs were a leap in the dark. According to Mr Derman, with Black-Scholes “you know what you are assuming when you use the model, and you know exactly what has been swept out of view, and hence you can think clearly about what you may have overlooked.” By contrast, with CDOs “you don’t quite know what you are ignoring, so you don’t know how to adjust for its inadequacies.”

Now that the world has moved far beyond any of the scenarios that the CDO issuers modelled, investors’ quantitative grasp of the payouts has fizzled into blank uncertainty. That makes it hard to put any value on them, driving away possible buyers. The trillion-dollar bet on mortgages has gone disastrously wrong. The hope is that the trillion-dollar bet on companies does not end up that way too.

Continued in article

Closing Jensen Comment
So is portfolio diversification theory dead? I hardly think so. But if any lesson is to be learned, it is that we should question those critical underlying assumptions in Plato's Cave before implementing worldwide strategies that overlook their Achilles' heel.

 


Ockham’s (or Occam's) Razor (Law of Parsimony and Succinctness) --- http://en.wikipedia.org/wiki/Ockham's_razor

"Razoring Ockham’s razor," by Massimo Pigliucci, Rationally Speaking, May 6, 2011 ---
http://rationallyspeaking.blogspot.com/2011/05/razoring-ockhams-razor.html

Scientists, philosophers and skeptics alike are familiar with the idea of Ockham’s razor, an epistemological principle formulated in a number of ways by the English Franciscan friar and scholastic philosopher William of Ockham (1288-1348). Here is one version of it, from the pen of its originator:
 
Frustra fit per plura quod potest fieri per pauciora. [It is futile to do with more things that which can be done with fewer] (Summa Totius Logicae)
 
Philosophers often refer to this as the principle of economy, while scientists tend to call it parsimony. Skeptics invoke it every time they wish to dismiss out of hand claims of unusual phenomena (after all, to invoke the “unusual” is by definition unparsimonious, so there).
 
There is a problem with all of this, however, of which I was reminded recently while reading an old paper by my colleague Elliott Sober, one of the most prominent contemporary philosophers of biology. Sober’s article is provocatively entitled “Let’s razor Ockham’s razor” and it is available for download from his web site.
 
Let me begin by reassuring you that Sober didn’t throw the razor in the trash. However, he cut it down to size, so to speak. The obvious question to ask about Ockham’s razor is: why? On what basis are we justified to think that, as a matter of general practice, the simplest hypothesis is the most likely one to be true? Setting aside the surprisingly difficult task of operationally defining “simpler” in the context of scientific hypotheses (it can be done, but only in certain domains, and it ain’t straightforward), there doesn’t seem to be any particular logical or metaphysical reason to believe that the universe is as simple as it could be.
 
Indeed, we know it’s not. The history of science is replete with examples of simpler (“more elegant,” if you are aesthetically inclined) hypotheses that had to yield to more clumsy and complicated ones. The Keplerian idea of elliptical planetary orbits is demonstrably more complicated than the Copernican one of circular orbits (because it takes more parameters to define an ellipse than a circle), and yet, planets do in fact run around the gravitational center of the solar system in ellipses, not circles.
 
Lee Smolin (in his delightful The Trouble with Physics) gives us a good history of 20th century physics, replete with a veritable cemetery of hypotheses that people thought “must” have been right because they were so simple and beautiful, and yet turned out to be wrong because the data stubbornly contradicted them.
 
In Sober’s paper you will find a discussion of two uses of Ockham’s razor in biology, George Williams’ famous critique of group selection, and “cladistic” phylogenetic analyses. In the first case, Williams argued that individual- or gene-level selective explanations are preferable to group-selective explanations because they are more parsimonious. In the second case, modern systematists use parsimony to reconstruct the most likely phylogenetic relationships among species, assuming that a smaller number of independent evolutionary changes is more likely than a larger number.
 
Part of the problem is that we do have examples of both group selection (not many, but they are there), and of non-parsimonious evolutionary paths, which means that at best Ockham’s razor can be used as a first approximation heuristic, not as a sound principle of scientific inference.
 
And it gets worse before it gets better. Sober cites Aristotle, who chided Plato for hypostatizing The Good. You see, Plato was always running around asking what makes for a Good Musician, or a Good General. By using the word Good in all these inquiries, he came to believe that all these activities have something fundamental in common, that there is a general concept of Good that gets instantiated in being a good musician, general, etc. But that, of course, is nonsense on stilts, since what makes for a good musician has nothing whatsoever to do with what makes for a good general.
 
Analogously, suggests Sober, the various uses of Ockham’s razor have no metaphysical or logical universal principle in common — despite what many scientists, skeptics and even philosophers seem to think. Williams was correct, group selection is less likely than individual selection (though not impossible), and the cladists are correct too that parsimony is usually a good way to evaluate competitive phylogenetic hypotheses. But the two cases (and many others) do not share any universal property in common.
 
What’s going on, then? Sober’s solution is to invoke the famous Duhem thesis.** Pierre Duhem suggested in 1908 that, as Sober puts it: “it is wrong to think that hypothesis H makes predictions about observation O; it is the conjunction of H&A [where A is a set of auxiliary hypotheses] that issues in testable consequences.”
 
This means that, for instance, when astronomer Arthur Eddington “tested” Einstein’s General Theory of Relativity during a famous 1919 total eclipse of the Sun — by showing that the Sun’s gravitational mass was indeed deflecting starlight by exactly the amount predicted by Einstein — he was not, strictly speaking doing any such thing. Eddington was testing Einstein’s theory given a set of auxiliary hypotheses, a set that included independent estimates of the mass of the sun, the laws of optics that allowed the telescopes to work, the precision of measurement of stellar positions, and even the technical processing of the resulting photographs. Had Eddington failed to confirm the hypotheses this would not (necessarily) have spelled the death of Einstein’s theory (since confirmed in many other ways). The failure could have resulted from the failure of any of the auxiliary hypotheses instead.
 
This is both why there is no such thing as a “crucial” experiment in science (you always need to repeat them under a variety of conditions), and why naive Popperian falsificationism is wrong (you can never falsify a hypothesis directly, only the H&A complex can be falsified).
 
What does this have to do with Ockham’s razor? The Duhem thesis explains why Sober is right, I think, in maintaining that the razor works (when it does) given certain background assumptions that are bound to be discipline- and problem-specific. So, for instance, Williams’ reasoning about group selection isn’t correct because of some generic logical property of parsimony (as Williams himself apparently thought), but because — given the sorts of things that living organisms and populations are, how natural selection works, and a host of other biological details — it is indeed much more likely than not that individual and not group selective explanations will do the work in most specific instances. But that set of biological reasons is quite different from the set that cladists use in justifying their use of parsimony to reconstruct organismal phylogenies. And needless to say, neither of these two sets of auxiliary assumptions has anything to do with the instances of successful deployment of the razor by physicists, for example.

Continued in article
Note the comments that follow

Bob Jensen's threads on theory are at
http://www.trinity.edu/rjensen/Theory01.htm


"You Might Already Know This ... ," by Benedict Carey, The New York Times, January 10, 2011 ---
http://www.nytimes.com/2011/01/11/science/11esp.html?_r=1&src=me&ref=general

In recent weeks, editors at a respected psychology journal have been taking heat from fellow scientists for deciding to accept a research report that claims to show the existence of extrasensory perception.

The report, to be published this year in The Journal of Personality and Social Psychology, is not likely to change many minds. And the scientific critiques of the research methods and data analysis of its author, Daryl J. Bem (and the peer reviewers who urged that his paper be accepted), are not winning over many hearts.

Yet the episode has inflamed one of the longest-running debates in science. For decades, some statisticians have argued that the standard technique used to analyze data in much of social science and medicine overstates many study findings — often by a lot. As a result, these experts say, the literature is littered with positive findings that do not pan out: “effective” therapies that are no better than a placebo; slight biases that do not affect behavior; brain-imaging correlations that are meaningless.

By incorporating statistical techniques that are now widely used in other sciences — genetics, economic modeling, even wildlife monitoring — social scientists can correct for such problems, saving themselves (and, ahem, science reporters) time, effort and embarrassment.

“I was delighted that this ESP paper was accepted in a mainstream science journal, because it brought this whole subject up again,” said James Berger, a statistician at Duke University. “I was on a mini-crusade about this 20 years ago and realized that I could devote my entire life to it and never make a dent in the problem.”


The statistical approach that has dominated the social sciences for almost a century is called significance testing. The idea is straightforward. A finding from any well-designed study — say, a correlation between a personality trait and the risk of depression — is considered “significant” if its probability of occurring by chance is less than 5 percent.

This arbitrary cutoff makes sense when the effect being studied is a large one — for example, when measuring the so-called Stroop effect. This effect predicts that naming the color of a word is faster and more accurate when the word and color match (“red” in red letters) than when they do not (“red” in blue letters), and is very strong in almost everyone.

“But if the true effect of what you are measuring is small,” said Andrew Gelman, a professor of statistics and political science at Columbia University, “then by necessity anything you discover is going to be an overestimate” of that effect.

Consider the following experiment. Suppose there was reason to believe that a coin was slightly weighted toward heads. In a test, the coin comes up heads 527 times out of 1,000.

Is this significant evidence that the coin is weighted?

Classical analysis says yes. With a fair coin, the chances of getting 527 or more heads in 1,000 flips is less than 1 in 20, or 5 percent, the conventional cutoff. To put it another way: the experiment finds evidence of a weighted coin “with 95 percent confidence.”

Yet many statisticians do not buy it. One in 20 is the probability of getting any number of heads above 526 in 1,000 throws. That is, it is the sum of the probability of flipping 527, the probability of flipping 528, 529 and so on.

But the experiment did not find all of the numbers in that range; it found just one — 527. It is thus more accurate, these experts say, to calculate the probability of getting that one number — 527 — if the coin is weighted, and compare it with the probability of getting the same number if the coin is fair.

Statisticians can show that this ratio cannot be higher than about 4 to 1, according to Paul Speckman, a statistician, who, with Jeff Rouder, a psychologist, provided the example. Both are at the University of Missouri and said that the simple experiment represented a rough demonstration of how classical analysis differs from an alternative approach, which emphasizes the importance of comparing the odds of a study finding to something that is known.

The point here, said Dr. Rouder, is that 4-to-1 odds “just aren’t that convincing; it’s not strong evidence.”

And yet classical significance testing “has been saying for at least 80 years that this is strong evidence,” Dr. Speckman said in an e-mail.
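The two calculations contrasted above are easy to reproduce; here is a short Python sketch (the 4-to-1 figure is the ratio of the likelihood of exactly 527 heads under the most favourable weighted coin to its likelihood under a fair coin):

from scipy.stats import binom

n, heads = 1000, 527

# Classical one-sided test: probability of 527 or more heads from a fair coin.
p_value = binom.sf(heads - 1, n, 0.5)
print(f"P(>= {heads} heads | fair coin) = {p_value:.3f}")    # just under 0.05

# Likelihood comparison: exactly 527 heads under a coin weighted to p = 0.527
# versus under a fair coin.
ratio = binom.pmf(heads, n, heads / n) / binom.pmf(heads, n, 0.5)
print(f"Likelihood ratio (weighted vs fair) = {ratio:.1f}")  # roughly 4 to 1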

The critics have been crying foul for half that time. In the 1960s, a team of statisticians led by Leonard Savage at the University of Michigan showed that the classical approach could overstate the significance of the finding by a factor of 10 or more. By that time, a growing number of statisticians were developing methods based on the ideas of the 18th-century English mathematician Thomas Bayes.

Bayes devised a way to update the probability for a hypothesis as new evidence comes in.

So in evaluating the strength of a given finding, Bayesian (pronounced BAYZ-ee-un) analysis incorporates known probabilities, if available, from outside the study.

It might be called the “Yeah, right” effect. If a study finds that kumquats reduce the risk of heart disease by 90 percent, that a treatment cures alcohol addiction in a week, that sensitive parents are twice as likely to give birth to a girl as to a boy, the Bayesian response matches that of the native skeptic: Yeah, right. The study findings are weighed against what is observable out in the world.

In at least one area of medicine — diagnostic screening tests — researchers already use known probabilities to evaluate new findings. For instance, a new lie-detection test may be 90 percent accurate, correctly flagging 9 out of 10 liars. But if it is given to a population of 100 people already known to include 10 liars, the test is a lot less impressive.

It correctly identifies 9 of the 10 liars and misses one; but it incorrectly identifies 9 of the other 90 as lying. Dividing the so-called true positives (9) by the total number of people the test flagged (18) gives an accuracy rate of 50 percent. The “false positives” and “false negatives” depend on the known rates in the population.
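The base-rate arithmetic in that screening example is worth writing out, because it is the same adjustment the Bayesian critics want applied to study findings:

# A test that is 90% accurate applied to 100 people, 10 of whom are liars.
population, liars = 100, 10
accuracy = 0.90

true_positives = accuracy * liars                        # 9 liars flagged
false_positives = (1 - accuracy) * (population - liars)  # 9 truth-tellers flagged
precision = true_positives / (true_positives + false_positives)

print(f"Flagged: {true_positives + false_positives:.0f} people, "
      f"of whom only {precision:.0%} are actually liars")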

Continued in article

What went wrong with accountics research ---
http://www.trinity.edu/rjensen/Theory01.htm#WhatWentWrong


It ain’t what we don’t know that gives us trouble, it’s what we know that just ain’t so.
Josh Billings

Interesting Quotation for Accountics Researchers Who Tend Not to Check for Validity With Replication Efforts

"On Early Warning Signs," by George Sugihara. December 20, 2010 ---
http://seedmagazine.com/content/article/on_early_warning_signs/
Thank you Miguel.

. . .

Nonlinear systems, however, are not so well behaved. They can appear stationary for a long while, then without anything changing, they exhibit jumps in variability—so-called “heteroscedasticity.” For example, if one looks at the range of economic variables over the past decade (daily market movements, GDP changes, etc.), one might guess that variability and the universe of possibilities are very modest. This was the modus operandi of normal risk management. As a consequence, the likelihood of some of the large moves we saw in 2008, which happened over so many consecutive days, should have been less than once in the age of the universe.

Our problem is that the scientific desire to simplify has taken over, something that Einstein warned against when he paraphrased Occam: “Everything should be made as simple as possible, but not simpler.” Thinking of natural and economic systems as essentially stable and decomposable into parts is a good initial hypothesis, but current observations and measurements do not support that hypothesis—hence our continual surprise. Just as we like the idea of constancy, we are stubborn to change. The 19th century American humorist Josh Billings, perhaps, put it best: “It ain’t what we don’t know that gives us trouble, it’s what we know that just ain’t so.”

Continued in article
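A quick back-of-the-envelope check of Sugihara's point, comparing a normal model of daily moves with a heavier-tailed alternative; the 7-standard-deviation move and the Student-t degrees of freedom are illustrative choices, not estimates from market data:

from scipy.stats import norm, t

z = 7.0                      # a "7-sigma" daily move
p_normal = norm.sf(z)        # one-sided tail probability if moves were normal
p_fat = t.sf(z, df=3)        # same move under a fat-tailed Student-t(3)

print(f"Normal model:  about 1 day in {1/p_normal:.2e}")
print(f"Student-t(3):  about 1 day in {1/p_fat:.0f}")

# Under the normal model a 7-sigma day should occur less than once in several
# hundred billion days -- far longer than markets have existed -- yet 2008
# produced a cluster of moves on that scale. A fat-tailed model makes such
# days rare but entirely plausible.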


Is anecdotal evidence irrelevant?

A subscriber to the AECM from whom we hear quite often asked me to elaborate on the nature of anecdotal evidence. My reply may be of interest to other subscribers to the AECM.

 

Hi XXXXX,

Statistical inference --- http://en.wikipedia.org/wiki/Statistical_inference 


Anecdotal Evidence --- http://en.wikipedia.org/wiki/Anecdotal_evidence 


Humanities research is nearly always anecdotal. History research, for example, delves through the original correspondence (letters, memos, and now email messages) of great people in history to discover more about the causes of historical events. This, however, is anecdotal research, and the quality of such historical anecdotal evidence varies greatly.


Legal research is generally anecdotal, although court cases often use statistical inference studies as part, but not all, of the total evidence packages in the court cases.


Scientific research is both inferential and anecdotal. Anecdotal evidence often provides the creative ideas for hypotheses that are later put to more rigorous tests.


National Center for Case Study Teaching in Science ---
http://sciencecases.lib.buffalo.edu/cs/


But between the anecdote and the truly random sample is evidence that is neither totally anecdotal nor rigorously scientific.  For example, it's literally impossible to identify the population of tax cheaters in the underground cash-only economy. Hence, from a strictly inferential standpoint it's impossible to conduct truly random samples on such unknown populations.


Nevertheless, the IRS and other researchers do conduct various types of "anecdotal investigations" of how people cheat on their taxes, including cheating in the underground cash-only economy. One approach is the IRS policy of conducting samplings (not random) of full audits designed not so much to collect revenue or punish wrongdoers as to discover how people comply with tax rules and devise legal or illegal ploys for avoiding or deferring taxes. This is anecdotal research.


In both of the instances you refer to, I provided only anecdotal evidence that I called "cases." In fact, virtually all case studies are anecdotal in the sense that statistical inference tests are generally not feasible ---
http://www.trinity.edu/rjensen/000aaa/thetools.htm#Cases 


However, it is common knowledge that there's a vast underground cash-only economy. And the court records are clogged with cases of persons who got caught cheating on welfare, cheating on taxes, receiving phony disability insurance settlements and Social Security payments, etc. But these court cases are probably only the tip of the iceberg in terms of the millions more who get away with cheating in the cash-only underground economy.


The problem with accountics research published in TAR, JAR, and JAE is that it requires statistical inference or analytics resting on assumptions that are usually unrealistic or unproven. The net result has been very sophisticated research findings that are of little interest to the profession, because the research methodology and unrealistic assumptions limit accountics research to mostly uninteresting problems. Analytical accountics research problems are sometimes interesting, but the findings are usually no better than, or even worse than, anecdotal evidence because of those unrealistic and unproven assumptions ---
http://www.trinity.edu/rjensen/TheoryTAR.htm


It is obvious that accountics researchers have limited themselves to mostly uninteresting problems. In real science, scientists demand that interesting research findings be replicated. Since accountics scientists almost never demand or even encourage replication (by publishing replications), this is prima facie evidence of the lack of relevance of their own research findings.


AAA leaders are now having retreats focused on how to make accountics research more relevant to the academic world (read: accounting teachers) and the professional world ---
http://aaahq.org/pubs/AEN/2012/AEN_Winter12_WEB.pdf  


Anecdotal research in accounting generally focuses on more interesting problems than accountics research does. But anecdotal findings are not easily extrapolated to general conclusions. Anecdotal evidence often builds up to where it becomes more and more convincing. For example, it did not take long in the early 1990s to discover that companies were entering into hundreds of billions and then trillions of dollars in interest rate swaps because there were no domestic or international accounting rules for even disclosing interest rate swaps, let alone booking them. In many instances companies were entering into such swaps for off-balance-sheet financing (OBSF).


As the anecdotal evidence on swap OBSF mounted like grains of sand, the Director of the SEC told the Chairman of the FASB that the three major problems to be addressed by the FASB were to be "derivatives, derivatives, and derivatives." And the leading problem with derivatives was that forward contracts and swaps (portfolios of forward contracts) were not even disclosed, let alone booked.


Without a single accountics study of interest rate swaps amongst the mountain of anecdotal evidence of OBSF cheating with interest rate swaps, we soon had FAS 133, which required the booking of interest rate swaps and at least quarterly resets of the carrying values of these swaps to fair market value --- that is the power of anecdotal evidence rather than accountics evidence.


In a similar manner, the IRS is making inroads on reducing tax cheating in the underground economy using evidence piled up from anecdotal rather than strictly scientific research. For example, a huge step was made when the IRS commenced requiring 1099 forms and coding the information into IRS computers. Before then, for example, most professors who received small consulting fees and honoraria forgot about such fees when they filed their taxes. Now they're reminded after December 31 when they receive their copies of the 1099 forms filed with the IRS.


But I can assure you, based upon my anecdotal evidence, that the underground economy is still alive and thriving in San Antonio when it comes to the type of "cash only" labor that I list at
http://www.cs.trinity.edu/~rjensen/temp/TaxNoTax.htm 



And I can assure you, without knowing of a single accountics study of the underground cash-only economy, that this economy is alive and thriving. Mountains of anecdotal evidence reveal that the underground economy greatly inhibits the prevention of cheating on taxes, welfare, disability claims, Medicaid, etc.


Interestingly, however, the underground cash-only economy often makes it easier for poor people to attain the American Dream.


Case Studies in Gaming the Income Tax Laws
 http://www.cs.trinity.edu/~rjensen/temp/TaxNoTax.htm

 

Question
What would be the best way to reduce cheating on taxes, welfare, Medicaid, etc.?


Answer
Go to a cashless society that is now technically feasible but politically impossible since members of Congress themselves thrive on cheating in the underground cash-only economy.

 

Respectfully,
Bob Jensen

 

"A Pragmatist Defence of Classical Financial Accounting Research," by Brian A. Rutherford, Abacus, Volume 49, Issue 2, pages 197–218, June 2013 ---
http://onlinelibrary.wiley.com/doi/10.1111/abac.12003/abstract

The reason for the disdain in which classical financial accounting research has come to be held by many in the scholarly community is its allegedly insufficiently scientific nature. While many have defended classical research or provided critiques of post-classical paradigms, the motivation for this paper is different. It offers an epistemologically robust underpinning for the approaches and methods of classical financial accounting research that restores its claim to legitimacy as a rigorous, systematic and empirically grounded means of acquiring knowledge. This underpinning is derived from classical philosophical pragmatism and, principally, from the writings of John Dewey. The objective is to show that classical approaches are capable of yielding serviceable, theoretically based solutions to problems in accounting practice.

Jensen Comment
When it comes to "insufficient scientific nature" of classical accounting research I should note yet once again that accountics science never attained the status of real science where the main criteria are scientific searches for causes and an obsession with replication (reproducibility) of findings.

Accountics science is overrated because it only achieved the status of a pseudo science ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm#Pseudo-Science

"Research on Accounting Should Learn From the Past" by Michael H. Granof and Stephen A. Zeff, Chronicle of Higher Education, March 21, 2008

The unintended consequence has been that interesting and researchable questions in accounting are essentially being ignored. By confining the major thrust in research to phenomena that can be mathematically modeled or derived from electronic databases, academic accountants have failed to advance the profession in ways that are expected of them and of which they are capable.

Academic research has unquestionably broadened the views of standards setters as to the role of accounting information and how it affects the decisions of individual investors as well as the capital markets. Nevertheless, it has had scant influence on the standards themselves.

Continued in article

"Research on Accounting Should Learn From the Past," by Michael H. Granof and
 Stephen A. Zeff, Chronicle of Higher Education, March 21, 2008

. . .

The narrow focus of today's research has also resulted in a disconnect between research and teaching. Because of the difficulty of conducting publishable research in certain areas — such as taxation, managerial accounting, government accounting, and auditing — Ph.D. candidates avoid choosing them as specialties. Thus, even though those areas are central to any degree program in accounting, there is a shortage of faculty members sufficiently knowledgeable to teach them.

To be sure, some accounting research, particularly that pertaining to the efficiency of capital markets, has found its way into both the classroom and textbooks — but mainly in select M.B.A. programs and the textbooks used in those courses. There is little evidence that the research has had more than a marginal influence on what is taught in mainstream accounting courses.

What needs to be done? First, and most significantly, journal editors, department chairs, business-school deans, and promotion-and-tenure committees need to rethink the criteria for what constitutes appropriate accounting research. That is not to suggest that they should diminish the importance of the currently accepted modes or that they should lower their standards. But they need to expand the set of research methods to encompass those that, in other disciplines, are respected for their scientific standing. The methods include historical and field studies, policy analysis, surveys, and international comparisons when, as with empirical and analytical research, they otherwise meet the tests of sound scholarship.

Second, chairmen, deans, and promotion and merit-review committees must expand the criteria they use in assessing the research component of faculty performance. They must have the courage to establish criteria for what constitutes meritorious research that are consistent with their own institutions' unique characters and comparative advantages, rather than imitating the norms believed to be used in schools ranked higher in magazine and newspaper polls. In this regard, they must acknowledge that accounting departments, unlike other business disciplines such as finance and marketing, are associated with a well-defined and recognized profession. Accounting faculties, therefore, have a special obligation to conduct research that is of interest and relevance to the profession. The current accounting model was designed mainly for the industrial era, when property, plant, and equipment were companies' major assets. Today, intangibles such as brand values and intellectual capital are of overwhelming importance as assets, yet they are largely absent from company balance sheets. Academics must play a role in reforming the accounting model to fit the new postindustrial environment.

Third, Ph.D. programs must ensure that young accounting researchers are conversant with the fundamental issues that have arisen in the accounting discipline and with a broad range of research methodologies. The accounting literature did not begin in the second half of the 1960s. The books and articles written by accounting scholars from the 1920s through the 1960s can help to frame and put into perspective the questions that researchers are now studying.

Continued in article

How accountics scientists should change ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm


 


Statistical Inference Versus Substantive Inference

A commenter writing under the name Centurion comments as follows on the following article:
"One Economist's Mission to Redeem the Field of Finance," about Robert Shiller, Chronicle of Higher Education, April 8, 2012 ---
http://chronicle.com/article/Robert-Shillers-Mission-to/131456/

Economics as a "science" is no different than Sociology, Psychology, Criminal Justice, Political Science, etc.,etc.. To those in the "hard sciences" [physics, biology, chemistry, mathematics], these "soft sciences" are dens of thieves. Thieves who have stolen the "scientific method" and abused it.

These soft sciences all apply the scientific method to biased and insufficient data sets, then claim to be "scientific", then assert their opinions and biases as scientific results. They point to "correlations". Correlations which are made even though they know they do not know all the forces/factors involved nor the ratio of effect from the forces/factors.

They know their mathematical formulas and models are like taking only a few pieces of evidence from a crime scene and then constructing an elaborate "what happened" prosecution and defense. Yet neither side has any real idea, other than in the general sense, what happened. They certainly have no idea what all the factors or human behaviors were involved, nor the true motives.

Hence the growing awareness of the limitations of all the quantitative models that led to the financial crisis/financial WMDs going off.

Take for example the now thoroughly discredited financial and economic models that claimed validity through the use of the same mathematics used to make atomic weapons; Monte Carlo simulation. MC worked on the Manhattan Project because real scientists, who obeyed the laws of science when it came to using data, were applying the mathematics to a valid data set.

Economists and Wall Street Quants threw out the data set disciplines of science. The Quants of Wall Street and those scientists who claimed the data proved man-made global warming share the same sin of deception. Why? For the same reason: doing so allowed them to continue their work in the lab. They got to continue to experiment and "do science" -- science paid for by those with a deep vested financial interest in the false correlations proclaimed by these soft science dogmas.

If you take away a child's crayons and give him oil paints used by Michelangelo, you're not going to get the Sistine Chapel. You're just going to get a bigger mess.

If Behavioral Finance proves anything it is how far behind the other Social Sciences economists really are. And if the "successes" of the Social Sciences are any indication, a lot bigger messes are waiting down the road.

Centurion
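Centurion's Monte Carlo complaint is, at bottom, a garbage-in, garbage-out point: the simulation machinery is only as trustworthy as the distributional assumptions fed into it. The sketch below (in Python) is an illustration of that point, not anything from Centurion; the daily-return distributions in it are entirely hypothetical. It simply shows that the same Monte Carlo procedure yields very different estimates of the loss exceeded on the worst 1% of days depending on whether thin-tailed or fat-tailed inputs are assumed.

# A minimal Monte Carlo sketch (hypothetical return distributions) of the
# garbage-in, garbage-out point: identical simulation machinery, very
# different tail-risk estimates, depending only on the assumed inputs.
import numpy as np

rng = np.random.default_rng(7)
n_sims = 100_000

# Thin-tailed assumption: daily returns drawn from Normal(0, 1%).
normal_returns = rng.normal(loc=0.0, scale=0.01, size=n_sims)

# Fat-tailed assumption: Student-t with 3 degrees of freedom, same 1% scale.
fat_tailed_returns = 0.01 * rng.standard_t(df=3, size=n_sims)

for label, draws in [("thin-tailed (normal)", normal_returns),
                     ("fat-tailed (t, df=3)", fat_tailed_returns)]:
    loss_1pct = -np.percentile(draws, 1)  # loss exceeded on the worst 1% of simulated days
    print(f"{label:22s} 1% value-at-risk ~ {loss_1pct:.2%}")

Both runs are "Monte Carlo," yet the fat-tailed assumption roughly doubles the estimated tail loss; the method by itself settles nothing about which input distribution resembles reality.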

"The Standard Error of Regressions," by Deirdre N. McCloskey and Stephen T. Ziliak, Journal of Economic Literature, 1996, pp. 97-114

THE IDEA OF statistical significance is old, as old as Cicero writing on forecasts (Cicero, De Divinatione, I. xiii. 23). In 1773 Laplace used it to test whether comets came from outside the solar system (Elizabeth Scott 1953, p. 20). The first use of the very word "significance" in a statistical context seems to be John Venn's, in 1888, speaking of differences expressed in units of probable error,

They inform us which of the differences in the above tables are permanent and significant, in the sense that we may be tolerably confident that if we took another similar batch we should find a similar difference; and which are merely transient and insignificant, in the sense that another similar batch is about as likely as not to reverse the conclusion we have obtained. (Venn, quoted in Lancelot Hogben 1968, p. 325).

Statistical significance has been much used since Venn, and especially since Ronald Fisher. The problem, and our main point, is that a difference can be permanent (as Venn put it) without being "significant" in other senses, such as for science or policy. And a difference can be significant for science or policy and yet be insignificant statistically, ignored by the less thoughtful researchers.

In the 1930s Jerzy Neyman and Egon S. Pearson, and then more explicitly Abraham Wald, argued that actual investigations should depend on substantive not merely statistical significance. In 1933 Neyman and Pearson wrote of type I and type II errors:

Is it more serious to convict an innocent man or to acquit a guilty? That will depend on the consequences of the error; is the punishment death or fine; what is the danger to the community of released criminals; what are the current ethical views on punishment? From the point of view of mathematical theory all that we can do is to show how the risk of errors may be controlled and minimised. The use of these statistical tools in any given case, in determining just how the balance should be struck, must be left to the investigator. (Neyman and Pearson 1933, p. 296; italics supplied)

Wald went further:

The question as to how the form of the weight [that is, loss] function . . . should be determined, is not a mathematical or statistical one. The statistician who wants to test certain hypotheses must first determine the relative importance of all possible errors, which will depend on the special purposes of his investigation. (1939, p. 302, italics supplied)

To date no empirical studies have been undertaken measuring the use of statistical significance in economics. We here examine the alarming hypothesis that ordinary usage in economics takes statistical significance to be the same as economic significance. We compare statistical best practice against leading textbooks of recent decades and against the papers using regression analysis in the 1980s in the American Economic Review.

 

An Example

. . .

V. Taking the Con Out of Confidence Intervals

In a squib published in the American Economic Review in 1985 one of us claimed that "[r]oughly three-quarters of the contributors to the American Economic Review misuse the test of statistical significance" (McCloskey 1985, p. 201). The full survey confirms the claim, and in some matters strengthens it.

We would not assert that every economist misunderstands statistical significance, only that most do, and these some of the best economic scientists. By way of contrast to what most understand statistical significance to be capable of saying, Edward Lazear and Robert Michael wrote 17 pages of empirical economics in the AER, using ordinary least squares on two occasions, without a single mention of statistical significance (AER Mar. 1980, pp. 96-97, pp. 105-06). This is notable considering they had a legitimate sample, justifying a discussion of statistical significance were it relevant to the scientific questions they were asking. Estimated coefficients in the paper are interpreted carefully, and within a conversation in which they ask how large is large (pp. 97, 101, and throughout).

The low and falling cost of calculation, together with a widespread though unarticulated realization that after all the significance test is not crucial to scientific questions, has meant that statistical significance has been valued at its cost. Essentially no one believes a finding of statistical significance or insignificance.

This is bad for the temper of the field. My statistical significance is a "finding"; yours is an ornamented prejudice.

Continued in article
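The McCloskey and Ziliak distinction between statistical and substantive significance can be made concrete with a minimal sketch. The effect sizes and sample sizes below are hypothetical choices for illustration, not figures from their survey: a substantively trivial difference passes the significance test when the sample is enormous, while a substantively large difference fails it when the sample is small.

# A minimal sketch (hypothetical numbers) of statistical vs. substantive
# significance: the p-value depends on sample size as much as on the size
# of the effect that actually matters for science or policy.
import math
from scipy import stats

def two_sided_p(effect, sd, n):
    """Two-sided p-value of a one-sample t-test for an observed mean
    difference `effect`, sample standard deviation `sd`, and sample size `n`."""
    t = effect / (sd / math.sqrt(n))
    return 2 * stats.t.sf(abs(t), df=n - 1)

# Tiny effect, huge sample: "statistically significant," substantively trivial.
print(two_sided_p(effect=0.01, sd=1.0, n=1_000_000))   # roughly 1e-23

# Large effect, small sample: substantively important, not "significant."
print(two_sided_p(effect=0.50, sd=1.0, n=10))          # roughly 0.15

Whether a 0.01-standard-deviation difference matters, or whether a half-standard-deviation difference can be ignored, is precisely the loss-function question that Neyman, Pearson, and Wald said the investigator, not the test, must answer.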

Jensen at the 2012 AAA Meetings?
http://aaahq.org/AM2012/program.cfm
A Forthcoming AAA Plenary Session to Note

Sudipta Basu called my attention to the 2012 AAA annual meeting website that now lists the plenary speakers.
See: http://aaahq.org/AM2012/Speakers.cfm

In particular note the following speaker

Deirdre McCloskey Distinguished Professor of Economics, History, English, and Communication, University of Illinois at Chicago ---
http://www.deirdremccloskey.com/

Deirdre McCloskey teaches economics, history, English, and communication at the University of Illinois at Chicago. A well-known economist and historian and rhetorician, she has written sixteen books and around 400 scholarly pieces on topics ranging from technical economics and statistics to transgender advocacy and the ethics of the bourgeois virtues. She is known as a "conservative" economist, Chicago-School style (she taught for 12 years there), but protests that "I'm a literary, quantitative, postmodern, free-market, progressive Episcopalian, Midwestern woman from Boston who was once a man. Not 'conservative'! I'm a Christian libertarian."

Her latest book, Bourgeois Dignity: Why Economics Can't Explain the Modern World (University of Chicago Press, 2010), which argues that an ideological change rather than saving or exploitation is what made us rich, is the second in a series of four on The Bourgeois Era. The first was The Bourgeois Virtues: Ethics for an Age of Commerce (2006), asking if a participant in a capitalist economy can still have an ethical life (briefly, yes). With Stephen Ziliak she wrote The Cult of Statistical Significance (2008), which criticizes the proliferation of tests of "significance" and was in 2011 the basis of a Supreme Court decision.


Professor Basu called my attention to the plan for Professor McCloskey to discuss accountics science with a panel in a concurrent session following her plenary session. I had not originally intended to attend the 2012 AAA meetings because of my wife's poor health. But the chance to be in the program with Professor McCloskey on the topic of accountics science is just too tempting. My wife is now insisting that I go to these meetings and that she will come along with me. One nice thing for us is that Southwest flies nonstop from Manchester to Baltimore with no stressful change of flights for her.

I think I am going to accept Professor Basu's kind invitation to be on this panel.

I think we are making progress against the "Cult of Statistical Significance."


2012 AAA Meeting Plenary Speakers and Response Panel Videos ---
http://commons.aaahq.org/hives/20a292d7e9/summary
I think you have to be an AAA member and log into the AAA Commons to view these videos.
Bob Jensen is an obscure speaker following the handsome Rob Bloomfield
in the 1.02 Deirdre McCloskey Follow-up Panel—Video ---
http://commons.aaahq.org/posts/a0be33f7fc

My threads on Deirdre McCloskey and my own talk are at
http://www.cs.trinity.edu/~rjensen/temp/DeirdreMcCloskey/StatisticalSignificance01.htm

September 13, 2012 reply from Jagdish Gangolly

Bob,

Thank you so much for posting this.

What a wonderful speaker Deirdre McCloskey is! She reminded me of J.R. Hicks, who was also a stammerer. For an economist, her deep and remarkable understanding of statistics amazed me.

It was nice to hear about Gosset, perhaps the only human being who got along well with both Karl Pearson and R.A. Fisher, getting along with the latter being itself a Herculean feat.

Although Gosset was helped in the mathematical derivation of small-sample theory by Karl Pearson, Pearson did not appreciate its importance; that appreciation was left to Pearson's nemesis, R.A. Fisher. It is remarkable that Gosset could work with these two giants who couldn't stand each other.

In later life Fisher and Gosset parted ways in that Fisher was a proponent of randomization of experiments while Gosset was a proponent of systematic planning of experiments, and in fact Gosset proved decisively that balanced designs are more precise, powerful, and efficient than Fisher's randomized experiments (see http://sites.roosevelt.edu/sziliak/files/2012/02/William-S-Gosset-and-Experimental-Statistics-Ziliak-JWE-2011.pdf ).
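A minimal simulation sketch of that precision claim (the field layout, fertility gradient, and plot yields below are hypothetical, not Ziliak's data): when adjacent plots share fertility, treating exactly one plot per pair cancels the gradient, so the treatment-effect estimate varies far less across repetitions than under complete randomization.

# A minimal sketch (hypothetical field with a fertility gradient) comparing
# the precision of a balanced paired design against complete randomization.
import numpy as np

rng = np.random.default_rng(0)
n_pairs, true_effect, n_trials = 10, 2.0, 5_000
# Adjacent plots share the same fertility, which rises steadily across the field.
fertility = np.repeat(np.linspace(0.0, 10.0, n_pairs), 2)

def estimate(balanced):
    if balanced:
        # Balanced pairs: exactly one plot of each adjacent pair is treated.
        treated = np.tile([True, False], n_pairs)
    else:
        # Complete randomization: any half of the plots may end up treated.
        treated = rng.permutation(2 * n_pairs) < n_pairs
    yields = fertility + true_effect * treated + rng.normal(0.0, 1.0, 2 * n_pairs)
    return yields[treated].mean() - yields[~treated].mean()

for balanced in (True, False):
    estimates = [estimate(balanced) for _ in range(n_trials)]
    label = "balanced pairs        " if balanced else "complete randomization"
    print(f"{label}: spread (sd) of estimated effect = {np.std(estimates):.2f}")

Both designs estimate the true effect of 2.0 without bias, but with the gradient cancelling within pairs the balanced layout's estimates cluster several times more tightly around it than the randomized layout's.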

I remember my father (who designed experiments in horticulture for a living) telling me the virtues of balanced designs at the same time my professors in school were extolling the virtues of randomisation.

In Gosset we also find seeds of Bayesian thinking in his writings.

While I have always had a great regard for Fisher (a visit to the tree he planted at the Indian Statistical Institute in Calcutta was for me more of a pilgrimage), I think his influence on the development of statistics was less than ideal.

Regards,

Jagdish

Jagdish S. Gangolly
Department of Informatics College of Computing & Information
State University of New York at Albany
Harriman Campus, Building 7A, Suite 220
Albany, NY 12222 Phone: 518-956-8251, Fax: 518-956-8247

Hi Jagdish,

You're one of the few people who can really appreciate Deirdre's scholarship in history, economics, and statistics. When she stumbled for what seemed like forever trying to get a word out, it made that word all the easier to remember afterwards.


Interestingly, two Nobel economists slugged it out over the very essence of theory some years back. Herb Simon insisted that the purpose of theory was to explain. Milton Friedman went off on the F-Twist tangent, saying that it was enough if a theory merely predicted. I lost some (certainly not all) respect for Friedman over this. Deirdre, who knew Milton, claims that deep in his heart, Milton did not ultimately believe this to the degree that it is attributed to him. Of course Deirdre herself is not a great admirer of Neyman, Savage, or Fisher.

Friedman's essay "The Methodology of Positive Economics" (1953) provided the epistemological pattern for his own subsequent research and to a degree that of the Chicago School. There he argued that economics as science should be free of value judgments for it to be objective. Moreover, a useful economic theory should be judged not by its descriptive realism but by its simplicity and fruitfulness as an engine of prediction. That is, students should measure the accuracy of its predictions, rather than the 'soundness of its assumptions'. His argument was part of an ongoing debate among such statisticians as Jerzy Neyman, Leonard Savage, and Ronald Fisher.

"Milton Friedman's grand illusion," by Mark Buchanan, The Physics of Finance: A look at economics and finance through the lens of physics, September 16, 2011 ---
 http://physicsoffinance.blogspot.com/2011/09/milton-friedmans-grand-illusion.html

Many of us on the AECM are not great admirers of positive economics ---
http://www.trinity.edu/rjensen/theory02.htm#PostPositiveThinking

Everyone is entitled to their own opinion, but not their own facts.
Senator Daniel Patrick Moynihan --- FactCheck.org ---
http://www.factcheck.org/

Then again, maybe we're all entitled to our own facts!

"The Power of Postpositive Thinking," Scott McLemee, Inside Higher Ed, August 2, 2006 --- http://www.insidehighered.com/views/2006/08/02/mclemee

In particular, a dominant trend in critical theory was the rejection of the concept of objectivity as something that rests on a more or less naive epistemology: a simple belief that “facts” exist in some pristine state untouched by “theory.” To avoid being naive, the dutiful student learned to insist that, after all, all facts come to us embedded in various assumptions about the world. Hence (ta da!) “objectivity” exists only within an agreed-upon framework. It is relative to that framework. So it isn’t really objective....

What Mohanty found in his readings of the philosophy of science were much less naïve, and more robust, conceptions of objectivity than the straw men being thrashed by young Foucauldians at the time. We are not all prisoners of our paradigms. Some theoretical frameworks permit the discovery of new facts and the testing of interpretations or hypotheses. Others do not. In short, objectivity is a possibility and a goal — not just in the natural sciences, but for social inquiry and humanistic research as well.

Mohanty’s major theoretical statement on PPR arrived in 1997 with Literary Theory and the Claims of History: Postmodernism, Objectivity, Multicultural Politics (Cornell University Press). Because poststructurally inspired notions of cultural relativism are usually understood to be left wing in intention, there is often a tendency to assume that hard-edged notions of objectivity must have conservative implications. But Mohanty’s work went very much against the current.

“Since the lowest common principle of evaluation is all that I can invoke,” wrote Mohanty, complaining about certain strains of multicultural relativism, “I cannot — and consequently need not — think about how your space impinges on mine or how my history is defined together with yours. If that is the case, I may have started by declaring a pious political wish, but I end up denying that I need to take you seriously.”

PPR did not require throwing out the multicultural baby with the relativist bathwater, however. It meant developing ways to think about cultural identity and its discontents. A number of Mohanty’s students and scholarly colleagues have pursued the implications of postpositive identity politics. I’ve written elsewhere about Moya, an associate professor of English at Stanford University who has played an important role in developing PPR ideas about identity. And one academic critic has written an interesting review essay on early postpositive scholarship — highly recommended for anyone with a hankering for more cultural theory right about now.

Not everybody with a sophisticated epistemological critique manages to turn it into a functioning think tank — which is what started to happen when people in the postpositive circle started organizing the first Future of Minority Studies meetings at Cornell and Stanford in 2000. Others followed at the University of Michigan and at the University of Wisconsin in Madison. Two years ago FMS applied for a grant from Mellon Foundation, receiving $350,000 to create a series of programs for graduate students and junior faculty from minority backgrounds.

The FMS Summer Institute, first held in 2005, is a two-week seminar with about a dozen participants — most of them ABD or just starting their first tenure-track jobs. The institute is followed by a much larger colloquium (the part I got to attend last week). As schools of thought in the humanities go, the postpositivists are remarkably light on the in-group jargon. Someone emerging from the Institute does not, it seems, need a translator to be understood by the uninitated. Nor was there a dominant theme at the various panels I heard.

Rather, the distinctive quality of FMS discourse seems to derive from a certain very clear, but largely unstated, assumption: It can be useful for scholars concerned with issues particular to one group to listen to the research being done on problems pertaining to other groups.

That sounds pretty simple. But there is rather more behind it than the belief that we should all just try to get along. Diversity (of background, of experience, of disciplinary formation) is not something that exists alongside or in addition to whatever happens in the “real world.” It is an inescapable and enabling condition of life in a more or less democratic society. And anyone who wants it to become more democratic, rather than less, has an interest in learning to understand both its inequities and how other people are affected by them.

A case in point might be the findings discussed by Claude Steele, a professor of psychology at Stanford, in a panel on Friday. His paper reviewed some of the research on “identity contingencies,” meaning “things you have to deal with because of your social identity.” One such contingency is what he called “stereotype threat” — a situation in which an individual becomes aware of the risk that what you are doing will confirm some established negative quality associated with your group. And in keeping with the threat, there is a tendency to become vigilant and defensive.

Steele did not just have a string of concepts to put up on PowerPoint. He had research findings on how stereotype threat can affect education. The most striking involved results from a puzzle-solving test given to groups of white and black students. When the test was described as a game, the scores for the black students were excellent — conspicuously higher, in fact, than the scores of white students. But in experiments where the very same puzzle was described as an intelligence test, the results were reversed. The black kids scores dropped by about half, while the graph for their white peers spiked.

The only variable? How the puzzle was framed — with distracting thoughts about African-American performance on IQ tests creating “stereotype threat” in a way that game-playing did not.

Steele also cited an experiment in which white engineering students were given a mathematics test. Just beforehand, some groups were told that Asian students usually did really well on this particular test. Others were simply handed the test without comment. Students who heard about their Asian competitors tended to get much lower scores than the control group.

Extrapolate from the social psychologist’s experiments with the effect of a few innocent-sounding remarks — and imagine the cumulative effect of more overt forms of domination. The picture is one of a culture that is profoundly wasteful, even destructive, of the best abilities of many of its members.

“It’s not easy for minority folks to discuss these things,” Satya Mohanty told me on the final day of the colloquium. “But I don’t think we can afford to wait until it becomes comfortable to start thinking about them. Our future depends on it. By ‘our’ I mean everyone’s future. How we enrich and deepen our democratic society and institutions depends on the answers we come up with now.”

Earlier this year, Oxford University Press published a major new work on postpositivist theory, Visible Identities: Race, Gender, and the Self, by Linda Martin Alcoff, a professor of philosophy at Syracuse University. Several essays from the book are available at the author's Web site.



 

 


High Hopes Dashed for a Change in Policy of TAR Regarding Commentaries on Previously Published Research

In a recent merry-go-round of private correspondence with the current Senior Editor of TAR, Steve Kachelmeier, I erroneously concluded that TAR was relaxing its policy of discouraging commentaries focused on recent papers published in TAR, including commentaries that focus on having replicated the original studies.

I went so far on the AECM Listserv as to suggest that a researcher replicate a recent research study reported in TAR and then seek to have the replication results published in TAR in some form such as a commentary or abstract or as a full paper.

Steve Kachelmeier was deeply upset by my circulated idea and quickly responded with a clarification that amounts to flatly denying any change in policy. Steve sent the following clarification for me to distribute on the AECM Listserv and at my Website:


Low Hopes for Less Inbreeding in the Stable of TAR Referees

 

 

 

When browsing some of my 8,000+ comments on the AAA Commons, I ran across this old tidbit that relates to our more current AECM messaging on journal refereeing.

I even liked the "Dear Sir, Madame, or Other" beginning.

I assume that "Other" is for the benefit of Senator Boxer from California.

 

Letter From Frustrated Authors, by  R.L. Glass, Chronicle of Higher Education, May 21, 2009 ---
http://chronicle.com/forums/index.php?topic=60573.0
This heads up was sent to me by Ed Scribner at New Mexico State

Dear Sir, Madame, or Other:

Enclosed is our latest version of Ms. #1996-02-22-RRRRR, that is the re-re-re-revised revision of our paper. Choke on it. We have again rewritten the entire manuscript from start to finish. We even changed the g-d-running head! Hopefully, we have suffered enough now to satisfy even you and the bloodthirsty reviewers.

I shall skip the usual point-by-point description of every single change we made in response to the critiques. After all, it is fairly clear that your anonymous reviewers are less interested in the details of scientific procedure than in working out their personality problems and sexual frustrations by seeking some kind of demented glee in the sadistic and arbitrary exercise of tyrannical power over hapless authors like ourselves who happen to fall into their clutches. We do understand that, in view of the misanthropic psychopaths you have on your editorial board, you need to keep sending them papers, for if they were not reviewing manuscripts they would probably be out mugging little old ladies or clubbing baby seals to death. Still, from this batch of reviewers, C was clearly the most hostile, and we request that you not ask him to review this revision. Indeed, we have mailed letter bombs to four or five people we suspected of being reviewer C, so if you send the manuscript back to them, the review process could be unduly delayed.

Some of the reviewers’ comments we could not do anything about. For example, if (as C suggested) several of my recent ancestors were indeed drawn from other species, it is too late to change that. Other suggestions were implemented, however, and the paper has been improved and benefited. Plus, you suggested that we shorten the manuscript by five pages, and we were able to accomplish this very effectively by altering the margins and printing the paper in a different font with a smaller typeface. We agree with you that the paper is much better this way.

One perplexing problem was dealing with suggestions 13–28 by reviewer B. As you may recall (that is, if you even bother reading the reviews before sending your decision letter), that reviewer listed 16 works that he/she felt we should cite in this paper. These were on a variety of different topics, none of which had any relevance to our work that we could see. Indeed, one was an essay on the Spanish–American war from a high school literary magazine. The only common thread was that all 16 were by the same author, presumably someone whom reviewer B greatly admires and feels should be more widely cited. To handle this, we have modified the Introduction and added, after the review of the relevant literature, a subsection entitled “Review of Irrelevant Literature” that discusses these articles and also duly addresses some of the more asinine suggestions from other reviewers.

We hope you will be pleased with this revision and will finally recognize how urgently deserving of publication this work is. If not, then you are an unscrupulous, depraved monster with no shred of human decency. You ought to be in a cage. May whatever heritage you come from be the butt of the next round of ethnic jokes. If you do accept it, however, we wish to thank you for your patience and wisdom throughout this process, and to express our appreciation for your scholarly insights. To repay you, we would be happy to review some manuscripts for you; please send us the next manuscript that any of these reviewers submits to this journal.

Assuming you accept this paper, we would also like to add a footnote acknowledging your help with this manuscript and to point out that we liked the paper much better the way we originally submitted it, but you held the editorial shotgun to our heads and forced us to chop, reshuffle, hedge, expand, shorten, and in general convert a meaty paper into stir-fried vegetables. We could not – or would not – have done it without your input.

-- R.L. Glass
Computing Trends,
1416 Sare Road Bloomington, IN 47401 USA

E-mail address: rglass@acm.org

December 30, 2011 reply from Steve Kachelmeier

This letter perpetuates the sense that "reviewers" are malicious outsiders who stand in the way of good scholarship. It fails to recognize that reviewers are simply peers who have experience and expertise in the area of the submission. The Accounting Review asks about 600 such experts to review each year -- hardly a small set.

While I have seen plenty of bad reviews in my editorial experience, I also sense that it is human nature to impose a self-serving double standard about reviewing. Too many times when we receive a negative review, the author concludes that this is because the reviewer does not have the willingness or intelligence to appreciate good scholarship or even read the paper carefully. But when the same author is asked to evaluate a different manuscript and writes a negative review, it is because the manuscript is obviously flawed. Psychologists have long studied self-attributions, including the persistent sense that when one experiences a good thing, it is because one is good, and when one experiences a bad thing, it is because others are being malicious. My general sense is that manuscripts are not as good as we sense they are as authors and are not as bad as we sense they are as reviewers. I vented on these thoughts in a 2004 JATA Supplement commentary. It was good therapy for me at the time.

The reviewers are us.

Steve

December 31, 2011 reply from Bob Jensen

Hi Steve,

Thank you for that sobering reply.

I will repeat a tidbit that I posted some years back --- it might've been in reply to a message from you.
 

When I was a relatively young PhD and still full of myself, the Senior Editor, Charlie Griffin, of The Accounting Review sent me a rather large number of accountics science papers to referee (there weren't many accountics science referees available 1968-1970). I think it was at a 1970 AAA Annual Meeting that I inadvertently overheard Charlie tell somebody else that he was not sending any more TAR submissions to Bob Jensen because "Jensen rejects every submission." My point in telling you this is that having only one or two referees can really be unfair if the referees are still full of themselves.

Bob Jensen

 

December 31, 2011 reply from Jim Peters

The attribution bias to which Steve refers also creates an upward (I would say vicious) cycle for research standards. Here is how it works. When an author gets a negative review, because of the attribution problem, they also infer that the standards for publication have gone up (because they must have, since their work is solid). Then, when that same author is asked to review a paper, they tend to apply the new, higher standards that they misattributed to the recent review they received. A sort of "they did it to me, I am going to do it to them," but not vindictively, just in an effort to apply current standards. Of course, the author of the paper they are reviewing makes their own misattribution to higher standards and, when that author is asked to review a paper, the cycle repeats. The other psychological phenomenon at work here is lack of self-insight. Most humans have very poor self-insight as to why they do things. They make emotional decisions and then rationalize them. Thus, the reviewers involved are probably unaware of what they are doing, although a few may indeed be vindictive. The blind review process isn't very blind given that most papers are shopped at seminars and other outlets before they are submitted for publication, and there tend to be some self-serving patterns in citations. Thus, a certain level of vindictiveness is possible.

When I was a PhD student, I asked Harry Evans to define the attributes of a good paper in an effort to establish some form of objective standard I could shoot for. His response was similar to the old response about pornography: in essence, I know a good paper when I see it, but I cannot define the attributes of a good paper in advance. I may have missed something in my 20+ years, but I have never seen any effort to establish written, objective standards for the publishability of academic research. So, we all are still stuck with the cycle where authors try to infer what the standards are from reviews.

Jim

 

January 1, 2012 reply from Dan Stone

I've given lots of thought to why peer review, as it now exists in many disciplines (including accounting), so frequently fails to improve research and generates so extensive a waste of authorial resources. After almost thirty years of working within this system as an editor, author, and reviewer, I offer 10 reasons why peer review, as it is often constructed, frequently fails to improve manuscripts and often diminishes their contribution:

1. authors devote thousands of hours to thoroughly understanding an issue,

2. most reviewers devote a few hours to understanding the authors' manuscript,

3. most reviewers are asked to review outside of their primary areas of expertise. For example, today, I am reviewing a paper that integrates two areas of theory. I know one and not the other. Hence, reviewers, relative to authors, are almost universally ignorant with respect to the manuscript,

4. reviewers are anonymous, meaning unaccountable for their frequently idiotic, moronic comments. Editors generally know less about topical areas than do reviewers, hence idiotic reviewers comments are generally allowed to stand as fact and truth.

5. reviewers are rewarded for publishing (as AUTHORS) but receive only the most minimal of rewards for reviewing (sometimes an acknowledgement from the editor),

6. editors are too busy to review papers, hence they spend even fewer hours than authors on manuscripts,

7. most editors are deeply entrenched in the status quo; that is one reason they are selected to be editors. Hence, change to this deeply flawed system is glacial, if it happens at all,

8. reviewers are (often erroneously) told that they are experts by editors,

9. humans naturally overestimate their own competence, (called the overconfidence bias),

10. hence, reviewers generally overestimate their own knowledge of the manuscript.

The result is the wasteful system that is now in place at most (though certainly not all) journals. There are many easy suggestions for improving this deeply flawed system -- most importantly, to demand reviewer accountability. I've given citations earlier to this list of articles documenting the deeply flawed state of peer review and suggesting improvements. But see point #7.

In short, when I speak as a reviewer, where I am comparatively ignorant, my words are granted the status of absolute truth, but when I speak as an author, where I am comparatively knowledgeable, I must often listen to babbling fools whose words are granted the status of absolute truth.

That's a very bad system -- which could be easily reformed -- but for the entrenched interests of those who benefit from the status quo. (see the research cited in "The Social Construction of Research Advice: The American Accounting Association Plays Miss Lonelyhearts" for more about those entrenched interests).

Best,

Dan S.

 

January 1, 2012 reply from Jim Peters

Thanks, Dan, for such a nice summary. Personal anecdote: my respect for Dan went way up years ago when he was the editor and overrode my rejection of a paper. While I stand by my critique of the paper, Dan had the courtesy to make his case to me, and I respected his judgment. What constitutes "publishable" is highly subjective, and in some cases we need to lower the rigor bar a little to expose new approaches. As I recall, I did work with the author of the paper after Dan accepted it to help clean it up a bit.

Dan, you state that the fixes are relatively easy but don't provide details. In my little hyper-optimistic world, a fix would create an air of cooperation between editors, authors, and reviewers to work together to extract the best from research and expose it to the general public. This is about 180 degrees from what I perceive as the current gatekeeper emphasis on "what can I find to hang a rejection on?"

I saw a study years ago, the reference for which I would have a hell of a time finding again, that tracked the publications in major journals per PhD in different business disciplines over time. For all disciplines, the rate steadily fell over time, and accounting had by far the lowest rate. It would be simple math to calculate the number of articles published in top journals each year, which doesn't seem to increase, and the number of PhDs in accounting, which does. Simple math may indicate we have a problem of suppressing good work simply because of a lack of space.

Jim
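A back-of-the-envelope sketch of the articles-per-Ph.D. calculation suggested in the reply above. Every count below is a hypothetical placeholder, not an actual journal or AAA figure; the point is only that a flat numerator over a growing denominator produces a falling ratio.

# Hypothetical placeholder counts only -- not actual journal or AAA data.
articles_in_top_journals = {1990: 120, 2000: 125, 2010: 130}   # roughly flat
new_accounting_phds      = {1990: 150, 2000: 200, 2010: 260}   # growing

for year in sorted(articles_in_top_journals):
    ratio = articles_in_top_journals[year] / new_accounting_phds[year]
    print(f"{year}: {ratio:.2f} top-journal articles per new accounting Ph.D.")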

January 1, 2012 reply from Steve Kachelmeier

Dan has listed 10 reasons why peer review fails to improve manuscripts. To the contrary, in my experience, at least for those manuscripts that get published, I can honestly say that, on average, they are discernibly better after the review process than before. So, warts and all, I am not nearly as critical of the process in general as are some others. I will attempt to offer constructive, well-intended replies to each of Dan's 10 criticisms.

Dan's point 1.: Authors devote thousands of hours to thoroughly understanding an issue,

SK's counterpoint: I guess I don't understand why this observation is a reason why reviews fail to improve manuscripts. Is the implication that, because authors spend so much time understanding an issue, the author's work cannot possibly be improved by mere reviewers?

2. Most reviewers devote a few hours to understanding the authors' manuscript,

SK's counterpoint: This seems a corollary to the oft-heard "lazy reviewer" complaint. Let us concede that reviewers sometimes (or even often) do not spend as much time on a manuscript as we would like to see. Even if this is true, I would submit that the reviewer spends more time on the paper than does the typical reader, post publication. So if the reviewer "doesn't get it," chances are that the casual reader won't get it either.

3. Most reviewers are asked to review outside of their primary areas of expertise. For example, today, I am reviewing a paper that integrates two areas of theory. I know one and not the other. Hence, reviewers, relative to authors, are almost universally ignorant relative to the manuscript,

SK's counterpoint: As I see it, the editor's primary responsibility is to avoid this criticism. I can honestly say that we did our best at The Accounting Review during my editorship to choose qualified reviewers. It is easier said than done, but I employed a 20-hour RA (and my understanding is that Harry Evans does the same) simply to research submissions in a dispassionate manner and suggest names of well-qualified potential reviewers with no obvious axes to grind. In a literal sense, it is of course true that the author knows the most about the author's research. But that, to me, does not justify the assertion that "most reviewers are asked to review outside of their primary areas of expertise." That is, Dan's anecdote notwithstanding, I simply disagree with the assertion. Also, a somewhat inconvenient truth I have uncovered as editor is that too much reviewer expertise is not necessarily a good thing for the author. As in most things, moderation is the key.

4. reviewers are anonymous, meaning unaccountable for their frequently idiotic, moronic comments. Editors generally know less about topical areas than do reviewers, hence idiotic reviewers comments are generally allowed to stand as fact and truth.

SK's counterpoint: To say that reviewers are "idiotic" and "moronic" is to say that professors in general are idiotic and moronic. After all, who do you think does the reviews? To be sure, authors often perceive a reviewer's comments as "idiotic and moronic." Similarly, have you ever reviewed a manuscript that you perceived as "idiotic and moronic"? This is self-serving bias in self-attributions, plain and simple. As I've said before, my general sense is that the reviews we receive are not as bad as we think, and the manuscripts we submit are not as good as we think. As to the assertion that "editors generally know less about topical areas than do reviewers," of course that is true (in general), which is why we have a peer review system!

5. Reviewers are rewarded for publishing (as AUTHORS) but receive only the most minimal of rewards for reviewing (sometimes an acknowledgement from the editor),

SK's counterpoint: I'm reluctant to tag the word "counterpoint" on this one, because I agree that the reward system is somewhat warped when it comes to reviewing. Bad reviewers get off the hook (because editors wise-up and stop asking them), so they can then sometimes free-ride on the system. Conversely, good reviewers get rewarded with many more review requests, proving that no good deed goes unpunished. At least I tried to take baby steps to remedy this problem by publishing the names of the nearly 500 ad hoc reviewers TAR asks each year, and in addition, starting in November 2011, I started publishing an "honor roll" of our most prolific and timely reviewers.

6. Editors are too busy to review papers, hence they spend even fewer hours than authors on manuscripts,

SK's counterpoint: Why is this a criticism of the review process? It is precisely because editors have limited time that the editor delegates much of the evaluation process to experts in the area of the submission. Consider the alternatives. An alternative that is not on the table is for the editor to pour many hours/days/weeks into each submission, as there are only 24 hours in the day. So that leaves the alternative of a dictatorial editor who accepts whatever fits the editor's taste and rejects whatever is inconsistent with that taste, reviewers be damned. This is the "benevolent dictator" model to those who like the editor's tastes, but as I said in my November 2011 TAR editorial, the editorial dictator who is benevolent to some will surely be malevolent to others. Surely there is a critical role for editorial judgment, particularly when the reviewers are split, but a wholesale substitution of the editor's tastes in lieu of evaluations by experts would make things worse, in my opinion. More precisely, some would clearly be better off under such a system, but many others would be worse off.

7. Most editors are deeply entrenched in the status quo, that is one reason they are selected to be editors. Hence, change to this deeply flaws systems is glacial if at all

SK's counterpoint: Is the implication here that editors are more entrenched in the "status quo" than are professors in general? If that is true, then a peer review system that forces the editor's hand by holding the editor accountable to the peer reviewers would serve as a check and balance on the editor's "entrenchment," right? So I really don't see why this point is a criticism of the review process. If we dispensed with peer review and gave editors full power, then "entrenched" editors could perpetuate their entrenched tastes forever.

8. Reviewers are (often erroneously) told that they are experts by editors,

SK's counterpoint: Sometimes, as TAR editor, I really wished I could reveal reviewer names to a disgruntled author, if only to prove to the person that the two reviewers were chosen for their expertise and sympathy to both the topic and the method of the submission. But of course I could not do that. A system without reviewer anonymity could solve that problem, but would undoubtedly introduce deeper problems of strategic behavior and tit-for-tat rewards and retaliations. So reviews are anonymous, and authors can persist in their belief that the reviewer must be incompetent, because otherwise how could the reviewer possibly not like my submission? But let me back off here and add that many reviews are less constructive and less helpful than an editor would like to see. Point taken. That is why, in my opinion, a well-functioning peer review system must solicit two expert opinions. When the reviewers disagree, that is when the editor must step in and exercise reasoned judgment, often on the side of the more positive reviewer. Let's just say that if I had rejected every manuscript with split reviews over the past three years, TAR would have had some very thin issues.

9. Humans naturally overestimate their own competence, (called the overconfidence bias),

SK's counterpoint: Yes, and this is why we tend to be so impressed with our own research and so critical of review reports.

10 Hence, reviewers generally overestimate their own knowledge of the manuscript.

SK's counterpoint: Let's grant this one. But, if I may borrow from Winston Churchill, "Democracy is the worst form of government except for all those other forms that have been tried from time to time." Is a peer review system noisy? Absolutely! Are peer reviews always of high quality? No way! Are reviews sometimes petty and overly harsh? You bet! But is a peer review system better than other forms of journal governance, such as editorial dictatorship or a "power" system that lets the most powerful authors bully their way in? I think so. Editors have very important responsibilities to choose reviewers wisely and to make tough judgment calls at the margin, especially when two reviewers disagree. But dispensing with the system would only make things worse, in my opinion. I again return to the most fundamental truism of this process -- the reviewers are us. If you are asking that we dispense with these "idiotic, moronic" reports, then what you are really asking is that professors have less control over the process to which professors submit. Now that I'm back to being a regular professor again, I'm unwilling to cede that authority.

Just my two cents. Happy New Year to all,

Steve K.

 

January 1, 2012 reply from Bob Jensen

Hi Dan,

My biggest complaint with the refereeing process as we know it is that anonymous referees are not accountable for their decisions. I always find it odd that in modern times we deplore tenure blackballing, where senior faculty can vote secretly and anonymously to deny tenure to a candidate without having to justify their reasons. And yet when it comes to rejecting a candidate's attempt to publish, we willingly accept a blackball system in the refereeing process.

Granted, we hope that referees will communicate reasons for rejection, but there's no requirement to do so, and many of the reasons given are vague statements such as "this does not meet the quality standards of the journal."

More importantly, the referees are anonymous which allows them to be superficial or just plain wrong without having to be accountable.

On the other side of the coin I can see reasons for anonymity. Otherwise the best qualified reviewers may reject invitations to become referees because they don't want to be personally judged for doing the journal a favor by lending their expertise to the refereeing process. Referees should not be forced into endless debates about the research of somebody else.

I've long advocated a compromise. I think that referee reports should be anonymous. I also think referee reports along with author responses should be made available in electronic form in an effort to make the entire refereeing process more transparent (without necessarily naming the referees). For example, each published Accounting Review paper could be linked to the electronic file of referee, author, and editor comments leading up to the publication of the article.

Rejected manuscripts are more problematic. Authors should have discretion about publishing their working papers along with referee and editor communications. However, I think the practice of electronic publishing of rejected papers along with referee communications should become a more common practice. One of the benefits might be to make referees be more careful when reviewing manuscripts even if their rejection reports do not mention names of the referees.

The AAA Executive Committee is usually looking for things that can be done to improve scholarship and research among AAA members. One thing I propose is that the AAA leadership take on the task of how to improve the refereeing process of all refereed AAA journals. One of the objectives concerns ways of making the refereeing process more transparent.

Lastly, I think the AAA leadership should work toward encouraging commentaries on published working papers that indirectly allow scholars to question the judgments of the referees and authors. As it stands today, AAA publications are not challenged like they are in many journals of other scholarly disciplines ---
http://www.trinity.edu/rjensen/TheoryTAR.htm#TARversusAMR 

Respectfully,
Bob Jensen

Hi Dan, Jim, Steve, and others,

One added consideration in this "debate" about refereeing at top accountics science research journals is the inbreeding that has taken hold in a very large stable of referees that virtually excludes practitioners. Ostensibly this is because practitioners more often than not cannot read the requisite equations in submitted manuscripts. But I often suspect that this is also because of fear about the questions and objections that practitioner scholars might raise in the refereeing process.

Sets of accountics science referees are very inbred largely because editors do not invite practitioner "evaluators" into the gene pool. Think of how things might've been different if practitioner scholars had suggested more ideas to accountics science authors and, horrors, demanded that some submissions be more relevant to the professions.

Think of how Kaplan's criticism of accountics science research publications might've changed if accountics science referees were not so inbred, with faculty serving "as evaluators (referees) of, but not creators or originators of, business practice" (Pfeffer 2007, 1335).

"Accounting Scholarship that Advances Professional Knowledge and Practice," AAA Presidential Scholar Address by Robert S. Kaplan, The Accounting Review, March 2011, pp. 372-373 (emphasis added)

I am less pessimistic than Schön about whether rigorous research can inform professional practice (witness the important practical significance of the Ohlson accounting-based valuation model and the Black-Merton-Scholes options pricing model), but I concur with the general point that academic scholars spend too much time at the top of Roethlisberger’s knowledge tree and too little time performing systematic observation, description, and classification, which are at the foundation of knowledge creation. Henderson 1970, 67–68 echoes the benefits from a more balanced approach based on the experience of medical professionals:

both theory and practice are necessary conditions of understanding, and the method of Hippocrates is the only method that has ever succeeded widely and generally. The first element of that method is hard, persistent, intelligent, responsible, unremitting labor in the sick room, not in the library … The second element of that method is accurate observation of things and events, selection, guided by judgment born of familiarity and experience, of the salient and the recurrent phenomena, and their classification and methodical exploitation. The third element of that method is the judicious construction of a theory … and the use thereof … [T]he physician must have, first, intimate, habitual, intuitive familiarity with things, secondly, systematic knowledge of things, and thirdly an effective way of thinking about things.

 More recently, other observers of business school research have expressed concerns about the gap that has opened up in the past four decades between academic scholarship and professional practice.

Examples include:

Historical role of business schools and their faculty is as evaluators of, but not creators or originators of, business practice. (Pfeffer 2007, 1335)

Our journals are replete with an examination of issues that no manager would or should ever care about, while concerns that are important to practitioners are being ignored. (Miller et al. 2009, 273)

In summary, while much has been accomplished during the past four decades through the application of rigorous social science research methods to accounting issues, much has also been overlooked. As I will illustrate later in these remarks, we have missed big opportunities to both learn from innovative practice and to apply innovations from other disciplines to important accounting issues. By focusing on these opportunities, you will have the biggest potential for a highly successful and rewarding career.

Integrating Practice and Theory: The Experience of Other Professional Schools
Other professional schools, particularly medicine, do not disconnect scholarly activity from practice. Many scholars in medical and public health schools do perform large-scale statistical studies similar to those done by accounting scholars. They estimate reduced-form statistical models on cross-sectional and longitudinal data sets to discover correlations between behavior, nutrition, and health or sickness. Consider, for example, statistical research on the effects of smoking or obesity on health, and of the correlations between automobile accidents and drivers who have consumed significant quantities of alcoholic beverages. Such large-scale statistical studies are at the heart of the discipline of epidemiology.

Some scholars in public health schools also intervene in practice by conducting large-scale field experiments on real people in their natural habitats to assess the efficacy of new health and safety practices, such as the use of designated drivers to reduce alcohol-influenced accidents. Few academic accounting scholars, in contrast, conduct field experiments on real professionals working in their actual jobs (Hunton and Gold [2010] is an exception). The large-scale statistical studies and field experiments about health and sickness are invaluable, but, unlike in accounting scholarship, they represent only one component in the research repertoire of faculty employed in professional schools of medicine and health sciences.

Many faculty in medical schools (and also in schools of engineering and science) continually innovate. They develop new treatments, new surgeries, new drugs, new instruments, and new radiological procedures. Consider, for example, the angiogenesis innovation, now commercially represented by Genentech’s Avastin drug, done by Professor Judah Folkman at his laboratories in Boston Children’s Hospital (West et al. 2005). Consider also the dozens of commercial innovations and new companies that flowed from the laboratories of Robert Langer at MIT (Bowen et al. 2005) and George Whiteside at Harvard University (Bowen and Gino 2006). These academic scientists were intimately aware of gaps in practice that they could address and solve by applying contemporary engineering and science. They produced innovations that delivered better solutions in actual clinical practices. Beyond contributing through innovation, medical school faculty often become practice thought-leaders in their field of expertise. If you suffer from a serious, complex illness or injury, you will likely be referred to a physician with an appointment at a leading academic medical school. How often, other than for expert testimony, do leading accounting professors get asked for advice on difficult measurement and valuation issues arising in practice?

One study (Zucker and Darby 1996) found that life-science academics who partner with industry have higher academic productivity than scientists who work only in their laboratories in medical schools and universities. Those engaged in practice innovations work on more important problems and get more rapid feedback on where their ideas work or do not work.

These examples illustrate that some of the best academic faculty in schools of medicine, engineering, and science attempt to improve practice, enabling their professionals to be more effective and valuable to society.

Implications for Accounting Scholarship
To my letter writer, just embarking on a career as an academic accounting professor: I hope you can contribute by becoming the accounting equivalent of an innovative, world-class surgeon, inventor, and thought-leader, someone capable of advancing professional practice, not just evaluating it. I do not want you to become a “JAE,” Just Another Epidemiologist. My vision for the potential in your 40-year academic career at a professional school is that you develop the knowledge, skills, and capabilities to be at the leading edge of practice. You, as an academic, can be more innovative than a consultant or a skilled practitioner. Unlike them, you can draw upon fundamental advances in your own and related disciplines and can integrate theory and generalizable conceptual frameworks with skilled practice. You can become the accounting practice leader, the “go-to” person to whom others make referrals for answering a difficult accounting or measurement question arising in practice.

But enough preaching! My teaching is most effective when I illustrate ideas with actual cases, so let us explore several opportunities for academic scholarship that have the potential to make important and innovative contributions to professional practice.

Continued in article
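A concrete illustration of the “reduced-form statistical models” Kaplan refers to above: the following is a minimal sketch in Python, using simulated data and hypothetical variable names (smoker, bmi, age, sick) that are purely illustrative and not drawn from any study Kaplan cites. It estimates a logistic regression of a binary health outcome on behavioral covariates, the workhorse form of the large-scale epidemiological studies he describes.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
smoker = rng.integers(0, 2, n)        # 1 if the subject smokes
bmi = rng.normal(27, 4, n)            # body-mass index
age = rng.integers(25, 75, n)         # age in years

# Simulated outcome: illness probability rises with smoking, BMI, and age
logit = -6 + 1.1 * smoker + 0.08 * bmi + 0.04 * age
sick = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Reduced-form logistic regression of the outcome on the behavioral covariates
X = sm.add_constant(np.column_stack([smoker, bmi, age]))
result = sm.Logit(sick, X).fit(disp=False)
print(result.summary(xname=["const", "smoker", "bmi", "age"]))

Such a regression reports correlations (coefficient estimates and standard errors); by itself it does not establish the causal mechanisms that the field experiments Kaplan mentions are designed to probe.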

Added Jensen Comment
Of course I'm not the first one to suggest that accountics science referees are inbred. This has been the theme of other AAA presidential scholars (especially Anthony Hopwood), Paul Williams, Steve Zeff, Joni Young, and many, many others to whom accountics scientists have refused to listen over the past decades.

"The Absence of Dissent," by Joni J. Young, Accounting and the Public Interest 9 (1), 2009 --- Click Here

ABSTRACT:
The persistent malaise in accounting research continues to resist remedy. Hopwood (2007) argues that revitalizing academic accounting cannot be accomplished by simply working more diligently within current paradigms. Based on an analysis of articles published in Auditing: A Journal of Practice & Theory, I show that this paradigm block is not confined to financial accounting research but extends beyond the work appearing in the so-called premier U.S. journals. Based on this demonstration I argue that accounting academics must tolerate (and even encourage) dissent for accounting to enjoy a vital research academy. ©2009 American Accounting Association

We could try to revitalize accountics science by expanding the gene pool of its inbred referees.

 


The problem is that when the model created to represent reality takes on a life of its own, completely detached from the reality it is supposed to model, nonsense can easily ensue.

Was it Mark Twain who wrote, "The criterion of understanding is a simple explanation"?
As quoted by Martin Weiss in a comment to the article below.

But a lie gets halfway around the world while the truth is still tying its shoes
Mark Twain as quoted by PKB (in Mankato, MN) in a comment to the article below.

"US Net Investment Income," by Paul Krugman, The New York Times, December 31, 2011 ---
http://krugman.blogs.nytimes.com/2011/12/31/us-net-investment-income/
Especially note the cute picture.

December 31, 2011 Comment by Wendell Murray
http://krugman.blogs.nytimes.com/2011/12/31/i-like-math/#postComment

Mathematics, like word-oriented languages, uses symbols to represent concepts, so it is essentially the same as word-oriented languages that everyone is comfortable with.

Because mathematics is much more precise and in most ways much simpler than word-oriented languages, it is useful for modeling (abstracting from) the messiness of the real world.

The problem, as Prof. Krugman notes, is that when the model created to represent reality takes on a life of its own, completely detached from the reality it is supposed to model, nonsense can easily ensue.

This is what has happened in the absurd conclusions often reached by those who blindly believe in the infallibility of hypotheses such as rational expectations theory or, even worse, the completely peripheral concept of so-called Ricardian equivalence. These abstractions from reality have value only to the extent that they capture the key features of reality. Otherwise they are worse than useless.

I think some academics and/or knowledgeless distorters of academic theories in fact just like to use terms such as "Ricardian equivalence theorem" because that term, for example, sounds so esoteric whereas the theorem itself is not much of anything.

Ricardian Equivalence --- http://en.wikipedia.org/wiki/Ricardian_equivalence

Jensen Comment
One of the saddest flaws of accountics science archival studies is the repeated acceptance of CAPM mathematics, letting the CAPM "represent reality" with a life of its own when in fact the CAPM is a seriously flawed representation of investing reality ---
http://www.trinity.edu/rjensen/theory01.htm#AccentuateTheObvious
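To make concrete what "the CAPM mathematics" looks like in a typical archival study, here is a minimal sketch with simulated monthly returns for a hypothetical firm (the numbers are made up purely for illustration). The firm's excess return is regressed on the market's excess return, and the slope coefficient is taken as the CAPM beta; everything the single beta fails to capture is swept into the error term, which is exactly the kind of oversimplification criticized above.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
months = 60
mkt_excess = rng.normal(0.006, 0.045, months)    # market return minus the risk-free rate
true_beta = 1.2
firm_excess = true_beta * mkt_excess + rng.normal(0, 0.06, months)   # plus idiosyncratic noise

# Market-model regression: firm excess return = alpha + beta * market excess return + error
X = sm.add_constant(mkt_excess)
ols = sm.OLS(firm_excess, X).fit()
alpha_hat, beta_hat = ols.params
print("estimated alpha = %.4f, estimated beta = %.2f" % (alpha_hat, beta_hat))

The CAPM asserts that a stock's expected excess return is beta times the expected market excess return and nothing else. When archival studies treat that single-factor abstraction as if it were investing reality, the model has indeed taken on "a life of its own."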

At the same time, one of the things I dislike about the exceedingly left-wing biased, albeit brilliant, Paul Krugman is his playing down of trillion-dollar deficit spending and his flippant lack of concern about $80 trillion in unfunded entitlements. He just turns a blind eye toward the risk of Zimbabwe-like inflation. As noted below, he has a Nobel Prize in Economics but "doesn't command respect in the profession." Put another way, he's more of a liberal preacher than an economics teacher.

Paul Krugman --- http://en.wikipedia.org/wiki/Paul_Krugman

Economics and policy recommendations

Economist and former United States Secretary of the Treasury Larry Summers has stated Krugman has a tendency to favor more extreme policy recommendations because "it’s much more interesting than agreement when you’re involved in commenting on rather than making policy."

According to Harvard professor of economics Robert Barro, Krugman "has never done any work in Keynesian macroeconomics" and makes arguments that are politically convenient for him. Nobel laureate Edward Prescott has charged that Krugman "doesn't command respect in the profession", as "no respectable macroeconomist" believes that economic stimulus works, though the number of economists who support such stimulus is "probably a majority".

Bob Jensen's critique of analytical models in accountics science (Plato's Cave) can be found at
http://www.trinity.edu/rjensen/TheoryTAR.htm#Analytics

Bob Jensen's threads on higher education controversies are at
http://www.trinity.edu/rjensen/HigherEdControversies.htm

 


Clarification of Policy With Respect to Publishing in The Accounting Review (TAR)
by Steve Kachelmeier, Senior Editor, January 8, 2010

I have become aware of a recent post by Bob Jensen challenging readers to put me “to the test” to see if The Accounting Review really is open to publishing replications.  I would like to comment on my view (and experience) regarding replications, but first I cannot help but comment on the belief, implicit in statements such as Bob’s, that journals have policies controlled by “gatekeepers” regarding what we will or will not publish.

 

As I have tried to explain in many public forums over the past several months, journals -- and particularly association-based journals such as The Accounting Review -- are not controlled by editorial gatekeepers so much as they are controlled by scholarly communities.  If you want to know what a journal will publish, do not ask the editor or think that you are putting the editor “to the test.”  Rather, take your case to two experts known as “Reviewer A” and “Reviewer B.”  And just who are these reviewers?  For the first time, to my knowledge, The Accounting Review has published the names of all 574 people who kindly submitted one or more manuscript reviews to TAR during the journal’s fiscal year from June 1, 2008 to May 31, 2009.  These include 124 members of the Editorial Advisory and Review Board (named in the inside cover pages) plus an additional 450 experts who served as ad hoc reviewers and who are thanked by name in an appendix to the Annual Report and Editorial Commentary published in the November 2009 issue.  The reader who scans the many pages of names in this appendix will see individuals from a wide variety of topical and methodological interests and from a wide variety of backgrounds and affiliations.  The “gatekeepers” are us.

 

From the experience of reading several hundred reviews submitted by these experts, I can attest that the most common reason a reviewer recommends rejection is the perception that a submitted manuscript does not offer a sufficient incremental contribution to justify publication in The Accounting Review.  This observation has important implications for Professor Jensen’s passion about publishing replications.  Yes, we want to see integrity in research, but we also want to see interesting and meaningful incremental contributions.  The key to a successful replication, if the goal is a top-tier publication, is to do more than merely repeat another author’s work.  Rather, one must advance that work, extending the original insights to new settings if the replication corroborates the earlier findings, and investigating the reasons for any differences if the replication does not corroborate earlier findings.  The Accounting Review publishes replications of those varieties on a regular basis.

 

In an analogy I will borrow from an article written by Nobel Laureate Vernon Smith, if one wants to replicate my assertion that it is currently 11:03 a.m., it is best not to simply ask to see my watch to confirm that I read it correctly.  Rather, look at your own watch.  If we agree, we learn something about the generality and hence the validity of my assertion.  If we disagree, you can help us investigate why. 

 

Steven Kachelmeier

Senior Editor, The Accounting Review

 

Steve's 2010 Update on TAR ---