Aris Spanos

Granted, the naive frequentist and the traditional Bayesian perspectives on statistical modeling and inference do have most of the problems the guest mentions, and then some! I agree that these are well-known problems, and the difficulty in beginning a discussion in a forum like this is that one cannot use real data, graphs, technical arguments, etc. to substantiate one's views.

The whole point of the different perspective that I have developed since the mid-1980s, joining forces with Mayo around 2000 in what we now call Error Statistics, is exactly to address all these foundational and methodological problems (and a lot more besides) by recasting statistical model specification, Mis-Specification (M-S) testing, and respecification with a view to securing statistical adequacy. For example, statistical adequacy addresses the underdetermination problem at the statistical model level.

Unfortunately, a blog like this is not the proper forum to explain all the issues that a commentator might raise. Having engaged in discussions like these for the last 30 years, I know well that each statistician, econometrician, psychometrician, biometrician, etc. has his or her own perspective on every one of these issues, and arguing in a blog like this will usually deteriorate rapidly into name-calling. I have no intention of going down that road, but let me reiterate that the error-statistical perspective is not vulnerable to any of the problems raised by the guest. Over the years, my answer to referees of econometric journals who question the process of establishing statistical adequacy on grounds of data-dredging and overfitting, as users try model after model, has been to show that this is exactly what one is not doing in error statistics.
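As a purely illustrative sketch (hypothetical, in Python with numpy/scipy, and not code from any of the papers cited below), probing the probabilistic assumptions of a simple linear regression model — normality, homoskedasticity, and independence of the errors — via tests on the residuals might look like this:

```python
# Hypothetical illustration of mis-specification (M-S) testing: probe the
# probabilistic assumptions behind a fitted statistical model rather than
# its goodness of fit. Simulated data, deliberately heteroskedastic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200
x = np.linspace(0.0, 10.0, n)
# Error variance grows with x, violating the homoskedasticity assumption.
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5 + 0.3 * x, size=n)

# Fit the tentatively specified linear regression by least squares.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# [1] Normality of the errors: Jarque-Bera test on the residuals.
jb_stat, jb_p = stats.jarque_bera(resid)

# [2] Homoskedasticity: Breusch-Pagan LM test (regress squared residuals
# on the regressors; LM = n * R^2 is asymptotically chi-square(1) here).
u2 = resid ** 2
g, *_ = np.linalg.lstsq(X, u2, rcond=None)
r2 = 1.0 - np.sum((u2 - X @ g) ** 2) / np.sum((u2 - u2.mean()) ** 2)
bp_stat = n * r2
bp_p = stats.chi2.sf(bp_stat, df=1)

# [3] Independence: Durbin-Watson statistic (values near 2 suggest no
# first-order autocorrelation in the residuals).
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

print(f"Jarque-Bera p = {jb_p:.3f}, Breusch-Pagan p = {bp_p:.3f}, DW = {dw:.2f}")
```

A rejection on any of these tests is evidence that the assumed statistical model does not account for the chance regularities in the data, which is what motivates respecification rather than model tinkering.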
Goodness-of-fit/prediction has nothing to do with statistical adequacy, and respecification is not about trying one model after another and relying on luck; that would be the most inefficient route, since there is an infinite number of possible models. Statistical model specification is about partitioning the set of all possible models. Respecification is about repartitioning the same set, every time starting with a clean slate, but hopefully with better-educated guessing thanks to learning from the data. It usually takes 1-2 iterations to whittle things down and reach a statistically adequate model if one uses thorough M-S testing in conjunction with graphical techniques effectively.

To those who dismiss the value of a statistically adequate model because it was reached through a process in which one applies these techniques, my reply is that by the same token Kepler's empirical regularity concerning the motion of the planets was unwarranted [tell that to Newton], and could provide no basis for inference [it produces notably anticonservative statements], because he was playing around with the same data for six years before it dawned on him that elliptical motion accounts for the regularities in the data much better than circular motion!

On all the above claims I elaborate and illustrate, using several data sets including Kepler's, in several published papers. For those who are interested in more detailed answers, see some of my most recent papers:

Spanos, A. (2006), "Where Do Statistical Models Come From? Revisiting the Problem of Specification," pp. 98-119 in Optimality: The Second Erich L. Lehmann Symposium, edited by J. Rojo, Lecture Notes-Monograph Series, vol. 49, Institute of Mathematical Statistics.

Spanos, A. (2007), "Curve-Fitting, the Reliability of Inductive Inference and the Error-Statistical Approach," Philosophy of Science, 74: 1046-1066.

Spanos, A. (2010a), "Akaike-type Criteria and the Reliability of Inference: Model Selection vs. Statistical Model Specification," Journal of Econometrics, 158: 204-220.

Spanos, A. (2010b), "Statistical Adequacy and the Trustworthiness of Empirical Evidence: Statistical vs. Substantive Information," Economic Modelling, 27: 1436-1452.
