Falsifiability

Testability and Statistical Learning Theory

Daniel Steel, in Philosophy of Statistics, 2011

Testability

In The Logic of Scientific Discovery, Popper characterized a testable or falsifiable theory as one that is capable of being "refuted by experience" [1959, 18]. As is well known, Popper insisted that only falsifiable theories were genuinely scientific, but not because they were more likely to be true. To the contrary, Popper emphasized that the more falsifiable a theory is, the more it sticks its neck out, and hence the more improbable it is [1963, 217–220]. Popper took this as proof that science does not aim primarily for probable theories, but instead for highly informative ones. The reason for preferring falsifiable theories, according to Popper, is that by so doing we further scientific progress. This point comes out in The Logic of Scientific Discovery in connection with what Popper terms "conventionalist stratagems" [1959, 57–61]. Conventionalism treats scientific theories as true by definition, so that if some apparent conflict arises between the theory and observation, that conflict must be resolved by rejecting something other than the theory. Popper admitted that there is no logical contradiction to be found in conventionalism, but he argued that it was nevertheless highly problematic on methodological grounds. In particular, conventionalism would obstruct the advancement of scientific knowledge, and we should therefore firmly commit to rules of scientific method that disallow conventionalist stratagems [1959, 61–62]. This connection between falsifiability and scientific progress is easy to appreciate in light of some of Popper's favorite examples of theories that failed to abide by his strictures, for instance, Marxism and Freudian psychology. According to Popper, the Marxist and Freudian traditions were case studies of how treating scientific theories as unquestionable truths could lead researchers into a morass of ad hoc explanations that stultified the advancement of knowledge.

Popper set out his view of scientific progress in greater detail in Conjectures and Refutations [1963, 231–248]. The central theme of that proposal is quite simple. Scientific progress in Popper's sense occurs when a scientific theory is refuted and replaced with another that is closer to the truth. As we do not have direct access to the truth, progress would typically be judged by some more indirect means. For example, suppose that one theory T is refuted and replaced by another T∗ such that (1) T∗ passes all of the severe tests that T passed, (2) T∗ passes the tests that T failed, and (3) T∗ makes new predictions that turn out to be correct. If this happens, then Popper thought that we have good reason to say that T∗ is closer to the truth than T. For example, Popper thought that Einstein's General Theory of Relativity satisfied these conditions with respect to Newtonian Mechanics. It is obvious that falsifiable theories are a necessary ingredient in this picture of scientific progress. On Popper's account, the advancement of science is driven by refuting theories and replacing them with better theories that generate new discoveries. Therefore, unfalsifiable theories — or theories we decide to save at all costs by means of "conventionalist stratagems" — halt progress in its tracks. Moreover, Popper believed that this process of conjectures and refutations would, in the long run, lead scientists closer and closer to the truth, although we may never know at any given time how close we are (or aren't).

Popper claimed, then, that testable theories are necessary if science is to converge to the truth in the long run. It is easy to see an analogy between Popper's claim on this score and the result from statistical learning theory that finite VC dimension is a necessary condition for long-run convergence to the function that minimizes expected predictive error. A set of functions Φ would be unfalsifiable if, for any possible set of data, there is a function in Φ that can fit that data with zero error. Recall that the VC dimension h of a set of functions is the maximum number such that some set of h data points can be shattered by that set. An unfalsifiable set of functions, then, would have no such maximum and hence would have infinite VC dimension. Thus, a basic result of statistical learning theory coincides with Popper's intuition that falsifiability is a necessary ingredient for being assured of homing in on the truth in the long run. Vapnik concludes his discussion of the relationship between falsifiability and statistical learning theory by remarking "how amazing Popper's idea was" [2000, 55].
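To make the shattering criterion concrete, here is a minimal Python sketch (my illustration, not Steel's or Vapnik's) that brute-forces the check for two toy rule classes: one-sided threshold rules on the real line, which have a small finite VC dimension, and a "memorizer" class that can fit any labeling of any sample with zero error and so, in the sense just described, is unfalsifiable.

# Toy illustration only (not from the chapter); brute-force shattering check.
from itertools import product

def shatters(hypotheses, points):
    """True if the hypothesis class realizes every labeling of `points`."""
    labelings = {tuple(h(x) for x in points) for h in hypotheses}
    return len(labelings) == 2 ** len(points)

# Class 1: one-sided thresholds, "predict 1 iff x >= t".
# Candidate thresholds at and just above the sample points suffice for the check.
def threshold_rules(points):
    cuts = sorted(points) + [max(points) + 1.0]
    return [lambda x, t=t: int(x >= t) for t in cuts]

# Class 2: "memorizers" that can assign any label to any seen point -- a stand-in
# for an unfalsifiable class, since it fits every data set with zero error.
def memorizer_rules(points):
    rules = []
    for labels in product([0, 1], repeat=len(points)):
        table = dict(zip(points, labels))
        rules.append(lambda x, table=table: table.get(x, 0))
    return rules

one_pt, two_pts = [0.0], [0.0, 1.0]
print(shatters(threshold_rules(one_pt), one_pt))    # True: one point is shattered
print(shatters(threshold_rules(two_pts), two_pts))  # False; no pair of distinct points is, so VC dimension 1
print(shatters(memorizer_rules(two_pts), two_pts))  # True, and likewise for any sample size

The brute-force check is exponential in the sample size, which is fine for an illustration but not for realistic hypothesis classes.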

Popper also proposed that the falsifiability or testability of theories could come in degrees. Degrees of testability are clearly important for Popper's vision of scientific progress. For when one theory is refuted, there may be several possible replacements and Popper would presumably recommend that we choose the most testable of the viable alternatives. Moreover, Popper's reasoning naturally suggests that going with the most testable theory would accelerate scientific progress. After all, a barely testable theory might not halt progress altogether, but it could certainly slow it down. Indeed, the idea that degrees of testability are linked to the rate of scientific progress is hinted at in the epigraphs to Conjectures and Refutations.

Experience is the name everyone gives to their mistakes.

Oscar Wilde

Our whole problem is to make the mistakes as fast as possible…

John Archibald Wheeler

More testable theories rule out more possible observations and typically will be refuted faster than less testable ones. So, it is easy to guess Popper's meaning here: the more testable our theories, the faster our mistakes, and the more rapid the advancement of science.

In The Logic of Scientific Discovery, Popper suggested two grounds for comparing degrees of testability [1959, chapter 6]. The first was a subclass relation. For instance, the theory that planets move in circles around the sun is a subclass of the theory that they move in ellipses, and hence the former theory is more easily refuted (i.e. more testable) than the latter. However, it is the second of Popper's suggestions for how to compare degrees of testability that is most pertinent to our concerns here. Popper proposed that the dimension of a theory be understood in terms of the number of data points needed to refute it. More specifically, if d + 1 is the minimum number of data points needed to refute the theory t, then the Popper dimension of t is d [1959, 113–114]. The difference between Popper and VC dimension can be neatly stated in terms of shattering. Suppose we think of theories as sets of functions. If the Popper dimension of a theory of functions is d, then no set of only d data points can refute the theory and hence the theory shatters every group of d many data points. On the other hand, if the VC dimension of the theory is h, then that set shatters some but not necessarily all groups of h many data points. This difference is illustrated by the example of predicting gender by height and weight discussed above. In that example, the linear functions shatter every set of two data points, some but not all sets of three data points, and no sets of four data points. Consequently, the Popper dimension of the linear functions in this case is two, while the VC dimension is three. Further divergences between Popper and VC dimension occur in cases in which data points consist of more than two measurements. For example, suppose we wanted to predict diabetes on the basis of blood pressure, body mass index, and cholesterol level. In this case, the data points would be spread out in a three-dimensional space, and the linear functions would separate data points with flat planes. In this situation, the Popper dimension of the linear functions remains two (since a flat plane cannot separate three perfectly collinear points in a three-dimensional space) but the VC dimension of the linear functions in this case would be four. 3
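The some/every contrast can be checked mechanically. The sketch below is my own illustration (it assumes NumPy and SciPy are available and uses a small linear program to test strict linear separability): a pair of distinct points and a triple in general position are shattered by lines in the plane, while a collinear triple and the four-point XOR configuration are not, which is exactly the pattern behind Popper dimension two versus VC dimension three.

# Toy illustration only; assumes numpy and scipy are installed.
from itertools import product
import numpy as np
from scipy.optimize import linprog

def linearly_separable(points, labels):
    # Feasibility LP for (w1, w2, b): label_i * (w . x_i + b) >= 1 for all i.
    s = np.array(labels, float)
    X = np.hstack([np.array(points, float), np.ones((len(points), 1))])
    res = linprog(c=np.zeros(3), A_ub=-s[:, None] * X, b_ub=-np.ones(len(points)),
                  bounds=[(None, None)] * 3, method="highs")
    return res.success

def shattered(points):
    # A configuration is shattered if every +1/-1 labeling is linearly separable.
    return all(linearly_separable(points, labs)
               for labs in product([1.0, -1.0], repeat=len(points)))

print(shattered([(0, 0), (1, 1)]))                  # True: any pair of distinct points
print(shattered([(0, 0), (1, 0.5), (0.4, 1)]))      # True: a triple in general position
print(shattered([(0, 0), (1, 0), (2, 0)]))          # False: a collinear triple
print(shattered([(0, 0), (1, 1), (0, 1), (1, 0)]))  # False: the XOR configuration
# Some 3-point configuration is shattered (VC dimension 3), but not every one is,
# so the Popper dimension, which requires *every* configuration, stays at 2.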

However, there are some cases in which Popper and VC dimension coincide. For instance, consider a very simple example in which one wishes to predict the colors of balls drawn from an urn, which may be either blue or red. 4 In this example, the x_i's are balls drawn from the urn and the y_i's indicate the color (blue or red) of each ball. Suppose that 99 balls have been drawn so far, and all are red. The functions, then, tell us what to predict about the colors of the future balls given this data. What we might call the inductive function directs us to predict that all future balls will be red. Another set of functions directs us to predict that the balls will switch at some future time from red to blue and stay blue from then on. We can call these the anti-inductive functions. Notice that in this example a fixed number of data points cannot be arranged into distinct configurations as in figures 2 and 3, and hence there is no difference between shattering some and shattering all configurations of n many data points. As a result, Popper and VC dimension are equivalent in this case. For example, the inductive function does not shatter the next data point, since it would be refuted if the next ball is blue. Hence, its Popper and VC dimension is zero. In contrast, the anti-inductive functions can shatter the next data point, since the switch from red to blue might begin with the next ball or it might begin later. However, the anti-inductive functions do not shatter the next two data points, since none of them can accommodate a switch to blue followed by an immediate switch back to red. Thus, the Popper and VC dimension of the anti-inductive functions is one.
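A toy rendering of the urn example (mine, not the chapter's) makes the counting explicit: the rules are enumerated, the color sequences they can predict for the next one or two draws are collected, and the class shatters a horizon only if every sequence is achievable.

# Toy illustration only (not from the chapter).
from itertools import product

RED, BLUE = "red", "blue"

def inductive_rule():
    return lambda t: RED                                  # always predict red

def anti_inductive_rule(switch_at):
    return lambda t: BLUE if t >= switch_at else RED      # red until the switch, blue thereafter

def achievable(rules, horizon):
    """Color sequences for the next `horizon` draws that some rule in the class predicts."""
    return {tuple(r(t) for t in range(horizon)) for r in rules}

def shatters_next(rules, horizon):
    return len(achievable(rules, horizon)) == 2 ** horizon

inductive = [inductive_rule()]
# The switch may begin with the very next ball (t = 0) or at any later time.
anti_inductive = [anti_inductive_rule(k) for k in range(0, 10)]

print(shatters_next(inductive, 1))       # False: a blue ball refutes it -> dimension 0
print(shatters_next(anti_inductive, 1))  # True: the next ball may come out red or blue
print(shatters_next(anti_inductive, 2))  # False: blue then red is never predicted -> dimension 1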

Let us sum up the similarities and differences between VC dimension and Popper's notion of degrees of testability, beginning with the similarities. The two concepts are similar in spirit, coincide in some simple examples, and track one another in some other examples. In addition, similar claims are made on behalf of both: testability and finite VC dimension are claimed to be necessary for convergence in the long run, and a preference for lower Popper dimension (i.e. greater testability) and lower VC dimension are both said to promote faster convergence. Moreover, there is a further similarity between Popper and VC dimension that is rarely remarked upon: both of these concepts presuppose without explanation some natural or preferred way of dividing data up into points or units. Since different modes of expression might result in different ways of carving up data into units, this means that neither concept is language invariant. 5 Now let us turn to the differences between Popper and VC dimension. First, there are differences in technical details of the two concepts, as explained above. Furthermore, Popper never provided any precise articulation or proof of his claims about the link between testability and convergence, while Vapnik and others have done this for VC dimension. Thus, statistical learning theory represents a very significant step forward from Popper's work. Finally, there are also some important differences in philosophical motivation that I will discuss in the next section.

URL: https://www.sciencedirect.com/science/article/pii/B9780444518620500289

Statistical Learning Theory as a Framework for the Philosophy of Induction

Gilbert Harman, Sanjeev Kulkarni, in Philosophy of Statistics, 2011

VC Dimension and Popperian Falsifiability

There is an interesting relation between the role of VC dimension in the PAC result and the emphasis on falsifiability in Karl Popper's writings in the philosophy of science. Popper [1934] famously argues that the difference between scientific hypotheses and metaphysical hypotheses is that scientific hypotheses are "falsifiable" in a way that metaphysical hypotheses are not. To say that a certain hypothesis is falsifiable is to say that there is possible evidence that would not count as consistent with the hypothesis.

According to Popper, evidence cannot establish a scientific hypothesis; it can only "falsify" it. A scientific hypothesis is therefore a falsifiable conjecture. A useful scientific hypothesis is a falsifiable hypothesis that has withstood empirical testing.

Recall that enumerative induction requires a choice of a set of rules C. That choice involves a "conjecture" that the relevant rules are the rules in C. If this conjecture is to count as scientific rather than metaphysical, according to Popper, the class of rules C must be appropriately "falsifiable."

Many discussions of Popper treat his notion of falsifiability as an all-or-nothing matter, not a matter of degree. But in fact Popper does allow for degrees of difficulty of falsifiability [2002, sections 31–40]. For example, he asserts that a linear hypothesis is more falsifiable — easier to falsify — than a quadratic hypothesis. This fits with VC theory, because the collection of linear classification rules has a lower VC dimension than the collection of quadratic classification rules.

However, Popper's measure of degree of difficulty of falsifiability of a class of hypotheses does not quite correspond to VC-dimension (Corfield et al., 2005). Where the VC-dimension of a class C of hypotheses is the largest number N such that some set of N points is shattered by rules in C, what we might call the "Popper dimension" of the difficulty of falsifiability of a class is the largest number N such that every set of N points is shattered by rules in C. This difference between some and every is important, and VC-dimension turns out to be the key notion rather than Popper-dimension.
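Stated side by side, with C the class of classification rules (my paraphrase of the passage, not the authors' notation):

VC-dim(C) = max{ N : some set of N points is shattered by rules in C }

Popper-dim(C) = max{ N : every set of N points is shattered by rules in C }

It follows immediately that the Popper dimension never exceeds the VC dimension.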

Popper also assumes that the falsifiability of a class of hypotheses is a function of the number of parameters used to pick out instances of the class. This turns out not to be correct either for Popper dimension or VC dimension, as discussed below.
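A standard illustration of this point from statistical learning theory (and not necessarily the example the authors discuss below) is the one-parameter family of classifiers x ↦ sign(sin(wx)): a single parameter, yet it shatters arbitrarily large point sets, so parameter counting does not bound VC dimension. The sketch below is a hedged numerical check of the classical construction, using the points x_i = 10^(−i).

# Toy numerical check of a textbook construction; my illustration.
import math
from itertools import product

def shatters_with_sine(m):
    """Check that sign(sin(w * x)) realizes every labeling of x_i = 10**-i, i = 1..m."""
    xs = [10.0 ** -i for i in range(1, m + 1)]
    for labels in product([0, 1], repeat=m):             # 1 = positive, 0 = negative
        # classical choice of the single parameter w for this labeling
        w = math.pi * (1 + sum((1 - y) * 10 ** i for i, y in zip(range(1, m + 1), labels)))
        predicted = [1 if math.sin(w * x) > 0 else 0 for x in xs]
        if list(labels) != predicted:
            return False
    return True

print(shatters_with_sine(5))   # True: all 2**5 labelings of 5 points, with one parameter
print(shatters_with_sine(6))   # True as well; the construction works for every m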

This suggests that Popper's appeal to degree of falsifiability would be improved by adopting VC-dimension as the relevant measure in place of his own measure.

URL: https://www.sciencedirect.com/science/article/pii/B9780444518620500277

Computational Approaches to Model Evaluation

I.J. Myung, in International Encyclopedia of the Social & Behavioral Sciences, 2001

5 Conclusion

The question of how one should decide among competing explanations (i.e., models) of data is at the core of the scientific endeavor. A number of criteria have been proposed for model evaluation (Jacobs and Grainger 1994). They include (a) falsifiability: are there potential outcomes inconsistent with the model? (b) explanatory adequacy: is the explanation compatible with established findings? (c) interpretability: does the model make sense? Is it understandable? (d) descriptive adequacy: does the model provide a good description of the observed data? (e) simplicity: does the model describe the phenomenon in the simplest possible manner? and (f) generalizability: does the model predict well the characteristics of new, as yet unseen, data?

Among these criteria, the computational modeling approaches described in this article, as well as statistical approaches, consider the last three, as they are easier to quantify than the other three. In particular, generalizability has been emphasized as the principal criterion by which the effectiveness of a model should be judged. Obviously, this 'bias' toward the predictive inference aspect of modeling reflects the dominant thinking of the current scientific community. Clearly one could develop an alternative approach that stresses other criteria, and even incorporates some of the hard-to-quantify criteria. There are plenty of opportunities in this area for the emerging field of model evaluation to grow and mature in the decades to come.

URL: https://www.sciencedirect.com/science/article/pii/B0080430767005891

Hypothetico-deductive Research

Thomas W. Edgar, David O. Manz, in Research Methods for Cyber Security, 2017

Specify What You Think is Involved; Challenge Assumptions

A critical step in setting up an experiment is to clearly state all of your assumptions and what possible variables could have an influence on the dependent variables. As such, experimentation is a good way to define what our current understanding of a situation is. This forces you to methodically think through what may be happening and how everything could be involved. As a response to the philosophy of falsifiability, the Quine–Duhem thesis 1 documented that it is, in effect, impossible to truly test a hypothesis in isolation, as hypotheses are formulated within the context of a larger set of assumptions we get from our other currently accepted theories. By rigorously defining your assumptions in preparation for an experiment, you are laying bare your thought process and the context under which your experiment was designed and your hypothesis was derived. This provides readers and future researchers the ability to understand and challenge your assumptions, or to replicate experiments in the future with new presiding assumptions. Or, if you are replicating previous research, you can highlight what assumptions you are challenging.

URL: https://www.sciencedirect.com/science/article/pii/B9780128053492000091

Learning Theory

Igor Kononenko, Matjaž Kukar, in Machine Learning and Data Mining, 2007

Popperian functions

Nontriviality can be reformulated by limiting the hypotheses to single-valued total languages RE_svt (which are also infinite). Such hypotheses can be easily verified with respect to the learning set of examples because languages from RE_svt are decidable – for each word we are able to verify whether it belongs to the target language after a finite reading of the input sequence. We name such functions Popperian after Karl Popper (1902–1994), who insisted on the falsifiability of scientific practice.

Definition 13.13

We say that the learning function φ ∈ F_total is Popperian if for each σ ∈ SEQ it holds that if φ(σ) ↓, then W_φ(σ) ∈ RE_svt. The class of Popperian functions is denoted by F_Popperian. ⋄

Function h in the proof of Theorem 13.3 is Popperian.

It holds that each Popperian function is also nontrivial and accountable: F_Popperian ⊆ F_nontrivial ∩ F_accountable. Of course, like nontrivial and accountable functions, the Popperian functions also cannot identify finite languages. As with nontriviality and accountability, Popperian functions restrict recursive functions when learning single-valued total languages. However, they do not restrict F when learning single-valued total languages.
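As a loose, informal illustration of the definition (far from the recursion-theoretic setting, and not from the book), the sketch below conjectures hypotheses only from a decidable, single-valued total family, so every conjecture can be checked, and in principle falsified, against the finite learning sequence.

# Toy illustration only: each hypothesis c encodes the single-valued total
# "language" { (x, c * x) : x in N }, i.e., the graph of f_c(x) = c * x.
def in_language(c, pair):
    x, y = pair
    return y == c * x                       # membership is decidable: one arithmetic check

def consistent(c, sample):
    # Because membership is decidable, the conjecture can be checked (and so
    # falsified) against any finite learning sequence.
    return all(in_language(c, p) for p in sample)

def popperian_learner(sample):
    """Conjecture the least c whose language contains every observed pair (None = abstain)."""
    for c in range(0, 100):                 # bounded search keeps the sketch total
        if consistent(c, sample):
            return c
    return None

sigma = [(1, 3), (2, 6), (4, 12)]           # a finite initial segment of the target language
print(popperian_learner(sigma))             # 3
print(consistent(5, sigma))                 # False: the rival conjecture is falsified by sigma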

Theorem 13.19

i.

[F_Popperian]_svt = [F]_svt.

ii.

[F_rec ∩ F_Popperian]_svt ⊂ [F_rec]_svt. ⋄

URL: https://www.sciencedirect.com/science/article/pii/B9781904275213500137

The People Challenge: Building Data Literacy

Laura Sebastian-Coleman, in Meeting the Challenges of Data Quality Management, 2022

Skills

In The Data Loom: Weaving Understanding by Thinking Critically and Scientifically with Data, Stephen Few describes some of the core skills associated with what he calls data sensemaking (Few, 2015a). Data sensemaking starts with domain knowledge. For most of us, this means understanding the data you are working with in the context of your organization. If you work in health care, you must understand the health care system (how patients, providers, and insurers interact). Depending on what you do in health care, you may need highly specialized domain knowledge. Medical coders develop deep knowledge of the structure of diagnostic and procedure code sets, the criteria for coding accurately, and the implications of different choices in coding. Not all individuals in health care have this level of specialization, but anyone who works in health care should know the importance of these codes to the system as a whole.

Few does not use the term data literacy, but his discussion, focused on the set of thinking skills that are at the heart of understanding and using data, provides one of the best descriptions of core data literacy skills that I have seen. Using Few's list as a starting point and adding a few pieces from other writers, these skills include the following:

Critical thinking: The ability to clarify ideas, to recognize how you think (metacognition), to avoid logical fallacies and other cognitive traps, to be open to the possibility that you may be wrong in your assumptions or conclusions, and to be willing to adopt new ideas and perspectives for the purposes of increasing your understanding of a subject.

Scientific thinking: Knowledge of the scientific method and scientific principles (such as falsifiability), the ability to apply a scientific approach to ask better questions, formulate and test hypotheses about possible causes and effects, and maintain perspective on your own conclusions.

Statistical thinking/quantitative reasoning: Knowledge of how statistics work (including some of the pitfalls and misconceptions about statistics), an understanding of numbers, and the ability to apply quantitative reasoning to problems (including making good decisions about what and how to measure and avoiding making poor decisions about the same) (see Paulos, 2001).

Systems thinking: The ability to comprehend the organization as a system of interconnected elements organized to meet goals, to see the relationship among the parts and the whole, and to recognize how interactions among parts influence each other (see also Meadows, 2008).

Visual thinking: The ability to understand and interpret information conveyed visually through graphs, charts, and other means; the ability to determine the most effective ways to visualize different kinds of information; and the ability to recognize questionable features of such representations (see Cairo, 2016; Knaflic, 2015).

Curiosity: A level of engagement with data, a desire to understand it and learn from it, and the ability to ask meaningful questions about how it works and what one may learn from it. It is important to recognize that critical thinking, scientific thinking, skepticism, and ethical thinking all involve a degree of curiosity and a willingness to ask questions.

Skepticism: Willingness to question the data, to go beneath the surface, and to understand the context in which data is created (data sources) as well as the standards for relevance, accuracy, representativeness, and completeness used to create it. The ability to know the limitations of the data and use it appropriately, to recognize when and in what ways data may be biased, and to account for potential limitations of data when interpreting it (see O'Neil, 2016; Schryvers, 2020).

Ethical thinking: Understanding the potential for good or harm of any actions or conclusions from data and recognizing the need to actively prevent harm (see O'Keefe & O Brien, 2018).

Communications skills: The ability to share insights with others and to help them see what you are able to see in the data.

Few also provides simple, practical advice about how to develop these thinking habits, not only as an individual (prevent distraction, take notes, give yourself time to think), but also as an organization (teach each other, raise questions, encourage feedback, allow people to admit their mistakes, give people time and space to think, help them cultivate their thinking skills).

Ultimately, these skills support a person's ability to use data because they contribute to a person's ability to interpret data: to understand its meaning and be able to explain that meaning to other people. Although these skills are called out separately and you can focus your study to develop them separately, they work together. Think of it this way: when you work out at the gym, your routine may focus on your core, your upper body, or your lower body, depending on your specific goals. But your overall goal and the overall result of working out is to be more fit. And although you may focus on one thing at a time, your body as a whole benefits from the exercise. The same goes for intellectual fitness.

URL: https://www.sciencedirect.com/science/article/pii/B9780128217375000079

Laws in Chemistry

Rom Harré, in Philosophy of Chemistry, 2012

8 Falsification Protection

Despite the way that O propositions of the 'Some A are not B' form falsify A (universal affirmative) propositions in Aristotelian logic and its updated versions, contrary evidence is sufficient to lead to the abandonment of a law-like proposition that is deeply embedded in a scientific context only in very special circumstances. There are so many ceteris paribus conditions attached to chemical equations, and so much opportunity to attach more, that once such a proposition becomes an established part of chemistry it is scarcely ever abandoned. Most equations describe idealised forms of reactions in which the transformation of substances goes to completion (a 100% yield). In inorganic chemistry this is a fair approximation, but in organic chemistry yields of 60% are often regarded as satisfactory, without the disparity between the ideal and the actual reaction casting doubt on the original equation. Furthermore, any one chemical equation highlights just one among the myriad reactions into which the elements and compounds involved enter. Coherence is a very strong conservative force in chemistry.

However, there is a more fundamental reason for the resistance to falsifiability of chemical equations. Over the last hundred and fifty years the homogeneous regresses of chemistry have enlarged greatly. However, after Cannizzaro's memoir of 1859 that sorted out the relation between equivalent and atomic weights, the formulae expressed in chemical equations look pretty much the same. Huge changes have taken place in the heterogeneous regress of currently totalised chemistry. Granted the Berzelian insight that Coulomb forces and electrostatic attractions should be fundamental explainers in the opening levels of the heterogeneous regress, the advent of quantum mechanics has changed all that. But what it cannot change are the chemical equations.

Chemistry as an ordered body of homogeneous regresses involves concepts like 'mixture', 'compound', 'element', 'elective affinity' [valency], 'acid', 'base', 'metal', 'colloidal state' and so on. It seems to me that none of these concepts is used in an agentive way. In 'kitchen' chemistry we think of acids as the active agent of corrosion. However, we are equally inclined to think of alkalis as active agents, when, for instance, we are cleaning the drains. This has nothing whatever to do with chemistry as a science. In the well-known formula 'acid plus base equals salt plus water', recited by generations of young scholars, there is a time line but no agentive concepts at all.

In the Berzelian account of chemical processes, within the shadow of which contemporary chemistry still lies, there are causal agents galore. However, they come into play only when the homogeneous regress of chemical concepts is underpinned by the heterogeneity of concepts needed to portray the behaviour of charged ions, electrons and protons in exchanges, and everything that prepared the way for the rewriting of the physical processes that we now believe are germane to chemical processes in terms of wave functions. Ions are the first level of powerful particulars, but they have their (Coulomb) causal powers only by virtue of the second level of powerful particulars, electrons and protons, the causal powers of which are basic natural endowments.

URL: https://www.sciencedirect.com/science/article/pii/B9780444516756500256

Psychoanalysis: Overview

S. Gardner, in International Encyclopedia of the Social & Behavioral Sciences, 2001

2.2 Epistemological and Methodological Issues

Because of the physicalist–reductionist strain in psychoanalysis, and the positivistic character of the conception of knowledge appealed to by Freud in his claims for the scientificity of psychoanalysis, there has been a natural (and, in the English-speaking world, dominant) tendency to consider the question of the empirical warrant for psychoanalytic theory in the context of scientific methodology. This mode of approach to psychoanalysis is associated with a negative estimate of the objectivity and cognitive value of psychoanalysis.

Two major writings in this sphere are Karl Popper (1963, Chap. 1) and Adolf Grünbaum (1984). Popper conducted an enormously influential attack on psychoanalysis in the course of defending falsifiability as the key to scientific method. According to Popper, psychoanalysis does not open itself to refutation, and so fails to satisfy the condition of falsifiability which, in his account, provides the necessary and sufficient condition for determining the scientific standing and candidacy for rational acceptability of a theory. In view of its immunity to counterevidence, Popper holds, psychoanalysis must be classified (alongside Marxism) as a 'pseudo-science.' Grünbaum, operating with a different conception of scientific method, maintains in opposition to Popper that psychoanalysis does meet the conditions for evaluation as a scientific theory, but proceeds to offer a detailed critique of what he takes to be Freud's inductive reasoning. Freudian theory reposes, Grünbaum argues, on the claims that only psychoanalysis can give correct insight into the cause of neurosis, and that such insight is causally necessary for a durable cure. Grünbaum then emphasizes the empirical weakness of psychoanalysis' claim to causal efficacy, and presses the frequently voiced objection that the therapeutic effects of psychoanalysis may be due—for all that Freud is able to demonstrate to the contrary—to suggestion. Grünbaum argues further that, even if the clinical data could be taken at face value, the inferences that Freud draws are unwarranted.

Different conceptions of science carry different implications for the epistemological status of psychoanalysis, and a sufficiently anti-rationalistic conception of science, such as Paul Feyerabend's, may succeed in eliminating the epistemological discrepancy between psychoanalysis and the natural sciences; but this strategy carries obvious costs. It is generally agreed that, without repudiating altogether the concept of a distinctive scientific method in the manner of Feyerabend, the prospect of defending the epistemological credentials of psychoanalysis in the terms supplied by the philosophy of natural science is poor. Attempts to test psychoanalytic hypotheses experimentally in controlled, extraclinical contexts have been inconclusive (see Eysenck and Wilson 1973). The comparison with cognitive science, which has successfully claimed for itself empirical support, may appear to aggravate the epistemological situation of psychoanalysis. (For discussion of psychoanalysis' scientificity, see Hook 1964 and Edelson 1984.)

An alternative approach to the epistemological issue is to abandon the assumption that psychoanalysis must be shown to conform to the model of explanation of the natural sciences in order for it to be held to have cognitive value. This involves breaking with Freud's own understanding of psychoanalysis, but it may draw support from the fact that Freud also on numerous occasions emphasizes the alignment of psychoanalysis with, and its dependence on, commonsense principles of reasoning: the everyday assumptions about the mind that are embedded in commonsense ('folk') psychology.

A diversity of approaches falls under this heading, but they share agreement that the right way to understand psychoanalysis is to regard it as reworking the routes of psychological understanding laid down in our ordinary, everyday grasp of ourselves, and validated independently from scientific methodology.

Some philosophers, encouraged by Ludwig Wittgenstein's writings on the philosophy of psychology and remarks on Freud (Wittgenstein 1978), explored the notion that psychoanalytic explanation trades on the concept of a reason rather than that of a cause, and tried to suggest that psychoanalytic interpretation is directly continuous with the practice of explaining agents' actions by citing their reasons. Though psychoanalysis is thereby freed from the burden of having to appeal to strict causal laws, the epistemological grounds of which prove so problematic, an evident difficulty faces this approach, arising from the tension between the assumption of the agent's rationality which is presupposed by the very application of the concept of a reason, and the nonrational or irrational character of the mental connections postulated in psychoanalytic explanation. The resulting concept of 'neurotic rationality' employed by some philosophers reflects this discomfort.

However, more satisfactory solutions along these lines can be developed by making the relation of psychoanalysis to commonsense psychology less direct. One such approach regards psychoanalysis as a theoretical extension of commonsense psychology. It is argued that the familiar schema of practical reason explanation, formalized in the practical syllogism, is in psychoanalytic explanation fundamentally modified by the substitution of the concepts of wish and fantasy for those of belief and desire, and by the substitution of a direct, intrapsychic relation between desire and mental representation for the syllogistic relation between practical reasoning and action. The basis for assigning content and explanatory role to wishes and fantasies—thematic linkages, relations of meaning—remains, however, the same. Melanie Klein's development of Freud's theories, which attributes enormous importance to the role of fantasy in mental life, is standardly appealed to by proponents of this approach. In this view, the overarching epistemological ground for psychoanalysis lies in its capacity to offer a unified explanation for phenomena—dream, psychopathology, mental conflict, sexuality, and so on—that commonsense psychology is unable or poorly equipped to explain (see Irrationality: Philosophical Aspects), while exploiting the same interpretative methodology as commonsense psychology and complementing commonsense psychological explanations. (See Wollheim 1974, 1991, Wollheim and Hopkins 1982, Hopkins 1992, Neu 1991, Cavell 1993, Levine 1999.)

An alternative epistemological route from commonsense psychology to psychoanalysis is to proceed by way of philosophical theory: given an independently elaborated philosophical theory of psychoanalytically relevant aspects of human subjectivity, intersubjectivity, and representation, it becomes possible to interpret psychoanalysis in the terms provided by this theory and thereby extend to psychoanalysis whatever cognitive authority the theory possesses. Proponents of this approach typically draw on different philosophical traditions from the commonsense-extension theorists, whose philosophical allegiances characteristically reflect a mixture of empiricism and Wittgenstein. (The two approaches are, however, not necessarily incompatible.)

Jürgen Habermas and Jacques Lacan provide two clear examples of this mode of approach. Habermas' (1968, Chaps. 10–11) hermeneutic account asserts a complete separation of psychoanalysis from the natural sciences, this association being attributed to a naturalistic and scientistic misconception of psychoanalysis on Freud's part, and seeks instead to integrate psychoanalysis with communication theory and to set it in the context of a nontraditional conception of rationality and cognitive interest. The unconscious and its symbolism are understood by Habermas in terms of specific forms of distortion of communication and interruptions to self-communication; psychoanalytic interpretation and treatment are understood correspondingly as directed to retrieving elements that have suffered exclusion from the field of public communication, and to engendering a form of self-reflection that will undo self-alienation and thereby emancipate. (Ricœur 1965 offers an alternative hermeneutic approach.)

Lacan's reading of Freud (see Evans 1996) evinces a similar methodological structure: a general theory of subjectivity and representation, in Lacan's case deriving from multiple sources including Ferdinand de Saussure and G. W. F. Hegel, is employed to elucidate psychoanalytic concepts. The concept of the unconscious is explicated by Lacan in terms of paradoxes arising necessarily from the attempt to represent the self and its objects, and constraints put on interpersonal desire by its symbolic mediation, while Freud's theories of the nature of unconscious mental processing are recast in terms borrowed from structural linguistics.

URL: https://www.sciencedirect.com/science/article/pii/B0080430767010792

Truth, Verification, Verisimilitude, and Evidence: Philosophical Aspects

G. Oddie, in International Encyclopedia of the Social & Behavioral Sciences, 2001

4 Falsifiability

That metals expand when heated is false in any world in which one piece of heated metal does not expand. Since we cannot check every piece of heated metal, the proposition cannot be shown to be true. By contrast, it can be shown to be false, or falsified. The observation of one nonexpanding piece of heated metal will do the trick.

Karl Popper (1963) was one of the first to stress the fallibility of scientific theories and the asymmetry of verification and falsification. If the positivists took inspiration from Special Relativity, Popper took it from General Relativity—specifically, its prediction of the bending of light near massive bodies. He was impressed that Eddington's eclipse experiment of 1919 could verify neither Newton's nor Einstein's theory, but could have refuted either, and actually refuted Newton's. Thenceforth he took falsifiability to be the hallmark of genuine science. Pseudoscientific theories (Popper's examples were psychoanalysis and astrology) are replete with 'confirming instances.' Everything that happens appears to confirm them only because they rule nothing out. Popper argued that genuine theories must forever remain conjectures. But science is still rational, because we can submit falsifiable theories to severe tests. If they fail, we weed them out. If they survive? Then we submit them to more tests, until they too fail, as they almost assuredly will.

There are three serious problems with falsificationism. First, it does not account for the apparent epistemic value of confirmations. The hardline falsificationist must maintain that the appearance is an illusion. Second, it cannot explain why it is rational to act on the unrefuted theories. Confidence born of experimental success reeks of inductivism. Third, pessimism about the enterprise of science seems obligatory. Although truth is the goal of inquiry, the best we can manage is to pronounce a refuted theory false. We return to these in succeeding sections.

Other criticisms stem from Duhem's problem (Duhem 1954). Duhem noted that predictions can be deduced from theories only with the aid of auxiliary assumptions. One cannot deduce the path of a planet from Newton's theory of gravitation without additional hypotheses about the masses and positions of the Sun and the planets, and about the absence of other forces. It is not Newton's theory alone, but the conjunction of the theory with these auxiliaries, which faces the tribunal of experience. If the conjunction fails it is not clear which conjunct should be blamed. Since we can always blame the auxiliaries, falsification of a theory is impossible.

Quine generalized Duhem's point to undermine the positivist's distinction between factual truth and analytic truth. If no sentence is 'immune to revision' in the face of anomalies, no sentence is true simply by virtue of its meaning. Not even the law of excluded middle is sacrosanct, and meaning itself becomes suspect (Quine 1981).

Kuhn used Duhem to undermine the rationality of scientific revolutions. Since a theory cannot be refuted by experiment, it is not irrational to stick to it, tinkering with auxiliaries to accommodate the data. 'Normal science' consists precisely in that. A revolution only occurs when a bunch of scientists jump irrationally from one bandwagon to another. In Kuhn's famous phrase (which he later regretted) a revolution is a 'paradigm shift' (Kuhn 1962).

Feyerabend followed Kuhn, maintaining that proponents of different paradigms inhabit incommensurable world-views which determine their perception of the so-called observational 'facts' (Feyerabend 1993), thereby setting the stage for constructivists who maintain that science is a human construct (obviously true) with no controlling links to reality (obviously false). From constructivism it is a short leap into postmodernist quicksand, the intellectual vacuity of which has been brutally but amusingly exposed (Sokal 1996).

The more extreme lessons drawn from Duhem's simple point may sound sexy, but they do not withstand sober scrutiny (Watkins 1984). It is just false that each time a prediction goes awry it is one's whole global outlook that is on trial, and that one may rationally tinker with any element at all. In the nineteenth century, Newton's theory was well confirmed by myriad observations and applications, but it faced an apparent anomaly in the orbit of Uranus. Because the theory was well confirmed, scientists looked to the auxiliary assumptions, such as the assumption that there are no unobservable planets affecting Uranus's trajectory. It was far more probable, in the light of the total evidence, that there should be an as yet unobserved planet in the solar system than that Newton's theory be false. It was eminently reasonable to postulate such a planet and search for it. The subsequent discovery of Neptune was a resounding success for the theory.

This rejoinder, of course, requires a positive account of probability and confirmation, one which falsificationism does not supply.

URL: https://www.sciencedirect.com/science/article/pii/B0080430767010147

Simplicity, Truth, and Probability

Kevin T. Kelly, in Philosophy of Statistics, 2011

6 Empirical Simplicity Defined

In order to prove anything about Ockham's razor, a precise definition of empirical simplicity is required. The basic approach adopted here is that empirical complexity is a reflection of empirical effects relevant to the theoretical inference problem addressed. Thus, empirical complexity is not a mere matter of notation, but it is relative to the kind of truth one is trying to discover. An empirical effect is just a verifiable proposition — a proposition that might never be known to be false, but that comes to be known, eventually, if it is true. For example, [Newton, 1726] tested the identity of gravitational and inertial mass by swinging large pendula filled with identical weights of different kinds of matter and then watching to see if they ever went noticeably out of phase. If they were not identical in phase, the accumulating phase difference would have been noticeable eventually. Particle reactions are another example of empirical effects that may be very difficult to produce but that, once observed, are known to occur. Again, two open intervals through which no constant curve passes constitute a first-order effect, three open intervals through which no line passes constitute a second-order effect, and so forth (fig. 10.a-c). 11 Effects can be arbitrarily small or arbitrarily arcane, so they can take arbitrarily long to notice.

Figure 10. First, second, and third order effects

Let E be a countable set of possible effects. 12 Let the empirical presupposition K be a collection of finite subsets of E. It is assumed that each element of K is a possible candidate for the set of all effects that will ever be observed. The theoretical question Q is a partition of K into sets of finite effect sets. Each partition cell in Q corresponds to an empirical theory that might be true. Let T_S denote the (unique) theory in Q that corresponds to finite effect set S in K. For example, the hypotheses of interest to Newton can be identified, respectively, with the absence of an out-of-phase effect or the eventual appearance of an out-of-phase effect. The hypothesis that the degree of an unknown polynomial law is n can similarly be identified with an effect — refutation of all polynomial degrees < n. In light of the above discussion of causal inference, each linear causal network corresponds to a pattern of partial correlation effects (note that conditional dependence is noticeable, whereas independence implies only absence of verification of dependence). Each conservation theory of particle interactions can be identified with a finite set of effects corresponding to the discovery of reactions that are not linearly dependent on known reactions [Schulte, 2000; Luo and Schulte, 2006]. 13 The pair (K,Q) then represents the scientist's theoretical inference problem. The scientist's aim is to infer the true answer to Q from observed effects, assuming that the true effect set is in K.

Now empirical simplicity will be defined with respect to inference problem (K,Q). Effect set S conflicts with S′ in Q if and only if T_S is distinct from T_S′. Let π be a finite sequence of sets in K. Say that π is a skeptical path in (K,Q) if and only if for each pair S, S′ of successive effect sets along π, effect set S is a subset of S′ and S conflicts with S′ in Q. Define the empirical complexity c(S) of effect set S relative to (K,Q) to be a − 1, where a denotes the length of the longest skeptical path through (K,Q) that terminates in S. 14 Let the empirical complexity c(T) of theory T denote the empirical complexity of the least complex effect set in T.
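The definition lends itself to a direct computation. The sketch below is my own construction, using a toy (K, Q) loosely modeled on the polynomial-degree example above; it finds the longest skeptical path terminating in each effect set and reports c(S) = a − 1.

# Toy illustration only; K, theory, and the resulting complexities are my choices.
from functools import lru_cache

# Effect set {1, ..., n} records that every degree < n has been refuted; the answer
# to Q for that state is taken to be "the polynomial degree is n".
K = [frozenset(range(1, n + 1)) for n in range(0, 4)]    # {}, {1}, {1,2}, {1,2,3}

def theory(S):
    return len(S)                                        # the conjectured degree

def conflicts(S, S2):
    return theory(S) != theory(S2)

@lru_cache(maxsize=None)
def longest_skeptical_path_to(S):
    # Longest chain ..., S', S in K with S' a proper subset of S and theory(S') != theory(S).
    predecessors = [S2 for S2 in K if S2 < S and conflicts(S2, S)]
    if not predecessors:
        return 1
    return 1 + max(longest_skeptical_path_to(S2) for S2 in predecessors)

def empirical_complexity(S):
    return longest_skeptical_path_to(S) - 1              # c(S) = a - 1

for S in K:
    print(sorted(S), "-> c(S) =", empirical_complexity(S))
# The states {}, {1}, {1,2}, {1,2,3} come out with complexities 0, 1, 2, 3, matching
# the idea that each further effect embeds one more problem of induction.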

A skeptical path through (K,Q) poses an iterated problem of induction to a would-be solver of problem (K,Q), since every finite sequence of data received from a given state on such a path might have been produced by a state for which some alternative answer to Q is true. That explains why empirical complexity ought to be relevant to the problem of finding the true theory. Problem-solving effectiveness always depends on the intrinsic difficulty of the problem one is trying to solve and the depth of embedding of the problem of induction determines how hard it is to find the truth by inductive means. Since syntactically defined simplicity (e.g., [Li and Vitanyi, 1993]) can, but need not, latch onto skeptical paths in (K,Q), it does not provide such an explanation.

Let e be some input information. Let S_e denote the set of all effects verified by e. Define the conditional empirical complexities c(S | e), c(T | e) in (K,Q) just as before, but with respect to the restricted problem (K_e, Q), where K_e denotes the set of all effect sets S in K such that S_e is a subset of S.

URL: https://www.sciencedirect.com/science/article/pii/B9780444518620500319