Monday, October 26, 2009

Burt, Spearman, Thurstone, and Factor Analysis





Cyril Burt, Spearman, Thurstone, and Factor Analysis





Gould’s chapter on Cyril Burt commences by briefly outlining four instances of fraud committed by Burt during his career. These included fabricating data on identical twins, on I.Q. correlations in kinship, and on intelligence decline in Britain, and, most peculiarly, attempting to establish himself as the creator of the technique that is the focus of this chapter: factor analysis. Although Burt’s fraud was apparently the work of a mentally ill individual (p.266), Gould (p.269) calls some of the later acts of fraud “the afterthought of a defeated man.” Regardless of his later problems, Burt’s earlier errors immensely affected twentieth-century society.



Factor analysis is considered the most important technique in modern multivariate statistics and was developed by Burt’s predecessor and mentor, Charles Spearman. Spearman was a distinguished psychologist and statistician who studied the correlations between mental tests. He believed that some simpler structure might be responsible for the positive correlations among the tests. Spearman (p.286) defines two possibilities for this underlying structure. The first is that the positive correlations might be reduced to a small set of nonaligned attributes; the second is that they might be reduced to a single general factor, or cause. Spearman calls the former oligarchic and the latter monarchic. In addition, he identifies a residual variance, or anarchic component, representing information that is peculiar to each test and unrelated to any other test. This residual information and the idea of a single general factor comprise his “two-factor theory,” which becomes a key object of analysis and discussion for Burt, Spearman, and the American L.L. Thurstone.





Factor Analysis and Correlation





Factor analysis is a method of inquiry with an abstract theoretical foundation, designed to reveal any underlying structures within groups of data. Gould (p.268) admits that it is a “bitch” to work with. He also describes it as a device for inferring a tangible framework of intellect that was founded, from the start, on cognitive errors. Mathematically, factor analysis is a means of reducing a complex system of correlations into fewer dimensions.





Correlation comes in the following mathematical flavors:





Positive correlation- Two measures tend to change in the same direction, with a coefficient value of up to +1. The closer the value is to +1, the stronger the positive correlation.





Negative correlation- Two measures tend to change in opposite directions, with a coefficient value down to -1. The closer the value is to -1, the stronger the negative correlation.





Zero correlation- A value of 0 indicates no correlation between the measures.





The correlation coefficient, symbolized by r, describes the shape of the ellipse formed when the paired measurements are plotted on a graph. The more elongated the ellipse, the stronger the correlation between the measurements.
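As a rough illustration (the scores below are invented purely for demonstration), the coefficient r can be computed directly from two lists of paired measurements:

import numpy as np

# Hypothetical scores for ten students on two mental tests
# (made-up numbers, used only to illustrate the calculation).
test_a = np.array([12, 15, 9, 20, 17, 11, 14, 18, 10, 16], dtype=float)
test_b = np.array([30, 34, 25, 41, 36, 27, 31, 39, 26, 35], dtype=float)

# Pearson's r: the covariance of the two tests divided by the
# product of their standard deviations.
r = np.corrcoef(test_a, test_b)[0, 1]
print(f"r = {r:.3f}")  # close to +1 here, a strong positive correlation

A value near +1 corresponds to a tightly elongated ellipse of plotted points; a value near 0 corresponds to a roughly circular cloud.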





Problems with correlation





One of the fundamental problems with correlation is that it may be used to assert a false causality. The fact of correlation does not indicate that there is an underlying cause behind it; Gould points out that the majority of correlations in our world are noncausal. He (p.272) declares that the equation of correlation with causation is probably among the two or three most serious and common errors of human reasoning.
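A minimal simulation (with entirely invented variables) makes the point concrete: two measures can correlate strongly without either causing the other, simply because both depend on a hidden third factor.

import numpy as np

rng = np.random.default_rng(0)
n = 1000

# A hidden common cause that we never measure directly.
hidden = rng.normal(size=n)

# Two measures that each depend on the hidden factor plus noise;
# neither has any direct effect on the other.
x = 0.8 * hidden + rng.normal(scale=0.5, size=n)
y = 0.8 * hidden + rng.normal(scale=0.5, size=n)

print(f"corr(x, y) = {np.corrcoef(x, y)[0, 1]:.2f}")  # strongly positive, yet noncausal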



What are the other crucial errors of human reasoning? Some statisticians still believe that factor analysis can reveal causes. If there is a correlation, can an underlying cause be identified without further information? What do you think a statistician who believes in causality would use to support his argument?





Principal Components





In factor analysis, factoring works much as it does in algebra: an expression is simplified by pulling out common factors. Geometrically, this is represented by placing axes through the ellipsoid of data points. The main axis, which recovers the greatest amount of information, is called the first principal component. Additional axes are required to capture the remaining information; the next one is placed perpendicular to the first so that it resolves more of the remaining information than any other line perpendicular to the first principal component. This line is called the second principal component.
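A hedged sketch of this geometric idea, using an invented correlation matrix of four tests: the eigenvectors of the matrix give the principal axes, and the eigenvalues show how much of the total variance each axis recovers.

import numpy as np

# Invented correlation matrix for four hypothetical mental tests.
R = np.array([
    [1.0, 0.6, 0.5, 0.4],
    [0.6, 1.0, 0.5, 0.4],
    [0.5, 0.5, 1.0, 0.3],
    [0.4, 0.4, 0.3, 1.0],
])

# Eigendecomposition: eigenvectors are the axes through the ellipsoid,
# eigenvalues measure how much variance each axis captures.
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]          # sort from largest to smallest
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()
print(f"First principal component captures  {100 * explained[0]:.0f}% of the variance")
print(f"Second principal component captures {100 * explained[1]:.0f}% of the variance")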





Two-factor Theory and Spearman’s g





Charles Spearman developed factor analysis and his two-factor theory as a procedure for discovering whether the variance in a matrix of correlation coefficients could be reduced to a single general factor or only to numerous group factors (p.287). Spearman concluded that there was a single, monarchic general factor, known in two-factor theory as Spearman’s g. Spearman’s g is the first principal component of the correlation matrix of mental tests.



In addition to g, Spearman also identifies s, a residual variance unique to each individual test. These principal and residual components are combined with his theory of general energy and specific engines to provide a framework for the heritability of g. Spearman’s theory can be summarized simply: a general cerebral energy (g) activates a set of specific mental engines, each with its own location, and the more general energy a person possesses, the more intelligent that person is. Intelligence is thus defined by a general energy that is the product of an individual’s inborn structure.
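In the two-factor picture, each test’s variance splits into a part explained by g (the square of its loading on the first principal component) and a residual s unique to that test. A minimal sketch, again using an invented correlation matrix:

import numpy as np

# Invented correlation matrix for four hypothetical mental tests.
R = np.array([
    [1.0, 0.6, 0.5, 0.4],
    [0.6, 1.0, 0.5, 0.4],
    [0.5, 0.5, 1.0, 0.3],
    [0.4, 0.4, 0.3, 1.0],
])

eigvals, eigvecs = np.linalg.eigh(R)
i = np.argmax(eigvals)                       # first principal component = g
g_loadings = eigvecs[:, i] * np.sqrt(eigvals[i])
if g_loadings.sum() < 0:
    g_loadings = -g_loadings                 # the sign of an eigenvector is arbitrary

for test, loading in enumerate(g_loadings, start=1):
    common = loading ** 2                    # variance the test shares with g
    specific = 1.0 - common                  # residual s, unique to this test
    print(f"test {test}: g loading = {loading:.2f}, specific variance s = {specific:.2f}")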



In Spearman’s defense, Gould (p.302) argues that Spearman held conventional views on intelligence and was not an architect of hereditarian theory. In rebuttal, Gould (p.300) does indicate that some of Spearman’s primary claims are synonymous with most hereditarian beliefs:





Assertion 1- Intelligence is a unitary “thing.”





Assertion 2- The inference of a physical substrate for intelligence.





Cyril Burt’s Uncompromising Hereditarianism





Cyril Burt utilized factor analysis to argue for innate intelligence. In addition, he believed that class differentiation was a result of innate intelligence. Gould (p.304) describes Burt’s proof as scant and superficial data that relied on circular reasoning.



Regardless, Burt was able to devise a two-part position that he stuck by throughout his career:



Assertion 1- Intelligence is a general factor that is largely, if not entirely, inherited.





Assertion 2- Intelligence is a reified factor; it is an abstract concept transformed into a “thing.”





Burt set out three goals for himself in his 1909 paper (which he cited as proof of innate intelligence):





Goal #1- to determine whether general intelligence can be detected and measured.





Goal #2- to determine whether the nature of general intelligence can be isolated and analyzed for meaning.





Goal #3- to determine whether the development of intelligence is a result of the environment and individual acquisition, or is dependent on inheritance.





To demonstrate hereditary intelligence, Burt conducted an experiment on 86 boys of varying education and class. He administered 12 tests of cognitive function and also ranked the boys based on the input of expert observers. Burt used his results to argue against environmental influence. Two fundamental flaws in his argument lie in his experimental design and in his statistics: given the small sample size (n=86) and the use of a subjective, biased ranking system, Burt’s arguments for heredity in this experiment should be dismissed. This seems to be a common attribute of all of the hereditarians we have studied thus far.



Burt expanded on Spearman’s approach in two ways: by inverting the technique and by extending the two-factor theory. Burt’s inversion of factor analysis, which he called Q-mode analysis, was based on correlations between people instead of between tests. He developed this type of analysis out of his interest in placing individuals into a unilinear ranking based on inherited mental worth (p.323). As a result, Burt’s work helped secure the major political victory of hereditarian theories of mental testing in Britain (p.323). Gould describes the impact of the 11+ examination as equivalent to that of the Immigration Restriction Act of 1924. As a result of these tests, eighty percent of pupils were deemed unfit for higher education.
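Returning to Burt’s Q-mode inversion, the distinction can be sketched on an invented score matrix: correlating the columns (tests) is the ordinary approach, while correlating the rows (people) is Burt’s inversion.

import numpy as np

rng = np.random.default_rng(1)

# Invented scores: 6 people (rows) by 4 tests (columns).
scores = rng.normal(size=(6, 4))

# R-mode: correlations between tests (columns).
r_mode = np.corrcoef(scores, rowvar=False)    # 4 x 4 matrix

# Q-mode: correlations between people (rows), Burt's inversion.
q_mode = np.corrcoef(scores, rowvar=True)     # 6 x 6 matrix

print("R-mode (test by test):    ", r_mode.shape)
print("Q-mode (person by person):", q_mode.shape)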



Given the impact of such a test and our knowledge of the fallacies in its data and method, what new “hereditarian” approaches are visible in American society today? Are there any covert approaches?





Burt also expanded Spearman’s two-factor theory to incorporate group factors, which he identified by studying not the first principal component but the second and subsequent principal components. The premise is that the first principal component must run between, and not through, the sub-clusters formed by these lesser principal components. Burt’s theory thus became a four-factor theory in comparison to Spearman’s two-factor theory; the two extra factors are group factors and accidental factors, an accidental factor being a single trait measured on a single occasion.
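One way to see where group factors come from, again with invented numbers: after the variance attributable to the first principal component is removed, the positive residual correlations left within subsets of tests mark the sub-clusters treated as group factors.

import numpy as np

# Invented correlation matrix: tests 1-2 form one cluster and tests 3-4
# another, on top of a shared general factor.
R = np.array([
    [1.0, 0.7, 0.3, 0.3],
    [0.7, 1.0, 0.3, 0.3],
    [0.3, 0.3, 1.0, 0.7],
    [0.3, 0.3, 0.7, 1.0],
])

eigvals, eigvecs = np.linalg.eigh(R)
i = np.argmax(eigvals)
g = eigvecs[:, i] * np.sqrt(eigvals[i])       # loadings on the first component

# Residual correlation after removing the first principal component:
# positive entries within each clustered pair suggest group factors.
residual = R - np.outer(g, g)
print(np.round(residual, 2))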





Thurstone and Rotation of Axes





L. L. Thurstone was the American counterpart to Spearman and Burt. Thurstone fell into the same reification trap, discrediting Spearman’s g because it wasn’t real enough (p.326). Thurstone believed that both Spearman and Burt failed to identify the true vectors of the mind because they placed the factor axes in the wrong geometric position by insisting on a first principal component, g. One of the problems he points out is the negative projection of some of the data: if a factor represents a true vector of mind, it should either be present or absent, so its values should only be positive or zero.



In addition to this bipolar problem, Spearman’s g presents a difficulty for Thurstone because it was supposed to be an all-encompassing grand average, yet its position depends on the subjective selection of tests and shifts from one battery to another (p.327). Thurstone constructed a solution designed to resolve the negative loadings as well as the “g” problem: he took the Spearman-Burt principal components and rotated them until they lay near actual clusters of vectors. The result, which he called simple structure, provides an equivalent, not better, solution in factor analysis. What it did not solve is the problem shared by all of the aforementioned figures: reification emanating from their preconceptions.
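Thurstone’s rotation can be illustrated with a small invented loading matrix and a varimax-style criterion (varimax is a later formalization of rotation toward simple structure, not Thurstone’s own computation). For two factors a rotation is a single angle, so it can simply be scanned:

import numpy as np

# Invented unrotated loadings: four tests on two factors, with a strong
# first, general-looking axis and a bipolar second axis.
L = np.array([
    [0.7,  0.4],
    [0.7,  0.3],
    [0.6, -0.4],
    [0.6, -0.5],
])

def simple_structure_score(A):
    # Variance of the squared loadings in each column: high when each
    # test loads strongly on one factor and weakly on the others.
    return np.sum(np.var(A ** 2, axis=0))

def rotate(A, t):
    rotation = np.array([[np.cos(t), -np.sin(t)],
                         [np.sin(t),  np.cos(t)]])
    return A @ rotation

# Scan candidate angles and keep the one that maximizes the criterion.
best_angle = max(np.linspace(0.0, np.pi / 2, 901),
                 key=lambda t: simple_structure_score(rotate(L, t)))

rotated = rotate(L, best_angle)
rotated *= np.sign(rotated.sum(axis=0))       # the sign of a factor axis is arbitrary
print(np.round(rotated, 2))                   # each test now loads mainly on one factor

After rotation the axes sit near the actual clusters of tests, at the cost of giving up the single dominant first axis that Spearman and Burt identified with g.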



Although this is a very condensed and stripped-down account of Gould’s chapter on factor analysis, several key points can be extracted:







  • Factor analysis can be an effective tool for the interpretation of data, but not as a means of determining causality from a correlation model unless there is additional empirical data to support a causal relationship.


  • Spearman, Burt, and Thurstone’s approach to human intelligence is fundamentally flawed because it rests on reification, treating human intelligence as a physical and quantifiable thing. Reification brings concepts out of the abstract and presents them as part of the real world.


  • Spearman, Burt, and Thurstone also operate within the confines of hereditarianism, believing that human intelligence exists mainly in the realm of the innate. They also used a unilinear scale to rank human beings according to this innate intelligence.


  • Gould’s book has indicated that history is often cyclical. This is best represented by the hereditarian premise and how it has reappeared under different guises over time.




The concept of factor analysis has many applications in the real world, especially in mathematics and statistics. How does this new method of bias-reinforcement compare to the past methods that we have read about? Does factor analysis bear any relevance to human intelligence other than identifying degrees of correlation that are most likely noncausal?







Links to learn more about factor analysis-





http://www.psych.cornell.edu/Darlington/factor.htm





http://www.hawaii.edu/powerkills/UFA.HTM





http://www.its.ucdavis.edu/telecom/r11/factan.html





1 comment:

  1. -How does the new method of bias-reinforcement compare to the past methods that we have read about?

    It is hard to see what the end goal was for most characters in this book: legitimacy as a science, honest progress in the area of intelligence testing, or an attempt to quantify predisposed ideas as a means of maintaining the societal status quo. Most likely it is a combination of these three for everyone involved, with the strength of each motivation varying between individuals. There is a common theme of circular reasoning in all of these arguments, and factor analysis was just the flavor of the era during Spearman, Burt, and Thurstone's influence. The increasing complexity of explanations for essentially the same argument over time represents the maturation of technique rather than progress toward an actual answer to the question. Factor analysis, as difficult as it is to utilize or explain, was just the next rung in the ladder that extends toward our understanding of human intelligence.

    -Does factor analysis bear any relevance on human intelligence other than identifying degrees of correlations that are most likely noncausal?

    Most of the correlations probably do represent causal relationships, but they are not representative of a solely hereditarian form of intelligence.

