Wednesday, October 28, 2009

Gould: A Positive Conclusion and Swift Debunking




Throughout The Mismeasure of Man, Gould has examined several key contributors to a recurring theme of biological determinism. Gould exposed the fallacies, errors, and fraud that key figures in “science” used as the basis for theories treating race and intelligence as something that could be inherited, reified, and ranked. Gould (p.351) suggests that the concept of unilinear progress is not only a determining factor in social rankings; it also reflects an incorrect idea of how science develops. Gould uses the metaphor of science as a barrel of accumulating knowledge to explain this development. He indicates that some would view debunking as negative because it would only eject a few rotten apples (bad theories) from this barrel. Gould disagrees, explaining that the barrel is constantly at capacity, and the rotten apples must be discarded in order to create space for better ones. Gould specifies (p.352):


Scientists do not debunk only to cleanse and purge. They refute older ideas in the light of a different view about the nature of things.


It is this perpetual development, debunking, reinvention, and at times revitalization within science that generates new ideas and theories, but for the debunking to have a long-term effect it must, as Gould points out (p.352), use more competent biology to remove erroneous ideas rooted in social prejudice. With the increase of knowledge about human biology, evolution, and genetics, the theoretical foundations supporting biological determinism have suffered several defeats. Gould (p.352) argues that the strikingly small genetic variation among human populations is one of the primary biological principles for dispelling biological determinism; this limited variation is a contingent fact of evolutionary history.


Is the human species at a point where we can make arguments of “fact” regarding sciences such as biology, or are we just perpetuating the debunking machine by replacing rotten apples with better ones? Debunking has been a common occurrence throughout history, and most likely some ideas have been discredited without adequate supporting knowledge. What if Mendel’s ideas had been tossed out? Would someone inevitably have discovered inheritance eventually?


On Biology and Human Nature


Humans are inevitably a part of nature. The complex system of organization that makes up the human organism operates on the same principles of life as other forms. Do we maintain uniqueness if we share our biological systems? Gould (p.354) states that human novelty has had an immense impact upon the earth because of a new kind of evolution, the adaptation humans have exploited: culture. The brain is where this special and peculiar ability resides. Has cultural evolution been the “smoking gun” of how the species has managed a stranglehold on the world? An Australopithecine might disagree, but a shotgun trumps an Oldowan chopper, right?

These advancements and transformations have occurred at a higher frequency, and in a shorter span of time, than anything else in history (written or geologic). Gould (p.355) argues that cultural evolution can happen at such an accelerated rate because it operates by the inheritance of acquired traits. Biological evolution occurs at a much slower rate. Another important trait of cultural evolution is that it is reversible, whereas biological evolution is not.


The classical arguments of biological determinism we have scrutinized fail because they are founded within products of cultural evolution. In other words, biological determinism has no foundational basis because its proponents are measuring socio-cultural bias rather than biological traits. As a result of the advancements in science, the very biological basis of the human species extinguishes the arguments of biological determinism. Gould describes the inheritance and modification of acquired behavior as more effective than biological evolution in relationship to the human organism. Do you agree with that assumption, or has too little time on a grander scale gone by? It is easy to assume that cultural evolution is what allows us to be somehow unique and more advanced than the chimpanzee. Have we developed beyond our means?

Gould (p.357) believes that modern biology has created a model that straddles the claim that biology has nothing to teach us about human behavior and the theory that specific behavioral traits result from selective adaptations embedded in our genes. Gould also identifies two major areas for biological comprehension. The first is fruitful analogies and their limited use. Gould describes the use of analogies to infer genetic similarity as one of the most frequent errors of reasoning: just as a correlation in factor analysis may expose common relationships without identifying or explaining their cause, an analogy may reveal similarity without establishing a shared genetic basis. The second is biological potentiality versus biological determinism. Biological potentiality is the concept that underlying generating rules perpetuate human behavior, rather than the deterministic idea that a genetic basis of human nature exists for specific behaviors. Gould (p.359) points out that sociobiologists have made a primary error in seeking the genetic basis of human behavior at the wrong level.

Two different arguments have led Gould to determine that broad behavioral ranges occur as a consequence of the evolution and structure of the human brain. The first argument relies on the vast range of human behavior, from peaceful to aggressive. Human behavior is malleable based on the context of the situation. Gould (p.361) believes it is likely that natural selection acted to maximize the range of human behavior. The second argument is that the structural design of our brain has led to our increased capacities for human success.


Gould invokes the idea of neoteny to close his chapter. Is flexibility within the human species really the hallmark of human evolution? By retaining a more juvenile capacity to adapt and learn, has the human species found a niche in not evolving? Is the structure of our brain a product of a lengthy biological journey, or is it a rapidly advancing adaptation generated by culture within the biological range of variation? Can culture really act as an agent of change at a level that rivals biology?



Links-

Gould on Human Nature

http://condor.depaul.edu/~mfiddler/hyphen/gould-humanature.htm



Monday, October 26, 2009

Burt, Spearman, Thurstone, and Factor Analysis










Gould’s chapter on Cyril Burt commences by briefly outlining four instances of fraud committed by Burt during his career: multiple occurrences of fabricated data on identical twins, fabricated I.Q. correlations in kinship, a fabricated decline of intelligence in Britain, and, most peculiarly, an attempt to establish himself as the creator of the technique that is the focus of this chapter, factor analysis. Although Burt’s fraud was apparently the work of a mentally ill individual (p.266), Gould (p.269) calls some of the later acts of fraud “the afterthought of a defeated man.” Regardless of his later problems, Burt’s earlier errors immensely affected twentieth-century society.



Factor analysis is considered the most important technique in modern multivariate statistics and was developed by Burt’s predecessor and mentor, Charles Spearman. Spearman was a distinguished psychologist and statistician who began to study correlations between mental tests. Spearman believed that some simpler structure might be responsible for the positive correlations among the tests. Spearman (p.286) defines two possibilities for this underlying structure. The first is that the positive correlations might be reduced to a small set of nonaligned attributes; the second is that they might be reduced to a single general factor, or cause. Spearman calls the former oligarchic and the latter monarchic. In addition, he identifies a residual variance, or anarchic component, that represents information peculiar to each test and unrelated to any other test. This residual information and the idea of a single general factor comprise his “two-factor theory,” which becomes a key object of analysis and discussion for Burt, Spearman, and the American L.L. Thurstone.





Factor Analysis and Correlation





Factor analysis is an abstract, theoretically grounded method of inquiry designed to reveal underlying structures within groups of data. Gould (p.268) explains that it is a “bitch” to work with. It is also described as the device used to build a supposedly tangible framework of intellect, a project founded from the start on cognitive errors. Mathematically, factor analysis is a means of reducing an intricate system of correlations to fewer dimensions.





Correlation comes in the following mathematical flavors:





Positive correlation- Two measurements that tend to change in the same direction are positively correlated, with a value of up to +1. The closer the value is to +1, the stronger the positive correlation.





Negative correlation- Two measurements that tend to change in opposite directions are negatively correlated, with a value down to -1. The closer the value is to -1, the stronger the negative correlation.





Zero correlation- The value for having no correlation is 0.





The correlation coefficient, which is symbolized by r, measures the form of an ellipse of plotted points in a graph. The more elongated the ellipse, the stronger the correlation between measurements.
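As a minimal illustration of the arithmetic behind r (not an example from Gould; the numbers below are invented), the coefficient can be computed either with a library call or directly from its definition as the covariance divided by the product of the standard deviations:

    import numpy as np

    # Invented paired measurements, e.g. two mental tests given to the same five people.
    x = np.array([2.0, 4.0, 5.0, 7.0, 9.0])
    y = np.array([1.5, 3.0, 4.5, 6.5, 8.0])

    # Library computation of Pearson's r.
    r = np.corrcoef(x, y)[0, 1]

    # The same quantity from its definition: covariance over the product of standard deviations.
    r_manual = np.sum((x - x.mean()) * (y - y.mean())) / (
        np.sqrt(np.sum((x - x.mean()) ** 2)) * np.sqrt(np.sum((y - y.mean()) ** 2))
    )

    print(round(r, 3), round(r_manual, 3))  # values near +1 indicate a strong positive correlation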





Problems with correlation





One of the fundamental problems with correlation is how it may be used to create false causality. The fact of correlation does not indicate that there is an underlying cause behind it. Gould points out that, in fact, the majority of correlations in our world are noncausal. He (p.272) declares that the invalid assumption that correlation implies causation is probably among the two or three most serious and common errors of human reasoning.
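A toy demonstration of this point (mine, not Gould's; the data are synthetic and purely illustrative): two quantities that merely share an upward drift over time will correlate strongly even though neither causes the other.

    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(30)

    # Two unrelated quantities that both happen to drift upward over time.
    series_a = 10 + 0.5 * years + rng.normal(0, 1, size=years.size)
    series_b = 3 + 1.2 * years + rng.normal(0, 2, size=years.size)

    r = np.corrcoef(series_a, series_b)[0, 1]
    print(round(r, 2))  # close to +1, yet neither series causes the other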



What are the other crucial errors of human reasoning? Some statisticians still read causality into factor analysis. If there is a correlation, is there an underlying cause that can be identified without further information? What do you think a statistician who believes in such causality would use to support his argument?





Principal Components





In factor analysis, factoring occurs much like it does in algebra: the equation is simplified by the removal of common multipliers. In factor analysis this is represented geometrically by placing axes through the ellipsoid of the data. The main axis, which recovers the greatest amount of information, is called the first principal component. Additional axes are required to capture the remaining information. This is accomplished by placing a perpendicular axis that resolves more of the remaining information than any other line perpendicular to the first principal component; this is the second principal component.
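A minimal numpy sketch of this geometry (invented test scores, not data from the book): the eigenvectors of the correlation matrix give the principal components, and the eigenvalues show how much of the information each axis recovers.

    import numpy as np

    rng = np.random.default_rng(1)

    # Invented scores on four hypothetical tests, built to be positively correlated.
    shared = rng.normal(size=(200, 1))
    scores = shared + 0.5 * rng.normal(size=(200, 4))

    # Correlation matrix of the tests, then its eigen decomposition.
    corr = np.corrcoef(scores, rowvar=False)
    eigenvalues, eigenvectors = np.linalg.eigh(corr)

    # eigh returns ascending order; reverse so the first principal component comes first.
    order = np.argsort(eigenvalues)[::-1]
    eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

    # Share of the total information recovered by each principal component.
    print((eigenvalues / eigenvalues.sum()).round(2))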





Two-factor Theory and Spearman’s g





Charles Spearman developed factor analysis and his two-factor theory as a procedure for discovering whether the variance in a matrix of correlation coefficients could be reduced to a single general factor or to numerous individual group factors (p.287). Spearman determined that there was only a single, monarchic general factor, known in two-factor theory as Spearman’s g. Spearman’s g is the first principal component of the correlation matrix of mental tests.



In addition to g, Spearman also identifies s, a residual variance unique to each individual test. These principal and residual components were combined with his theory of general energy and specific engines to provide a framework for the heritability of g. Spearman’s theory can be described simply: the general energy (g) of the brain activates a set of specific mental engines, each with a specific location. The more general energy there is, the more intelligent a person is. A person’s intelligence is thus defined by a general energy that results from the individual’s inborn structure.
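A hedged sketch of the structure this implies (the loadings are invented for illustration): if each standardized test is driven by g plus its own specific factor s, then the model-implied correlation between two different tests is simply the product of their g loadings, and the s variance of each test is whatever g leaves unexplained.

    import numpy as np

    # Hypothetical g loadings for four tests (how strongly each draws on the "general energy").
    g_loadings = np.array([0.9, 0.8, 0.7, 0.6])

    # Specific (s) variance for each test: the part g does not account for.
    s_variance = 1 - g_loadings ** 2

    # Model-implied correlation matrix: r_ij = a_i * a_j for i != j, and 1 on the diagonal.
    implied = np.outer(g_loadings, g_loadings)
    np.fill_diagonal(implied, 1.0)

    print(implied.round(2))
    print(s_variance.round(2))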



In Spearman’s defense, Gould (p.302) argues that Spearman held conventional views on intelligence and was not an architect of hereditarian theory. In rebuttal, Gould (p.300) does indicate that some of Spearman’s primary claims are synonymous with most hereditarian beliefs:





Assertion 1- Intelligence is a unitary “thing.”





Assertion 2- The inference of a physical substrate for intelligence.





Cyril Burt’s Uncompromising Hereditarianism





Cyril Burt utilized factor analysis to argue for innate intelligence. In addition, he believed that class differentiation was a result of innate intelligence. Gould (p.304) describes Burt’s proof as scant and superficial data that relied on circular reasoning.



Regardless, Burt was able to devise a two-part position that he stuck by throughout his career:



Assertion 1- Intelligence is a general factor that is largely, if not entirely inherited.





Assertion 2- Intelligence is a reified factor; it is an abstract concept transformed into a “thing.”





Burt set out 3 goals for himself in his 1909 paper (which he cited as proof of innate intelligence):





Goal #1- to determine if general intelligence can be detected and measured.





Goal #2- to determine if the nature of general intelligence can be isolated and analyzed for meaning.





Goal #3- whether the development of intelligence is a result of the environment and individual acquisition, or dependent on inheritance.





Burt conducted an experiment, a study of 86 boys of varying education and class, to demonstrate hereditary intelligence. He administered 12 tests of cognitive function and also ranked the boys based on input from expert observers. Burt used his results to form an argument against environmental influence. Two fundamental flaws in his argument reside in his experimental design and in his statistics: given the low sample size (n=86) and the use of a subjective and biased ranking system, Burt’s arguments for heredity in this experiment should be nullified. This seems to be a common attribute of all of the hereditarians we have studied thus far.



Burt expanded on Spearman’s approach by creating an inverted technique and by extending Spearman’s two-factor theory. Burt called his inversion of factor analysis Q-mode analysis; it was based on correlations between people instead of between tests. He developed this type of analysis out of his interest in finding the relationships among individuals in a unilinear ranking system based on inherited mental value (p.323). As a result, Burt’s work helped perpetuate the major political victory in Britain of hereditarian theories of mental testing (p.323). Gould describes the 11+ examination as equivalent in impact to the Immigration Restriction Act of 1924. As a result of these tests, eighty percent of pupils were regarded as unfit for higher education.
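A small sketch of the inversion described above (invented scores, purely illustrative): numerically, conventional R-mode analysis correlates the columns (tests) of a person-by-test score matrix, while Burt's Q-mode correlates the rows (people) of the very same matrix.

    import numpy as np

    rng = np.random.default_rng(2)

    # Invented score matrix: 10 people (rows) by 5 tests (columns).
    scores = rng.normal(size=(10, 5))

    # R-mode: correlations between tests (columns), as in Spearman's approach.
    r_mode = np.corrcoef(scores, rowvar=False)   # 5 x 5 matrix

    # Q-mode: correlations between people (rows), as in Burt's inverted approach.
    q_mode = np.corrcoef(scores)                 # 10 x 10 matrix

    print(r_mode.shape, q_mode.shape)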



Given the impact of such a test and our knowledge of the fallacies in its data and method, what new “hereditarian” approaches are visible in American society today? Are there any covert approaches?





Burt also expanded Spearman’s two-factor theory to incorporate group factors, which he identified by studying not the first principal component but the secondary and subsequent principal components. The premise is that the first principal component must run between, and not through, the subclusters formed by these lesser principal components. Burt’s theory thus became a four-factor theory in comparison to Spearman’s two-factor theory; the extra two factors are group factors and accidental factors. Accidental factors reflect a single trait measured on a single occasion.





Thurstone and Rotation of Axes





L. L. Thurstone was the American counterpart to Spearman and Burt. Thurstone fell into the same reification trap even while discrediting Spearman’s g for not being real enough (p.326). Thurstone believed that both Spearman and Burt failed to identify the true vectors of the mind because they placed the factor axes in the wrong geometric position by relying on the first principal component, g. One of the problems he points out is the negative projection of some of the data: if a factor represents a true vector of the mind, a test should register it either positively or not at all, so there should be only positive or zero correlations, never negative ones.



In addition to the bipolar problem, Spearman’s g presents a problem for Thurstone because it was supposed to be an all-encompassing grand average, yet its position depends on subjective test selection and shifts from one battery of tests to another (p.327). Thurstone constructs a solution designed to resolve the negative projections as well as the “g” problem: he takes the Spearman-Burt principal components and rotates them until they lie near actual clusters of vectors. The result, which he calls simple structure, provides an equivalent, not better, solution in factor analysis. What it does not solve is the problem shared by all of the aforementioned historical characters: reification emanating from their perceptions.
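To see in miniature why a rotated solution is "equivalent, not better" (the loadings below are invented): any orthogonal rotation of the factor axes leaves the correlations they reproduce unchanged, so the choice of axes, and hence what gets reified, is a matter of interpretation rather than of fit.

    import numpy as np

    # Invented loadings of five tests on two principal-component style axes.
    loadings = np.array([
        [0.8,  0.3],
        [0.7,  0.4],
        [0.6, -0.5],
        [0.5, -0.6],
        [0.7,  0.1],
    ])

    # Correlations reproduced by these loadings (common-factor part only).
    reproduced = loadings @ loadings.T

    # Rotate the axes by 40 degrees, as Thurstone rotated toward clusters of tests.
    theta = np.deg2rad(40)
    rotation = np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
    rotated = loadings @ rotation

    # The rotated loadings look different but reproduce exactly the same correlations.
    print(np.allclose(reproduced, rotated @ rotated.T))  # True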



Although this is a very condensed and stripped-down account of Gould’s chapter on factor analysis, several key points can be extracted:







  • Factor analysis can be an effective tool for the interpretation of data, but not as a means of determining causality from a correlation model unless there is additional empirical evidence to support a causal relationship.


  • Spearman, Burt, and Thurstone’s approach to human intelligence is fundamentally flawed because it rests on reification: treating human intelligence as a physical, quantifiable trait. Reification brings concepts out of the abstract and treats them as real-world things.


  • Spearman, Burt, and Thurstone also operate within the confines of hereditarianism, believing that human intelligence exists mainly in the realm of the innate. They also provided a unilinear scale for ranking human beings based on this innate intelligence.


  • Gould’s book has indicated that history is often cyclical. This is best represented by the hereditarian position and how it has reappeared under different guises over time.




The concept of factor analysis has many applications in the real world, especially in mathematics and statistics. How does this newer method of reinforcing bias compare to the past methods that we have read about? Does factor analysis bear any relevance to human intelligence other than identifying degrees of correlation that are most likely noncausal?







Links to learn more about factor analysis-





http://www.psych.cornell.edu/Darlington/factor.htm





http://www.hawaii.edu/powerkills/UFA.HTM





http://www.its.ucdavis.edu/telecom/r11/factan.html





Wednesday, October 21, 2009

The Hereditarian Theory of IQ: An American Invention

Alfred Binet and the original purpose of the Binet scale

Binet flirts with craniometry

When Alfred Binet decided to study the measurement of intelligence, he used the age-old method of measuring skulls and favored the conclusions set forth by his countryman Paul Broca. He collected his data by going to various schools and measuring the heads of pupils designated by their teachers as the smartest and the stupidest. After three years and several publications, Binet was no longer sure of the conclusion that intelligence is correlated with head size. Binet’s research found that larger head size favored the “good” student, but the difference between the “good” and “poor” students amounted to mere millimeters. Secondly, Binet did not observe a large difference in the anterior region of the skull, where higher intelligence was supposedly seated; it is where Broca, in his analysis, had found the greatest disparity between superior and less fortunate people (177). Binet concluded that even though most of the results pointed in the right direction, it was still useless to assess the intelligence of an individual this way, because the differences between smart and poor students were too small. He also found that poor students varied more than smart students, because the smallest and largest values usually belonged to the poor pupils.
Furthermore, Binet became aware of his own unconscious bias. “I feared,” Binet wrote, “that in making measurements on heads with the intention of finding a difference in volume between an intelligent and a less intelligent head, I would be led to increase, unconsciously and in good faith, the cephalic volume of intelligent heads and to decrease that of unintelligent heads (177).” Binet was able to confirm his unconscious bias by re-measuring the heads of “idiots and imbeciles” in a hospital, where he found an average diminution of 3 mm, a good deal more than the average difference he had measured between the skulls of smart and poor students. In the end, Binet did recalculate his work and found an extreme average difference of 3 to 4 mm, but it still did not exceed the potential effect of bias. Thus craniometry, the jewel of nineteenth-century objectivity, was not destined for continued celebration (178).

Binet’s scale and the birth of IQ

In 1904 Binet was commissioned by the minister of public education to develop techniques for identifying children who had problems learning. Binet decided to reject the use of craniometry and Lombroso’s anatomical stigmata and to focus instead on psychological methods. He brought together a series of short tasks; some of these tasks were counting coins, reasoning, “ordering,” comprehension, invention, and censure. Each task was assigned a mental age, and a child would begin the test at the youngest mental-age tasks and proceed until he could go no further. The age of the last tasks completed would be his mental age, and his general intellectual level was calculated by subtracting the mental age from his true chronological age. Binet’s test was concerned with separating natural intelligence from instruction. Binet stated, “We give him nothing to read, nothing to write, and submit him to no test in which he might succeed by means of rote learning (180-181).” Furthermore, Binet declined to discuss the meaning of the score he assigned the children, for he reminds us that intelligence isn’t a single, scalable thing like height.
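A toy illustration of the scoring arithmetic described above (the ages are invented): the intellectual level is the gap between chronological and mental age, while the familiar IQ quotient that came later, which was not Binet's own measure, divides mental age by chronological age.

    # Invented example of the scoring described above.
    chronological_age = 9.0
    mental_age = 7.0  # age level of the last tasks the child completed

    # Binet's measure as described above: the gap between chronological and mental age.
    intellectual_level = chronological_age - mental_age
    print(intellectual_level)  # 2.0 years behind; such a child would be flagged as needing help

    # The later, conventional IQ quotient (not Binet's measure): mental over chronological age.
    iq = 100 * mental_age / chronological_age
    print(round(iq))  # about 78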
What Binet feared most about an IQ number was its negative uses in society. He thought that it could be used as an indelible label rather than a tool to identify the needs of the child. Therefore, Binet declined to label IQ as inborn intelligence and refused to regard it as a device for ranking individuals based on their mental capacity. Binet’s statement on the issue was, “Our purpose is to be able to measure the intellectual capacity of a child who is brought to us in order to know whether he is normal or retarded. We should therefore study his condition at the time and that only. We have nothing to do with his past history or with his future… we shall make no attempt to distinguish between acquired and congenital idiocy… and we leave unanswered the question of whether this retardation is curable, or even improvable (182).” It is clear that Binet was an antihereditarian. The major difference between hereditarians and antihereditarians is a matter of social policy and educational practice. Hereditarians believed that measures of intelligence mark inborn limits and that children should be sorted and trained according to their inheritance and channeled into a profession. Antihereditarians believed that so-called “slow” children could increase their knowledge through special classes. Intelligence, in any meaningful sense of the word, can be augmented by good education; it is not a fixed and inborn quantity (184).

The dismantling of Binet’s intentions in America

Binet insisted upon three cardinal principles for using his test.
1. The scores are a practical device. They do not measure intelligence or any other reified entity.
2. The scale is rough and used as a guide for identifying learning disabilities; not a device for ranking normal children.
3. Emphasis on improvement through special classes, and low scores shall not mark the child as innately incapable.

However, his cardinal principles were overturned by American hereditarians who translated his scale into written form as a routine device for testing all children. The misuse of his test came from two fallacies, reification and hereditarianism, both eager to use his test to maintain social ranks and distinctions. This chapter focuses only on the hereditarian theory and analyzes the major works of the three pioneers of hereditarianism in America: H.H. Goddard, L.M. Terman, and R.M. Yerkes (reification will be discussed in the next chapter). The hereditarian theory simply states “that inherited IQ scores marked people and groups for an inevitable station in life. And they assumed the average differences between groups were largely the products of heredity, despite manifest and profound variation in quality of life (187).”

Binet argued his work to be antihereditarian, but why was it so easy for the hereditarians to overturn his test and cardinal principles, and turn it into a device for testing all children?

H.H. Goddard and the menace of the feeble-minded
Intelligence as a Mendelian Gene

H.H. Goddard was the first to popularize the Binet scale in America. He translated Binet’s articles into English, applied his ideas to the test, and argued for their general use. He agreed with Binet that the tests worked best to identify people just below the normal range, for whom Goddard coined the name “morons.” He used a unilinear scale of mental deficiency to treat intelligence as a single entity, assuming that intelligence was inborn and inherited through family lines. He stated that “the intelligence controls the emotions and the emotions are controlled in proportion to the degree of intelligence (190).” The rediscovery of Mendel’s work helped support Goddard’s idea that intelligence was a single entity. Mendel’s peas enabled the eugenicists to believe that even the most complex parts of the body might be built by a single gene. A single gene for normal intelligence helped support Goddard’s notion of a unilinear scale that marked intelligence as a single measurable entity. “For, Goddard had broken his scale into two sections at just the right place: morons carried a double dose of the bad recessive; dull laborers had at least one good copy of the normal gene and could be set before their machines. Moreover, the scourge of feeble-mindedness might be eliminated by schemes of breeding easily planned. One gene can be traced, located, and bred out (193).” As for morons, Goddard did not oppose sterilization, but he was more interested in housing them in exemplary institutions where their reproduction could be curtailed.
Having identified a single gene for feeble-mindedness, it seemed simple enough to Goddard not to allow morons to breed and to keep out foreigners who were also morons. He raised enough funding to conduct a study at Ellis Island, administering his version of the Binet scale to immigrants. The evaluation of the data suggested that between 40 and 50 percent of the immigrants were feeble-minded. However, there were several problems with administering the Binet test to immigrants; one was that most immigrants were poor and had never gone to school, let alone held a pencil in their hands. By 1928, Goddard had changed his mind about his work, agreeing that he had set the upper limit of moronity too high and that most if not all morons could be trained and lead useful lives, but he still retained his belief in inherited mentality.

Why do you think Goddard changed his mind? Did it have to do with political or social pressure or did he really feel he was wrong about his theory?

Lewis M. Terman and the mass marketing of innate IQ
Mass testing and the Stanford-Binet

Goddard introduced Binet’s scale to America, but Terman was the first to publish a revision of the test. Terman’s 1916 revision of the 1911 Binet test extended the scale to include “superior adults” and increased the number of tasks from fifty-four to ninety. Terman, then a professor at Stanford University, gave his revision a name that has become part of our vocabulary: the Stanford-Binet test. His test stressed conformity with expectation and downgraded original responses. For example, one item described an Indian who, seeing a white man ride by, remarked that the white man “walks sitting down,” and asked what the white man was riding. The only correct answer was “bicycle”; any other, more original answer was downgraded.
Binet’s test was meant to be administered by a trained professional working with one child at a time and could not be used for general ranking. Terman wanted to test everyone, in hopes of establishing a gradation of innate ability that could sort all children into their proper stations in life: “What pupil should be tested? The answer is, all. If only selected children are tested, many of the cases most in need of adjustment will be overlooked. Some of the biggest surprises are encountered in testing those who have been looked upon as close to average in ability. Universal testing is fully warranted (206-207).” The Stanford-Binet remained a test for individuals, but it became the model for all the written versions that followed. Terman adjusted the scale so that average children would score 100 at each age and established a standard deviation of 15 points. His test became the standard of judgment and approval for all the mass-marketed written tests that followed: the argument was that the Stanford-Binet measured intelligence; therefore, any test that correlated with the Stanford-Binet also measured intelligence.
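A minimal sketch of that rescaling (the raw scores are invented): within each age group, raw scores are standardized and then mapped onto a scale whose mean is 100 and whose standard deviation is 15.

    import numpy as np

    # Invented raw scores for children of one age group.
    raw = np.array([31, 44, 38, 52, 47, 40, 35, 49])

    # Standardize, then map onto Terman's scale: mean 100, standard deviation 15.
    z = (raw - raw.mean()) / raw.std()
    iq = 100 + 15 * z

    print(iq.round())  # the group's average maps to 100 by construction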
Terman agreed with Binet that the test was needed to identify children with learning disabilities, but he differed in what he wished to do for those children. Terman states, “… in the near future intelligence tests will bring tens of thousands of these high-grade defectives under the surveillance and protection of society. This will ultimately result in curtailing the reproduction of feeble-mindedness and in the elimination of an enormous amount of crime, pauperism, and industrial inefficiency (209).” His plea for mass testing combined the removal of the feeble-minded and the criminal from society with the placement of everyone else into their respective professions: an IQ of 75 or lower marked the realm of unskilled labor, 75 to 85 semi-skilled labor, and above 85 more specific judgments could be made. This, however, is merely the establishment of ranks, and Terman took the hereditarian stance and proclaimed class and race distinctions. Terman used several lines of reasoning to support hereditarianism. One example is the IQ test he administered to twenty orphans, whose low scores, he claimed, must reflect the biology of the children. He moves easily from individuals to social classes to races, asking whether we even need to prove what common sense already tells us: “… does not common observation teach us that, in the main, native qualities of intellect and character, rather than chance, determine the social class to which a family belongs? From what is already known about heredity, should we not naturally expect to find the children of well-to-do, cultured, and successful parents better endowed than the children who have been reared in slums of poverty (221)?” In the end, just like Goddard, Terman recanted his ideas to a degree, for in later works he offered a few words of caution about heredity and stated that we do not know how to partition the average differences between genetic and environmental influences.

Do you think Terman’s explanation of class and race distinctions is biased? If so, are there any better explanations to support his claim scientifically, or was he already hinting toward past theories of craniometry, recapitulation, and criminal anthropology, along with his own work, to support the distinction?

R.M. Yerkes and the Army Mental Tests: IQ comes of age
Psychology’s great leap forward

“We must learn to measure skillfully every form and aspect of behavior which has psychological and sociological significance (223).” This quote by R.M. Yerkes states his goal of turning psychology into a rigorous science, and he proclaimed that mental testing was the way to get there. Yerkes obtained his chance to compile enough data to support his hereditarian theory of IQ with help from the army: after the outbreak of World War I, the military allowed Yerkes and his colleagues to administer IQ tests to 1.75 million new recruits.
Yerkes assembled the great hereditarians of America, including Terman and Goddard, to create three types of IQ test for the army. Literate recruits took the written Army Alpha test; illiterate recruits and soldiers who failed the Alpha took the pictorial Beta test; and individuals who failed the Beta were supposed to be called in separately for an individual Binet-scale examination. These tests could now rank and stream everybody; thus, the era of mass testing had begun.
The results of the testing asserted three major “facts.” The first was that the average mental age of white American adults was 13, not the standard of 16 suggested by Terman. The second was that European immigrants could be graded by their country of origin, with southern and eastern Europeans ranked below the “Nordic” Europeans. The third was that Negroes stood at the bottom of the scale, with an average mental age of 10.41. Yerkes maintained that all the tests had been constructed to measure innate intelligence.
However, several problems arose in administering the tests. Procedures varied so much from camp to camp that the results could scarcely be organized and compared; anxiety ran high; conditions for seeing and hearing were poor; many recruits had no experience taking tests; some took the wrong test; and many who failed were never called back to retake a test or to receive an individual Binet-scale examination. Other errors introduced a systematic bias that lowered the mean scores of blacks and immigrants, along with a bias toward interpreting zero scores as evidence of innate stupidity, when many of those zeros simply meant that the men had not understood the instructions and should have been ruled invalid.
A further problem is that Yerkes’s data are a treasure-trove for anyone seeking environmental correlates of performance on “tests of intelligence” (247). Yerkes found a relationship between test scores and amount of schooling. Additionally, he found that scores for foreign-born recruits rose consistently with years of residency in America. Yerkes’s army tests measured education and familiarity with American culture, not innate intelligence; how many people actually know who Christy Mathewson is? Again and again his results pointed to a correlation with environment, but Yerkes held his ground in hope of a hereditarian salvation. C.C. Brigham, a disciple of Yerkes, translated the army testing into support for hereditarianism, and Brigham’s article became a source used by all propagandists favoring restrictions on immigration and eugenic regulation of reproduction. In the end, Brigham recanted his published work and stated that the army data were worthless as a measure of innate intelligence, but the damage was done. His work, along with that of the other hereditarians, slowed immigration from southern and eastern Europe. By the outbreak of World War II, more than six million southern, eastern, and central Europeans had been barred from America. “We know what happened to many who wished to leave but had nowhere to go. The paths to destruction are often indirect, but ideas can be agents as sure as guns and bombs (263).”

How much impact do you think the hereditarians had on the restriction of immigration into America? Do you think there were other important factors influencing the Restriction Act of 1924, and if so, what were they?

How much influence do you think Yerkes’s army IQ tests had on the rest of the nation in terms of mass-produced written tests? Why do you think the military regarded his tests as useless and a waste of time?

Do you think hereditarians had the better argument for ranking class, gender, or race, or do you think the previous theories were better?

Recapitulation vs. Neoteny


Gould makes an explicit comparison between the two developmental theories on p. 148 of The Mismeasure. Here is the relevant quote...


"Recapitulation required that adult traits of ancestors develop more rapidly in descendants to become juvenile features - hence, traits of modern children are primitive characters of ancestral adults. But suppose that the reverse process occurs as it often does in evolution. Suppose that juvenile traits of ancestors develop so slowly in descendants that they become adult features. This phenomenon of retarded development is common in nature; it is called neoteny (literally, "holding on to youth"). Bolk argued that humans are essentially neotenous. He listed an impressive set of features shared by adult humans and fetal or juvenile apes, but lost in adult apes: vaulted cranium and large brain in relation to body size; small face; hair confined largely to head, armpits and pubic regions; unrotated big toe. I have already discussed one of the most important signs of human neoteny in another context (pp. 132-135): retention of the foramen magnum in its fetal position, under the skull"


So Gould is clearly a supporter of the significance of neoteny in human evolution (see his Ontogeny and Phylogeny (Gould, 1977) for his most complete discussion of this issue).


He then goes on to spell out the meaning of these two theories for ranking of human races.


"Now, consider the implications of neoteny for the ranking of human groups. Under recapitulation, adults of inferior races are like children of superior races. But neoteny reverses the argument. In the context of neoteny, it is "good" - that is, advanced or superior - to retain the traits of childhood, to develop more slowly. Thus, superior groups retain their childlike characters as adults, while inferior groups pass through the higher phase of childhood and then degenerate toward apishness. Now consider the conventional prejudice of white scientists: whites are superior, blacks inferior,. Under recapitulation, black adults sould be like white children. But under neoteny, white adults shold be like black children."


Which chimpanzee looks more advanced (i.e., more similar to humans), the juvenile or the adult, and which theory does this support?




Monday, October 19, 2009

Measuring Bodies: Two Case Studies on the Apishness of Undesirables

“Evolutionary theory transformed human thought during the nineteenth century. Nearly every question in the life sciences was reformulated in its light (Gould, 142).” The preceding chapters examined how data on brain size were exploited to support distinctions among race, class, and gender. Chapter 4 discusses two more direct arguments derived from the theory of evolution. The first is recapitulation, also known as “ontogeny recapitulates phylogeny,” a general evolutionary defense for the ranking of groups. The second is a more precise evolutionary hypothesis, Lombroso’s criminal anthropology, which posits a relationship between biological nature and human criminal behavior. Both theories used apish morphology to help determine which groups of humans were superior and inferior.

The ape in all of us: recapitulation

Once evolution was established, nineteenth-century naturalists began the process of reconstructing the “tree of life,” but the fossil record was extremely imperfect, and major trunks, branches, and limbs were lost forever. The only solution was to find an indirect way of understanding the evolutionary track, and the answer came from the great German zoologist Ernst Haeckel. Haeckel suggested that the path of evolution could be traced directly through the embryological development of higher forms. His proclamation that “ontogeny recapitulates phylogeny” meant that an individual, through its own growth, passes through stages representing adult ancestral forms. Recapitulation became one of the most influential theories of the late nineteenth century, dominating and influencing several fields of science, including embryology, comparative morphology, and paleontology, and it was regarded as the key to understanding the evolutionary track. Recapitulation also branched out from biology into other areas of interest; Freud, for example, used it to explain his Oedipus complex.

Recapitulation became an anatomical theory for ranking humans based on the entire body instead of brain size alone. It held that adults of an inferior group must be like children of a superior group, for the children of a superior group represented a primitive adult ancestor. The theory thus claimed that adult blacks and women were like white male children, living representations of an ancestral stage of white males (Gould, 144). This served as a general theory of biological determinism, able to rank “inferior” groups by race, sex, and class. Building on recapitulation, E.D. Cope, an influential American paleontologist, used anthropometry, particularly craniometry, to support the ranking of races, and once again apish features attributed to the “inferior” races played a dominant role. The recapitulationists not only used anatomy as an argument for ranking races but also extended the theory to psychic development, claiming that savages and women were emotionally like the children of white males. They also compared prehistoric art to drawings by civilized children and by “primitive” adults. Herbert Spencer went so far as to state that “the intellectual traits of the uncivilized… are traits of the recurring children of the civilized (Gould, 146).” Some recapitulationists took even greater leaps; G. Stanley Hall, for example, related the higher suicide rates of women to a primitive evolutionary trait (further explanation found on pg. 147).

By 1920, however, the theory of recapitulation had collapsed and was replaced by neoteny, which provided a new way of ranking human groups. Neoteny is the reverse of recapitulation, based on retarded development: it holds that it is superior to retain the traits of childhood, because it is better to develop slowly. Louis Bolk, who proposed the theory, created a list of features shared by adult humans and fetal or juvenile apes but lost in adult apes: vaulted cranium, large brain in relation to body size, small face, hair confined largely to the head, armpits, and pubic regions, and an unrotated big toe. The biggest problem with this new theory was that it suggested white male adults were the inferior group.
For nearly seventy years scientists had been under the impression that recapitulation had amassed data supporting the superiority of the white male; under neoteny, however, the same data implied the inferiority of the upper-class white male relative to lower-class whites, women, and black adults. An old argument never dies, and several scientists tried to wiggle around this dilemma. Havelock Ellis suggested that urban males were developing womanly anatomy and proclaimed this a mark of the superiority of urban life, while Bolk proclaimed that the white race was the most progressive in the sense of being the most retarded in development. Unfortunately, Bolk overlooked all the features from recapitulation that placed white males far from the condition of a child. He also overlooked evidence that Orientals are, by his own criteria, even more retarded in development, suggesting instead that the difference between Orientals and white males was too close to call. Lastly, H.J. Eysenck created a neotenic argument for black inferiority, using three “facts” to forge his story: 1. Black babies and young black children exhibit more rapid sensorimotor development than white babies, suggesting that black children develop more quickly away from the fetal state. 2. Average white IQ surpasses black IQ by age three. 3. There is a slight negative correlation between sensorimotor development in the first year of life and later IQ. Eysenck was arguing that children who develop more rapidly will have a lower IQ later in life. His theory does not hold, however, for his argument is based on noncausal correlations, and he is clearly showing his hereditarian bias.

Why do you think American paleontologists agreed with E. D. Cope in proclaiming the inferiority of southern Europeans relative to northern Europeans?

In reference to the neoteny theory, why was it more difficult to prove that white adult males had more childlike traits? Can you think of any other arguments to support white adult males as the superior group?

The ape in some of us: criminal anthropology

This section concerns Cesare Lombroso’s theory of l’uomo delinquente, the criminal man, which posits anatomical differences between criminals and normal men. It wasn’t until he examined the skull of Villella (the Italian version of Jack the Ripper) that Lombroso felt he had found support for his theory: “For he saw in the skull a series of atavistic features recalling an apish past rather than a human present (Gould, 153).” Lombroso’s evolutionary theory was based on anthropometric data and stated that criminals are evolutionary throwbacks. Their atavism is both physical and mental, but it is the anatomical signs of apishness that are decisive. “We are governed by silent laws which never cease to operate and which rule society with more authority than the laws inscribed on our statute books. Crime… appears to be a natural phenomenon” (Lombroso, 1887, pg. 157). For Lombroso to prove that men with apish features were born criminals, he needed to show that inferior animals displayed criminal behaviors, and he compiled one of the most ludicrous excursions into the analysis of criminal behavior in animals. He cited such examples as an ant driven by rage to kill and dismember an aphid; an adulterous stork who, with her lover, murdered her husband; and a criminal association of beavers who ganged up to murder a solitary compatriot.

After “proving” criminal behavior in animals, his next step was a comparison of criminals to inferior groups. Lombroso used the Dinka of the Upper Nile to identify criminality within “inferior” people; the features used to display the criminality of the Dinka included heavy tattooing, a high threshold of pain, the breaking of their incisors, and the display of apish stigmata as normal parts of their bodies. Lombroso’s comparison of atavistic criminals with animals, savages, and people of “lower” races resembles the basic argument of recapitulation. One of Lombroso’s major flaws, however, lay in the apish stigmata he used: greater skull thickness, large jaws, relatively long arms, precocious wrinkles, absence of baldness, darker skin, diminished sensitivity to pain, and even the claim that the feet of prostitutes are often prehensile as in apes. The problem with this anatomical stigmata is that he was citing extreme values on a normal human curve and treating them as approaches to the average measures of the same traits in apes. Just like Eysenck’s, his argument for criminal features in inferior races did not hold true. This flaw did not stop him, for he included other factors in his stigmata to prove that criminality was found in inferior races: when comparing canine teeth and flattened palates he turned to the anatomy of lemurs and rodents, and his stigmata even included social factors such as criminals having their own language, tattoos on the body, and the inability to blush. Eventually his theory came under heated debate. He did back down from his theory of atavism, but not for one moment did he dispute his idea that crime is biological.

The recapitulation theory failed because of the neoteny theory. What similarities does Lombroso’s theory have with recapitulation, and why does his theory hold so much influence in criminal anthropology even though it is flawed?

“Evolutionary theory transformed human thought during the nineteenth century. Nearly every question in the life sciences was reformulated in its light (Gould, 142).” Do you think the theories of recapitulation and criminal anthropology were a direct outgrowth of evolution, or do you think other factors contributed to the development of these two theories?

The influence of criminal anthropology

Criminal anthropology remained a subject of discussion in legal and penal circles for years, and Lombrosian anthropology had a primary influence on how crime was understood. Lombroso said to study not the criminal’s upbringing, education, or what inspired his crime, but the criminal himself in his natural place. His theory became a prescreening of criminals that served as a primary judgment in many criminal trials. It also invoked the argument that the punishment should fit the criminal, not the crime. This was widely adopted in the United States; our modern apparatus of parole, early release, and indeterminate sentencing stems from Lombroso’s theory. Another influence stemming from Lombroso’s theory is the impulse to sequester the dangerous: for Lombroso this meant the criminal with his apish stigmata, but today “the dangerous” often means the defiant, the poor, and the black.

Now we live in a more subtle century, but the old arguments never die. Instead of cranial measurements, it is the complexity of intelligence testing. Lombroso’s apish stigmata are no longer invoked, but attempts to link genes and brain structure to criminal behavior continue. One example is XYY, a chromosomal anomaly in which the extra Y chromosome was linked to male aggression and thought to lead to an increase in criminal behavior.

What might be other causes of innate criminal behavior that scientists are looking at?

Tuesday, October 13, 2009

Measuring Heads: Paul Broca and the Heyday of Craniology

Gould starts the chapter by stating, “Evolutionary theory swept away the creationist rug that had supported the intense debate between monogenists and polygenists, but it satisfied both sides by presenting an even better rationale for their shared racism” (pp 105). This statement bridges the gap nicely between the ranking of intelligence in a hierarchical and unilinear mode before evolutionary theory was introduced and after. It suggests that the theory of evolution made it possible for scientists to produce an innate, ordered hierarchy of species, with, of course, white males occupying the uppermost position. In this way evolutionary theory provided the groundwork for many hierarchical structures of the human species.
Next it is important to understand Galton’s idea that everything can be quantified. Galton dabbled in attempts to quantify the efficacy of prayer, beauty, boredom, and the inheritance of intelligence. As Gould so eloquently put it, “Quantification was Galton’s God” (pp 108). The idea that anything can be quantified and abstracted, as long as the right kinds of measurements are taken, became one of the basic contributions to the abstract study of human intelligence.
Taken together, evolutionary theory and the quantification of abstract entities would imply that you could in fact quantify intelligence and arrange it into a hierarchical schema that reflects higher beings and their subordinates. In this way Gould lays out the foundation for the “purely” scientific study of human intelligence. However, returning again to the social context within which science is practiced, science can be used “not to generate new theories, but to illustrate a priori conclusions” (pp 106). In other words, science will still be used to reproduce the hierarchical social structure of a society, because the researcher is still looking at how the data fit into HIS own preconceived notions about the way the world works.
Gould then utilizes two examples, Robert Bennett Bean and Paul Broca, to illustrate this point.
Bean studied the relationship between race and the size of the genu and splenium, the front and back portions of the corpus callosum. He operated under the assumption that intelligence was seated in the front of the brain, and therefore that a larger frontal portion would reflect a higher degree of intelligence. Much to his surprise (insert sarcastic tone), his measurements showed that the white race was more intelligent than the black race, and Bean went on to describe how this was reflected in the general mannerisms of the black race: they are affectionate, emotional, etc. However, Franklin P. Mall suspected that this was incorrect and conducted the same study with one key difference: he did not know the race of the specimens until after the measurements had been taken. Mall’s conclusion was that the brains of whites and blacks were in fact the same. What can be learned from this example is that, as Gould stated earlier, “objectivity must be operationally defined as fair treatment of data, not absence of preference” (pp 36). This case shows that even though Mall was a product of his time, and no doubt held a certain degree of racism, treating the data fairly allowed the true patterns to emerge. By this time, however, it was too late, because Bean had published his findings in popular journals, and they had become just another “purely” scientific statement about the inferiority of blacks in comparison to whites. This again reflects the social consequences of science and the need for scientists to truly treat data fairly; otherwise false claims can be made about the social hierarchy of society.
Paul Broca engaged in the study of brain mass; notably, Broca was also actively engaged in criticizing his peers for not remaining objective. Broca appears to have been a master of explaining away specimens that did not fit his expected pattern, and again this can be seen as fitting the data to the theory rather than using the data to formulate a new theory. Broca was selective in the attributes and specimens he chose to study; whenever one did not show what he expected, he disregarded it and started anew, or explained it away, as in the cases of the Germans, large-brained criminals, and small-brained “men of eminence.” By giving scientific explanations for such anomalies, Broca was able to solidify his position within science and “prove” that any and all patterns fit nicely into his theory. The issues that arise here are much the same as in the Bean scenario: if the argument appears scientific, then it must be, and society will take it at face value.
The danger of operating under these pretenses is that the researcher may cause harm to the subjects being studied; this truly becomes a question of ethics. While it is not deliberate in all cases (though in the Bean case it is suspected to be), we have to consider the real ramifications of the work scientists do. We must consider the goal of the researcher: whether it is truly a venture aimed at the acquisition of knowledge, or whether there is a socio-political agenda.
Questions:
Gould brings up the point that women, races (other than white), and lower class groups have all been lumped into a standard analogous group, all fitting the same criteria for occupying lower rungs on the ladder of social hierarchy.
In what ways has this been perpetuated within our society today?
What are the real social ramifications of science that operate under these pretenses?
Can we ever achieve a science that is rid of this set of biases against women, different races, or lower social classes?

American Polygeny and Craniometry before Darwin: Blacks and Indians as Separate, Inferior Species

Gould’s second chapter deals with the categorization of “Blacks and Indians as separate, inferior species,” while explaining that this idea was not one of widespread popularity, because it contradicted theology.

Gould makes a point of discussing prominent historical figures, such as Benjamin Franklin and Abraham Lincoln, to show that even figures considered humanists shared the racism inherent in their time. Franklin, for example, is cited as saying that his dream for America was a land free of the black race, and Lincoln, speaking of the physical differences between the white and black races and the impossibility of the two achieving political equality, stated, "I as much as any other man am in favor of having the superior position assigned to the white race." The point to take away from this section is that even people like Lincoln, considered in our time champions of racial equality, were products of their time and held a certain degree of racism.

Gould discusses the difference between monogenism, the view that humans are a single species with a single origin that degenerated to varying degrees from the ideal (Adam and Eve), with blacks held to be further from that perfection than the white race, and polygenism, the view that humans are derived from different sources and that this is why there are different races.

"The idea that higher creatures repeat the adult stages of lower animals during their own growth" seems to me to get to the core of these beliefs. Believing a statement like this implies a hierarchy meant to explain away any similarities between the races, making it seem more reasonable to focus on the differences between them.

Louis Agassiz:
A few of the beliefs and practices Agassiz upheld were key to the development of the theory of polygeny in America. He believed that species generally did not migrate far from the centers around which they were created, with few exceptions such as humans, and he was a splitter, meaning he divided organisms into separate species on the basis of minimal differences and thus failed to account for variation within a species. These matter because Agassiz's habit of splitting species on small differences would almost surely lead him to divide the human species into distinct species based on skin color or similar attributes. His belief about centers of creation would have him conclude that blacks were created in Africa, where they are found in their greatest numbers; yet if he could not separate Caucasians from other populations as distinct species, he could not account for the localities of certain races and the widespread presence of whites. This was in turn reflected in his belief that there were distinct species among humans, each better adapted to a different geographic region.

Samuel George Morton:
Morton measured the cranial capacity of skulls from different races, believing that more intelligent or superior races would possess larger brains. His hypothesis was that "a ranking of races could be established objectively by physical characteristics of the brain, particularly by its size." This returns to Gould's main focus in the book: to examine the faults in science conducted under the assumption that intelligence can be ranked in a linear, hierarchical fashion, which implies that intelligence is entirely a biological phenomenon. Gould reexamines Morton's data and discovers that he measured skulls inconsistently. One example is the way Gould suspects Morton packed seeds into the skulls: filling them loosely, with a few shakes, for races Morton believed to be inferior, and packing the seeds more tightly and pressing them in more forcibly for Caucasian skulls, in order to maximize the measured volume. Morton did recognize the inconsistency of using seeds, which is why he later switched to BB-sized lead shot. Morton also failed to account for the sex or body size of the individuals, and he cropped his data based on his personal beliefs about what the patterns should be. Gould does make the point that if Morton had believed what he was doing was wrong, he would have covered his tracks better. This suggests that Morton's actions reflected the cultural expectations surrounding his research more than his personal attitude toward other races.
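To make that last point about sex and body size concrete, here is a minimal sketch, with invented numbers rather than Morton's or Gould's actual measurements, of how failing to control for body size can bias a group's mean cranial capacity: a sample that happens to contain more small-bodied individuals will show a lower raw average even when the size-specific distributions of the two groups are identical.

# Hypothetical illustration: how sample composition (sex/body size) can bias
# group means when it is not controlled for. All numbers are invented for the
# sketch and do not come from Morton's or Gould's data.
import random

random.seed(1)

def simulate_group(n_large, n_small):
    """Draw cranial capacities (cc) for a group whose individuals differ
    only in body size, not in any 'group' effect."""
    large = [random.gauss(1450, 80) for _ in range(n_large)]  # larger-bodied individuals
    small = [random.gauss(1300, 80) for _ in range(n_small)]  # smaller-bodied individuals
    return large + small

def mean(xs):
    return sum(xs) / len(xs)

# Two samples with identical size-specific distributions, but different
# proportions of large- vs small-bodied skulls.
group_a = simulate_group(n_large=40, n_small=10)  # mostly large-bodied skulls
group_b = simulate_group(n_large=10, n_small=40)  # mostly small-bodied skulls

print(f"Raw mean, group A: {mean(group_a):.0f} cc")
print(f"Raw mean, group B: {mean(group_b):.0f} cc")
# The raw means differ (by roughly 90 cc in expectation) purely because of
# sample composition, which is the kind of confound Gould argues Morton
# failed to correct for.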

Returning again to the social context within which research is conducted, this example shows that while Morton was susceptible to cultural norms and beliefs, he may not have been aware that they were influencing the way his research was conducted. By showing that even activists of the day were entrenched in the same types of beliefs, I think Gould is showing that racism was more widespread through society than just a few outspoken individuals, and that racism can be present even when it is implicit and unrecognized by the person it affects, as with Morton.

Questions:
What examples can you think of where cultural norms have been imposed on research in other fields?

What social implications might research conducted under cultural standards, like Morton's, have (other than establishing a ranked hierarchy of race)?

In what ways do you think society has enabled biased research to take place (present and past)?

Sunday, October 11, 2009

Introduction to The Mismeasure of Man

Gould's book, The Mismeasure of Man, is a history of the practice of biological determinism, focusing on claims of a naturally inherent intellectual hierarchy.

Two major themes that I think are important in The Mismeasure of Man are:
1: Research is always conducted within a social context.

2: Making the case against "the argument that intelligence can be meaningfully abstracted as a single number capable of ranking all people on a linear scale and unalterable worth." Gould also makes a point of showing that biological determinism is alive and well in modern America, whether we acknowledge it or not.

The first theme I find important to mention is that Gould acknowledges that all science is practiced within some form of social framework. Gould even states that "science must be understood as a social phenomenon" (p.53). If we accept that science is an inherently social activity, then we must also identify the social context within which any given piece of research is situated. Gould gives many examples of this, such as the brain mass calculations suggesting that European males were the superior specimens in all cases. This example is socially situated within Western imperialism and, more specifically in the case of Native Americans, within the westward expansion and settlement of the United States. In these scenarios science is used to justify Western cultural dominance. The circumstances under which research is conducted will ultimately affect the way the results are used as well.

Gould suggests that a researcher's background will always influence the types of questions asked, and that the researcher may even have a preference for certain results; nevertheless, in science "objectivity must be operationally defined as fair treatment of data" (p.36). What implications does this have for research, and do you think this is a justifiable way of looking at the question of subjectivity versus objectivity?

Gould states that "you have to sneak up on generalities not assault them head-on" (p.20). While the first point seems to be the underlying emphasis of the book, this statement speaks directly to one of the reasons for presenting the case against the hierarchical ranking of intelligence based on biology. By focusing on biological determinism, Gould is essentially moving from the specific to the general in order to address the issue of socially situated research.

The second theme I would like to discuss is Gould's main focus in the book: making the case against "the argument that intelligence can be meaningfully abstracted as a single number capable of ranking all people on a linear scale and unalterable worth." Gould notes that he has divided his book into two sections, one set of case studies on past uses of biological determinism and a second set dealing with more modern times. The first set includes examples such as brain mass measurements, while the second includes a discussion of IQ tests. It is also interesting to note that not all science is used in its intended fashion; IQ tests, for example, were originally designed to identify children in need of additional assistance in school. Craniometry of Native Americans, by contrast, was used to justify westward expansion by hierarchically ordering peoples relative to one another, with European men turning out, unsurprisingly, on top. IQ tests and the bell curve are used to examine intelligence in such a way as to literally assign a number to it, and the tests essentially privilege a Western European knowledge base. In this way IQ tests may not explicitly express racial prejudice or a class structure inherent in the Western world's valuation of certain kinds of knowledge, but it is apparent that one is present.
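For a sense of what "assigning a number" involves mechanically: modern IQ scales are conventionally standardized so that the norming population has a mean of 100 and a standard deviation of 15, which means the score is always relative to whichever population (and knowledge base) was used for norming. A minimal sketch of that conversion, with hypothetical norming values:

# Converting a raw test score to the conventional IQ scale (mean 100, SD 15).
# The raw-score mean and SD below are hypothetical norming values.
def to_iq(raw_score, norm_mean, norm_sd):
    z = (raw_score - norm_mean) / norm_sd  # standardize against the norming sample
    return 100 + 15 * z                    # rescale to the IQ convention

print(to_iq(raw_score=62, norm_mean=50, norm_sd=10))  # -> 118.0

The point the conversion makes visible is that the resulting number says how far a person sits from the norming sample's average, so whatever knowledge that sample privileges is built into every score.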

While IQ tests are just one form of standardized test that produces a score meant to discern relative intelligence, the GRE and SAT are two others. Can you think of any other tests that function on the same basis?
Do you think that privileging standardized test scores is justified, and why do you think some institutions are moving away from using these scores (WMU's Anthropology application does not require the GRE, for example, while other graduate programs do)?

In these examples Gould has placed biological determinism within its larger social context and has discussed how social context can affect the way intelligence is abstracted so that it can be ordered linearly and hierarchically, ultimately ending with Western European upper-class dominance.

There seems to be a Marxist undertone to the way Gould discusses these issues. It appears to me that Gould is equating scientific research with the means of production (in this case the means of producing knowledge), and in Marxist theory those who control the means of production also have the means to reproduce their own social class. In other words, biological determinism functions ideologically to reproduce social relations (ultimately justifying them) and therefore to reproduce capitalist society as a whole. Even the use of a scalar analysis, moving from a specific set of phenomena to explain a larger pattern, is reminiscent of Marxist theory. Any comments on that?

Wednesday, October 7, 2009

Human Evolution

"Light will be thrown on the origin of man and his history." This statement, near the end of the Origin, has been taken by Jones and expanded with updated evidence that, as we now know, legitimizes evolution for humans as well: man also "follows the rules that govern the whale and the AIDS virus" (310). We too have had to adapt to our environments given the genes we inherited. Over time this is apparent in the switch from hunter-gatherer subsistence to agriculture. Once thought to be a sign of progress, farming is now claimed by many anthropologists to be the worst mistake in human history. Marxist theory aside, it has taken a toll on our bodies, our social order, and the environment we create for ourselves. Humans have had to evolve to eat different foods and live within larger societies, while still maintaining enough genetic diversity to move on and survive. How do large community groups affect human populations and their genes (greater mating pool, exposure to disease, etc.)?

The study of disease among human populations is evidence enough of where our history lies. Jones uses the example of sickle-cell anemia as an early adaptive defense against malaria that still haunts millions of people today. Diseases afflicting humans must be fought off, and the body has evolved genes for this that persist today. However, other hazards (tobacco, pollution, radio waves) have emerged only recently. Contemporary populations will show whether we can adapt to the cancers that plague us as a result. What is cancer, how does it relate to the body, and how will evolution play a role in the future?

There are about a hundred thousand genes to make a human. Sixty percent of us in industrialized countries will die as a result of the genes we inherited. There are some 4,000 diseases caused by single-gene defects alone, such as cystic fibrosis, muscular dystrophy, and hemophilia. For an autosomal recessive disorder, both copies of the gene must be mutated for a person to be affected (for an autosomal dominant disorder, a single mutated copy is enough). A person with a recessive disease usually has unaffected parents who each carry one copy of the mutated gene. Two unaffected carriers have a 25% chance with each pregnancy of having a child affected by the disorder. Disorders can also be linked to genes on the X and Y chromosomes. Many disorders are difficult to study when genes are not the only factors involved. How might environment influence disease in conjunction with genetic inheritance?
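The 25% figure follows directly from a Punnett square for two carrier parents; here is a minimal sketch of that calculation:

# Offspring genotype probabilities for two carriers (Aa x Aa) of an
# autosomal recessive disorder. 'a' is the mutated allele.
from itertools import product
from collections import Counter

parent1 = ("A", "a")  # unaffected carrier
parent2 = ("A", "a")  # unaffected carrier

# Each parent passes one allele at random; enumerate all equally likely combinations.
offspring = Counter("".join(sorted(pair)) for pair in product(parent1, parent2))
total = sum(offspring.values())

for genotype, count in sorted(offspring.items()):
    print(f"{genotype}: {count / total:.0%}")
# AA: 25%  (unaffected, non-carrier)
# Aa: 50%  (unaffected carrier)
# aa: 25%  (affected) -- the 25% chance per pregnancy mentioned above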

Though I am not a fan of using Wikipedia as a reference, here is a long list of genetic disorders with the causal type of mutation and the chromosome involved:
http://en.wikipedia.org/wiki/List_of_genetic_disorders

Genetic epidemiology source complete with Power Point slides:
http://www.dorak.info/epi/genetepi.html


Human evolution can be looked at using concepts from throughout Jones's book: geography, migration patterns, geology, morphology, etc. What is most unique, however, is the evolution of our brain. Why we started walking upright and making conscious decisions is yet to be determined, but it is the brain that sets us apart from other organisms, and it is why we can sit here and think ourselves somehow special among animals. With more and more hominid finds, we are forced to reevaluate how our family tree divides. Researchers now know that human evolution did not follow a single linear line of progression but instead took many paths. As the only surviving species of the genus Homo, what does that say about our ability to survive, and will that change in the future? How do our concepts of history change with each new find? How have studies changed in relation to finds like "Lucy"? Without these "missing links," how could we explain human evolution in relation to the rest of the animal kingdom? And how much of us is governed by our genes, and how much by culture?


BBC articles:
http://www.bbc.co.uk/sn/prehistoric_life/human/human_evolution/index.shtml
Illustrations of evolution:
http://www.evolution-textbook.org/content/free/figures/ch25.html
Smithsonian National Museum of Natural History:
http://anthropology.si.edu/humanorigins/faq/encarta/encarta.htm

Monday, October 5, 2009

What is Taphonomy?

The fossil record is rich in biological and ecological information (Behrensmeyer, 103), but very few organisms are preserved as fossils, leaving the record uneven and incomplete. Natural processes shape the fossil record by limiting the sample available to us even before our research begins. Taphonomy seeks to understand these natural processes so that information can be collected and evaluated correctly.
Taphonomy can be defined as the study of what happens to an organism's remains after death and before burial, as they pass from the biosphere into the lithosphere, or, as Behrensmeyer and Kidwell characterize it more generally, "the study of processes of preservation and how they affect information in the fossil record." "Today, taphonomy focuses more on a geobiological understanding of the earth, grounded on the postmortem process that recycle biological materials and affect our ability positively or negatively to reconstruct past environments and biotas" (Behrensmeyer, 104). Several factors and processes affect the preservation of skeletal remains: first, the amount and durability of the remains; second, the physical, chemical, and biological conditions at the pre-burial site, including air, water, and soil; third, the amount of time the remains are exposed on the surface and how quickly they are buried; fourth, the diagenetic conditions within the upper part of the sedimentary column, including microbial processes, physical reworking, and uneven biochemical conditions in the soil; and finally, the depth and location of the remains within the sedimentary column. Most of our longest-surviving fossils are found in stable cratonic interiors or margins and in continental rift basins, because these settings escape tectonic recycling. Destructive factors at these stages include bioerosion, scavenging, dissolution, abrasion, rounding, disarticulation, and weathering, all of which affect the condition of the remains. With all these factors and more, it might seem that reconstructing the past is impossible, but by combining an understanding of natural processes, more complete study of prehistoric materials, and ingenious experiments and observations, it is becoming possible to solve specific problems in the fossil record (Lewin, 95).
There are several ways taphonomy can aid our understanding and help resolve particular questions. For example, a taphonomist can examine the characteristics of a contemporary kill site, where animals have been killed, processed, and eaten by various predators, including humans. This allows the taphonomist to monitor how each type of predator consumes its prey, noting which bones are carried off or cracked open for marrow and how bones are scattered throughout the site. These data enable archaeologists to develop a profile of the characteristics of an ancient kill site (Boyd, 318).
Another example of how taphonomy can be used is to look at modern sites where skeletal remains are affected by sediments moving rapidly through water. These sediments leave a number of distinctive marks on the bones, which can be compared to those at ancient sites with the same sediment mixture. If the markings do not match, taphonomists can then investigate whether the distinctive marks were made by animals gnawing on the bones or by flaked-stone tools. Taphonomic data have also been applied to other fields such as paleobiology, paleoceanography, ichnology, and biostratigraphy.
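As a rough sketch of how such comparisons might be set up, one could score an excavated assemblage against reference skeletal-element profiles recorded at modern sites; the element categories and frequencies below are invented for illustration, not drawn from any real assemblage or from the sources cited here.

# Hypothetical sketch: compare an excavated bone assemblage against reference
# skeletal-element profiles recorded at modern sites for different agents.
# All frequencies are invented for illustration.

reference_profiles = {
    "hyena_den":      {"skull": 0.10, "long_bone": 0.20, "vertebra": 0.30, "foot": 0.40},
    "human_butchery": {"skull": 0.25, "long_bone": 0.45, "vertebra": 0.20, "foot": 0.10},
}

excavated = {"skull": 0.22, "long_bone": 0.48, "vertebra": 0.18, "foot": 0.12}

def profile_distance(a, b):
    """Sum of absolute differences across element categories (smaller = more similar)."""
    return sum(abs(a[k] - b[k]) for k in a)

for agent, profile in reference_profiles.items():
    print(f"{agent}: distance = {profile_distance(excavated, profile):.2f}")
# The smallest distance suggests which modern analogue the assemblage most resembles;
# real analyses would use many more element classes and formal statistics.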




Behrensmeyer, Anna K., Susan M. Kidwell, and Robert A. Gastaldo
2000 Taphonomy and Paleobiology. Paleobiology 26(4):103-147.

Boyd, Robert, and Joan B. Silk
2003 How Humans Evolved. New York: W. W. Norton and Company Inc.

Lewin, Roger, and Robert A. Foley
2004 Principles of Human Evolution. Malden, MA: Blackwell Publishing.

Shipman, Pat
1981 Life History of a Fossil: An Introduction to Taphonomy and Paleoecology. Harvard University Press.

Gradualism and Punctuated Equilibrium

I started researching this topic by skimming through my old (2003) textbook from the Human Evolution class I took as an undergraduate. The authors make no mention of these two terms in the text. They do acknowledge that although slow rates of change are typically what is observed in the fossil record, rapid evolutionary events most likely happened, and the lack of evidence is a result of an incomplete fossil record (Boyd and Silk 2003:22). The textbook for our seminar is a 2004 edition and dedicates several pages to gradualism and punctuated equilibrium. To me, this represents the variation in academia when it comes to challenging or accepting new concepts.


Gradualism may be defined as the process by which evolution accumulates small units of change at a steady rate over long periods of time. The gradual accumulation of new adaptations causes a genetic divergence of the daughter species from the parent, or ancestral, species (Lewin and Foley 2004:52). Gradualism is therefore characterized as a slow, consistent, and constant process in which change is cumulative within a species. Gradualism is a fundamental part of the Modern Synthesis (1942), the union of Darwinian natural selection with Mendelian genetics that resolved the differences remaining between strict Darwinism and other strands of evolutionary theory; the Modern Synthesis has three principal components, of which the first is gradualism. Although gradualism originated with James Hutton in 1795, it entered Charles Lyell's repertoire in the form of uniformitarianism and from there influenced Charles Darwin and his theory of evolution. It is important to point out that Darwin rejected the concept of saltation, which describes evolution as occurring in rapid jumps from one generation to the next.


Punctuated equilibrium essentially describes species as going through long static periods of relatively little change, punctuated by rapidly occurring modifications that result in speciation, after which the species returns to stasis. Lewin and Foley (2004:52) indicate that separation of a daughter species from the ancestral species may still occur under punctuated equilibrium, but that it occurs mainly through drift in small, isolated populations. Stephen Jay Gould and Niles Eldredge published their theory in 1972. Their work was influenced by Ernst Mayr, who was also a key contributor to the Modern Synthesis, and by Michael Lerner. One of the important differences between punctuated equilibrium and gradualism is how speciation is viewed: punctuated equilibrium views adaptation as a possible result of speciation, while gradualism views it as a cause of speciation. Gould and Eldredge's view resembles saltation only in the idea that change occurs rapidly; for them, adaptation occurs as a result of speciation but follows Darwinian fundamentals in doing so.
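A toy simulation can make the contrast visible; this is my own illustration with arbitrary parameters, not a model taken from Gould and Eldredge or from Lewin and Foley. Under gradualism the trait shifts by small amounts every generation, while under punctuated equilibrium it stays near stasis and then jumps at rare, speciation-like events.

# Toy contrast between gradual and punctuated change in a single trait value.
# Purely illustrative; all parameters are arbitrary.
import random

random.seed(0)
GENERATIONS = 1000

def gradual(trait=10.0):
    history = []
    for _ in range(GENERATIONS):
        trait += random.gauss(0, 0.01)     # small change every generation
        history.append(trait)
    return history

def punctuated(trait=10.0, punctuation_every=250):
    history = []
    for gen in range(GENERATIONS):
        if gen > 0 and gen % punctuation_every == 0:
            trait += random.gauss(0, 2.0)  # rare, rapid shift (speciation-like event)
        else:
            trait += random.gauss(0, 0.001)  # near-stasis otherwise
        history.append(trait)
    return history

g, p = gradual(), punctuated()
print(f"gradual:    start {g[0]:.2f} -> end {g[-1]:.2f}")
print(f"punctuated: start {p[0]:.2f} -> end {p[-1]:.2f}")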


Gradualism and punctuated equilibrium may represent two different gears of the same mechanism: gradualism the hour hand and punctuated equilibrium the minute hand, both enclosed in a mechanism designed (or evolved) to measure time.


Gould and Eldredge (1993) celebrated the acceptance of their concept of punctuated equilibrium into the realm of theory. Can both theories actually coexist if some of their principal components contradict each other?




References

Boyd, Robert, and Joan B. Silk
2003 How Humans Evolved. 3rd Edition. New York: W. W. Norton and Company Inc.

Gould, Stephen J., and Niles Eldredge
1993 Punctuated Equilibrium Comes of Age. Nature 366 (18 November 1993): 223-227.

Lewin, Roger, and Robert A. Foley
2004 Principles of Human Evolution. 2nd Edition. Victoria, Australia: Blackwell Publishing.



Links:

http://www.talkorigins.org/faqs/punc-eq.html
http://evolution.berkeley.edu/evosite/evo101/VIIA1bPunctuated.shtml
http://www.blackwellpublishing.com/ridley/a-z/Phyletic_gradualism.asp
http://www.creationdefense.org/76.htm
http://www.istheory.yorku.ca/punctuatedequilibriumtheory.htm
http://www.antievolution.org/people/wre/essays/pe104.html