How has intelligence testing changed throughout history?
The scientific study of human intelligence dates back well over 100 years. In that time there have been numerous schools of thought about how to measure intelligence. The core disagreement among researchers and theorists is whether intelligence is largely genetic or largely shaped by the environment – in other words, nature or nurture.
In the late 1800s, Englishman Sir Francis Galton (1822-1911) became one of the first people to study intelligence. He tried to measure the physical characteristics of noblemen, setting up a laboratory to assess their reaction times and other physical and sensory qualities.
Regarded as one of the fathers of modern-day intelligence research, Galton pioneered psychometric and statistical methods. Given the technology of the day, he wasn’t particularly successful at measuring biological parameters. But he did create testable hypotheses about intelligence that later researchers used.
The first IQ tests
It wasn’t until the turn of the 20th century that Frenchman Alfred Binet (1857-1911) developed the first test resembling a modern intelligence test. Binet designed a series of questions that children of different ages could typically answer correctly, aimed at identifying children who might have learning disabilities or need special help. His test rested on the assumption that intelligence develops with age but that one’s relative standing among peers remains largely stable.
The German psychologist William Stern (1871-1938) introduced the idea of the intelligence quotient, or IQ: mental age, as assessed by a test such as Binet’s, divided by chronological age and multiplied by 100.
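As a concrete illustration, Stern’s formula can be computed directly. A minimal sketch in Python (the function name and the sample ages are ours, purely for illustration):

    def ratio_iq(mental_age, chronological_age):
        """Stern's ratio IQ: mental age over chronological age, times 100."""
        return mental_age / chronological_age * 100

    # A 10-year-old performing at the level of a typical 12-year-old:
    print(ratio_iq(mental_age=12, chronological_age=10))  # 120.0

On this definition, a child whose test performance matches their chronological age scores exactly 100.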
Lewis Madison Terman (1877-1956), a cognitive psychology professor at Stanford University, redeveloped the Binet test for use in the United States. Terman updated the test in many ways, most significantly by creating a version that could be used with adults. And in the 1930s, another American psychologist, David Wechsler (1896-1981), further expanded the idea of assessing adult intelligence using written tests.
Modern-day Wechsler and Stanford-Binet tests have undergone considerable scientific developments over the last century. They represent a significant achievement in psychological testing and measure a wide range of cognitive processes – vocabulary, knowledge, arithmetic, immediate and long-term memory, spatial processing and reasoning – with considerable precision.
One controversy around these tests involved the eugenics movement, but that aspect of intelligence testing is beyond the scope of this introductory article.
Where intelligence comes from
Scores on these tests have been shown to predict a wide range of scholastic, academic and organisational outcomes. There are also other types of intelligence tests, which measure only non-verbal abilities.
The US military, for instance, used the Army Alpha and Beta tests to measure the intelligence of recruits, some of whom were illiterate. For those who couldn’t read or write, the non-verbal Beta test used a series of reasoning questions to assess differences in intelligence.
These types of tests were regarded by many as “culturally fair” – that is, they didn’t discriminate against people who had poor education or lower levels of reading and language ability. And some researchers and theorists argued they could be used “fairly” and “objectively” to assess a person’s true underlying intellectual capabilities.
Researchers have often identified a strong relationship between IQ test performance and educational achievement; scores from even an early age can predict academic achievement and scholastic performance in later years.
One reason IQ tests predict scholastic performance may be that they cover similar ground and were constructed for exactly this purpose. Since problem solving and reasoning are taught within education systems, longer and better education tends to improve both IQ and scholastic performance. Children who miss school often show deficits in IQ, and older children in the same class – who have had access to an extra year of education – often score significantly higher.
This has led many psychologists and teachers to question whether IQ tests are fair to certain groups. But others have argued that a third factor – socioeconomic status – is also at play here. It’s likely that more affluent parents spend more time with their developing children and have more resources to help them.
While this is a popular belief, research shows it’s not the whole story. When parental socioeconomic status is taken into account, IQ still predicts scholastic performance. But when IQ is controlled for, socioeconomic status only weakly predicts scholastic performance.
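The logic of “controlling for” a variable can be illustrated with a small simulation. This is a sketch only – the data below are randomly generated to mimic the pattern the research describes, and the effect sizes are made-up assumptions, not estimates from any real study:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000

    # Hypothetical world matching the finding above: SES nudges IQ a little,
    # while achievement depends mostly on IQ and only weakly on SES directly.
    ses = rng.normal(size=n)
    iq = 0.3 * ses + rng.normal(size=n)
    achievement = 0.6 * iq + 0.1 * ses + rng.normal(size=n)

    def partial_corr(x, y, control):
        """Correlation between x and y after regressing out the control variable."""
        rx = x - np.polyval(np.polyfit(control, x, 1), control)
        ry = y - np.polyval(np.polyfit(control, y, 1), control)
        return np.corrcoef(rx, ry)[0, 1]

    print(partial_corr(iq, achievement, ses))  # stays substantial (about 0.5)
    print(partial_corr(ses, achievement, iq))  # weak (about 0.1)

Regressing out the control variable and correlating the residuals is one standard way of “taking a variable into account”; fuller analyses would use multiple regression, but the logic is the same.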
All this suggests that while socioeconomic status is an important factor to consider in a child’s development, there are other reasons for the relationship between IQ and academic achievement.
Nature and nurture
Many researchers still argue that the cognitive abilities measured by IQ tests have a predominantly genetic basis. But there’s very little evidence to support this view, despite hundreds of millions of dollars spent on research to identify the genes responsible for intelligence and cognitive ability.
The argument has shifted over time from hoping to identify a small set of genes associated with intelligence to accepting that, if there is such a basis, thousands of genes each contribute a small amount of the variance in IQ scores.
Even if we could identify intelligence genes, the assumption that they work independently of the environment is incorrect. We know that genes get turned on and off depending on environmental cues and triggers.
Creating better environments at sensitive periods of development is likely to have profound effects on our intelligence. Some studies show, for instance, that nutritional interventions can improve cognitive performance, although there’s much work still to be done in this area.
IQ tests have had many detractors. Some have suggested that intelligence becomes whatever IQ tests measure. One of the first historians of psychology, Harvard professor Edwin Boring, for instance, famously said that “intelligence is what the tests test”.
Nevertheless, the construct of human intelligence is fundamental to the sort of society we live in; intelligence is central to new discoveries, to solving important problems, and to many other qualities we value. Numerous questions remain, not just about how to measure intelligence but also about how to improve it and prevent our cognitive abilities from declining as we get older.
This article is published in collaboration with The Conversation. Publication does not imply endorsement of views by the World Economic Forum.
Author: Con Stough is Professor and Co-Director of the Swinburne Centre for Human Psychopharmacology at Swinburne University of Technology.
Image: A student reads under the afternoon sun on the main campus. REUTERS/Mike Segar.