Wednesday, March 30, 2011

Media Reporting on International Science and Math Test Gives Wrong Impression

The media periodically hits us with damning reports on how poorly American students rank in international comparisons of science and math skills. You are given the impression that there is some global Super Bowl exam taking place where an accurate sample of our best students is pitted against those from other countries. The media is then filled with quotes from “experts” with vested interests blaming some facet of US life for the poor showing. Recent headlines shouted that “Chinese students were number one”; previous years had placed Finland first.


Sweeping conclusions such as these certainly deserve the next level of examination, especially since 258,950 foreign college science students were enrolled in the US in 2009, a number that has grown consistently.

They must know something that the test doesn’t show. What criteria were used to establish the number one position? How did other Western countries do? How were the students being tested selected? These questions matter if we are to understand the meaning of these rankings. To gain a better understanding we went to the website of the Programme for International Student Assessment (PISA), the organization responsible for the studies. The site explains the basics of the testing, with some comments.

The methodology is complex and is covered only at a very high level at the site. PISA provides an overview of how the tested population is selected and expresses concern that the results are being reported uncritically.

Each country submits 5,000 participants to take tests in reading, math and science in a two-hour written exam. The questions are said to test the use of a science education to solve problems rather than rote memorization of formulas. The completed tests are subjected to detailed and complicated analysis. Countries can also submit other national tests they have conducted for inclusion in the analysis. Only students in schools within a narrow band of ages around 15 are considered; home-schooled students are not eligible.

The mystery was solved when an analysis showed that the demographics of the area within a country from which the sample of students is taken can greatly influence the test results. The actual sampling methodology for the US is not explained on the site. To explain the surprisingly low results for the US, PISA made the following important statements.

“Critics say that low performance in the United States is closely related to American poverty. It's also shown that when adjusted for poverty, the richest areas in the US outperform every other country's average scores, especially areas with less than 10% poverty (and even areas with 10% to 25% poverty outperform countries with similar rates).”

PISA is also critical of those who selectively and uncritically exploit the results of the studies for their own special interests. They comment further:

“In essence, the criticism isn't so much directly against the Programme for International Student Assessment itself, but against people who use PISA data uncritically to justify measures such as Charter Schools.”


The powerful demographic effect occurs because high-poverty areas as a group have low average scores. If the US students are selected to be a representative sample of the entire country’s 15 year olds, low achievers from high-poverty areas will pull down the results of the high achievers. Conversely, if the US students are chosen from a low-poverty area, the resulting group will have a much higher score. Apparently the US contestants included significant representation from low-achieving areas, so the US results were not representative of the future pool of US engineers and scientists. To be dominant, a country only needs a portion of its population to be outstanding in science and engineering.
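The arithmetic behind this demographic effect can be sketched with a weighted average. The numbers below are invented purely for illustration, not actual PISA scores; the point is only that the national average is pulled toward whichever subgroups dominate the sample.

```python
# Hypothetical illustration of how sample composition shifts a national average.
# All scores and proportions are made up for this sketch, not real PISA data.

def weighted_mean(groups):
    """groups: list of (share_of_sample, mean_score) pairs; shares sum to 1."""
    return sum(share * score for share, score in groups)

# A representative national sample mixes low- and high-poverty areas.
representative = [(0.60, 540), (0.40, 440)]   # 40% of sample from high-poverty areas
# A sample drawn only from low-poverty areas.
low_poverty_only = [(1.00, 540)]

print(round(weighted_mean(representative), 1))    # 500.0
print(round(weighted_mean(low_poverty_only), 1))  # 540.0
```

With these invented figures, the same country scores 500 or 540 depending solely on which areas are sampled, which is why cross-country rankings are meaningless unless every country samples in the same way.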

Each country’s published results will be similarly influenced by the areas from which its participating students are drawn. This was apparently left up to each country. In 2009 the headlines shouted that China was now number one in science and math, but the students tested came only from Shanghai, a select group.

The studies as publicized by the media are not valid as a comparison between countries. Without consistency in the populations each country chooses to sample, no conclusions about competitive position can be drawn.

This is not to say that the studies are not informative. Studies of this type could be valuable for measuring the progress of a specific population over time. However, these specific studies have not yet been conducted long enough to reach such conclusions. Determining where we currently stand globally in educating the 15-year-old students who will eventually make up our pool of active mathematicians and scientists is a more complex problem. With the right selection of exam takers, we would likely be judged number one.

Our problems are not with our advanced education in science and mathematics; they are with the standards of our finance sector.
