DECEIT IN HISTORY
"Through
experimental science we have been able to learn all these facts about the
natural world, triumphing over darkness and ignorance to classify the stars and
to estimate their masses, composition, distances, and velocities; to classify
living species and to unravel their genetic relations. . . . These great accomplishments
of experimental science were achieved by men . . . [who] had in common only a
few things: they were honest and actually made the observations they recorded,
and they published the results of their work in a form permitting others to
duplicate the experiment or observation."
So says The Berkeley Physics Course, an
influential text that has been used across the United States to impress college
students with both the substance and the tradition of modern physics.1
As with nonscientific systems of belief, however, the elements insisted on most
strongly are often those with the least factual reliability. The great
scientists of the past were not all so honest and they did not always obtain
the experimental results they reported.
• Claudius
Ptolemy, known as "the greatest astronomer of antiquity," did most of
his observing not at night on the coast of Egypt but during the day, in the
great library at Alexandria, where he appropriated the work of a Greek astronomer
and proceeded to call it his own.
• Galileo
Galilei is often hailed as the founder of modern scientific method because of
his insistence that experiment, not the works of Aristotle, should be the
arbiter of truth. But colleagues of the seventeenth-century Italian physicist
had difficulty reproducing his results and doubted he did certain experiments.
• Isaac
Newton, the boy genius who formulated the laws of gravitation, relied in his
magnum opus on an unseemly fudge factor in order to make the predictive power
of his work seem much greater than it was.
• John
Dalton, the great nineteenth-century chemist who discovered the laws of
chemical combination and proved the existence of different types of atoms,
published elegant results that no present-day chemist has been able to repeat.
• Gregor
Mendel, the Austrian monk who founded the science of genetics, published papers
on his work with peas in which the statistics are too good to be true.
• The
American physicist Robert Millikan won the Nobel prize for being the first to
measure the electric charge of an electron. But Millikan extensively
misrepresented his work in order to make his experimental results seem more
convincing than was in fact the case.
Experimental science is founded on a
paradox. It purports to make objectively ascertainable fact the criterion of
truth. But what gives science its intellectual delight is not dull facts but
the ideas and theories that make sense of the facts. When textbooks appeal to
the primacy of fact, there is an element of rhetoric in the argument. Finding
facts in actuality is less rewarded than developing a theory or law that
explains the facts, and herein lies an enticement. In making sense out of the
unruly substance of nature, and in trying to get there first, a scientist is
sometimes tempted to play fast and loose with the facts in order to make a
theory look more compelling than it really is.
It is difficult for a nonscientist to
appreciate the overriding importance to the researcher of priority of
discovery. Credit in science goes only for originality, for being the first to
discover something. With rare exceptions, there are no rewards for being second.
Discovery without priority is a bitter fruit. In the clash of rival claims and
competing theories, a scientist often takes active measures to ensure that his
ideas are noticed, and that it is under his name that a new finding is
recognized.
The desire to win credit, to gain the
respect of one's peers, is a powerful motive for almost all scientists. From
the earliest days of science, the thirst for recognition has brought with it
the temptation to "improve" a little on the truth, or even to invent
data out of whole cloth, in order to make a theory prevail.
Claudius Ptolemy, who lived during the
second century A.D. in Alexandria, Egypt, was one of the most influential
scientists in history. His synthesis of early astronomical ideas resulted in a
system for predicting the positions of the planets. The central assumption of
the Ptolemaic system was that the earth was at rest and that the sun and other
planets revolved around it in essentially circular orbits.
For nearly 1,500 years, far longer than
Newton or Einstein have held sway, Ptolemy's ideas shaped man's view of the
structure of the universe. The Ptolemaic system prevailed without challenge
throughout the Dark Ages, from the early years of the Roman empire until the
late Renaissance. Arab philosophers, the guardians of Greek science during the
Middle Ages, dubbed Ptolemy's writing the Almagest, from the Greek word for
"greatest." He came to be regarded as the preeminent astronomer of
the ancient world. It was not until Copernicus in 1543 put the sun instead of
the earth at the center of the planetary system that Ptolemy's 1,500-year reign
as the king of astronomers began to come to an end. Yet this titan of the
heavens had feet of clay.
In the nineteenth century, astronomers
re-examining Ptolemy's original data began to notice some curious features.
Back calculations from the present-day position of the planets showed that many
of Ptolemy's observations were wrong. The errors were gross even by the
standards of ancient astronomy. Dennis Rawlins, an astronomer at the University
of California, San Diego, believes from internal evidence that Ptolemy did not
make the observations himself, as he claims to have done, but lifted them
wholesale from the work of an earlier astronomer, Hipparchus of Rhodes, who
compiled one of the best star catalogs of ancient times.
The island of Rhodes, where Hipparchus
made his observations, is five degrees of latitude north of Alexandria.
Naturally there is a five-degree band of southern stars that can be seen from
Alexandria but not from Rhodes. Not one of the 1,025 stars listed in Ptolemy's catalog
comes from this five-degree band. Also, every example given in the Almagest of
how to work out spherical astronomy problems is given for a latitude the same
as that of Rhodes. "If one didn't know better," says Rawlins in an
ironic comment, "one might suspect (as did even Theon of Alexandria,
Ptolemy's most placid, tireless admirer in the 4th century) that Ptolemy took
the examples from Hipparchus."2
Not only questions of theft hang over
the head of antiquity's great astronomer. Ptolemy is also accused of a more
modern scientific crime: that of having derived the data that he
cites to support his theory from the theory itself instead of from nature. His
chief accuser is Robert Newton, a member of the applied physics laboratory at
Johns Hopkins University. In his book, The Crime of Claudius Ptolemy, Newton
has assiduously collected scores of instances in which Ptolemy's reported
result is almost identical with what the Alexandrian sage wanted to prove and
greatly different from what he should have observed.3 A striking
example is that Ptolemy claimed he had observed an autumnal equinox at 2 P.M.
on September 25, A.D. 132. He stressed that he had measured the phenomenon
"with the greatest care." But, says Newton, back calculation from
modern tables shows that an observer in Alexandria should have seen the equinox
at 9:54 A.M. on September 24, more than a day earlier.
In giving his date for the equinox,
Ptolemy was trying to show the accuracy of the length of the year as determined
by Hipparchus. Hipparchus too had measured an autumnal equinox, 278 years
earlier, on September 27, 146 B.C. Newton shows that if 278 times Hipparchus'
estimate of a year (which is excellent but not quite right) is added to the
Hipparchus equinox, then the time arrived at is within minutes of the time
reported by Ptolemy. In other words, Ptolemy must have worked backward from the
result he was trying to prove instead of making an independent observation.
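The arithmetic behind this argument is easy to reproduce in outline. The sketch below, in Python, uses the commonly quoted value of Hipparchus' year (365 + 1/4 - 1/300 days) and the modern mean tropical year, neither of which is given above, so it should be read as an illustration of the reasoning rather than a reconstruction of Newton's own calculation.

    # Rough check: how much error accumulates if Ptolemy simply added
    # 278 Hipparchan years to Hipparchus' equinox instead of observing?
    # The year lengths below are assumed standard values, not data from
    # the text.
    HIPPARCHUS_YEAR = 365 + 1/4 - 1/300   # days, Hipparchus' estimate
    TROPICAL_YEAR = 365.2422              # days, modern mean value
    YEARS_ELAPSED = 278                   # 146 B.C. to A.D. 132

    drift_days = YEARS_ELAPSED * (HIPPARCHUS_YEAR - TROPICAL_YEAR)
    print(f"accumulated drift: {drift_days:.2f} days "
          f"({drift_days * 24:.0f} hours)")
    # Roughly 1.2 days, the same order as the 28-hour gap between
    # Ptolemy's reported time and the back-calculated one.

An error of about a day over 278 years is just what one would expect if the "observation" were nothing more than Hipparchus' result carried forward with Hipparchus' slightly-too-long year.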
Defenders of Ptolemy, such as historian
Owen Gingerich, claim that modern scholars are being unfair in applying
contemporary standards of scientific procedure to Ptolemy. Yet even Gingerich, who
calls Ptolemy "the greatest astronomer of antiquity," concedes that
the Almagest contains "some remarkably fishy numbers."4
But he insists that Ptolemy chose merely to publish the data that best
supported his theories and was innocent of any intent to deceive. Whatever
Ptolemy's intent, his borrowing of Hipparchus' work won him nearly two
millennia of glory before being detected.
The feature that supposedly
distinguishes science from other kinds of knowledge is its reliance on
empirical evidence, on testing ideas against the facts in nature. But Ptolemy
was not the only scientist to neglect an observer's duties; even Galileo, a
founding father of modern empiricism, is suspected of reporting experiments
that could not have been performed with the results he claims.
Galileo Galilei is perhaps best
remembered as the patient investigator who dropped stones from the Leaning
Tower of Pisa. The story is probably apocryphal but it captures the quality
that allegedly set Galileo apart from his medieval contemporaries: his inclination to search for answers in nature, not
in the works of Aristotle. Galileo was persecuted by the Church for his defense
of the Copernican theory and his trial is held up by today's scientific
textbooks as a heroic object lesson in the battle of reason against
superstition. Such textbooks naturally tend to stress Galileo's empiricism, in
contrast to his opponents' dogmatism. "After Galileo," says one,
"the ultimate proof of a theory would be the evidence of the real
world."5 The textbook approvingly cites how Galileo
painstakingly tested his theory of falling bodies by measuring the time it took
for a brass ball to roll down a groove in a long board: in "experiments
near a hundred times repeated," Galileo found that the times agreed with
his law, with no differences "worth mentioning."
According to historian I. Bernard Cohen,
however, Galileo's conclusion "only shows how firmly he had made up his
mind beforehand, for the rough conditions of the experiment would never have
yielded an exact law. Actually the discrepancies were so great that a
contemporary worker, Père Mersenne, could not reproduce the results described by
Galileo, and even doubted that he had ever made the experiment."6
In all likelihood, Galileo was relying not merely on his experimental skill but
on his exquisite talents as a propagandist.7
Galileo liked to perform "thought
experiments," imagining an outcome rather than observing it. In his Dialogue on the Two Great Systems of the
World, in which Galileo describes the motion of a ball dropped from the
mast of a moving ship, the Aristotelian, Simplicio, asks whether Galileo made
the experiment himself. "No," Galileo replied, "and I do not
need it, as without any experience I can affirm that it is so, because it cannot
be otherwise."
The textbooks' portrayal of Galileo as a
meticulous experimentalist has been reinforced by scholars. According to one
translation of his works, Galileo reportedly said: "There is in nature
perhaps nothing older than motion, concerning which the books written by
philosophers are neither few nor small. Nevertheless, I have discovered by
experiment some properties of it which are worth knowing and which have not
hitherto been observed or demonstrated."8 The words "by
experiment" do not appear in the original Italian; they have been added by
the translator, who evidently had strong feelings on how Galileo should have
proceeded.
Unlike the textbook writers, some
historians, such as Alexandre Koyré, have seen Galileo as an idealist rather
than an experimental physicist; as a man who used argument and rhetoric to persuade
others of the truth of his theories.9 With Galileo, the desire to
make his ideas prevail apparently led him to report experiments that could not
have been performed exactly as described. Thus an ambiguous attitude toward
data was present from the very beginning of Western experimental science. On
the one hand, experimental data was upheld as the ultimate arbiter of truth; on
the other hand, fact was subordinated to theory when necessary and even, if it
didn't fit, distorted. The Renaissance saw the flowering of Western
experimental science, but in Galileo, the propensity to manipulate fact was the
worm in the bud.
Both sides of this ambiguous attitude to
data reached full expression in the work of Isaac Newton. The founder of
physics and perhaps the greatest scientist in history, Newton in his Principia of 1687 established the goals,
methods, and boundaries of modern science. Yet this exemplar of the scientific
method was not above bolstering his case with false data when the real results
failed to win acceptance for his theories. The Principia met with a certain resistance on the Continent,
especially in Germany where opposition was fomented by Newton's rival Leibniz,
whose system of philosophy was at odds with Newton's theory of universal
gravitation. To make the Principia more
persuasive, Newton in later editions of his work improved the accuracy of
certain supporting measurements. According to historian Richard S. Westfall,
Newton "adjusted" his calculations on the velocity of sound and on
the precession of the equinoxes, and altered the correlation of a variable in
his theory of gravitation so that it would agree precisely with theory. In the
final edition of his opus, Newton pointed to a precision of better than 1 part
in 1,000, boldly claiming accuracies that previously had been observed only in
the field of astronomy. The fudge factor, says Westfall, was "manipulated
with unparalleled skill by the unsmiling Newton."
The hiatus between lofty principle and
low practice could not be more striking. As amazing as it is that a figure of
Newton's stature should stoop to falsification, even more surprising is that
none of his contemporaries realized the full extent of his fraud. Using his contrived
data as a spectacular rhetorical weapon, Newton overwhelmed even the skeptics
with the rightness of his ideas. More than 250 years passed before the
manipulation was completely revealed. As Westfall comments, "Having
proposed exact correlation as the criterion of truth, [Newton] took care to see
that exact correlation was presented, whether or not it was properly achieved.
Not the least part of the Principia's
persuasiveness was its deliberate pretense to a degree of precision quite beyond
its legitimate claim. If the Principia established
the quantitative pattern of modern science, it equally suggested a less sublime
truth\that no one can manipulate the fudge factor
so effectively as the master mathematician himself."10
Newton's willingness to resort to
sleight of hand is evident in more than just falsification of data. He used his
position as president of the Royal Society, England's premier scientific club,
to wage his battle with Leibniz over who first invented calculus. What was
shameful about Newton's behavior was the hypocrisy with which he paid lip
service to fair procedure but followed the very opposite course. It would
be an iniquitous judge "who would admit anyone as a witness in his own
cause," announced the preface of a Royal Society report of 1712 which
examined the question of priority in calculus. Ostensibly the work of a committee
of impartial scientists, the report was a complete vindication of Newton's
claims and even accused Leibniz of plagiary. In fact the whole report,
sanctimonious preface included, had been written by Newton himself. Historians
now believe that Leibniz' invention of calculus was made independently of
Newton.
Newton having set the standards, it is
perhaps not so surprising to find other scientists misusing the truth to support
their own theories in ways that make a mockery of the scientific method. Historians
have raised substantial questions about the experiments of John Dalton, a
towering figure in early nineteenth-century chemistry and a founder of the
atomic theory of matter. From his belief that each element is composed of its
own kind of atoms, Dalton developed his law of simple multiple proportions. The
law holds that when two elements form a chemical compound they do so in fixed
proportions because the atoms of one element combine with a precise whole
number (one, two, or more) of the atoms of the other element. Dalton supplied
major evidence for this law from his study of the oxides of nitrogen, stating
that oxygen would combine with a given amount of nitrogen only in certain fixed
ratios.
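What the law asserts can be made concrete with a small example. The sketch below, in Python, uses modern formulas and atomic weights that Dalton did not have, so it illustrates the pattern the law predicts rather than Dalton's own figures.

    # Law of simple multiple proportions, illustrated with three oxides
    # of nitrogen. Modern formulas and atomic weights (N = 14, O = 16)
    # are assumed here for clarity; Dalton's values differed.
    O_WEIGHT = 16.0

    # grams of oxygen combining with 14 g of nitrogen in each compound
    oxides = {
        "N2O": 0.5 * O_WEIGHT,   # half an oxygen per nitrogen ->  8 g
        "NO":  1.0 * O_WEIGHT,   # one oxygen per nitrogen     -> 16 g
        "NO2": 2.0 * O_WEIGHT,   # two oxygens per nitrogen    -> 32 g
    }

    base = min(oxides.values())
    for name, grams_o in oxides.items():
        print(f"{name}: {grams_o:4.0f} g O per 14 g N (ratio {grams_o / base:.0f})")
    # The oxygen masses stand in the whole-number ratio 1 : 2 : 4,
    # the pattern the law of multiple proportions predicts.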
Modern inquiry raises considerable doubts
about Dalton's data. For one thing, historians are now sure that Dalton first
speculated on the law and then made experiments in order to prove it.12
For another, he seems to have selected his data, publishing only the
"best" results, in other words those that supported his theory. His
best results are distinctly hard to duplicate. "From my own experiments I
am convinced that it is almost impossible to get these simple ratios in mixing
nitric oxide and air over water," says historian J. R. Partington.13
Scientists' cavalier attitude toward data
in the nineteenth century was sufficiently widespread that in 1830 the
phenomenon was described in a treatise by Charles Babbage, inventor of a calculating
machine that was the forerunner of the computer. In his book Reflections on the Decline of Science in
England, Babbage even categorized the different types of fraud that were
prevalent.14 "Trimming," he wrote, "consists of clipping off little
bits here and there from those observations which differ most in excess from the
mean, and in sticking them on to those which are too small." Though not
approving of the practice, Babbage found that at times it might be less
reprehensible than other types of fraud. "The reason of this is, that the
average given by the observations of the trimmer is the same, whether they are
trimmed or untrimmed. His object is to gain a reputation for extreme accuracy
in making observations; but from respect for truth, or from prudent foresight,
he does not distort the position of the fact he gets from nature."
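Trimming in Babbage's sense is easy to demonstrate in miniature. The short Python sketch below uses invented numbers; it clips a little from the highest reading and sticks it onto the lowest, so the reported mean is untouched while the scatter, and hence the apparent imprecision, shrinks.

    # "Trimming" as Babbage describes it: transfer a small amount from
    # the observation furthest above the mean to the one furthest below
    # it. The readings are invented for illustration.
    from statistics import mean, stdev

    observations = [9.62, 9.71, 9.80, 9.81, 9.95, 10.07]

    def trim(values, clip=0.06):
        values = sorted(values)
        values[-1] -= clip   # clip a little off the largest...
        values[0] += clip    # ...and stick it onto the smallest
        return values

    trimmed = trim(observations)
    print(f"mean  before/after: {mean(observations):.3f} / {mean(trimmed):.3f}")
    print(f"stdev before/after: {stdev(observations):.3f} / {stdev(trimmed):.3f}")
    # The mean is identical; only the apparent precision has improved.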
Worse than trimming, in Babbage's view, was
what he described as "cooking," a practice known today as selective
reporting. "Cooking is an art of various forms," wrote Babbage,
"the object of which is to give ordinary observations the appearance and character
of those of the highest degree of accuracy. One of its numerous processes is to
make multitudes of observations, and out of these to select those only which
agree, or very nearly agree. If a hundred observations are made, the cook must
be very unlucky if he cannot pick out fifteen or twenty which will do for serving
up."
Most pernicious of all, wrote Babbage, is
the scientist who pulls numbers out of thin air. "The forger is one who,
wishing to acquire a reputation for science, records observations which he has never
made. . . . Fortunately instances of the occurrence of forging are rare."
As the number of scientists increased
throughout the nineteenth century, new varieties of deception came into being.
Out of competitive zeal and the battle for scientific glory grew an altogether novel
scientific sin, that of omitting to mention similar work that had preceded the
unveiling of a new theory. Because of the importance of originality in science,
tradition requires that a scientist acknowledge in his publications those whose
work in the field preceded his. The mere absence of such acknowledgment constitutes
a claim for originality. But even Charles Darwin, author of the theory of
evolution, was accused of failing to give adequate acknowledgment to previous
researchers.
According to anthropologist Loren Eiseley,
Darwin appropriated the work of Edward Blyth, a little-known British zoologist who
wrote on natural selection and evolution in two papers published in 1835 and
1837. Eiseley points to similarities in phrasing, the use of rare words, and
the choice of examples. While Darwin in his opus quotes Blyth on a few points,
notes Eiseley, he does not cite the papers that deal directly with natural
selection, even though it is clear he read them.15 The thesis has
been disputed by paleontologist Stephen J. Gould.16 But Eiseley is
not the only critic of Darwin's acknowledgment practices. He was accused by a contemporary,
the acerbic man of letters Samuel Butler, of passing over in silence those who
had developed similar ideas. Indeed, when Darwin's On the Origin of Species first appeared in 1859, he made little
mention of predecessors. Later, in an 1861 "historical sketch" added to
the third edition of the Origin, he delineated
some of the previous work, but still gave few details. Under continued attack,
he added to the historical sketch in three subsequent editions. It was still
not enough to satisfy all his critics. In 1879, Butler published a book
entitled Evolution Old and New in
which he accused Darwin of slighting the evolutionary speculations of Buffon,
Lamarck, and Darwin's own grandfather Erasmus. Remarked Darwin's son Francis:
"The affair gave my father much pain, but the warm sympathy of those whose
opinions he respected soon helped him to let it pass into well-merited oblivion."17
A champion of Darwin's evolutionary cause
during the late nineteenth century, Thomas Henry Huxley, made a remark in a letter
to a friend that well sums up the complexities in the struggle for recognition.18
"You have no idea of the intrigues that go on in this blessed world of
science. Science is, I fear, no purer than any other region of human activity,
though it should be. Merit alone is very little good; it must be backed by tact
and knowledge of the world to do very much." Moreover, as Darwin himself
admitted, the sheer approbation of his peers was not an irrelevant factor.19
"I wish I could set less value on the bauble fame, either present or
posthumous, than I do, but not, I think, to any extreme degree." Though
Eiseley's charges of theft are undoubtedly overstated, it is clear that Darwin
was laggard in giving credit to earlier authors of theories of evolution.
More serious than a mere breach of
scientific etiquette is the charge raised against that other pillar of modern
biology, the Abbé Gregor Mendel.
By breeding plants and noting that certain traits were inherited in a discrete
fashion, Mendel discovered the existence of what are now called genes. His analysis
of inheritance in peas allowed him to identify what he called dominant and
recessive characters, and the proportions in which these would be expected to
appear in the offspring. The elegance of his insights, culled after many years
of tedious experiment, earned Mendel a reputation in the twentieth century as
the founder of the science of genetics.
The extreme precision of his data,
however, led the eminent statistician Ronald A. Fisher in 1936 to closely
examine Mendel's methods.20 The results were too good. Fisher
concluded that something other than hard work must have been involved.
"The data of most, if not all, of the experiments have been falsified so
as to agree closely with Mendel's expectations," wrote Fisher. He politely
concluded that Mendel could not have "adjusted" the outcome himself
but must have been "deceived by some assistant who knew too well what was
expected." Geneticists who later looked at the problem were not so kind,
deciding that Mendel must have selected data in order to make the best case.
"The impression that one gets from Mendel's paper itself and from Fisher's
study of it," wrote one historian of genetics, "is that Mendel had
the theory in mind when he made the experiments. He may even have deduced the
rules from a particulate view of heredity which he had reached before beginning
work with peas."21 In 1966 geneticist Sewall Wright, in a brief
but often quoted analysis, suggested that Mendel's only fault was an innocent
tendency to err in favor of the expected results when making his tallies of
peas with different traits: "I am afraid that it must be concluded that he
made occasional subconscious errors in favor of expectation," concludes
Wright.22
Wright's exculpation of the father of
modern genetics did not win universal conviction. "Another explanation
would be that Mendel performed one or two more experiments and reported only
those results that agreed with his expectation," wrote B. L. van der
Waerden in 1968. "Such a selection would, of course, produce a bias toward
the expected values." But van der Waerden apparently saw nothing wrong
with such methods: "I feel many perfectly honest scientists would tend to
follow such a procedure. As soon as one has a number of results clearly
confirming a new theory, one would publish these results, leaving aside
doubtful cases."23
Academics may debate the precise nature
of Mendel's misdeeds, but horticulturists have long since arrived at a verdict,
if the following anonymous comment is anything to go by.24 Entitled
"Peas on Earth," it appeared in a professional journal: "In the
beginning there was Mendel, thinking his lonely thoughts alone. And he said:
'Let there be peas,' and there were peas and it was good. And he put the peas
in the garden saying unto them 'Increase and multiply, segregate and assort
yourselves independently,' and they did and it was good. And now it came to
pass that when Mendel gathered up his peas, he divided them into round and
wrinkled, and called the round dominant and the wrinkled recessive, and it was
good. But now Mendel saw that there were 450 round peas and 102 wrinkled ones;
this was not good. For the law stateth that there should be only 3 round for
every wrinkled. And Mendel said unto himself 'Gott in Himmel, an enemy has done
this, he has sown bad peas in my garden under the cover of night.' And Mendel
smote the table in righteous wrath, saying 'Depart from me, you cursed and evil
peas, into the outer darkness where thou shalt be devoured by the rats and
mice,' and lo it was done and there remained 300 round peas and 100 wrinkled
peas, and it was good. It was very, very good. And Mendel published."
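The parody's numbers happen to illustrate the statistical reasoning behind Fisher's charge. The sketch below, in Python, runs a standard chi-square goodness-of-fit test against the expected 3:1 ratio; scipy is used for convenience, and the test is offered as the kind of check Fisher applied, not a reconstruction of his 1936 analysis.

    # Goodness of fit to a 3:1 ratio for the parody's pea counts,
    # before and after the offending peas are cast into outer darkness.
    from scipy.stats import chisquare

    def fit_to_3_to_1(round_peas, wrinkled_peas):
        total = round_peas + wrinkled_peas
        expected = [0.75 * total, 0.25 * total]
        return chisquare([round_peas, wrinkled_peas], f_exp=expected)

    for label, counts in [("before trimming", (450, 102)),
                          ("after trimming", (300, 100))]:
        stat, p = fit_to_3_to_1(*counts)
        print(f"{label}: chi2 = {stat:.2f}, p = {p:.4f}")
    # 450:102 departs sharply from 3:1 (chi2 of about 12.5, p of about
    # 0.0004); the trimmed 300:100 fits it exactly (chi2 = 0). Fisher's
    # complaint was that Mendel's published counts, taken together, fit
    # expectation more closely than chance sampling should allow.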
The debate over whether Mendel
consciously or unwittingly improved upon his results cannot be resolved with
certainty because many of his raw data do not exist. With twentieth-century
scientists, it is more often possible to compare their published work with the
raw material on which it was based. The comparison is necessary because it
often reveals serious discrepancies between appearance and reality in the
laboratory. As biologist Peter Medawar observes: "It is no use looking to
scientific `papers,' for they not merely conceal but actively misrepresent the
reasoning that goes into the work they describe. . . . Only unstudied evidence
will do -- and that means listening at a
keyhole."25
Consider the case of Robert A. Millikan,
a U.S. physicist who won the Nobel prize in 1923 for determining the electric
charge on the electron. He became the most famous American scientist of his
day, winning sixteen prizes and twenty honorary degrees before his death in
1953. In addition he was an adviser to Presidents Hoover and Franklin D.
Roosevelt, and president of the American Association for the Advancement of
Science. A careful study of Millikan's notebooks has brought to light some
bizarre procedures in the methods by which Millikan climbed to scientific fame
and glory.
As an unknown professor at the University
of Chicago, Millikan published his first measurements of e, the electronic charge, in 1910. The measurements, which depended
on introducing droplets of liquid into an electric field and noting the
strength of field necessary to keep them suspended, were difficult to make and
subject to considerable variation. In strict accordance with the ethos that
demands full disclosure of data, Millikan used stars to grade the quality of
his thirty-eight measurements from "best" to "fair," and
noted that he had discarded seven entirely.
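The measurement itself rests on a simple balance: for a drop held motionless, the upward electric force qE equals the weight mg, so the charge follows as q = mg/E. The Python sketch below uses invented values for the drop radius, oil density, and field, none of which appear in the text, to show how a charge is inferred and compared with the elementary charge.

    # Balance condition for a suspended drop: qE = mg, hence q = mg / E.
    # Drop radius, oil density, and field strength are illustrative
    # assumptions, not Millikan's data.
    import math

    E_FIELD = 3.2e5        # V/m, field holding the drop still (assumed)
    DROP_RADIUS = 1.6e-6   # m (assumed)
    OIL_DENSITY = 920.0    # kg/m^3 (assumed)
    G = 9.81               # m/s^2
    E_CHARGE = 1.602e-19   # C, accepted elementary charge

    mass = (4.0 / 3.0) * math.pi * DROP_RADIUS**3 * OIL_DENSITY
    charge = mass * G / E_FIELD
    print(f"inferred drop charge: {charge:.2e} C "
          f"(about {charge / E_CHARGE:.1f} e)")
    # Millikan's case rested on such inferred charges clustering near
    # whole-number multiples of a single value e; Ehrenhaft read the
    # scatter in raw readings as evidence of fractional charges.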
The candor did not continue for long.
Millikan's rival in measuring electric charge, Felix Ehrenhaft of the
University of Vienna, Austria, immediately showed how the variability in Millikan's
published measurements in fact supported Ehrenhaft's belief in the existence of
subelectrons carrying fractional electronic charges. Battle was joined between
Millikan and Ehrenhaft, and the question of subelectrons was discussed around
the scientific world by leading physicists such as Max Planck, Albert Einstein,
Max Born, and Erwin Schrödinger.
To rebut Ehrenhaft, Millikan published an
article in 1913 full of new and more accurate results favoring a single charge
for the electron. He emphasized, in italics, that "this is not a selected
group of drops but represents all of the drops experimented upon during 60
consecutive days."
On the face of it, Millikan had achieved
a brilliant rejoinder to Ehrenhaft and had proved beyond a doubt the
correctness of his measure of the electron charge -- all
through the sheer power of scientific precision. However, a look through
Medawar's keyhole shows a quite different situation. Harvard historian Gerald
Holton went back to the original notebooks on which Millikan based his 1913
paper and found major gaps in the reporting of data.26 Despite his
specific assurance to the contrary, Millikan had selected only his best data for
publication. The raw observations in his notebooks are individually annotated
with private comments such as "beauty. publish this surely,
beautiful!" and "very low, something wrong." The 58 observations
presented in his 1913 article were in fact selected from a total of 140. Even
if observations are counted only
after February 13, 1912, the date that the first published observation was
taken, there are still 49 drops that have been excluded.27
Millikan had no need to worry that his
deceit would be exposed, for, as Holton notes, the "notebooks belonged to
the realm of private science. . . . Therefore he evaluated his data . . . guided
both by a theory about the nature of electric charge and by a sense of the
quality or weight of the particular run. It is exactly what he had done in his
first major paper, before he had learned not to assign stars to data in
public."
Across the Atlantic, meanwhile,
Ehrenhaft and his colleagues assiduously published readings, good, bad, and
indifferent. The picture that emerged from their work did not support the
notion of a single, indivisible electronic charge. This view was contrary to
prevailing theory at the time and, as Holton notes, "from Ehrenhaft's
point of view it was, for just this reason, to be regarded as an exciting
opportunity and challenge. In Millikan's terms, on the contrary, such an
interpretation of the raw readings would force one to turn one's back on a
basic fact of nature -- the integral character of e -- which
clearly beckoned."
For Millikan, the battle ended in
a Nobel prize (which also cited his work on the photoelectric effect); for
Ehrenhaft, in disillusionment and eventually a broken spirit. But Ehrenhaft,
who had the more accurate equipment and made better measurements than Millikan,
may yet be vindicated. Physicists at Stanford University using a similar
methodology have recently found evidence of a kind of subelectronic charge.28
The example of Millikan and the
other adepts of science who cut corners in order to make their theories prevail
contains some alarming implications. Scientific history by its nature tends to
record only the deeds of those few who have successfully contributed to
knowledge and to ignore the many failures. If even history's most successful
scientists resort to misrepresenting their findings in various ways, how
extensive may have been the deceits of those whose work is now rightly
forgotten?
History shows that deceit in the
annals of science is more common than is often assumed. Those who improved upon
their data to make them more persuasive to others doubtless persuaded
themselves that they were lying only in order to make the truth prevail. But almost
invariably the real motive for the various misrepresentations in the history of
research seems to arise less from a concern for truth than from personal
ambition and the pursuit, as Darwin put it, of "the bauble fame."
Newton wanted to persuade the skeptics of his ideas in France and Germany.
Millikan misreported data in order to defeat a rival, not to make his work
mirror more perfectly an ideal of scientific precision.
The twentieth century has seen the
development of science from a hobby to a career become almost complete. Galileo
was supported in grand style by the Duke of Tuscany. Charles Darwin, born into
the well-to-do Darwin and Wedgwood clans, never had to worry about making money
from his scientific speculations. Gregor Mendel entered the Augustinian
monastery in Brno where he was able to pursue his studies in complete freedom
from financial worries. In the twentieth century, the cost of buying
instruments and hiring technicians has put science almost entirely out of the
amateur's reach. The tradition that kept curiosity about nature divorced from
the generation of personal income has been left far behind. Almost all
scientists nowadays pursue science as a career. Their vocation is also the
source of their salary. Whether supported by government or industry, they work
within a career structure that offers rewards for tangible, often short-term,
success. Few scientists today can leave it to posterity to judge their work;
their universities may deny them tenure, and the flow of grants and contracts
from the federal government is likely to dry up quite quickly, unless evidence
of immediate and continuing success is forthcoming.
If the luminaries of scientific history
would on occasion misrepresent their data for the personal vindication of seeing
their ideas prevail, the temptations must be all the greater for contemporary
scientists. Not only personal justification but also professional rewards
depend on winning acceptance for an idea or theory or technique. Often an extra
measure of acceptance can be won by minor misrepresentations. "Tidying
up" data, making results seem just a little more clear-cut, selecting only
the "best" data for publication-all these seemingly excusable
adjustments may help toward getting an article published, making a name for
oneself, being asked to join a journal's editorial board, securing the next
government grant, or winning a prestigious prize.
In short, careerist pressures are intense
and unremitting. Many scientists, no doubt, refuse to let their work be distorted
by them. Yet for those who do, the rewards for even deceitfully gained success
are considerable and the chances of apprehension negligible. The temptations of
careerism, and the almost total absence of credible deterrents to those who
would cheat the system, are graphically demonstrated in the meteoric career of
that uniquely twentieth-century scientist Elias Alsabti.
Notes
CHAPTER 2
DECEIT IN HISTORY
1. C. Kittel, W. D. Knight, M. A. Ruderman, The
Berkeley Physics Course,
Vol. 1, Mechanics (McGraw-Hill, New
York, 1965).
This passage,
together with an interesting analysis of the scientific textbook writers' use
of history, is quoted in an article by Stephen G. Brush, "Should the
History of Science Be Rated X?" Science,
183, 1164-1172, 1974.
2. Dennis Rawlins, "The Unexpurgated Almajest: The Secret Life of
the Greatest Astronomer of Antiquity,"
Journal for the History of Astronomy, in press.
3. Robert R. Newton, The Crime of Claudius Ptolemy (Johns Hopkins University Press,
Baltimore, 1977). For a summary of the argument see Nicholas Wade,
"Scandal in the Heavens: Renowned Astronomer Accused of Fraud," Science, 198, 707-709, 1977.
4. Owen Gingerich, "On Ptolemy As the Greatest Astronomer of
Antiquity," Science, 193, 476-477, 1976, and "Was
Ptolemy a Fraud?" preprint No. 751, Center for Astrophysics, Harvard
College Observatory, Cambridge, 1977. See also a news article summarizing an
attempt to absolve Ptolemy, Scientific
American, 3, 90-93, 1979.
5. Cecil J. Schneer, The Evolution of Physical
Science (Grove Press, New
York, 1960), p. 65.
6. I. Bernard Cohen, Lives in Science (Simon & Schuster, New York, 1957), p. 14.
7.
Some research has suggested that Galileo could have easily carried out certain
experiments, and that historians who claim they were all imaginary are
overstating the case. See Thomas B. Settle, "An Experiment in the History
of Science," Science, 133, 19-23, 1961. See also Stillman
Drake, "Galileo's Experimental Confirmation of Horizontal Inertia:
Unpublished Manuscripts," Isis, 64, 291-305, 1973. See also James
MacLachlan, "A Test of an Imaginary Experiment of Galileo's," Isis, 64, 374-379, 1973.
8. Alexandre Koyré,
"Traduttore-Traditore. A Propos de Copernic et de Galilée," Isis, 34, 209-210, 1943.
9. Alexandre Koyré, Études Galiléennes (Hermann, Paris, 1966). This is a reprint of three articles published between
1935 and 1939.
10. Richard S. Westfall, "Newton and the Fudge Factor," Science, 179, 751-758, 1973. See also various letters in response in Science, 180, 1118, 1973.
11. William J. Broad, "Priority War: Discord in Pursuit of Glory,"
Science, 211, 465-467, 1981.
12. J. R. Partington, A Short History of Chemistry (Harper & Brothers, New York, 1960), p.
170. Also see Leonard K. Nash, "The Origin of Dalton's Chemical Atomic
Theory," Isis, 47, 101-116, 1956.
13. J. R. Partington, "The Origins of the Atomic Theory," Annals of Science, 4, 278, 1939.
14. Charles Babbage, Reflections on
the Decline of Science in England (Augustus M. Kelley, New York, 1970), pp.
174-183.
15. Loren Eiseley, Darwin and the
Mysterious Mr. X (E. P. Dutton, New York, 1979).
16. Stephen J. Gould, "Darwin Vindicated," The New York Review of Books, August 16, 1979, p. 36.
17. Francis Darwin, The Life and
Letters of Charles Darwin (John Murray, London, 1887), p. 220.
18. L. Huxley, Life and Letters of Thomas Henry Huxley (Macmillan, London,
1900), p. 97.
19. For
this and other Darwin quotes on ambition see Robert K. Merton, The Sociology of Science: Theoretical and
Empirical Investigations (University of Chicago Press, 1973), pp. 305-307.
20. R. A. Fisher, "Has Mendel's Work Been Rediscovered?" Annals of Science, 1, 115-137, 1936. For reprints of this and several other papers on
Mendel see Curt Stern and Eva R. Sherwood, The
Origin of Genetics: A Mendel Source Book (W. H. Freeman and Co., San
Francisco, 1966), pp. 1-175.
21. L. C. Dunn, A Short
History of Genetics (McGraw-Hill, New York, 1965), p. 13.
22.
For Wright's analysis see Curt Stern and Eva R. Sherwood, The Origin of Genetics: A Mendel Source Book (W. H. Freeman and
Co., San Francisco, 1966), pp. 173-175.
23. B. L. van der Waerden, "Mendel's Experiments," Centaurus, 12, 275-288, 1968.
24. Anonymous, "Peas on Earth," Hort Science, 7, 5,
1972.
25. Peter B. Medawar, The Art of the
Soluble (Barnes & Noble, New York, 1968), p. 7.
26. Gerald Holton, "Subelectrons, Presuppositions, and the
Millikan-Ehrenhaft Dispute," Historical Studies in the Physical Sciences,
9, 166-224, 1978.
27. Allan D. Franklin, "Millikan's Published and Unpublished Data on
Oil Drops," Historical Studies in the Physical Sciences, 11, 185-201,1981.
28.
For an account of the Stanford discoveries see "Fractional Charge," Science 81, April 1981, p. 6.
(W. Broad and N. Wade, Betrayers of the Truth: Fraud and Deceit in the Halls of Science, Simon and Schuster, New York, 1982. ISBN 0-671-44769-6. Chapter 2)