September 08, 2007

Second World War (4)

Manhattan Project


The Manhattan Project was the United States effort during World War II to develop the first nuclear weapons. Its scientific work was directed by American physicist J. Robert Oppenheimer.

The industrial problem centered on producing enough fissile material of sufficient purity. The effort was twofold, and is reflected in the two bombs that were dropped.

The Hiroshima bomb, Little Boy, used uranium-235, a minor isotope of uranium that has to be physically separated from the far more prevalent uranium-238, which is not suitable for use in an explosive device.

The separation was effected mostly by gaseous diffusion of uranium hexafluoride (UF6), but also by other techniques. The bulk of this separation work was done at Oak Ridge.

The Nagasaki bomb, Fat Man, in contrast, had a core consisting primarily of plutonium-239, an isotope of the synthetic element plutonium, which could be induced to supercriticality only by implosion. The design of an implosion device was at the center of the physicists' efforts at Los Alamos during the Project.

The property of uranium-238 that makes it less suitable for direct use in an atomic bomb is exploited in the production of plutonium: with sufficiently slow neutrons, uranium-238 absorbs a neutron and, after two beta decays, transmutes into plutonium-239. The production and purification of plutonium was at the center of wartime, and postwar, efforts at the Hanford Site, using techniques developed in part by Glenn Seaborg.
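
The transmutation described here can be written out compactly. A minimal sketch in modern notation, with half-lives taken from standard nuclear data rather than from the text above:

$$ {}^{238}\mathrm{U} + n \;\longrightarrow\; {}^{239}\mathrm{U} \;\xrightarrow{\;\beta^-,\ t_{1/2}\,\approx\,23\ \mathrm{min}\;}\; {}^{239}\mathrm{Np} \;\xrightarrow{\;\beta^-,\ t_{1/2}\,\approx\,2.4\ \mathrm{d}\;}\; {}^{239}\mathrm{Pu} $$

Because plutonium is a different element from uranium, it can be separated chemically rather than isotopically -- the key practical advantage of the Hanford route.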

The choice of civilian rather than military targets has often been criticized. However, the U.S. already had a policy of massive incendiary attacks against civilian targets in Japan. These raids typically dropped about 20 percent high explosives by weight, to break up wooden structures and provide fuel, and then about 80 percent small incendiary bombs to set the cities on fire.

The resulting raids completely destroyed many Japanese cities, including Tokyo, even before atomic weapons were deployed. The Allies performed such attacks because Japanese industry was extremely dispersed among civilian targets, with many tiny family-owned factories operating in the midst of civilian housing.

History


In the years between World War I and World War II, the United States had risen to pre-eminence in nuclear physics, driven by the work of recent immigrants and American-born physicists. These scientists had developed the basic tools of nuclear physics -- cyclotrons and other particle accelerators -- and had used them to create many new substances, including radioisotopes such as carbon-14.

Early Ideas on Nuclear Energy


Enrico Fermi recalled the beginning of the project in a speech given in 1954, when he retired as president of the American Physical Society (APS).

I remember very vividly the first month, January, 1939, that I started working at the Pupin Laboratories because things began happening very fast. In that period, Niels Bohr was on a lecture engagement in Princeton and I remember one afternoon Willis Lamb came back very excited and said that Bohr had leaked out great news.

The great news that had leaked out was the discovery of fission and at least the outline of its interpretation. Then, somewhat later that same month, there was a meeting in Washington where the possible importance of the newly discovered phenomenon of fission was first discussed in semi-jocular earnest as a possible source of nuclear power.

US President Franklin D. Roosevelt was presented on October 11, 1939, with a letter signed by Albert Einstein and drafted by Leo Szilard, which urged the United States to develop an atomic bomb program rapidly. The president agreed. The Navy awarded Columbia University the first atomic energy funding, $6,000, supporting research that, under Fermi and later Oppenheimer, would grow into the Manhattan Project.

Scientists in Germany discovered nuclear fission in late 1938. Refugee scientists Leo Szilard, Edward Teller and Eugene Wigner believed that the energy released in nuclear fission might be used in bombs by the Germans.

They persuaded Albert Einstein, America's most famous physicist, to warn President Franklin Roosevelt of this danger in an August 2, 1939, letter. In response to the warning, Roosevelt ordered increased research in nuclear physics.

Under the auspices of National Bureau of Standards chief Lyman Briggs, small research programs had begun in 1939 at the Naval Research Laboratory in Washington, where physicist Philip Abelson explored uranium isotope separation. At Columbia University, Italian nuclear physicist Enrico Fermi built prototype nuclear reactors using various configurations of graphite and uranium.

Vannevar Bush, director of the Carnegie Institution of Washington, organized the National Defense Research Committee in 1940 to mobilize the United States' scientific resources in support of the war effort.

New laboratories were created, including the Radiation Laboratory at the Massachusetts Institute of Technology, which aided the development of radar, and the Underwater Sound Laboratory at San Diego, which developed sonar.

The National Defense Research Committee (NDRC) also took over the uranium project, as Briggs' program in nuclear physics was called. In 1941, Bush and Roosevelt created the Office of Scientific Research and Development (OSRD) to expand these efforts.

The uranium project had not made much progress by the summer of 1941, when word came from Britain of calculations by Otto Frisch and Rudolf Peierls showing that a very small amount of the fissionable isotope of uranium, U-235, could produce an explosion equivalent to that of several thousand tons of TNT.
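
A rough back-of-envelope check, using round numbers that appear nowhere in the original account, shows why such a small amount suffices. One kilogram of U-235 contains

$$ N \;\approx\; \frac{1000\ \mathrm{g}}{235\ \mathrm{g/mol}} \times 6.0\times10^{23}\ \mathrm{mol^{-1}} \;\approx\; 2.6\times10^{24}\ \text{nuclei}, $$

and at roughly 200 MeV (about $3.2\times10^{-11}$ J) released per fission, complete fission of that kilogram would yield

$$ E \;\approx\; 2.6\times10^{24} \times 3.2\times10^{-11}\ \mathrm{J} \;\approx\; 8\times10^{13}\ \mathrm{J} \;\approx\; 20\ \text{kilotons of TNT} $$

(taking 1 kiloton of TNT as $4.2\times10^{12}$ J). Fissioning even a few percent of a kilogram therefore reaches the "several thousand tons" of the Frisch-Peierls estimate.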

The National Academy of Sciences proposed an all-out effort to build nuclear weapons. Bush created a special committee, the S-1 Committee, to guide the effort. No sooner was this decision made than the Japanese bombed Pearl Harbor on December 7, 1941. The war had begun for the United States.

At the University of Chicago Metallurgical Laboratory, the University of California Radiation Laboratory and Columbia University's physics department, efforts to prepare the nuclear materials for a weapon were accelerated.

Uranium-235 had to be separated from the far more abundant uranium-238, and plutonium had to be made by neutron bombardment of natural uranium. Beginning in 1942, huge plants were built at Oak Ridge (Site X) in Tennessee and at Hanford (Site W) near Richland, Washington, to produce these materials.

When the United States entered World War II in December 1941, several projects were under way to investigate the separation of fissionable uranium 235 from uranium 238, the manufacture of plutonium, and the feasibility of nuclear piles and explosions.

Physicist and Nobel laureate Arthur Holly Compton organized the Metallurgical Laboratory at the University of Chicago in early 1942 to study plutonium and fission piles. Compton asked theoretical physicist J. Robert Oppenheimer of the University of California to study the feasibility of a nuclear weapon.

In the spring of 1942, Oppenheimer and Robert Serber of the University of Illinois worked on the problems of neutron diffusion (how neutrons moved in the chain reaction) and hydrodynamics (how the explosion produced by the chain reaction might behave).

To review this work and the general theory of fission reactions, Oppenheimer convened a summer study at the University of California, Berkeley in June 1942. Theorists Hans Bethe, John Van Vleck, Edward Teller, Felix Bloch, Richard Tolman and Emil Konopinski concluded that a fission bomb was feasible. The scientists suggested that such a reaction could be initiated by assembling a critical mass -- an amount of nuclear explosive adequate to sustain a chain reaction -- either by firing two subcritical masses of plutonium or uranium-235 together or by imploding (crushing) a hollow sphere made of these materials with a blanket of high explosives. Until the numbers were better known, this was all that could be done.
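
The notion of criticality can be stated compactly; the figures below are textbook orders of magnitude, not numbers from the summer study. If each fission yields on average $k$ neutrons that go on to cause another fission, the neutron population after $n$ generations is

$$ N_n = N_0\,k^{\,n}, $$

and the assembly is supercritical when $k > 1$. With a fast-neutron generation time on the order of $10^{-8}$ s, tens of generations elapse in well under a microsecond -- which is why the subcritical pieces, or the imploding shell, must be assembled extremely fast, before the growing reaction blows the material apart.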

Teller saw another possibility: by surrounding a fission bomb with deuterium and tritium, a much more powerful "superbomb" might be constructed. This concept was based on studies of energy production in stars made by Bethe before the war. When the detonation wave from the fission bomb moved through the mixture of deuterium and tritium, the nuclei would fuse, releasing much more energy than fission does -- the process of nuclear fusion, just as fusing elements in the sun produce light and heat.

Bethe was skeptical, and as Teller pushed hard for his "superbomb" and proposed scheme after scheme, Bethe refuted each one. When Teller raised the possibility that an atomic bomb might ignite the atmosphere, however, he kindled a worry that was not entirely extinguished until the Trinity test, even though Bethe showed, theoretically, that it couldn't happen.

The summer conferences, the results of which were later summarized by Serber in "The Los Alamos Primer" (LA-1), provided the theoretical basis for the design of the atomic bomb, which was to become the principal task at Los Alamos during the war, and the idea of the H-bomb, which was to haunt the Laboratory in the postwar era. Seldom has a physics summer school been as portentous for the future of mankind.

With the prospect of a long war, a group of theorists under the direction of J. Robert Oppenheimer met at Berkeley during the summer of 1942 to develop preliminary plans for designing and building a nuclear weapon. Crucial questions remained, however, about the properties of fast neutrons. John Manley, a physicist at the University of Chicago Metallurgical Laboratory, was assigned to help Oppenheimer find answers to these questions by coordinating several experimental physics groups scattered across the country.

Measurements of the interactions of fast neutrons with the materials in a bomb were essential for two reasons: the number of neutrons produced in the fission of uranium and plutonium had to be known, and the substance surrounding the nuclear material had to be able to reflect, or scatter, neutrons back into the chain reaction before the device blew itself apart, in order to increase the energy produced. The neutron-scattering properties of candidate materials therefore had to be measured to find the best reflectors.

Estimating the explosive power required knowledge of many other nuclear properties, including the cross-section (a measure of the probability of an encounter between particles that results in a specified effect) for nuclear processes of neutrons in uranium and other elements. Fast neutrons could be produced only in particle accelerators, which were still relatively uncommon instruments in physics departments in 1942.
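
In the standard notation, which the text does not spell out, a cross-section $\sigma$ converts a neutron flux into a reaction rate:

$$ R = \sigma\,\Phi \quad \text{(reactions per target nucleus per second)}, \qquad \lambda = \frac{1}{n\sigma} \quad \text{(mean free path)}, $$

where $\Phi$ is the flux in neutrons per cm$^2$ per second and $n$ is the number density of target nuclei. Cross-sections are quoted in barns ($1\ \mathrm{b} = 10^{-24}\ \mathrm{cm^2}$); measuring $\sigma$ for fission, capture, and scattering in uranium, plutonium, and candidate reflector materials was precisely the program Manley's scattered groups were coordinating.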

The need for better coordination was clear. By September 1942, the difficulties involved with conducting preliminary studies on nuclear weapons at universities scattered throughout the country indicated the need for a laboratory dedicated solely to that purpose. The need for it, however, was overshadowed by the demand for plants to produce uranium-235 and plutonium -- the fissionable materials that would provide the nuclear explosives.

Vannevar Bush, the head of the civilian Office of Scientific Research and Development (OSRD), asked President Franklin Roosevelt to assign the large-scale operations connected with the quickly growing nuclear weapons project to the military. Roosevelt chose the Army to work with the OSRD in building production plants. The Army Corps of Engineers selected Col. James Marshall to oversee the construction of factories to separate uranium isotopes and manufacture plutonium for the bomb.

OSRD scientists had explored several methods to produce plutonium and separate uranium-235 from uranium, but none of the processes was ready for production - only microscopic amounts had been prepared.

Only one method -- electromagnetic separation, which had been developed by Ernest Lawrence at the Radiation Laboratory of the University of California, Berkeley -- seemed promising for large-scale production. But electromagnetic separation was expensive, and it was unlikely that it alone could produce enough material before the war was over, so scientists kept studying other potential methods of producing fissionable materials.

Marshall and his deputy, Col. Kenneth Nichols, had to struggle to understand both the processes and the scientists with whom they had to work. Thrust suddenly into the new field of nuclear physics, they felt unable to distinguish between technical and personal preferences. Although they decided that a site near Knoxville, Tenn., would be suitable for the first production plant, they didn't know how large the site had to be and so put off its acquisition. There were other problems, too.

Because of its experimental nature, the nuclear weapons work could not compete with the Army's more urgent tasks for top-priority ratings. Scientists' work and production-plant construction often were delayed by Marshall's inability to obtain critical materials, such as steel, that were also needed for other military production.

Even selecting a name for the new Army project was difficult. The title chosen by Gen. Brehon Somervell, "Development of Substitute Materials," was objectionable because it seemed to reveal too much.

The Manhattan District


In the summer of 1942, Col. Leslie Groves was deputy to the chief of construction for the Army Corps of Engineers and had overseen construction of The Pentagon, the world's largest office building. Hoping for an overseas command, Groves objected when Somervell appointed him to take charge of the weapons project. His objections were overruled and Groves resigned himself to leading a project he thought had little chance of succeeding.

The first thing he did was rechristen the project The Manhattan District. The name evolved from the Corps of Engineers practice of naming districts after its headquarters' city (Marshall's headquarters were in New York City). At the same time, Groves was promoted to brigadier general, which gave him the rank thought necessary to deal with the senior scientists in the project.

In August 1942, the Manhattan Engineer District was created by the government to meet the goal of producing an atomic weapon under the pressure of ongoing global war. Its central mission became known as the Manhattan Project. Under the direction of Brigadier General Leslie Groves of the Army Corps of Engineers, who recently had supervised the construction of the Pentagon, secret atomic energy communities were created almost overnight in Oak Ridge, Tennessee, at Los Alamos, New Mexico, and in Hanford, Washington, to house the workers and gigantic new machinery needed to produce the bomb. The weapon itself would be built at the Los Alamos laboratory, under the direction of physicist J. Robert Oppenheimer.

Plucked from campuses around the country, medical researchers came face to face with the need to understand and control the effects of radioactive materials, now being produced in previously unimaginable quantities, on thousands of people, doctors included.

In November 1942 General Groves, through the intermediation of an Eastman Kodak official, paid a call on University of Rochester radiologist Stafford Warren. Rochester, like MIT and Berkeley, was another locale where radiation research had brought together physicists and physicians.
"They wanted to know what I was doing in radiation. So I discussed the cancer work and some of the other things," Warren told an interviewer in the 1960s. Then "[w]e got upstairs and they looked in the closet and they closed the transom and they looked out the window. . . . Then they closed and locked the door and said, 'Sit down.'"

Soon thereafter, Dr. Warren was made a colonel in the U.S. Army and the medical director of the Manhattan Project. As his deputy, Warren chose Dr. Hymer Friedell, a radiologist who had worked with Dr. Robert Stone in California. Dr. Stone himself had meanwhile moved to the University of Chicago, where he would play a key role in Manhattan Project-related medical research.

Initially, researchers knew little or nothing about the health effects of the basic bomb components, uranium, plutonium, and polonium. But, as a secret history written in 1946 stated, they knew the tale of the radium dial painters:
The memory of this tragedy was very vivid in the minds of people, and the thoughts of potential dangers of working in areas where radiation hazards existed were intensified because the deleterious effects of radiation could not be seen or felt and the results of over-exposure might not become apparent for long periods after such exposure.

The need for secrecy, Stafford Warren later recalled, compounded the urgency of understanding and controlling risk. Word of death or toxic hazard could leak out to the surrounding community and blow the project's cover.

The need to protect the Manhattan Project workers soon gave rise to a new discipline, called health physics, which sought to understand radiation effects and monitor and protect nuclear worker health and safety. The Project was soon inundated with data from radiation-detection instruments, blood and urine samples, and physical exams. The "clinical study of the personnel," Robert Stone wrote in 1943, "is one vast experiment. Never before has so large a collection of individuals been exposed to so much radiation." Along with these data-gathering efforts came ethical issues.

Would disclosure of potential or actual harm to the workers, much less the public, impair the program? For example, a July 1945 Manhattan Project memo discussed whether to inform a worker that her case of nephritis (a kidney disease) may have been due to her work on the Project. The issue was of special import because, the memo indicated, the illness might well be a precursor of more cases. The worker, the memo explained, "is unaware of her condition which now shows up on routine physical check and urinalysis."

As this memo showed, there was an urgent need for decisions on how to protect the workers, while at the same time safeguard the security of the project: "The employees must necessarily be rotated out, and not permitted to resume further exposure. In frequent instances no other type of employment is available. Claims and litigation will necessarily flow from the circumstances outlined." There were also, the memo concluded, "Ethical considerations":
The feelings of the medical officers are keenly appreciated. Are they in accordance with their canons of ethics to be permitted to advise the patient of his true condition, its cause, effect, and probable prognosis? If not on ethical grounds, are they to be permitted to fulfill their moral obligations to the individual employees in so advising him? If not on moral grounds, are those civilian medical doctors employed here bound to make full disclosure to patients under penalty of liability for malpractice or proceeding for revocation of license for their failure to do so?

It is not clear what was decided in this case. However, the potential conflict between the government doctors' duty to those working on government projects and the same doctors' obligations to the government would not disappear. Following the war, as we see in chapter 12, this conflict would be sharply posed as medical researchers studied miners at work producing uranium for the nation's nuclear weapons.

Another basic question was the extent to which human beings could or should be studied to obtain the data needed to protect them. The radium dial painter data served as a baseline to determine how the effects of exposures in the body could be measured. But this left the question of whether plutonium, uranium, and polonium behaved more or less like radium. Research was needed to understand how these elements worked in the body and to establish safety levels. A large number of animal studies were conducted at laboratories in Chicago, Berkeley, Rochester, and elsewhere; but the relevance of the data to humans remained in doubt.

The Manhattan Project contracted with the University of Rochester to receive the data on physical exams and other tests from Project sites and to prepare statistical analyses. While boxes of these raw data have been retrieved, it is not clear what use was made of them. Accidents, while remarkably few and far between, became a key source of the data used in constructing an understanding of radiation risk. But accidents were not predictable, and their occurrence only enhanced the immediacy of the need to gain better data.

In 1944, the Manhattan Project medical team, under Stafford Warren and with the evident concurrence of Robert Oppenheimer, made plans to inject polonium, plutonium, uranium, and possibly other radioactive elements into human beings. As discussed in chapter 5, the researchers turned to patients, not workers, as the source of experimental data needed to protect workers. By the time the program was abandoned by the government, experimentation with plutonium had taken place in hospitals at the Universities of California, Chicago, and Rochester, and at the Army hospital in Oak Ridge, and further experimentation with polonium and uranium had taken place at Rochester.

The surviving documentation provides little indication that the medical officials and researchers who planned this program considered the ethical implications of using patients for a purpose that no one claimed would benefit them, under circumstances where the existence of the substances injected was a wartime secret. Following the war, however, the ethical questions raised by these experiments would be revisited in debates that themselves were long kept secret.

In addition to experimentation with internally administered radioisotopes, external radiation was administered in human experiments directed by Dr. Stone at Chicago and San Francisco and by others at Memorial Hospital in New York City. Once again, the primary subjects were patients, although some healthy subjects were also involved. In these cases, the researchers may have felt that the treatment was of therapeutic value to the patients. But, in addition to the question of whether the patients were informed of the government's interest, this research raised the question of whether the government's interest affected the patients' treatment. As discussed in chapter 8, these questions would recur when, beginning in 1951, and for two decades thereafter, the Defense Department would fund the collection of data from irradiated patients.

Ensuring safety required more, however, than simply studying how radioactive substances moved through and affected the human body. It also involved studying how these substances moved through the environment. While undetectable to the human senses, radiation in the environment is easily measurable by instruments. When General Groves chose Hanford, on the Columbia River in Washington state, as a site for the plutonium production facility, a secret research program was mounted to understand the fate of radioactive pollution in the water, the air, and wildlife.

Outdoor research was at times improvisational. Years after the fact, Stafford Warren would recall how Manhattan Project researchers had deliberately "contaminated the alfalfa field" next to the University of Rochester medical school with radiosodium, to determine the shielding requirements for radiation-measuring equipment. Warren's associate Dr. Harold Hodge recalled that a shipment of radiosodium was received by plane from Robley Evans at MIT, mixed with water in a barrel, and poured into garden sprinklers:
We walked along and sprinkled the driveway. This was after dark. . . . The next thing, we went out and sprayed a considerable part of the field. . . . It was sprayed and then after a while sprayed again, so there was a second and third application. We were all in rubber, so we didn't get wet with the stuff . . . then Staff [Warren] said that one of the things we needed was to see what would be the effect on the inside of a wooden building. So we took the end of the parking garage, and we sprinkled that up about as high as our shoulders, and somebody went inside and made measurements, and we sprinkled it again. Then we wanted to know about the inside of a brick building, and so we sprinkled the side of the animal house. . . . I had no idea what the readings were. . . I hadn't the foggiest idea of what we were doing, except that obviously it was something radioactive.

Outdoor releases would put at risk unsuspecting citizens, even communities, as well as workers. There were no clear policies and no history of practice to guide how these releases should be conducted. As we explore in chapter 11, this would be worked out by experts and officials in secret, on behalf of the workers and citizens who might be affected.

Second World War (3)

World War 2 Propaganda


World War II saw continued use of propaganda as a weapon of war, both by Hitler's propagandist Joseph Goebbels and the British Political Warfare Executive.

Some historical revisionists claim that the use of gas chambers in the Holocaust is an instance of an effective Allied propaganda campaign that could not be reined in after the war, much like the now-discredited claim made during the Gulf War that Iraqi soldiers were ripping newborn babies out of incubators and throwing them to the ground.

History of Propaganda


Propaganda has been a human activity as far back as reliable recorded evidence exists. The writings of Romans like Livy are considered masterpieces of pro-Roman statist propaganda.

The term itself originates with the Roman Catholic Sacred Congregation for the Propagation of the Faith (sacra congregatio christiano nomini propagando or, briefly, propaganda fide), the department of the pontifical administration charged with the spread of Catholicism and with the regulation of ecclesiastical affairs in non-Catholic countries (mission territory).

The actual Latin stem propagand- conveys a sense of "that which ought to be spread".

Propaganda techniques were first codified and applied in a scientific manner by journalist Walter Lippmann and psychologist Edward Bernays (a nephew of Sigmund Freud) early in the 20th century. During World War I, Lippmann and Bernays were hired by President Woodrow Wilson to sway popular opinion toward entering the war on the side of Britain.

The war propaganda campaign of Lippmann and Bernays produced within six months so intense an anti-German hysteria as to permanently impress American business (and Adolf Hitler, among others) with the potential of large-scale propaganda to control public opinion. Bernays coined the terms "group mind" and "engineering of consent", important concepts in practical propaganda work.

The modern public relations industry is a direct outgrowth of this work, and its techniques are still used extensively by the United States government. Bernays himself ran a very successful public relations firm for much of the first half of the 20th century.

Nazi Germany


Most propaganda in Germany was produced by the Ministry for Public Enlightenment and Propaganda ("Promi" in German abbreviation). Joseph Goebbels was placed in charge of this ministry shortly after Hitler took power in 1933.

All journalists, writers, and artists were required to register with one of the Ministry's subordinate chambers for the press, fine arts, music, theater, film, literature, or radio.

The Nazis believed in propaganda as a vital tool for achieving their goals. Adolf Hitler, Germany's Führer, was impressed by the power of Allied propaganda during World War I and believed that it had been a primary cause of the collapse of morale and the revolts on the German home front and in the Navy in 1918.

Hitler would meet nearly every day with Goebbels to discuss the news and Goebbels would obtain Hitler's thoughts on the subject; Goebbels would then meet with senior Ministry officials and pass down the official Party line on world events.

Broadcasters and journalists required prior approval before their works were disseminated.

In addition, the Nazis had no moral qualms about spreading propaganda which they themselves knew to be false; indeed, spreading deliberately false information was part of a doctrine known as the Big Lie.

Nazi propaganda before the start of World War II had several distinct audiences:

  • German audiences were continually reminded of the struggle of the Nazi Party and Germany against foreign enemies and internal enemies, especially Jews.

  • Ethnic Germans in countries such as Czechoslovakia, Poland, the Soviet Union, and the Baltic states were told that blood ties to Germany were stronger than their allegiance to their new countries.

  • Potential enemies, such as France and Great Britain, were told that Germany had no quarrel with the people of the country, but that their governments were trying to start a war with Germany.

  • All audiences were reminded of the greatness of German cultural, scientific, and military achievements.


Until the Battle of Stalingrad ended on February 2, 1943, German propaganda emphasized the prowess of German arms and the humanity German soldiers had shown to the peoples of the occupied territories.

In contrast, British and Allied fliers were depicted as cowardly murderers, and Americans in particular as gangsters in the style of Al Capone. At the same time, German propaganda sought to alienate Americans and British from each other, and both these Western belligerents from the Soviets.

After Stalingrad, the main theme changed to Germany as the sole defender of Western European culture against the "Bolshevist hordes." The introduction of the V-1 and V-2 "vengeance weapons" was emphasized to convince Britons of the hopelessness of defeating Germany.

Goebbels committed suicide on May 1, 1945, the day after Hitler. In his stead, Hans Fritzsche, who had been head of the Radio Chamber, was tried and acquitted by the Nuremberg war crimes tribunal.

World War 2 Technology


One hundred years ago, a half century before the atomic bombing of Hiroshima and Nagasaki, the discovery of x rays spotlighted the extraordinary promise, and peril, of the atom. From that time until 1942, atomic research was in private hands. The Second World War and the Manhattan Project, which planned and built the first atomic bombs, transformed a cottage industry of researchers into the largest and one of the most secretive research projects ever undertaken. Scientists who had once raced to publish their results learned to speak in codes accessible only to those with a "need to know." Indeed, during the war the very existence of the man-made element plutonium was a national secret.

After the war's end, the network of radiation researchers, government and military officials, and physicians mobilized for the Manhattan Project did not disband. Rather, they began working on government programs to promote both peaceful uses of atomic energy and nuclear weapons development.

Having harnessed the atom in secret for war, the federal government turned enthusiastically to providing governmental and nongovernmental researchers, corporations, and farmers with new tools for peace--radioisotopes--mass-produced with the same machinery that produced essential materials for the nation's nuclear weapons. Radioisotopes, the newly established Atomic Energy Commission (AEC) promised, would create new businesses, improve agricultural production, and through "human uses" in medical research, save lives.

From its 1947 creation to the 1974 reorganization of atomic energy activities, the AEC produced radioisotopes that were used in thousands of human radiation experiments conducted at universities, hospitals, and government facilities. This research brought major advances in the understanding of the workings of the human body and the ability of doctors to diagnose, prevent, and treat disease.

The growth of radiation research with humans after World War II was part of the enormous expansion of the entire biomedical research enterprise following the war. Although human experiments had long been part of medicine, there had been relatively few subjects, the research had not been as systematic, and there were far fewer promising interventions than there were in the late 1940s.

With so many more human beings as research subjects, and with potentially dangerous new substances involved, certain moral questions in the relationship between the physician-researcher and the human subject--questions that were raised in the nineteenth century--assumed more prominence than ever: What was there to protect people if a researcher's zeal for data gathering conflicted with his or her commitment to the subjects' well-being? Was the age-old ethical tradition of the doctor-patient relationship, in which the patient was to defer to the doctor's expertise and wisdom, adequate when the doctor was also a researcher and the procedures were experimental?

While these questions about the role of medical researchers were fresh in the air, the Manhattan Project, and then the Cold War, presented new ethical questions of a different order.

In March 1946, former British Prime Minister Winston Churchill told an audience in Fulton, Missouri, that an "iron curtain" had descended between Eastern and Western Europe--giving a name to the hostile division of the continent that had existed since the end of World War II. By the following year, Cold War was the term used to describe this state of affairs between the United States and its allies on the one hand and the Soviet bloc on the other. A quick succession of events underscored the scope of this conflict, as well as the stakes involved: In 1948 a Soviet blockade precipitated a crisis over Berlin; in 1949, the American nuclear monopoly ended when the Soviet Union exploded its first atomic bomb; in 1950, the Korean War began.

The seeming likelihood that atomic bombs would be used again in war, and that American civilians as well as soldiers would be targets, meant that the country had to know as much as it could, as quickly as it could, about the effects of radiation and the treatment of radiation injury.

This need for knowledge put radiation researchers, including physicians, in the middle of new questions of risk and benefit, disclosure and consent. The focus of these questions was, directly and indirectly, an unprecedented public health hazard: nuclear war. In addressing these questions, medical researchers had to define the new roles that they would play.

As advisers to the government, radiation researchers were asked to assist military commanders, who called for human experimentation to determine the effects of atomic weapons on their troops. But these researchers also knew that human experimentation might not readily provide the answers the military needed.

As physicians, they had a commitment to prevent disease and heal. At the same time, as government advisers, they were called upon to participate in making decisions to proceed with weapons development and testing programs that they knew could put citizens, soldiers, and workers at risk. As experts they were asked to ensure that the risks would not be excessive. And as researchers they saw these programs as an opportunity for gathering data.

As researchers, they were often among the first to volunteer to take the risks that were unavoidable in such research. But the risks could not always be disclosed to members of the public who were also exposed.

In keeping with the tradition of scientific inquiry, these researchers understood that their work should be the subject of vigorous discussion, at least among other scientists in their field. But, as government officials and advisers, they understood that their public statements had to be constrained by Cold War national security requirements, and they shared in official concern that public misunderstanding could compromise government programs and their own research.

Medical researchers, especially those expert in radiation, were not oblivious to the importance of the special roles they were being asked to play. "Never before in history," began the 1949 medical text Atomic Medicine, "have the interests of the weaponeers and those who practice the healing arts been so closely related." This volume, edited by Captain C. F. Behrens, the head of the Navy's new atomic medicine division, was evidently the first treatise on the topic.

It concluded with a chapter by Dr. Shields Warren, the first chief of the AEC's Division of Biology and Medicine, who would become a major figure in setting policy for postwar biomedical radiation research. While the atomic bomb was not "of medicine's contriving," the book began, it was to physicians "more than to any other profession" that atomic energy had brought a "bewildering array of new problems, brilliant prospects, and inescapable responsibilities."

The text, a prefatory chapter explained, treats "not of high policy, of ethics, of strategy or of international control [of nuclear materials], as physicians these matters are not for us."[3] Yet what many readers of Atomic Medicine could not know in 1949 was that Behrens, along with Warren and other biomedical experts, was already engaged in vigorous but secret discussions of the ethics underlying human radiation experiments. At the heart of these discussions lay difficult choices at the intersection of geopolitics, science, and medicine that would have a fundamental impact on the federal government's relationship with the American people.

Radiation has existed in nature from the origins of the universe, but was unknown to man until a century ago. Its discovery came by accident. On a Friday evening, November 8, 1895, the German physicist Wilhelm Roentgen was studying the nature of electrical currents by using a cathode ray tube, a common piece of scientific equipment.

When he turned the tube on, he noticed to his surprise that a glowing spot appeared on a black paper screen coated with fluorescent material that was across the room. Intrigued, he soon determined that invisible but highly penetrating rays were being produced at one end of the cathode ray tube. The rays could expose photographic plates, leaving shadows of dense objects, such as bone.

After about six weeks of experimenting with his discovery, which he called x rays, Roentgen sent a summary and several "shadow pictures" to a local scientific society. The society published the report in its regular journal and wisely printed extra copies. News spread rapidly; Roentgen sent copies to physicists throughout Europe. One Berlin physicist "could not help thinking that I was reading a fairy tale . . . only the actual photograph proved to everyone that this was a fact."

Physicians immediately recognized these rays as a new tool for diagnosis, a window into the interior of the body. The useless left arm of German Emperor Wilhelm II was x-rayed to reveal the cause of his disability, while Queen Amelia of Portugal used x rays of several of her court ladies to vividly display the dangers of "tightlacing."

Physicians began to use x rays routinely for examining fractures and locating foreign objects, such as needles swallowed by children or bullets shot into adults. During World War I, more than 1.1 million wounded soldiers were treated with the help of diagnostic x rays.

In 1896, Roentgen's insight led to the discovery of natural radioactivity. Henri Becquerel, who had been studying phosphorescence, discovered that shadow pictures were also created when wrapped photographic plates were exposed to crystals partly composed of uranium. Could this radioactive property be concentrated further by extracting and purifying some as-yet-unknown component of the uranium crystals? Marie and Pierre Curie began laborious chemical analyses that led to the isolation of the element polonium, named after Marie's native Poland. Continuing their work, they isolated the element radium. To describe these elements' emission of energy, they coined the word radioactivity.

As with x rays, popular hopes and fears for natural radioactivity far exceeded the actual applications. One 1905 headline captures it all: "Radium, as a Substitute for Gas, Electricity, and as a Positive Cure for Every Disease." Following initial enthusiasm that radiation could, by destroying tumors, provide a miracle cure for cancer, the reappearance of irradiated tumors led to discouragement.

Despite distressing setbacks, research into the medical uses of radiation persisted. In the 1920s French researchers, performing experiments on animals, discovered that radiation treatments administered in a series of fractionated doses, instead of a single massive dose, could eliminate tumors without causing permanent damage. With the new method of treatment, doctors began to report impressive survival rates for patients with a variety of cancers. Fractionation became, and remains, an accepted approach to cancer treatment.

Along with better understanding of radiation's benefits came a better practical appreciation of its dangers. Radiation burns were quickly apparent, but the greater danger took longer to manifest itself. Doctors and researchers were frequently among the victims. Radiation researchers were also slow to take steps to protect themselves from the hidden danger. One journal opened its April 1914 issue by noting that "[w]e have to deplore once more the sacrifice of a radiologist, the victim of his art."

Clear and early evidence of tragic results sharpened both expert and public concern. By 1924, a New Jersey dentist had noticed an unusual rate of deterioration of the jawbone among local women. On further investigation he learned that all of them had at one time held jobs painting a radium solution onto watch dials.

Further studies revealed that as they painted, they licked their brushes to maintain a sharp point, and in doing so absorbed radium into their bodies. The radium gradually revealed its presence in blood disease and in a deterioration of the jaw that eventually became painful and disfiguring.

There was no question that radium was the culprit. The immediate outcome was a highly publicized crusade, investigations, lawsuits, and payments to the victims. Despite the publicity surrounding the dial painters, the response to the danger remained agonizingly slow. Patent medicines containing radium, and radium-based therapies, continued to be sold.

The tragedy of the radium dial painters and similar cases of patients who took radium nostrums have provided basic data for protection standards for radioactive substances taken into the body. One prominent researcher in the new area of radiation safety was Robley Evans.

Evans was drawn into the field by the highly publicized death in 1932 of Eben Byers, following routine consumption of the nostrum Radithor. Byers's death spurred Evans, then a California Institute of Technology physics graduate student, to undertake research that led to a study of the effects on the body of ingesting radium; this study would continue for more than half a century.

Evans's study and subsequent studies of the effects of radium treatments provided the anchor in human data for our understanding of the effects of radiation within the human body. As the dangers of the imprudent use of x rays and internal radiation became clear, private scientific advisory committees sprang up to develop voluntary guidelines to promote safety among those working with radiation. When the government did enter the atomic age, it often referred to the guidelines of these private committees as it developed radiation protection standards.

The Miracle of Tracers


In 1913, the Hungarian chemist Georg von Hevesy began to experiment with the use of radioactive forms of elements (radioisotopes) to trace the behavior of the normal, nonradioactive forms of a variety of elements. Ten years later Hevesy extended his chemical experiments to biology, using a radioisotope of lead to trace the movement of lead from soil into bean plants. In 1943, Hevesy won the Nobel Prize for his work on the use of radioisotopes as tracers.

Previously, those seeking to understand life processes of an organism had to extract molecules and structures from dead cells or organisms, and then study those molecules by arduous chemical procedures, or use traceable chemicals that were foreign to the organism being studied but that mimicked normal body chemicals in some important way. Foreign chemicals could alter the very processes being measured and, in any case, were often as difficult to measure precisely as were normal body constituents.

The radioactive tracer--as Our Friend the Atom, a book written by Dr. Heinz Haber for Walt Disney productions, explained in 1956 to readers of all ages--was an elegant alternative: "Making a sample of material mildly radioactive is like putting a bell on a sheep. The shepherd traces the whole flock around by the sound of the bell. In the same way it is possible to keep tabs on tracer-atoms with a Geiger counter or any other radiation detector."

By the late 1920s the tracer technique was being applied to humans in Boston by researchers using an injection of dissolved radon to measure the rate of blood circulation, an early example of using radioactivity to observe life processes. However, research opportunities were limited by the fact that some of the elements that are most important in living creatures do not possess naturally occurring radioactive isotopes.
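
The quantitative idea behind such measurements is isotope dilution, sketched here in its generic form (the specifics of the Boston radon protocol are not given in the text). Inject a known activity $A$ of tracer, let it mix thoroughly, and measure the activity concentration $c$ of a withdrawn sample; the volume of the compartment it mixed into follows as

$$ V = \frac{A}{c}. $$

Because detectors register individual decays, chemically negligible quantities of tracer suffice -- the property that made the method attractive for use in living subjects.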

The answer to this problem came simultaneously at faculty clubs and seminars in Berkeley and Boston in the early 1930s. Medical researchers realized that the famed "atom smasher," the cyclotron invented by University of California physicist Ernest Lawrence, could be used as a factory to create radioisotopes for medical research and treatment. "Take an ordinary needle," Our Friend the Atom explained, "put it into an atomic reactor for a short while. Some of the ions contained in the steel will capture a neutron and be transformed into a radioisotope of iron. . . . Now that needle could be found in the proverbial haystack without any trouble."

In 1936, two of Lawrence's Berkeley colleagues, Drs. Joseph Hamilton and Robert Stone, administered radiosodium to treat several leukemia patients. In 1937, Ernest Lawrence's brother, physician John Lawrence, became the first to use radiophosphorus for the treatment of leukemia. This application was extended the following year to the treatment of polycythemia vera, a blood disease.

This method soon became a standard treatment for that disease. In 1938, Hamilton and Stone also began pioneering work in the use of cyclotron-produced neutrons for the treatment of cancer. The following year, not long before the war in Europe began, Ernest Lawrence unveiled a larger atom smasher, to be used to create additional radioisotopes and hence dubbed the "medical cyclotron." The discovery that some radioisotopes deposited selectively in different parts of the body--the thyroid, for example--inspired a spirited search for a radioactive "magic bullet" that might treat, or even cure, cancer and other diseases.

In Cambridge, the age of "nuclear medicine" is said to have begun in November 1936 with a lunchtime seminar at Harvard, at which MIT President Karl Compton talked on "What Physics Can Do for Biology and Medicine." Robley Evans, by that time at MIT, is reported to have helped prepare the portion of the talk from which medical researchers at the Massachusetts General Hospital's thyroid clinic came to realize that MIT's atom smasher could produce a great research tool for their work--radioisotopes.

Soon, doctors at the thyroid clinic began a series of experiments, including some involving humans, that would lead to the development of radioiodine as a standard tool for diagnosing and treating thyroid disease.

In late 1938, the discovery of atomic fission in Germany prompted concern among physicists in England and the United States that Nazi Germany might be the first to harness the power of the atom--as a propulsion method for submarines, as radioactive poison, or most worrisome of all, as a bomb capable of unimagined destruction. In the United States, a world-famous physicist, Albert Einstein, and a recent émigré from Hungary, Leo Szilard, alerted President Franklin D. Roosevelt to the military implications of the German discovery in an August 1939 letter.

Roosevelt assigned his own science adviser, Vannevar Bush, to the task of determining the feasibility of an atomic bomb; his simple "O.K.," scrawled on a piece of paper, set in motion the chain of events that would lead to the largest and most expensive engineering project in history. Soon, Ernest Lawrence's Radiation Laboratory and its medical cyclotron were mobilized to aid in the nationwide effort to build the world's first atomic bomb. In a related effort, Drs. Stone and Hamilton, and others, would turn their talents to the medical research needed to ensure the safety of those working on the bomb.

On August 6, 1945, when the atomic bomb was dropped on Hiroshima, the most sensitive of secrets became a symbol for the ages. A week later, the bomb was the subject of a government report that revealed to the public the uses of plutonium and uranium. Immediately, debate began over the future of atomic energy. Could it be controlled at the international level? Should it remain entirely under control of the military? What role would industry have in developing its potential? Although American policymakers failed to establish international control of the bomb, they succeeded in creating a national agency with responsibility for the domestic control of atomic energy.

The most divisive question in the creation of the new agency that would hold sway over the atom was the role of the military. Following congressional hearings, the Atomic Energy Commission was established by the 1946 McMahon Act, to be headed by five civilian commissioners. President Truman appointed David Lilienthal, former head of the Tennessee Valley Authority, as the first chairman of the AEC, which took over responsibilities of the Manhattan Engineer District in January 1947.

Also in 1947, under the National Security Act, the armed services were put under the authority of the newly created National Military Establishment (NME), to be headed by the secretary of defense. In 1949 the National Security Act was amended, and the NME was transformed into an executive department--the Department of Defense. The Armed Forces Special Weapons Project, which would coordinate the Defense Department's responsibilities in the area of nuclear weapons, became the military heir to the Manhattan Engineer District. The Military Liaison Committee was also established as an intermediary between the Atomic Energy Commission and the Defense Department; it was also to help set military requirements for the number and type of nuclear weapons needed by the armed services.

Even before the AEC officially assumed responsibility for the bomb from the Manhattan Project, the Interim Medical Advisory Committee, chaired by former Manhattan Project medical director Stafford Warren, began meeting to map out an ambitious postwar biomedical research program. Former Manhattan Project contractors proposed to resume the research that had been interrupted by the war and to continue wartime radiation effects studies upon human subjects.

In May 1947, Lilienthal commissioned a blue-ribbon panel, the Medical Board of Review, which reported the following month on the agency's biomedical program. In strongly recommending a broad research and training program, the board found the need for research "both urgent and extensive." The need was "urgent because of the extraordinary danger of exposing living creatures to radioactivity. It is urgent because effective defensive measures (in the military sense) against radiant energy are not yet known." The board, pointing to the AEC's "absolute monopoly of new and important tools for research and important knowledge," noted the commensurate responsibilities -- both to employees and others who could suffer from "its negligence or ignorance" and to the scientific world, with which it was obliged to "share its acquisitions . . . whenever security considerations permit."

In the fall of 1947, as recommended by the Medical Board of Review, the AEC created a Division of Biology and Medicine (DBM) to coordinate biomedical research involving atomic energy and an Advisory Committee for Biology and Medicine (ACBM), which reported directly to the AEC's chairman.

Not surprisingly, the DBM and the ACBM became gathering places for the luminaries of radiation science. The ACBM was headed by a Rockefeller Foundation official, Dr. Alan Gregg. The commission settled on Dr. Shields Warren, a Harvard-trained pathologist, to serve as the first chief of the DBM. Warren, as we shall see, would play a central role in developments related to radiation research and human experimentation.

In the 1930s, focusing on cancer research, and influenced by the work of Hevesy and the pioneering radioisotope work being done in Berkeley and Boston, Warren turned to the question of the effects of radiation on animals and the treatment of acute leukemia, the "most hopeless . . . of tumors at that time." As the war neared, Warren enlisted in the Naval Reserve. He continued medical work for the Navy, turning down an invitation to join Stafford Warren (no relation) on "a project . . . that he couldn't tell me anything about [the Manhattan Project]."

While most of the AEC's budget would be devoted to highly secret weapons development and related activities, the biomedical research program represented the commission's proud public face. Even before the AEC opened its doors, Manhattan Project officials and experts had laid the groundwork for a bold program to encourage the use of radioisotopes for scientific research, especially in medicine. This program was first presented to the broad public in a September 1946 article in the New York Times Magazine. The article began dramatically by describing the use of "radioactive salt" to measure circulation in a crushed leg, so that a decision on whether to amputate below or above the knee could be made.

By November 1946, the isotope distribution program was well under way, with more than 200 requests approved, about half of which were designated for "human uses." From the beginning, the AEC's Isotope Division at Oak Ridge had in its program director, Paul Aebersold, a veritable Johnny Appleseed for radioelements.

In presentations before the public and to researchers, Aebersold, dubbed "Mr. Isotope," touted the simplicity and low cost with which scientists would be provided with radioisotopes: "The materials and services are made available . . . with a minimum of red tape and under conditions which encourage their use." At an international cancer conference in St. Louis in 1947, the AEC announced that it would make radioisotopes available without cost for cancer research and experimental cancer treatment. This, Shields Warren later recalled, had a "tremendous effect" and "led to a revolution in the type of work done in this field."

To AEC administrators, Aebersold emphasized the benefits to the AEC's public image: "Much of the Commission's success is judged by the public and scientists . . . on its willingness to carry out a wide and liberal policy on the distribution of materials, information, and services," he wrote in a memo to the AEC's general manager.

The AEC biomedical program as a whole also provided for funding of cancer research centers, research equipment, and numerous other research projects. Here, too, were advances that would save many lives. Before the war, radiotherapy had reached a plateau, limited by the cost of radium and the inability of the machines of the time to focus radiation precisely on tumors to the exclusion of surrounding healthy tissue.

AEC facilities inherited from the Manhattan Project could produce radioactive cobalt, a cheaper substitute for radium. In addition, the AEC's "teletherapy" program funded the development of new equipment capable of producing precisely focused high-energy beams.

The AEC's highly publicized peacetime medical program was not immune to the pressures of the Cold War political climate. Even the lives of young researchers in the AEC Fellowship Program conducting nonclassified research were subject to Federal Bureau of Investigation review despite protests from commission members.

Congressionally mandated Cold War requirements such as loyalty oaths and noncommunist affidavits, Chairman Lilienthal declared, would have a chilling effect on scientific discussion and could damage the AEC's ability to recruit a new generation of scientists. The reach of the law, the Advisory Committee for Biology and Medicine agreed, was like a "blighting hand; for thoughtful men now know how political domination can distort free inquiry into a malignant servant of expediency and authoritarian abstraction." Nonetheless, the AEC accepted the congressional conditions for its fellowship program and determined to seek the program's expansion.

The AEC's direct promotional efforts were multiplied by the success of Aebersold and his colleagues in carrying the message to other government agencies, as well as to industry and private researchers. This success led, in turn, to new programs.

In August 1947, General Groves urged Major General Paul Hawley, the director of the medical programs of the Veterans Administration, to address medical problems related to the military's use of atomic energy. Soon thereafter, Hawley appointed an advisory committee, manned by Stafford Warren and other medical researchers. The advisers recommended that the VA create both a "publicized" program to promote the use of radioisotopes in research and a "confidential" program to deal with potential liability claims from veterans exposed to radiation hazards. The "publicized" program soon mushroomed, with Stafford Warren, Shields Warren, and Hymer Friedell among the key advisers.

By 1974, according to VA reports, more than 2,000 human radiation experiments would be performed at VA facilities, many of which worked in tandem with neighboring medical schools; one example was the relationship between the UCLA medical school, where Stafford Warren was now dean, and the Wadsworth (West Los Angeles) VA Hospital.

While the AEC's weapons-related work would continue to be cloaked in secrecy, the isotope program was used by researchers in all corners of the land to achieve new scientific understanding and help create new diagnostic and therapeutic tools. It was, however, only a small part of an enormous institution. By 1951 the AEC would employ 60,000 people, all but 5,000 through contractors. Its land would encompass 2,800 square miles, an area equal to Rhode Island and Delaware combined.

In addition to research centers throughout the United States, its operations "extend[ed] from the ore fields of the Belgian Congo and the Arctic region of Canada to the weapons proving ground at Enewetak Atoll in the Pacific and the medical projects studying the after-effects of atomic bombing in . . . Japan." The Isotope Division, however, would employ only about fifty people and, even when reactor production time was accounted for, would consume only a fraction of the AEC's budget and resources.

Second World War (2)

World War 2 Holocaust


The word 'Holocaust', from the Greek word 'holokauston' meaning "a burnt sacrifice offered to God", originally referred to a sacrifice Jews were required to make by the Torah, and later to large-scale catastrophes or massacres. Because of the theological meaning this word carries, many Jews find its use problematic, as it could imply that the Jews were a sacrifice. Instead of holocaust, many Jews prefer the Hebrew word Shoah, which means "destruction".

While nowadays the term 'Holocaust' usually refers to the Nazi genocide of the Jews, described below, it is also sometimes used to refer to other occurrences of genocide, especially the Armenian and Hellenic Holocausts, the murder of about 2.5 million Christians by the Young Turk government between 1915 and 1923.

However, the Turkish government officially denies that there was any genocide, claiming that most of the deaths resulted from armed conflict, disease, and famine during the turmoil of World War I, despite the fact that most casualties occurred in villages far from the battlefield and that there is historical evidence this was a systematic attempt to wipe out all non-Muslims.

In some circles, the term holocaust is also used to describe the systematic murder of the other groups exterminated in the same circumstances by the Nazis, including ethnic Roma and Sinti (also known as Gypsies), political dissidents, communists, homosexuals, mental patients, Jehovah's Witnesses, Russians, Poles, and other Slavs. Counting these groups raises the total number of victims of the Nazis to between ten and fourteen million civilians, and up to 4 million POWs.

Today, the term is also used to describe other attempts at genocide, both before and after World War II, or more generally, for any overwhelmingly massive deliberate loss of life, such as that which would result from nuclear war, hence the phrase "Nuclear Holocaust".

Shoa, also spelled Shoah and Sho'ah (Hebrew for "destruction"), is the Hebrew term for the Holocaust. It is used by many Jews and a growing number of Christians out of theological discomfort with the literal meaning of the word Holocaust, since it is considered theologically offensive to imply that the Jews of Europe were a sacrifice to God.

It is nonetheless recognized that most people who use the term Holocaust do not intend such a meaning. Similarly, many Roma (Gypsy) people use the word Porajmos, meaning "Devouring" to describe the Nazi attempt to exterminate that group.

One feature of the Nazi Holocaust that distinguishes it from other mass murders was the systematic method with which the killings were conducted. Detailed lists of present and potential future victims were made, and meticulous records of the killings have been found.

In addition, considerable effort was expended over the course of the Holocaust to find increasingly efficient means of killing more people; for example, the switch from carbon monoxide poisoning in the Aktion Reinhard death camps of Belzec, Sobibor, and Treblinka to the use of Zyklon-B at Majdanek and Auschwitz. Gas vans using carbon monoxide for mass killings were used in the Chelmno death camp.

In addition to mass killings, the Nazis conducted many experiments on prisoners, including children. Dr. Josef Mengele, one of the most widely known Nazis, was called the "Angel of Death" by the inmates of Auschwitz for his experiments.

The full extent of what was happening in German-controlled areas was not known until after the war. However, numerous rumors and eye-witness accounts from escapees and others did give some indication that Jews were being killed in large numbers. Some protests were held; for example, on October 29, 1942, leading clergymen and political figures in the United Kingdom held a public meeting to register outrage over Germany's persecution of Jews.

Concentration and Extermination Camps


Concentration camps for "undesirables" were spread throughout Europe, with new camps being created near centers of dense "undesirable" populations, often focusing on areas with large numbers of Jews, members of the Polish intelligentsia, communists, or Roma. Most of the camps were located in the area of the General Government.

Concentration camps for Jews and other "undesirables" also existed in Germany itself, and while they were not specifically designed for systematic extermination, many concentration camp prisoners died because of harsh conditions or were executed.

Some camps, such as Auschwitz-Birkenau, combined slave labor with systematic extermination. Upon arrival in these camps, prisoners were divided into two groups: those too weak for work were immediately murdered in gas chambers (which were sometimes disguised as showers) and their bodies burned, while others were first used for slave labor in factories or industrial enterprises located in the camp or nearby.

The Nazis also forced some prisoners to work in the removal of the corpses and to harvest elements of the bodies. Gold teeth were extracted from the corpses and women's hair (shaved from the heads of victims before they entered the gas chambers) was recycled for use in products such as rugs and socks.

Three camps--Belzec, Sobibor, and Treblinka II--were used exclusively for extermination. Only a small number of prisoners were kept alive to work at the task of disposing of the bodies of people murdered in the gas chambers.

Transport to the camps was often carried out under horrifying conditions in rail freight cars.

Jews


Anti-Semitism was common in Europe in the 1920s and 1930s (though its history extends back many centuries). Adolf Hitler's fanatical anti-Semitism was laid out in his 1925 book Mein Kampf, which became popular in Germany once he acquired political power. On April 1, 1933, the recently elected Nazis, under Julius Streicher, organized a one-day boycott of all Jewish-owned businesses in Germany (the last remaining Jewish enterprises in Germany were closed on July 6, 1939). This policy helped usher in a series of anti-Semitic acts that would eventually culminate in the Holocaust.

In many cities throughout Europe, Jews had been living in concentrated areas. During the first years of World War II, the Nazis formalized the borders of these areas and restricted movement, creating modern ghettos to which Jews were confined. The ghettos were, in effect, prisons, in which many Jews died from hunger and disease; others were executed by the Nazis and their collaborators.

Concentration camps for Jews also existed in Germany itself. During the invasion of the Soviet Union, special killing units (Einsatzgruppen) totaling over 3,000 men followed the armed forces and conducted mass killings of the Jewish population living on Soviet territory. Entire communities were wiped out: their inhabitants rounded up, robbed of their possessions and clothing, and shot at the edges of ditches.

In December 1941, Hitler finally decided to exterminate the Jews of Europe. In January 1942, during the Wannsee Conference, several Nazi leaders discussed the details of the "final solution of the Jewish question" (Endlösung der Judenfrage).

Dr. Josef Buhler pushed Heydrich to begin the final solution in the General Government. The Nazis then began to systematically deport the Jewish populations of the ghettos and of all occupied territories to extermination camps such as Auschwitz and Treblinka II.

Homosexuals


Homosexuals were another of the groups targeted during the Holocaust. However, the Nazi party made no attempt to exterminate all homosexuals; according to Nazi law, being homosexual was not itself grounds for arrest. Some prominent members of the Nazi leadership were known to other Nazi leaders to be homosexual, which may account for the mixed signals the leadership gave on how to deal with homosexuals. Some leaders clearly wanted homosexuals exterminated; others wanted them left alone; still others wanted laws against homosexual acts enforced but otherwise allowed homosexuals to live as other citizens did.

Estimates vary wildly as to the number of homosexuals killed, ranging from as low as 10,000 to as high as 600,000. The large variance depends partly on how researchers tally those who were both Jewish and homosexual, or Jewish, homosexual, and communist. In addition, records of the reasons for internment are non-existent in many areas. See Homosexuals in Nazi Germany for more information.

Gypsies


Hitler's campaign of genocide against the Roma people of Europe was seen by many as a particularly bizarre application of Nazi racial science. German anthropologists were forced to contend with the fact that the Gypsies were descendants of the original Aryan invaders of India who had made their way back to Europe. Ironically, this made them no less Aryan than the German people itself, in theory if not in practice. The dilemma was resolved by Professor Hans Gunther, a leading racial scientist, who wrote:
"The Gypsies have indeed retained some elements from their Nordic home, but they are descended from the lowest classes of the population in that region. In the course of their migration, they absorbed the blood of the surrounding peoples, thus becoming an Oriental, West-Asiatic racial mixture with an addition of Indian, mid-Asiatic, and European strains."


As a result, however, and despite discriminatory measures, some groups of Roma, including the Sinti and Lalleri tribes of Germany, were spared deportation and death. Remaining Gypsy groups suffered much like the Jews (and in some instances, were degraded even more than Jews). In Eastern Europe, Gypsies were deported to the Jewish ghettoes, shot by SS Einsatzgruppen in their villages, and deported and gassed in Auschwitz and Treblinka.

Others


The Nazis targeted Slavic people, mostly intellectuals and prominent figures, although there were also some mass murders and instances of genocide (those carried out by the Croatian Ustashe being the most notorious example).

During Operation Barbarossa, the German invasion of the Soviet Union (1941-1944), hundreds of thousands (if not millions) of Russian army POWs were arbitrarily executed in the field by the invading German armies, in particular by the notorious Waffen S.S., or were shipped to the many extermination camps for execution simply because they were of Slavic extraction. Thousands of Russian peasant villages were annihilated by German troops for much the same reason.

Around 2,000 Jehovah's Witnesses perished in concentration camps, where they were held for political and ideological reasons: they refused involvement in politics, would not say "Heil Hitler," and did not serve in the German army. See Jehovah's Witnesses and the Holocaust.

On August 18, 1941, Adolf Hitler ordered an end to the systematic euthanasia of mentally ill and handicapped people due to protests within Germany.

Extent of the Holocaust


The exact number of people killed by the Nazi regime is still subject to further research. Recently declassified British and Soviet documents have indicated the total may be somewhat higher than previously believed. However, the following estimates are considered to be highly reliable.

  • 5.6–6.1 million Jews

  • 3.5–6 million Slavic civilians

  • 2.5–4 million POWs

  • 1–1.5 million political dissidents

  • 200,000–800,000 Roma & Sinti

  • 200,000–300,000 handicapped

  • 10,000–250,000 homosexuals

  • 2,000 Jehovah's Witnesses


The Triangles


To identify prisoners in the camps according to their "offense," they were required to wear colored triangles on their clothing. Although the colors used differed from camp to camp, the most common were:

  • Yellow: Jews -- two triangles overlaid to form a Star of David, with the word "Jude" (Jew) inscribed

  • Red: political dissidents, including communists

  • Green: common criminals

  • Purple: Jehovah's Witnesses

  • Blue: emigrants

  • Brown: Roma and Sinti (Gypsies)

  • Black: Lesbians and "anti-socials"

  • Pink: Gay men


Historical Interpretations


As with any historical event, scholars continue to argue over what, exactly, happened, and why. Among the major questions historians have sought to answer are:

  • how many people were killed in the Holocaust?

  • who was directly involved in the killing?

  • who authorized the killing?

  • who knew about the killing?

  • why did people directly participate in, authorize, or tacitly accept the killing?


Functionalism versus Intentionalism


A major issue in contemporary Holocaust studies is the question of functionalism versus intentionalism. Intentionalists argue that the Holocaust was planned by Hitler from the very beginning. Functionalists hold that the Holocaust was started in 1942 as a result of the failure of the Nazi deportation policy and the impending military losses in Russia. They claim that extermination fantasies outlined in Hitler's Mein Kampf and other Nazi literature were mere propaganda and did not constitute concrete plans.

Another controversy was started by the historian Daniel Goldhagen, who argues that ordinary Germans were knowing and willing participants in the Holocaust, which he claims had its roots in a deep "eliminationist" German anti-Semitism. Others counter that while anti-Semitism undeniably existed in Germany, the extermination was unknown to many and had to be enforced by the dictatorial Nazi apparatus.

Revisionists and Deniers


Some groups, commonly referred to as "Holocaust deniers," deny that the Holocaust happened. Many Holocaust deniers are neo-Nazis or simply antisemites.

Holocaust revisionism claims that far fewer than 5-6 million Jews were killed, and that the killing was not the result of deliberate Nazi policy. Although Holocaust revisionists claim to present documentary evidence to support their claims, critics argue that the evidence is flawed, the research specious, and the conclusions predetermined. Many hold that such revisionism is a form of anti-Semitism and tantamount to denial.

Holocaust Theology


In light of the magnitude of the Holocaust, many people have re-examined classical theological views on God's goodness and actions in the world. How can people still have faith after the Holocaust? For theological responses to the questions raised by the Holocaust, see Holocaust theology.

Political Ramifications


The Holocaust has had a number of political and social ramifications that reach to the present. The need to find a homeland for the many Jewish refugees led a great many Jews to emigrate to Palestine, much of which was soon to become the modern State of Israel. This immigration had a direct effect on the Arabs of the region, as discussed in the articles on the Arab-Israeli conflict, the Israeli-Palestinian conflict, and in many articles linked to these.


World War 2 Atomic Bomb



On August 6 and 9, 1945, the cities of Hiroshima and Nagasaki were destroyed by the first atomic bombs used in warfare.

The first atomic bomb ever to be used in a military operation was dropped on the city of Hiroshima, Japan, on August 6, 1945, at 8:16:02 a.m. Hiroshima time. The bomb, nicknamed "Little Boy," exploded 1,900 feet above the courtyard of Shima Hospital with a force equivalent to 12,500 tons of TNT. By the end of 1945, 140,000 people had died as a direct result of the bombing. Within the following five years, another 60,000 would die of bomb-related causes.
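For a rough sense of scale, the "tons of TNT" figures quoted here can be converted to energy using the conventional definition of one ton of TNT as 4.184 gigajoules. The short Python sketch below just works through that arithmetic; the conversion constant is the standard convention, not a figure from the original reports:

    # Convert a TNT-equivalent yield to energy in joules.
    # 1 ton of TNT is conventionally defined as 4.184e9 J (an assumption
    # of this illustration; the article itself gives only TNT tonnage).
    TNT_JOULES_PER_TON = 4.184e9

    def yield_in_joules(tons_of_tnt):
        """Energy released, in joules, for a given TNT-equivalent yield."""
        return tons_of_tnt * TNT_JOULES_PER_TON

    print(yield_in_joules(12500))  # Hiroshima figure above: ~5.2e13 J
    print(yield_in_joules(22000))  # Nagasaki figure below: ~9.2e13 J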

The bomb killed men, women, and children indiscriminately, military personnel and civilians alike. Although the city produced military items and housed soldiers, it was not the "purely military target" that President Truman had promised. There were six civilians in Hiroshima for every soldier.

The second bomb, called "Fat Man," exploded over Nagasaki, Japan, at 11:02 a.m. on August 9, 1945. It exploded at 1,650 feet with a force equivalent to 22,000 tons of TNT. 70,000 people lost their lives in Nagasaki by the end of 1945 due to the bombing; a total of 140,000 died within the next five years.

Hiroshima
During World War II, Hiroshima was a city of considerable military importance. It contained the 2nd Army Headquarters, which commanded the defense of all of southern Japan. The city was a communications center, a storage point, and an assembly area for troops. To quote a Japanese report, "Probably more than a thousand times since the beginning of the war did the Hiroshima citizens see off with cries of 'Banzai' the troops leaving from the harbor."

The center of the city contained a number of reinforced concrete buildings as well as lighter structures. Outside the center, the area was congested by a dense collection of small wooden workshops set among Japanese houses; a few larger industrial plants lay near the outskirts of the city.

The houses were of wooden construction with tile roofs. Many of the industrial buildings also were of wood frame construction. The city as a whole was highly susceptible to fire damage.

Some of the reinforced concrete buildings were of a far stronger construction than is required by normal standards in America, because of the earthquake danger in Japan. This exceptionally strong construction undoubtedly accounted for the fact that the framework of some of the buildings which were fairly close to the center of damage in the city did not collapse.

Another factor was that the blast was directed more downward than sideways; this had much to do with the "survival" of the Prefectural Promotional Hall, which was only a few metres from the aiming point.

The population of Hiroshima had reached a peak of over 380,000 earlier in the war, but prior to the atomic bombing it had steadily decreased because of a systematic evacuation ordered by the Japanese government. At the time of the attack the population was approximately 255,000. This figure is based on the registered population, used by the Japanese in computing ration quantities; the estimates of additional workers and troops who were brought into the city may not be highly accurate.

Hiroshima was the primary target of the first U.S. nuclear attack mission. The mission went smoothly in every respect. The weather was good, and the crew and equipment functioned perfectly. In every detail, the attack was carried out exactly as planned, and the bomb performed exactly as expected.

The bomb exploded over Hiroshima at 8:15 on the morning of August 6, 1945. About an hour previously, the Japanese early warning radar net had detected the approach of some American aircraft headed for the southern part of Japan. The alert had been given and radio broadcasting stopped in many cities, among them Hiroshima.

The planes approached the coast at a very high altitude. At nearly 8:00 A.M., the radar operator in Hiroshima determined that the number of planes coming in was very small - probably not more than three - and the air raid alert was lifted. The normal radio broadcast warning was given to the people that it might be advisable to go to shelter if B-29s were actually sighted, but no raid was expected beyond some sort of reconnaissance.

At 8:16 A.M., the B-29 Enola Gay dropped the atomic bomb called "Little Boy" over the central part of the city and the bomb exploded with a blast equivalent to 12,000 tons of TNT, killing 80,000 outright.

At the same time, the Tokyo control operator of the Japanese Broadcasting Corporation noticed that the Hiroshima station had gone off the air. He tried to use another telephone line to reestablish his program, but it too had failed. About twenty minutes later the Tokyo railroad telegraph center realized that the main line telegraph had stopped working just north of Hiroshima. From some small railway stops within ten miles of the city there came unofficial and confused reports of a terrible explosion in Hiroshima. All these reports were transmitted to the Headquarters of the Japanese General Staff.

Military headquarters repeatedly tried to call the Army Control Station in Hiroshima. The complete silence from that city puzzled the men at Headquarters; they knew that no large enemy raid could have occurred, and they knew that no sizeable store of explosives was in Hiroshima at that time. A young officer of the Japanese General Staff was instructed to fly immediately to Hiroshima, to land, survey the damage, and return to Tokyo with reliable information for the staff. It was generally felt at Headquarters that nothing serious had taken place, that it was all a terrible rumor starting from a few sparks of truth.

The staff officer went to the airport and took off for the southwest. After flying for about three hours, while still nearly 100 miles from Hiroshima, he and his pilot saw a great cloud of smoke from the bomb. In the bright afternoon, the remains of Hiroshima were burning.

Their plane soon reached the city, around which they circled in disbelief. A great scar on the land, still burning, and covered by a heavy cloud of smoke, was all that was left of a great city. They landed south of the city, and the staff officer immediately began to organize relief measures, after reporting to Tokyo.

Tokyo's first knowledge of what had really caused the disaster came from the White House public announcement in Washington, sixteen hours after the nuclear attack on Hiroshima. By the end of 1945, it is estimated that 60,000 more people died due to nuclear fallout sickness. However, this total does not include longer term casualties from radiation exposure.

Starting almost immediately after the conclusion of World War II, and continuing to the present day, the dropping of the atomic bombs on the cities of Hiroshima and Nagasaki has been questioned. Their use has been called barbaric since, besides destroying a military base and a military-industrial center, it killed tens of thousands of civilians.

Some have claimed that the Japanese were already essentially defeated, and that use of the bombs was unnecessary. Some have also suggested that a demonstration of an atomic bomb in an uninhabited region should have been attempted.

In reply, defenders of the decision to use the bombs say that it is almost certain that the Japanese would not have surrendered without their use, and that hundreds of thousands - perhaps millions - would have perished in the planned U.S. invasion of Japan.

To support their argument, they point out that the Japanese agreed to surrender only after the second bomb was dropped, when it was evident that the first was not an isolated event and that the future prospect was a continuing rain of such bombs. In fact, the U.S. did not have another atomic bomb ready after the bombing of Nagasaki, due to the difficulty of producing fissile material. Regarding the suggestion of a demonstration, they maintain that, given the mind-set of the Japanese at the time, it is unlikely that any conceivable benign demonstration would have induced surrender.

Others contend that Japan had been trying to surrender for at least two months, but the US refused by insisting on an unconditional surrender—which they did not get even after the bombing, the bone of contention being retention of the Emperor.

Tens of thousands of people marked the 40th anniversary of the atomic bombing of the city on August 6, 1985.

Nagasaki
The city of Nagasaki had been one of the largest sea ports in southern Japan and was of great war-time importance because of its many and varied industries, including the production of ordnance, ships, military equipment, and other war materials. The narrow long strip attacked was of particular importance because of its industries.

In contrast to many modern aspects of Nagasaki, the residences almost without exception were of flimsy, typical Japanese construction, consisting of wood or wood-frame buildings, with wood walls with or without plaster, and tile roofs. Many of the smaller industries and business establishments were also housed in wooden buildings or flimsily built masonry buildings.



Nagasaki had been permitted to grow for many years without conforming to any definite city zoning plan; residences were therefore constructed adjacent to factory buildings, and to each other, almost as closely as it was possible to build them throughout the entire industrial valley.

Nagasaki had never been subjected to large-scale bombing prior to the explosion of a nuclear weapon there. On August 1, 1945, however, a number of high-explosive bombs were dropped on the city. A few of these hit the shipyards and dock areas in the southwest portion of the city, several hit the Mitsubishi Steel and Arms Works, and six landed at the Nagasaki Medical School and Hospital, with three direct hits on buildings there.

While the damage from these few bombs was relatively small, it created considerable concern in Nagasaki, and a number of people, principally school children, were evacuated to rural areas for safety, thus reducing the population in the city at the time of the nuclear attack.

At 11:02 a.m. on August 9, 1945, the American B-29 Superfortress "Bockscar," in search of the shipyards, instead spotted the Mitsubishi Arms Works through a break in the clouds. On this target it dropped the nuclear bomb Fat Man, the second nuclear weapon to be detonated over Japan. Even though "Fat Man" missed its aiming point by over a mile and a half, it still leveled nearly half the city. 75,000 of Nagasaki's 240,000 residents were killed, followed by the deaths of at least as many from resulting sickness and injury.

However, another report gives different figures, stating that Nagasaki's population dropped in one split second from 422,000 to 383,000; by this account 39,000 were killed and over 25,000 injured.

If those who later died from radiation-induced cancers are taken into account, the total number of deaths is believed to be at least 100,000 residents. Physicists who have studied each atomic explosion estimate that the bombs utilized only one-tenth of one percent of their explosive capability.

The city was rebuilt after the war, albeit dramatically changed, as any city would be after such colossal damage. New temples were built, and new churches as well, since the Christian presence never died out and even increased dramatically in numbers after the war.

Some of the rubble was left as a memorial, like the one-legged torii gate and a stone arch near ground zero. New structures were also raised as memorials, such as the Atomic Bomb Museum. Nagasaki remains first and foremost a port city, supporting a rich shipping industry and setting a strong example of perseverance and peace.



Second World War Weapons




Karabiner 98k



The Karabiner 98k was a German bolt-action rifle, a shortened derivative of the Mauser design that entered general service in 1898. It was manufactured by the Mauser armory in huge quantities until it became obsolete after WWII. The 98k holds five rounds of 7.9mm ammunition loaded from a stripper clip. It was the primary German infantry rifle in both world wars, and was noted for its excellent accuracy and an effective range of 800 meters.

For this reason it continued to be used with a telescopic sight as a sniper rifle after it became obsolete as a standard weapon. The 98k had the same disadvantages as other turn-of-the-century military rifles: it was bulky and heavy, with a slow rate of fire. It was also designed to be used with a bayonet and to fire special grenades. A version with a folding stock was introduced in 1941 for use by airborne marksmen.

Towards the end of the war the 98K was being phased out in favor of the much more advanced SG44.



Sturmgewehr 44



The Sturmgewehr 44 was the world's first true assault rifle, introduced by the German army late in WWII. It was the direct inspiration for the Russian AK47, the most prolific gun in the world. Had the war continued another year, the SG44 would have replaced every other rifle, light machine gun, and submachine gun in the Wehrmacht, including the antique Karabiner 98k and the anemic MP38.

The SG44 was revolutionary in that it combined the best elements of both rifles and submachine guns. It fired an intermediate cartridge that was powerful enough to hit targets accurately at long range, yet not so powerful that automatic fire became uncontrollable.

The SG44 was originally called the Maschinenpistole 43, but when Hitler cancelled the MP43 project for dubious reasons, its designers were so confident in its benefits that they changed the name and secretly continued development.

The assault rifle proved an invaluable weapon, especially on the Eastern front, where it was first deployed. A properly trained soldier with an SG44 had a greatly improved tactical repertoire: he could effectively engage targets at long range across open terrain or in close-range urban fighting, and could provide covering fire in a light machine gun role in all situations.

The wisdom of the assault rifle concept has been borne out in that, with the exception of a few specialized positions such as the sniper, virtually every soldier in every army today carries a descendant of the SG44.

Thompson M1



Also known as the Tommy Gun, the Thompson was a popular submachine gun that became [in]famous during Prohibition, when gangsters used it because of the high volume of automatic fire it made available from such a compact firearm, and because it could be obtained legally.

Designed during World War I by General John T. Thompson, the Tommy Gun was chambered for the .45 caliber ACP (Automatic Colt Pistol) cartridge and was used by the US Army through WW2. The means of operation is direct blow-back, although early models made use of the Blish lock, turning the mechanism into a delayed blow-back system. After WW2 it saw limited service in Korea, and was carried unofficially by a smattering of soldiers in Vietnam. Domestically, it was used by law enforcement, most prominently by the FBI, until 1976, when it was declared obsolete and all Thompsons in government possession were destroyed, except for a few token museum pieces and training models. Owing to both its gangster and WWII connections, Thompsons are highly sought-after collector's items: an original 1928 gun in working condition can easily fetch $15,000. Semi-auto replicas are currently produced by the Auto-Ordnance Company, which is operated as a division of Kahr firearms.

M1 Garand



The Garand (M1) was the first semi-automatic rifle to be put in active military service. It weighed 9 pounds 8 ounces unloaded, and was 43.5 inches long. Simple in construction and easy to maintain, the rifle fired a standard clip of eight rounds, originally .276" caliber but later modified to .30" caliber. (The prototype rifles in .276 had a capacity of 10 rounds.)

It was developed by weapons designer John Garand in the 1930s, and the .30" caliber weapon became the standard long arm of the US Army, entering service in 1936. It served through World War II and the Korean War, where it proved to be such an excellent weapon that the Axis powers used as many as they could capture. Some were still being used in the Vietnam War in 1963, although the rifle was officially superseded by the M14 rifle in 1957.

It did have its defects. The magazine held 8 cartridges, which were loaded by inserting an "en bloc" clip containing them into the rifle. It was not possible to load single rounds, so a partially discharged magazine could not be easily refilled. When the rifle fired the last round, it automatically ejected the clip, producing a loud high-pitched "ping" sound, although this generally could not be heard over the din of battle, despite the commonly-heard myth to the contrary.

Despite these problems, the rifle was well-received in several quarters. Gen. George S. Patton called it "the greatest implement of battle ever devised." The rifle remains popular with civilian weapons collectors and enthusiasts in the United States.

MP38/40



The MP38 was the standard German submachine gun of WWII. As the number in its name suggests, it was first issued in 1938. Two years later it was replaced by the MP40, which was identical except that it used less expensive stamped metal for certain parts, more cost-effective for a mass-produced weapon. It was a very successful firearm; even Allied forces preferred it over their own submachine guns and scavenged MP40s whenever possible. The design was copied by other countries both during and after the war.

M1 Carbine



The M1 Carbine is a lightweight semi-automatic carbine that was a standard firearm in the U.S. military during World War II and the Korean War. Despite sharing the "M1" designation, it was a separate design rather than a shortened version of the M1 Garand rifle.

The basic idea for the gun arose in response to the blitzkrieg tactical doctrine of the Axis. Facing an enemy that used this tactical approach meant that support troops could come under direct attack by front-line forces. In anticipation of this possibility, the carbine was commissioned to supply an adequate defensive weapon to those troops.

However, the weapon was also adopted by regular troops, since it had a good rate of fire and a superior range to a submachine gun, yet was less unwieldy than a regular M1 Garand rifle, which made it suitable for close-quarters combat. The folding-stock version was also favored by paratroopers due to its low weight and compactness, while being more powerful than the previously issued Tommy Guns.

During the two wars in which it was used, 6.25 million M1 Carbines of various models were manufactured, thus making it the most produced small arm in American military history.



Colt M1911



The M1911 is a .45" caliber, single action, semi-automatic handgun, originally designed by John Browning, which was the standard-issue handgun in the combat arm of the United States Armed Forces from 1911 to 1985.

The weapon had its origins in problems encountered by American units fighting Moro insurgents during the Philippine-American War in which the then-standard .38" caliber revolver was found to be unsuitable for the rigors of jungle warfare. The Army formed an Ordnance Board, headed by John T. Thompson, to select a more suitable weapon. The board decided a .45" caliber weapon would be most appropriate, and took bids from six firearms manufacturing companies in 1906.

Of the six designs submitted, two were selected for field testing in 1907, one of them being Colt's model, which Browning had basically modified to government specifications from an earlier autoloading .38" caliber design of his. A series of field tests was designed to decide between the two finalists (the other being a design by Arthur Savage) and the Colt passed with flying colors, firing 6,000 rounds non-stop (a record at the time).

The weapon was formally adopted by the Army on March 29, 1911, thus gaining its nomenclature. It was adopted by the Navy and Marine Corps in 1913. Originally manufactured only by Colt, demand for the firearm for use in World War I saw the expansion of manufacture to the government-owned Springfield Armory.

Battlefield experience in the First World War led to a redesign of the weapon, completed in 1926 and named the M1911A1. Changes to the original design were exceedingly minor (shorter trigger, recess near the trigger frame, etc.); for this reason, those unfamiliar with the sidearm are often unable to tell the difference between the two models at a glance. Those familiar with the weapon consider its design one of the most effective in the history of firearms, a soundness borne out by its longevity of service (over 70 years).

World War II and the years leading up to it created a great demand for the weapon, which in turn led to the Army's extending manufacturing contracts to several manufacturers, including Remington Rand, Ithaca, Union Switch and Signal Company, and Singer (the sewing-machine manufacturer), as well as the Springfield Armory and Rock Island Arsenal.

After the Second World War, the sidearm continued to be a mainstay in the U.S. armed forces, seeing action in the Korean War and the Vietnam War (where it was the weapon of choice for U.S. "tunnel rats"). It was replaced, largely due to considerations of NATO commitments, with a 9mm sidearm, the M9, on January 14, 1985. The M1911A1 is still used by special operations units of the U.S. Army, Navy, and Marine Corps, and by Hostage Rescue Team units of the FBI, among other agencies. The M1911A1 design is also favored by police SWAT teams throughout the United States.

Today the M1911A1 type is widely used by the general public in the United States for both practical and recreational purposes. The pistol is commonly used for concealed carry, personal defense, target shooting, and competition. Numerous aftermarket accessories allow the user to customize the pistol to his or her liking. There is a growing number of manufacturers of 1911A1-type pistols and the model continues to be quite popular for its reliability, simplicity, and All-American appeal. Various tactical, target, and compact models are available. Price ranges from a low end of $250 for an imported "clunker" to more than $3,000 for the best competition or tactical models, which are precisely assembled and tuned by hand. Despite being challenged by more exotic and lightweight (and largely imported) pistol designs in .45 caliber, such as the Glock 21 and Sig Arms P220, the original 1911 design will soon be 100 years old with no signs of decreasing popularity.

The weapon typically fires 230-grain full metal jacket ammunition, also originally designed by Browning, with a normal magazine capacity of 7 or 8 rounds, or more with larger aftermarket magazines.

Parabellum Luger P-08



A Luger is an arm-locked pistol. It is semi-automatic, magazine-fed, and operates on the short-recoil principle. The pistol, designed by Georg Luger, was based on a design by Hugo Borchardt, which never saw the great success of its little brother.

The Luger P-08 was the standard sidearm of the German army during both world wars, though it was in the process of being replaced by the Walther P38 at war's end. Although obsolete today, the Luger is still sought after by collectors, both for its sleek design and, to a greater extent, for its infamous connection to Nazi Germany. Thousands of Lugers were brought back as souvenirs by American GIs after WWII and are still in circulation. Additionally, in response to demand, modern look-alike pistols are built by several companies.

Operation: The Luger uses a jointed arm that locks in the extended position. Upon recoiling with the barrel, a cam strikes the joint, causing the arm to hinge and the cartridge case to extract, beginning the firing sequence again.

Bazooka



The bazooka was one of the first anti-tank weapons based on the HEAT shell to enter service, used by the United States Armed Forces in World War II. It was nicknamed the "bazooka" for its vague resemblance to the musical instrument. It was highly effective, so much so that the Germans copied it outright to produce their own version, known as the Panzerschreck. The bazooka could be found in all theatres of war during World War II, and was used until the Korean War, after which it was replaced by newer weapons such as the LAW in time for the Vietnam War.

Prior to the war, the US Army had developed a shaped-charge hand grenade for anti-tank use that could defeat up to 100mm of armor, by far the best such weapon in the world at the time. However, it was very difficult to use, as it had to be placed directly on the tank, and for this reason it was largely ignored.

Things changed when Colonel Skinner suggested placing the grenade on the front of his experimental rocket launcher, which was a weapon looking for a role. This proved to be a good match, and by late 1942 the Rocket Launcher, M1A1 was introduced. It consisted of a long (4 ft) tube with a simple wooden stock and sights, into which the 60mm rocket grenades were inserted at the rear. A small battery provided the charge to ignite the rocket when the trigger was pulled. The main drawbacks of the weapon were its large backblast and smoke trail, which gave away the position of the shooter.

In 1944 the M1A1 model was supplemented by the improved M9, and then by the M9A1, which could be broken into two halves for easier carrying. A larger 3.5lb warhead was under development, but didn't reach service until after the war had ended. By the time of the Korean War an even larger M20, with a 2lb 3.5" warhead, was starting to enter service; it could penetrate well over 200mm of armor and had an extended range of about 150m.

MG34



The Maschinengewehr 34, or MG34, was a German machine gun first issued in 1934, considered by many to be the first modern general-purpose machine gun. It was used as the primary infantry machine gun during the 1930s, and remained the primary tank and aircraft defensive weapon thereafter. It was intended to be replaced in infantry service by the related MG42, but there were never enough of the new design to go around, and MG34s soldiered on in all roles until the end of World War II.

The MG34 was designed primarily by Heinrich Vollmer of Mauser Werke, based on the recently introduced Rheinmetall-designed Solothurn 1930 (MG30) that was starting to enter service in Switzerland. The principal changes were to move the feed mechanism to a more convenient location on the left of the breech and to add a shroud around the barrel. Changes to the operating mechanism improved the rate of fire to between 800 and 900 RPM.

The MG34 could use both magazine-fed and belt-fed 7.92mm ammunition. Belts were supplied in 50-round single strips or 250-round boxes. The drums held either 50 rounds in the standard version, or 75 in the "double drum" version. Early guns had to be modified to use the drums by replacing a part on the gun, but this modification was later supplied from the factory.

In the light machine gun role it was used with a bipod and weighed only 12.1 kg, considerably less than other machine guns of the era. In the medium machine gun role it could be mounted on one of two tripods, the smaller weighing 6.75 kg, the larger 23.6 kg. The larger tripod included several features that suited it to a variety of roles: its legs could be extended for use in the anti-aircraft role (and many were so used), and when lowered it could be positioned so that the gun could be fired "remotely," sweeping an arc in front of the mounting with fire, or aimed through a periscope attached to the tripod.

The new gun was accepted for service almost immediately and was generally liked by the troops. It was used to great effect by German soldiers assisting the fascists in the Spanish Civil War. At the time it was considerably more advanced than the guns used by other forces (with the exception of the MG30), both in rate of fire and in being easily portable by a single gunner. However, the MG34 was also very expensive, both in construction and in the raw materials needed (49 kg of steel), and it could not be built in the numbers required for the ever-expanding German army. It also proved rather temperamental, jamming easily when dirty.

By the late 1930s an effort had started to simplify the MG34, leading to the MG42. The MG42's square barrel cover made it unsuitable for use in tank cupolas however, and the MG34 remained in production until the end of the war for this role.

The MG34 was also used as the basis of a new aircraft gun, the MG81. For this role the breech was slightly modified to allow feeding from either side, and in one version two guns were bolted together on a single trigger to form a weapon known as the MG81Z (for Zwilling, German for "twin"). Production of the MG34 was never enough to satisfy any of its users, and while the MG81 was a huge improvement over the earlier MG30-based MG15 and MG17, those guns could still be found in use until the end of the war.

MG42



The Maschinengewehr 42, or MG42, is a German machine gun first manufactured in 1942 as the successor to the MG34. During WWII the MG42 had the fastest rate of fire of any machine gun, at 1,200 rounds per minute (up to 1,800 in some versions). At this rate it becomes impossible for the human ear to discern the sound of individual bullets being fired, and thus in use the gun makes a sound described both as "ripping cloth" and as "Hitler's Buzzsaw". During the war, over 400,000 were manufactured.
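The "ripping cloth" description follows directly from the arithmetic: 1,200 rounds per minute is 20 rounds per second, or one shot every 50 milliseconds, roughly the rate at which separate impulses begin to blur together for human hearing. The short Python sketch below just works through that conversion; the hearing comparison is an illustrative assumption added here, not a claim from the original text:

    # Convert a cyclic rate of fire into shots per second and the
    # interval between shots in milliseconds.
    def shots_per_second(rounds_per_minute):
        return rounds_per_minute / 60.0

    def interval_ms(rounds_per_minute):
        return 1000.0 / shots_per_second(rounds_per_minute)

    for rpm in (1200, 1800):  # the two MG42 rates quoted above
        print(rpm, shots_per_second(rpm), round(interval_ms(rpm), 1))
    # 1200 rpm -> 20.0 shots/s, 50.0 ms between shots
    # 1800 rpm -> 30.0 shots/s, 33.3 ms between shots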

In the late 1930s the MG34 was arguably the best machine gun in the world, but it was expensive and time-consuming to construct. In order to arm the increasingly large German army, an effort was started to build a simpler gun that could be produced much faster. The winning design was offered by a newcomer to the contest, Metall-und-Lackierwarenfabrik Johannes Grossfuss AG, experts in pressed and punched steel parts. Their efforts resulted in a dramatic reduction in complexity: it took 75 man-hours to complete the new gun as opposed to 150 for the MG34, and it cost 250RM as opposed to 327RM.

The resulting MG39 remained largely similar to the earlier MG34, a deliberate decision made in order to maintain familiarity. The only major changes from the gunner's perspective were the dropping of the drum-feed options, leaving the gun belt-fed only, and a further increase in the rate of fire. Although made of "cheap" parts, the prototypes also proved considerably more rugged and resistant to jamming than the somewhat temperamental MG34.

Given the success of the prototype, it is somewhat mysterious that the gun did not enter production until 1942, thereby requiring a renaming to MG42. As soon as it was introduced it garnered intense demand from field units, a demand that German industry was never able to meet.

The MG42 weighed 11.6kg in the light machine gun role with the bipod, lighter than the MG34 and easily portable. The bipod, the same one used on the MG34, could be mounted at the front or the center of the gun depending on where it was being used. In the heavy machine gun role it utilised a newly developed Lafette 42 tripod that weighed 20.5kg on its own. The barrel was lighter than the MG34's and wore out more quickly, but could be replaced in seconds by an experienced gunner.

In 1944 the acute material shortages of the Third Reich led to a newer version, the MG45 (or MG42V), which used steel of lesser quality, reduced the weight to only 9kg, and further increased the maximum rate of fire. First tests were undertaken in June 1944, but development dragged on and eventually only ten were ever built.

Even today the MG42 is regarded by many experts as one of the best machine guns ever made. With minor modifications it remains the primary heavy machine gun of the modern German army, now called the MG3. A number of other armies around the world have adopted versions of the original, and guns looking similar, or identical, to the MG42 remain in widespread service today. The US Army's M60 is based in part upon the MG42.



Bren



The Bren Gun was Britain's primary light machine gun of WWII. It was adopted by the British army in 1935 to replace the aging Lewis Gun. It fired .303 caliber rounds at a rate of 500rpm. The disadvantages of the weapon were that it fired much more slowly than its German counterparts, and that it accepted only box or drum magazines, which meant more frequent reloading than belt-fed machine guns.

Its weight also stretched the definition of "light" machine gun, often requiring it to be partially disassembled and its parts carried by two soldiers on long marches. Despite these shortcomings, it was popular with British troops and respected for its high reliability and combat effectiveness. It was manufactured by the Enfield armory, and is still in use in modified forms by the British military today.

Sten



The Sten gun is a British submachine gun used in WWII, notable for its extremely inexpensive production costs. The Sten was visually distinctive for its very bare appearance (just a pipe with a metal loop for a stock) and for its magazine, which stuck out horizontally rather than downwards, the default for submachine guns.

Stens fired 9mm rounds in full automatic, and were often disparaged by soldiers for inaccuracy and breakdowns. They were essentially the cheapest possible weapon that would work at all. The logic behind the Sten's introduction was that Britain faced imminent danger of being conquered by the Nazis and desperately needed to arm large numbers of recruits very quickly. Prior to 1941 the British army had purchased Thompson submachine guns from America, but these were quite expensive. In order to rapidly equip a sufficient fighting force to counter the German threat, the Enfield armory was commissioned to produce a radically cheaper alternative.

Stens were produced in a wide number of variants, with the Mark II being the most prolific, at 2 million units. Later versions had slightly more robust construction, including wooden grips and stocks, and some models had integral silencers. All combined, approximately 4.5 million Stens were produced during the war, many of which were airlifted by the crate to resistance fighters throughout occupied Europe. Due to their slim profile and ease of dismantlement, they were well suited to concealment and guerrilla warfare.

Despite its relative unpopularity with troops, the Sten saw continued use even after the economic crunch of WWII was over. Specialized versions of the Sten were used by British commandos in Korea, due to its very low weight and bulk.

Stens were so cheap that even today, despite the generally high cost of WWII-era collectibles, a Sten gun converted to semi-auto can usually be had for less than $100.

Lee-Enfield


The Lee-Enfield was the standard British Army rifle for much of the 20th century. It was a simple but very reliable bolt-action rifle firing the standard rimmed .303" Mk VII round. It had been scheduled for replacement almost before seeing action, but a series of delays and interruptions led to it being used into the 1950s, and it did not disappear completely until the 1980s. Several versions were produced; the short, magazine-fed version gave rise to the widely used acronym SMLE (Short, Magazine, Lee-Enfield). This referred to the 1914 version, which was 3" shorter than the previous long magazine rifle.

It also had a broad bayonet boss flush with the muzzle which took the 18" sword bayonet; this was the No. 1 SMLE. By D-Day (June 6, 1944) the lighter No. 4 rifle was in use. The main change was to expose 2" of barrel at the muzzle, onto which fitted the new socket bayonet, which looked like a shiny 7" nail. After 1945 a regular flat-bladed 7" bayonet was issued. Also after 1945, the No. 5 or "jungle carbine" was developed for use in Malaya and other similar campaigns: the rifle was shortened by about 7" and most of the wood in front of the breech was removed. A bell-shaped flash hider was fitted, which meant a new bayonet with a large ring fitting was required.

This rifle was probably designed at the Royal Small Arms Factory at Enfield, in what is now known as Enfield Lock, at the bottom of Ordnance Road. The old site has since been built over with a housing estate, though some of the original buildings have been converted and evidence of the works is still visible. The Waltham Abbey gunpowder mills are also in the local area.

PIAT


The PIAT (Projector, Infantry, Anti-Tank) was the first effective anti-tank weapon based on the HEAT shell. It was developed by the British starting in 1941, reaching the field in time for the invasion of Sicily in 1943. Unlike the US bazooka and its German copy, the Panzerschreck, the PIAT could be fired from enclosed spaces, which made it more useful in close combat and when firing from concealment inside houses.

At the start of World War II, all major armies were investing in HEAT research to produce an infantry weapon capable of defeating modern armor. The US and Germany concentrated on rockets to propel their weapons, but in 1941, when the PIAT was being developed, these rocket systems were nowhere near ready for use.

Instead, the British turned to a prewar weapon known as the Blacker Bombard, a small man-portable mortar using a large spring for propulsion. The spring pushed against a 12-pound steel canister and rod that rode up the barrel and struck the rear of the shell, igniting a small propulsion charge. The heavy bolt and rod, known as the spigot, served primarily to damp out the recoil as the round left the barrel.

The Blacker Bombard was never used operationally, but it was perfect for modification into a launcher for a HEAT round. For this use the upper portion of the "barrel" was cut away on one side to form a trough, so the weapon could be reloaded by dropping rounds into it while lying prone. The propulsion charge on the shell was small enough that it produced no real smoke or backblast, a significant advantage over the bazooka. On the downside, the spring required a heavy barrel to contain it, and the spigot itself added even more weight, resulting in a weapon that weighed 34 pounds unloaded.

The three-pound HEAT warhead could penetrate about 100mm of armor at 100m, the weapon's rated range. This was too little to defeat the frontal armor of the newer German designs, but it remained effective against side and rear armor; Private Ernest Alvia "Smokey" Smith of the Seaforth Highlanders of Canada earned the Victoria Cross after crawling to within 10 metres of a Panther to destroy it. The weapon was also used in a "house-breaking" role out to about 350m.

The early rounds used in Sicily required a near-perfect hit or they would not fire, and the weapon soon earned a poor reputation among the troops. The Army then instigated a rapid series of improvements, and by the time of the invasion of the Italian mainland the weapon had matured considerably.


World War 2 Flags


In this section we offer a collection of information on, and images of, World War 2 flags.

United Kingdom Flag




The United Kingdom flag is a blue field with the red cross of Saint George (patron saint of England), edged in white, superimposed on the diagonal red cross of Saint Patrick (patron saint of Ireland), which is in turn superimposed on the diagonal white cross of Saint Andrew (patron saint of Scotland). Properly known as the Union Flag, it is commonly called the Union Jack. Its design and colors (especially the Blue Ensign) have been the basis for a number of other flags, including those of other Commonwealth countries and their constituent states or provinces, as well as British overseas territories.



Italian Flag




The Italian flag is divided vertically into three equal bands of green, white, and red. The three colors are traditionally said to represent the theological virtues of Hope, Faith, and Charity.



Japanese Flag




The Japanese flag is white with a large red disc in the center, symbolizing the sun without rays.



Nazi Flag




The Nazi flag is a red field with a white disc in the center bearing a black swastika.

Soviet Flag




The Soviet Union flag is plain red, with a hammer crossed with a sickle and a red star in the upper hoist. The hammer symbolizes the nation's industrial workers, while the sickle symbolizes the nation's agricultural workers. The red star represents the rule of the communist party.



United States Flag




The 48-star flag was the official flag of the United States from 1912 to 1959.