Sunday 30 December 2012

Differences between AM and FM Radio

RADIO

In modern society, radio is the most widely used medium of broadcasting and electronic communication: it plays a major role in many areas such as public safety, industrial manufacturing, processing, agriculture, transportation, entertainment, national defense, space travel, overseas communication, news reporting, and weather forecasting. Radio broadcasts use radio waves, which can range from microwaves to much longer waves. These are transmitted in two ways: amplitude modulation (AM) and frequency modulation (FM). The two kinds of transmission have many differences.

Radio waves are among the many types of electromagnetic waves that travel within the electromagnetic spectrum. Radio waves can be defined by their frequency (in hertz, after Heinrich Hertz, who first produced radio waves electronically), which is the number of complete cycles they pass through per second, or by their wavelength, which is the distance (in meters) from the crest of one wave to the crest of the next.

Radio frequencies are measured in units called kilohertz, megahertz, and gigahertz (1 kilohertz = 1,000 hertz; 1 megahertz = 10^6 hertz; 1 gigahertz = 10^9 hertz). All radio waves fall within a frequency range of 3 kilohertz (3,000 cycles per second) to 30 gigahertz. Within this range, radio waves are further divided into bands: very low frequency (VLF, 10-30 kHz), low frequency (LF, 30-300 kHz), medium frequency (MF, 300-3,000 kHz), high frequency (HF, 3-30 MHz), and very high frequency (VHF, 30-300 MHz).
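Since all radio waves travel at the speed of light, each frequency corresponds directly to a wavelength. The short Python sketch below is added purely for illustration (the example frequencies are typical values, not figures from this article) and shows the conversion wavelength = speed of light / frequency.

# A small sketch (added for illustration) of the frequency-to-wavelength
# relationship: wavelength = speed of light / frequency.

SPEED_OF_LIGHT = 3.0e8  # meters per second (approximate)

def wavelength_m(frequency_hz):
    """Return the wavelength in meters for a given frequency in hertz."""
    return SPEED_OF_LIGHT / frequency_hz

# Representative frequencies (assumed examples) from the bands above
examples = {
    "AM broadcast (1,000 kHz)": 1_000e3,
    "Shortwave (15 MHz)": 15e6,
    "FM broadcast (100 MHz)": 100e6,
}

for name, f in examples.items():
    print(f"{name}: {wavelength_m(f):.1f} m")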

Amplitude modulation is the oldest method of transmitting voice and music through the airwaves. It is accomplished by combining a sound wave from a microphone, tape, record, or CD with a "carrier" radio wave. The result is a wave that transmits voice or programming as its amplitude (intensity) increases and decreases. Amplitude modulation is used by stations broadcasting in the AM band and by most international shortwave stations.

Frequency modulation is another way to convey information, voice, and music on a radio wave: instead of varying the amplitude, the transmitter slightly changes, or modulates, the frequency. The main advantage of FM broadcasting is that it is static free. The drawback is that, because the frequency is varied, each station takes up more room on the band. Frequency modulation is, of course, used on the FM band, and it is also used for "action band" and ham transmissions in the VHF/UHF frequency range.
In amplitude modulation, what is modified is the amplitude of a carrier wave on one specific frequency. The antenna sends out two kinds of AM waves: ground waves and sky waves. Ground waves spread out horizontally from the antenna and travel through the air along the earth's surface. Sky waves spread up into the sky; when they reach the layer of atmosphere called the ionosphere, they may be reflected back to earth. This reflection enables AM radio waves to be received at great distances from the antenna.

Frequency modulation stations generally reach audiences from 15 to 65 miles (24-105 km) away. Because the frequency of the carrier wave is modulated, rather than the amplitude, background noise is reduced. In FM transmission, the frequency of the carrier wave varies according to the strength of the audio signal or program. Unlike AM, where the strength of the carrier wave varies, the strength of the carrier wave in FM remains the same, while its frequency varies above or below a central value. FM broadcast waves (88-108 MHz) are shorter than AM broadcast waves (540-1,600 kHz) and do not travel as far.

In AM transmission, the amplitude of the carrier waves varies to match changes in the electromagnetic waves coming from the radio studio. In FM transmission, the amplitude of the carrier waves remains constant. However, the frequency of the waves changes to match the electromagnetic waves sent from the studio.
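The following Python sketch is an added illustration of that difference, not something from the original essay: it builds a toy AM signal whose amplitude follows a 1 kHz tone and a toy FM signal whose frequency swings around the carrier while its amplitude stays fixed. The sample rate, carrier frequency, and deviation are assumed values chosen for clarity.

# Illustrative sketch: AM varies the carrier's amplitude with the audio,
# FM varies the carrier's frequency and keeps the amplitude constant.
import numpy as np

fs = 100_000                      # sample rate in Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)    # 10 ms of signal

audio = np.sin(2 * np.pi * 1_000 * t)   # a 1 kHz "program" tone
fc = 10_000                             # carrier frequency, scaled down for clarity

# Amplitude modulation: the carrier's envelope follows the audio signal.
am = (1 + 0.5 * audio) * np.cos(2 * np.pi * fc * t)

# Frequency modulation: the instantaneous frequency swings above and below
# fc in step with the audio; the amplitude stays constant.
freq_deviation = 2_000  # Hz of swing (assumed)
fm = np.cos(2 * np.pi * fc * t
            + 2 * np.pi * freq_deviation * np.cumsum(audio) / fs)

print(f"AM amplitude swings from {am.min():.2f} to {am.max():.2f}")
print(f"FM amplitude stays between {fm.min():.2f} and {fm.max():.2f}")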

Two types of radio waves are broadcast by an AM transmitter: ground waves, which spread out horizontally and travel along the earth's surface, and sky waves, which travel up into the ionosphere; reflection of the sky waves allows AM transmissions to travel great distances. AM radio stations with powerful transmitters can reach listeners as far as 1,000 miles (1,600 km) away.

FM radio waves also travel horizontally and skyward. However, due to the higher frequency of the carrier waves, the waves that go skyward are not reflected; they pass through the atmosphere and into space. Although AM waves can be received at greater distances than FM waves, FM waves do have advantages. They are not affected as much as AM waves by static, which is caused by electricity in the atmosphere. FM waves also result in a truer reproduction of sound than AM waves.

Furthermore, FM has much better sound than AM because the two services use different frequencies and wavelengths. AM stations broadcast on frequencies between 535 and 1,605 kilohertz, while the FM band extends from 88 to 108 megahertz, so listeners can compare the two bands on any radio.

Cryogenics and the future

Cryogenics and the Future

Cryogenics is a study that is of great importance to the human race and has been a major project for engineers for the last 100 years. Cryogenics, which is derived from the Greek word kryos meaning "icy cold," is the study of matter at low temperatures. "Low," however, hardly describes the temperatures involved, since the highest temperature dealt with in cryogenics is -100 °C (-148 °F) and the lowest is the unattainable -273.15 °C (-459.67 °F). Also, when speaking of cryogenics, the terms Celsius and Fahrenheit are rarely used; instead, scientists use a different measurement called the kelvin (K). The Kelvin scale for cryogenics goes from 173 K down to a fraction of a kelvin above absolute zero. There are also two main sciences used in cryogenics: superconductivity and superfluidity.
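For reference, the temperatures quoted above can be checked with the standard conversions between Celsius, Fahrenheit, and kelvin; the small Python sketch below is added only as an illustration of that arithmetic.

# A quick sketch (added for illustration) of the conversions behind the
# temperatures quoted above.

def celsius_to_kelvin(c):
    return c + 273.15

def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

for label, c in [("Upper limit of cryogenics", -100.0),
                 ("Absolute zero", -273.15)]:
    print(f"{label}: {c} C = {celsius_to_kelvin(c):.2f} K "
          f"= {celsius_to_fahrenheit(c):.2f} F")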

Cryogenics first came about in 1877, when a Swiss physicist named Raoul Pictet and a French engineer named Louis P. Cailletet liquefied oxygen for the first time. Cailletet created liquid oxygen in his lab using a process known as adiabatic expansion, a "thermodynamic process in which the temperature of a gas is expanded without adding or extracting heat from the gas or the surrounding system" (Vance 26). At the same time, Pictet used the "Joule-Thomson effect," a thermodynamic process in which the "temperature of a fluid is reduced in a process involving expansion below a certain temperature and pressure" (McClintock 4). After Cailletet and Pictet, a third method, known as cascading, was developed by Karol S. Olszewski and Zygmunt von Wroblewski in Poland. At this point in history oxygen could be liquefied at 90 K, and soon after liquid nitrogen was obtained at 77 K. Because of these advancements, scientists all over the world began competing in a race to lower the temperature of matter to absolute zero (0 K) [Vance, 1-10].

Then in 1898, James Dewar made a major advance when he succeeded in liquefying hydrogen at 20 K. The reason this advance was so spectacular was that 20 K is also hydrogen's boiling point, which presented a very difficult handling and storage problem. Dewar solved this problem by inventing a double-walled storage container known as the Dewar flask, which could contain and hold the liquid hydrogen for a few days. However, at this time scientists realized that if they were going to make any more advances they would have to have better holding containers. So, scientists came up with insulation techniques that we still use today, including expanded foam materials and radiation shielding. [McClintock 43-55]

The last major advance in cryogenics finally came in 1908, when the Dutch physicist Heike Kamerlingh Onnes liquefied helium at 4.2 and then 3.2 K. The advances in cryogenics since then have been extremely small, since it is a fundamental law of thermodynamics that you can approach but never actually reach absolute zero. Since 1908 our technology has greatly improved, and we can now cool sodium gas to within 40 millionths of a kelvin above absolute zero. Still, in the back of every physicist's head is the desire to break that thermodynamic law and reach absolute zero, where every proton, electron, and neutron in an atom is absolutely frozen.

Also, there are two subjects closely related to cryogenics: superconductivity and superfluidity. Superconductivity is a low-temperature phenomenon in which a metal loses all electrical resistance below a certain temperature, called the critical temperature (Tc), and transfers to "...a state of zero resistance..." (Tilley 11). This unusual behavior was also discovered by Heike Kamerlingh Onnes. It was discovered when Onnes and one of his graduate students realized that mercury loses all of its electrical resistance when it reaches a temperature of 4.15 K. However, almost all elements and compounds have Tc's between 1 K and 15 K (or -457.68 °F and -432.67 °F), so they would not be very useful to us on a day-to-day basis [McClintock 208-226].

Then in 1986, J. Georg Bednorz and K. Alex Muller discovered that an oxide of lanthanum, barium, and copper becomes superconductive at 30 K. This discovery shocked the world and stimulated scientists to search for even more "high-temperature superconductors." After this discovery, in 1987, scientists at the University of Houston and the University of Alabama discovered YBCO, a compound with a Tc of 95 K. This discovery made superconductivity possible above the boiling point of liquid nitrogen, so the relatively cheap liquid nitrogen could replace the high-priced liquid helium required for cryogenic experiments. To date the highest reported Tc is 125 K, which belongs to a compound made of thallium, barium, calcium, copper, and oxygen. Now, with the availability of high-temperature superconductors, all the sciences, including cryogenics, have made extraordinary advances. Some applications are demonstrated by magnetically levitated trains, energy storage, motors, and zero-loss transmission lines. Also, superconducting electromagnets are used in particle accelerators, fusion energy plants, and magnetic resonance imaging (MRI) devices in hospitals. Furthermore, high-speed cryogenic computer memories and communication devices are in various stages of research. As you can see, this field has grown immensely since 1986 and will probably keep growing.

The second subject related to cryogenics is superfluidity. Superfluidity is a strange state of matter that is most commonly seen in liquid helium below a temperature of 2.17 K. Superfluidity means that the liquid "...discloses no viscosity when traveling through a capillary or narrow slit..." (Landau 195) and also flows "...through the slit disclosing no friction..." (Landau 195). What this means is that when helium reaches this state it can flow, without any friction, through the smallest holes and in between the atoms of a compound. If the top is off the beaker, it is also possible for the liquid helium to flow up the side of the beaker and out of it until all the liquid helium is gone. It was later discovered that when the liquid approaches about 0.2 K, it has almost exactly the same properties as superconducting metals with respect to specific heat, magnetic properties, and thermal conductivity. Even though superconducting and superfluid materials have similar properties, the phenomenon of superfluidity is much more complex and is not completely understood by today's physicists. [McClintock 103-107]

Cryogenics also consists of many smaller sciences, including cryobiology, which is "the study of the effects of low-temperatures on materials of biological origin" (Vance 528). Developments in this field have led to modern methods of preserving blood, semen, tissue, and organs at the temperatures obtained by the use of liquid nitrogen. Cryobiology has also led to the development of the cryogenic scalpel, which can deaden or destroy tissue with a high degree of accuracy, making it possible to clot cuts as soon as they are made. So in theory you could one day have surgery without having to deal with any blood.

Another field is cryopumping, the process "of condensing gas or vapor on a low-temperature surface" (Vance 339). This is done by extracting gas from a vacuum vessel by conventional methods and then freezing the remaining gas on low-temperature coils. This process has been useful when trying to simulate the effects that the vacuum of outer space will have on electronic circuitry.

Cryogenics has also been a part of many modern advances including:
The transportation of energy in the form of a liquefied gas.
Processing, handling, and providing food by cryogenic means has become a large business, providing both frozen and freeze-dried food.
Liquid Oxygen powers rockets and propulsion systems for space research.
Liquid Hydrogen is used in high-energy physics experiments.
Cryogenic drill bits that make drilling for oil and other gases easier.
Chemical synthesis and catalysis.
Better fire fighting fluids.
Gas separation.
Metal Fabrication.

As you can see, cryogenics is still a very young science, but in the last ten years it has catapulted to being the backbone of almost every other form of science. However, its full potential will probably not be understood for quite a while. Still, if we can grasp the concepts of cryogenics we will have a tool that will allow us to do things ranging from making better drill bits to exploring the universe. The future of cryogenics was perhaps best summed up by Krafft A. Ehricke, a rocket developer, when he said, "Its central goal is the preservation of civilization."



References

Khalatnikov, I. M., An Introduction to the Theory of Superfluidity (New York: W.A. Benjamin Inc., 1965).
McClintock, Michael, Cryogenics (New York: Reinhold Publishing Corp., 1964).
Tilley, David R. and Tilley, John, Superfluidity and Superconductivity (New York: John Wiley and Sons, 1974).
Vance, Robert W., Cryogenic Technology (London: John Wiley & Sons, Inc., 1963).

Creation

The Cosmos
Where is the universe from? Where is it going? How is it put together? How did it get to be this way?
These are Big questions, very easy to ask but almost impossible to answer. We want answers for philosophical reasons having nothing to do with science. No one will get rich from discovering the structure of the universe unless they write a book about it.
The area of science dealing with Big questions is called cosmology. The reason for its study is found in the fact that:
The universe was born at a specific time in the past and has expanded ever since.

The Expansion of the Universe
Edwin Hubble established the existence of other galaxies. He noted that the light from these galaxies was shifted toward the red; that is, its wavelength was longer than that of the light emitted by the corresponding atoms in the lab. Furthermore, he found that the farther away a galaxy was, the more its light was shifted toward the red end of the spectrum. Hubble attributed this shift to the Doppler effect.
Hubble saw this and concluded that all galaxies are rushing away from us and that the universe is expanding as a whole. Modern equipment has observed and verified that this so-called Hubble expansion exists throughout the observable universe.
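A rough way to picture the Hubble expansion is Hubble's law, v = H0 x d, together with the small-velocity redshift z = v / c. The Python sketch below is an added illustration; the value of H0 (roughly 70 km/s per megaparsec) and the sample distances are assumptions, not numbers from this text.

# Illustrative sketch (not from the text): recession velocity is
# proportional to distance, and the redshift at modest speeds is v / c.

H0 = 70.0            # Hubble constant, km/s per megaparsec (assumed)
C_KM_S = 300_000.0   # speed of light in km/s (approximate)

def recession_velocity_km_s(distance_mpc):
    return H0 * distance_mpc

def redshift(distance_mpc):
    return recession_velocity_km_s(distance_mpc) / C_KM_S

for d in (10, 100, 1000):  # distances in megaparsecs
    print(f"{d:>5} Mpc: v = {recession_velocity_km_s(d):>8.0f} km/s, "
          f"z = {redshift(d):.3f}")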
This shows three important things. First, there is no significance to the fact that Earth seems to be the center of the universal expansion; in any galaxy it would look as if you were standing still and all the others were rushing away from you. Second, the movement of the universe is not like an explosion: galaxies are not moving through the universe, they are carried along as the universe expands. Third, the galaxies themselves do not expand, only the space between them.
Finally, if you ask where the expansion started, the only answer is everywhere. In the words of the fifteenth-century philosopher Nicholas of Cusa, "the universe has its center everywhere and its edge nowhere."
One fact in this picture is inescapable: the universe was not always there but had a beginning. This has come to be known as the Big Bang theory.

Universal Freezing
When the universe was younger it was smaller, and when matter and energy are compacted the temperature inevitably rises. Thus when the universe was younger it was also hotter. We can identify six crucial events, called "freezings," where the fabric of the universe changed in a fundamental way.
The most recent occurred when the universe was about 500,000 years old, about 14,999,500,000 years ago. After 500,000 years, permanent atoms started to form; before that, matter existed as loose electrons and nuclei in a state called plasma.
Moving back in time the next freezing occurred at about three minutes. This was when nuclei first started to form. Before this only elementary particles existed in the universe.
From about three minutes back to ten-millionths of a second, the universe was a seething mass of elementary particles: protons, neutrons, electrons, and the rest of the particle zoo.
Today there are four distinct forces in the universe: the strong, electromagnetic, weak, and gravitational forces. They must all have been one before the first ten-millionth of a second. The timetable of these forces, as theorized today, is:
10^-10 second: the weak and electromagnetic forces merge into one, called the electroweak force.
10^-33 second: the strong force joins the electroweak force, leaving only gravity separate.
10^-43 second: known as the Planck time, after Max Planck, one of the founders of quantum mechanics. Before this time the universe and all its forces were completely unified, as aligned as is possible.

Comets

A comet is generally considered to consist of a small, sharp
nucleus embedded in a nebulous disk called the coma. American
astronomer Fred L. Whipple proposed in 1949 that the nucleus,
containing practically all the mass of the comet, is a "dirty snowball"
conglomerate of ices and dust. Major proofs of the snowball theory
rest on various data. For one, of the observed gases and meteoric
particles that are ejected to provide the coma and tails of comets,
most of the gases are fragmentary molecules, or radicals, of the most
common elements in space: hydrogen, carbon, nitrogen, and oxygen.
The radicals, for example, of CH, NH, and OH may be broken away
from the stable molecules CH4 (methane), NH3 (ammonia), and H2O
(water), which may exist as ices or more complex, very cold
compounds in the nucleus. Another fact in support of the snowball
theory is that the best-observed comets move in orbits that deviate
significantly from Newtonian gravitational motion. This provides
clear evidence that the escaping gases produce a jet action, propelling
the nucleus of a comet slightly away from its otherwise predictable
path. In addition, short-period comets, observed over many
revolutions, tend to fade very slowly with time, as would be expected
of the kind of structure proposed by Whipple. Finally, the existence of
comet groups shows that cometary nuclei are fairly solid units.
The head of a comet, including the hazy coma, may exceed the planet
Jupiter in size. The solid portion of most comets, however, is
equivalent to only a few cubic kilometers. The dust-blackened
nucleus of Halley's comet, for example, is about 15 by 4 km (about 9
by 2.5 mi) in size.

Collisions

Collisions

When two objects collide, their motions are changed as a result of the collision, as is shown when playing pool.

There are several laws governing collisions, the principal one being the law of conservation of linear momentum, which says the total momentum of an isolated system stays constant. In pool, the isolated system is the table and the balls, and the law then implies that the total momentum of the balls just before they collide equals the total momentum just after the collision.

Therefore, if the masses of the two colliding objects are known, along with the velocity of one object before and after the collision and the velocity of the other before the collision, you can calculate the final velocity of this second object after it has collided. To obtain an exact answer, however, we must find out what type of collision takes place, whether it is elastic or inelastic.

The type of collision is characterized by what is called the coefficient of restitution. This quantity is approximately constant for a collision between two given objects, and can be determined experimentally. If the relative velocities of the two objects are the same before and after impact, the coefficient is equal to 1, and the collision is elastic. In practice, however, such perfectly elastic collisions occur only on an atomic scale; most collisions are therefore not elastic, with a coefficient of restitution of less than 1.
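As an added illustration (not part of the original text), the one-dimensional case can be worked out directly from conservation of momentum plus the coefficient of restitution; the Python sketch below does this for two pool-ball-like objects, where the masses and speeds are assumed example values.

# A minimal sketch of a head-on collision: momentum is conserved, and the
# coefficient of restitution e relates relative speeds before and after.

def collide_1d(m1, v1, m2, v2, e):
    """Return the final velocities (v1_f, v2_f) after a head-on collision,
    given masses, initial velocities, and the coefficient of restitution e
    (1 = perfectly elastic, 0 = perfectly inelastic)."""
    total_momentum = m1 * v1 + m2 * v2
    v1_f = (total_momentum + m2 * e * (v2 - v1)) / (m1 + m2)
    v2_f = (total_momentum + m1 * e * (v1 - v2)) / (m1 + m2)
    return v1_f, v2_f

# Example: a moving pool ball strikes an identical stationary one.
# With e = 1 the first ball stops and the second moves off with the
# original speed, as on a pool table.
print(collide_1d(m1=0.17, v1=2.0, m2=0.17, v2=0.0, e=1.0))   # (0.0, 2.0)
print(collide_1d(m1=0.17, v1=2.0, m2=0.17, v2=0.0, e=0.5))   # (0.5, 1.5)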

Chromium

Chromium is a very hard, brittle, gray metal, which is sometimes
referred to as Siberian red lead. It does not rust easily and becomes shiny and
bright when it is polished. The shiny trim on our automobile bumpers and
door handles is usually electroplated chromium.
Most chromium comes from a mineral called chromite, which is
a mixture of chromium, iron, and oxygen. Chromite is a common, rather
ordinary black mineral that no one really noticed until more recent times.
Nearly all the world's supply of chromite comes from Zimbabwe, Russia,
Turkey, South Africa, Cuba, and the Philippines. The United States imports
almost all its chromite.
Chromium is added to other metals to make them harder and
more resistant to chemicals. Small quantities mixed with steel make the metal
stainless. Stainless steel is used to make cutlery and kitchen equipment
because it does not rust easily and takes a mirror-like polish. This steel
contains about 13 percent chromium and 0.3 percent carbon.
The hardness of steel can be increased by adding small
quantities of chromium to it. Chromium steel alloys (mixtures containing one
or more metals) are used to make armor plating for tanks and warships. They
are also used for ball bearings and the hard cutting edges of high-speed
machine tools.
Nickel is highly resistant to electric current and is often added to
chromium steels to make them easier to work. For example, stainless steel
sinks can be pressed out from flat sheets of steel that can contain 18 percent
chromium and 8 percent nickel.
When nickel is mixed with chromium, the resulting metal can
stand very high temperatures without corroding. For example, the heating
elements of toasters can be made from an alloy that is 80 percent nickel and
20 percent chromium. This metal operates at a temperature of about 1380
degrees Fahrenheit (750 degrees Celsius).
Chromium was discovered in 1798 by the French chemist
Nicolas Vauquelin. He chose the name chromium from the Greek word
chroma, which means color. Chromium was a good choice of name, because many
chromium compounds are brightly colored. Rubies are red and emeralds are
green because they contain chromium compounds.
Some of the brightest colors in the artist's palette contain
chromium. Chrome yellow is made from a substance which contains
chromium, lead, and oxygen. Zinc yellow contains zinc, chromium and
oxygen. Chrome red is another chromium compound. Chrome green is used
in paints and in printing cotton fabrics.
Chromium salts are used in tanning leather. Leather tanned in
this way is very soft and flexible. It is used in the manufacture of soft gloves
and other luxury goods. Other chromium compounds are used to treat metal
and wood. This treatment helps to preserve objects from corrosion and rot.
Chromium is an element with the chemical symbol Cr and an atomic
weight of 51.996. Although it is twice as heavy as aluminum, it is lighter than
most other common metals. It melts at a temperature of 3434 degrees
Fahrenheit (1890 degrees Celsius).

Chlorophyll

CHLOROPHYLL

A. Chlorophyll belongs to the Plant Kingdom. Chlorophyll is not found in the Animal Kingdom. Chlorophyll is found inside of Chloroplasts, and Chloroplasts are found inside of plant cells.

B. Chlorophyll is a pigment that makes plants green. It is important because it absorbs sunlight, and that energy is used to split water into hydrogen and oxygen.

C. Chlorophyll is found inside the chloroplast, which is located near the cell wall. It is located here because the sun's rays might not penetrate deep into the plant, and the plant needs the sun's rays to generate hydrogen and oxygen.

D. If a plant did not have chlorophyll, then the plant would be unable to get energy from the sun, and it would slowly die. There are no diseases or dysfunctions of chlorophyll; if there were, plants would have a serious problem.

E. I once had a friend named Bill,
and he was green with Chlorophyll,
He Didn't have to eat,
not a beet or any meat,
Instead of going to dine he would feast on sunshine
Bill went to the Land of the Midnight Sun
and there he was done.

Charles Babbage

Charles Babbage may have spent his life in vain, trying to make a machine considered by most of his friends to be ridiculous. 150 years ago, Babbage made hundreds of drawings projecting the fundamentals on which today's computers are founded, but the technology was not there to meet his dreams. He was born on December 26, 1791, in Totnes, Devonshire, England. As a child he was always interested in the mechanics of everything and in the supernatural. He reportedly once tried to prove the existence of the devil by making a circle in his own blood on the floor and reciting the Lord's Prayer backward. In college, he formed a ghost club dedicated to verifying the existence of the supernatural. While at Trinity College, Cambridge, Charles carried out childish pranks and rebelled because of the boredom he felt from knowing more than his instructors. Despite this, however, he was on his way to understanding the advanced theories of mathematics and even formed an Analytical Society to present and discuss original papers on mathematics and to interest people in translating the works of several foreign mathematicians into English. His studies also led him to a critical examination of logarithmic tables, in which he was constantly finding errors. During this analysis, it occurred to him that all these tables could be calculated by machinery. He was convinced that it was possible to construct a machine that would be able to compute by successive differences and even to print out the results. (He conceived of this 50 years before type-setting machines or typewriters were invented.)
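As an aside, the "successive differences" idea Babbage relied on can be shown in a few lines of modern code. The Python sketch below is an added illustration, not Babbage's own procedure: once the leading differences of a polynomial are known, every further table entry follows by additions alone, which is what made a purely mechanical calculator plausible. The polynomial x^2 + x + 41 is often associated with demonstrations of the Difference Engine.

# A small sketch (added for illustration) of tabulating a polynomial by
# successive differences: after the leading differences are computed once,
# every later value is produced with nothing but addition.

def f(x):
    # The polynomial being tabulated (degree 2).
    return x * x + x + 41

def leading_differences(values):
    """Return [f(0), delta f(0), delta^2 f(0), ...] from enough seed values."""
    row = list(values)
    diffs = [row[0]]
    while len(row) > 1:
        row = [b - a for a, b in zip(row, row[1:])]
        diffs.append(row[0])
    return diffs

def extend_table(diffs, count):
    """Produce `count` table values using only repeated additions."""
    diffs = list(diffs)
    out = []
    for _ in range(count):
        out.append(diffs[0])
        for i in range(len(diffs) - 1):
            # Each difference absorbs the (still un-updated) one below it.
            diffs[i] += diffs[i + 1]
    return out

seed = [f(x) for x in range(3)]          # three values suffice for degree 2
print(extend_table(leading_differences(seed), 8))
print([f(x) for x in range(8)])          # the same values, computed directly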
In 1814, at the age of 23, Charles married 22-year-old Georgina Whitmore. Georgina would have eight children in thirteen years, of which only three sons would survive to maturity. Babbage took little interest in raising his children; after Georgina died at the age of 35, his mother took over their upbringing. In 1816, Babbage had his first taste of failure when his application for the professorship of mathematics at East India College in Haileybury was rejected for political reasons, as was his application, three years later, for the chair of mathematics at the University of Edinburgh. Fortunately, his elder brother supported his family while Babbage continued his work on calculating machines.
At the age of 30, Babbage was ready to announce to the Royal Astronomical Society that he had embarked on the construction of a table-calculating machine. His paper, "Observations on the Application of Machinery to the Computation of Mathematical Tables" was widely acclaimed and consequently, Babbage was presented with the first gold medal awarded by the Astronomical Society. Babbage was also determined to impress the prestigious Royal Society and wrote a letter to its president, Sir Humphrey Davy, proposing and explaining his ideas behind constructing a calculating machine, or the Difference Engine, as he would call it. A 12-man committee considered Babbage's appeal for funds for his project and in May 1823, the Society agreed that the cause was worthy.
While constructing this machine, implementation problems arose as well as a misunderstanding with the British Government, both of whom regarded this machine as property of the other. This misunderstanding would cause problems for the next twenty years, and would result in delaying Babbage's work. Babbage also apparently miscalculated his task. The Engine would need about 50 times the amount of money he was given. In 1827, Babbage was overwhelmed by a number of personal tragedies: the deaths of his father, wife and two of his children. Consequently, Babbage took ill and his family advised him to travel abroad for a few months. Upon his return, he approached the Duke of Wellington, then prime minister, regarding the possibility of a further grant. In the duke, Babbage found a friend who could really understand the principles and capabilities of the Engine and the two would remain friends for the rest of the duke's life. Babbage was also granted more money. He continued work on the project for many years.
At the age of 71, Babbage agreed to have the completed section of his Difference Engine shown to the public for the first time. His many disappointments led him to say that he had never lived a happy day in his life. Babbage died in 1871, two months shy of his 80th birthday.

Carbon

CARBON

Without the element of carbon, life as we know it would not exist.
Carbon provides the framework for all tissues of plants and animals. They
are built of elements grouped around chains or rings made of carbon atoms.
Carbon also provides common fuels--coal, oil, gasoline, and natural gas.
Sugar, starch, and paper are compounds of carbon with hydrogen and
oxygen. Proteins such as hair, meat, and silk contain these and other
elements such as nitrogen, phosphorus, and sulfur.

More than six and a half million compounds of the element carbon,
many times more than those of any other element, are known, and more are
discovered and synthesized each week. Hundreds of carbon compounds are
commercially important but the element itself in the forms of diamond,
graphite, charcoal, and carbon black is also used in a variety of manufactured
products.

Besides the wide occurrence of carbon in compounds, two forms of
the element--diamond and graphite, are deposited in widely scattered
locations around the Earth.

PROPERTIES OF CARBON

Symbol = C
Atomic Number = 6
Atomic Weight = 12.011
Density at 68 Degrees F = 1.88-3.53 g/cm3
Boiling Point = 8,721 degrees F
Melting Point = 6,420 degrees F

Black Holes

Every day we look out upon the night sky, wondering and dreaming of what lies beyond our planet. The universe that we live in is so diverse and unique, and it interests us to learn about all the variance that lies beyond our grasp. Within this marvel of wonders, our universe holds a mystery that is very difficult to understand because of the complications that arise when trying to examine and explore the principles of space. That mystery happens to be that of the ever elusive, black hole.

This essay will hopefully give you the knowledge and understanding of the concepts, properties, and processes involved with the space phenomenon of the black hole. It will describe how a black hole is generally formed, how it functions, and the effects it has on the universe.

By definition, a black hole is a region where matter collapses to infinite density, and where, as a result, the curvature of space-time is extreme. Moreover, the intense gravitational field of the black hole prevents any light or other electromagnetic radiation from escaping. But where lies the "point of no return" at which any matter or energy is doomed to disappear from the visible universe?

The black hole's surface is known as the event horizon. Behind this horizon, the inward pull of gravity is overwhelming and no information about the black hole's interior can escape to the outer universe. Applying the Einstein field equations to collapsing stars, Karl Schwarzschild discovered the critical radius for a given mass at which matter would collapse into an infinitely dense state known as a singularity.
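Schwarzschild's critical radius has a simple closed form, r_s = 2GM/c^2. The Python sketch below is an added illustration of that formula, not something taken from the essay; the example masses are assumptions.

# Illustrative sketch: the critical (Schwarzschild) radius for a
# non-rotating mass. Inside this radius, not even light can escape.

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8             # speed of light, m/s
SOLAR_MASS = 1.989e30   # kg

def schwarzschild_radius_m(mass_kg):
    return 2 * G * mass_kg / C**2

for suns in (1, 8):
    r = schwarzschild_radius_m(suns * SOLAR_MASS)
    print(f"{suns} solar mass(es): r_s is about {r / 1000:.1f} km")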

At the center of the black hole lies the singularity, where matter is crushed to infinite density, the pull of gravity is infinitely strong, and space-time has infinite curvature. Here it is no longer meaningful to speak of space and time, much less space-time. Jumbled up at the singularity, space and time as we know them cease to exist. At the singularity, the laws of physics break down, including Einstein's Theory of General Relativity. This is known as Quantum Gravity. In this realm, space and time are broken apart and cause and effect cannot be unraveled. Even today, there is no satisfactory theory for what happens at and beyond the rim of the singularity.

A rotating black hole has an interesting feature, called a Cauchy horizon, contained in its interior. The Cauchy horizon is a light-like surface which is the boundary of the domain of validity of the Cauchy problem. What this means is that it is impossible to use the laws of physics to predict the structure of the region after the Cauchy horizon. This breakdown of predictability has led physicists to hypothesize that a singularity should form at the Cauchy horizon, forcing the evolution of the interior to stop at the Cauchy horizon, rendering the idea of a region after it meaningless.

Recently this hypothesis was tested in a simple black hole model. A spherically symmetric black hole with a point electric charge has the same essential features as a rotating black hole. It was shown in the spherical model that the Cauchy horizon does develop a scalar curvature singularity. It was also found that the mass of the black hole measured near the Cauchy horizon diverges exponentially as the Cauchy horizon is approached. This led to this phenomena being dubbed "mass inflation."

In order to understand what exactly a black hole is, we must first take a look at the basis for the cause of a black hole. All black holes are formed from the gravitational collapse of a star, usually one with a great, massive core. A star is created when huge, gigantic gas clouds bind together due to attractive forces and form a hot core, combined from all the energy of the two gas clouds. The energy produced when they first collide is so great that a nuclear reaction occurs and the gases within the star start to burn continuously. The hydrogen gas is usually the first type of gas consumed in a star, and then other gas elements such as carbon, oxygen, and helium are consumed.

This chain reaction fuels the star for millions or billions of years, depending upon the amount of gas there is. The star manages to avoid collapsing at this point because of the equilibrium it achieves: the gravitational pull from the core of the star is equal to the gravitational pull of the gases, forming a type of orbit. However, when this equality is broken, the star can go into several different stages.

Usually if the star is small in mass, most of the gases will be consumed while some of them escape. This occurs because there is not a tremendous gravitational pull upon those gases, and therefore the star weakens and becomes smaller. It is then referred to as a white dwarf. A teaspoonful of white dwarf material would weigh five-and-a-half tons on Earth. Yet a white dwarf star can contract no further; its electrons resist further compression by exerting an outward pressure that counteracts gravity. If the star has a larger mass, then it might go supernova, such as SN 1987A, meaning that the nuclear fusion within the star simply goes out of control, causing the star to explode.

After exploding, a fraction of the star is usually left (if it has not turned into pure gas) and that fraction of the star is known as a neutron star. Neutron stars are so dense, a teaspoonful would weigh 100 million tons on Earth. As heavy as neutron stars are, they too can only contract so far. This is because, as crushed as they are, the neutrons also resist the inward pull of gravity, just as a white dwarf's electrons do.

A black hole is one of the last options that a star may take. If the core of the star is so massive (approximately 6-8 times the mass of the sun) then it is most likely that when the star's gases are almost consumed those gases will collapse inward, forced into the core by the gravitational force laid upon them. The core continues to collapse to a critical size or circumference, or "the point of no return."

After a black hole is created, the gravitational force continues to pull in space debris and other types of matters to help add to the mass of the core, making the hole stronger and more powerful.

The most defining quality of a black hole is its emission of gravitational waves so strong they can cause light to bend toward it. Gravitational waves are disturbances in the curvature of space-time caused by the motions of matter. Propagating at (or near) the speed of light, gravitational waves do not travel through space-time as such -- the fabric of space-time itself is oscillating. Though gravitational waves pass straight through matter, their strength weakens as the distance from the original source increases.

Although many physicists doubted the existence of gravitational waves, physical evidence was presented when American researchers observed a binary pulsar system that was thought to consist of two neutron stars orbiting each other closely and rapidly. Radio pulses from one of the stars showed that its orbital period was decreasing. In other words, the stars were spiraling toward each other, and by the exact amount predicted if the system were losing energy by radiating gravity waves.

Most black holes tend to be in a consistent spinning motion as a result of the gravitational waves. This motion absorbs various matter and spins it within the ring (known as the event horizon) that is formed around the black hole. The matter keeps within the event horizon until it has spun into the center where it is concentrated within the core adding to the mass. Such spinning black holes are known as Kerr black holes.

Time runs slower where gravity is stronger. If we look at something next to a black hole, it appears to be in slow motion, and it is. The further into the hole you get, the slower time is running. However, if you are inside, you think that you are moving normally, and everyone outside is moving very fast.
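For a non-rotating hole this slowdown can be quantified: a clock hovering at radius r ticks slower than a faraway clock by the factor sqrt(1 - r_s/r), where r_s is the Schwarzschild radius from the earlier sketch. The Python snippet below is an added illustration with assumed sample radii.

# Illustrative sketch of gravitational time dilation outside a
# non-rotating black hole, for a clock hovering at r = x * r_s.
import math

def time_dilation_factor(r_over_rs):
    """Seconds of local time that pass per second of faraway time."""
    return math.sqrt(1 - 1 / r_over_rs)

for x in (10, 2, 1.1, 1.01):
    print(f"r = {x:>5} r_s: local clock runs at {time_dilation_factor(x):.3f}x")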

Some scientists think that if you enter a black hole the forces inside will transport you to another place in space and time. At the other end would be a white hole, which is theoretically a point in space that just expels matter and energy.

Also as a result of the powerful gravitational waves, most black holes orbit around stars, partly due to the fact that they were once stars. This may cause some problems for the neighboring stars, for if a black hole gets powerful enough it may actually pull a star into it and disrupt the orbit of many other stars. The black hole can then grow strong enough (from the star's mass) as to possibly absorb another star.

When a black hole absorbs a star, the star is first pulled into the ergosphere, which sweeps all the matter into the event horizon, named for its flat horizontal appearance and because this happens to be the place where most of the action within the black hole occurs. When the star passes into the event horizon, the light that the star emits is bent within the current and therefore cannot be seen in space. At this exact point in time, high amounts of radiation are given off, which, with the proper equipment, can be detected and seen as an image of a black hole. Through this technique, astronomers now believe that they have found a black hole known as Centaurus A. The existence of a star apparently orbiting nothingness led astronomers to suggest and confirm the existence of another black hole, Cygnus X-1.

By emitting gravitational waves, non-stationary black holes lose energy, eventually becoming stationary and ceasing to radiate in this manner. In other words, they decay and become stationary black holes, namely holes that are perfectly spherical or whose rotation is perfectly uniform. According to Einstein's Theory of General Relativity, such objects cannot emit gravitational waves.

Black hole electrodynamics is the theory of electrodynamics outside a black hole. This can be very trivial if you consider just a black hole described by the three usual parameters: mass, electric charge, and angular momentum. Initially simplifying the case by disregarding rotation, we simply get the well known solution of a point charge. This is not very physically interesting, since it seems highly unlikely that any black hole (or any celestial body) should not be rotating. Adding rotation, it seems that charge is present. A rotating, charged black hole creates a magnetic field around the hole because the inertial frame is dragged around the hole. Far from the black hole, at infinity, the black hole electric field is that of a point charge.

However, black holes do not even have charges. The magnitude of the gravitational pull repels even charges from the hole, and different charges would neutralize the charge of the hole.

The domain of a black hole can be separated into three regions, the first being the rotating black hole and the area near it, the accretion disk (a region of force-free fields), and an acceleration region outside the plasma.

Disk accretion can occur onto supermassive black holes at the center of galaxies and in binary systems between a black hole (not necessarily supermassive) and a supermassive star. The accretion disk of a rotating black hole is driven by the black hole into the equatorial plane of the rotation. The force on the disk is gravitational.

Black holes are not really black, because they can radiate matter and energy. As they do this, they slowly lose mass, and thus are said to evaporate.

Black holes, it turns out, follow the basic laws of thermo-dynamics. The gravitational acceleration at the event horizon corresponds to the temperature term in thermo-dynamical equations, mass corresponds to energy, and the rotational energy of a spinning black hole is similar to the work term for ordinary matter, such as gas. Black holes have a finite temperature; this temperature is inversely proportional to the mass of the hole. Hence smaller holes are hotter. The surface area of the event horizon also has significance because it is related to the entropy of the hole.

Entropy, for a black hole, can be said to be the logarithm of the number of ways it could have been made. The logarithm of the number of microscopic arrangements that could give rise to the observed macroscopic state is just the standard definition of entropy. The enormous entropy of a black hole results from the lost information concerning the structural and chemical properties before it collapsed. Only three properties can remain to be observed in the black hole: mass, spin, and charge.

Physicist Stephen Hawking realized that because a black hole has a finite entropy and temperature, it can be in thermal equilibrium with its surroundings, and therefore must be able to radiate. Hawking radiation, as it is known, is allowed by a quantum mechanism called virtual particles. As a consequence of the uncertainty principle, and the equivalence of matter and energy, a particle and its antiparticle can appear spontaneously, exist for a very short time, and then turn back into energy. This is happening all the time, all over the universe. It has been observed in the "Lamb shift" of the spectrum of the hydrogen atom: the spectrum of light is altered slightly because the tiny electric fields of these virtual pairs cause the atom's electron to shake in its orbit.

Now, if a virtual pair appears near a black hole, one particle might become caught up in the hole's gravity and dragged in, leaving the other without its partner. Unable to annihilate and turn back into energy, the lone particle must become real, and can now escape the black hole. Therefore, mass and energy are lost; they must come from someplace, and the only source is the black hole itself. So the hole loses mass.

If the hole has a small mass, it will have a small radius. This makes it easier for the virtual particles to split up and one to escape from the gravitational pull, since they can only separate by about a wavelength. Therefore, hotter black holes (which are less massive) evaporate much more quickly than larger ones. The evaporation timescale can be derived by using the expression for temperature, which is inversely proportional to mass, the expression for area, which is proportional to mass squared, and the blackbody power law. The result is that the time required for the black hole to totally evaporate is proportional to the original mass cubed. As expected, smaller black holes evaporate more quickly than more massive ones.
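The scaling described above can be made concrete with the standard formulas for the Hawking temperature, T = hbar*c^3 / (8*pi*G*M*k_B), and the evaporation time, t = 5120*pi*G^2*M^3 / (hbar*c^4). The Python sketch below is an added, order-of-magnitude illustration; the "mountain-sized" mass is an assumed example value.

# Rough sketch: Hawking temperature is inversely proportional to mass,
# and evaporation time grows as the cube of the mass.
import math

G = 6.674e-11          # gravitational constant
C = 2.998e8            # speed of light, m/s
HBAR = 1.055e-34       # reduced Planck constant
KB = 1.381e-23         # Boltzmann constant
SOLAR_MASS = 1.989e30  # kg
YEAR_S = 3.156e7       # seconds in a year

def hawking_temperature_k(mass_kg):
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * KB)

def evaporation_time_years(mass_kg):
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4) / YEAR_S

for label, m in [("1 solar mass", SOLAR_MASS),
                 ("mountain-sized (~2e11 kg)", 2e11)]:
    print(f"{label}: T ~ {hawking_temperature_k(m):.2e} K, "
          f"lifetime ~ {evaporation_time_years(m):.1e} years")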

The lifetime for a black hole with twice the mass of the sun should be about 10^67 years, but if it were possible for black holes to exist with masses on the order of a mountain, these would be furiously evaporating today. Although only stars around the mass of two suns or greater can form black holes in the present universe, it is conceivable that in the extremely hot and dense very early universe, small lumps of overdense matter collapsed to form tiny primordial black holes. These would have shrunk to an even smaller size today and would be radiating intensely. Evaporating black holes will finally be reduced to a mass where they explode, converting the rest of the matter to energy instantly. Although there is no real evidence for the existence of primordial black holes, there may still be some of them, evaporating at this very moment.

The first scientists to really take an in-depth look at black holes and the collapse of stars were Professor Robert Oppenheimer and his student Hartland Snyder, in 1939. They concluded, on the basis of Einstein's theory of relativity, that if the speed of light is the utmost speed of any object, then nothing can escape a black hole once it is in its gravitational orbit.

The name "black hole" was given due to the fact that light could not escape from the gravitational pull from the core, thus making the "black hole" impossible for humans to see without using technological advancements for measuring such things as radiation. The second part of the word was given the name "hole" due to the fact that the actual hole is where everything is absorbed and where the central core, known as the singularity, presides. This core is the main part of the black hole where the mass is concentrated and appears purely black on all readings, even through the use of radiation detection devices.

Just recently a major discovery was found with the help of a device known as The Hubble Telescope. This telescope has just recently found what many astronomers believe to be a black hole, after focusing on a star orbiting empty space. Several pictures were sent back to Earth from the telescope showing many computer enhanced pictures of various radiation fluctuations and other diverse types of readings that could be read from the area in which the black hole is suspected to be in.

Several diagrams were made showing how astronomers believe that if you could somehow survive a trip through the center of the black hole, there would be enough gravitational force to possibly warp you to another end of the universe or possibly to another universe. The creative ideas that can be hypothesized from this discovery are endless.

Although our universe is filled with many unexplained, glorious phenomena, it is our duty to continue exploring them and to continue learning, but in the process we must not take any of it for granted.

As you have read, black holes are a major feature of our universe, and they arouse so much curiosity that they could possibly hold unlimited uses. Black holes are a phenomenon that still leaves astronomers very puzzled. It seems that as we get closer to solving their existence and functions, we only end up with more and more questions.

Although these questions just lead us into more and more unanswered problems, we seek and find refuge in them, dreaming that maybe one far-off, distant day we will understand all the conceptions and will be able to use the universe to our advantage and go where only our dreams could take us.




BEC the new phase of matter

B E C
The New Phase of Matter

A new phase of matter has been discovered seventy years after Albert Einstein predicted its existence. In this new state of matter, atoms do not move around the way they would in an ordinary gas; instead, the atoms move in lock step with one another and have identical quantum properties. This will make it easier for physicists to research the mysterious properties of quantum mechanics. It was named "Molecule of the Year" because it was such a major discovery, even though it is not a molecule at all. The phase, called the Bose-Einstein condensate (BEC), follows the laws of quantum physics.
In early 1995, scientists at the National Institute of Standards and Technology and the University of Colorado were the first to produce a BEC. They magnetically trapped rubidium atoms and then supercooled the atoms to almost absolute zero. The graphic on the cover shows the Bose-Einstein condensation, where the atoms' velocities peak close to zero and the atoms slowly emerge from the condensate. The atoms were slowed to this low velocity using laser beams. The hardware needed to create the BEC is a bargain at $50,000 to $100,000, which makes it accessible to physics labs around the world.
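For an ideal gas of bosons, the condensation temperature can be estimated as T_c = (2*pi*hbar^2 / (m*k_B)) * (n / 2.612)^(2/3). The Python sketch below is an added illustration of why the rubidium atoms had to be cooled to such extreme temperatures; the trap density used is an assumed, order-of-magnitude value, not a figure from the article.

# Rough sketch: the ideal-gas estimate of the BEC transition temperature
# for rubidium-87 at an assumed trap density.
import math

HBAR = 1.055e-34        # reduced Planck constant, J*s
KB = 1.381e-23          # Boltzmann constant, J/K
M_RB87 = 87 * 1.66e-27  # mass of a rubidium-87 atom, kg
ZETA_3_2 = 2.612        # Riemann zeta(3/2)

def bec_critical_temperature_k(density_per_m3):
    return (2 * math.pi * HBAR**2 / (M_RB87 * KB)
            * (density_per_m3 / ZETA_3_2) ** (2 / 3))

n = 2.5e18  # atoms per cubic meter (assumed, order-of-magnitude)
print(f"Estimated T_c is about {bec_critical_temperature_k(n) * 1e9:.0f} nK")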
The next step is to test the new phase of matter. We do not know yet whether it absorbs, reflects, or refracts light. BEC is related to superconductivity and may unlock some of the mysteries of why some materials are able to conduct electricity without resistance. The asymmetrical pattern of BEC is thought by some astrophysicists to explain the bumpy distribution of matter in the early universe, a distribution that eventually led to the formation of galaxies. Physicists are also working on creating an atom laser, using new technology derived from the BEC. The new lasers would be able to create etchings finer than those that etch silicon chips today.
The discovery of BEC has prompted a great deal of research into the new phase, which is expected to yield benefits to industry and society. I expect that large businesses will take advantage of the new technology and start making products based on it, probably in the form of the atom lasers BEC is expected to make possible. The lasers might be used for laser surgery, or any application where lasers are used today.

beam me up scotty

Beam Me Up Scotty
Some people think that teleportation is not possible, while other people think that it is, and they are doing it.
The idea behind teleportation is that an object is equivalent to the information needed to construct it. The object can then be transported by transmitting that information (1 byte = 8 bits, each a single yes-or-no answer) along a telecommunications channel; on the other end of the line is a receiver that reconstructs the object using the information given. It works just like a fax machine, except that a normal document fax takes up about 20 kilobytes (20,000 bytes), whereas a human "fax," or teleportation, would take 10 gigabytes (10,000,000,000 bytes) for just one millimeter of human (A Fun Talk On Teleportation). But with a few technical breakthroughs, you might imagine, you'd be able to teleport over to a friend's house for dinner simply by stepping into a scanner that would record all the information about the atoms making you up. With all the data collected, the machine would then vaporize your body and relay the information to your friend's teleporter, which would rebuild you using basic molecular elements.
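Taking the figures quoted above at face value, a quick back-of-the-envelope calculation shows why the data problem is so severe. The Python sketch below is an added illustration only; the person's height and the link speed are assumptions, and the "10 gigabytes per millimeter" figure is simply the one cited in the text.

# Rough sketch: how much data a whole-person "fax" would be, and how long
# it would take to send over an ordinary fast link.

GB = 1e9                       # bytes in a gigabyte (decimal convention)
data_per_mm_bytes = 10 * GB    # figure quoted in the text
height_mm = 1_700              # an assumed height of 1.7 m

total_bytes = data_per_mm_bytes * height_mm
print(f"Total data: {total_bytes / 1e12:.0f} terabytes")

link_speed_bytes_per_s = 125e6  # an assumed 1 gigabit/s link = 125 MB/s
seconds = total_bytes / link_speed_bytes_per_s
print(f"Transmission time at 1 Gbit/s: about {seconds / 3600:.0f} hours")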
Some people don't try to think of a scientific answer to it, they just know that they can move something from point A to point B.
There are many kinds of teleportation. One kind is transferring a picture of an image onto a piece of film in a special camera called a tele-camera: the teleporter holds the lens of the camera to his or her forehead and thinks about the picture as hard as possible. Most of the time nothing shows up on the film, but a couple of times the picture, usually of a building or historical marker, has barely shown up.
Another kind of teleportation is water witching, which is the act of bending small metal objects such as a spoon or some keys without touching them. In one famous instance, a well-known practitioner appeared on a popular television show and bent a fork and a spoon; several people called the station saying that while the show was on, their own forks and spoons had all bent up as well.
Usually when somebody says teleportation, people think of Star Trek, but instead of stepping onto a scanner and moving your body, some people can actually lift themselves into the air just by hypnotizing themselves. In maybe a few years, with a little more technology, people might replace cars, buses, trains, and planes with teleporters.

Aspirin

Andrew Donehoo
January 15, 1997
8-3
Aspirin
Aspirin is a white crystalline substance made of carbon, hydrogen, and oxygen. It is used in the treatment of rheumatic fever, headaches, neuralgia, colds, and arthritis, and to reduce fever and pain. The formula for aspirin is CH3CO2C6H4CO2H. Aspirin's scientific name is acetylsalicylic acid (ASA). The main ingredient in ASA is salicylic acid, which occurs naturally in the roots, leaves, flowers, and fruits of many plants.
About 100 years ago, a German chemist, Felix Hoffmann, set out to find a drug that would ease his father's arthritis without causing the severe stomach irritation that came with sodium salicylate, the standard anti-arthritis treatment of the time. Hoffmann figured that the acidity of the salicylate made it hard on the stomach's lining, so he began looking for a less acidic formulation. His search led him to the synthesis of acetylsalicylic acid. The compound shared the therapeutic properties of other salicylates but caused less stomach irritation. ASA reduced fever, relieved moderate pain, and, at higher doses, alleviated rheumatic fever and arthritic conditions.
Hoffmann was confident that ASA would prove more effective than other salicylates, but his superiors incorrectly stated that ASA weakened the heart and that physicians would not prescribe it. Hoffmann's employer, Friedrich Bayer and Company, gave ASA its now famous name, aspirin.
It is not yet fully known how aspirin works, but most authorities agree that it achieves some of its effects by hindering the production of prostaglandins. Prostaglandins are hormone-like substances that influence the elasticity of blood vessels. John Vane, Ph.D., noted that many forms of tissue injury were followed by the release of prostaglandins, and it was proved that prostaglandins caused redness and fever, common signs of inflammation. Vane's research showed that by blocking the flow of prostaglandins, aspirin prevented blood from aggregating and forming blood clots.
Aspirin can be used for the temporary relief of headaches, painful discomfort and fever from colds, muscular aches and pains, and temporary relief to minor pains of arthritis, toothaches, and menstrual pain. Aspirin should not be used in patients who have an allergic reaction to aspirin and/or nonsteroidal anti-inflammatory agents.
The usual dosage for adults and children over the age of 12 is one or two tablets with water. This may be repeated every 4 hours as necessary, up to 12 tablets a day or as directed by your doctor. You should not give aspirin to children under the age of 12. An overdose of 200 to 500 mg/kg is in the fatal range. Early symptoms of overdose are vomiting, hyperpnea, hyperactivity, and convulsions; this progresses quickly to depression, coma, respiratory failure, and collapse. In case of an overdose, intensive supportive therapy should be instituted immediately. Plasma salicylate levels should be measured in order to determine the severity of the poisoning and to provide a guide for therapy. Emptying the stomach should be accomplished as soon as possible.
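As an added illustration of that arithmetic (not medical guidance), the sketch below compares the maximum daily intake described above with the quoted overdose range; the 325 mg tablet strength and the 70 kg body weight are assumptions.

# Rough sketch comparing the maximum daily dose described above with the
# overdose range quoted in the text.

tablet_mg = 325           # a common regular-strength tablet (assumed)
max_tablets_per_day = 12  # limit given in the text
body_weight_kg = 70       # assumed adult weight

max_daily_mg = tablet_mg * max_tablets_per_day
dose_mg_per_kg = max_daily_mg / body_weight_kg

print(f"Maximum daily dose: {max_daily_mg} mg "
      f"({dose_mg_per_kg:.0f} mg/kg for a {body_weight_kg} kg adult)")
print("Overdose range cited in the text: 200-500 mg/kg")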
Children and teenagers should not use aspirin for chicken pox or flu symptoms before a doctor is consulted. You should not take this product if you are allergic to aspirin, or if you have asthma, recurring stomach problems, gastric ulcers, or bleeding problems, unless directed by a doctor. Aspirin should be kept out of reach of children. In case of an overdose, you should seek professional assistance or contact a poison control center immediately. If you are pregnant or nursing a baby, seek the advice of a health professional before taking aspirin.
Since the discovery of aspirin, it has been shown to prevent or protect against recurrent strokes, throat cancer, breast cancer, and colon cancer, and to reduce the effects of heart attacks and strokes. A heart attack occurs when there is a blockage of blood flow to the heart muscle. Without adequate blood supply, the affected area of muscle dies and the heart's pumping action is either impaired or stopped altogether. When aspirin is taken, it thins the blood, allowing it to pass through narrowed blood vessels. Studies show that people who take an aspirin on a daily basis have a reduced risk of heart attack or stroke.
Though aspirin is taken for granted, it is a product that, over many years, evolved from willow bark into the acetylsalicylic acid that we take for symptoms ranging from the common cold to heart attacks.
The top diagram on the next page shows the Kolbe synthesis, the process by which salicylic acid is produced. The middle diagram shows the process that turns salicylic acid into acetylsalicylic acid. In the 3-D model of aspirin, the gray atoms are carbon, the white atoms are hydrogen, and the red atoms are oxygen.
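For readers without the diagrams, the standard textbook reactions they depict can be sketched in outline: in the Kolbe synthesis, sodium phenoxide reacts with carbon dioxide under heat and pressure and the product is then acidified to give salicylic acid (C6H5ONa + CO2 -> C7H5NaO3, followed by acidification to C7H6O3); in the second step, salicylic acid reacts with acetic anhydride to give acetylsalicylic acid plus acetic acid (C7H6O3 + C4H6O3 -> C9H8O4 + C2H4O2).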

Albert Einstein

ALBERT EINSTEIN

Einstein was a German-American physicist who contributed more to the 20th-century vision of physical reality than any other scientist. To many people, Einstein's theory of RELATIVITY, like his other theories, seemed to be the product of pure thought.

LIFE
Albert Einstein was born in Ulm, Germany, on March 14, 1879. Einstein's parents were nonobservant Jews who moved from Ulm to Munich when Einstein was an infant. The family business was the manufacture of electrical equipment. When the business failed in 1894, the family moved to Milan, Italy, and Einstein decided to officially give up his German citizenship. Within a year, still not having completed secondary school, he failed an examination that would have allowed him to pursue a diploma in electrical engineering at the Swiss Federal Institute of Technology (the Zurich Polytechnic). He spent the following year in Aarau, where there were excellent teachers and an excellent physics facility. In 1896 he returned to the Zurich Polytechnic, where he graduated in 1900 as a secondary school teacher of math and physics.
Two years later, he acquired a post at the Swiss patent office in Bern. While he was employed there from 1902 to 1909, he completed an extraordinary range of publications in theoretical physics, most of them written in his spare time. In 1905 he submitted one of his many scientific papers to the University of Zurich to obtain a Ph.D. degree. In 1908 he sent another scientific paper to the University of Bern and became a lecturer there.
In 1914 Einstein returned to Germany but did not reapply for citizenship. He was one of only a handful of German professors who opposed the use of force and did not support Germany's war aims. After the war, the Allies wanted German scientists excluded from international meetings, but because Einstein was a Jew traveling with a Swiss passport, he remained an acceptable German delegate. Albert Einstein's political views as a pacifist and a Zionist placed him against conservatives in Germany, who labeled him a traitor and a defeatist.
With the rise of fascism in Germany, Einstein moved to the United States in 1933 and abandoned his pacifism. He reluctantly agreed that the new danger had to be brought down by force of arms. In 1939 he sent a letter to President Franklin D. Roosevelt that urged America to develop an ATOMIC BOMB before the Germans did. The letter, one of several exchanges between Einstein and the White House, contributed to Roosevelt's decision to fund what became the MANHATTAN PROJECT.
Until the end of his life, Einstein searched for a unified field theory, by which the phenomena of gravitation and electromagnetism could be derived from one set of equations. Albert Einstein died in 1955 in Princeton, New Jersey, where he had held a research position at the Institute for Advanced Study.

RELATIVITY
Einstein's theory of relativity caused a major revolution in 20th-century physics and astronomy. It introduced to science the concept of "relativity", the idea that there is no absolute motion, only relative motion, and in doing so replaced Isaac Newton's 200-year-old theory of mechanics. "Einstein showed that we do not reside in the flat, Euclidean space and uniform, absolute time of everyday life, but in another environment: curved space-time." The theory played a part in advances in physics that led to the nuclear era, with potential for benefit as well as devastation, and made possible an understanding of the microworld of elementary particles and their interactions.

Albert Einstein

Einstein, Albert (1879-1955), German-born American physicist and Nobel laureate, best known as the creator of the special and general theories of relativity and for his bold hypothesis concerning the particle nature of light. He is perhaps the most well-known scientist of the 20th century.
Einstein was born in Ulm on March 14, 1879, and spent his youth in Munich, where his family owned a small shop that manufactured electric machinery. He did not talk until the age of three, but even as a youth he showed a brilliant curiosity about nature and an ability to understand difficult mathematical concepts. At the age of 12 he taught himself Euclidean geometry.
Einstein hated the dull regimentation and unimaginative spirit of school in Munich. When repeated business failure led the family to leave Germany for Milan, Italy, Einstein, who was then 15 years old, used the opportunity to withdraw from the school. He spent a year with his parents in Milan, and when it became clear that he would have to make his own way in the world, he finished secondary school in Aarau, Switzerland, and entered the Swiss National Polytechnic in Zürich. Einstein did not enjoy the methods of instruction there. He often cut classes and used the time to study physics on his own or to play his beloved violin. He passed his examinations and graduated in 1900 by studying the notes of a classmate. His professors did not think highly of him and would not recommend him for a university position.
For two years Einstein worked as a tutor and substitute teacher. In 1902 he secured a position as an examiner in the Swiss patent office in Bern. In 1903 he married Mileva Marić, who had been his classmate at the polytechnic. They had two sons but eventually divorced. Einstein later remarried.

Early Scientific Publications
In 1905 Einstein received his doctorate from the University of Zürich for a theoretical dissertation on the dimensions of molecules, and he also published three theoretical papers of central importance to the development of 20th-century physics. In the first of these papers, on Brownian motion, he made significant predictions about the motion of particles that are randomly distributed in a fluid. These predictions were later confirmed by experiment.
The second paper, on the photoelectric effect, contained a revolutionary hypothesis concerning the nature of light. Einstein not only proposed that under certain circumstances light can be considered as consisting of particles, but he also hypothesized that the energy carried by any light particle, called a photon, is proportional to the frequency of the radiation. The formula for this is E = hν, where E is the energy of the radiation, h is a universal constant known as Planck's constant, and ν (the Greek letter nu) is the frequency of the radiation. This proposal, that the energy contained within a light beam is transferred in individual units, or quanta, contradicted a hundred-year-old tradition of considering light energy a manifestation of continuous processes. Virtually no one accepted Einstein's proposal. In fact, when the American physicist Robert Andrews Millikan experimentally confirmed the theory almost a decade later, he was surprised and somewhat disquieted by the outcome.
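As a sense of scale (the frequency below is an assumed example for violet light, not a figure from this account), Planck's relation gives the energy of a single photon:

# Illustrative sketch: energy of one photon from E = h * nu
h = 6.626e-34            # Planck's constant, in joule-seconds
nu = 7.5e14              # assumed frequency of violet light, in hertz
E = h * nu               # about 5.0e-19 joules
print(E, E / 1.602e-19)  # second value converts to electron volts, roughly 3.1 eV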
Einstein, whose prime concern was to understand the nature of electromagnetic radiation, subsequently urged the development of a theory that would be a fusion of the wave and particle models for light. Again, very few physicists understood or were sympathetic to these ideas.

Einstein's Special Theory of Relativity
Einstein's third major paper in 1905, "On the Electrodynamics of Moving Bodies," contained what became known as the special theory of relativity. Since the time of the English mathematician and physicist Sir Isaac Newton, natural philosophers (as physicists and chemists were known) had been trying to understand the nature of matter and radiation, and how they interacted in some unified world picture. The position that mechanical laws are fundamental has become known as the mechanical world view, and the position that electrical laws are fundamental has become known as the electromagnetic world view. Neither approach, however, is capable of providing a consistent explanation for the way radiation (light, for example) and matter interact when viewed from different inertial frames of reference, that is, an interaction viewed simultaneously by an observer at rest and an observer moving at uniform speed.
In the spring of 1905, after considering these problems for ten years, Einstein realized that the crux of the problem lay not in a theory of matter but in a theory of measurement. At the heart of his special theory of relativity was the realization that all measurements of time and space depend on judgments as to whether two distant events occur simultaneously. This led him to develop a theory based on two postulates: the principle of relativity, that physical laws are the same in all inertial reference systems, and the principle of the invariance of the speed of light, that the speed of light in a vacuum is a universal constant. He was thus able to provide a consistent and correct description of physical events in different inertial frames of reference without making special assumptions about the nature of matter or radiation, or how they interact. Virtually no one understood Einstein's argument.
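One familiar quantitative consequence of the two postulates, offered here only as an illustration and not as part of the 1905 paper as summarized above, is time dilation: a clock moving at speed v appears to run slow by the Lorentz factor. A minimal sketch, assuming a speed of 0.6 times the speed of light:

import math

def lorentz_factor(v_over_c):
    """Factor by which a clock moving at the given fraction of light speed appears to run slow."""
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

print(lorentz_factor(0.6))  # 1.25: one tick of the moving clock spans 1.25 ticks for the observer at rest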

Early Reactions to Einstein
The difficulty that others had with Einstein's work was not because it was too mathematically complex or technically obscure; the problem resulted, rather, from Einstein's beliefs about the nature of good theories and the relationship between experiment and theory. Although he maintained that the only source of knowledge is experience, he also believed that scientific theories are the free creations of a finely tuned physical intuition and that the premises on which theories are based cannot be connected logically to experiment. A good theory, therefore, is one in which a minimum number of postulates is required to account for the physical evidence. This sparseness of postulates, a feature of all Einstein's work, was what made his work so difficult for colleagues to comprehend, let alone support.
Einstein did have important supporters, however. His chief early patron was the German physicist Max Planck. Einstein remained at the patent office for four years after his star began to rise within the physics community. He then moved rapidly upward in the German-speaking academic world; his first academic appointment was in 1909 at the University of Zürich. In 1911 he moved to the German-speaking university at Prague, and in 1912 he returned to the Swiss National Polytechnic in Zürich. Finally, in 1913, he was appointed director of the Kaiser Wilhelm Institute for Physics in Berlin.

The General Theory of Relativity
Even before he left the patent office in 1907, Einstein began work on extending and generalizing the theory of relativity to all coordinate systems. He began by enunciating the principle of equivalence, a postulate that gravitational fields are equivalent to accelerations of the frame of reference. For example, people in a moving elevator cannot, in principle, decide whether the force that acts on them is caused by gravitation or by a constant acceleration of the elevator. The full general theory of relativity was not published until 1916. In this theory the interactions of bodies, which heretofore had been ascribed to gravitational forces, are explained as the influence of bodies on the geometry of space-time (four-dimensional space, a mathematical abstraction, having the three dimensions from Euclidean space and time as the fourth dimension).
On the basis of the general theory of relativity, Einstein accounted for the previously unexplained variations in the orbital motion of the planets and predicted the bending of starlight in the vicinity of a massive body such as the sun. The confirmation of this latter phenomenon during an eclipse of the sun in 1919 became a media event, and Einstein's fame spread worldwide.
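For the curious, the deflection that the 1919 eclipse expedition set out to measure can be estimated from the relation delta = 4GM/(c^2 R) for a ray grazing the sun's edge; the sketch below uses standard values for the sun and is an illustration only.

# Illustrative sketch: bending of starlight grazing the sun
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30         # mass of the sun, kg
c = 2.998e8          # speed of light, m/s
R = 6.96e8           # radius of the sun, m
deflection_rad = 4 * G * M / (c ** 2 * R)
print(deflection_rad * 206265)   # about 1.75 arcseconds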
For the rest of his life Einstein devoted considerable time to generalizing his theory even more. His last effort, the unified field theory, which was not entirely successful, was an attempt to understand all physical interactions, including electromagnetic interactions and weak and strong interactions, in terms of the modification of the geometry of space-time between interacting entities.
Most of Einstein's colleagues felt that these efforts were misguided. Between 1915 and 1930 the mainstream of physics lay in developing a new conception of the fundamental character of matter, known as quantum theory. This theory contained the feature of wave-particle duality (light exhibits the properties of a particle, as well as of a wave) that Einstein had earlier urged as necessary, as well as the uncertainty principle, which states that precision in measuring processes is limited. Additionally, it contained a novel rejection, at a fundamental level, of the notion of strict causality. Einstein, however, would not accept such notions and remained a critic of these developments until the end of his life. "God," Einstein once said, "does not play dice with the world."

World Citizen
After 1919, Einstein became internationally renowned. He accrued honors and awards, including the Nobel Prize in physics in 1921, from various world scientific societies. His visit to any part of the world became a national event; photographers and reporters followed him everywhere. While regretting his loss of privacy, Einstein capitalized on his fame to further his own political and social views.
The two social movements that received his full support were pacifism and Zionism. During World War I he was one of a handful of German academics willing to publicly decry Germany's involvement in the war. After the war his continued public support of pacifist and Zionist goals made him the target of vicious attacks by anti-Semitic and right-wing elements in Germany. Even his scientific theories were publicly ridiculed, especially the theory of relativity.
When Hitler came to power, Einstein immediately decided to leave Germany for the United States. He took a position at the Institute for Advanced Study at Princeton, New Jersey. While continuing his efforts on behalf of world Zionism, Einstein renounced his former pacifist stand in the face of the awesome threat to humankind posed by the Nazi regime in Germany.
In 1939 Einstein collaborated with several other physicists in writing a letter to President Franklin D. Roosevelt, pointing out the possibility of making an atomic bomb and the likelihood that the German government was embarking on such a course. The letter, which bore only Einstein's signature, helped lend urgency to efforts in the U.S. to build the atomic bomb, but Einstein himself played no role in the work and knew nothing about it at the time.
After the war, Einstein was active in the cause of international disarmament and world government. He continued his active support of Zionism but declined the offer made by leaders of the state of Israel to become president of that country. In the U.S. during the late 1940s and early '50s he spoke out on the need for the nation's intellectuals to make any sacrifice necessary to preserve political freedom. Einstein died in Princeton on April 18, 1955.
Einstein's efforts in behalf of social causes have sometimes been viewed as unrealistic. In fact, his proposals were always carefully thought out. Like his scientific theories, they were motivated by sound intuition based on a shrewd and careful assessment of evidence and observation. Although Einstein gave much of himself to political and social causes, science always came first, because, he often said, only the discovery of the nature of the universe would have lasting meaning. His writings include Relativity: The Special and General Theory (1916); About Zionism (1931); Builders of the Universe (1932); Why War? (1933), with Sigmund Freud; The World as I See It (1934); The Evolution of Physics (1938), with the Polish physicist Leopold Infeld; and Out of My Later Years (1950). Einstein's collected papers are being published in a multivolume work, beginning in 1987.

Albert Einstein 2

ALBERT EINSTEIN


Albert Einstein was born in Germany on March 14, 1879. As a kid he had trouble learning to speak, and his parents thought that he might be mentally retarded. He was not smart in school. He suffered under the learning methods used in the schools of Germany at that time, so he was never able to finish his studies. In 1894 his father's business failed and the family moved to Milan, Italy. Einstein, who had grown interested in science, went to Zurich, Switzerland, to enter a famous technical school. There his ability in mathematics and physics began to show.
When Einstein graduated in 1900, he was unable to get a teaching appointment at a university. Instead he got a clerical job in the patent office at Bern, Switzerland. It was not what he wanted, but it gave him leisure for studying and thinking, and while there he wrote scientific papers. Einstein submitted one of his scientific papers to the University of Zurich to obtain a Ph.D. degree in 1905. In 1908 he sent a second paper to the University of Bern and became a lecturer there. The next year Einstein received a regular appointment as associate professor of physics at the University of Zurich. By 1909, Einstein was recognized throughout Europe as a leading scientific thinker. The fame that resulted from his theories got Einstein a post at the University of Prague in 1911, and in 1913 he was appointed director of a new research institution opened in Berlin, the Kaiser Wilhelm Physics Institute.
In 1915, during World War I, Einstein published a paper that extended his theories. He put forth new views on the nature of gravitation: Newton's theories, he said, were not accurate enough. Einstein's theories seemed to explain the slow rotation of the entire orbit of the planet Mercury, which Newton's theories did not explain. Einstein's theories also predicted that light rays passing near the sun would be bent out of a straight line. When this was verified at the eclipse of 1919, Einstein was instantly accepted as the greatest scientific thinker since Newton.
By now Germany had fallen into the hands of Adolf Hitler and his Nazis, and Albert Einstein was Jewish. In 1933, when the Nazis came to power, Einstein happened to be in California. He did not return to Germany; he went to Belgium instead. The Nazis confiscated his possessions, publicly burned his writings, and expelled him from all German scientific societies. Einstein came back to the United States and became a citizen.
The atomic bomb is an explosive device that depends upon the release of energy in a nuclear reaction known as FISSION, which is the splitting of atomic nuclei. Einstein sent a letter to President Franklin D. Roosevelt, pointing out that atomic bombs were possible and that enemy nations must not be allowed to make them first. Roosevelt agreed with Einstein and funded the Manhattan Project.
On April 18, 1955, Albert Einstein died. To his dying day, he urged the world to come to some agreement that would make nuclear wars forever impossible.

SHAHIN TEHRANI

aerospace wind tunnel

Wind Tunnel

In this report I will talk about the wind tunnel. I will describe what wind tunnels are used for and the different types of wind tunnels, from the slow-speed subsonic tunnels to the high-speed hypersonic tunnels. I will also give a few examples of the wind tunnels used today.
The wind tunnel is a device used by many people, from high school students to NASA engineers. It is used to test aircraft to see how well they will perform under certain conditions. The test article may be as big as a full-size 747 or as small as a match. To understand how a wind tunnel is used to help in the design process, you have to know how a wind tunnel works.
How Wind Tunnels Work
A wind tunnel is a machine used to "fly" aircraft, missiles, engines, and rockets on the ground under pre-set conditions. With a wind tunnel you can choose the air speed, pressure, altitude, and temperature, to name a few things. A wind tunnel usually has a tube-like appearance; wind produced by a large fan flows over the item being tested (a plane, missile, rocket, etc.) or a model of it. The object is fixed in the test section of the tunnel, and instruments are placed on the model to record the aerodynamic forces acting on it.
Types of Wind Tunnels
There are four basic types of wind tunnels: low subsonic, transonic, supersonic, and hypersonic. Wind tunnels are classified by the speeds they can produce. A subsonic tunnel runs at speeds lower than the speed of sound. A transonic tunnel runs at speeds near the speed of sound (Mach 1, about 760 miles per hour at sea level). A supersonic tunnel (roughly Mach 2.75 to 4.96) runs at up to about five times the speed of sound, and the fastest of all, the hypersonic tunnel (up to about Mach 39.5), can exceed 30,000 miles per hour.
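Because these classes are defined by Mach number, the ratio of the test airspeed to the local speed of sound, a short sketch (assuming standard sea-level air; the sample airspeed is made up) shows how the ratio is computed:

import math

def speed_of_sound(temp_kelvin):
    """Speed of sound in air, a = sqrt(gamma * R * T)."""
    gamma, gas_constant = 1.4, 287.05     # ratio of specific heats and gas constant for air (J/kg/K)
    return math.sqrt(gamma * gas_constant * temp_kelvin)

def mach_number(airspeed_ms, temp_kelvin):
    return airspeed_ms / speed_of_sound(temp_kelvin)

a = speed_of_sound(288.15)               # sea-level standard day: about 340 m/s, or roughly 760 mph
print(round(a, 1), round(mach_number(680.0, 288.15), 2))   # 680 m/s works out to about Mach 2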
Wind Tunnel Tests
There are basically two types of wind tunnel tests: the static stability test and the pressure test. With these two tests you can determine the aerodynamic characteristics of the aircraft. The static stability test measures the forces and moments produced by the external flow over the model. These include the axial, side, and normal forces and the rolling, pitching, and yawing moments. The forces are found by using a strain gauge balance attached to the model. A shadowgraph can then be used to show the shock waves and flow fields at a given speed or angle of attack, and oil flow visualization shows the surface flow pattern.
The pressure test is used to determine the pressures acting on the test object. This is done by placing taps over the surface and connecting them to transducers that read the local pressures. With this information the plane can be balanced. The static stability and pressure test data are then combined to find the distributed loads.
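One common way to reduce pressure-tap data like this is to express each reading as a dimensionless pressure coefficient relative to the free-stream conditions; the sketch below is a generic illustration with made-up numbers, not data from any test described here.

def pressure_coefficient(p_local, p_inf, rho_inf, v_inf):
    """Cp = (p - p_inf) / q_inf, where q_inf = 0.5 * rho * V^2 is the free-stream dynamic pressure."""
    q_inf = 0.5 * rho_inf * v_inf ** 2
    return (p_local - p_inf) / q_inf

# Hypothetical tap readings (pascals) at assumed sea-level free-stream conditions
taps = [101900.0, 101200.0, 100800.0, 101500.0]
p_inf, rho_inf, v_inf = 101325.0, 1.225, 50.0   # free-stream pressure, density, velocity
print([round(pressure_coefficient(p, p_inf, rho_inf, v_inf), 2) for p in taps])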
Wind Tunnels Used Today
Wind tunnels vary in size from a few inches across to the 12 m by 24 m (40 ft by 80 ft) tunnel located at the Ames Research Center of the National Aeronautics and Space Administration (NASA) at Moffett Field, California. This wind tunnel at Ames can accommodate a full-size aircraft with a wingspan of 22 m (72 ft). Ames also has a hypervelocity tunnel that can create air velocities of up to 30,000 mph (48,000 km/h) for one second. This high speed is achieved by using an explosive charge to drive a small model of the spacecraft into the tunnel in one direction while another explosive charge simultaneously pushes gas into the tunnel from the other direction. A wind tunnel at the Lewis Flight Propulsion Laboratory, also owned by NASA, in Cleveland, Ohio, can test full-size jet engines at air velocities of up to 2,400 mph (3,860 km/h) and at simulated altitudes of up to 100,000 ft (30,500 m).
Benefits of the Wind Tunnel
There are many benefits to be gained from using a wind tunnel. Designing an airplane is a long, complicated, and expensive process. With a wind tunnel you can build models and test them at a fraction of the cost of making the real thing. When designing an airplane, one has to take public safety into account while still ensuring the design does what it is intended to do. With a wind tunnel you can design and test what you make before you build it.
With a wind tunnel you can also solve problems that already exist. One example of this occurred when the first jet-engine-propelled aircraft were produced in the 1940s. The problem arose when jet planes released missiles carried on the external part of the plane: the missiles had a tendency to move up when released, colliding with the plane and killing the pilot. With the wind tunnel, engineers were able to solve this problem without the loss of any more lives.
Wind tunnels had become so important that on February 1, 1956, the Army formed the Army Ballistic Missile Agency (ABMA) at Redstone Arsenal in Huntsville, Alabama, from Army missile program assets. The agency was created to support ongoing research and development projects for the Army ballistic missile program, and as part of this program a 14-inch wind tunnel was built to test the missiles.
Early tests were done to determine the aerodynamics of the Jupiter IRBM (Intermediate Range Ballistic Missile) and its nose cone. The Jupiter C missile was one of the first launch vehicles tested in the wind tunnel. The Jupiter C was a modified Redstone rocket made for nose cone re-entry testing. A modified Jupiter C, the Juno 1, launched America's first satellite, Explorer 1, into orbit. Soon after this the ABMA wind tunnel went to NASA. The wind tunnel played a vital role in the exploration of space, from the Saturn V, the rocket that put the first men on the moon (the Apollo missions), to the current Space Shuttle launch vehicle. The tunnel's mission changed from testing medium- and long-range missiles to supporting America's "Race Into Space". NASA increased the payload from the original 10 lb satellite (Explorer 1) to a man in a capsule (Project Mercury), and then to the Apollo Project. The Saturn family of launch vehicles spent hundreds of hours in the wind tunnel, and various configurations were tried to find the best result. At first a fully reusable shuttle was planned, but that idea cost too much and was ruled out because of the budget. With the budget in mind, the current space shuttle started to take form, but it still took many years in the wind tunnel before the final designs of the Orbiter, External Tank, and Solid Rocket Boosters took the shapes we know today. Even after the space shuttle took flight, it was still being tested to increase performance, and tests were done to determine the cause of tile damage. As the shuttle program continued to progress at a rapid pace, it came to a standstill when the Challenger accident occurred. After the accident the 14-inch wind tunnel was immediately put into use to analyze what had happened. These tests verified the SRB leak and the rupture of the aft external tank STA 2058 ring frame, and the data were used for trajectory and control reconstruction. With this information, engineers are trying to develop abort scenarios involving orbiter separation during transonic flight. All of these configurations were tested on a scale model that is 0.004 the size of the real shuttle.
These are just a few applications of the wind tunnel; there are many more things it can do. With the invention of the wind tunnel, the cost of designing and testing an aircraft has been reduced, and, most important, lives have been saved. Without the wind tunnel, there would be no way for us to know what will happen before it happens.

A Technical Analysis of Human Factors and Ergonomics in Modern Flight Deck Design

A Technical Analysis of Ergonomics and Human Factors in Modern Flight Deck Design

I. Introduction
Since the dawn of the aviation era, cockpit design has become increasingly complicated owing to the advent of new technologies enabling aircraft to fly farther, faster, and more efficiently than ever before. With greater workloads imposed on pilots as fleets modernize, the reality of pilots exceeding their workload limits has become manifest. Because of the unpredictable nature of man, this problem is impossible to eliminate completely. However, the instances of occurrence can be drastically reduced by examining the nature of man, how he operates in the cockpit, and what must be done by engineers to design a system in which man and machine are ideally interfaced. The latter point involves an in-depth analysis of system design with an emphasis on human factors, biomechanics, cockpit controls, and display systems. By analyzing these components of cockpit design, and determining which variables of each will yield the lowest errors, a system can be designed in which the Liveware-Hardware interface can promote safety and reduce mishap frequency.

II. The History Of Human Factors in Cockpit Design
The history of cockpit design can be traced as far back as the first balloon flights, where a barometer was used to measure altitude. The Wright brothers incorporated a string attached to the aircraft to indicate slips and skids (Hawkins, 241). However, the first real efforts towards human factors implementation in cockpit design began in the early 1930's. During this time, the United States Postal Service began flying aircraft in all-weather missions (Kane, 4:9). The greater reliance on instrumentation raised the question of where to put each display and control. However, not much attention was being focused on this area as engineers cared more about getting the instrument in the cockpit, than about how it would interface with the pilot (Sanders & McCormick, 739).
In the mid- to late 1930's, the development of the first gyroscopic instruments forced engineers to make their first major human factors-related decision. Rudimentary situation indicators raised concern about whether the displays should reflect the view as seen from inside the cockpit, having the horizon move behind a fixed miniature airplane, or as it would be seen from outside the aircraft. Until the end of World War II, aircraft were manufactured using both types of display. This caused confusion among pilots who were familiar with one type of display and were flying an aircraft with the other. Several safety violations were observed because of this, none of which were fatal (Fitts, 20-21).
Shortly after World War II, aircraft cockpits were standardized to the 'six-pack' configuration. This was a collection of the six critical flight instruments arranged in two rows of three directly in front of the pilot. Across the top row, from left to right, were the airspeed indicator, artificial horizon, and altimeter; across the bottom row were the turn coordinator, heading indicator, and vertical speed indicator. This arrangement of instruments provided easy transition training for pilots going from one aircraft to another. In addition, instrument scanning was enhanced, because the instruments were strategically placed so the pilot could reference each instrument against the artificial horizon in a hub and spoke method (Fitts, 26-30).
Since then, the bulk of human interfacing with cockpit development has been driven largely by technological achievements. The dramatic increase in the complexity of aircraft after the dawn of the jet age brought with it a greater need than ever for automation that exceeded a simple autopilot. Human factors studies in other industries and within the military paved the way for some of the most recent technological innovations such as the glass cockpit, the head-up display (HUD), and other advanced panel displays. Although these systems are on the cutting edge of technology, they too are susceptible to design problems, some of which are responsible for the incidents and accidents mentioned earlier. They will be discussed in further detail in another chapter (Hawkins, 249-54).

III. System Design
A design team should support the concept that the pilot's interface with the system, including task needs, decision needs, feedback requirements, and responsibilities, must be primary considerations for defining the system's functions and logic, as opposed to the system concept coming first and the user interface coming later, after the system's functionality is fully defined. There are numerous examples where human-centered design principles and processes could have been better applied to improve the design process and final product. Although manufacturers utilize human factors specialists to varying degrees, they are typically brought into the design effort in limited roles or late in the process, after the operational and functional requirements have been defined (Sanders & McCormick, 727-8). When joining the design process late, the ability of the human factors specialist to influence the final design and facilitate incorporation of human-centered design principles is severely compromised. Human factors should be considered on par with other disciplines involved in the design process.
The design process can be seen as a six-step process: determining the objectives and performance specifications, defining the system, basic system design, interface design, facilitator design, and testing and evaluation of the system. This model is theoretical, and few design systems actually meet its performance objectives. Each step directly involves input from human factors data and incorporates it in the design philosophy (Bailey, 192-5).
Determining the objectives and performance specifications includes defining a fundamental purpose of the system, and evaluating what the system must do to achieve that purpose. This also includes identifying the intended users of the system and what skills those operators will have. Fundamentally, this first step addresses a broad definition of what activity-based needs the system must address. The second step, definition of the system, determines the functions the system must do to achieve the performance specifications (unlike the broader purpose-based evaluation in the first step). Here, the human factors specialists will ensure that functions match the needs of the operator. During this step, functional flow diagrams can be drafted, but the design team must keep in mind that only general functions can be listed. More specific system characteristics are covered in step three, basic system design (Sanders & McCormick, 728-9).
The basic system design phase determines a number of variables, one of which is the allocation of functions to Liveware, Hardware, and Software. A sample allocation model considers five methods: mandatory, balance of value, utilitarian, affective and cognitive support, and dynamic. Mandatory allocation is the distribution of tasks based on limitations. There are some tasks which Liveware is incapable of handling, and likewise with Hardware. Other considerations with mandatory allocation are laws and environmental restraints. Balance of value allocation is the theory that each task is either incapable of being done by Liveware or Hardware, is better done by Liveware or Hardware, or can be done only by Liveware or Hardware. Utilitarian allocation is based on economic restraints. With the avionics package in many commercial jets costing as much as 15% of the overall aircraft price (Hawkins, 243), it would be very easy for design teams to allocate as many tasks to the operator as possible. This, in fact, was standard practice before the advent of automation as it exists today. The antithesis to that philosophy is to automate as many tasks as possible to relieve pressure on the pilot. Affective and cognitive support allocation recognizes the unique needs of the Liveware component and assigns tasks to Hardware to provide as much information and decision-making support as possible. It also takes into account limitations, such as emotions and stress, which can impede Liveware performance. Finally, dynamic allocation refers to an operator-controlled process where the pilot can determine which functions should be delegated to the machine, and which he or she should control at any time. Again, this allocation model is only theoretical, and often a design process will encompass all, or sometimes none, of these philosophies (Sanders & McCormick, 730-4).
Basic system design also delegates Liveware performance requirements, characteristics that the operator must possess for the system to meet design specifications (such as accuracy, speed, training, and proficiency). Once that is determined, an in-depth task description and analysis is created. This phase is essential to the human factors interface, because it analyzes the nature of the task and breaks it down into every step necessary to complete that task. The steps are further broken down to determine the following criteria: the stimulus required to initiate the step, the decision making which must be accomplished (if any), the actions required, the information needed, feedback, potential sources of error, and what needs to be done to accomplish successful step completion. Task analysis is the foremost method of defining the Liveware-Hardware interface, and it is imperative that a cockpit be designed using a process similar to this if it is to maintain effective communication between the operator and machine (Bailey, 202-6). It is widely accepted that the equipment determines the job. Based on that assumption, operator participation in this design phase can greatly enhance job enlargement and enrichment (Sanders & McCormick, 737; Hawkins, 143-4).
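One way to picture the task-description step is as a simple record carrying the criteria listed above for each step of a task; the sketch below is purely illustrative, and the field names and sample entries are my own, not terms from the sources cited here.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskStep:
    """One step in a task analysis, holding the criteria named in the text."""
    stimulus: str                              # what initiates the step
    decisions: str                             # decision making required, if any
    actions: str                               # actions the operator must take
    information_needed: str                    # information required to act
    feedback: str                              # how the operator knows the step succeeded
    error_sources: List[str] = field(default_factory=list)

example = TaskStep(
    stimulus="positive rate of climb established after takeoff",
    decisions="confirm the climb before retracting the gear",
    actions="move the landing gear lever to UP",
    information_needed="vertical speed and altimeter trend",
    feedback="gear position lights extinguish",
    error_sources=["confusing the gear lever with the flap lever"],
)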
Interface design, the fourth process in the design model, analyzes the interfaces between all components of the SHEL model, with an emphasis on the human factors role in gathering and interpreting data. During this stage, evaluations are made of suggested designs, human factors data is gathered (such as statistical data on body dimensions), and any gathered data is applied. Any application of data goes through a sub-process that determines the data's practical significance, its interface with the environment, the risks of implementation, and any give and take involved. The last item involved in this phase is conducting Liveware performance studies to determine the capabilities and limitations of that component in the suggested design. The fifth step in the design stage is facilitator design. Facilitators are basically Software designs that enhance the Liveware-Hardware interface, such as operating manuals, placards, and graphs. Finally, the last design step is to conduct testing of the proposed design and evaluate the human factors input and interfaces between all components involved. An application of this process to each system design will enhance the operator's ability to control the system within desired specifications. Some of the specific design characteristics can be found in subsequent chapters.

IV. Biomechanics
In December of 1981, a Piper Comanche aircraft temporarily lost directional control in gusty conditions that were within the performance specifications of the aircraft. The pilot later reported that with the control column full aft, he was unable to maintain adequate aileron control because his knees were interfering with proper control movement (NTSB database). Although this is a small incident, it should alert engineers to a potential problem area. Probably the most fundamental, and easiest to quantify, interface in the cockpit is that between the physical dimensions of the Liveware component and the Hardware designs which must accommodate them. The comfort of the workspace has long been known to alleviate or perpetuate fatigue over long periods of time (Hawkins, 282-3). These facts indicate a need to discuss the factors involved in workspace design.
When designing a cockpit, the engineer should determine the physical dimensions of the operator population. Given the variable dimensions of the human body, it is naturally impossible to design a system that will accommodate all users. An industry standard is to design for the central 95% of the population, discarding the top and bottom 2.5% of any measurement. From this, general design can be accomplished by incorporating the reach and strength limitations of smaller people and the clearance limitations of larger people. Three basic design philosophies must be adhered to when designing around physical dimensions: reach and clearance envelopes, user position with respect to the display area, and the position of the body (Bailey, 273).
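In practice these cut-offs are read from measured anthropometric tables, but the idea can be sketched by assuming a roughly normal distribution of, say, stature; the mean and standard deviation below are made-up illustrative values, not survey data.

from statistics import NormalDist

# Assumed (illustrative) stature distribution in centimeters; real designs use measured survey data.
stature = NormalDist(mu=175.0, sigma=7.0)
lower = stature.inv_cdf(0.025)   # about 161 cm: reach and strength limits sized around smaller users
upper = stature.inv_cdf(0.975)   # about 189 cm: clearances sized around larger users
print(round(lower, 1), round(upper, 1))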
Other differences must be taken into account when designing a system, such as ethnic and gender differences. It is known, for example, that women are, on average, 7% shorter than men (Pheasant, 44). If the 95 percent convention is used, the question arises: on which gender do we base it? One way to speak of the comparison is to discuss the F/M ratio, or the average female characteristic divided by the average male characteristic. Although this ratio doesn't take into account the possibility of overlap (i.e., the bottom 5th percentile of males are likely to be shorter than the top 5th percentile of females), that is not an issue in cockpit design (Pheasant, 44). The other variable, ethnicity, must also be evaluated in system design. Some Asian populations, for example, have a sitting height almost ten centimeters lower than Europeans (Pheasant, 50). This can raise a potential problem when designing an instrument panel or windshield.
Some design guides have been established to help the engineer with conceptual problems such as these, but for the most part, systems designers are limited to data gathered from human factors research (Tillman & Tillman, 80-7). As one story goes, during the final design phase of the Boeing 777, the chairman of United Airlines was invited to preview it. When he stood up at his first-class seat, his head collided with an overhead baggage rack. Boeing officials were apologetic, but the engineers were grinning inside. A few months later, the first 777 to enter service included overhead baggage racks that were mounted much higher and were less likely to be involved in a collision. Unlike this experience, designing clearances and reach envelopes for a cockpit is too expensive to be a trial-and-error venture.

V. Controls
In early 1974, the NTSB released a recommendation to the FAA regarding control inconsistencies:

"A-74-39. Amend 14 cfr 23 to include specifications for standardizing fuel selection valve handle designs, displays, and modes of operation" (NTSB database).

A series of accidents occurred during transition training of pilots moving to the Beechcraft Bonanza and Baron aircraft, in which flap and gear handles were mistakenly confused:

"As part of a recently completed special investigation, the safety board reviewed its files for every inadvertent landing gear retraction accident between 1975 and 1978. These accidents typically happened because the pilot was attempting to put the flaps control up after landing, and moved the landing gear control instead. This inadvertent movement of the landing gear control was often attributed to the pilot's being under stress or distracted, and being more accustomed to flying aircraft in which these two controls were in exactly opposite locations. Two popular light aircraft, the Beech Bonanza and Baron, were involved in the majority of these accidents. The bonanza constituted only about 30 percent of the active light single engine aircraft fleet retractable landing gear, but was involved in 16 of the 24 accidents suffered by this category of aircraft. Similarly, the baron constituted only 16 percent of the light twin fleet, yet suffered 21 of the 39 such accidents occurring to these aircraft" (NTSB database).

Like biomechanics, the design of controls is the study of physical relationships within the Liveware-Hardware interface. However, control design philosophy tends to be more subtle, and there is slightly more emphasis on psychological components. A designer determines what kind of control to use in a system only after the purpose of the system has been established and the operator's needs and limitations are known.
In general, controls serve one of four actions: activation, discrete setting, quantitative setting, and continuous control. Activation controls are those that toggle a system on or off, like a light switch. Discrete setting switches are variable position switches with three or more options, such as a fuel selector switch with three settings. Quantitative setting switches are usually knobs that control a system along a predefined quantitative dimension, such as a radio tuner or volume control. Continuous controls are controls that require constant equipment control, such as a steering wheel. A control is a system, and therefore follows the same guidelines for system design described above. In general, there are a few guidelines to control design that are unique to that system. Controls should be easily identified by color coding, labeling, size and shape coding and location (Bailey, 258-64).
When designing controls, some general principles apply. Normal requirements for control operation should not exceed the maximum limitations of the least capable operator. More important controls should be given placement priority. The neutral position of the controls should correspond with the operator's most comfortable position, and full control deflection should not require an extreme body position (such as locked legs or arms). Controls should be laid out in the most biomechanically efficient way. The number of controls should be kept to a minimum to reduce workload, or, when that is not possible, combining activation controls into discrete controls is preferable. When designing a system, it should be noted that foot control is stronger, but less accurate, than hand control. Continuous control operation should be distributed around the body instead of focused on one particular part, and such operations should be kept as short as possible (Damon, 291-2).
Detailed studies have been conducted on control design; among the concerns were the ability of an operator to distinguish one control from another, the size and spacing of controls, and population stereotypes. It was found that, even with vision available, easily discernible controls were mistaken for one another (Fitts, 898; Adams, 276). A study by Jenkins identified a set of control knobs that were not prone to such errors, or were less likely to yield them (Adams, 276-7). Some of these have been incorporated in aircraft designs as recent as the Boeing 777. Another study, conducted by Bradley in 1969, revealed that the size and spacing of knobs was directly related to inadvertent operation. He believed that if knobs were too large or small, or spaced too far apart or too close together, the operator was prone to a greater error yield. In the study, Bradley concluded that the optimum spacing between half-inch knobs would be one inch between their edges; this would yield the lowest rate of inadvertent knob operation (Fitts, 901-2; Adams, 278). Population stereotypes address the issue of how a control should be operated (should a light switch be moved up, to the left, to the right, or down to turn it on?). There are four advantages that follow a model of ideal control relationship: decreased reaction time, fewer errors, better speed of knob adjustment, and faster learning (Van Cott & Kinkdale, 349). These operational advantages become a great source of error to the operator who is unfamiliar with the aircraft and experiencing stress. During a time of high workload, one characteristic of the Liveware component is to revert to what was first learned (Adams, 279-80). In the case of the Bonanza and Baron pilots, this is what led to mistaking the gear and flap switches.

VI. Displays
In late 1986, the NTSB released the following recommendation to the FAA based on three accidents that had occurred within the preceding two years:

"A-86-105. Issue an Air Carrier Operations Bulletin-Part 135, directing Principal Operations Inspectors to ensure that commuter air carrier training programs specifically emphasize the differences existing in cockpit instrumentation and equipment in the fleet of their commuter operators and that these training programs cover the human engineering aspects of these differences and the human performance problems associated with these differences" (NTSB database).

The instrumentation in a cockpit environment provides the only source of feedback to the pilot in instrument flying conditions. Therefore, it is a very valuable design characteristic, and special attention must be paid to optimum engineering. There are two basic kinds of instruments that accomplish this task: symbolic and pictorial instruments. All instruments are coded representations of what can be found in the real world, but some are more abstract than others. Symbolic instrumentation is usually more abstract than pictorial (Adams, 195-6). When designing a cockpit, the first consideration involves the choice between these two types of instruments. This decision is based directly on the operational requirements of the system, and the purpose of the system. Once this has been determined, the next step is to decide what sort of data is going to be displayed by the system, and choose a specific instrument accordingly.
Symbolic instrumentation usually displays a combination of four types of information: quantitative, qualitative, comparison, and check reading (Adams, 197). Quantitative instruments display the numerical value of a variable, which is best done using counters or dials with a low degree of curvature. The preferable orientation of a straight dial would be horizontal, similar to the heading indicator found in glass cockpits. However, conflicting research has shown that no loss of accuracy could be noted with high-curvature dials (Murrell, 162). Another experiment showed that moving index displays with a fixed pointer are more accurate than a moving pointer on a fixed index (Adams, 200-1). Qualitative reading is the judgment of approximate values, trends, directions, or rates of variable change. This information is displayed when a high level of accuracy is not required for successful task completion (Adams, 197). A study conducted by Grether and Connell in 1948 suggested that vertical straight dials are superior to circular dials because an increase in needle deflection will always indicate a positive change. However, conflicting arguments came from studies conducted a few years later, which stated that no ambiguity will manifest with a circular dial provided no control inputs are made. It has also been suggested that moving pointers along a fixed background are superior to fixed pointers, but the few errors in reading a directional gyro seem to disagree with this supposition (Murrell, 163). Comparisons of two readings are best shown on circular dials with no markings, but if markings are necessary, they should not be closer than 10 degrees to each other (Murrell, 163). Check reading involves verifying whether a change has occurred from the desired value (Adams, 197). The most efficient instrumentation for this kind of task is any with a moving pointer. However, the studies concerning this type of informational display have only been conducted with a single instrument; it is not known if this is the most efficient instrument type when the operator is involved in a quick scan (Murrell, 163-4).
The pictorial instrument is most efficiently used in situation displays, such as the attitude indicator or air traffic control radar. In one experiment, pilots were allowed to use various kinds of situation instruments to tackle a navigational problem. Their performance was recorded, and the procedure was repeated using different pilots with only symbolic instruments. Interestingly, the pilots given the pictorial instrumentation made no navigation errors, whereas those given the symbolic displays made errors almost ten percent of the time (Adams, 208-209). Regardless of these results, it has long been known that the most efficient navigational methods are accomplished by combining the advantages of these two types of instruments.

VII. Summary
The preceding chapters illustrate design-side techniques that can be incorporated by engineers to reduce the occurrence of mishaps due to Liveware-Hardware interface problems. The system design model presented is ideal and theoretical; to practice it fully would cost corporations much more money than they would save compared with less thorough methods. However, today's society seems to be moving towards a global consensus to take safety more seriously, and perhaps in the future, total human factors optimization will become manifest. The discussion of biomechanics in chapter three was purposely broad, because it is such a wide and diverse field. The concepts touched upon indicate the areas of concern that a designer must address before creating a cockpit that is ergonomically friendly in the physical sense. Controls and displays hold a little more relevance, because they are the fundamental control and feedback devices involved in controlling the aircraft. These were discussed in greater detail because many of those concepts never reach the conscious mind of the operator. Although awareness of these factors is not critical to safe aircraft operation, they do play a vital role in the subconscious mind of the pilot during critical operational phases under high stress. Because of the unpredictable nature of man, it would be foolish to assume a zero-tolerance environment for potential errors like these, but further investigation into the design process, biomechanics, and control and display devices may yield greater insight as far as causal factors are concerned. Armed with this knowledge, engineers can set out to build aircraft not only to transport people and material, but also to save lives.