Special Aspects - History

Naval warfare included more than the contacts between rival fleets and their air components. It involved constant surveillance of enemy movements and bases, destruction of enemy shipping, and the rescue of downed airmen. Air-sea rescue, which kept personnel losses to a minimum, preserved that element of military power most difficult to replace and bolstered the morale of all fighting men. In all these activities aviation participated, and for their accomplishment it developed special techniques, a knowledge of which is necessary to an understanding of victory in the Pacific.

Naval Air Search and Reconnaissance

Pearl Harbor showed the need for air patrols. The Japanese fleet whose planes did such damage on the morning of 7 December 1941 was within range the evening before. Had enough Catalinas been out, the fleet might have been discovered; but the ability of United States forces to surprise the enemy on many occasions later in the conflict indicated that more than planes in the air were needed to conduct an adequate search. Above all it required special radar equipment and thorough training, which American forces did not possess in 1941. Admiral Hart in the Philippines commented on the vast amount of misinformation he received over the warning net. Before that ill-fated campaign in the East Indies had ended, the patrol-plane pilots and crews had learned their business the hard way. During the latter stages of the Japanese advance, the only information available to Allied commanders came from the Catalinas of Patrol Wing 10, operating from tenders whose almost daily moves enabled them to service their planes after landing fields had been knocked out. The lessons learned were applied elsewhere as fast as aircraft, equipment, and trained crews could be obtained.

Although naval search planes were not available for the Battle of the Coral Sea in May 1942, the following month at Midway a Catalina was the first to report the Japanese fleet. When the same type of flying boat was used in the Solomons, its limitations rapidly became apparent. The surprise and sinking of four Allied cruisers at Savo Island on the night after the landings on Guadalcanal might have been avoided if reconnaissance had been complete. In the weeks that followed, concentration of enemy fighters made impossible the use of Catalinas in the area north of Guadalcanal. Although Army Flying Fortresses were employed for patrols, lack of special equipment and training restricted their usefulness. Late in 1942 the Navy began receiving Liberators, which, after extensive modification and time for training the naval crews, appeared in the Solomons early the following year. This plane had both the range to reach the centers of enemy activity and the firepower needed to operate singly.

The possession of such a plane also made possible the development of photographic reconnaissance. Because the Japanese had for years excluded foreigners from military areas and especially from the mandated islands, Allied intelligence knew very little about the nature or extent of installations. In the spring of 1943 the first photographic squadron, accompanied by expert personnel for processing and interpretation, reached the South Pacific. From that time forward, extensive photographic reconnaissance was made in advance of every major operation. In addition to specially equipped units, every search plane carried a camera and was able to supplement visual sighting with photographic evidence. The camera and radar enormously increased the effectiveness of naval patrol aircraft.
Although the first function of patrol aviation was to sight and report, naval planes frequently discovered enemy merchant shipping alone or with only light escort. Since the aircraft carried machine guns, bombs, and, in the latter part of the war, rockets and guided missiles, they made successful attacks on cargo vessels and contributed to the effort that ultimately strangled Japanese industry. Antishipping operations also possessed direct military importance. In the South and Southwest Pacific areas the enemy frequently attempted to move troops and supplies at night in small vessels and barges, ducking in and out among the numerous islands and hiding in coves by day. In detecting these clandestine shipments, the slow speed of the Catalinas became an asset, and darkness provided adequate protection for their vulnerability. With special paint and equipped with radar, they became Black Cats, searching out enemy vessels and barges wherever they could be found. Not only did they strike themselves, but they also worked out techniques for guiding motor torpedo boats, destroyers, and other light vessels to Japanese convoys. The Black Cats made reconnaissance a 24-hour-a-day job.

In the Atlantic, patrol squadrons devoted their principal effort to antisubmarine warfare. Because the Japanese directed many of their underwater craft to supply garrisons on bypassed islands, antisubmarine activities were overshadowed by other phases of patrol aviation in the Pacific. All squadrons, however, were given instruction in the special techniques of this type of warfare, and although patrol planes were instrumental in sinking only five Japanese submarines, vigilance was never relaxed and a high degree of proficiency was maintained through training.

As the United States offensive moved across the Pacific, patrol aviation accompanied it. Search and photographic planes checked and rechecked enemy installations and movements. When the carrier forces moved against an objective, they desired to achieve surprise. If Japanese search aircraft encountered carrier planes, they could infer the presence of carriers and transmit the fact before being shot down. In the invasion of the Marianas and later operations, therefore, Navy Liberators flew along the flanks and in advance of the carrier force, shooting down enemy search planes. Prior to the landings in the Philippines they knocked off Japanese picket boats east of Formosa.

During the critical periods when amphibious forces were establishing a beachhead, naval commanders needed accurate knowledge of approaching enemy units. For this purpose tenders accompanied the invasion fleet and commenced operating seaplanes immediately. Although this remained a dangerous activity so long as the enemy had aircraft and fields in use, it was necessary, and by 1943 the Navy had available the Mariner (PBM), a faster, longer-ranged flying boat with more firepower than the Catalina. At Okinawa the Mariners conducted their first searches at the main objective even before the troops went ashore, and on 7 April 1945 they had an opportunity to demonstrate their value. A United States submarine the previous day had sighted a Japanese force built around the Yamato, the world's largest battleship, headed toward our invasion fleet. Search planes immediately took off and some hours later spotted the enemy and guided carrier planes into the attack, which resulted in the destruction of the Yamato, a light cruiser, and four destroyers.
The Mariners not only maintained continuous contact but landed on the open sea to pick up the personnel of carrier planes shot down during the action.

The last six months of the war saw the culmination of patrol aviation. New plane types became available in increasing numbers. To avoid the duplication of labor inherent in building a plane and then modifying it extensively, the Navy designed a version of the Liberator to meet its special requirements and gave it the nautical name Privateer (PB4Y-2). A two-engine land plane, the Ventura (PV-1), originally developed for antisubmarine work in the Atlantic, was also employed in the Pacific, and a new model named the Harpoon (PV-2) appeared in 1945. In preparation, but not ready in time for war operations, was the Neptune (P2V), one of which startled the world in 1946 by flying from Perth, Australia, to Columbus, Ohio, a distance of over 11,000 miles and the longest flight on record. What a plane with that range and ease of operation would have meant in 1941 may easily be imagined.

By the spring of 1945 the Navy operated searches that literally covered the Pacific from the Aleutians to Australia, from Seattle to Singapore. Especially important was the area between the Philippines and the mainland of Asia, through which vital supplies from the East Indies passed to Japan. To sever these lines of communication, patrol planes proved particularly useful, not only sinking ships themselves but guiding submarines to likely targets and even calling up Army bombers to dispose of one convoy too large for a single patrol plane to handle. This coordinated campaign reduced Japanese shipping to such a thin trickle that by summer the big planes were crossing to French Indo-China, where they went after the railroads which were the last link in enemy communications with the southern regions. Farther north, other naval aircraft, based on Okinawa and Iwo Jima, were conducting patrols along the coast of China as far as Korea and around the coasts of the Japanese home islands. They also attacked shipping with bombs, rockets, and guided missiles and laid mines in the principal shipping lanes. At the extreme top of the Japanese Empire, search planes from the Aleutians regularly visited the Kurile Islands. The effectiveness of this reconnaissance in terms of area covered can be judged by comparing the searches in effect at the end of the war with those at the time of the Guadalcanal landings. The effectiveness in terms of results achieved is indicated above. All of this was accomplished with the greatest economy. At no time did the Navy have in operation in the Pacific area more than 500 search planes of all types.

At the outset of the war, operating procedure for the rescue of pilots and air crews was undeveloped. On the other hand, a number of basic safety devices had been provided, permitting a pilot to survive the unexpected failure of his plane. The parachute, the inflatable life jacket popularly known as the "Mae West," and the rubber life raft with its emergency survival and signalling gear were standard equipment. During the war, safety gear was steadily improved, and the probabilities of survival were all in favor of the flyer, whether the trouble was simple engine failure or being shot down in flames. In the first half of 1942 many pilots survived crashes in combat areas, but frequently little or nothing could be done to effect their recovery. A number of rescues, however, were made, usually as the result of individual initiative, and after the Battle of Midway, Catalinas picked up many pilots. Organized rescue operations developed in the Solomons campaign. Catalinas, popularly known as "Dumbos," were dispatched to pick up personnel who had been shot down. At first this was an incidental duty assigned as the occasion arose, but it later developed to a point where a Dumbo circled near the scene of a raid. Positions were reported as planes went down, and the Dumbo, often protected by planes from the strike, recovered the personnel. The bravery of the rescue crews in landing in positions exposed to enemy shore fire became legendary. It was fortunate that no rescue personnel were lost in such operations.
By 1944, in the Central Pacific, the problem of making rescues in open-ocean areas first became acute. Since only the most skillful and experienced seaplane pilots could land and take off again in the enormous swells, the job required as much seamanship as airmanship, and it became standard practice to avoid open-sea landings unless conditions were favorable and there was no other rescue agent available. Ships, usually destroyers, made the recoveries wherever possible. Catalinas continued to be used extensively to search for survivors, to drop emergency gear, and to circle overhead until a rescue vessel arrived.
Aerial Mining

The offensive mine-laying campaign waged against Japan was little publicized, but the results were highly successful. At least 649,736 tons of shipping were sunk and another 1,377,780 tons damaged, of which 378,827 were still out of use at the end of the war. The total sunk and damaged represented one quarter of the prewar strength of the Japanese merchant marine. In addition, 9 destroyers, 4 submarines, and 36 auxiliary craft went down as the result of mine explosions; and 2 battleships, 2 escort carriers, 8 cruisers, 29 destroyers or destroyer escorts, a submarine, and 18 other combatant vessels were damaged. In the course of the war 25,000 mines were laid, 21,389, or 85 percent, by aircraft. From a total of 4,760 sorties, only 55 mine-laying planes failed to return.

Although surface vessels and submarines were also employed, airplanes proved particularly adapted to mine-laying. They could penetrate enemy harbors and repeat the operation without being endangered by mines previously sown. Much of the work could be carried on at night with relatively little loss of accuracy and with increased secrecy as to the exact location of the mines, which added to the Japanese difficulty in sweeping. All United States and Allied air services participated, using practically every type of bombing plane from the Avenger (TBF) to the Superfortress (B-29), and, of course, the ever-present Catalina. The mines themselves were developed, produced, supplied, and serviced largely by the United States Navy, with a few British types being employed in Burma and the Southwest Pacific. Naval mine-warfare officers collaborated in the planning and execution of all operations.

Although mining resulted in the destruction of large numbers of vessels, it had other important effects not so easily determinable. It forced the Japanese to close ports until they could be swept, thereby causing the loss of valuable ship time. Even with relatively few mines at a time, often-repeated attacks resulted in the abandonment of many harbors. To prevent the enemy from staging his fleet through certain anchorages, they were mined when important operations were in progress in adjacent areas. Shallow waters were mined to force shipping into the open sea where United States submarines could attack. In the last month of the war the mining campaign was extended to home waters to cut off the last Japanese connection with the mainland.

In the outer zone, mining was carried on steadily, with comparatively small numbers of mines being used against strategic objectives. The campaign was carried on by Royal Air Force, Australian, and United States Army aircraft operating from bases in the Southwest Pacific, China, and India. It prevented the Japanese from using such important ports as Rangoon to reinforce their troops in Burma and greatly curtailed their obtaining supplies of oil from such places as Surabaya and Balikpapan. In the South and Central Pacific, Navy planes used mines for tactical purposes to keep the Japanese Fleet from using certain harbors while amphibious operations were being conducted in nearby areas. Over half the naval mines expended during the war were laid by the Superfortresses of the Twentieth Air Force in and about the home islands, particularly in the straits of Shimonoseki and around the Inland Sea. This forced the Japanese to carry goods from the Asiatic mainland to ports in northern Honshu, from which adequate distribution by rail was impossible.
To complicate the enemy's problem, Navy Privateers from Okinawa mined the shores of the Yellow Sea as far as the southern coast of Korea. The movement of ships of over 1,000 tons was stopped altogether. Careful minelaying prevented the use of all but three of Japan's merchant-marine shipyards, thus preventing the repair of vessels already damaged. Cut off from the East Indies by air and submarine action, the enemy saw his last link with the Asiatic mainland severed by aerial mines. American and Allied services working in close collaboration completed the strangle-hold on Japanese industry.

Air Support of Amphibious Operations

The primary missions of air support were local defense and direct support of troops ashore. Defense included combat air patrols to ward off enemy air raids, antisubmarine patrols flown constantly around the approaches to the objective area, and special missions such as the silencing of heavy coastal batteries. Direct troop support consisted principally of attacks with bombs, rockets, machine guns, and incendiaries on enemy troops and defenses. In order to be effective, both defensive and offensive air operations required a high degree of coordination and control. This was practically impossible to secure through the normal task-group communication channels, because in a major amphibious operation as many as thirty different carrier air groups and land-based Marine air units might be jointly engaged in operations. The task-force and task-group organization involved too many echelons of command to permit prompt action on requests for air support.

The need for the development of air-support doctrine was apparent in the landings on Guadalcanal and Tulagi in August 1942. Three carriers supported this operation, and their air groups reported to a support air director in the flagship of the amphibious commander and prior to the landings carried out missions assigned by him. Although the Navy had foreseen the need for liaison parties ashore with the troops and had occasionally employed them in peacetime maneuvers, on Guadalcanal inadequate communications and lack of experience handicapped the direction of support missions after the Marines had landed. The air defense for this operation also left much to be desired. The plan called for a combat air patrol of fighters directed by a shipboard fighter-director officer. His function was to receive information from ships' radars of enemy air raids and the position of friendly fighters, to relay this information to the patrolling fighters, and to direct them to a point where they could make visual contact with enemy planes. As the radar of the cruiser on which he was embarked failed to detect the raids, the fighter director was unable to carry out his mission. After the first two days the carriers were obliged to withdraw, leaving the amphibious force and the troops ashore entirely without local air support until a captured airfield on Guadalcanal could be completed and supplied with land-based aircraft. The tragic history of the weeks that followed, during which planes available for defense and for troop support were pitifully few, clearly demonstrated the importance of maintaining a continuous supply of carrier-based air power during the critical period between the initial assault and the eventual establishment of land-based aircraft ashore. It was late in August 1942 before land-based support operations actually got under way.
Use was made of radio for communicating requests from troops to supporting planes, and from this experience came a realization of the tremendously increased effectiveness gained from having liaison officers who worked constantly with the troops and knew the special problems involved. As a result, the Navy organized a number of air liaison parties which, unlike the officers who went ashore on 7 August, were especially trained to accompany front-line troops and to relay their requests to the controlling command. Such parties were successfully used at Kiska, in the Gilberts, and in subsequent operations. Eventually, their functions were taken over by units within the Marine and Army ground organization.

In the assault on Tarawa on 20 November 1943, there appeared for the first time the overwhelming concentration of air power that characterized all landing operations in the Central Pacific. A total of 17 aircraft carriers with a complement of 900 planes participated. Eight were the new, comparatively slow escort carriers assigned exclusively to tactical air support, a mission for which they were well fitted and which permitted the release of the fast carriers for use against enemy air bases and other distant targets. As escort carriers became available in increasing numbers, it was possible to expand enormously the volume of air support. During the Gilberts campaign use was also made of a specialized troop-support control unit afloat, equipped both to receive and filter the requests for help and to assign offensive support missions to the aircraft overhead. In each succeeding operation air-support control units grew in size, number, and complexity, eventually assuming complete control of every air-borne plane in the objective area. These units functioned first on battleships and later on command ships. The latter were converted transports with the necessary concentration of radar and radio-communications equipment. These ships were used as joint headquarters by the amphibious, shore, and air commanders.

Fighter direction, the control of defensive air support, was conducted in the Gilberts from designated ships in the landing fleet, but there was little coordination between such ships. After the experience of this operation, control of all amphibious fighter-director teams was centralized in the existing air-support control organization, so that all support aircraft, both offensive and defensive, received direction and coordination from a single command. The two activities were thereafter physically located in adjacent control rooms on a command ship, which was in constant communication with subordinate control units or teams whether on other command ships, picket destroyers, or ashore.

In January 1944 the amphibious forces of the Central Pacific invaded Kwajalein. The pattern of tactical air support in Pacific amphibious operations emerged clearly. Although later operations brought increasing complexity and refinement in technique, no important departures from this pattern were made. In the Marianas assault of June 1944, air-support control employed three command ships with additional standbys available. The development of standardized techniques made it possible to pass control of the air-support operations without interruption from one ship to another.
Similarly, as land-based aircraft became established ashore, it was found feasible to transfer elements most closely integrated with troop movements to a control center on the beachhead while retaining afloat fighter direction, antisubmarine patrol, and air-sea rescue. Another new technique developed in the Marianas was the coordination of shore-based artillery, naval gunfire, and air support. By placing the separate controllers on the same ship it was possible to select the most effective type of weapon (air, naval, or artillery) for each request from the ground troops.

In September 1944 came simultaneous landings at Morotai and the Palaus. Escort carriers provided the direct support for both. While the Morotai landing was virtually unopposed, fanatical resistance from underground positions and caves was encountered at Bloody Nose Ridge on Peleliu. In hand-to-hand fighting, precision attacks by support aircraft were provided as close as 100 yards from front-line positions, a feat that would have been impossible without the rigid air discipline and concentrated control system developed in earlier operations.

In the campaign for the recapture of the Philippines, Army, Navy, and Marine aircraft participated together in tactical air support. Landings in the Leyte-Samar area were made on 20 October 1944 by forces under the command of General of the Army MacArthur. Although after softening-up by air and ship bombardment the landings were successfully made without too much ground opposition, Japanese sea and air resistance developed on an all-out scale. In the ensuing Battle for Leyte Gulf, the Air Support Commander carried his control to the point of diverting aircraft from troop-support missions to strikes against enemy surface forces. This was an outstanding example not only of the versatility of carrier aircraft but also of the flexibility of air power made possible by the type of air-support organization developed and perfected in the Pacific war. In the Lingayen Gulf landing in January and the assault on Iwo Jima in February, air support followed the established pattern. The increasing use of Kamikaze attacks by the Japanese, however, emphasized the defense function of the air-support control units.

The largest amphibious operation of the Pacific war, the assault and occupation of Okinawa, saw air support at its highest level. From 20 to 31 carriers provided tactical air support for 1,213 ships and 451,866 combat and service troops. As landing fields on Okinawa were captured and activated, a total of over 400 shore-based Marine and Army planes were added progressively to the carrier-based aircraft. The statistics are impressive and indicative of the scope of the support function of aircraft. During 88 days, 1,904 direct-support missions were flown, involving a total of 17,361 individual offensive sorties. An average of 560 planes was in the air each day on all types of missions, including defensive patrols. These aircraft expended 7,141 tons of bombs, 49,641 5-inch rockets, 1,573 wing tanks containing 260,000 gallons of napalm (the blazing gasoline jelly), and 9,300,000 rounds of 50-caliber ammunition. Okinawa provided a crucial test for amphibious fighter direction. As in the Philippines, the intensity of Japanese opposition increased the importance of air defense.
With an area of approximately 7,850 square miles to cover, and with the majority of the enemy air strength based only 350 miles away in Kyushu to the north and in Formosa to the southwest, the magnitude of the centralized air-defense responsibility is apparent. During the first 54 days, 18,675 fighter-plane sorties were flown for the protection of the amphibious force alone, while in addition the fast and support carriers provided their own combat air patrol. In the 82 days during which the amphibious forces' air-support control unit was responsible for the defense of the objective area, the Japanese dispatched 896 air raids involving more than 3,089 planes. Of these, the centrally controlled combat air patrol over the objective area shot down 1,067 planes, including 50 shot down by night fighters. Antiaircraft fire and suicide dives destroyed at least 948 more, making a total of 2,015 Japanese planes. These figures do not include Japanese planes shot down by the combat air patrols over the carriers and by the antiaircraft guns of the carrier forces, which were not under air-support control.

Enemy air tactics had been foreseen, and 15 radar picket stations, located from 20 to 95 miles from the center of the area, had been established to cover paths of approach. Each station was manned by a radar-equipped destroyer or smaller vessel with a fighter-director team aboard. These teams were linked with the central air-defense control organization. They directed fighter patrols assigned to their sectors and passed control and information to other units as the raiders left their area. The picket line was so effective in intercepting enemy raids that the Japanese switched tactics and began to concentrate on the picket vessels, which heretofore had been neglected for larger and more profitable targets. Despite the pounding these picket stations received, which resulted in 7 destroyers sunk, 18 seriously damaged, and 6 damaged slightly, fighter-director ships were still on station when responsibility for air defense was transferred ashore to the Air Defense Commander 82 days after the original landings.

Air-support control as it functioned in the Okinawa campaign had grown to include more than aircraft. It provided for the integration of all available weapons -- land, sea, and air. For limited forces operating far from bases, economy in the use of weapons became mandatory. The control system provided for defense with a minimum of fighter planes, releasing others for support missions. It made possible the use of aircraft only against targets susceptible to air attack and saw that naval gunfire or field artillery was used where more efficient. Such an economical use of power grew from the Navy's concept of organization, which treated all elements of the naval forces as integral parts of the whole complex required for control of the sea. Each should be used in the manner best suited to its inherent characteristics, and all should be formed into a unified operating machine through the task-force system. The air-support control units were themselves a specialized adaptation of the task-force pattern for the accomplishment of a well-defined mission. Although the surrender of Japan made unnecessary the final amphibious assault on the enemy homeland, the Okinawa operation demonstrated the ability of the United States to transport its forces over vast sea distances and to land them on a hostile shore. The possession of this technique altered the world's strategic picture.


The origins of Jewish faith are explained throughout the Torah. According to the text, God first revealed himself to a Hebrew man named Abraham, who became known as the founder of Judaism.

Jews believe that God made a special covenant with Abraham and that he and his descendants were chosen people who would create a great nation.

Abraham’s son Isaac, and his grandson Jacob, also became central figures in ancient Jewish history. Jacob took the name Israel, and his children and future generations became known as Israelites.

More than 1,000 years after Abraham, the prophet Moses led the Israelites out of Egypt after they had been enslaved there for hundreds of years.

According to scriptures, God revealed his laws, known as the Ten Commandments, to Moses at Mt. Sinai.


Special Aspects - History


It was truly shocking. I came here, in a sense I recognize now that I was very—I came from a very middle class background. My Ph.D. supervisor, sort of, his father was the Lord? (Dr. Lane, please confirm or clarify), so that was the atmosphere, very British, very kind of middle class, very ordinary, my family wasn’t from science at all. I was the first person in my family ever to go into university and into science, so that’s what I knew. I came here and I was just amazed, there were people from all over the world. There was this incredibly intense atmosphere. You lived on the campus. You walked across the grass [and] you were in the lab. It was just wild! James Lab, where I was, was the wildest of the wild. That was the atmosphere, it was completely, we were the best, we were the hardest, we were the toughest here, the rest were just not there. That was the atmosphere that Joe and Mike loved, [Bob] Tjian, and all those people. A huge amount was happening; it was a very exciting time. It was crazy.

I remember the first day I came, we arrived at night, we got a taxi, we’d been typically naïve and just got into a taxi at JFK [John F. Kennedy International Airport], and the guy got lost and eventually we arrived here, and with two big suitcases. We stayed actually in the farmhouse because Rich Roberts was on sabbatical leave so we got his apartment. Then we woke up the next morning, looked out over the Sound [Long Island Sound], I lived in London all my whole life, I couldn’t believe it! I walked in the lab and in the main postdoc [area] there used to be a shared office for all the post docs in James. There was a bucket of water and the post docs were lining up to stick their head in the bucket of water and time how long they could keep their head under the water, right? It was just a competition to see how tough they were. I just couldn’t, I’d never seen anything like this in my life, I was in shock!

So there was just a tremendous atmosphere of fun, and people. It was exciting, just exciting. It went on like that the whole time; you’d see people really arguing a lot in the corridor and [in] very intense, intellectual debate. And then the science—you were doing very exciting things and very hands on. So you’d go in there, there was a big development tank next to the coffeepot. You walked through the library to this development tank. You’d be walking there and somebody’d be reading a journal and you’d be with this dripping autorad?? [autoradiograph] with your latest result, you’d walk past them and people would talk. It was a very great atmosphere. Wonderful people and just great fun. I remember going to a party with Walter Schaffner, who had just found enhancers, nobody believed this, this was a crazy result. He put this piece of DNA anywhere and it seemed to make transcription stronger and it didn’t matter whether it was one way around or the other. Nobody had ever seen anything like this before. It was crazy. He was crazy. He came to this party dressed as Dracula. He drank a plate of HeLa cells as his blood, human sacrifice. It was enormous fun. Strange working hours, we used to come into work at about ten in the morning, have coffee and donuts and then talk, and then lunch and then we’d start work. But we’d work until maybe two or three at night, and then we’d go to Huntington to the bar, and then come back, go to bed [and] start again. The technicians used to come in. It actually worked, it was almost continuous science because the technical people would come in earlier in the day and they’d set up some cell cultures or something, science was going on twenty-four hours. I mean people were in the lab, you could go into the lab at any time and there would be somebody there working. If you went to have dinner with somebody else on the campus, people would get up between the main course and the dessert to go turn on or off their gel or do something, they’d come back again. So that was great fun.



Metaphysical study is conducted using deduction from that which is known a priori. Like foundational mathematics (which is sometimes considered a special case of metaphysics applied to the existence of number), it tries to give a coherent account of the structure of the world, capable of explaining our everyday and scientific perception of the world, and being free from contradictions. In mathematics, there are many different ways to define numbers; similarly, in metaphysics, there are many different ways to define objects, properties, concepts, and other entities that are claimed to make up the world. While metaphysics may, as a special case, study the entities postulated by fundamental science such as atoms and superstrings, its core topic is the set of categories such as object, property and causality which those scientific theories assume. For example: claiming that "electrons have charge" is a scientific theory while exploring what it means for electrons to be (or at least, to be perceived as) "objects", charge to be a "property", and for both to exist in a topological entity called "space" is the task of metaphysics. [5]

There are two broad stances about what is "the world" studied by metaphysics. According to metaphysical realism, the objects studied by metaphysics exist independently of any observer, so that the subject is the most fundamental of all sciences. [6] Metaphysical anti-realism, on the other hand, assumes that the objects studied by metaphysics exist inside the mind of an observer, so the subject becomes a form of introspection and conceptual analysis. [6] This position is of more recent origin. Some philosophers, notably Kant, discuss both of these "worlds" and what can be inferred about each one. Some, such as the logical positivists, and many scientists, reject metaphysical realism as meaningless and unverifiable. Others reply that this criticism also applies to any type of knowledge, including hard science, which claims to describe anything other than the contents of human perception, and thus that the world of perception is the objective world in some sense. Metaphysics itself usually assumes that some stance has been taken on these questions and that it may proceed independently of the choice—the question of which stance to take belongs instead to another branch of philosophy, epistemology.

Ontology (being)

Ontology is the branch of philosophy that studies concepts such as existence, being, becoming, and reality. It includes the questions of how entities are grouped into basic categories and which of these entities exist on the most fundamental level. Ontology is sometimes referred to as the science of being. It has been characterized as general metaphysics in contrast to special metaphysics, which is concerned with more particular aspects of being. [7] Ontologists often try to determine what the categories or highest kinds are and how they form a system of categories that provides an encompassing classification of all entities. Commonly proposed categories include substances, properties, relations, states of affairs and events. These categories are characterized by fundamental ontological concepts, like particularity and universality, abstractness and concreteness or possibility and necessity. Of special interest is the concept of ontological dependence, which determines whether the entities of a category exist on the most fundamental level. Disagreements within ontology are often about whether entities belonging to a certain category exist and, if so, how they are related to other entities. [8] [9] [10] [11]

Identity and change

Identity is a fundamental metaphysical concern. Metaphysicians investigating identity are tasked with the question of what, exactly, it means for something to be identical to itself, or – more controversially – to something else. Issues of identity arise in the context of time: what does it mean for something to be itself across two moments in time? How do we account for this? Another question of identity arises when we ask what our criteria ought to be for determining identity, and how the reality of identity interfaces with linguistic expressions.

The metaphysical positions one takes on identity have far-reaching implications on issues such as the mind-body problem, personal identity, ethics, and law.

A few ancient Greeks took extreme positions on the nature of change. Parmenides denied change altogether, while Heraclitus argued that change was ubiquitous: "No man ever steps in the same river twice."

Identity, sometimes called numerical identity, is the relation that a thing bears to itself, and which no thing bears to anything other than itself (cf. sameness).

A modern philosopher who made a lasting impact on the philosophy of identity was Leibniz, whose Law of the Indiscernibility of Identicals is still widely accepted today. It states that if some object x is identical to some object y, then any property that x has, y will have as well.

∀x ∀y (x = y → ∀P (P(x) ↔ P(y)))
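Read as a schema over properties, the law can be checked formally. Below is a minimal sketch in Lean 4 (an illustrative formalization of our own, not a standard library lemma); the rewrite step is the formal counterpart of substituting y for x:

  -- Indiscernibility of identicals: if x = y, then every property P
  -- holds of x exactly when it holds of y.
  theorem indiscernibility {α : Type} (x y : α) (h : x = y) :
      ∀ P : α → Prop, P x ↔ P y := by
    intro P
    rw [h]   -- rewriting x to y leaves the trivial goal P y ↔ P y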

However, it does seem that objects can change over time. If one were to look at a tree one day, and the tree later lost a leaf, it would seem that one could still be looking at that same tree. Two rival theories to account for the relationship between change and identity are perdurantism, which treats the tree as a series of tree-stages, and endurantism, which maintains that the organism—the same tree—is present at every stage in its history.

By appealing to intrinsic and extrinsic properties, endurantism finds a way to harmonize identity with change. Endurantists believe that objects persist by being strictly numerically identical over time. [12] However, if Leibniz's Law of the Indiscernibility of Identicals is utilized to define numerical identity here, it seems that objects must be completely unchanged in order to persist. Discriminating between intrinsic properties and extrinsic properties, endurantists state that numerical identity means that, if some object x is identical to some object y, then any intrinsic property that x has, y will have as well. Thus, if an object persists, intrinsic properties of it are unchanged, but extrinsic properties can change over time. Besides the object itself, environments and other objects can change over time; properties that relate to other objects would change even if the object itself does not.
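The endurantist restriction can be stated compactly. A sketch in Lean 4, assuming an unanalyzed classifier Intrinsic that marks which properties count as intrinsic; deciding what falls under that classifier, not the definition below, is where the philosophical work lies:

  -- Endurantist identity over time: agreement on intrinsic properties only.
  -- `Intrinsic` is an assumed, unanalyzed predicate on properties.
  variable {α : Type} (Intrinsic : (α → Prop) → Prop)

  -- Extrinsic properties (distance to other objects, etc.) are free to differ.
  def sameEnduringObject (x y : α) : Prop :=
    ∀ P : α → Prop, Intrinsic P → (P x ↔ P y)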

Perdurantism can harmonize identity with change in another way. In four-dimensionalism, a version of perdurantism, what persists is a four-dimensional object which does not change although three-dimensional slices of the object may differ.

Space and time

Objects appear to us in space and time, while abstract entities such as classes, properties, and relations do not. How do space and time serve this function as a ground for objects? Are space and time entities themselves, of some form? Must they exist prior to objects? How exactly can they be defined? How is time related to change? Must there always be something changing in order for time to exist?

Causality

Classical philosophy recognized a number of causes, including teleological future causes. In special relativity and quantum field theory the notions of space, time, and causality become tangled together, with the temporal order of causation becoming dependent on who observes it. The laws of physics are symmetrical in time, so they could equally well be used to describe time as running backwards. Why then do we perceive it as flowing in one direction, the arrow of time, and as containing causation flowing in the same direction?

For that matter, can an effect precede its cause? This was the title of a 1954 paper by Michael Dummett, [13] which sparked a discussion that continues today. [14] Earlier, in 1947, C. S. Lewis had argued that one can meaningfully pray concerning the outcome of, e.g., a medical test while recognizing that the outcome is determined by past events: "My free act contributes to the cosmic shape." [15] Likewise, some interpretations of quantum mechanics, dating to 1945, involve backward-in-time causal influences. [16]

Causality is linked by many philosophers to the concept of counterfactuals. To say that A caused B means that if A had not happened then B would not have happened. This view was advanced by David Lewis in his 1973 paper "Causation". [17] His subsequent papers [18] further develop his theory of causation.
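As a sketch of how this analysis can be made precise, here is a Lean 4 fragment using a Stalnaker-style selection function, a deliberate simplification of Lewis's similarity ordering over worlds; the names and the function closest are illustrative assumptions, not anyone's official formulation:

  -- `closest A w` is assumed to return the world most similar to w
  -- in which proposition A holds (Stalnaker-style selection function).
  variable {World : Type} (closest : (World → Prop) → World → World)

  -- Counterfactual dependence of B on A at world w:
  -- "had A not happened, B would not have happened."
  def counterfactualDependence (A B : World → Prop) (w : World) : Prop :=
    ¬ B (closest (fun v => ¬ A v) w)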

Causality is usually required as a foundation for the philosophy of science, if science is to understand causes and effects and make predictions about them.

Necessity and possibility

Metaphysicians investigate questions about the ways the world could have been. David Lewis, in On the Plurality of Worlds, endorsed a view called concrete modal realism, according to which facts about how things could have been are made true by other concrete worlds in which things are different. Other philosophers, including Gottfried Leibniz, have dealt with the idea of possible worlds as well. A necessary fact is true across all possible worlds. A possible fact is true in some possible world, even if not in the actual world. For example, it is possible that cats could have had two tails, or that any particular apple could have not existed. By contrast, certain propositions seem necessarily true, such as analytic propositions, e.g., "All bachelors are unmarried." The view that any analytic truth is necessary is not universally held among philosophers. A less controversial view is that self-identity is necessary, as it seems fundamentally incoherent to claim that any x is not identical to itself; this is known as the law of identity, a putative "first principle". Similarly, Aristotle describes the principle of non-contradiction:

It is impossible that the same quality should both belong and not belong to the same thing ... This is the most certain of all principles ... Wherefore they who demonstrate refer to this as an ultimate opinion. For it is by nature the source of all the other axioms.
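Both notions admit compact formal statements. A minimal Lean 4 sketch, with illustrative names: a Kripke-style accessibility relation for the possible-worlds talk above, plus the principle of non-contradiction as a one-line theorem:

  -- Possible-worlds readings of necessity and possibility:
  -- acc w v means world v is accessible (a "way things could be") from w.
  variable {World : Type} (acc : World → World → Prop)

  def Nec (p : World → Prop) (w : World) : Prop := ∀ v, acc w v → p v
  def Poss (p : World → Prop) (w : World) : Prop := ∃ v, acc w v ∧ p v

  -- Aristotle's principle of non-contradiction: from P and ¬P, absurdity.
  theorem nonContradiction (P : Prop) : ¬ (P ∧ ¬ P) :=
    fun h => h.2 h.1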

Metaphysical cosmology and cosmogony

Metaphysical cosmology is the branch of metaphysics that deals with the world as the totality of all phenomena in space and time. Historically, it formed a major part of the subject alongside Ontology, though its role is more peripheral in contemporary philosophy. It has had a broad scope, and in many cases was founded in religion. The ancient Greeks drew no distinction between this use and their model for the cosmos. However, in modern times it addresses questions about the Universe which are beyond the scope of the physical sciences. It is distinguished from religious cosmology in that it approaches these questions using philosophical methods (e.g. dialectics).

Cosmogony deals specifically with the origin of the universe. Modern metaphysical cosmology and cosmogony try to address questions such as:

  • What is the origin of the Universe? What is its first cause? Is its existence necessary? (see monism, pantheism, emanationism and creationism)
  • What are the ultimate material components of the Universe? (see mechanism, dynamism, hylomorphism, atomism)
  • What is the ultimate reason for the existence of the Universe? Does the cosmos have a purpose? (see teleology)

Mind and matter

Accounting for the existence of mind in a world largely composed of matter is a metaphysical problem which is so large and important as to have become a specialized subject of study in its own right, philosophy of mind.

Substance dualism is a classical theory in which mind and body are essentially different, with the mind having some of the attributes traditionally assigned to the soul, and which creates an immediate conceptual puzzle about how the two interact. This form of substance dualism differs from the dualism of some eastern philosophical traditions (like Nyāya), which also posit a soul; for the soul, under their view, is ontologically distinct from the mind. [19] Idealism postulates that material objects do not exist unless perceived and only as perceptions. Adherents of panpsychism, a kind of property dualism, hold that everything has a mental aspect, but not that everything exists in a mind. Neutral monism postulates that existence consists of a single substance that in itself is neither mental nor physical, but is capable of mental and physical aspects or attributes – thus it implies a dual-aspect theory. For the last century, the dominant theories have been science-inspired, including materialistic monism, type identity theory, token identity theory, functionalism, reductive physicalism, nonreductive physicalism, eliminative materialism, anomalous monism, property dualism, epiphenomenalism and emergence.

Determinism and free will

Determinism is the philosophical proposition that every event, including human cognition, decision and action, is causally determined by an unbroken chain of prior occurrences. It holds that nothing happens that has not already been determined. The principal consequence of the deterministic claim is that it poses a challenge to the existence of free will.

The problem of free will is the problem of whether rational agents exercise control over their own actions and decisions. Addressing this problem requires understanding the relation between freedom and causation, and determining whether the laws of nature are causally deterministic. Some philosophers, known as incompatibilists, view determinism and free will as mutually exclusive. If they believe in determinism, they will therefore believe free will to be an illusion, a position known as Hard Determinism. Proponents range from Baruch Spinoza to Ted Honderich. Henri Bergson defended free will in his dissertation Time and Free Will from 1889.

Others, labeled compatibilists (or "soft determinists"), believe that the two ideas can be reconciled coherently. Adherents of this view include Thomas Hobbes and many modern philosophers such as John Martin Fischer, Gary Watson, Harry Frankfurt, and the like.

Incompatibilists who accept free will but reject determinism are called libertarians, a term not to be confused with the political sense. Robert Kane and Alvin Plantinga are modern defenders of this theory.

Natural and social kinds

The earliest type of classification of social construction traces back to Plato in his dialogue Phaedrus, where he claims that the biological classification system seems to carve nature at the joints. [20] In contrast, later philosophers such as Michel Foucault and Jorge Luis Borges have challenged the capacity of natural and social classification. In his essay The Analytical Language of John Wilkins, Borges makes us imagine a certain encyclopedia where the animals are divided into (a) those that belong to the emperor, (b) embalmed ones, (c) those that are trained ... and so forth, in order to bring forward the ambiguity of natural and social kinds. [21] According to metaphysics author Alyssa Ney: "the reason all this is interesting is that there seems to be a metaphysical difference between the Borgesian system and Plato's". [22] The difference is not obvious, but one classification attempts to carve entities up according to objective distinctions while the other does not. According to Quine, this notion is closely related to the notion of similarity. [23]

Number

There are different ways to set up the notion of number in metaphysical theories. Platonist theories postulate number as a fundamental category itself. Others consider it to be a property of an entity called a "group" comprising other entities, or to be a relation held between several groups of entities, such as "the number four is the set of all sets of four things". Many of the debates around universals are applied to the study of number, and are of particular importance due to its status as a foundation for the philosophy of mathematics and for mathematics itself.
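The group-property view can be made concrete. A Lean 4 sketch in a Frege-Russell spirit, with illustrative names, treating "having four members" as a property of a collection rather than of any individual in it (collections are modeled here as predicates):

  -- "s has exactly four members": a property of the group s as a whole.
  def HasFourMembers {α : Type} (s : α → Prop) : Prop :=
    ∃ a b c d, s a ∧ s b ∧ s c ∧ s d ∧
      a ≠ b ∧ a ≠ c ∧ a ≠ d ∧ b ≠ c ∧ b ≠ d ∧ c ≠ d ∧
      ∀ x, s x → (x = a ∨ x = b ∨ x = c ∨ x = d)

On this reading, "four" is whatever all collections satisfying HasFourMembers have in common, which is one way of cashing out "the set of all sets of four things".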

Applied metaphysics

Although metaphysics as a philosophical enterprise is highly hypothetical, it also has practical application in most other branches of philosophy, science, and now also information technology. Such areas generally assume some basic ontology (such as a system of objects, properties, classes, and space-time) as well as other metaphysical stances on topics such as causality and agency, then build their own particular theories upon these.

In science, for example, some theories are based on the ontological assumption of objects with properties (such as electrons having charge) while others may reject objects completely (such as quantum field theories, where spread-out "electronness" becomes a property of space-time rather than an object).

"Social" branches of philosophy such as philosophy of morality, aesthetics and philosophy of religion (which in turn give rise to practical subjects such as ethics, politics, law, and art) all require metaphysical foundations, which may be considered as branches or applications of metaphysics. For example, they may postulate the existence of basic entities such as value, beauty, and God. Then they use these postulates to make their own arguments about consequences resulting from them. When philosophers in these subjects make their foundations they are doing applied metaphysics, and may draw upon its core topics and methods to guide them, including ontology and other core and peripheral topics. As in science, the foundations chosen will in turn depend on the underlying ontology used, so philosophers in these subjects may have to dig right down to the ontological layer of metaphysics to find what is possible for their theories. For example, a contradiction obtained in a theory of God or Beauty might be due to an assumption that it is an object rather than some other kind of ontological entity.

Science

Prior to the modern history of science, scientific questions were addressed as a part of natural philosophy. Originally, the term "science" (Latin: scientia) simply meant "knowledge". The scientific method, however, transformed natural philosophy into an empirical activity deriving from experiment, unlike the rest of philosophy. By the end of the 18th century, it had begun to be called "science" to distinguish it from other branches of philosophy. Science and philosophy have been considered separate disciplines ever since. Thereafter, metaphysics denoted philosophical enquiry of a non-empirical character into the nature of existence. [24]

Metaphysics continues asking "why" where science leaves off. For example, any theory of fundamental physics is based on some set of axioms, which may postulate the existence of entities such as atoms, particles, forces, charges, mass, or fields. Stating such postulates is considered to be the "end" of a science theory. Metaphysics takes these postulates and explores what they mean as human concepts. For example, do all theories of physics require the existence of space and time, [25] objects, and properties? Or can they be expressed using only objects, or only properties? Do the objects have to retain their identity over time or can they change? [26] If they change, then are they still the same object? Can theories be reformulated by converting properties or predicates (such as "red") into entities (such as redness or redness fields) or processes ('there is some redding happening over there' appears in some human languages in place of the use of properties)? Is the distinction between objects and properties fundamental to the physical world or to our perception of it?

Much recent work has been devoted to analyzing the role of metaphysics in scientific theorizing. Alexandre Koyré led this movement, declaring in his book Metaphysics and Measurement, "It is not by following experiment, but by outstripping experiment, that the scientific mind makes progress." [27] That metaphysical propositions can influence scientific theorizing is John Watkins' most lasting contribution to philosophy. Since 1957 [28] [29] "he showed the ways in which some un-testable and hence, according to Popperian ideas, non-empirical propositions can nevertheless be influential in the development of properly testable and hence scientific theories. These profound results in applied elementary logic ... represented an important corrective to positivist teachings about the meaninglessness of metaphysics and of normative claims". [30] Imre Lakatos maintained that all scientific theories have a metaphysical "hard core" essential for the generation of hypotheses and theoretical assumptions. [31] Thus, according to Lakatos, "scientific changes are connected with vast cataclysmic metaphysical revolutions." [32]

An example from biology of Lakatos' thesis: David Hull has argued that changes in the ontological status of the species concept have been central in the development of biological thought from Aristotle through Cuvier, Lamarck, and Darwin. Darwin's ignorance of metaphysics made it more difficult for him to respond to his critics because he could not readily grasp the ways in which their underlying metaphysical views differed from his own. [33]

In physics, new metaphysical ideas have arisen in connection with quantum mechanics, where subatomic particles arguably do not have the same sort of individuality as the particulars with which philosophy has traditionally been concerned. [34] Also, adherence to a deterministic metaphysics in the face of the challenge posed by the quantum-mechanical uncertainty principle led physicists such as Albert Einstein to propose alternative theories that retained determinism. [35] A.N. Whitehead is famous for creating a process philosophy metaphysics inspired by electromagnetism and special relativity. [36]

In chemistry, Gilbert Newton Lewis addressed the nature of motion, arguing that an electron should not be said to move when it has none of the properties of motion. [37]

Katherine Hawley notes that the metaphysics even of a widely accepted scientific theory may be challenged if it can be argued that the metaphysical presuppositions of the theory make no contribution to its predictive success. [38]

Theology

There is a relationship between theological doctrines and philosophical reflection in the philosophy of a religion (such as Christian philosophy); philosophical reflections are strictly rational. On this way of seeing the two disciplines, if at least one of the premises of an argument is derived from revelation, the argument falls in the domain of theology; otherwise it falls into philosophy's domain. [39] [40]

Meta-metaphysics is the branch of philosophy that is concerned with the foundations of metaphysics. [41] A number of individuals have suggested that much or all of metaphysics should be rejected, a meta-metaphysical position known as metaphysical deflationism [a] [42] or ontological deflationism. [43]

In the 16th century, Francis Bacon rejected scholastic metaphysics and argued strongly for what is now called empiricism; he was later seen as the father of modern empirical science. In the 18th century, David Hume took a strong position, arguing that all genuine knowledge involves either mathematics or matters of fact and that metaphysics, which goes beyond these, is worthless. He concluded his Enquiry Concerning Human Understanding (1748) with the statement:

If we take in our hand any volume [book] of divinity or school metaphysics, for instance, let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion. [44]

Thirty-three years after Hume's Enquiry appeared, Immanuel Kant published his Critique of Pure Reason. Although he followed Hume in rejecting much of previous metaphysics, he argued that there was still room for some synthetic a priori knowledge, concerned with matters of fact yet obtainable independent of experience. [45] These included fundamental structures of space, time, and causality. He also argued for the freedom of the will and the existence of "things in themselves", the ultimate (but unknowable) objects of experience.

Wittgenstein introduced the concept that metaphysics could be influenced by theories of aesthetics, via logic, viz. a world composed of "atomical facts". [46] [47]

In the 1930s, A.J. Ayer and Rudolf Carnap endorsed Hume's position; Carnap quoted the passage above. [48] They argued that metaphysical statements are neither true nor false but meaningless since, according to their verifiability theory of meaning, a statement is meaningful only if there can be empirical evidence for or against it. Thus, while Ayer rejected the monism of Spinoza, he avoided a commitment to pluralism, the contrary position, by holding both views to be without meaning. [49] Carnap took a similar line with the controversy over the reality of the external world. [50] While the logical positivism movement is now considered dead (with Ayer, a major proponent, admitting in a 1979 TV interview that "nearly all of it was false"), [51] it has continued to influence the development of philosophy. [52]

Arguing against such rejections, the Scholastic philosopher Edward Feser held that Hume's critique of metaphysics, and specifically Hume's fork, is "notoriously self-refuting". [53] Feser argued that Hume's fork itself is not a conceptual truth and is not empirically testable.

Some living philosophers, such as Amie Thomasson, have argued that many metaphysical questions can be dissolved just by looking at the way we use words; others, such as Ted Sider, have argued that metaphysical questions are substantive, and that we can make progress toward answering them by comparing theories according to a range of theoretical virtues inspired by the sciences, such as simplicity and explanatory power. [54]

The word "metaphysics" derives from the Greek words μετά (metá, "after") and φυσικά (physiká, "physics"). [55] It was first used as the title for several of Aristotle's works, because they were usually anthologized after the works on physics in complete editions. The prefix meta- ("after") indicates that these works come "after" the chapters on physics. However, Aristotle himself did not call the subject of these books metaphysics: he referred to it as "first philosophy" (Greek: πρώτη φιλοσοφία Latin: philosophia prima). The editor of Aristotle's works, Andronicus of Rhodes, is thought to have placed the books on first philosophy right after another work, Physics, and called them τὰ μετὰ τὰ φυσικὰ βιβλία (tà metà tà physikà biblía) or "the books [that come] after the [books on] physics".

However, once the name was given, the commentators sought to find other reasons for its appropriateness. For instance, Thomas Aquinas understood it to refer to the chronological or pedagogical order among our philosophical studies, so that the "metaphysical sciences" would mean "those that we study after having mastered the sciences that deal with the physical world". [56]

The term was misread by other medieval commentators, who thought it meant "the science of what is beyond the physical". [57] Following this tradition, the prefix meta- has more recently been prefixed to the names of sciences to designate higher sciences dealing with ulterior and more fundamental problems: hence metamathematics, metaphysiology, etc. [58]

A person who creates or develops metaphysical theories is called a metaphysician. [59]

Common parlance also uses the word "metaphysics" for a different referent from that of the present article, namely for beliefs in arbitrary non-physical or magical entities. For example, "metaphysical healing" refers to healing by means of remedies that are magical rather than scientific. [60] This usage stemmed from the various historical schools of speculative metaphysics which operated by postulating all manner of physical, mental and spiritual entities as bases for particular metaphysical systems. Metaphysics as a subject does not preclude beliefs in such magical entities but neither does it promote them. Rather, it is the subject which provides the vocabulary and logic with which such beliefs might be analyzed and studied, for example to search for inconsistencies both within themselves and with other accepted systems such as science.

Pre-history

Cognitive archeology such as analysis of cave paintings and other pre-historic art and customs suggests that a form of perennial philosophy or Shamanic metaphysics may stretch back to the birth of behavioral modernity, all around the world. Similar beliefs are found in present-day "stone age" cultures such as Australian aboriginals. Perennial philosophy postulates the existence of a spirit or concept world alongside the day-to-day world, and interactions between these worlds during dreaming and ritual, or on special days or at special places. It has been argued that perennial philosophy formed the basis for Platonism, with Plato articulating, rather than creating, much older widespread beliefs. [61] [62]

Bronze Age

Bronze Age cultures such as ancient Mesopotamia and ancient Egypt (along with similarly structured but chronologically later cultures such as the Maya and Aztecs) developed belief systems based on mythology, anthropomorphic gods, mind–body dualism, and a spirit world, to explain causes and cosmology. These cultures appear to have been interested in astronomy and may have associated or identified the stars with some of these entities. In ancient Egypt, the ontological distinction between order (maat) and chaos (Isfet) seems to have been important. [63]

Pre-Socratic Greece

The first named Greek philosopher, according to Aristotle, is Thales of Miletus, active in the early 6th century BCE. He made use of purely physical explanations of the phenomena of the world rather than the mythological and divine explanations of tradition. He is thought to have posited water as the single underlying principle (or arche, in later Aristotelian terminology) of the material world. His fellow but younger Milesians, Anaximander and Anaximenes, also posited monistic underlying principles, namely apeiron (the indefinite or boundless) and air respectively.

Another school was the Eleatics, in southern Italy. The group was founded in the early fifth century BCE by Parmenides, and included Zeno of Elea and Melissus of Samos. Methodologically, the Eleatics were broadly rationalist, and took logical standards of clarity and necessity to be the criteria of truth. Parmenides' chief doctrine was that reality is a single unchanging and universal Being. Zeno used reductio ad absurdum to demonstrate the illusory nature of change and time in his paradoxes.

Heraclitus of Ephesus, in contrast, made change central, teaching that "all things flow". His philosophy, expressed in brief aphorisms, is quite cryptic. For instance, he taught the unity of opposites.

Democritus and his teacher Leucippus are known for formulating an atomic theory of the cosmos. [64] They are considered forerunners of the scientific method.

Classical China

Metaphysics in Chinese philosophy can be traced back to the earliest Chinese philosophical concepts from the Zhou Dynasty, such as Tian (Heaven) and Yin and Yang. The fourth century BCE saw a turn towards cosmogony with the rise of Taoism (in the Daodejing and Zhuangzi), which sees the natural world as a dynamic and constantly changing process spontaneously arising from a single immanent metaphysical source or principle (Tao). [65] Another philosophical school which arose around this time was the School of Naturalists, which saw the ultimate metaphysical principle as the Taiji, the "supreme polarity" composed of the forces of Yin and Yang, which were always in a state of change seeking balance. Another concern of Chinese metaphysics, especially Taoism, is the relationship and nature of Being and non-Being (you 有 and wu 無). The Taoists held that the ultimate, the Tao, was also non-being or no-presence. [65] Other important concepts were those of spontaneous generation or natural vitality (Ziran) and "correlative resonance" (Ganying).

After the fall of the Han Dynasty (220 CE), China saw the rise of the Neo-Taoist Xuanxue school. This school was very influential in developing the concepts of later Chinese metaphysics. [65] Buddhist philosophy entered China (c. 1st century CE) and was influenced by the native Chinese metaphysical concepts to develop new theories. The native Tiantai and Huayan schools of philosophy maintained and reinterpreted the Indian theories of shunyata (emptiness, kong 空) and Buddha-nature (Fo xing 佛性) into the theory of the interpenetration of phenomena. Neo-Confucians like Zhang Zai, under the influence of other schools, developed the concepts of "principle" (li) and vital energy (qi).

Classical Greece

Socrates and Plato

Socrates is known for his dialectic or questioning approach to philosophy rather than a positive metaphysical doctrine.

His pupil Plato is famous for his theory of forms (which he places in the mouth of Socrates in his dialogues). Platonic realism (also considered a form of idealism) [66] is considered to be a solution to the problem of universals; i.e., what particular objects have in common is that they share a specific Form which is universal to all others of their respective kind.

The theory has a number of other aspects:

  • Epistemological: knowledge of the Forms is more certain than mere sensory data.
  • Ethical: The Form of the Good sets an objective standard for morality.
  • Time and Change: The world of the Forms is eternal and unchanging. Time and change belong only to the lower sensory world. "Time is a moving image of Eternity".
  • Abstract objects and mathematics: Numbers, geometrical figures, etc., exist mind-independently in the World of Forms.

Platonism developed into Neoplatonism, a philosophy with a monotheistic and mystical flavour that survived well into the early Christian era.

Aristotle

Plato's pupil Aristotle wrote widely on almost every subject, including metaphysics. His solution to the problem of universals contrasts with Plato's. Whereas Platonic Forms are existentially apparent in the visible world, Aristotelian essences dwell in particulars.

Potentiality and Actuality [67] are principles of a dichotomy which Aristotle used throughout his philosophical works to analyze motion, causality and other issues.

The Aristotelian theory of change and causality stretches to four causes: the material, formal, efficient and final. The efficient cause corresponds to what is now known as a cause simpliciter. Final causes are explicitly teleological, a concept now regarded as controversial in science. [68] The Matter/Form dichotomy was to become highly influential in later philosophy as the substance/essence distinction.

The opening arguments in Aristotle's Metaphysics, Book I, revolve around the senses, knowledge, experience, theory, and wisdom. The first main focus in the Metaphysics is attempting to determine how intellect "advances from sensation through memory, experience, and art, to theoretical knowledge". [69] Aristotle claims that eyesight provides us with the capability to recognize and remember experiences, while sound allows us to learn.

Classical India

More on Indian philosophy: Hindu philosophy

Sāṃkhya

Sāṃkhya is an ancient system of Indian philosophy based on a dualism involving the ultimate principles of consciousness and matter. [70] It is described as the rationalist school of Indian philosophy. [71] It is most related to the Yoga school of Hinduism, and its method was most influential on the development of Early Buddhism. [72]

The Sāmkhya is an enumerationist philosophy whose epistemology accepts three of six pramanas (proofs) as the only reliable means of gaining knowledge. These include pratyakṣa (perception), anumāṇa (inference) and śabda (āptavacana, word/testimony of reliable sources). [73] [74] [75]

Samkhya is strongly dualist. [76] [77] [78] Sāmkhya philosophy regards the universe as consisting of two realities: puruṣa (consciousness) and prakṛti (matter). Jiva (a living being) is that state in which puruṣa is bonded to prakṛti in some form. [79] This fusion, state the Samkhya scholars, led to the emergence of buddhi ("spiritual awareness") and ahaṅkāra (ego consciousness). The universe is described by this school as one created by purusa-prakṛti entities infused with various permutations and combinations of variously enumerated elements, senses, feelings, activity and mind. [79] During the state of imbalance, one or more constituents overwhelm the others, creating a form of bondage, particularly of the mind. The end of this imbalance and bondage is called liberation, or moksha, by the Samkhya school. [80]

The existence of God or supreme being is not directly asserted, nor considered relevant by the Samkhya philosophers. Sāṃkhya denies the final cause of Ishvara (God). [81] While the Samkhya school considers the Vedas as a reliable source of knowledge, it is an atheistic philosophy according to Paul Deussen and other scholars. [82] [83] A key difference between Samkhya and Yoga schools, state scholars, [83] [84] is that Yoga school accepts a "personal, yet essentially inactive, deity" or "personal god". [85]

Samkhya is known for its theory of guṇas (qualities, innate tendencies). [86] Guṇas, it states, are of three types: sattva, being good, compassionate, illuminating, positive, and constructive; rajas, being one of activity, chaos, passion, and impulsiveness, potentially good or bad; and tamas, being the quality of darkness, ignorance, destructiveness, lethargy, and negativity. Everything, all life forms and human beings, state Samkhya scholars, have these three guṇas, but in different proportions. The interplay of these guṇas defines the character of someone or something, of nature, and determines the progress of life. [87] [88] The Samkhya theory of guṇas was widely discussed, developed and refined by various schools of Indian philosophies, including Buddhism. [89] Samkhya's philosophical treatises also influenced the development of various theories of Hindu ethics. [72]

Vedānta

Realization of the nature of Self-identity is the principal object of the Vedanta system of Indian metaphysics. In the Upanishads, self-consciousness is not the first-person indexical self-awareness or the self-awareness which is self-reference without identification, [90] and also not the self-consciousness which as a kind of desire is satisfied by another self-consciousness. [91] It is Self-realisation; the realisation of the Self consisting of consciousness that leads all else. [92]

The word Self-consciousness in the Upanishads means the knowledge about the existence and nature of Brahman. It means the consciousness of our own real being, the primary reality. [93] Self-consciousness means Self-knowledge, the knowledge of Prajna, i.e. of Prana, which is Brahman. [94] According to the Upanishads, the Atman or Paramatman is phenomenally unknowable; it is the object of realisation. The Atman is unknowable in its essential nature because it is the eternal subject who knows about everything, including itself. The Atman is the knower and also the known. [95]

Metaphysicians regard the Self either to be distinct from the Absolute or entirely identical with the Absolute. They have given form to three schools of thought – a) the Dualistic school, b) the Quasi-dualistic school and c) the Monistic school, as the result of their varying mystical experiences. Prakrti and Atman, when treated as two separate and distinct aspects form the basis of the Dualism of the Shvetashvatara Upanishad. [96] Quasi-dualism is reflected in the Vaishnavite-monotheism of Ramanuja and the absolute Monism, in the teachings of Adi Shankara. [97]

Self-consciousness is the Fourth state of consciousness or Turiya, the first three being Vaisvanara, Taijasa and Prajna. These are the four states of individual consciousness.

There are several distinct stages leading to Self-realisation. The first stage is in mystically apprehending the glory of the Self within us, as though we were distinct from it. The second stage is in identifying the "I-within" with the Self, that we are in essential nature entirely identical with the pure Self. The third stage is in realising that the Atman is Brahman, that there is no difference between the Self and the Absolute. The fourth stage is in realising "I am the Absolute" – Aham Brahman Asmi. The fifth stage is in realising that Brahman is the "All" that exists, as also that which does not exist. [98]

Buddhist metaphysics

In Buddhist philosophy there are various metaphysical traditions that have proposed different questions about the nature of reality based on the teachings of the Buddha in the early Buddhist texts. The Buddha of the early texts does not focus on metaphysical questions but on ethical and spiritual training; in some cases, he dismisses certain metaphysical questions as unhelpful and indeterminate (avyakta), recommending that they be set aside. The development of systematic metaphysics arose after the Buddha's death with the rise of the Abhidharma traditions. [99] The Buddhist Abhidharma schools developed their analysis of reality based on the concept of dharmas, the ultimate physical and mental events that make up experience, and their relations to each other. Noa Ronkin has called their approach "phenomenological". [100]

Later philosophical traditions include the Madhyamika school of Nagarjuna, which further developed the theory of the emptiness (shunyata) of all phenomena or dharmas which rejects any kind of substance. This has been interpreted as a form of anti-foundationalism and anti-realism which sees reality as having no ultimate essence or ground. [101] The Yogacara school meanwhile promoted a theory called "awareness only" (vijnapti-matra) which has been interpreted as a form of Idealism or Phenomenology and denies the split between awareness itself and the objects of awareness. [102]

Islamic metaphysics

Major ideas in Sufi metaphysics have surrounded the concept of weḥdah (وحدة), meaning "unity", or in Arabic توحيد tawhid. Waḥdat al-wujūd literally means the "Unity of Existence" or "Unity of Being"; the phrase has been translated as "pantheism." [103] Wujud (i.e. existence or presence) here refers to Allah's wujud (compare tawhid). On the other hand, waḥdat ash-shuhūd, meaning "Apparentism" or "Monotheism of Witness", holds that God and his creation are entirely separate.

Scholasticism and the Middle Ages

More on medieval philosophy and metaphysics: Medieval Philosophy

Between about 1100 and 1500, philosophy as a discipline took place as part of the Catholic church's teaching system, known as scholasticism. Scholastic philosophy took place within an established framework blending Christian theology with Aristotelian teachings. Although fundamental orthodoxies were not commonly challenged, there were nonetheless deep metaphysical disagreements, particularly over the problem of universals, which engaged Duns Scotus and Pierre Abelard. William of Ockham is remembered for his principle of ontological parsimony.

Continental rationalism

In the early modern period (17th and 18th centuries), the system-building scope of philosophy is often linked to the rationalist method of philosophy, that is the technique of deducing the nature of the world by pure reason. The scholastic concepts of substance and accident were employed.

Leibniz proposed in his Monadology a plurality of non-interacting substances. Descartes is famous for his dualism of material and mental substances. Spinoza believed reality was a single substance of God-or-nature.

Wolff

Christian Wolff divided theoretical philosophy into an ontology or philosophia prima as a general metaphysics, [104] which arises as a preliminary to the distinction of the three "special metaphysics" [105] on the soul, world and God: [106] [107] rational psychology, [108] [109] rational cosmology [110] and rational theology. [111] The three disciplines are called empirical and rational because they are independent of revelation. This scheme, which is the counterpart of the religious tripartition in creature, creation, and Creator, is best known to philosophical students by Kant's treatment of it in the Critique of Pure Reason. In the preface to the second edition of Kant's book, Wolff is described as "the greatest of all dogmatic philosophers." [112]

British empiricism

British empiricism marked something of a reaction to rationalist and system-building metaphysics, or speculative metaphysics as it was pejoratively termed. The skeptic David Hume famously declared that most metaphysics should be consigned to the flames (see above). Hume was notorious among his contemporaries as one of the first philosophers to openly doubt religion, but is better known now for his critique of causality. John Stuart Mill, Thomas Reid and John Locke were less skeptical, embracing a more cautious style of metaphysics based on realism, common sense and science. Other philosophers, notably George Berkeley, were led from empiricism to idealistic metaphysics.

Kant

Immanuel Kant attempted a grand synthesis and revision of the trends already mentioned: scholastic philosophy, systematic metaphysics, and skeptical empiricism, not to forget the burgeoning science of his day. As did the systems builders, he had an overarching framework in which all questions were to be addressed. Like Hume, who famously woke him from his 'dogmatic slumbers', he was suspicious of metaphysical speculation, and also placed much emphasis on the limitations of the human mind. Kant described his shift in metaphysics away from making claims about an objective noumenal world, towards exploring the subjective phenomenal world, as a Copernican Revolution, by analogy to (though opposite in direction to) Copernicus' shift from man (the subject) to the sun (an object) at the center of the universe.

Kant saw rationalist philosophers as aiming for a kind of metaphysical knowledge he defined as the synthetic a priori, that is, knowledge that does not come from the senses (it is a priori) but is nonetheless about reality (synthetic). Inasmuch as it is about reality, it differs from abstract mathematical propositions (which he terms analytic a priori), and being a priori, it is distinct from empirical, scientific knowledge (which he terms synthetic a posteriori). The only synthetic a priori knowledge we can have is of how our minds organise the data of the senses; that organising framework is space and time, which for Kant have no mind-independent existence, but nonetheless operate uniformly in all humans. A priori knowledge of space and time is all that remains of metaphysics as traditionally conceived. There is a reality beyond sensory data or phenomena, which he calls the realm of noumena; however, we cannot know it as it is in itself, but only as it appears to us. He allows himself to speculate that the origins of phenomenal God, morality, and free will might exist in the noumenal realm, but these possibilities have to be set against its basic unknowability for humans. Although he saw himself as having disposed of metaphysics, in a sense, he has generally been regarded in retrospect as having a metaphysics of his own, and as beginning the modern analytical conception of the subject.

Late modern philosophy

Nineteenth-century philosophy was overwhelmingly influenced by Kant and his successors. Schopenhauer, Schelling, Fichte and Hegel all purveyed their own panoramic versions of German Idealism, Kant's own caution about metaphysical speculation, and his refutation of idealism, having fallen by the wayside. The idealistic impulse continued into the early twentieth century with British idealists such as F. H. Bradley and J. M. E. McTaggart. Followers of Karl Marx took Hegel's dialectic view of history and re-fashioned it as materialism.

Early analytic philosophy and positivism

During the period when idealism was dominant in philosophy, science had been making great advances. The arrival of a new generation of scientifically minded philosophers led to a sharp decline in the popularity of idealism during the 1920s.

Analytic philosophy was spearheaded by Bertrand Russell and G. E. Moore. Russell and William James tried to compromise between idealism and materialism with the theory of neutral monism.

Early to mid-twentieth-century philosophy saw a trend toward rejecting metaphysical questions as meaningless. The driving force behind this tendency was the philosophy of logical positivism as espoused by the Vienna Circle, which argued that the meaning of a statement was its prediction of observable results of an experiment, and thus that there is no need to postulate the existence of any objects other than these perceptual observations.

At around the same time, the American pragmatists were steering a middle course between materialism and idealism. System-building metaphysics, with a fresh inspiration from science, was revived by A. N. Whitehead and Charles Hartshorne.

Continental philosophy

The forces that shaped analytic philosophy—the break with idealism, and the influence of science—were much less significant outside the English speaking world, although there was a shared turn toward language. Continental philosophy continued in a trajectory from post-Kantianism.

The phenomenology of Husserl and others was intended as a collaborative project for the investigation of the features and structure of consciousness common to all humans, in line with Kant's basing his synthetic a priori on the uniform operation of consciousness. It was officially neutral with regards to ontology, but was nonetheless to spawn a number of metaphysical systems. Brentano's concept of intentionality would become widely influential, including on analytic philosophy.

Heidegger, author of Being and Time, saw himself as re-focusing on Being-qua-being, introducing the novel concept of Dasein in the process. Classing himself an existentialist, Sartre wrote an extensive study of Being and Nothingness.

The speculative realism movement marks a return to full blooded realism.

Process metaphysics

There are two fundamental aspects of everyday experience: change and persistence. Until recently, the Western philosophical tradition has arguably championed substance and persistence, with some notable exceptions. According to process thinkers, novelty, flux and accident do matter, and sometimes they constitute the ultimate reality.

In a broad sense, process metaphysics is as old as Western philosophy, with figures such as Heraclitus, Plotinus, Duns Scotus, Leibniz, David Hume, Georg Wilhelm Friedrich Hegel, Friedrich Wilhelm Joseph von Schelling, Gustav Theodor Fechner, Friedrich Adolf Trendelenburg, Charles Renouvier, Karl Marx, Ernst Mach, Friedrich Wilhelm Nietzsche, Émile Boutroux, Henri Bergson, Samuel Alexander and Nicolas Berdyaev. It seemingly remains an open question whether major "Continental" figures such as the later Martin Heidegger, Maurice Merleau-Ponty, Gilles Deleuze, Michel Foucault, or Jacques Derrida should be included. [113]

In a strict sense, process metaphysics may be limited to the works of a few founding fathers: G. W. F. Hegel, Charles Sanders Peirce, William James, Henri Bergson, A. N. Whitehead, and John Dewey. From a European perspective, there was a very significant and early Whiteheadian influence on the works of outstanding scholars such as Émile Meyerson (1859–1933), Louis Couturat (1868–1914), Jean Wahl (1888–1974), Robin George Collingwood (1889–1943), Philippe Devaux (1902–1979), Hans Jonas (1903–1993), Dorothy M. Emmett (1904–2000), Maurice Merleau-Ponty (1908–1961), Enzo Paci (1911–1976), Charlie Dunbar Broad (1887–1971), Wolfe Mays (1912–2005), Ilya Prigogine (1917–2003), Jules Vuillemin (1920–2001), Jean Ladrière (1921–2007), Gilles Deleuze (1925–1995), Wolfhart Pannenberg (1928–2014), and Reiner Wiehl (1929–2010). [114]

Contemporary analytic philosophy

While early analytic philosophy tended to reject metaphysical theorizing, under the influence of logical positivism, it was revived in the second half of the twentieth century. Philosophers such as David K. Lewis and David Armstrong developed elaborate theories on a range of topics such as universals, causation, possibility and necessity and abstract objects. However, the focus of analytic philosophy generally is away from the construction of all-encompassing systems and toward close analysis of individual ideas.

Among the developments that led to the revival of metaphysical theorizing was Quine's attack on the analytic–synthetic distinction, which was generally taken to undermine Carnap's distinction between existence questions internal to a framework and those external to it. [115]

The philosophy of fiction, the problem of empty names, and the debate over existence's status as a property have all come out of relative obscurity into the limelight, while perennial issues such as free will, possible worlds, and the philosophy of time have had new life breathed into them. [116] [117]

The analytic view is of metaphysics as studying phenomenal human concepts rather than making claims about the noumenal world, so its style often blurs into philosophy of language and introspective psychology. Compared to system-building, it can seem very dry, stylistically similar to computer programming, mathematics or even accountancy (as a common stated goal is to "account for" entities in the world).



Special relativity was originally proposed by Albert Einstein in a paper published on 26 September 1905 titled "On the Electrodynamics of Moving Bodies". [p 1] The incompatibility of Newtonian mechanics with Maxwell's equations of electromagnetism and, experimentally, the Michelson–Morley null result (and subsequent similar experiments) demonstrated that the historically hypothesized luminiferous aether did not exist. This led to Einstein's development of special relativity, which corrects mechanics to handle situations involving all motions, especially those at speeds close to that of light (known as relativistic velocities). Today, special relativity has proven to be the most accurate model of motion at any speed when gravitational and quantum effects are negligible. [3] [4] Even so, the Newtonian model is still valid as a simple and accurate approximation at low velocities (relative to the speed of light), for example, everyday motions on Earth.

Special relativity has a wide range of consequences that have been experimentally verified. [5] They include the relativity of simultaneity, length contraction, time dilation, the relativistic velocity addition formula, the relativistic Doppler effect, relativistic mass, a universal speed limit, mass–energy equivalence, the speed of causality and the Thomas precession. [1] [2] It has, for example, replaced the conventional notion of an absolute universal time with the notion of a time that is dependent on reference frame and spatial position. Rather than an invariant time interval between two events, there is an invariant spacetime interval. Combined with other laws of physics, the two postulates of special relativity predict the equivalence of mass and energy, as expressed in the mass–energy equivalence formula E = mc², where c is the speed of light in a vacuum. [6] [7] It also explains how the phenomena of electricity and magnetism are related. [1] [2]

A defining feature of special relativity is the replacement of the Galilean transformations of Newtonian mechanics with the Lorentz transformations. Time and space cannot be defined separately from each other (as was previously thought to be the case). Rather, space and time are interwoven into a single continuum known as "spacetime". Events that occur at the same time for one observer can occur at different times for another.

Until Einstein developed general relativity, introducing a curved spacetime to incorporate gravity, the phrase "special relativity" was not used. A translation sometimes used is "restricted relativity"; "special" really means "special case". [p 2] [p 3] [p 4] [note 1] Some of the work of Albert Einstein in special relativity is built on the earlier work by Hendrik Lorentz and Henri Poincaré. The theory became essentially complete in 1907. [4]

The theory is "special" in that it only applies in the special case where the spacetime is "flat", that is, the curvature of spacetime, described by the energy–momentum tensor and causing gravity, is negligible. [8] [note 2] In order to correctly accommodate gravity, Einstein formulated general relativity in 1915. Special relativity, contrary to some historical descriptions, does accommodate accelerations as well as accelerating frames of reference. [9] [10]

Just as Galilean relativity is now accepted to be an approximation of special relativity that is valid for low speeds, special relativity is considered an approximation of general relativity that is valid for weak gravitational fields, that is, at a sufficiently small scale (e.g., when tidal forces are negligible) and in conditions of free fall. General relativity, however, incorporates non-Euclidean geometry in order to represent gravitational effects as the geometric curvature of spacetime. Special relativity is restricted to the flat spacetime known as Minkowski space. As long as the universe can be modeled as a pseudo-Riemannian manifold, a Lorentz-invariant frame that abides by special relativity can be defined for a sufficiently small neighborhood of each point in this curved spacetime.

Galileo Galilei had already postulated that there is no absolute and well-defined state of rest (no privileged reference frames), a principle now called Galileo's principle of relativity. Einstein extended this principle so that it accounted for the constant speed of light, [11] a phenomenon that had been observed in the Michelson–Morley experiment. He also postulated that it holds for all the laws of physics, including both the laws of mechanics and of electrodynamics. [12]

Albert Einstein: Autobiographical Notes [p 5]

Einstein discerned two fundamental propositions that seemed to be the most assured, regardless of the exact validity of the (then) known laws of either mechanics or electrodynamics. These propositions were the constancy of the speed of light in a vacuum and the independence of physical laws (especially the constancy of the speed of light) from the choice of inertial system. In his initial presentation of special relativity in 1905 he expressed these postulates as: [p 1]

  • The Principle of Relativity – the laws by which the states of physical systems undergo change are not affected, whether these changes of state be referred to the one or the other of two systems in uniform translatory motion relative to each other. [p 1]
  • The Principle of Invariant Light Speed – "... light is always propagated in empty space with a definite velocity [speed] c which is independent of the state of motion of the emitting body" (from the preface). [p 1] That is, light in vacuum propagates with the speed c (a fixed constant, independent of direction) in at least one system of inertial coordinates (the "stationary system"), regardless of the state of motion of the light source.

The constancy of the speed of light was motivated by Maxwell's theory of electromagnetism and the lack of evidence for the luminiferous ether. There is conflicting evidence on the extent to which Einstein was influenced by the null result of the Michelson–Morley experiment. [13] [14] In any case, the null result of the Michelson–Morley experiment helped the notion of the constancy of the speed of light gain widespread and rapid acceptance.

The derivation of special relativity depends not only on these two explicit postulates, but also on several tacit assumptions (made in almost all theories of physics), including the isotropy and homogeneity of space and the independence of measuring rods and clocks from their past history. [p 6]

Following Einstein's original presentation of special relativity in 1905, many different sets of postulates have been proposed in various alternative derivations. [15] However, the most common set of postulates remains those employed by Einstein in his original paper. A more mathematical statement of the Principle of Relativity made later by Einstein, which introduces the concept of simplicity not mentioned above, is:

Special principle of relativity: If a system of coordinates K is chosen so that, in relation to it, physical laws hold good in their simplest form, the same laws hold good in relation to any other system of coordinates K′ moving in uniform translation relatively to K. [16]

Henri Poincaré provided the mathematical framework for relativity theory by proving that Lorentz transformations are a subset of his Poincaré group of symmetry transformations. Einstein later derived these transformations from his axioms.

Many of Einstein's papers present derivations of the Lorentz transformation based upon these two principles. [p 7]

Reference frames and relative motion

Reference frames play a crucial role in relativity theory. The term reference frame as used here is an observational perspective in space that is not undergoing any change in motion (acceleration), from which a position can be measured along 3 spatial axes (so, at rest or constant velocity). In addition, a reference frame has the ability to determine measurements of the time of events using a 'clock' (any reference device with uniform periodicity).

An event is an occurrence that can be assigned a single unique moment and location in space relative to a reference frame: it is a "point" in spacetime. Since the speed of light is constant in relativity irrespective of the reference frame, pulses of light can be used to unambiguously measure distances and refer back to the times that events occurred to the clock, even though light takes time to reach the clock after the event has transpired.

For example, the explosion of a firecracker may be considered to be an "event". We can completely specify an event by its four spacetime coordinates: the time of occurrence and its 3-dimensional spatial location define a reference point. Let's call this reference frame S.

In relativity theory, we often want to calculate the coordinates of an event from differing reference frames. The equations that relate measurements made in different frames are called transformation equations.

Standard configuration

To gain insight into how the spacetime coordinates measured by observers in different reference frames compare with each other, it is useful to work with a simplified setup with frames in a standard configuration. [17] : 107 With care, this allows simplification of the math with no loss of generality in the conclusions that are reached. In Fig. 2-1, two Galilean reference frames (i.e., conventional 3-space frames) are displayed in relative motion. Frame S belongs to a first observer O, and frame S′ (pronounced "S prime" or "S dash") belongs to a second observer O′.

  • The x, y, z axes of frame S are oriented parallel to the respective primed axes of frame S′.
  • Frame S′ moves, for simplicity, in a single direction: the x-direction of frame S with a constant velocity v as measured in frame S.
  • The origins of frames S and S′ are coincident when time t = 0 for frame S and t′ = 0 for frame S′.

Since there is no absolute reference frame in relativity theory, a concept of 'moving' doesn't strictly exist, as everything may be moving with respect to some other reference frame. Instead, any two frames that move at the same speed in the same direction are said to be comoving. Therefore, S and S′ are not comoving.

Lack of an absolute reference frame

The principle of relativity, which states that physical laws have the same form in each inertial reference frame, dates back to Galileo, and was incorporated into Newtonian physics. However, in the late 19th century, the existence of electromagnetic waves led some physicists to suggest that the universe was filled with a substance they called "aether", which, they postulated, would act as the medium through which these waves, or vibrations, propagated (in many respects similar to the way sound propagates through air). The aether was thought to be an absolute reference frame against which all speeds could be measured, and could be considered fixed and motionless relative to Earth or some other fixed reference point. The aether was supposed to be sufficiently elastic to support electromagnetic waves, while those waves could interact with matter, yet offering no resistance to bodies passing through it (its one property was that it allowed electromagnetic waves to propagate). The results of various experiments, including the Michelson–Morley experiment in 1887 (subsequently verified with more accurate and innovative experiments), led to the theory of special relativity, by showing that the aether did not exist. [18] Einstein's solution was to discard the notion of an aether and the absolute state of rest. In relativity, any reference frame moving with uniform motion will observe the same laws of physics. In particular, the speed of light in vacuum is always measured to be c, even when measured by multiple systems that are moving at different (but constant) velocities.

Relativity without the second postulate

From the principle of relativity alone without assuming the constancy of the speed of light (i.e., using the isotropy of space and the symmetry implied by the principle of special relativity) it can be shown that the spacetime transformations between inertial frames are either Euclidean, Galilean, or Lorentzian. In the Lorentzian case, one can then obtain relativistic interval conservation and a certain finite limiting speed. Experiments suggest that this speed is the speed of light in vacuum. [p 8] [19]

Alternative approaches to special relativity

Einstein consistently based the derivation of Lorentz invariance (the essential core of special relativity) on just the two basic principles of relativity and light-speed invariance. He wrote:

The insight fundamental for the special theory of relativity is this: The assumptions relativity and light speed invariance are compatible if relations of a new type ("Lorentz transformation") are postulated for the conversion of coordinates and times of events ... The universal principle of the special theory of relativity is contained in the postulate: The laws of physics are invariant with respect to Lorentz transformations (for the transition from one inertial system to any other arbitrarily chosen inertial system). This is a restricting principle for natural laws ... [p 5]

Thus many modern treatments of special relativity base it on the single postulate of universal Lorentz covariance, or, equivalently, on the single postulate of Minkowski spacetime. [p 9] [p 10]

Rather than considering universal Lorentz covariance to be a derived principle, this article considers it to be the fundamental postulate of special relativity. The traditional two-postulate approach to special relativity is presented in innumerable college textbooks and popular presentations. [20] Textbooks starting with the single postulate of Minkowski spacetime include those by Taylor and Wheeler [21] and by Callahan. [22] This is also the approach followed by the Wikipedia articles Spacetime and Minkowski diagram.

Lorentz transformation and its inverse

Define an event to have spacetime coordinates (t, x, y, z) in system S and (t′, x′, y′, z′) in a reference frame S′ moving at a velocity v with respect to S. Then the Lorentz transformation specifies that these coordinates are related in the following way:

t′ = γ(t − vx/c²)
x′ = γ(x − vt)
y′ = y
z′ = z

where

γ = 1/√(1 − v²/c²)

is the Lorentz factor and c is the speed of light in vacuum, and the velocity v of S′, relative to S, is parallel to the x-axis. For simplicity, the y and z coordinates are unaffected; only the x and t coordinates are transformed. These Lorentz transformations form a one-parameter group of linear mappings, that parameter being called rapidity.
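
As a concrete check of these relations, here is a minimal numerical sketch in Python (the function names and sample values are ours, chosen purely for illustration):

    import math

    C = 299_792_458.0  # speed of light in vacuum, m/s

    def lorentz(t, x, v):
        """Map an event (t, x) in S to (t', x') in S', moving at v along x."""
        g = 1.0 / math.sqrt(1.0 - (v / C) ** 2)  # Lorentz factor gamma
        return g * (t - v * x / C**2), g * (x - v * t)

    def inverse_lorentz(tp, xp, v):
        """Recover (t, x) from (t', x'); identical to lorentz with -v."""
        return lorentz(tp, xp, -v)

    # Round trip: boosting into S' and back recovers the original event.
    t, x, v = 1.0, 1.0e8, 0.6 * C
    tp, xp = lorentz(t, x, v)
    t2, x2 = inverse_lorentz(tp, xp, v)
    assert math.isclose(t, t2) and math.isclose(x, x2)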

Solving the four transformation equations above for the unprimed coordinates yields the inverse Lorentz transformation:

t = γ(t′ + vx′/c²)
x = γ(x′ + vt′)
y = y′
z = z′

Enforcing this inverse Lorentz transformation to coincide with the Lorentz transformation from the primed to the unprimed system shows the unprimed frame as moving with the velocity v′ = −v, as measured in the primed frame.

There is nothing special about the x-axis. The transformation can apply to the y- or z-axis, or indeed in any direction parallel to the motion (which is warped by the γ factor) and perpendicular to it; see the article Lorentz transformation for details.

A quantity invariant under Lorentz transformations is known as a Lorentz scalar.

Writing the Lorentz transformation and its inverse in terms of coordinate differences, where one event has coordinates (x₁, t₁) in S and (x′₁, t′₁) in S′, another event has coordinates (x₂, t₂) and (x′₂, t′₂), and the differences are defined as Δx = x₂ − x₁, Δt = t₂ − t₁, Δx′ = x′₂ − x′₁, Δt′ = t′₂ − t′₁, we get

Δt′ = γ(Δt − vΔx/c²), Δx′ = γ(Δx − vΔt)     (Equation 3)

Δt = γ(Δt′ + vΔx′/c²), Δx = γ(Δx′ + vΔt′)     (Equation 4)

If we take differentials instead of taking differences, we get

dt′ = γ(dt − v dx/c²), dx′ = γ(dx − v dt)     (Equation 5)

Graphical representation of the Lorentz transformation

Spacetime diagrams (Minkowski diagrams) are an extremely useful aid to visualizing how coordinates transform between different reference frames. Although it is not as easy to perform exact computations using them as directly invoking the Lorentz transformations, their main power is their ability to provide an intuitive grasp of the results of a relativistic scenario. [19]

To draw a spacetime diagram, begin by considering two Galilean reference frames, S and S', in standard configuration, as shown in Fig. 2-1. [19] [23] : 155–199

While the unprimed frame is drawn with space and time axes that meet at right angles, the primed frame is drawn with axes that meet at acute or obtuse angles. This asymmetry is due to unavoidable distortions in how spacetime coordinates map onto a Cartesian plane, but the frames are actually equivalent.

The consequences of special relativity can be derived from the Lorentz transformation equations. [24] These transformations, and hence special relativity, lead to different physical predictions than those of Newtonian mechanics at all relative velocities, most pronounced when relative velocities become comparable to the speed of light. The speed of light is so much larger than anything most humans encounter that some of the effects predicted by relativity are initially counterintuitive.

Invariant interval

In special relativity, however, the interweaving of spatial and temporal coordinates generates the concept of an invariant interval, denoted as Δs²:

Δs² = c²Δt² − (Δx² + Δy² + Δz²)

The interweaving of space and time revokes the implicitly assumed concepts of absolute simultaneity and synchronization across non-comoving frames.

The form of Δs², being the difference of the squared time lapse and the squared spatial distance, demonstrates a fundamental discrepancy between Euclidean and spacetime distances. [note 7] The invariance of this interval is a property of the general Lorentz transform (also called the Poincaré transformation), making it an isometry of spacetime. The general Lorentz transform extends the standard Lorentz transform (which deals with translations without rotation, that is, Lorentz boosts, in the x-direction) with all other translations, reflections, and rotations between any Cartesian inertial frame. [28] : 33–34

In the analysis of simplified scenarios, such as spacetime diagrams, a reduced-dimensionality form of the invariant interval is often employed:

Δs² = c²Δt² − Δx²

Demonstrating that the interval is invariant is straightforward for the reduced-dimensionality case and with frames in standard configuration: [19]

c²Δt′² − Δx′² = γ²(cΔt − vΔx/c)² − γ²(Δx − vΔt)² = c²Δt² − Δx²

where the second equality follows by expanding the squares and using γ² = 1/(1 − v²/c²); the cross terms cancel.

In considering the physical significance of Δs², there are three cases to note: [19] [29] : 25–39

  • Δs² > 0: In this case, the two events are separated by more time than space, and they are hence said to be timelike separated. This implies that |Δx/Δt| < c, and given the Lorentz transformation Δx′ = γ(Δx − vΔt), it is evident that there exists a v less than c for which Δx′ = 0 (in particular, v = Δx/Δt). In other words, given two events that are timelike separated, it is possible to find a frame in which the two events happen at the same place. In this frame, the separation in time, Δs/c, is called the proper time.
  • Δs² < 0: In this case, the two events are separated by more space than time, and they are hence said to be spacelike separated. This implies that |Δx/Δt| > c, and given the Lorentz transformation Δt′ = γ(Δt − vΔx/c²), there exists a v less than c for which Δt′ = 0 (in particular, v = c²Δt/Δx). In other words, given two events that are spacelike separated, it is possible to find a frame in which the two events happen at the same time. In this frame, the separation in space, √(−Δs²), is called the proper distance, or proper length. For values of v greater than and less than c²Δt/Δx, the sign of Δt′ changes, meaning that the temporal order of spacelike-separated events changes depending on the frame in which the events are viewed. The temporal order of timelike-separated events, however, is absolute, since the only way that v could be greater than c²Δt/Δx would be if v > c.
  • Δs² = 0: In this case, the two events are said to be lightlike separated. This implies that |Δx/Δt| = c, and this relationship is frame independent due to the invariance of Δs². From this, we observe that the speed of light is c in every inertial frame. In other words, starting from the assumption of universal Lorentz covariance, the constant speed of light is a derived result, rather than a postulate as in the two-postulates formulation of the special theory.
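
These cases can be explored numerically. The sketch below (illustrative values and helper names of our own choosing) classifies a timelike pair of events and finds the frame in which they are co-local, recovering the proper time Δs/c:

    import math

    C = 299_792_458.0  # speed of light in vacuum, m/s

    def interval(dt, dx):
        """Reduced-dimensionality invariant interval: c^2*dt^2 - dx^2."""
        return (C * dt) ** 2 - dx ** 2

    dt, dx = 1.0, 1.0e8          # 1 s and 10^8 m apart: |dx/dt| < c
    s2 = interval(dt, dx)
    assert s2 > 0                # timelike separation

    v = dx / dt                  # boost that brings both events to one place
    g = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    dx_prime = g * (dx - v * dt)             # zero by construction
    dt_prime = g * (dt - v * dx / C**2)
    assert math.isclose(dt_prime, math.sqrt(s2) / C)  # proper time = sqrt(s2)/c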

Relativity of simultaneity

Consider two events happening in two different locations that occur simultaneously in the reference frame of one inertial observer. They may occur non-simultaneously in the reference frame of another inertial observer (lack of absolute simultaneity).

From Equation 3 (the forward Lorentz transformation in terms of coordinate differences),

Δt′ = γ(Δt − vΔx/c²)

It is clear that the two events that are simultaneous in frame S (satisfying Δt = 0) are not necessarily simultaneous in another inertial frame S′ (satisfying Δt′ = 0). Only if these events are additionally co-local in frame S (satisfying Δx = 0) will they be simultaneous in another frame S′.
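
A short numerical illustration (the values are arbitrary, chosen by us): two events 300 km apart that are simultaneous in S are separated by roughly half a millisecond in a frame moving at 0.5c:

    import math

    C = 299_792_458.0
    v = 0.5 * C
    g = 1.0 / math.sqrt(1.0 - (v / C) ** 2)

    dt, dx = 0.0, 3.0e5                 # simultaneous in S, 300 km apart
    dt_prime = g * (dt - v * dx / C**2)
    print(dt_prime)                     # about -5.8e-4 s: not simultaneous in S'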

The Sagnac effect can be considered a manifestation of the relativity of simultaneity. [30] Since relativity of simultaneity is a first-order effect in v, [19] instruments based on the Sagnac effect for their operation, such as ring laser gyroscopes and fiber optic gyroscopes, are capable of extreme levels of sensitivity. [p 14]

Time dilation

The time lapse between two events is not invariant from one observer to another, but is dependent on the relative speeds of the observers' reference frames (e.g., the twin paradox, which concerns a twin who flies off in a spaceship traveling near the speed of light and returns to discover that the non-traveling twin sibling has aged much more, the paradox being that at constant velocity we are unable to discern which twin is non-traveling and which twin travels).

Suppose a clock is at rest in the unprimed system S. The location of the clock on two different ticks is then characterized by Δx = 0. To find the relation between the times between these ticks as measured in both systems, Equation 3 can be used to find:

Δt′ = γΔt

This shows that the time (Δt′) between the two ticks as seen in the frame in which the clock is moving (S′) is longer than the time (Δt) between these ticks as measured in the rest frame of the clock (S). Time dilation explains a number of physical phenomena; for example, the lifetime of high-speed muons created by the collision of cosmic rays with particles in the Earth's outer atmosphere and moving towards the surface is greater than the lifetime of slowly moving muons, created and decaying in a laboratory. [31]
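
The muon example can be put in rough numbers (the 2.2 μs rest-frame mean lifetime is the standard laboratory value; 0.995c is a representative cosmic-ray muon speed, not a figure from this article):

    import math

    C = 299_792_458.0
    tau = 2.2e-6                 # muon mean lifetime in its rest frame, s
    v = 0.995 * C                # representative speed of a cosmic-ray muon

    g = 1.0 / math.sqrt(1.0 - (v / C) ** 2)   # gamma, about 10
    print(g * tau)               # lifetime seen from the ground: ~2.2e-5 s
    print(g * tau * v / 1e3)     # mean distance traveled: ~6.6 km, versus
                                 # ~0.66 km if there were no time dilation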

Length contraction

The dimensions (e.g., length) of an object as measured by one observer may be smaller than the results of measurements of the same object made by another observer (e.g., the ladder paradox involves a long ladder traveling near the speed of light and being contained within a smaller garage).

Similarly, suppose a measuring rod is at rest and aligned along the x-axis in the unprimed system S. In this system, the length of this rod is written as Δx. To measure the length of this rod in the system S′, in which the rod is moving, the distances x′ to the end points of the rod must be measured simultaneously in that system S′. In other words, the measurement is characterized by Δt′ = 0, which can be combined with Equation 4 to find the relation between the lengths Δx and Δx′:

Δx′ = Δx/γ

This shows that the length (Δx′) of the rod as measured in the frame in which it is moving (S′) is shorter than its length (Δx) in its own rest frame (S).
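
Numerically (an illustrative sketch with values of our own choosing): at 0.8c the Lorentz factor is γ = 5/3, so a 10 m rod measures 6 m in the frame where it moves:

    import math

    C = 299_792_458.0
    v = 0.8 * C
    g = 1.0 / math.sqrt(1.0 - (v / C) ** 2)   # gamma = 1/0.6, about 1.667

    rest_length = 10.0                        # meters, in the rod's rest frame S
    print(rest_length / g)                    # 6.0 m as measured from S'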

Time dilation and length contraction are not merely appearances. Time dilation is explicitly related to our way of measuring time intervals between events that occur at the same place in a given coordinate system (called "co-local" events). These time intervals (which can be, and are, actually measured experimentally by relevant observers) are different in another coordinate system moving with respect to the first, unless the events, in addition to being co-local, are also simultaneous. Similarly, length contraction relates to our measured distances between separated but simultaneous events in a given coordinate system of choice. If these events are not co-local, but are separated by distance (space), they will not occur at the same spatial distance from each other when seen from another moving coordinate system.

Lorentz transformation of velocities

Consider two frames S and S′ in standard configuration. A particle in S moves in the x direction with velocity u = dx/dt (Equation 7). What is its velocity u′ = dx′/dt′ (Equation 8) in frame S′?

Substituting expressions for dx′ and dt′ from Equation 5 into Equation 8, followed by straightforward mathematical manipulations and back-substitution from Equation 7, yields the Lorentz transformation of the speed u to u′:

u′ = (u − v)/(1 − uv/c²)

The inverse relation is obtained by interchanging the primed and unprimed symbols and replacing v with −v.

The forward and inverse transformations for this case are:

u′ = (u − v)/(1 − uv/c²)     u = (u′ + v)/(1 + u′v/c²)

We note the following points:

  • If an object (e.g., a photon) were moving at the speed of light in one frame (i.e., u = ±c or u′ = ±c), then it would also be moving at the speed of light in any other frame, moving at |v| < c.
  • The resultant speed of two velocities with magnitude less than c is always a velocity with magnitude less than c.
  • If both |u| and |v| (and then also |u′| and |v′|) are small with respect to the speed of light (that is, e.g., |u/c| ≪ 1), then the intuitive Galilean transformations are recovered from the transformation equations for special relativity.
  • Attaching a frame to a photon (riding a light beam like Einstein considers) requires special treatment of the transformations.
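
The points above can be checked with a few lines of Python (the helper name add_velocities is ours, used only for illustration):

    C = 299_792_458.0

    def add_velocities(u_prime, v):
        """Inverse velocity transformation: speed in S of a particle moving
        at u' in frame S', which itself moves at v relative to S."""
        return (u_prime + v) / (1.0 + u_prime * v / C**2)

    print(add_velocities(0.8 * C, 0.8 * C) / C)  # ~0.9756: still below c
    print(add_velocities(C, 0.9 * C) / C)        # 1.0: light stays at c
    print(add_velocities(10.0, 20.0))            # ~30.0: the Galilean limit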

There is nothing special about the x direction in the standard configuration. The above formalism applies to any direction, and three orthogonal directions allow dealing with all directions in space by decomposing the velocity vectors into their components in these directions. See Velocity-addition formula for details.

Thomas rotation

The composition of two non-collinear Lorentz boosts (i.e., two non-collinear Lorentz transformations, neither of which involve rotation) results in a Lorentz transformation that is not a pure boost but is the composition of a boost and a rotation.

Unlike second-order relativistic effects such as length contraction or time dilation, this effect becomes quite significant even at fairly low velocities. For example, this can be seen in the spin of moving particles, where Thomas precession is a relativistic correction that applies to the spin of an elementary particle or the rotation of a macroscopic gyroscope, relating the angular velocity of the spin of a particle following a curvilinear orbit to the angular velocity of the orbital motion. [29] : 169–174

Thomas rotation provides the resolution to the well-known "meter stick and hole paradox". [p 15] [29] : 98–99

Causality and prohibition of motion faster than light

In Fig. 4-3, the time interval between the events A (the "cause") and B (the "effect") is 'time-like'; that is, there is a frame of reference in which events A and B occur at the same location in space, separated only by occurring at different times. If A precedes B in that frame, then A precedes B in all frames accessible by a Lorentz transformation. It is possible for matter (or information) to travel (below light speed) from the location of A, starting at the time of A, to the location of B, arriving at the time of B, so there can be a causal relationship (with A the cause and B the effect).

The interval AC in the diagram is 'space-like'; that is, there is a frame of reference in which events A and C occur simultaneously, separated only in space. There are also frames in which A precedes C (as shown) and frames in which C precedes A. However, there are no frames accessible by a Lorentz transformation in which events A and C occur at the same location. If it were possible for a cause-and-effect relationship to exist between events A and C, then paradoxes of causality would result.

For example, if signals could be sent faster than light, then signals could be sent into the sender's past (observer B in the diagrams). [32] [p 16] A variety of causal paradoxes could then be constructed.

Consider the spacetime diagrams in Fig. 4-4. A and B stand alongside a railroad track, when a high-speed train passes by, with C riding in the last car of the train and D riding in the leading car. The world lines of A and B are vertical (ct), distinguishing the stationary position of these observers on the ground, while the world lines of C and D are tilted forwards (ct′), reflecting the rapid motion of the observers C and D stationary in their train, as observed from the ground.

  1. Fig. 4-4a. The event of "B passing a message to D", as the leading car passes by, is at the origin of D's frame. D sends the message along the train to C in the rear car, using a fictitious "instantaneous communicator". The worldline of this message is the fat red arrow along the − x ′ axis, which is a line of simultaneity in the primed frames of C and D. In the (unprimed) ground frame the signal arrives earlier than it was sent.
  2. Fig. 4-4b. The event of "C passing the message to A", who is standing by the railroad tracks, is at the origin of their frames. Now A sends the message along the tracks to B via an "instantaneous communicator". The worldline of this message is the blue fat arrow, along the + x axis, which is a line of simultaneity for the frames of A and B. As seen from the spacetime diagram, B will receive the message before having sent it out, a violation of causality. [33]

Therefore, if causality is to be preserved, one of the consequences of special relativity is that no information signal or material object can travel faster than light in vacuum.

This is not to say that all faster-than-light speeds are impossible. Various trivial situations can be described where some "things" (not actual matter or energy) move faster than light. [35] For example, the location where the beam of a searchlight hits the bottom of a cloud can move faster than light when the searchlight is turned rapidly (although this does not violate causality or any other relativistic phenomenon). [36] [37]

Dragging effects

In 1850, Hippolyte Fizeau and Léon Foucault independently established that light travels more slowly in water than in air, thus validating a prediction of Fresnel's wave theory of light and invalidating the corresponding prediction of Newton's corpuscular theory. [38] The speed of light was measured in still water. What would be the speed of light in flowing water?

In 1851, Fizeau conducted an experiment to answer this question, a simplified representation of which is illustrated in Fig. 5-1. A beam of light is divided by a beam splitter, and the split beams are passed in opposite directions through a tube of flowing water. They are recombined to form interference fringes that an observer can view, indicating a difference in optical path length. The experiment demonstrated that dragging of the light by the flowing water caused a displacement of the fringes, showing that the motion of the water had affected the speed of the light.

According to the theories prevailing at the time, light traveling through a moving medium would be a simple sum of its speed through the medium plus the speed of the medium. Contrary to expectation, Fizeau found that although light appeared to be dragged by the water, the magnitude of the dragging was much lower than expected. If u′ = c/n is the speed of light in still water, v is the speed of the water, and u± is the water-borne speed of light in the lab frame, with the flow of water adding to or subtracting from the speed of light, then
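u± = c/n ± v(1 − 1/n²)

where 1 − 1/n² is the dragging coefficient referred to below.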

Fizeau's results, although consistent with Fresnel's earlier hypothesis of partial aether dragging, were extremely disconcerting to physicists of the time. Among other things, the presence of an index of refraction term meant that, since n depends on wavelength, the aether must be capable of sustaining different motions at the same time. [note 8] A variety of theoretical explanations were proposed to explain Fresnel's dragging coefficient that were completely at odds with each other. Even before the Michelson–Morley experiment, Fizeau's experimental results were among a number of observations that created a critical situation in explaining the optics of moving bodies. [39]

From the point of view of special relativity, Fizeau's result is nothing but an approximation to Equation 10, the relativistic formula for composition of velocities. [28]
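A quick numerical check of this statement (a sketch only; the refractive index and water speed below are assumed, illustrative values): composing c/n with the water speed v relativistically reproduces Fresnel's coefficient to first order in v/c.

    C = 299_792_458.0  # speed of light in vacuum, m/s

    def relativistic(n, v):
        """Exact relativistic composition of c/n with the medium speed v."""
        return (C / n + v) / (1.0 + v / (n * C))

    def fizeau(n, v):
        """Fizeau's empirical result: partial dragging by the factor 1 - 1/n^2."""
        return C / n + v * (1.0 - 1.0 / n ** 2)

    n, v = 1.33, 5.0  # water, assumed to flow at 5 m/s
    print(relativistic(n, v) - fizeau(n, v))  # ~ -3e-08 m/s: agreement to first order in v/c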

Relativistic aberration of light

Because of the finite speed of light, if the relative motions of a source and receiver include a transverse component, then the direction from which light arrives at the receiver will be displaced from the geometric position in space of the source relative to the receiver. The classical calculation of the displacement takes two forms and makes different predictions depending on whether the receiver, the source, or both are in motion with respect to the medium. (1) If the receiver is in motion, the displacement would be the consequence of the aberration of light. The incident angle of the beam relative to the receiver would be calculable from the vector sum of the receiver's motions and the velocity of the incident light. [40] (2) If the source is in motion, the displacement would be the consequence of light-time correction. The displacement of the apparent position of the source from its geometric position would be the result of the source's motion during the time that its light takes to reach the receiver. [41]

The classical explanation failed experimental test. Since the aberration angle depends on the relationship between the velocity of the receiver and the speed of the incident light, passage of the incident light through a refractive medium should change the aberration angle. In 1810, Arago used this expected phenomenon in a failed attempt to measure the speed of light, [42] and in 1870, George Airy tested the hypothesis using a water-filled telescope, finding that, against expectation, the measured aberration was identical to the aberration measured with an air-filled telescope. [43] A "cumbrous" attempt to explain these results used the hypothesis of partial aether-drag, [44] but was incompatible with the results of the Michelson–Morley experiment, which apparently demanded complete aether-drag. [45]

Assuming inertial frames, the relativistic expression for the aberration of light is applicable to both the receiver moving and source moving cases. A variety of trigonometrically equivalent formulas have been published. Expressed in terms of the variables in Fig. 5-2, these include [28] : 57–60
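One common form (written here with θ the angle measured in the unprimed frame; the exact variable names of Fig. 5-2 are not recoverable from the text) is

cos θ′ = (cos θ − v/c) / (1 − (v/c) cos θ)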

Relativistic Doppler effect

Relativistic longitudinal Doppler effect

The classical Doppler effect depends on whether the source, receiver, or both are in motion with respect to the medium. The relativistic Doppler effect is independent of any medium. Nevertheless, relativistic Doppler shift for the longitudinal case, with source and receiver moving directly towards or away from each other, can be derived as if it were the classical phenomenon, but modified by the addition of a time dilation term, and that is the treatment described here. [46] [47]

For light, and with the receiver moving at relativistic speeds, clocks on the receiver are time dilated relative to clocks at the source. The receiver will measure the received frequency to be
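f_r = f_s √((1 − β)/(1 + β))

where β = v/c is taken positive for source and receiver receding from each other.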

An identical expression for relativistic Doppler shift is obtained when performing the analysis in the reference frame of the receiver with a moving source. [48] [19]

Transverse Doppler effect

The transverse Doppler effect is one of the main novel predictions of the special theory of relativity.

Classically, one might expect that if source and receiver are moving transversely with respect to each other with no longitudinal component to their relative motions, there should be no Doppler shift in the light arriving at the receiver.

Special relativity predicts otherwise. Fig. 5-3 illustrates two common variants of this scenario. Both variants can be analyzed using simple time dilation arguments. [19] In Fig. 5-3a, the receiver observes light from the source as being blueshifted by a factor of γ . In Fig. 5-3b, the light is redshifted by the same factor.
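In symbols, with f_s the source frequency: the arrangement of Fig. 5-3a gives f_received = γ f_s, while that of Fig. 5-3b gives f_received = f_s/γ.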

Measurement versus visual appearance

Time dilation and length contraction are not optical illusions, but genuine effects. Measurements of these effects are not an artifact of Doppler shift, nor are they the result of neglecting to take into account the time it takes light to travel from an event to an observer.

Scientists make a fundamental distinction between measurement or observation on the one hand, versus visual appearance, or what one sees. The measured shape of an object is a hypothetical snapshot of all of the object's points as they exist at a single moment in time. The visual appearance of an object, however, is affected by the varying lengths of time that light takes to travel from different points on the object to one's eye.

For many years, the distinction between the two had not been generally appreciated, and it had generally been thought that a length contracted object passing by an observer would in fact actually be seen as length contracted. In 1959, James Terrell and Roger Penrose independently pointed out that differential time lag effects in signals reaching the observer from the different parts of a moving object result in a fast moving object's visual appearance being quite different from its measured shape. For example, a receding object would appear contracted, an approaching object would appear elongated, and a passing object would have a skew appearance that has been likened to a rotation. [p 19] [p 20] [49] [50] A sphere in motion retains the appearance of a sphere, although images on the surface of the sphere will appear distorted. [51]

Fig. 5-4 illustrates a cube viewed from a distance of four times the length of its sides. At high speeds, the sides of the cube that are perpendicular to the direction of motion appear hyperbolic in shape. The cube is actually not rotated. Rather, light from the rear of the cube takes longer to reach one's eyes compared with light from the front, during which time the cube has moved to the right. This illusion has come to be known as Terrell rotation or the Terrell–Penrose effect. [note 9]

Another example where visual appearance is at odds with measurement comes from the observation of apparent superluminal motion in various radio galaxies, BL Lac objects, quasars, and other astronomical objects that eject relativistic-speed jets of matter at narrow angles with respect to the viewer. An apparent optical illusion results giving the appearance of faster than light travel. [52] [53] [54] In Fig. 5-5, galaxy M87 streams out a high-speed jet of subatomic particles almost directly towards us, but Penrose–Terrell rotation causes the jet to appear to be moving laterally in the same manner that the appearance of the cube in Fig. 5-4 has been stretched out. [55]

The section Consequences derived from the Lorentz transformation dealt strictly with kinematics, the study of the motion of points, bodies, and systems of bodies without considering the forces that caused the motion. This section discusses masses, forces, energy and so forth, and as such requires consideration of physical effects beyond those encompassed by the Lorentz transformation itself.

Equivalence of mass and energy

As an object's speed approaches the speed of light from an observer's point of view, its relativistic mass increases thereby making it more and more difficult to accelerate it from within the observer's frame of reference.

The energy content of an object at rest with mass m equals mc². Conservation of energy implies that, in any reaction, a decrease of the sum of the masses of particles must be accompanied by an increase in kinetic energies of the particles after the reaction. Similarly, the mass of an object can be increased by taking in kinetic energies.

In addition to the papers referenced above—which give derivations of the Lorentz transformation and describe the foundations of special relativity—Einstein also wrote at least four papers giving heuristic arguments for the equivalence (and transmutability) of mass and energy, for E = mc².

Mass–energy equivalence is a consequence of special relativity. The energy and momentum, which are separate in Newtonian mechanics, form a four-vector in relativity, and this relates the time component (the energy) to the space components (the momentum) in a non-trivial way. For an object at rest, the energy–momentum four-vector is (E/c, 0, 0, 0): it has a time component which is the energy, and three space components which are zero. By changing frames with a Lorentz transformation in the x direction with a small value of the velocity v, the energy–momentum four-vector becomes (E/c, Ev/c², 0, 0). The momentum is equal to the energy multiplied by the velocity divided by c². As such, the Newtonian mass of an object, which is the ratio of the momentum to the velocity for slow velocities, is equal to E/c².
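Explicitly, with the boost written so that the object moves with velocity +v in the new frame, the time component transforms as γ(E/c) ≈ E/c and the x component picks up γ(v/c)(E/c) ≈ Ev/c², to first order in v/c, since γ ≈ 1 for small v.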

The energy and momentum are properties of matter and radiation, and it is impossible to deduce that they form a four-vector just from the two basic postulates of special relativity by themselves, because these don't talk about matter or radiation, they only talk about space and time. The derivation therefore requires some additional physical reasoning. In his 1905 paper, Einstein used the additional principles that Newtonian mechanics should hold for slow velocities, so that there is one energy scalar and one three-vector momentum at slow velocities, and that the conservation law for energy and momentum is exactly true in relativity. Furthermore, he assumed that the energy of light is transformed by the same Doppler-shift factor as its frequency, which he had previously shown to be true based on Maxwell's equations. [p 1] The first of Einstein's papers on this subject was "Does the Inertia of a Body Depend upon its Energy Content?" in 1905. [p 21] Although Einstein's argument in this paper is nearly universally accepted by physicists as correct, even self-evident, many authors over the years have suggested that it is wrong. [56] Other authors suggest that the argument was merely inconclusive because it relied on some implicit assumptions. [57]

Einstein acknowledged the controversy over his derivation in his 1907 survey paper on special relativity. There he notes that it is problematic to rely on Maxwell's equations for the heuristic mass–energy argument. The argument in his 1905 paper can be carried out with the emission of any massless particles, but the Maxwell equations are implicitly used to make it obvious that the emission of light in particular can be achieved only by doing work. To emit electromagnetic waves, all you have to do is shake a charged particle, and this is clearly doing work, so that the emission is of energy. [p 22] [note 10]

How far can one travel from the Earth?

Since one cannot travel faster than light, one might conclude that a human can never travel farther from Earth than 40 light years if the traveler is active between the ages of 20 and 60. One might easily think that a traveler would never be able to reach more than the very few solar systems which exist within the limit of 20–40 light years from the Earth. But that would be a mistaken conclusion. Because of time dilation, a hypothetical spaceship can travel thousands of light years during the pilot's 40 active years. If a spaceship could be built that accelerates at a constant 1g, it will, after a little less than a year, be travelling at almost the speed of light as seen from Earth. This is described by:
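v(t) = at / √(1 + (at/c)²)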

where v(t) is the velocity at a time t, a is the acceleration of 1g and t is the time as measured by people on Earth. [p 23] Therefore, after one year of accelerating at 9.81 m/s², the spaceship will be travelling at v = 0.77c relative to Earth. Time dilation will increase the traveller's life span as seen from the reference frame of the Earth to 2.7 years, but his lifespan measured by a clock travelling with him will not change. During his journey, people on Earth will experience more time than he does. A 5-year round trip for him will take 6.5 Earth years and cover a distance of over 6 light-years. A 20-year round trip for him (5 years accelerating, 5 decelerating, twice each) will land him back on Earth having travelled for 335 Earth years and a distance of 331 light years. [58] A full 40-year trip at 1g will appear on Earth to last 58,000 years and cover a distance of 55,000 light years. A 40-year trip at 1.1g will take 148,000 Earth years and cover about 140,000 light years. A one-way 28-year trip (14 years accelerating, 14 decelerating, as measured with the astronaut's clock) at 1g acceleration could reach 2,000,000 light-years to the Andromeda Galaxy. [58] This same time dilation is why a muon travelling close to c is observed to travel much farther than c times its half-life (when at rest). [59]

Theoretical investigation of classical electromagnetism led to the discovery of wave propagation. Equations generalizing electromagnetic effects showed that the finite propagation speed of the E and B fields required certain behaviors of charged particles. The general study of moving charges forms the Liénard–Wiechert potential, which is a step towards special relativity.

The Lorentz transformation of the electric field of a moving charge into a non-moving observer's reference frame results in the appearance of a mathematical term commonly called the magnetic field. Conversely, the magnetic field generated by a moving charge disappears and becomes a purely electrostatic field in a comoving frame of reference. Maxwell's equations are thus simply an empirical fit to special relativistic effects in a classical model of the Universe. As electric and magnetic fields are reference frame dependent and thus intertwined, one speaks of electromagnetic fields. Special relativity provides the transformation rules for how an electromagnetic field in one inertial frame appears in another inertial frame.
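In SI units, splitting the fields into components parallel and perpendicular to the boost velocity v, the standard transformation rules are: E′∥ = E∥, B′∥ = B∥, E′⊥ = γ(E + v × B)⊥, and B′⊥ = γ(B − (v × E)/c²)⊥.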

Maxwell's equations in the 3D form are already consistent with the physical content of special relativity, although they are easier to manipulate in a manifestly covariant form, that is, in the language of tensor calculus. [60]

Special relativity can be combined with quantum mechanics to form relativistic quantum mechanics and quantum electrodynamics. How general relativity and quantum mechanics can be unified is one of the unsolved problems in physics; quantum gravity and a "theory of everything", which require a unification including general relativity too, are active and ongoing areas in theoretical research.

The early Bohr–Sommerfeld atomic model explained the fine structure of alkali metal atoms using both special relativity and the preliminary knowledge on quantum mechanics of the time. [61]

In 1928, Paul Dirac constructed an influential relativistic wave equation, now known as the Dirac equation in his honour, [p 24] that is fully compatible both with special relativity and with the final version of quantum theory existing after 1926. This equation not only describes the intrinsic angular momentum of the electrons, called spin, it also led to the prediction of the antiparticle of the electron (the positron), [p 24] [p 25] and fine structure could only be fully explained with special relativity. It was the first foundation of relativistic quantum mechanics.

On the other hand, the existence of antiparticles leads to the conclusion that relativistic quantum mechanics is not enough for a more accurate and complete theory of particle interactions. Instead, a theory of particles interpreted as quantized fields, called quantum field theory, becomes necessary in which particles can be created and destroyed throughout space and time.

Special relativity in its Minkowski spacetime is accurate only when the absolute value of the gravitational potential is much less than c² in the region of interest. [62] In a strong gravitational field, one must use general relativity. General relativity becomes special relativity in the limit of a weak field. At very small scales, such as at the Planck length and below, quantum effects must be taken into consideration, resulting in quantum gravity. However, at macroscopic scales and in the absence of strong gravitational fields, special relativity is experimentally tested to an extremely high degree of accuracy (10⁻²⁰) [63] and thus accepted by the physics community. Experimental results which appear to contradict it are not reproducible and are thus widely believed to be due to experimental errors.

Special relativity is mathematically self-consistent, and it is an organic part of all modern physical theories, most notably quantum field theory, string theory, and general relativity (in the limiting case of negligible gravitational fields).

Newtonian mechanics mathematically follows from special relativity at small velocities (compared to the speed of light) – thus Newtonian mechanics can be considered as a special relativity of slow moving bodies. See classical mechanics for a more detailed discussion.

Several experiments predating Einstein's 1905 paper are now interpreted as evidence for relativity. Of these it is known Einstein was aware of the Fizeau experiment before 1905, [64] and historians have concluded that Einstein was at least aware of the Michelson–Morley experiment as early as 1899 despite claims he made in his later years that it played no role in his development of the theory. [14]

  • The Fizeau experiment (1851, repeated by Michelson and Morley in 1886) measured the speed of light in moving media, with results that are consistent with relativistic addition of collinear velocities.
  • The famous Michelson–Morley experiment (1881, 1887) gave further support to the postulate that detecting an absolute reference velocity was not achievable. It should be stated here that, contrary to many alternative claims, it said little about the invariance of the speed of light with respect to the source and observer's velocity, as both source and observer were travelling together at the same velocity at all times.
  • The Trouton–Noble experiment (1903) showed that the torque on a capacitor is independent of position and inertial reference frame.
  • The Experiments of Rayleigh and Brace (1902, 1904) showed that length contraction does not lead to birefringence for a co-moving observer, in accordance with the relativity principle.

Particle accelerators routinely accelerate and measure the properties of particles moving at near the speed of light, where their behavior is completely consistent with relativity theory and inconsistent with the earlier Newtonian mechanics. These machines would simply not work if they were not engineered according to relativistic principles. In addition, a considerable number of modern experiments have been conducted to test special relativity. Some examples:

  • testing the limiting speed of particles
  • testing relativistic Doppler effect and time dilation
  • relativistic effects on a fast-moving particle's half-life
  • time dilation in accordance with Lorentz transformations
  • testing isotropy of space and mass
  • various modern tests
  • Experiments to test emission theory demonstrated that the speed of light is independent of the speed of the emitter.
  • Experiments to test the aether drag hypothesis – no "aether flow obstruction".

Geometry of spacetime

Comparison between flat Euclidean space and Minkowski space

Special relativity uses a 'flat' 4-dimensional Minkowski space – an example of a spacetime. Minkowski spacetime appears to be very similar to the standard 3-dimensional Euclidean space, but there is a crucial difference with respect to time.

In 3D space, the differential of distance (line element) ds is defined by
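ds² = dx · dx = dx₁² + dx₂² + dx₃²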

where dx = (dx₁, dx₂, dx₃) are the differentials of the three spatial dimensions. In Minkowski geometry, there is an extra dimension with coordinate X⁰ derived from time, such that the distance differential fulfills
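ds² = −(dX⁰)² + (dX¹)² + (dX²)² + (dX³)²

(written here in the sign convention in which spacelike intervals are positive, consistent with the statements about proper time and proper distance later in this section; the opposite convention is equally common)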

where dX = (dX⁰, dX¹, dX², dX³) are the differentials of the four spacetime dimensions. This suggests a deep theoretical insight: special relativity is simply a rotational symmetry of our spacetime, analogous to the rotational symmetry of Euclidean space (see Fig. 10-1). [66] Just as Euclidean space uses a Euclidean metric, so spacetime uses a Minkowski metric. Basically, special relativity can be stated as the invariance of any spacetime interval (that is the 4D distance between any two events) when viewed from any inertial reference frame. All equations and effects of special relativity can be derived from this rotational symmetry (the Poincaré group) of Minkowski spacetime.

The actual form of ds above depends on the metric and on the choices for the X⁰ coordinate. To make the time coordinate look like the space coordinates, it can be treated as imaginary: X⁰ = ict (this is called a Wick rotation). According to Misner, Thorne and Wheeler (1971, §2.3), the deeper understanding of both special and general relativity will ultimately come from the study of the Minkowski metric (described below) with X⁰ = ct, rather than from a "disguised" Euclidean metric using ict as the time coordinate.

Some authors use X⁰ = t, with factors of c elsewhere to compensate; for instance, spatial coordinates are divided by c, or factors of c±² are included in the metric tensor. [67] These numerous conventions can be superseded by using natural units where c = 1. Then space and time have equivalent units, and no factors of c appear anywhere.

3D spacetime

If we reduce the spatial dimensions to 2, so that we can represent the physics in a 3D space
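ds² = dx₁² + dx₂² − c²dt²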

we see that the null geodesics lie along a dual-cone (see Fig. 10-2) defined by the equation
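c²dt² = dx₁² + dx₂²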

which is the equation of a circle of radius c dt.

4D spacetime

If we extend this to three spatial dimensions, the null geodesics are the 4-dimensional cone:
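c²dt² = dx₁² + dx₂² + dx₃²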

As illustrated in Fig. 10-3, the null geodesics can be visualized as a set of continuous concentric spheres with radii = c dt.

This null dual-cone represents the "line of sight" of a point in space. That is, when we look at the stars and say "The light from that star which I am receiving is X years old", we are looking down this line of sight: a null geodesic. We are looking at an event a distance d = √(x₁² + x₂² + x₃²) away and a time d/c in the past. For this reason the null dual cone is also known as the 'light cone'. (The point in the lower left of Fig. 10-2 represents the star, the origin represents the observer, and the line represents the null geodesic "line of sight".)

The cone in the −t region is the information that the point is 'receiving', while the cone in the +t section is the information that the point is 'sending'.

The geometry of Minkowski space can be depicted using Minkowski diagrams, which are useful also in understanding many of the thought experiments in special relativity.

Note that, in 4d spacetime, the concept of the center of mass becomes more complicated; see Center of mass (relativistic).

Physics in spacetime

Transformations of physical quantities between reference frames

Above, the Lorentz transformation for the time coordinate and three space coordinates illustrates that they are intertwined. This is true more generally: certain pairs of "timelike" and "spacelike" quantities naturally combine on equal footing under the same Lorentz transformation.

The Lorentz transformation in standard configuration above, that is, for a boost in the x-direction, can be recast into matrix form as follows:
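With β = v/c, the standard boost acts on the column (ct, x, y, z) as

ct′ = γ(ct − βx)
x′ = γ(x − βct)
y′ = y
z′ = z

that is, the matrix Λ has rows (γ, −γβ, 0, 0), (−γβ, γ, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1).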

In Newtonian mechanics, quantities that have magnitude and direction are mathematically described as 3d vectors in Euclidean space, and in general they are parametrized by time. In special relativity, this notion is extended by adding the appropriate timelike quantity to a spacelike vector quantity, and we have 4d vectors, or "four vectors", in Minkowski spacetime. The components of vectors are written using tensor index notation, as this has numerous advantages. The notation makes it clear the equations are manifestly covariant under the Poincaré group, thus bypassing the tedious calculations needed to check this fact. In constructing such equations, we often find that equations previously thought to be unrelated are, in fact, closely connected, being part of the same tensor equation. Recognizing other physical quantities as tensors simplifies their transformation laws. Throughout, upper indices (superscripts) are contravariant indices rather than exponents except when they indicate a square (this should be clear from the context), and lower indices (subscripts) are covariant indices. For simplicity and consistency with the earlier equations, Cartesian coordinates will be used.

The simplest example of a four-vector is the position of an event in spacetime, which constitutes a timelike component ct and spacelike component x = (x, y, z) , in a contravariant position four vector with components:
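X^μ = (X⁰, X¹, X², X³) = (ct, x, y, z)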

where we define X⁰ = ct so that the time coordinate has the same dimension of distance as the spatial coordinates, and space and time are treated on an equal footing. [68] [69] [70] Now the transformation of the contravariant components of the position 4-vector can be compactly written as:
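X′^μ = Λ^μ_ν X^ν

with an implied summation over ν from 0 to 3,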

where the Lorentz factor is:
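γ = 1/√(1 − v²/c²)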

The four-acceleration is the proper time derivative of 4-velocity:
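A^μ = dU^μ/dτ, where the four-velocity is itself U^μ = dX^μ/dτ, the derivative of the position four-vector with respect to the particle's proper time.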

The transformation rules for three-dimensional velocities and accelerations are very awkward; even above, in standard configuration, the velocity equations are quite complicated owing to their non-linearity. On the other hand, the transformation of four-velocity and four-acceleration are simpler by means of the Lorentz transformation matrix.

The four-gradient of a scalar field φ transforms covariantly rather than contravariantly:

which is the transpose of:

only in Cartesian coordinates. It is the covariant derivative that transforms with manifest covariance; in Cartesian coordinates this happens to reduce to the partial derivatives, but not in other coordinates.

More generally, the covariant components of a 4-vector transform according to the inverse Lorentz transformation:
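T′_μ = (Λ⁻¹)^ν_μ T_ν, where Λ⁻¹ is obtained from Λ by replacing v with −v.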

The postulates of special relativity constrain the exact form the Lorentz transformation matrices take.

More generally, most physical quantities are best described as (components of) tensors. So to transform from one frame to another, we use the well-known tensor transformation law [71]
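For instance, a second-order contravariant tensor picks up one factor of Λ per index: T′^αβ = Λ^α_μ Λ^β_ν T^μν (covariant indices transform with the inverse matrix instead).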

An example of a four-dimensional second-order antisymmetric tensor is the relativistic angular momentum, which has six components: three are the classical angular momentum, and the other three are related to the boost of the center of mass of the system. The derivative of the relativistic angular momentum with respect to proper time is the relativistic torque, also a second-order antisymmetric tensor.

The electromagnetic field tensor is another second order antisymmetric tensor field, with six components: three for the electric field and another three for the magnetic field. There is also the stress–energy tensor for the electromagnetic field, namely the electromagnetic stress–energy tensor.

Metric

The metric tensor allows one to define the inner product of two vectors, which in turn allows one to assign a magnitude to the vector. Given the four-dimensional nature of spacetime the Minkowski metric η has components (valid with suitably chosen coordinates) which can be arranged in a 4 × 4 matrix:
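η_μν = diag(−1, 1, 1, 1), that is

( −1  0  0  0 )
(  0  1  0  0 )
(  0  0  1  0 )
(  0  0  0  1 )

in the (−,+,+,+) convention consistent with the signs of intervals used later in this section; the (+,−,−,−) convention is equally common.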

The Poincaré group is the most general group of transformations which preserves the Minkowski metric:
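Λ^T η Λ = η, or in index form η_μν Λ^μ_α Λ^ν_β = η_αβ (this condition constrains the Lorentz part; the Poincaré group adds spacetime translations, which trivially preserve the metric),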

and this is the physical symmetry underlying special relativity.

The metric can be used for raising and lowering indices on vectors and tensors. Invariants can be constructed using the metric; the inner product of a 4-vector T with another 4-vector S is:
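T · S = η_μν T^μ S^ν = −T⁰S⁰ + T¹S¹ + T²S² + T³S³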

Invariant means that it takes the same value in all inertial frames, because it is a scalar (0 rank tensor), and so no Λ appears in its trivial transformation. The magnitude of the 4-vector T is the positive square root of the inner product with itself:
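‖T‖ = √(T · T) (real for spacelike T under the convention used here; for timelike vectors one takes √(−T · T))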

One can extend this idea to tensors of higher order; for a second-order tensor we can form the invariants:
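T^μ_μ,   T^μν T_μν,   T^μν T_νρ T^ρ_μ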

similarly for higher order tensors. Invariant expressions, particularly inner products of 4-vectors with themselves, provide equations that are useful for calculations, because one doesn't need to perform Lorentz transformations to determine the invariants.

Relativistic kinematics and invariance

The coordinate differentials transform also contravariantly:
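dX′^μ = Λ^μ_ν dX^ν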

so the squared length of the differential of the position four-vector dX μ constructed using
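dX² = dX_μ dX^μ = η_μν dX^μ dX^ν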

is an invariant. Notice that when the line element dX² is negative, √(−dX²)/c is the differential of proper time, while when dX² is positive, √(dX²) is the differential of proper distance.

The 4-velocity U μ has an invariant form:
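U^μ U_μ = −c² (in the (−,+,+,+) convention used here; with the opposite signature the right-hand side is +c²),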

which means all velocity four-vectors have a magnitude of c. This is an expression of the fact that there is no such thing as being at coordinate rest in relativity: at the least, you are always moving forward through time. Differentiating the above equation by τ produces:
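d(U^μ U_μ)/dτ = 2 U_μ A^μ = 0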

So in special relativity, the acceleration four-vector and the velocity four-vector are orthogonal.

Relativistic dynamics and invariance

The invariant magnitude of the momentum 4-vector generates the energy–momentum relation:
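E² = (pc)² + (mc²)², equivalently P^μ P_μ = −(mc)² in the sign convention used here.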

We can work out what this invariant is by first arguing that, since it is a scalar, it doesn't matter in which reference frame we calculate it, and then by transforming to a frame where the total momentum is zero.

We see that the rest energy is an independent invariant. A rest energy can be calculated even for particles and systems in motion, by translating to a frame in which momentum is zero.

The rest energy is related to the mass according to the celebrated equation discussed above:
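E_rest = mc²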

The mass of systems measured in their center of momentum frame (where total momentum is zero) is given by the total energy of the system in this frame. It may not be equal to the sum of individual system masses measured in other frames.

To use Newton's third law of motion, both forces must be defined as the rate of change of momentum with respect to the same time coordinate. That is, it requires the 3D force defined above. Unfortunately, there is no tensor in 4D which contains the components of the 3D force vector among its components.

If a particle is not traveling at c, one can transform the 3D force from the particle's co-moving reference frame into the observer's reference frame. This yields a 4-vector called the four-force. It is the rate of change of the above energy momentum four-vector with respect to proper time. The covariant version of the four-force is:
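F_μ = dP_μ/dτ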

In the rest frame of the object, the time component of the four force is zero unless the "invariant mass" of the object is changing (this requires a non-closed system in which energy/mass is being directly added or removed from the object) in which case it is the negative of that rate of change of mass, times c. In general, though, the components of the four force are not equal to the components of the three-force, because the three force is defined by the rate of change of momentum with respect to coordinate time, that is, dp/dt while the four force is defined by the rate of change of momentum with respect to proper time, that is, dp/dτ.

In a continuous medium, the 3D density of force combines with the density of power to form a covariant 4-vector. The spatial part is the result of dividing the force on a small cell (in 3-space) by the volume of that cell. The time component is −1/c times the power transferred to that cell divided by the volume of the cell. This will be used below in the section on electromagnetism.


History of Christian mysticism

Although the essence of mysticism is the sense of contact with the transcendent, mysticism in the history of Christianity should not be understood merely in terms of special ecstatic experiences but as part of a religious process lived out within the Christian community. From this perspective mysticism played a vital part in the early church. Early Christianity was a religion of the spirit that expressed itself in the heightening and enlargement of human consciousness. It is clear from the Synoptic Gospels (e.g., Matthew 11:25–27) that Jesus was thought to have enjoyed a sense of special contact with God. In the primitive church an active part was played by prophets, who were believed to be recipients of a revelation coming directly from the Holy Spirit.

The mystical aspect of early Christianity finds its fullest expression, however, in the letters of Paul and The Gospel According to John. For Paul and John, mystical experience and aspiration are always for union with Christ. It was Paul’s supreme desire to know Christ and to be united with him. The recurring phrase, “in Christ,” implies personal union, a participation in Christ’s death and Resurrection. The Christ with whom Paul is united is not the man Jesus who is known “after the flesh.” He has been exalted and glorified, so that he is one with the Spirit.

Christ-mysticism appears again in The Gospel According to John, particularly in the farewell discourse (chapters 14–16), where Jesus speaks of his impending death and of his return in the Spirit to unite himself with his followers. In the prayer of Jesus in chapter 17 there is a vision of an interpenetrating union of souls in which all who are one with Christ share his perfect union with the Father.

In the early Christian centuries the mystical trend found expression not only in the traditions of Pauline and Johannine Christianity (as in the writings of Ignatius of Antioch and Irenaeus of Lyon) but also in the Gnostics (early Christian heretics who viewed matter as evil and the spirit as good). Scholars still debate the origins of Gnosticism, but most Gnostics thought of themselves as followers of Christ, albeit a Christ who was pure spirit. The religion of Valentinus, who was excommunicated in about AD 150, is a notable example of the mysticism of the Gnostics. He believed that human beings are alienated from God because of their spiritual ignorance; Christ brings them into the gnosis (esoteric revelatory knowledge) that is union with God. Valentinus held that all human beings come from God and that all will in the end return to God. Other Gnostic groups held that there were three types of people—“spiritual,” “psychic,” and “material”—and that only the first two can be saved. The Pistis Sophia (3rd century) is preoccupied with the question of who finally will be saved. Those who are saved must renounce the world completely and follow the pure ethic of love and compassion so that they can be identified with Jesus and become rays of the divine Light.


‘American Exceptionalism’: A Short History

On the campaign trail, Mitt Romney contrasts his vision of American greatness with what he claims is Barack Obama’s proclivity for apologizing for it. The “president doesn’t have the same feelings about American exceptionalism that we do,” Romney has charged. All countries have their own brand of chest-thumping nationalism, but almost none is as patently universal — even messianic — as this belief in America’s special character and role in the world. While the mission may be centuries old, the phrase only recently entered the political lexicon, after it was first uttered by none other than Joseph Stalin. Today the term is experiencing a resurgence in an age of anxiety about American decline.

1630
As the Massachusetts Bay Company sets sail from England to the New World, Puritan lawyer John Winthrop urges his fellow passengers on the Arabella to “be as a city upon a hill,” alluding to a phrase from Jesus’s Sermon on the Mount. The colonists must make New England a model for future settlements, he notes, as the “eyes of all people are upon us.”

1776
In “Common Sense,” revolutionary pamphleteer Thomas Paine describes America as a beacon of liberty for the world. “Freedom hath been hunted round the globe,” he explains. “Asia, and Africa, have long expelled her. Europe regards her like a stranger, and England hath given her warning to depart. O! receive the fugitive, and prepare in time an asylum for mankind.”

1840
Reflecting on his travels in the United States in his seminal work, Democracy in America, French intellectual Alexis de Tocqueville writes that the “position of the Americans” is “quite exceptional, and it may be believed that no democratic people will ever be placed in a similar one.”

1898
“There is but a single specialty with us, only one thing that can be called by the wide name ‘American.’ That is the national devotion to ice-water.… I suppose we do stand alone in having a drink that nobody likes but ourselves.” —Mark Twain

1914
U.S. President Woodrow Wilson infuses Paine’s notion of the United States as a bastion of freedom with missionary zeal, arguing that what makes America unique is its duty to spread liberty abroad. “I want you to take these great engines of force out onto the seas like adventurers enlisted for the elevation of the spirit of the human race,” Wilson tells U.S. Naval Academy graduates. “For that is the only distinction that America has.”

1929-1930
Coining a new term, Soviet leader Joseph Stalin condemns the “heresy of American exceptionalism” while expelling American communist leader Jay Lovestone and his followers from the Communist International for arguing that U.S. capitalism constitutes an exception to Marxism’s universal laws. Within a year, the Communist Party USA has adopted Stalin’s disparaging term. “The storm of the economic crisis in the United States blew down the house of cards of American exceptionalism,” the party declares, gloating about the Great Depression.

1941
Echoing Wilson, magazine publisher Henry Luce urges the United States to enter World War II and exchange isolationism for an “American century” in which it acts as the “powerhouse” of those ideals that are “especially American.”

1950s
A group of American historians — including Daniel Boorstin, Louis Hartz, Richard Hofstadter, and David Potter — argues that the United States forged a “consensus” of liberal values over time that enabled it to sidestep movements such as fascism and socialism. But they question whether this unique national character can be reproduced elsewhere. As Boorstin writes, “nothing could be more un-American than to urge other countries to imitate America.”

1961
President John F. Kennedy suggests that America’s distinctiveness stems from its determination to exemplify and defend freedom all over the world. He invokes Winthrop’s “city upon a hill” and declares: “More than any other people on Earth, we bear burdens and accept risks unprecedented in their size and their duration, not for ourselves alone but for all who wish to be free.”

1975
In a National Affairs essay, “The End of American Exceptionalism,” sociologist Daniel Bell gives voice to growing skepticism in academia about the concept in the wake of the Vietnam War and the Watergate scandal. “Today,” he writes, “the belief in American exceptionalism has vanished with the end of empire, the weakening of power, the loss of faith in the nation’s future.”

1980
Ronald Reagan counters President Jimmy Carter’s rhetoric about a national “crisis of confidence” with paeans to American greatness during the presidential campaign. “I’ve always believed that this blessed land was set apart in a special way,” Reagan later explains.

1989
The final days of the Cold War raise the prospect that the American model could become the norm, not the exception. “What we may be witnessing is not just the end of the Cold War” but the “end of history as such, that is … the universalization of Western liberal democracy as the final form of human government,” political scientist Francis Fukuyama famously proclaims.

“In my mind it was a tall, proud city built on rocks stronger than oceans, wind-swept, God-blessed, and teeming with people of all kinds living in harmony and peace.” —Ronald Reagan

1996
In a speech justifying NATO’s intervention in Bosnia, President Bill Clinton declares that “America remains the indispensable nation” and that “there are times when America, and only America, can make a difference between war and peace, between freedom and repression.”

2000
American exceptionalism becomes a partisan talking point as future George W. Bush speechwriter Marc Thiessen, in a Weekly Standard article, contends that there are two competing visions of internationalism in the 21st century: the “‘global multilateralism’ of the Clinton-Gore Democrats” vs. the “‘American exceptionalism’ of the Reagan-Bush Republicans.”

2004
“Like generations before us, we have a calling from beyond the stars to stand for freedom. This is the everlasting dream of America.” —George W. Bush

2007-2008
Amid skepticism about America’s global leadership, fueled by a disastrous war in Iraq and the global financial crisis, Democrat Barack Obama runs against Bush’s muscular “Freedom Agenda” in the election to succeed him. “I believe in American exceptionalism,” Obama says, but not one based on “our military prowess or our economic dominance.” Democratic pollster Mark Penn advises Hillary Clinton to target Obama’s “lack of American roots” in the primary by “explicitly own[ing] ‘American'” in her campaign.

2009
As critical scholarship — such as Godfrey Hodgson’s The Myth of American Exceptionalism — proliferates, Obama becomes the first sitting U.S. president to use the phrase “American exceptionalism” publicly. “I suspect that the Brits believe in British exceptionalism and the Greeks believe in Greek exceptionalism” — a line later much quoted by Republicans eager to prove his disdain for American uniqueness.

2010
80 percent of Americans believe the United States “has a unique character that makes it the greatest country in the world.” But only 58 percent think Obama agrees. —USA Today/Gallup poll

2011-2012
With the presidential race heating up, the phrase gets reduced to a shorthand for “who loves America more.” After making the “case for American greatness” in his 2010 book No Apology, GOP candidate Mitt Romney claims Obama believes “America’s just another nation with a flag.” The president, for his part, invokes Bill Clinton’s “indispensable nation” in his State of the Union address and later declares, in response to Republican critics, “My entire career has been a testimony to American exceptionalism.” If Stalin only knew what he started.

Uri Friedman is deputy managing editor at Foreign Policy. Before joining FP, he reported for the Christian Science Monitor, worked on corporate strategy for Atlantic Media, helped launch the Atlantic Wire, and covered international affairs for the site. A proud native of Philadelphia, Pennsylvania, he studied European history at the University of Pennsylvania and has lived in Barcelona, Spain, and Geneva, Switzerland.


Weapons

". Stygius, the Blade of the Underworld, must have been among the finest weapons ever wielded, back when it was whole. "


The blade's default attack is a three-swing combo mixing wide and directional slashes. The special is a short jump followed by a small damaging burst around you, which leaves you stationary for a moment.

". It must have been a sight when Lord Hades wielded Varatha the Eternal Spear versus the Titans, driving back those fiends into the depths, together with the help of his Olympian brothers and sisters. "


Repeated long-range stab attacks; holding the attack charges a spin attack that deals high damage in a wide radius. The special throws the spear, which damages enemies along its path until it stops. Activating the special again recalls the spear, which deals damage on the way back.

". Aegis, the Shield of Chaos, predecessor to the very Aegis wielded by Lord Zeus and by Athena, his most favored daughter. the Lord of Thunder defended his brothers and sisters using that very shield then, together, they conspired to drive the Titans back into the lowest reaches of the Underworld. "


The main attack is a single swing that hits in an arc and knocks enemies back. Holding the attack button blocks damage from the front while charging the "Bull Rush"; releasing it performs a forward shield bash that damages enemies in its path. The special throws the shield, which bounces between enemies and objects before returning.

". Coronacht, the so-called Heart-Seeker, is certainly the finest bow ever conceived, and wielded once by none other than Mistress Hera, who stood by side with Zeus, on better terms back then, as they drove back the Titans under a storm of arrows and thunder. "


Long-range attacks that pick off enemies at a distance. The main attack can be charged to increase range and damage, with bonus damage for releasing at just the right moment. The special sprays arrows in a cone in front of you, each dealing 10 damage.

". What is a weapon if not the extension of one's will to survive, to destroy? The ancient cyclopean forge-masters who created the Infernal Arms in accordance with the Fates' design must have understood this when they delivered the singular Twin Fists of Malphon in secret to the gods. "


Fast, repeated short-range combo attacks with the fists in close-quarters combat. The special is an uppercut that hits twice. This weapon is unique in being the only Infernal Arm with a dash-special: if the special is used while dashing, the uppercut comes out faster but hits only once.

". Least known among the gods who stood together to depose the Titans is the Lady Hestia, reclusive goddess of the hearth, and one-time wielder of Exagryph, the Rail of Adamant an artifact of metal and of flame so dreadful that the gods themselves abandoned it once their fell work was done. "


Rapid fire that is manual or automatic depending on whether Attack is pressed or held; the weapon must be reloaded once its ammunition is spent. The special launches a grenade at the target area, which arrives after a short delay and deals damage in an area when it lands.


International Human Rights Treaty

In 2006, the United Nations General Assembly adopted the Convention on the Rights of Persons with Disabilities, the first major human rights treaty of the 21st century. The U.S. Senate has yet to ratify it.

Several years later, in 2009, Congress passed the Matthew Shepard and James Byrd Jr. Hate Crimes Prevention Act, extending federal hate-crime law to cover crimes motivated by a person's disability.

Enormous strides have been made through the commitment of individuals with disabilities and their families, but much remains to be done. Stereotypes continue to permeate our society. Individuals with disabilities suffer a far higher incidence of bullying and other forms of abuse than the general population, unemployment among them remains high, and too many people who wish to live in the community remain institutionalized.

Achieving full inclusion of individuals with disabilities in the daily activities of society will require continued advocacy, along with protection of the hard-won services and supports available through government programs.

