Hybrid Automobiles


In technology, as in biology, a hybrid is the result of ‘‘cross-fertilization,’’ in this case referring to the combination of technologies to produce a similar yet slightly different entity. Recent research in the history of automotive technology shows that hybridization has been much more common than previously thought. Thus, the automobile itself can be viewed as a hybrid with a century-long history of crossover phenomena from electrical engineering to mechanical engineering that resulted in an ‘‘electrified gasoline car.’’
The term hybrid, however, is generally reserved for combinations of propulsion systems in automobiles. Most common in the history of the automobile is the thermoelectric hybrid, mainly a combination of the internal combustion engine (gasoline or diesel) and an electric propulsion system (electric motor, battery set). Thermomechanical hybrids are possible when a combustion engine is combined with a flywheel system in which part of the kinetic energy during braking can be stored and released the moment this energy is needed, for instance for acceleration from standstill. Similarly, thermohydraulic hybrids combine combustion engines with a hydraulic energy storage system (a pump and a hydraulic accumulator).

Electroelectric hybrids are also known; in these cases the actual propulsion is done by one electric motor, but the energy supply is a combination of battery storage and a supply from overhead trolley wires. Combinations of trolley systems and mechanical flywheel storage systems have also been built. Viewed from this perspective, the automobile as we know it at the end of the twentieth century is but one case among many possibilities.

Thermoelectric hybrids are nearly as old as automotive technology. Before 1900, the Belgian automobile producer Henri Pieper developed a car that was equipped with an electromagnetically controlled carburetor. His patents were later bought by car manufacturers like Siemens-Schuckert (Germany), Daimler (Coventry, U.K.) and the French Société Générale d’Automobiles électro-Mécaniques (GEM). In 1908 the latter company proposed a Pieper-like hybrid called ‘‘Automobile Synthesis.’’ At about the same time, German battery producer AFA (now Varta AG) bought a Pieper to develop a special battery for hybrid car applications. Another famous hybrid vehicle builder was the French electrical engineer Louis Kriéger. He started hybrid development in 1902 and produced a car he drove during the rally from Paris to Florence a year later. In 1904 his hybrid was the sensation of the Paris automobile show. In 1906 he conceived a drive train based on an electric propulsion system and a gas turbine, and in the same year he developed a hybrid taxicab, 100 of which were intended to be built.

In Austria, Lohner built 52 hybrids between 1898 and 1910, designed by electrical engineer Ferdinand Porsche. These cars were later sold by Daimler, Germany, which founded a separate company for this purpose, Société Mercedes Mixte. In Germany several local fire companies built thermoelectric fire engines, some of these a combination of an electric motor with batteries and a steam engine. In this configuration, the electric drive system was meant for quick starting and for use during the first few kilometers of the trip. After ten minutes, when boiler pressure had built up, the steam engine took over to propel the truck to the fire location. All in all, however, no more than a hundred or so hybrids were sold in Europe before World War I. In the U.S., there was even less hybrid construction activity during this period, the most famous example being the Woods Dual Power, which was produced during the war.

Hybrids were supposed to combine the advantages of two systems while avoiding their disadvantages. For instance, because the thermal element in the hybrid system was often used (in combination with the electric motor, which for this purpose had to be repolarized to become an electricity generator) to supply a part of the stored electricity, the battery set in a hybrid tended to be smaller. It was lighter than that in a full-blown electric car, where all the energy for a trip had to be stored in the batteries before the start of the trip. In most cases, however, the combination of systems led to a more complex and expensive construction that jeopardized state-of-the-art reliability standards, and to complicated control problems, which would only be overcome with the emergence of postwar automotive electronics. Also, despite the lighter battery, the total drive train became heavier. For this reason hybrid alternatives were especially popular among producers of heavy vehicles such as buses and trucks, in which the relative importance of the drive train weight is less. Well-known examples in this respect are the brands Fisher (U.S.), Thornycroft (U.K.), and Faun (Germany).

The popularity of hybrid propulsion systems among engineers was not only, and according to some analysts not primarily, the result of technical considerations. During the first quarter century of automotive history, when the struggle between proponents of steam, electric, and gasoline engine propulsion was not yet over, hybridization often functioned as a strategic and social compromise as well. This was very clear in the case of the German fire engine community before World War I. A fierce controversy raged over the apparent unreliability of the gasoline combustion engine, but the proponents of electric drive trains, who boasted that electric drive trains guaranteed quick starting, high acceleration, high reliability, and no danger of fuel explosions in the neighborhood of fires, were not strong enough to monopolize the field. Several fire officials then opted for a hybrid drive, combining the advantages of electric propulsion with the advantages of the combustion engine (primarily a greater driving range), but they encountered heavy resistance from a combination of other fire officials and the established automobile industry. Nevertheless, in 1910 the German fire engine fleet included about 15 heavy hybrids.

As with the electric alternative, hybrid automobiles experienced a revival during the last quarter of the twentieth century. This resulted in at least one commercially available hybrid automobile, the Toyota Prius. During this period, the issue of energy consumption played a role as well. Heavily subsidized by local, regional, and federal governments in Europe, Japan, and the U.S., hybrid projects used new light materials such as magnesium, plastics, and carbon fibers, as well as sophisticated electronic control systems (borrowed from related industries such as aerospace and information and communication technology), to enable very energy-efficient solutions, initially to the surprise of many engineers. For example, a Dutch–Italian hybrid bus project resulted in exhaust emissions that were barely measurable and demonstrated very low energy consumption rates. Similar results in other experimental areas have been possible because of sophisticated combinations of small engines, flywheel systems with continuously variable transmissions, and even engine concepts that were considered obsolete, such as Stirling engines, micro gas turbines, and two-stroke engines. By now, the field of possible alternatives is so vast that several classification schemes have been proposed. The most common classification is that which distinguishes between ‘‘series hybrids,’’ where the electric element is positioned between the thermal element and the drive wheels, and ‘‘parallel hybrids,’’ where both the thermal and the electric element can be used separately to propel the vehicle. At the beginning of the twenty-first century, the ‘‘mild hybrid’’ was the latest development, in which the electric system is so small that it resembles the electric starter motor. If this development materializes, automotive history will have come full circle, producing a true compromise of an electrified gasoline car.

Parallel Worlds



A world neighbouring the world of experience, but displaced from it in such a fashion as to be imperceptible and inaccessible in normal circumstances. In the days when people routinely thought of the world as a plane, it seemed reasonable to think of parallel worlds above and below it, the former often being identified with the realm of the gods and the latter with the realm of the dead. In Greek mythology both realms were equipped with portals, Mount Olympus serving as a conduit between Earth and heaven while various caverns gave admittance to the Underworld. Both notions are reflected in cosmological ideas that persisted throughout the Middle Ages and into the Renaissance, although notions of divine reward and punishment often redistributed the dead between the two realms; they were preserved in various descendant schools of *occult science, in which the notion of ‘‘higher planes’’—especially the ‘‘astral plane’’—retains considerable imaginative authority. The Underworld is associated with many western European folkloristic accounts of supernatural beings, cropping up in many of the tales that served as ancestors to modern fairy tales, but is confused with conceptualisations in which such beings live invisibly alongside human society, either as animistic ‘‘elemental spirits’’ or in variously veiled enclaves only partially or periodically accessible to humans. The latter version became the standard strategy of literary fairy tales, laying imaginative groundwork for the extrapolation of the notion that there might be an array of parallel universes laterally displaced from ours in a fourth dimension. The latter notion was popularised at the end of the nineteenth century by such writers as C. H. *Hinton and dramatised in such stories as H. G. Wells’ ‘‘The Plattner Story’’ (1896), William Hope Hodgson’s The House on the Borderland (1908) and The Ghost Pirates (1909), and Gerald Grogan’s A Drop in Infinity (1915). Such stories often retain echoes of the mythical thesis, placing the shades of the dead in a parallel world, while The House on the Borderland transplanted a notion commonly associated with dream fantasy, using the landscapes of a parallel world to map and symbolically display the psyche of its protagonist.

The idea of parallel worlds displaced in a fourth spatial dimension underwent a spectacular boom in twentieth-century fiction. It was established as a useful framework for the science-fictional accommodation of alternative histories in the 1930s, encouraged by such exercises in speculative nonfiction as J. W. Dunne’s attempts to explain supposedly prophetic dreams in An Experiment with Time (1927), which led him to construct an ambitious account of The Serial Universe (1934). It was accommodated into the pulp magazines before the advent of specialist science fiction pulps in such stories as A. Merritt’s classic portal fantasy ‘‘The Moon Pool’’ (1918), Austin Hall and Homer Eon Flint’s The Blind Spot (1921; book, 1951), and Philip M. Fisher’s ‘‘Worlds Within Worlds’’ (1922), and was thus established as a standard generic motif, given a more scientific gloss in such versions as Murray Leinster’s ‘‘The Fifth-Dimension Catapult’’ (1931) and ‘‘The Fifth-Dimension Tube’’ (1933).
The notion of Faerie as a parallel world made similar progress in twentieth-century fantasy fiction, generalised in J. R. R. Tolkien’s notion of fantasy settings as ‘‘Secondary Worlds’’. Secondary Worlds are usually conceivable as parallel worlds even when the inclusion of connective portals does not make the relationship explicit. The narrative utility of fantasies featuring such portals is obvious, in that they allow characters to step from the experienced world into a Secondary one, arriving as naive and inquisitive strangers whose own learning process educates the reader; Farah Mendlesohn’s ‘‘Towards a Taxonomy of Fantasy’’ (2003) identified portal fantasy as a major sector of modern fantastic fiction, intermediate in its narrative technique between immersive fantasy and intrusive fantasy. Although much science-fictional portal fantasy deals with shortcuts through space and trips through time rather than shifts into parallel worlds, the development of parallel worlds in genre science fiction made a very significant contribution to the broader genre of portal fantasy, evolving a new jargon of ‘‘dimensional doorways’’ and ‘‘gates’’ that helped to add psychological plausibility to their fantasy counterparts.

Science-fictional portals retain the same essential magicality as well as the same narrative function as portals to Faerie and its analogues, and such devices became—very appropriately—a key motif of the hybrid subgenre of science-fantasy. They facilitated genre crossovers with the same ease that they facilitated transfer between primary and secondary worlds, as illustrated by such archetypal hybrids as A. Merritt’s The Face in the Abyss (1923–1930; rev. book, 1931), C. L. Moore’s The Dark World (1946; book, 1965; by-lined Henry Kuttner) and Andre Norton’s Witch World (1963), and the chimerical crossovers that became typical of Astounding Science Fiction’s fantasy companion Unknown, whose key templates were established by L. Sprague de Camp. The occult tradition of parallel worlds fiction, which had latched on to the notion of the fourth dimension in the late nineteenth century, also gave rise to a hybrid subgenre, carried forward by such works as John Buchan’s ‘‘Space’’ (1911) and Algernon Blackwood’s ‘‘The Pikestaffe Case’’ (1924). This too was imported into the pulp magazines, most conspicuously by H. P. Lovecraft—whose deployment of the relevant jargon was echoed by his many disciples, including August Derleth, Frank Belknap Long, and Clark Ashton Smith. Some of these writers brought a new ingenuity into their developments of the idea, especially Smith, whose ‘‘City of the Singing Flame’’ (1931) introduced Merrittesque portal fantasy into the science fiction pulps, and whose ‘‘The Dimension of Chance’’ (1932) attempts to describe a parallel world with variant physical laws.

Early pulp science fiction writers initially mined the melodramatic potential of parallel worlds in a brutally straightforward fashion, in such accounts of monstrous invasion as Edmond Hamilton’s ‘‘Locked Worlds’’ (1929) and Donald Wandrei’s ‘‘The Monster from Nowhere’’ (1935) and such accounts of heroic expeditions as Clifford D. Simak’s ‘‘Hellhounds of the Cosmos’’ (1932) and E. E. Smith’s Skylark of Valeron (1934; book, 1949). Its uses became more sophisticated in the 1940s, in such stories as Harry Walton’s ‘‘Housing Shortage’’ (1947), but it enjoyed a spectacular leap forward in the 1950s and 1960s in the context of what eventually came to be called the ‘‘multiverse’’: an infinitely extendable manifold of alternative histories.

The notion of the multiverse is implicit in such early pulp science fiction stories as Harl Vincent’s ‘‘Wanderer of Infinity’’ (1933) and ‘‘The Plane Compass’’ (1935)—the latter refers to a ‘‘superuniverse’’—and became more explicit in such time police stories as Fritz Leiber’s Destiny Times Three (1945) and Sam Merwin’s House of Many Worlds (1951) before Michael Moorcock pasted the new label on it, and demonstrated its utility as a framing concept linking the very various worlds described within his texts into an inherently chimerical superstructure. Clifford D. Simak’s Ring Around the Sun (1953) is an early celebration of the extrapolation of the idea of parallel worlds to embrace an infinite series of Earth clones—all empty of humankind in this version, and hence available for *colonisation. Simak went on to examine the possibilities of interparallel trade in ‘‘Dusty Zebra’’ (1954) and ‘‘The Big Front Yard’’ (1958). Traditional notions of parallel existence continued to echo in science fiction—as the notion of invisible coexistence did in Gordon R. Dickson’s ‘‘Perfectly Adjusted’’ (1955; exp. book 1961 as Delusion World) and transfigurations of dream fantasy in Christopher Priest’s Dream Archipelago series (1976–1999)—but the more interesting developments of the notion involved its extension in new philosophical directions. These included the extensive exploration of parallel selves in such existential fantasies as Adolfo Bioy Casares’ ‘‘La trame céleste’’ (1948; trans. as ‘‘The Celestial Plot’’), Robert Donald Locke’s ‘‘Next Door, Next World’’ (1961), Brian W. Aldiss’ Report on Probability A (1968), Larry Niven’s ‘‘All the Myriad Ways’’ (1969), and Graham Dunstan Martin’s Time-Slip (1986). Other existential fantasies employing parallel worlds include Richard Cowper’s Breakthrough (1967), Robert A. Heinlein’s The Number of the Beast (1980), and Kevin J. Anderson’s ‘‘The Bistro of Alternate Realities’’ (2004), as do such tales of transuniversal tourism as Robert Silverberg’s ‘‘Trips’’ (1974), Robert Reed’s Down the Bright Way (1991), and Alexander Jablokov’s ‘‘At the Cross-Time Jaunter’s Ball’’ (1987) and ‘‘Many Mansions’’ (1988). One significant narrative advantage of the use of parallel worlds is that it cuts out the necessity for elaborate modes of *transportation between fictional constructions. Faster-than-light travel is no less arbitrary a facilitating device than interdimensional portals, as is evident in the synthesis of the two kinds of portal in the ‘‘stargate’’, but the idea of a galactic community did retain an imaginative advantage by virtue of its resonance with the majesty of the night sky: the ‘‘higher’’ of the two original parallel worlds.

For much of the twentieth century, the idea of parallel worlds was regarded by scientists as an amusing corollary of mathematical fancy, but it became increasingly significant in theoretical physics as atomic theory and quantum mechanics became increasingly complicated, eventually acquiring a certain respectability when it was co-opted in 1957 by Hugh Everett and John Wheeler as the ‘‘many worlds’’ interpretation of quantum mechanical uncertainty. The number of dimensions theoretically required to account for the exotic behaviour of subatomic particles increased dramatically with the advent of string theory, and the notion of parallel universes became a key element of some versions of inflationary cosmology. Parallel worlds stories illuminated by ideas drawn from these developments in theoretical physics include Isaac Asimov’s The Gods Themselves (1972), Bob Shaw’s A Wreath of Stars (1976), Frederik Pohl’s The Coming of the Quantum Cats (1986) and The Singers of Time (1991; with Jack Williamson), and Stephen Baxter’s Manifold trilogy (1999–2002). This is, however, one instance in which fiction has conspicuously failed to keep imaginative pace with the theory. One of the originators of string theory, Michio Kaku, became an outspoken advocate of the notion that the real existence of parallel worlds is no mere metaphysical hypothesis, but can be proven, providing a definitive summary of the issue in Parallel Worlds (2005). The inflationary version of the many worlds hypothesis was given an added twist by the proposition that there must be an ongoing process of ‘‘natural selection’’ favouring the proliferation of those universes that are most hospitable to the formation of new sub-universes, and that this intra-multiversal evolutionary process might be responsible for the implication of intelligent design inherent in the cosmological anthropic principle. Scientific American devoted a special issue to such questions in May 2003. Lisa Randall’s Warped Passages: Unraveling the Universe’s Hidden Dimensions (2005) calls individual universes ‘‘branes’’ (short for membranes) and the multiverse ‘‘the bulk’’.

Soul


Medieval and Renaissance scholars understood anima (soul) as the entity whose presence made a thing alive. Following Aristotle (384–322 B.C.E.), they believed that plants and animals as well as humans possessed souls but that only the human soul survived death. United with a properly prepared body, the human soul carried out vegetative and sensitive functions. In the view of most Aristotelians down to the Renaissance, the soul did not require a body for intellectual functions. The mechanical philosophers of the seventeenth century, while not denying the existence of the human soul, argued that organs alone were sufficient for vegetative and sensitive functions. A comparison of the mechanist theories of René Descartes (1596–1650) with the vitalist theories of William Harvey (1578–1657) in physiology and embryology illustrates how early attempts to banish soul from the science of life foundered upon the variety and complexity of vital functions.

In De motu cordis (On the Motion of the Heart, 1628), Harvey showed, contrary to the prevailing Galenic physiology, that blood returned to the heart through the veins and that systole was the active phase of heart motion. Although he likened the heart’s motion to that of a pump, Harvey was no mechanical philosopher. He believed that the blood was the seat of the soul and that the heart restored and perfected the blood upon its return from the extremities before pumping it out again. Descartes readily accepted the circulation of the blood but denied that the heart possessed any “unknown or strange faculties” for the restoration of the blood. He claimed that the heat of the heart was sufficient to explain not only the restoration of the blood but cardiac motion as well. Where the vitalist Harvey could readily accept an active systole, the mechanist Descartes found an active diastole easier to accommodate. Descartes dismissed Harvey’s assertion of an active systole and claimed, instead, that drops of blood entered the ventricles, were vaporized by cardiac heat, distended the ventricles, and so achieved enough pressure to force open the valves and enter the arteries. Unable to explain active systole in a heart deprived of the soul’s vital powers, Descartes returned to the theory of active diastole, which Harvey had already shown was false.

In De generatione animalium (1651), Harvey, relying chiefly on the examination of chick eggs at different stages of development, proposed that fetal development took place by epigenesis, by the sequential derivation of parts from a principal particle that, for vertebrates, was the blood. Harvey believed that the blood—the first material to emerge from the homogeneous mass of the egg—became the seat of the soul and, as the source of animal heat and vital spirits, guided all subsequent differentiation. Harvey’s willingness to attribute epigenesis to the soul rather than to mechanical processes allowed him to avoid the absurd consequences of preformation.

Timekeeping


Mankind first developed a sense of time from observations of nature. For the short term, people observed the movement of heavenly bodies—the sun and moon held particular importance, heralding the seasons and the months. For the long term, birth and death events—of themselves and their livestock—marked the passage of time. One of the earliest inventions was the astrolabe, which astronomers used to track stars and planets. The first such instrument may have been made in the second century BC by the greatest astronomer of ancient times, the Greek Hipparchus, and was brought to perfection by the Arabs.

Early artificial means of timekeeping to provide an estimate of the hour were all analogue in nature, whether passive, like the sundial, or dynamic, like the sandglass or water-clock, which measured time by rate of flow. The sundial probably began with a stick thrust into the ground; the position of its shadow corresponded to the hour of the day. Very elaborate sundials have been constructed which compensate for the sun’s relative position during the year, and ingenious pocket versions have been popular from time to time. However, all such instruments are worthless if the day is cloudy, and after the sun goes down. Therefore, particularly for stargazers, independent means of time estimation were important. The simple sandglass, in which sand is made to run through a small opening, was adequate only for short durations. However, running water can power a mechanism indefinitely. The greatest water-clock ever made was a building-sized astronomical device, constructed by Su Sung in China in 1094, to simulate the movements of the sun, the moon, and the principal stars.

Chinese philosophers thought that because water flows with perfect evenness, it is the best basis for timekeeping. However, in this belief they were wrong; the secret of accurate timekeeping is to generate and count regular beats, which is a digital rather than an analogue process. This may be surprising, and not just to the ancients in China, because except for intra-atomic events time can be regarded as a continuous phenomenon.
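To make the beat-counting idea concrete, here is a minimal illustrative sketch in Python (not drawn from the source; the 32,768 Hz quartz rate is simply a familiar modern example): elapsed time is obtained by counting the ticks of a regular oscillator and dividing by its rate.

    # Toy sketch of digital timekeeping: count regular beats, then divide by the
    # beat rate. The quartz frequency below is an assumed modern example.
    def elapsed_seconds(tick_count, ticks_per_second):
        return tick_count / ticks_per_second

    # A quartz watch crystal beats 32,768 times per second; counting 2,949,120
    # ticks therefore corresponds to 90 seconds.
    print(elapsed_seconds(32768 * 90, 32768))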

Monorails


It was during the period before the First World War that monorails first caught public attention, although as early as 1824, H. R. Palmer had proposed a monorail in which divided vehicles would hang down either side of a single rail supported on trestles. An experimental horse-powered line was built, without lasting effect. About 1880, however, the French engineer C. F. M. T. Lartigue built about 190km (120 miles) of similar lines in North Africa, along which wagons that hung down either side of the rail like panniers were hauled by horses or mules. The system was demonstrated in London in 1886, but with a steam locomotive designed by Mallet: it had a pair of vertical boilers and two grooved wheels. This led to the construction of the Listowel & Ballybunion Railway in Ireland, a 15km (9 mile) Lartigue monorail that was cheap to build, opened in 1888, steam worked, and successful in operation until 1924.

The managing director of the Lartigue Railway Construction Company was F. B. Behr, who developed a high-speed electrically powered monorail, demonstrated in Belgium at 132kph (82mph) in 1897. He promoted a company which was authorized in 1901 to build such a line between Liverpool and Manchester, upon which the cars were to travel at 175kph (109mph); but the Board of Trade was hesitant about their braking abilities, and capital for construction could not be raised. It was at this period, however, that the Wuppertal Schwebebahn was built in Germany, an overhead monorail with electrically powered cars suspended from it. The first section was opened to traffic in 1901; the line eventually extended to some 13km (8 miles) and continues to operate successfully to the present day. In 1903, Louis Brennan patented a monorail system in which the cars would run on a single rail and use gyroscopes to maintain their stability; this was built and demonstrated in 1909 with a petrol-electric vehicle, but despite the attraction of cheap construction it was never put into commercial use. There have been many subsequent proposals for monorails, some of which have been built. The Bennie monorail, in which a suspended car was driven by an airscrew, was demonstrated in the 1920s; the most notable of many systems proposed since the Second World War has been the Alweg system, in which the track is a hollow concrete beam supported on concrete pylons. It is not strictly a monorail, for a narrow track on the top of the beam is used for the carrying wheels of the vehicles, while additional wheels bearing on the edges of the beam cater for side thrust. With the exception of the Wuppertal line, monorails and the like have seen little use in public service, and have always had greater success in exciting the public imagination.

Probes to the Moon


While most of the publicity and glory of lunar exploration has been given to the manned Apollo missions, it was the unmanned probes of the early and mid-1960s that paved the way for these missions. The first four American attempts at launching a lunar probe were unsuccessful; then, on 12 September 1959, the Soviet Union launched the Luna 2 probe, which impacted 800km (500 miles) north of the visual centre of the moon. It thus became the first man-made body to reach a celestial object. Very soon after Luna 2, the Russians again achieved a space ‘first’ when Luna 3 photographed the hidden far side of the moon. After these early days, the pace of lunar probe launches accelerated. The American Ranger series of spacecraft was intended to photograph the lunar surface in advance of the Apollo landings. The first six Ranger missions were failures, but Ranger 7 (launched 28 July 1964) sent back more than 4000 high-resolution photographs before impacting in the Sea of Clouds. Two later Rangers returned more than 13,000 images between them.

In 1963 the Russians were planning for a lunar soft landing. The first attempts were unsuccessful, but Luna 9 finally succeeded in 1966, and the spacecraft returned the historic first pictures from the moon’s surface. The rapid sequence of Russian lunar launches leading up to Luna 9 was a direct response to its American competitor, the Surveyor spacecraft. Surveyor 1 soft-landed on the moon barely four months after Luna 9, returning 11,000 pictures over a six-month period. The Surveyor craft were more sophisticated than the Luna vehicles. Further Surveyor landings examined the surface in regions representative of Apollo landing sites. At the same time as the Surveyor craft were landing on the moon, the Americans were launching Lunar Orbiter spacecraft aimed at returning very high resolution photographs of the lunar surface.

During 1969, while all the American efforts were directed towards the Apollo programme, the Russians were launching more lunar craft in a bold attempt to soft-land and return to earth with lunar soil samples. This ambitious programme failed to pre-empt the Apollo 11 landing, but in 1970 Luna 16 did achieve the goal of returning a sample to earth. The Russians never sent men to the moon, concentrating instead on robot explorers. Luna 21 carried a rover vehicle (called Lunokhod) which for four months roamed over 37,000 metres (23 miles) on the surface under command from ground control.

Beta Testing


Beta Testing, in computer science, the formal process of soliciting feedback on software that is still under development. In a beta test, software is sent to select potential customers and influential end users (known as beta sites), who test its functionality and determine whether any operational or utilization errors (bugs) still exist in the program. Beta testing is usually one of the last steps a software developer takes before releasing the product to market; however, if the beta sites indicate that the software has operational difficulties or an extraordinary number of bugs, it is common for the developer to conduct another beta test. A beta test usually includes a draft version of a product’s documentation, which is reviewed along with the software.

Chipset


In personal computers a chipset is a group of integrated circuits that together perform a particular function. System purchasers generally think in terms of the processor itself (such as a Pentium III, Pentium 4, or competitive chips from AMD or Cyrix). However, what they are really buying is a system chipset that includes the microprocessor itself and often a memory cache (which may be part of the microprocessor or a separate chip—see cache) as well as the chips that control the memory bus (which connects the processor to the main memory on the motherboard). The overall performance of the system depends not just on the processor’s architecture (including data width, instruction set, and use of instruction pipelines) but also on the type and size of the cache memory, the type of memory bus (such as RDRAM, or “Rambus,” versus SDRAM), and the speed with which the processor can move data to and from memory.

In addition to the system chipset, other chipsets on the motherboard are used to support functions such as graphics (the AGP, or Accelerated Graphics Port, for example), drive connection (the EIDE controller), communication with external devices, and connections to expansion cards (the PCI bus). At the end of the 1990s, the PC marketplace had chipsets based on two competing architectures. Intel, which originally developed an architecture called Socket 7, has switched to the more complex Slot 1 architecture, which is most effective for multiprocessor operation and offers the advantage of including a separate bus for accessing the cache memory. Meanwhile, Intel’s main competitor, AMD, has enhanced Socket 7 into “Super Socket 7” and is offering faster bus speeds. On the horizon may be completely new architectures. In choosing a system, consumers are locked into their choice because the microprocessor pin sockets used for each chipset architecture are different.

Optical Character Recognition Devices


An optical character recognition device, often abbreviated as OCR, is able to recognize text that is printed in a specific type font. Early OCR equipment could only read one typeface in dot matrix form. Scanners are members of the OCR family that can read almost any type font; their accuracy depends in large part on the text-recognition software used. The device converts light, an analog (continuous) waveform, into digital binary bits of zero and one [0,1], a discrete waveform. To accomplish this, scanners use electronic components such as charge-coupled devices (CCDs), which are light sensitive when electrically charged, or photomultiplier tubes (PMTs), light-sensitive tubes that can detect even low-intensity light by amplifying it. PMTs are usually associated with drum scanners. Some examples of scanners are as follows.

The Avengers


In Movie Theaters:
May 4, 2012 

Directed by:
Joss Whedon

Starring: Robert Downey Jr. ... Tony Stark / Iron Man
Scarlett Johansson ... Natasha Romanoff / Black Widow
Samuel L. Jackson ... Nick Fury
Jeremy Renner ... Clint Barton / Hawkeye
Chris Evans ... Steve Rogers / Captain America
Chris Hemsworth ... Thor

Distributed by:
Walt Disney Pictures

Genres:
Action Adventure 3D Shot-In-3D Superhero
Synopsis:
The Avengers will bring together the super hero team of Marvel Comics characters for the first time ever, including Iron Man, Captain America, Thor, The Hulk and more, as they are forced to band together to battle the biggest foe they’ve ever faced.

Additional Notes:
Based on the Marvel comic series that debuted September 1963.

G.I. Joe: Retaliation


In Movie Theaters:
June 29, 2012 Nationwide

Directed by:
Jon Chu

Starring: Channing Tatum ... Duke Hauser
Byung-hun Lee ... Storm Shadow
Dwayne Johnson ... Roadblock
Elodie Yung ... Jinx
D.J. Cotrona ... Flint
Ray Park ... Snake Eyes

Distributed by:
Paramount Pictures

Genres:
Action Adventure Sequel 

Synopsis:
After grossing nearly $300 million worldwide, it's no surprise that "G.I. Joe: The Rise of Cobra" is getting a sequel. No plot details have been announced.

Madagascar 3: Europe's Most Wanted


In Movie Theaters:
June 8, 2012 Nationwide

Directed by:
Eric Darnell

Starring:
Ben Stiller ... Alex (voice)
Chris Rock ... Marty (voice)
David Schwimmer ... Melman (voice)
Jada Pinkett-Smith ... Gloria (voice)
Sacha Baron Cohen
Frances McDormand

Distributed by:
Paramount Pictures

Genres:
Comedy Action Adventure Family Animation 3D Sequel

Synopsis:
Alex the Lion, Marty the Zebra, Gloria the Hippo, and Melman the Giraffe are still fighting to get home to their beloved Big Apple; King Julien, Maurice and the Penguins are along for the adventure. This time the road takes them through Europe where they find the perfect cover: a traveling circus, which they reinvent Madagascar style!

Men in Black III


In Movie Theaters:
May 25, 2012 in 3,000 theaters

Directed by:
Barry Sonnenfeld

Starring: Will Smith ... Agent J
Tommy Lee Jones ... Agent K
Josh Brolin ... Young Agent K
Jemaine Clement ... Boris (villain)
Rip Torn ... Zed
Emma Thompson ... Agent Oh

Distributed by:
Sony Pictures

Genres:
Action Fantasy Comedy Sci-Fi IMAX 3D Shot-In-3D Sequel

Synopsis:
The MIB duo of Agent Jay (Will Smith) and Agent Kay (Tommy Lee Jones) are back in action. When the world is threatened by an evil alien, Agent Jay travels back in time to 1969, where he teams up with the younger Agent Kay to stop an evil villain named Boris (Jemaine Clement) from destroying the world in the future. Emma Thompson will play take-charge MIB operative Agent Oh, who is monitoring a prison breakout.

Additional Notes:
The first "Men in Black" was made for $90 million and went on to earn around $590 million in worldwide ticket sales. The sequel, Men in Black II, was released five years later, was made for $140 million and earned $442 million worldwide.

Wrath of the Titans


In Movie Theaters:
March 30, 2012 Nationwide

Directed by:
Jonathan Liebesman

Starring: Sam Worthington ... Perseus
Ralph Fiennes ... Hades
Liam Neeson ... Zeus
Edgar Ramirez ... Ares
Toby Kebbell ... Agenor
Bill Nighy ... Hephaestus

Distributed by:
Warner Bros. Pictures

Genres:
Action Adventure 3D Post-3D Sequel

Synopsis:
After Ares betrays Zeus to the Titans, Perseus treks to the underworld to rescue Zeus, overthrow the Titans and save mankind.

A decade after his heroic defeat of the monstrous Kraken, Perseus—the demigod son of Zeus—is attempting to live a quieter life as a village fisherman and the sole parent to his 10-year-old son, Helius. Meanwhile, a struggle for supremacy rages between the gods and the Titans. Dangerously weakened by humanity’s lack of devotion, the gods are losing control of the imprisoned Titans and their ferocious leader, Kronos, father of the long-ruling brothers Zeus, Hades and Poseidon. The triumvirate had overthrown their powerful father long ago, leaving him to rot in the gloomy abyss of Tartarus, a dungeon that lies deep within the cavernous underworld.

Perseus cannot ignore his true calling when Hades, along with Zeus’ godly son, Ares (Edgar Ramírez), switch loyalty and make a deal with Kronos to capture Zeus. The Titans grow stronger as Zeus’ remaining godly powers are siphoned, and hell is unleashed on earth. Enlisting the help of the warrior Queen Andromeda (Rosamund Pike), Poseidon’s demigod son, Agenor (Toby Kebbell), and fallen god Hephaestus (Bill Nighy), Perseus bravely embarks on a treacherous quest into the underworld to rescue Zeus, overthrow the Titans and save mankind.

Additional Notes:
Instead of converting into 3D, Clash of the Titans 2 will be shot in 3D to improve on the first installment.

Underworld: Awakening


In Movie Theaters:
January 20, 2012 in 3,078 theaters

Directed by: 
Mans Marlind
Bjorn Stein

Starring:
Kate Beckinsale ... Selene
Michael Ealy ... Detective
Scott Speedman ... Michael Corvin
Charles Dance

Distributed by:
Sony Screen Gems

MPAA Rating:
R for strong violence and gore, and for some language.

Genres:
Action Horror 3D Sequel 

Synopsis:
Kate Beckinsale, star of the first two films, returns in her lead role as the vampire warrioress Selene, who escapes imprisonment to find herself in a world where humans have discovered the existence of both Vampire and Lycan clans, and are conducting an all-out war to eradicate both immortal species.

Limits of Science


What does all this have to do with nonmathematical topics like religion, agnosticism, and metaphysics? The answer is that Goedel’s theorem points out a basic limitation of science. We notice that all of science taken as a whole is an example of an infinite mathematical system to which Goedel’s theorem does apply. The axioms of the system may be taken to be the “laws” or theories that have been discovered in the various disciplines. Giving credit to the scientists, let us assume that the laws discovered by them have been thoroughly examined so that they are not mutually contradictory. In other words, we are assuming that the set of axioms is not inconsistent. This is a statement in favor of science, because if the axioms are indeed inconsistent, then as it stands now, there is something wrong in science that needs to be rectified.

We can now apply Goedel’s theorem. We conclude that the set of laws that we have is incomplete, that there exist questions in the system that cannot be answered yes or no using these laws. The system under consideration is nothing but nature itself, so we conclude that the laws of science as they stand now cannot answer all questions about nature. Now, take a particular question. To answer it, we shall need to add a new axiom—that is, discover a new law. This particular question will now get answered, but science will now be a new system to which Goedel’s theorem shall again apply. Now there will be some other question that cannot be answered.

Notice that this process will never come to an end. Even if we worked for a million years, science at that time would still be an incomplete bunch of axioms, and there would be questions about nature that cannot be answered yes or no. We thus conclude that science has a basic limitation: that there will be no time in the future when it has completely fathomed the depths of nature. It is a set of axioms that will always remain incomplete. This is a fact that gives us a glimpse into reality.

Inkjet Printers


Although inkjets were available in the 1980s, it was not until the 1990s that prices dropped enough to make them more affordable to the masses. Canon claims to have invented what it terms “bubble jet” technology in 1977, when a researcher accidentally touched an ink-filled syringe with a hot soldering iron. The heat forced a drop of ink out of the needle, and so began the development of a new printing method. Inkjet printers have made rapid technological advances in recent years. The three-color printer has been around for several years and has succeeded in making color inkjet printing an affordable option, but as the superior four-color model became cheaper to produce, the swappable-cartridge model was gradually phased out. High-quality inkjet printers use six inks (photo magenta, magenta, photo cyan, cyan, yellow, and black) supplied in six separate tank units. The separate tanks are intended to control cost when one color is depleted faster than the others through heavy use: only the tank for that color needs to be replaced. Such printers are also rated by droplet size, measured in picoliters, which indicates how finely ink is sprayed onto the paper.

Flash Memory


Flash memory refers to memory chips that can be rewritten and that hold their content without power. It is widely used as “film” for digital cameras and as storage for many consumer and industrial applications. Flash memory chips earlier replaced the read-only memory basic input/output system (ROM BIOS) chips in PCs so that the BIOS could be updated in place instead of being replaced. Flash memory chips generally have life spans of 100K to 300K write cycles.

DVD-ROM


The DVD (alternatively, Digital Video Disc or Digital Versatile Disc) is similar to a CD, but uses laser light with a shorter wavelength. This means that the size of the pits and lands can be considerably smaller, which in turn means that much more data can be stored on the same size disk. A DVD disk typically stores up to 4.7 GB of data, equivalent to about six CDs. This capacity can be doubled by using both sides of the disk. The high capacity of DVD-ROMs (and their recordable equivalents, DVD-RAMs) makes them useful for storing feature-length movies or videos, very large games and multimedia programs, or large illustrated encyclopedias.
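As a rough check of the "about six CDs" comparison, the following Python sketch (illustrative only; the 700 MB CD capacity is an assumption, not a figure from the source) simply divides the two capacities:

    # Comparing a single-layer DVD with a standard CD assumed to hold 700 MB.
    dvd_gb = 4.7
    cd_gb = 0.7
    print(dvd_gb / cd_gb)   # roughly 6.7, i.e., about six to seven CDs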

The development of high-definition television (HDTV) standards spurred the introduction of higher capacity DVD formats. The competition between Sony’s Blu-Ray and HD-DVD (backed by Toshiba and Microsoft, among others) was resolved by 2008 in favor of the former. Blu-Ray offers high capacity (25GB for single layer discs, 50GB for dual layer).

Bar Codes


Bar coding is an automatic identification technology that allows rapid data collection with extreme accuracy. It provides a simple and easy method of encoding information that can be read by inexpensive electronic devices. A bar code is a defined pattern of alternating parallel bars and spaces, representing numbers and other characters in machine-readable form. Predetermined width patterns are used to code data into a printable symbol. Bar codes can be thought of as a printed version of the Morse code, in that the narrow bars represent dots and the wide bars represent dashes.

The bar code reader decodes a bar code by scanning a light source across the bar code and measuring the intensity of light reflected back to the device. The pattern of reflected light produces an electronic signal that exactly matches the printed bar code pattern and is easily decoded into the original data by simple electronic circuits. There are a variety of different bar code encoding schemes, called symbologies, each developed to fulfill a specific need in a specific industry. Bar code technology is designed to function best with a red light as the scanning spot and to be bidirectional to increase performance: a bar code decodes to the same data whether it is read from top to bottom or bottom to top, or from left to right or right to left. Currently, over 40 bar code symbologies have been developed, each read by suitably programmed scanning devices.

Since bar code symbologies are like languages that encode information differently, a scanner programmed to read a particular code cannot read another. Some of the common encoding schemes, or “symbologies,” are Code 39 (normal and full ASCII), the Universal Product Code (UPC-A, UPC-E), and the European Article Numbering system (EAN-8, EAN-13). The EAN symbologies adhere to the same size requirements as the UPC codes.
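As a concrete illustration of how a symbology encodes data, the short Python sketch below (not taken from the source) computes the UPC-A check digit from the first 11 digits using the standard weighting rule:

    # UPC-A check digit: triple the sum of the digits in the odd positions
    # (1st, 3rd, ..., 11th), add the digits in the even positions, and choose
    # the check digit that brings the total up to a multiple of 10.
    def upc_a_check_digit(digits11):
        odd = sum(digits11[0::2])
        even = sum(digits11[1::2])
        return (10 - (3 * odd + even) % 10) % 10

    # Example: the 11 data digits 0 3 6 0 0 0 2 9 1 4 5 yield check digit 2,
    # completing the 12-digit code 036000291452.
    print(upc_a_check_digit([0, 3, 6, 0, 0, 0, 2, 9, 1, 4, 5]))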

In summary, the use of bar coding systems assists in the collection of data for production control, automatic sorting systems, and the monitoring of work-in-progress. As such, bar coding provides accurate, immediate information and thereby facilitates the decision-making process.

Limits of Computation


Computer technology has advanced at a pace unprecedented by other technologies. Although such achievements have transformed the human experience, they have also fueled misconceptions about the limits of computation itself. From a popular perspective, the scope of problems that can be solved by computers may seem unlimited. In reality, computers are limited by several factors—some contingent and others more fundamental.

Contingent limitations apply to the current state of computer technology. Actual computers are artifacts of engineering; as such, they are subject to certain physical limitations and may lack sufficient resources (e.g., speed, memory) to solve certain problems. For example, to list all possible partitions of 100 cities into two even halves (the city partition problem) would require at least 2^100 steps, which would take more than 30 trillion years on a computer running at a speed of 1 billion steps per second. Note that this is a problem that is solvable by computers in principle; we just cannot afford the waiting time given the speed of current computers. Moreover, further improvement in computer power does not seem to help much. For example, even with computer speed improved by 1,000 times, the city partition problem still would require more than 30 billion years of computer time.
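The waiting-time figure can be checked with a few lines of arithmetic; the Python sketch below (not from the source) simply divides 2^100 steps by 1 billion steps per second:

    # Rough arithmetic behind the "more than 30 trillion years" figure.
    steps = 2 ** 100                        # brute-force steps for 100 cities
    steps_per_second = 10 ** 9              # 1 billion steps per second
    seconds_per_year = 60 * 60 * 24 * 365
    years = steps / (steps_per_second * seconds_per_year)
    print(f"{years:.2e} years")             # on the order of 4e13, i.e., tens of trillions of years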

Other contingent limitations apply to our current knowledge of computer algorithms. Computer algorithms have been a very active research area in computer science, and many powerful algorithmic techniques have been developed during the past four decades. For example, it is now a trivial task for a computer to schedule a minimum-cost trip from one given city to another (the shortest path problem). The problem on a map of 100 cities takes a computer only milliseconds, and such algorithms are widely used by travel agents. On the other hand, if we add the condition that the tour must pass through all cities on the map (the traveling salesman problem), a seemingly modest condition, the problem becomes much more difficult. The problem is solvable in principle; we can simply enumerate all possible such tours and pick the one with the minimum cost. However, such an enumerating algorithm would take more than 2^100 steps on a computer, and this, as already calculated for the city partition problem, would require an unaffordable waiting time.
Moreover, in spite of much effort by computer scientists and mathematicians over the past four decades, no one has been able to develop an algorithm for the traveling salesman problem that is essentially better than the trivial enumerating algorithm. Therefore, no known computer program is currently able to solve the problem in a practical manner. Note that the traveling salesman problem differs from the city partition problem in one respect: the 2^100 computer steps are necessary for solving the city partition problem, because the list to be produced is itself astronomically long, whereas much faster computer algorithms might exist for the traveling salesman problem but simply have not been discovered yet.
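The trivial enumerating algorithm is easy to write down; the Python sketch below (illustrative only, with a made-up four-city distance table) enumerates every tour and keeps the cheapest, which is exactly the approach that becomes hopeless at 100 cities:

    # Brute-force traveling salesman: try every tour and keep the cheapest.
    # The number of tours grows factorially with the number of cities.
    from itertools import permutations

    def brute_force_tsp(dist):
        """dist[i][j] is the cost of travelling from city i to city j."""
        n = len(dist)
        best_cost, best_tour = float("inf"), None
        for perm in permutations(range(1, n)):            # fix city 0 as the start
            tour = (0,) + perm
            cost = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
            if cost < best_cost:
                best_cost, best_tour = cost, tour
        return best_cost, best_tour

    # A made-up symmetric distance table for four cities.
    d = [[0, 2, 9, 10],
         [2, 0, 6, 4],
         [9, 6, 0, 8],
         [10, 4, 8, 0]]
    print(brute_force_tsp(d))    # (23, (0, 1, 3, 2))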

During the past 40 years of research in a variety of areas in computer science, a class of more than 1,000 problems, all similar to the traveling salesman problem, has been identified. It is known that if any of these problems can be solved efficiently, all problems in the class can be solved efficiently using a technique of problem reductions. On the other hand, many of these problems have been studied for decades, and no efficient algorithms have been found for any of them. Thus, it is natural to conjecture that no efficient algorithms exist for these problems and that all of the problems in the class are computationally intractable. These problems have been named NP-complete problems.

Many important computational problems in a large variety of areas are known to be NP-complete, and studying NP-completeness has been central to research in theoretical computer science. But this is only half of the story. Beyond the practical limitations on what can be computed with finite resources, there are deeper and more fundamental limitations on what can be computed even in principle. These limitations apply to the nature of computation itself. In fact, classic results in computational unsolvability were established by mathematicians and philosophers prior to the construction of the first computers.

After all, computers are built on Boolean logic, and the execution of a computer program can be regarded as a process of mathematical reasoning. According to Gödel's incompleteness theorem, no formal mathematical system can verify the validity of all mathematical statements. Translated into the language of computer science, this says that no computer program can test the correctness of all other programs. In fact, it has been proven with full mathematical rigor that even for very simple tasks there is no program that can verify the correctness of every program claimed to perform them. For example, suppose that a computer programming teacher has taught his first class and assigned his students to write their first computer program, one that simply prints "Hello, World." It would be quite natural for the teacher to expect to write a "testing program" that can check the correctness of the programs submitted by the students. However, fundamental research in computational unsolvability has shown that no such testing program exists even for this simple task. Note that the impossibility results here are absolute and have nothing to do with the computational resources available. These fundamental impossibility results have also found wide applications in science and engineering.
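The impossibility of a universal "Hello, World" tester can be made more concrete by the classic reduction from the halting problem, sketched below in Python. This is illustrative only and not taken from the source; the universal tester it mentions is hypothetical and, as the argument shows, cannot exist.

    # Given any program p and input x, build a program q that prints
    # "Hello, World" exactly when p(x) halts.
    def make_q(p, x):
        def q():
            p(x)                     # may run forever
            print("Hello, World")    # reached only if p(x) halts
        return q

    # If a perfect tester existed that could decide whether an arbitrary program
    # prints "Hello, World", then applying it to make_q(p, x) would decide
    # whether p halts on x. Since the halting problem is undecidable, no such
    # universal tester can exist.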

In summary, the limitations of computation have been studied thoroughly from both practical and theoretical points of view. These studies may play important roles in research in geographic information science (GIScience). In particular, research directions in GIScience such as the computable city, the electropolis, Digital Earth, and the virtual field trip are all closely related to computer and information technology. A clear understanding of what is possible and what is impossible using computer technologies seems necessary for research in these areas.

Magnetic Compass



The magnetic compass is an instrument that indicates the whole-circle bearing from the magnetic meridian to a particular line of sight. It consists of a needle that aligns itself with the Earth's magnetic flux and some type of index that allows a numeric bearing to be read. A compass can be used for many things, but the most common application is navigation: people are able to navigate throughout the world using simply a compass and a map. The accuracy of a compass depends on local magnetic influences such as man-made objects or natural anomalies in the local geology. The compass needle does not really point true north but is attracted and oriented by magnetic force lines that vary in different parts of the world and are constantly changing. When you read north on a compass, you are reading the direction toward the magnetic north pole. To offset this phenomenon, we use calculated declination values to convert the compass reading to a usable map reading. Since the magnetic flux changes through time, it is necessary to replace older maps with newer maps to ensure accurate and precise up-to-date declination values.
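As a small illustration of applying a declination value, the Python sketch below (not part of the source) converts a magnetic compass bearing to a true map bearing, treating easterly declination as positive and westerly as negative:

    # Convert a magnetic bearing to a true (map) bearing by adding the local
    # declination: easterly declination is positive, westerly is negative.
    def magnetic_to_true(magnetic_bearing_deg, declination_deg):
        return (magnetic_bearing_deg + declination_deg) % 360

    # Example: a compass reading of 40 degrees where the declination is
    # 10 degrees west gives a true bearing of 30 degrees.
    print(magnetic_to_true(40, -10))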

How Do You Know When You’re At the Pole?

Remember that your magnetic compass would be reading magnetic north but you want to get to geographic north. What would you do to figure out where you were going, particularly when the end of your journey was on ice that was moving? If you’re standing at the North Pole, all points are south of you (east and west have no bearing). Since the earth completes a full rotation once every 24 hours, if you’re at the North Pole your speed of rotation is quite slow—almost no speed at all, compared to the speed of rotation at the equator of about 1,038 miles per hour.
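The equatorial speed quoted above follows from simple arithmetic, sketched below in Python (not from the source; the equatorial circumference of roughly 24,900 miles is an assumed round figure):

    # Speed of rotation at the equator: circumference divided by one 24-hour day.
    equator_miles = 24901        # assumed equatorial circumference in statute miles
    print(equator_miles / 24)    # about 1,038 miles per hour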
Arctic explorers carried a sextant with them to take measurements as they got closer and closer to the pole. Prior to GPS, this was the only accurate way to determine if you actually made it!

Where Exactly Is the South Pole?

Like the geographic North Pole, the South Pole lies at the point where the lines of longitude converge (it’s like the part of the orange where all the sections come together). How would you know if you’ve actually reached the South Pole? As at the North Pole, you would use a sextant to measure the angle of the sun off the horizon. When Amundsen reached the South Pole he sent three men out at 90 degrees to each other with the instructions to travel 10 miles in those directions and to plant a flag. That way he knew he had encircled the pole. Amundsen stayed at the pole so that his team could take a series of hourly observations with the sextant over a 24-hour period to determine their actual position. They found they were 6 miles from the pole so they moved their camp.

What is Latitude and Longitude?

Knowing latitude and longitude is a simple way to identify location. Navigators talk about their north-south position using parallels of latitude—the lines running across the map, chart, or globe, from left to right, west to east. A latitude coordinate tells how far north or south you are from the equator, the line that goes around the middle of the globe dividing it into the Northern and Southern Hemispheres. A longitude coordinate tells how far east or west you are from the prime meridian, the line of longitude that runs through Greenwich, England. Lines of longitude, which are also called meridians, run north and south on a map and converge at the poles.

Distance is written in terms of degrees. The equator lies at 0 degrees and the parallels of latitude north of the equator are identified as north, and those south of the equator are identified as south. The North Pole lies at latitude 90 degrees north, and the South Pole at 90 degrees south. The prime meridian lies at 0 degrees longitude. Meridians of longitude east of the prime meridian are designated as east, and those west of the prime meridian are identified as west.
Where longitude 180 degrees west meets longitude 180 degrees east in the Pacific Ocean is the International Date Line, the place where the date actually changes. Fortunately the International Date Line doesn’t go through any islands—it zigs and zags along the 180-degree meridian—otherwise for people living on one side of the date line it would be today, and for their neighbors living on the other side it would be tomorrow, which could get very confusing. Without the International Date Line, travelers going westward would discover that when they returned home, they had spent one more day on their vacation than they thought, even though they had kept careful tally of the days. This happened to Magellan’s crew after their first circumnavigation of the globe. Likewise, a person traveling eastward would find that one fewer day had elapsed than he or she had recorded, as happened to Phileas Fogg in Around the World in Eighty Days by Jules Verne.
Each degree of latitude and longitude is divided into 60 minutes, and each minute is further divided into 60 seconds (think of how time is divided and you'll never forget this). Navigators measure distance in nautical miles. One nautical mile equals one minute of latitude and has been set at 6,080 feet. So one degree of latitude (or of longitude at the equator) equals 60 nautical miles, or about 70 land miles. Any location on earth is described by two numbers—its latitude and its longitude. If a ship's captain wants to specify position on a map, these are the “coordinates” they would use. Think of position coordinates like you think of street addresses. When position coordinates are given, it's just a way to pinpoint a place by identifying where lines of longitude and latitude intersect. This can be particularly helpful in the middle of the ocean where there are no visible landmarks. Coordinates are always read by stating the latitude first and the longitude second. One very famous set of position coordinates is latitude 41 degrees 33 minutes north, longitude 50 degrees 01 minute west. On April 14, 1912, this is where the ocean liner Titanic struck an iceberg in the northern Atlantic Ocean and quickly sank.
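To tie these pieces together, here is a short illustrative Python sketch (not from the source) that converts degrees and minutes to decimal degrees and applies the one-minute-of-latitude-equals-one-nautical-mile rule to the Titanic coordinates given above:

    # Convert degrees and minutes to signed decimal degrees.
    def dm_to_decimal(degrees, minutes, hemisphere):
        sign = -1 if hemisphere in ("S", "W") else 1
        return sign * (degrees + minutes / 60)

    titanic_lat = dm_to_decimal(41, 33, "N")   # 41 degrees 33 minutes north
    titanic_lon = dm_to_decimal(50, 1, "W")    # 50 degrees 01 minute west

    # One minute of latitude is one nautical mile, so the decimal latitude
    # times 60 gives the distance north of the equator in nautical miles.
    print(titanic_lat, titanic_lon, titanic_lat * 60)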