The transportation industry was radically altered by the invention of the first automobile. The history of the automobile spans several transformations, from early attempts at human-powered transportation to fully autonomous and networked electric cars. A key development of the industrial revolution, this breakthrough has had far-reaching social and economic effects.
Never before have so many people, on both an individual and a societal level, come to embrace and even idolize a technological object. Manufacturers nowadays are developing smaller vehicles with cutting-edge safety features and engines (electric, solar, hydrogen, etc.).
The First Car in History
Joseph Cugnot’s “fardier à vapeur” was the first automobile in history.
Before the invention of the first automobile, carriages were the means of transportation: sleds or wheeled carts drawn by a person or a pack animal. From the early 18th century onward, scientists and engineers attempted to fit a steam engine into a vehicle, but it wasn’t until the late 18th century that they saw any real success. In 1770, Frenchman Joseph Cugnot created the first car in history.
The military engineer Cugnot decided to mechanize the artillery cart (“fardier”) by replacing the horse with a third wheel powered by a steam engine. The aim was to use the cart to transport heavy parts like cannon barrels. These trials were not definitive, but they did serve to highlight the potential of high-pressure steam as driving power.
In 1769, Joseph Cugnot first built a small-scale version of his project, which he called the “fardier à vapeur.” In 1770, he constructed a full-scale version, designed to carry four tons and travel 4.8 miles (7.8 km) in an hour. The vehicle weighed 2.5 tons and was steered by a pair of handles.
The fardier had to stop every 15 minutes to refill the machine with wood. The steam engine was too cumbersome and inefficient to be really useful, so Joseph Cugnot’s efforts ultimately failed, along with those of his French, American, and British successors.
Engineers focused on bettering the train instead, setting aside the automobile until the development of the internal combustion engine, which was lighter, more efficient, and less bulky.
First Car with an Internal Combustion Engine
1885, the Benz Patent-Motorwagen, the first car with an internal combustion engine. (Photo by Nick.Pr, CC BY)
The first car in history powered by an internal combustion engine, a three-wheeled vehicle, was built in 1885 by German inventor Carl Benz. The car was called the Benz Patent-Motorwagen (“patent motorcar”), and it was also the first mass-produced car in history. Benz received a patent for it in January 1886 and presented it to the public later that year.
By mid-1888, the Patent-Motorwagen had become the world’s first commercially available car, selling for the equivalent of about $4,750 in today’s money. This first Motorwagen used Benz’s 954 cc (58.2 cu in) single-cylinder four-stroke engine, which produced 0.75 hp (0.55 kW).
At its debut in Mannheim, Germany, Carl Benz drove the car at a peak speed of 10 mph (16 km/h). The car’s internal combustion engine helped establish Benz as the father of modern cars. He built 25 units of the Patent-Motorwagen in total.
First Auto Show
A 1910 poster of the Paris Motor Show. (Image, CC0)
In 1898, Paris hosted the world’s first motor show, known as the “Paris Motor Show.” Even after 125 years, it remains a major event in the automotive industry. The retrospective exhibition at the tenth edition of the show in 1907 was even more magnificent and comprehensive, with artifacts ranging from the Cugnot cart, a real “prehistoric” legend from 1770, to a De Dion-Bouton tricycle from 1885, a German Daimler draisine from 1887, and a De Dion-Bouton steam omnibus from 1896.
Brasier and Renault automobiles were also on exhibit, along with electric vehicles, but sadly absent was Etienne Lenoir’s gas-powered vehicle, which was said to have been used in Paris in the early 1860s and to have inspired Jules Verne.
First Car to Break 100 km/h
La Jamais Contente reached 105.9 km/h (65.8 mph) in 1899. (Image, CC0)
While the question of motorization had yet to be settled, the overall conception of the vehicle changed little over the following decades in terms of transmission, steering, or bodywork. Some may have foreseen the near dead end awaiting the electric automobile, but it was not settled until the century’s close, when competition between the different forms of propulsion finally pushed an electric car above 62 miles per hour (100 km/h).
On April 29, 1899, Belgian driver and engineer Camille Jenatzy crossed this psychological threshold of a “three-digit speed” in his La Jamais Contente (“The Never Contented”). It was the first car to break the 100 km/h (62 mph) barrier, reaching 105.9 km/h (65.8 mph).
The First Affordable Car
Ford Model T runabout, probably 1913–1914. (Photo, Pierre Poschadel, CC BY SA)
Henry Ford’s goal was to mass-produce an automobile that was easy to build, reliable, and affordable. He won his gamble by introducing assembly-line production, which had a profound impact on the automobile industry across the world.
When Henry Ford unveiled his vision for the Model T in 1908, Ford Motor Company was already five years old. He aimed for a simple design that would allow the vehicle to be assembled quickly and a solid build quality that would demonstrate its reliability, both of which contributed to the Model T’s minimal maintenance costs.
After-sales service was another first that Ford introduced with the Model T. By 1908, when other automakers still treated regular maintenance as an afterthought, Ford had established a network of carefully selected agents, which raised the brand’s profile.
Ford’s Highland Park plant, located outside Detroit, was producing 3,900 vehicles daily by 1914, and between 1908 and 1927, 16.5 million Model Ts were sold. In the early 1920s, the Model T was the world’s most popular car, with production facilities in the United Kingdom, Germany, France, Denmark, South Africa, and Japan.
The Model T lacked modern conveniences such as a windshield wiper (the windshield itself was optional), a radio, and even a driver’s-side door, and the accelerator was a lever on the steering column. In 1972, the Volkswagen Beetle became the best-selling automobile of all time, surpassing the Ford Model T.
The Automobile Becomes an Industry
Adolf Hitler ordered the production of the Volkswagen to ease the transportation burden on the general population. The ultimate design of the automobile that would become the best-selling vehicle of all time may be seen in this 1935 V-series prototype. (VW)
Already in 1900, more than 9,500 automobiles were manufactured in the United States, France, and Germany. Eight years later, Henry Ford came up with the Model T and had millions of them mass-produced on his assembly lines, giving birth to fast, personal transportation for everyone. In the early 20th century, Ford’s methods standardized manufacturing throughout the automotive industry and beyond.
After World War II, the Volkswagen “Beetle,” the Renault 4CV, and the Citroën 2CV each sold in the millions, making vehicle ownership accessible to the average European. Because of the dramatic growth in the number of cars on the road and their average speed, legislation mandating driving licenses and establishing highway codes became necessary.
The automobile caused major shifts in society, especially in how people saw and used their personal space. It encouraged the growth of trade and communication between nations, as well as the construction of new infrastructure (roads, highways, parking lots). Everyone still recognizes the names of the great pioneers of the car industry, from Ferdinand Porsche to André Citroën and Louis Renault.
Contemporary Challenges of the Car
The car industry faced several difficult changes in the 1970s. Rising gas prices and concerns about supply were a direct result of the 1973 and 1979 oil shocks. Automobile companies started making more compact models. They improved the engines so that vehicles could go longer on the same quantity of fuel. They made the bodies more aerodynamic by altering their shapes.
As the number of drivers increased and automobiles became quicker, public concern about road safety grew as well: both the frequency and the severity of accidents increased.
Another major issue is exacerbated by the growing number of automobiles on the road: air pollution. Although catalytic converters are effective at reducing a variety of harmful emissions, carbon dioxide (CO2), a major contributor to the greenhouse effect and, by extension, global warming, is one of the few gases they cannot remove. Manufacturers like Tesla are shifting their attention to electric vehicle research and development in order to combat this environmental threat.
The concept of a universal calculator, which originated in Britain in the 19th century with Charles Babbage and was developed further by Alan M. Turing in the 1930s, lies at the heart of the first computer’s creation. The development of advanced technology, especially in electronics, paved the way for the creation of the world’s first real computer, the ENIAC, in 1946.
Who invented the first computer?
Using Blaise Pascal’s work as a starting point, Charles Babbage developed the concept of programming.
Charles Babbage, an English mathematician, developed the ideas for the first programmed calculating machine between 1834 and 1837. Using Blaise Pascal’s work as a starting point, this visionary innovator developed the concept of programming using punch cards, which were already in use for the automated creation of music (the barrel organ), among other things.
This was the first step toward the development of modern computers. Regrettably, he was unable to realize his vision; the primitive state of technology at the time ensured that his Analytical Engine was never built.
After almost a century, another English mathematician, Alan Turing, made a monumental breakthrough in the development of computer technology. In 1936, he wrote “On Computable Numbers, with an Application to the Entscheidungsproblem,” which paved the way for the “Turing machine,” an abstract device that unified the notions of computation, program, and programming language.
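The idea can be illustrated with a minimal sketch in Python (an illustration only, not Turing’s original formalism): a tape of symbols, a read/write head, and a table of rules are enough to describe a computation. The rule table below, which increments a binary number, is a hypothetical example.

```python
# Minimal Turing machine sketch: a tape, a read/write head, and a table of rules.
# The rule table below (binary increment) is an invented example for illustration.

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[head] if 0 <= head < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        if head == len(tape):          # grow the tape to the right if needed
            tape.append(blank)
        elif head == -1:               # grow the tape to the left if needed
            tape.insert(0, blank)
            head = 0
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# Rules for incrementing a binary number; the head starts on the leftmost digit.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}

print(run_turing_machine(rules, "1011"))  # prints 1100 (11 + 1 = 12 in binary)
```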
Programmable calculators
The Model K, 1937.
Major advances were made during World War II, to which the modern age of computers owes a great deal. It was then that designers moved away from mechanical parts toward digital computation and electronic circuits built from vacuum tubes, capacitors, and relays. This period saw the creation of the very first generation of computers.
Many programmable calculators appeared between 1936 and 1944:
The “Z-series” computers, begun in 1936.
The “Model K,” developed in a kitchen, 1937.
The Complex Number Calculator (CNC), 1939.
The Atanasoff-Berry computer, or ABC for short, 1942.
The Harvard Mark I computer, 1944.
The “Bombe” vs. “Enigma”
Enigma, late 1930s.
The United Kingdom put forth a significant effort at Bletchley Park during World War II to crack the German military’s encrypted communications. Bombes, machines developed by the Polish Cipher Bureau and perfected by the British, were used to crack the primary German encryption system, Enigma.
In 1936, Alan Mathison Turing (1912–1954) published “On Computable Numbers, with an Application to the Entscheidungsproblem,” which paved the ground for the programmable computer. In it he described his concept of a universal programmable machine, now known as the universal Turing machine, and in doing so created the ideas of “program” and “programming.”
The replica of the first-ever electronic computer in history, the Colossus, 1943. (The National Museum of Computing)
Encryption keys, or ciphers, were discovered using these devices. The Germans then developed another family of ciphers, radically different from Enigma, which the British dubbed FISH. To crack these systems, Professor Max Newman’s team devised the Colossus, which was designed and built by engineer Tommy Flowers in 1943. Because of its strategic value, the Colossus was later dismantled and its existence kept secret.
Colossus was not general-purpose: it was built and programmed for one specific task. ENIAC, on the other hand, could be reprogrammed for new problems, albeit doing so could take many days. In this sense, ENIAC was the first general-purpose programmable computer in history.
From ENIAC to modern computer
ENIAC, 1945.
The first completely electronic, programmable, Turing-complete computer did not arrive until 1946: the ENIAC (Electronic Numerical Integrator and Computer), built by J. Presper Eckert and John William Mauchly.
In 1948, von Neumann architecture computers began appearing. These computers were distinct from their predecessors in that the programs were kept in the same memory as the data. This design allowed the programs to be manipulated just like data. It was pioneered in 1948 by the University of Manchester’s Small-Scale Experimental Machine (SSEM).
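The stored-program principle can be sketched in a few lines of Python (a toy illustration with an invented instruction set, not the SSEM’s actual one): instructions and data live in the same memory array, so a program could in principle read or even overwrite its own instructions.

```python
# Toy stored-program (von Neumann style) machine: instructions and data share one memory.
# The instruction set is invented for illustration; it is not the SSEM's.

memory = [
    ("LOAD", 6),     # 0: acc = memory[6]
    ("ADD", 7),      # 1: acc += memory[7]
    ("STORE", 8),    # 2: memory[8] = acc
    ("PRINT", 8),    # 3: print memory[8]
    ("HALT", None),  # 4: stop
    None,            # 5: unused
    40,              # 6: data
    2,               # 7: data
    0,               # 8: result goes here
]

acc, pc = 0, 0
while True:
    op, arg = memory[pc]      # instructions are fetched from the same memory as the data
    pc += 1
    if op == "LOAD":
        acc = memory[arg]
    elif op == "ADD":
        acc += memory[arg]
    elif op == "STORE":
        memory[arg] = acc
    elif op == "PRINT":
        print(memory[arg])    # prints 42
    elif op == "HALT":
        break
```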
IBM 701 Electronic Data Processing System.
Afterward, IBM released its 701 model in 1952, and subsequent years saw the introduction of so-called second-generation (1956), third-generation (1963), and fourth-generation (1971) computers, the last of which brought the microprocessor, continuing the relentless pursuit of miniaturization and computing power.
Microprocessors, whose latest frontier is artificial intelligence, have made desktop computers, laptops, and a host of high-tech devices (graphics cards, cellphones, touch tablets, etc.) commonplace in our lives today.
The Nautilus, the world’s first nuclear-powered submarine, was launched by the United States in 1954. It could travel at over 23 mph (20 knots) underwater and had an effectively unlimited range. In early August 1958, the Nautilus conducted the first known voyage beneath the North Pole, traveling from Point Barrow, Alaska, to a point between Svalbard, Norway, and Greenland.
Nautilus: A long-forgotten fantasy
The Nautilus, launched on January 21, 1954, was 321 feet (98 meters) long and weighed more than 3,500 tons. In the years that followed, the vessel, which could travel at speeds of up to 23 knots (26 mph or 43 km/h), logged approximately 91,300 nautical miles (170,000 kilometers), including 78,900 nautical miles (146,000 kilometers) below the ocean’s surface, before needing a fuel refill.
As a result, Jules Verne’s long-held ambition to explore “twenty thousand leagues under the seas” (about 59,400 nautical miles, 68,350 miles, or 110,000 kilometers) was finally accomplished, making The Nautilus the first submarine to do so.
For a covert mission known as “Operation Sunshine,” the Nautilus departed Pearl Harbor on July 23, 1958, and sailed north. On August 3, it accomplished its mission, breaking new ground by passing beneath the North Pole in a submersible.
Upon its launch on January 21, 1954, the USS Nautilus became the world’s first operational nuclear submarine.
The Nautilus, crewed by 13 officers and 92 crewmen, participated in the 1962 blockade of Cuba (the Cuban Missile Crisis) and several other drills in the Atlantic and Mediterranean. After more than 500,000 nautical miles (a million kilometers) of service, the world’s first nuclear submarine was retired in 1980.
It wasn’t until two years later that plans were made to turn the Nautilus into a tourist attraction. It took three years, but the Nautilus was finally docked on the Thames River at Groton, Connecticut.
We often explore the historical processes that led to the current state of the globe. We highlight individuals, events, and initiatives that have had a lasting effect on our culture. It’s not the same this time around. All of the historical ideas discussed in this article have ultimately failed, and in many instances, this was actually for the best.
Faith in technology may not move mountains, but it has at least aspired to move ships across the Alps. Here are three ambitious projects designed to reshape the planet but never implemented.
Atlantropa: A new supercontinent
An approximation of what Atlantropa would have looked like. (Credit: Ittiz)
The designs of Bavarian architect Herman Sörgel called for enormous dams to be built in the Strait of Gibraltar and the Dardanelles. Walled off from its major tributaries, the Mediterranean Sea would have been largely drained. Sörgel, an architect for the Bavarian government who was born in Regensburg in 1885, proposed the massive undertaking in the 1920s. Had “Atlantropa” existed, people might have been able to walk from Europe to Africa.
Sörgel considered himself a pacifist who intended his efforts to contribute to a peaceful global order. Yet there were imperialist undertones to his plan, since he was thinking primarily of exploiting Africa’s raw materials and minerals to benefit Europe’s economy. Lowering the sea level by 330 to 655 feet (100 to 200 m) would have exposed an additional habitable region the size of Spain along the coasts.
The visionary also believed that the enormous dams would provide an abundance of renewable energy, more than enough to power the whole new supercontinent.
This was to be supplied by a hydroelectric power plant at the massive dam located 18 miles (30 km) off the coast of Gibraltar.
He was unable to publish his work because the National Socialists rejected his designs. When the war ended, Sörgel had renewed faith that Atlantropa would one day be built. It remained a utopia, however, which is understandable given what we know today.
The human and environmental costs would have been huge, since every Mediterranean port except Venice, which Sörgel planned to link to the sea by a canal, would have been cut off. And had Atlantropa raised sea levels elsewhere in the world by a meter, the repercussions for climate and ecosystems would have been nearly unimaginable.
Whether Atlantropa could ever have been realized in this form remains an open question; it is reasonable to doubt whether a dam of the required proportions would have held permanently. Herman Sörgel’s death in 1952 marked the end of the Atlantropa project, also known as Panropa.
Another incredible-sounding proposal that was never built, though its impact would not have been as devastating, was a waterway over the Alps connecting the Mediterranean with Lake Constance. In the early 20th century, the Italian engineer Pietro Caminada envisioned a water route from Basel to Genoa stretching approximately 370 miles (600 km).
To cope with the difference in altitude, Caminada devised a novel system of inclined tubes with a complicated set of locks that could handle ships 165 feet (50 m) in length. The idea was that the water rising in the inclined tubes would also serve as an additional source of propulsion for the ships, with water flowing back and forth between two parallel tubes.
Experts had judged the designs practicable, and all that remained was for Caminada to begin building. But the railroad, with its transalpine lines, was increasingly seen as the future of transportation, and by the outbreak of World War I at the latest, such massive projects had been shelved.
Geoengineering, or the intentional manipulation of Earth’s climate system, is another area where massive interventions in nature and the environment have been, and are still being, planned. Long before the word was coined, there were actual proposals that make us scratch our heads today.
In his 1945 paper “Outline of Weather Proposal,” television co-inventor Vladimir Zworykin proposed that Earth’s climate and weather might, in theory, be controlled and regulated using computer models. In doing so, he anticipated the eventual realization of ideas, such as the manipulation of clouds, that had previously only been speculated about.
He went further, though, arguing in his paper that this was not enough: he wanted to manipulate ocean currents and the planet’s energy system as a whole. Zworykin showed his proposal to John von Neumann, a founding father of computer science, who was persuaded that targeted intervention in the climate system could be achieved with the help of computer models.
This looks like a case of overconfidence in science and technology. Today, sophisticated computer models at least allow us to make educated guesses about future climate change, but we are still a long way from any kind of deliberate global climate intervention.
The invention of the transistor in December 1947 marked the beginning of a new age in electronics, one in which computers, mobile phones, the Internet, and almost all other electronic devices would not exist without transistors. This semiconductor component has had as much of an impact on technology, society, and daily life as the invention of the wheel or electricity. Let’s have a look at the invention and history of the transistor.
Transistors, in the form of integrated circuits and microchips, can be found in almost every modern electronic device. They are responsible for producing the ones and zeros that are the basis of all digital devices. Simultaneously, they serve as amplifiers, oscillators, and in numerous other fundamental roles in contemporary electronic devices. There were some radical innovations and some detours in the early stages of the development of transistors.
From the vacuum tube to the transistor
The transistor is a truly revolutionary development, on par with the wheel and the discovery of fire. It has had perhaps the greatest impact on our daily lives and our whole society. Without it, we wouldn’t have desktop computers, laptops, mobile phones, the Internet, or anything else that relies on a microprocessor. Computers would still take up a whole room.
A combination of a switch, rectifier, and amplifier
Numerous individual circuits are the foundation of electronics. Today’s microchips (left) use transistors (middle) as switches, while earlier designs used bulky vacuum tubes (right).
Most electronic parts and gadgets rely on electrical circuits, which determine whether or not a current flows and under what circumstances. But to do this, you’ll need switches that can be operated by electricity. The flow of a considerably greater current is controlled by a much smaller control current. A computer’s ability to perform digital computing tasks is based on the combination of these switches, or “logic gates,” into circuits. However, this can only be achieved if the electrical switches are very responsive and can be regulated accurately.
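To make the idea concrete, here is a hedged sketch of how such electrically controlled switches combine into logic gates and then into arithmetic; the NAND-based construction below is a standard textbook illustration, not a description of any particular circuit from this article.

```python
# Building familiar logic gates out of a single switch-like primitive (NAND).
# This is the standard textbook construction, shown purely as an illustration.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a):      return nand(a, a)
def and_(a, b):   return not_(nand(a, b))
def or_(a, b):    return nand(not_(a), not_(b))
def xor(a, b):    return and_(or_(a, b), nand(a, b))

# A half adder: the seed of binary arithmetic in a digital computer.
def half_adder(a, b):
    return xor(a, b), and_(a, b)   # (sum bit, carry bit)

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), "->", tuple(map(int, half_adder(a, b))))
```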
In addition, many technological applications need the amplification and modification of electrical signals, such as the production of radar or radio signals, the use of loudspeakers, microphones, and monitors, and the transformation of alternating current (AC) to direct current (DC). Again, the goal is to control the resultant electrical current flow in the most efficient and cost-effective method.
When vacuum tubes were first used
Fleming’s oscillation valve. For use in early radio receivers for electromagnetic wireless telegraphy, English scientist John Ambrose Fleming designed a thermionic valve or vacuum tube in 1904 that he termed the Fleming oscillation valve.
Mechanical switches in the form of relays were employed for these purposes before the invention of the transistor, but they switched between states slowly. Alternatively, vacuum tubes, first developed in 1904, were put into service. The type used for switching and amplification is the so-called triode, an airtight glass, steel, or ceramic tube containing three electrodes. A voltage on the third electrode shapes the electric field between the anode and the cathode and can thus be used to control the amount of current flowing between the two.
However, vacuum tubes have a number of drawbacks, including their bulk, fragility, and high power requirements for regulation. Cabinets full of vacuum tubes were utilized for the circuits in the early computers, which meant they took up whole rooms. The British employed a machine called Colossus to decipher German signals during WWII. Meanwhile, the Americans were hard at work on ENIAC, the first general-purpose computer. Similarly, the vacuum tubes within the popular tube radios of the day made the receivers seem more like pieces of furniture than portable listening devices.
Another drawback is that the switching speed of tubes is restricted. This is despite the fact that they switch significantly quicker than mechanical relays. The need for rectifiers and amplifiers that could switch in the gigahertz range arose as a pressing issue, particularly during World War II, when the military’s radar systems were upgraded to greater power.
The discovery of semiconductors
Multiple components from earlier computers from 1962. The boards and their vacuum tubes from left to right belong to: ENIAC, EDVAC, ORDVAC board, and BRLESC-I.
Scientists and engineers set out to find a replacement for the fragile and cumbersome vacuum tubes that were the source of these issues. Within this context, semiconductors gradually came to the forefront. Depending on the temperature, chemical makeup, and orientation, these materials may either be insulators or conductors.
In 1874, German scientist Karl Ferdinand Braun noticed that these crystalline solids displayed a rectifier effect. “In a large number of natural and artificial sulfur metals, […] I found that their resistance varied with the direction, intensity, and duration of the current.” When an alternating voltage was supplied, electrons could only flow through the material in one direction, producing a direct current at the opposite end.
The concept of the field effect transistor
As a result of these discoveries, engineers created the first diodes in the 1920s using lead sulfide and copper oxide; they were used in a variety of radio receivers. By adding a third electrode, a semiconductor diode can function like a vacuum tube’s triode, as first described by the physicist Julius Lilienfeld in 1925. Such a device is now called a field-effect transistor (FET), because an electric field controls the semiconductor’s current flow.
At the time, however, no practical implementation of such a semiconductor triode succeeded. Very little was yet understood about the physics at play inside semiconductors. Moreover, the metal oxide and metal sulfide semiconductors employed back then were chemical compounds, and the wide variations in their crystalline structure and electrical characteristics during production made them unsuitable for use. For a while, the quest for a crystalline replacement for the vacuum tube appeared fruitless.
The first approaches for transistors
The transistor wouldn’t exist without the discovery of semiconductors like germanium and silicon. They, along with many other electronic components, owe their existence to the crystalline solids’ unique electronic characteristics. There is a good reason why the semiconductor sector has become a multibillion-dollar, internationally competitive industry.
What matters is the bandgap
This image compares the bandgap in conductors, semiconductors, and insulators. (Credit: Energy Education)
Beginning in the late 1930s, semiconductors started their meteoric rise to prominence. Around this time, researchers finally figured out the atomic-level causes of a semiconductor’s peculiar electrical conductivity. They found that, unlike in metals, the electrons in these materials are initially bound in place; they must first be excited to a certain energy level before they transition into the mobile state.
Semiconductors have a band gap of 0.1–4 electron volts (eV) between their insulating valence band and their mobile conduction band. When electrons in the semiconductor get energy in the form of heat, radiation, or electric current, they are able to jump the band gap. The semiconductor characteristics of substances are at their best when their atoms each contain four valence or outer electrons. Compounds like cadmium sulfide and gallium arsenide, as well as the elements silicon and germanium, fall within this category.
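To put those gap energies in perspective, here is a small back-of-the-envelope calculation; the band-gap values used below are commonly quoted room-temperature figures taken as assumptions, not numbers given in this article.

```python
# Back-of-the-envelope: what it takes to lift an electron across the band gap.
# Band-gap values are commonly quoted room-temperature figures, used here as assumptions.

h = 4.1357e-15   # Planck constant in eV*s
c = 2.9979e8     # speed of light in m/s
k_B = 8.617e-5   # Boltzmann constant in eV/K

band_gaps_ev = {"germanium": 0.67, "silicon": 1.12, "gallium arsenide": 1.42}

for material, e_gap in band_gaps_ev.items():
    wavelength_nm = h * c / e_gap * 1e9   # photon just energetic enough to bridge the gap
    ratio_to_kT = e_gap / (k_B * 300)     # gap measured in units of thermal energy at 300 K
    print(f"{material}: {e_gap} eV gap -> ~{wavelength_nm:.0f} nm photon, "
          f"~{ratio_to_kT:.0f}x thermal energy at room temperature")
```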
The effect of impurity
Also noteworthy is the discovery that germanium’s and silicon’s electrical characteristics can be tailored by the strategic introduction of foreign atoms into their crystal lattices. This doping results in either an excess of electrons (n) or a surplus of positively charged vacancies (p) in the crystal lattice, depending on the kind of impurity. Since the band gap is narrowed by the presence of these surplus local charges, the semiconductor’s local conductivity can be enhanced.
However, it was not feasible to create silicon and germanium crystals of adequate purity until after World War II, which was a necessary requirement for the fabrication of tailor-made doped semiconductors. The availability of semiconductors with a purity of 99.999 percent was only made possible by developments in crystal growth.
The attempt for the first transistor
Junction field-effect transistors (JFET), and how they work. (Credit: Chtaube/CC BY SA 2.5)
As a result, several laboratories, including the illustrious Bell Laboratories in New Jersey (home to the R&D division of a major U.S. telephone corporation), began experimenting with doped germanium and silicon semiconductors. In 1945, Bell Labs’ vice president, Mervin Kelly, began a research effort to find an alternative to vacuum tubes that could be made from semiconductors. His choice of leaders for the cross-disciplinary team included physicist William Shockley and chemist Stanley Morgan.
Shockley and his colleagues began testing various configurations of two semiconductor layers with different doping levels as part of this effort. Much as Lilienfeld had described two decades earlier, they employed an external electric field, applied through a third electrode, to affect the current flow between two electrodes (source and drain) inside the material. To do this, the controlling gate electrode must sit in an n-doped area of the semiconductor, in contrast to the drain and source electrodes, which sit in a p-doped zone.
The current flowing between the two other electrodes is restricted and regulated by the voltage provided to the gate electrode, which creates an exclusion zone in its vicinity. The current flow between the source and drain electrodes can be modulated by varying the size of the non-conducting exclusion zone, which acts like a lock gate. These days, we refer to this basic kind of field-effect transistor by its more modern name, a junction field-effect transistor (JFET).
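That lock-gate behavior is usually summarized with the textbook square-law model for a JFET in saturation, I_D = I_DSS * (1 - V_GS/V_P)^2; the sketch below uses invented parameter values purely for illustration, not data from any device described here.

```python
# Textbook square-law approximation for a JFET in saturation:
#   I_D = I_DSS * (1 - V_GS / V_P)**2   for V_P <= V_GS <= 0
# I_DSS (maximum drain current) and V_P (pinch-off voltage) are made-up example values.

I_DSS = 10e-3   # 10 mA of drain current with the gate "fully open" (V_GS = 0)
V_P = -4.0      # pinch-off voltage: at V_GS = V_P the exclusion zone blocks the channel

def drain_current(v_gs: float) -> float:
    if v_gs <= V_P:                        # channel fully pinched off
        return 0.0
    return I_DSS * (1 - v_gs / V_P) ** 2

for v_gs in (0.0, -1.0, -2.0, -3.0, -4.0):
    print(f"V_GS = {v_gs:+.1f} V  ->  I_D = {drain_current(v_gs) * 1e3:.2f} mA")
```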
Two independent groups
In 1945, Shockley and his colleagues produced a germanium-based junction field-effect transistor, but the external field’s modulating effect was too weak for practical application. Shockley therefore tasked two members of his semiconductor research team, John Bardeen and Walter Brattain, with digging deeper into the matter, while he himself initially looked elsewhere for answers.
On the other side of the Atlantic, two German physicists, Heinrich Welker and Herbert Mataré, were tackling the same issue and, like their American counterparts, first put the principle into practice in 1945 by experimenting with precursors of the junction field-effect transistor. The results, however, were still too modest, just as they had been for Shockley and his group.
The two groups were now developing more effective transistor technology apart from one another.
Toward the first transistor
It seemed like a major development was on the horizon by the autumn of 1947. John Bardeen and Walter Brattain, two semiconductor researchers working at Bell Labs in New Jersey, had identified the root cause of the early transistor experiments’ failure. Their group’s transistor models, despite some promising methods, stubbornly refused to operate, responding only very weakly to the applied control field.
The scientists found that in the early field-effect designs, the regulating electric field of the control electrode did not penetrate far enough into the semiconductor. That field is what regulates the flow of electricity, so its failure meant that the “lock gate” was not working as intended. Instead, the material’s electrons were trapped in a thin barrier layer just below the surface, and the would-be transistor saw almost none of the regulating action.
A plastic wedge, gold foil, and germanium
First transistor (replica), Bell Labs, 1947. Image: Mark Richards
Brattain and Bardeen started tinkering with a new construction to see if they could find a workable answer. A germanium plate was set on top of a metal base. The plate was n-doped, meaning it had an abnormally high concentration of free electrons, brought about by the incorporation of a foreign element, most often phosphorus or arsenic. A very thin layer of p-doped germanium was then grown on top; boron or other foreign atoms flooded this layer with “holes,” that is, an abundance of positive charges.
The aha moment finally arrived when Brattain and Bardeen came up with the idea of using a triangular plastic wedge linked to a spring and then wrapping it all in gold foil. They created two closely spaced contact points by scoring the conductive gold foil at the wedge’s tip. These were now suitable for use as two separate electrodes, with each one connecting to a separate circuit through its conductive metal base.
The first test of Brattain and Bardeen’s transistor model was performed on December 16, 1947, at their lab. It functioned, with a weak current flowing from the emitter side of the gold tip to the base and a corresponding change in the current flow in the second circuit, the latter of which was dependent on the amount of current provided to the first gold electrode.
The device worked as a controllable switch and amplifier, increasing the input signal by a factor of a thousand, precisely as expected. On the way home that night, Brattain told his colleagues that he had just performed “the most important experiment” of his life, though he didn’t yet say what it was.
The first functional transistor
The first point-contact transistor’s conceptual layout.
On December 23, 1947, the two scientists finally showed their discovery to Bell Labs’ upper management. They used a microphone and a loudspeaker to show off the transistor’s amplification capabilities during the internal demonstration. The microphone was hooked up to the little circuit of the emitter electrode, while the loudspeaker was linked to the second circuit. This meant that the speaker had to faithfully duplicate the original sound in order for the transistor to have worked.
It was at this point that Brattain and Bardeen realized their dream of creating the first operational transistor. At last, a miniaturized semiconductor component that could replace bulky vacuum tubes had been created. At a press conference held by Bell Laboratories on June 30, 1948, the two scientists revealed their new invention to the world.
The double-point contact transistor
The point-contact transistor is based on the basic design pioneered by Brattain and Bardeen. These bipolar transistors differ from today’s prevalent field-effect transistors in that both positive and negative charge carriers participate in the amplification process. In other words, both the “holes” and the “electrons” in the semiconductor are mobilized.
In 1951, mass-produced point-contact transistors were used in the construction of a variety of products including telephone switchboards, hearing aids, oscillators, and even the first prototypes of television sets. These devices were the quickest switching transistors available up to 1953. In 1956, Brattain and Bardeen were honored with the Nobel Prize in Physics for their work on this.
Aside from the two American scientists, two German physicists, Herbert Mataré and Heinrich Welker, independently developed a very similar concept to the point-contact transistor without knowing about the progress being made at Bell Labs. Similarly, they built a functional point-contact transistor in 1948, although they were behind the competition.
The first bipolar transistor
John Bardeen, William Shockley, and Walter Brattain in 1948. It is generally agreed that the three of them jointly deserve the title “fathers of the transistor.”
In 1947, the American scientists John Bardeen and Walter Brattain devised the first point-contact transistor. It was a significant advance, but not yet practical enough to serve as the backbone of electronics and the foundation of the first computers. Because they were so thin, the “point-contact” electrodes tended to break easily, and since the essential charge carriers flowed only across a narrow region of the semiconductor’s surface, the amplification the device could provide was poor.
However, this all changed with the invention of the bipolar junction transistor (BJT) at Bell Labs in New Jersey only six months later. After William Shockley’s original attempts to create a field-effect transistor device in 1945 were unsuccessful, he went on to construct this new device. In the first presentation of Brattain and Bardeen’s point-contact transistor, Shockley was left out. To his dismay, he was not credited as a co-inventor on the patent for the first transistor and did not play a significant role in its creation.
The fact that he wasn’t an “inventor” dampened his enthusiasm for the team’s accomplishments. “I was frustrated that my own efforts, which had after all been going on for eight years at the time, had not resulted in any significant invention.” But that was about to change.
For a few months prior to this, Shockley had been studying the physics of p-n junctions in semiconductors, the processes that take place at the interface between two differently doped semiconductor zones. He was looking for a way to improve the device so that charges could be transported through the bulk of the material rather than just across its surface. Shockley sketched out the design of his new transistor on New Year’s Eve 1947, in a hotel room in Chicago.
Stacked layers
The basic operating concept of a bipolar transistor. Initially, the two n-p junctions form a barrier. Electrons flow from the emitter to the collector, and positively charged “holes” flow from the base to the emitter when a voltage is supplied to the base, reducing the barriers between the two regions.
The scientist created a sandwich out of three layers of germanium, which was the main innovation. A thin n-layer was sandwiched between two p-doped layers, or vice versa. Applying a voltage to the thin intermediate layer in such a surface transistor reduces the barriers at the n-p junctions of the semiconductor. Therefore, there is no longer any impediment to the free flow of electrons from one n-zone to the other, while positively charged holes from the p-layer can now move in the opposite direction.
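In modern textbook terms (not the actual figures of Shockley’s early devices), the amplifying action of such a junction transistor is usually summarized by an exponential dependence of collector current on base-emitter voltage and a current gain beta = I_C / I_B; the parameter values below are illustrative assumptions only.

```python
# Textbook summary of bipolar-transistor amplification (illustrative, not Shockley's data):
#   I_C ~ I_S * exp(V_BE / V_T)   and   I_C ~ beta * I_B
# I_S (saturation current) and beta (current gain) are made-up example values.

import math

I_S = 1e-14      # saturation current in amperes (assumed)
V_T = 0.02585    # thermal voltage at room temperature, about 25.85 mV
beta = 100       # current gain: collector current per unit of base current (assumed)

for v_be in (0.55, 0.60, 0.65, 0.70):
    i_c = I_S * math.exp(v_be / V_T)   # a small change in V_BE changes I_C enormously
    i_b = i_c / beta                   # the tiny base current that controls it
    print(f"V_BE = {v_be:.2f} V  ->  I_C = {i_c * 1e3:.3f} mA, I_B = {i_b * 1e6:.2f} uA")
```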
Bipolar transistors were far more powerful amplifiers than point-contact transistors because they worked on a different operating principle. Plus, they were more durable. Bell Labs had filed for a patent on the transistor a few days before Shockley’s public unveiling of it at a press conference on June 30th, 1948.
Even though it wasn’t the first fully working transistor, the bipolar transistor was a major step forward for semiconductor technology and ultimately modern electronics. These reliable parts, which were simple to standardize because of advances in crystal-growing techniques, found quick adoption in applications requiring either high-speed circuits or substantial amplification. Along with John Bardeen and Walter Brattain, Shockley won the Nobel Prize in Physics in 1956 for his work on the bipolar transistor.
With their combined efforts, the three physicists sparked a wave of invention that would permanently alter the state of the art. The invention of the transistor has had far-reaching consequences for our culture and economy; after all, it was thanks to their efforts that electronics and computer technology could be shrunk down to manageable sizes.
However, a crucial step was still lacking at the dawn of the digital era.
The beginning of the silicon era
However, the shift to silicon, the semiconductor used in almost all modern electronic circuits, had yet to be achieved: most transistors of the 1950s and 1960s were still fabricated from monocrystalline germanium. Germanium’s lower melting temperature makes it easier to process, and the rapid movement of charge carriers inside its crystal lattice allows for fast switching times.
The problem with germanium
Germanium, a semiconductor, was used to create the first transistors. (Credit: Jurii, CC BY 3.0)
However, germanium does have a few drawbacks as a transistor material. Its band gap is just 0.64 electron volts, which results in undesirable leakage currents. Once the temperature rises above around 167°F (75°C), thermal energy alone is enough to make the material conductive, so germanium transistors stop switching and cannot be used above that point. Silicon is far more robust in this respect: transistors made of silicon can operate at higher temperatures because of its wider band gap.
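A rough way to see the difference (a standard approximation, not a calculation from this article): the density of thermally generated charge carriers scales roughly as exp(-E_g / 2kT), so germanium’s narrower gap leaves it with thousands of times more leakage carriers than silicon at any given temperature. The silicon band-gap value used below is a commonly quoted figure, not one given in the article.

```python
# Rough comparison of thermally generated ("leakage") carriers in germanium vs. silicon.
# Uses the standard proportionality n_i ~ exp(-E_g / (2 k T)); only the ratio between
# the two materials is meaningful here. The silicon band gap is a commonly quoted value.

import math

k_B = 8.617e-5                                 # Boltzmann constant in eV/K
E_GAP = {"germanium": 0.64, "silicon": 1.12}   # band gaps in eV (Ge value as quoted above)

def boltzmann_factor(e_gap_ev: float, t_celsius: float) -> float:
    t_kelvin = t_celsius + 273.15
    return math.exp(-e_gap_ev / (2 * k_B * t_kelvin))

for t in (25.0, 75.0):
    ratio = boltzmann_factor(E_GAP["germanium"], t) / boltzmann_factor(E_GAP["silicon"], t)
    print(f"At {t:.0f} C, germanium has roughly {ratio:,.0f}x more thermal carriers than silicon")
```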
The difficulty in creating high-purity monocrystalline blocks of silicon stems from the material’s greater melting point of 2,580°F (1,415°C) and its increased reactivity. The electrical performance of the crystal is negatively impacted since the molten silicon reacts with practically every crucible, including quartz glass. Thus, it was initially quite difficult to manufacture silicon semiconductors with the necessary purity and doping.
Race to create the first silicon transistor
In 1954, researchers finally succeeded in overcoming these obstacles, and again, this occurred twice independently but almost simultaneously. The chemist Morris Tanenbaum at Bell Labs developed a process by which the doping rate of the silicon could be controlled while the single crystal was being pulled out of the melt. Thus, on January 26, 1954, the team developed the first n-p-n silicon transistor that had a reliable amplification performance. Bell Labs, however, initially chose not to share or patent this discovery.
Simultaneously, Gordon Teal, a chemist who had previously worked at Bell Labs, was developing a method at Texas Instruments to create silicon semiconductors with the needed characteristics. Because perfectly pure silicon was still out of reach, Teal’s team devised a way to make the interlayer of the transistor sandwich exceedingly thin, on the order of 25 micrometers. As a result, the semiconductor’s charge carriers were able to cross this barrier despite the remaining impurities. A functional n-p-n silicon transistor was achieved on April 14, 1954.
Silicon has replaced germanium as the dominant semiconductor in modern transistors.
A new age
“Contrary to what my colleagues have told you about the gloomy prospects for silicon transistors: I have several of them in my pocket right now,” Teal said at a technical conference on May 10, 1954, marking an early presentation of his new silicon transistor. This was in stark contrast to Bell Labs’ decision not to announce their breakthrough.
Teal then provided a concrete example of the benefits of these components by submerging a germanium transistor attached to a loudspeaker in hot oil, at which point the music abruptly stopped. He then tried it using a silicon transistor, and the sound remained intact. The PR stunt was successful; the Teal technique was adopted, in part because the rival method was more difficult to standardize. That’s how Texas Instruments got the ball rolling on mass-producing silicon transistors.
It was this breakthrough that ushered in the “silicon age” and paved the way for the explosive growth of the computer and semiconductor industries. New transistor designs allowed for mass manufacture, miniaturization, and combination into integrated circuits, and silicon transistors gradually displaced older germanium devices during the 1960s and 1970s.
These semiconductors made it possible to carry computers around and use them anywhere, and they also powered the global communication infrastructure. The transistor’s reputation as the “nerve cell” of the computer age is well deserved. It’s safe to say that life as we know it could not exist without the invention of the transistor.
Two hundred years ago, the American William Miller was inspired by a Bible passage to declare that the end of the world was near. Let’s shed light on the events that led to this conclusion. While the beginnings and conclusions of religious movements are often recorded, their demise can seldom be dated to the minute. The Millerites, who predicted the end of the world in 1844 and were proven wrong, met just such an end.
William Miller was the man who started the movement. Born in Pittsfield, Massachusetts, in 1782, he completed only a basic education; there is no indication that he ever went on to receive formal higher education or training of any kind, yet even as a child he read voraciously. As a young man he had plenty of opportunities to satisfy his hunger for knowledge by visiting numerous libraries.
Miller settled in the village of Poultney, Vermont, married at age 21, and quickly rose to prominence among his peers there; among other duties, he served as a justice of the peace in the community. He had a change of heart about religion after talking with many well-read people in the neighborhood.
William Miller (1782–1849), the American preacher who repeatedly forecast that a certain day would mark the end of the world. The predicted end never materialized.
He was raised in a Baptist family but drifted away from that faith as a young adult and came to consider himself a Deist. He still regarded himself as a believer, yet he was firm in his conviction that the key to understanding God lay in reason rather than in the revelation found in the Bible.
In light of his new beliefs, he no longer thought that miracles were signs from God. This was an outlook he would have to alter shortly.
Divine intervention
In 1812, Miller, along with several other local men, went to New York State to enlist in the United States Army and fight against the British. He was now exposed to the realities of battle. During the 1814 Battle of Plattsburgh, Miller’s fort was heavily bombarded by cannon fire; when a shell landed near him, killing four soldiers and wounding two more, he attributed his own survival to divine intervention.
Miller began to really contemplate the afterlife following the untimely deaths of his father and sister in 1815. He had second thoughts about his deism and started to read the Bible more intensively.
The Bible specifies the year of the end of the world
Miller drew one main conclusion from his reading of the Bible: the end of the world was near. Daniel 8:14 states: “He said to me, ‘It will take 2,300 evenings and mornings; then the sanctuary will be reconsecrated.’”
Miller argued that the days in this passage should be read as years; counting 2,300 years from the decree to restore Jerusalem in 457 BC brought him to 1843. Then Christ would return and cleanse the planet with fire. This meant that the end of the world was imminent: it would come in or before the year 1843. The year was 1822, so that date was not far off.
But Miller waited nearly a decade before he started sharing his thoughts with the public. He first proclaimed the coming end of the world to a small audience in 1831, and in 1832 he submitted 16 articles to the Baptist journal the “Vermont Telegraph.”
Since the end of the world was obviously not something that occurred daily, he was getting a lot of questions. Miller released a 64-page treatise in 1834 to save himself the trouble of personally responding to each inquiry about his beliefs.
The birth of Millerism
A chart illustrating Miller’s calculations placing the end of the world, and with it the Second Coming, in 1843.
From now on, Miller’s ideas would be spread through an extensive publicity drive. A group of people in Boston, inspired by a clergyman named Joshua Vaughan Himes (1805–1895), worked to spread the word about Miller via a number of brand-new publications.
Numerous new magazines were published for various audiences in both the United States (especially in New York City) and Canada.
Many of the 48 publications were short-lived, but together they helped turn Miller’s ideas from a relatively obscure doctrine into a national religious movement.
Miller had an immediate obligation to his followers to provide them with a precise date for the end of the world. But he could only provide a window of time: from March 21, 1843, to March 21, 1844.
Kind of disappointing
March 21, 1843, however, came and went without incident: no second coming of Jesus, and no final judgment. Many believers were not concerned, though, since they held that the end of the world could still be a year away. After some further math, a more precise date was determined: April 18, 1844.
When that day, too, passed, Himes, the Boston preacher and Miller’s follower, conceded in a piece published in the “Advent Herald” on April 24 that he had perhaps miscalculated a bit, but insisted that the end would still come.
His argument was backed up by Samuel S. Snow, a former skeptic turned Millerite, who in August 1844 proclaimed the “Seventh Month Message,” also known as the “True Midnight Cry,” fixing the date of the Second Coming at October 22, 1844.
Miller’s followers firmly believed that the end of the world would come on that day, a Tuesday. But despite the claims of Miller, Himes, and Snow, the only thing that vanished at the end of October 22, 1844, was the sun, and contrary to their predictions, it returned the next morning.
The supporters are turning away
The failure of Miller’s predicted end of the world was deeply discouraging for many of his followers, not least because most of them had given away or sold their possessions in preparation for it.
Many farmers had stopped cultivating their land, thinking it pointless; now they faced ruin. After the “Great Disappointment,” as the episode came to be known, the vast majority of Miller’s followers abandoned him and his ideas.
Millerism had lasting repercussions, however, both in the United States and beyond. On April 29, 1845, the “Albany Conference” convened, bringing together the movement’s main members under the leadership of Himes and Miller. There, Miller’s ideas were once again spelled out in detail and given dogmatic form. After the conference, the Advent Christian Church was established as an offshoot of the Evangelical Adventists, a group that today numbers over 25,000 members in the United States alone. Another new religious movement also developed in the wake of the “Great Disappointment”; its members codified their beliefs in what is now known as the Seventh-day Adventist doctrine, and over 19 million people throughout the globe are currently part of it.
On December 20, 1849, Miller passed away, still believing that the end of the world was imminent. It was a fine ride nonetheless.
George Smith, a regular visitor to the British Museum in London, made a remarkable discovery in November 1872 while poring over a piece of a clay tablet inscribed in cuneiform. Smith allegedly leaped up in excitement and stripped down to his underwear. It’s debatable if this really occurred. Without a shadow of a doubt, the 32-year-old had just discovered the Epic of Gilgamesh, one of the greatest literary masterpieces ever written and thought to have been lost for nearly 2,000 years.
Smith had no scientific training, but he did have a lifelong fascination with bygone civilizations. Born in 1840 to a working-class family in London, he taught himself to read cuneiform. After serving his apprenticeship as a printer, he tried his hand at banknote engraving, but the work did not satisfy him, and he spent as much of his lunch break as possible perusing the British Museum’s treasures, commuting there daily.
The earliest piece of literature was unearthed and translated by George Smith (1840-1876), an English Assyriologist.
Henry Rawlinson, a key figure in the decipherment of cuneiform in the 1850s, soon noticed him there. Rawlinson thought the young man promising and suggested that the museum hire him. The autodidact was now responsible for reassembling the museum’s shattered clay tablets.
Throughout the 19th century, scholars gained a deeper familiarity with the cuneiform script. Smith uncovered what is perhaps the most crucial text in this cryptic writing.
A massive collection of cuneiform texts
These clay tablets were mostly sourced from modern-day Mosul in northern Iraq, on the site of the ancient city of Nineveh. Historically, it was the capital of the Assyrian Empire. Around 650 BC, King Ashurbanipal commissioned the construction of an exceptional library. The collection was based on copies or confiscations of as many texts as feasible. However, in 612 BC, the palace and its library were completely destroyed in a fire. Broken into several pieces, the clay tablets were eventually lost for generations.
Smith, more than two thousand years after the clay tablets were first created, had the monumental chore of going through them. A considerable portion of the texts in the biggest collection of cuneiform writings dealt with administrative topics, such as invoices and receipts.
But Smith was methodically searching for literary texts.
That day in November 1872, he finally located his prize. The piece he was reading mentioned a powerful tide, a ship, and a bird searching for land. Smith instantly recognized the story: he was reading an account of the Flood, like the one in the Old Testament. Yet the clay tablet had been written much earlier than the Bible.
Smith unearthed a piece of a tablet containing the Epic of Gilgamesh, one of the first works of literature ever written. Gilgamesh, ruler of Uruk (where cuneiform writing was developed in the 4th millennium BC), oppressed his people, so the gods created Enkidu to fight against him.
Enkidu and Gilgamesh became friends, and the two went on many adventures together until it was obvious that Enkidu must die.
The gods decreed Enkidu’s death in council, and Gilgamesh, shaken, set out on a quest to live forever. When he reached Utnapishtim at the edge of the world, he heard from him the story of the Flood. In the end, Gilgamesh returned to his home city of Uruk.
In all, the Epic of Gilgamesh is made up of 12 tablets. Currently, just 38% of the original text’s 3,033 lines have been preserved in their entirety, and the Library of Ashurbanipal provided the majority of the surviving fragments.
Smith conducted excavations in Mesopotamia to locate the missing pieces
The fragment Smith had found in 1872 told only part of the Mesopotamian Flood narrative; the rest was still missing. So he prepared his own excavation at Nineveh. The British Museum, however, was not prepared to foot the bill for an expedition, so money had to be found elsewhere. The Daily Telegraph provided financial support, and in 1873 Smith traveled to the Ottoman Empire to search the ruins of Nineveh for the rest of the Flood story.
Excavations started on May 7, 1873, and one week later the excitement was at its peak: Smith had made a real find. He sent a hasty telegram to London, which turned out to be a blunder, because the Daily Telegraph promptly asked Smith to return to London. He went home in dismay and resolved to go back as quickly as he could. Things then moved fast: he began a second dig later in 1873 and returned to London in the summer of 1874 with many more fragments.
He was finally able to piece together the Flood narrative and other tablets of the Epic of Gilgamesh.
The pace he set was astounding. By the end of 1874 he had published translations of all the literary texts he had unearthed, and the following year he wrote no fewer than four volumes. He acted as though he knew his time was running out.
At the end of 1875 he set out for Mesopotamia once more, this time with the backing of the British Museum, but the expedition ended in tragedy. He was at first denied access to the dig site, and his traveling companion succumbed to cholera near Baghdad. Smith was eventually granted permission to excavate at Nineveh, but the extreme summer heat made the work grueling.
Smith then contracted dysentery in Syria, still unwilling to return to London with nothing to show for the journey. George Smith, a self-taught scholar who had changed careers and paved the way for modern Assyriology, died in Aleppo on August 19, 1876. He was 36 years old.
Cybernetics, the science of controlling machines, is one of the most important scientific developments of the twentieth century, and not only because it gave rise to the ubiquitous prefix "cyber," which has come to stand for all things computer-related. Cybernetics still shapes how we think about machines and technology today. In its earliest days, however, there were a variety of unconventional approaches, somewhere between the technological and the animal.
The history of cybernetics can be traced back to World War II, when the Allies were searching for new ways to defend against German aerial assaults. Ballistic calculations could determine where a projectile was going, but pinpointing the target's exact position was far more difficult, especially when the target was a moving airplane whose altitude and speed could change at any moment.
In 1940, President Franklin D. Roosevelt's administration established the National Defense Research Committee, which later evolved into the Office of Scientific Research and Development (OSRD), a government agency that funded research on military problems. The problem of inaccurate air defense was given its own department; by 1945 it had already supported 80 projects in the field. Among them was a four-page proposal by the mathematician Norbert Wiener (1894–1964).
Cybernetics would be established as a scientific discipline only after the war. First, however, there was a competing proposal to be reckoned with.
The pigeon idea predates cybernetics
Pigeons are placed in a low-stimulus laboratory apparatus known as a Skinner box, named after the psychologist who invented it, B. F. Skinner (1904–1990).
At the same time, another well-known scientist was tackling a problem with similar implications: the psychologist Burrhus Frederic Skinner (1904–1990), who worked on a rocket guidance system. His approach was proto-cybernetic rather than a computer-based or mechanical control technology, and pigeons played a central role in it.
The term "radical behaviorism" is closely associated with B. F. Skinner. He studied how organisms respond to the observable environmental factors that shape their behavior, deliberately leaving mental or inner states out of consideration, since they were not relevant to his approach.
His proposed solution for a novel rocket control system drew on one of his most influential research methods, "operant conditioning." Using the Skinner box, a laboratory apparatus of his own design, he showed that operant behavior, that is, an action an animal produces spontaneously rather than as a reflex response to a stimulus, could be reinforced with rewards. The underlying principle is that when an experimental animal is rewarded for a particular action, it will repeat that action more and more often.
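To make that principle concrete, here is a minimal, purely illustrative sketch in Python. It is not anything Skinner wrote; the action names and reward value are invented for this example. It simply shows how rewarding one action makes an agent choose it more often.

```python
import random

# Hypothetical illustration of operant conditioning:
# rewarding an action raises its weight, so it is chosen more often.
actions = {"peck_left": 1.0, "peck_center": 1.0, "peck_right": 1.0}

def choose_action():
    # Pick an action with probability proportional to its learned weight.
    total = sum(actions.values())
    r = random.uniform(0, total)
    for action, weight in actions.items():
        r -= weight
        if r <= 0:
            return action
    return action  # fallback for floating-point edge cases

def reinforce(action, reward=0.5):
    # The "reward" (e.g., a grain of food) strengthens the rewarded behavior.
    actions[action] += reward

# Training loop: only pecks at the center are rewarded.
for _ in range(200):
    a = choose_action()
    if a == "peck_center":
        reinforce(a)

print(actions)  # "peck_center" now carries by far the largest weight
```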
Pigeons as rocket pilots
From this, Skinner concluded that the behavior of living things could be predicted and manipulated, and he planned to apply that insight to missile guidance. He reasoned that conditioned pigeons placed in a rocket's control compartment could do the job: a projectile's course has to be continuously monitored and corrected, and Skinner believed pigeons were capable of exactly that. Automatic control systems, by contrast, were not yet technologically feasible in the early 1940s.
Nazi Germany's Aggregat 4 rocket, better known as the V2, was first deployed in World War II in 1944. It carried advanced navigation technology that automatically kept it on course, making it an early example of a guided missile. Meanwhile, the U.S. Office of Scientific Research and Development (OSRD) worked tirelessly to improve the country's inadequate air defenses.
Pigeon-guided missile nose cone by B. F. Skinner. (Credit: American History Museum)
Skinner's concept, originally titled "Project Pigeon," was later renamed "Project Orcon" (from "organic control"). The capsule in the rocket's nose was designed to hold up to three pigeons, each facing a sensor-equipped screen on which the target was displayed. The birds were trained to peck at the target's image. If the missile swerved off course, the image drifted away from the center of the screen; the pigeons kept pecking at it, and the sensors translated their pecks into corrections to the trajectory.
During an experiment, a pigeon pecks at the image of a ship on the sensor-equipped screen, guiding the rocket toward its target.
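The arrangement just described is essentially a closed feedback loop: the target's displacement on the screen acts as an error signal, and each peck nudges the missile's course so as to reduce it. The following short Python sketch is a hypothetical, highly simplified simulation of that idea; the variable names, the gain value, and the loop itself are invented for illustration and are not based on any Project Pigeon documentation.

```python
# Hypothetical sketch of the closed-loop principle behind the pigeon guidance scheme:
# the target's offset on the screen is the error signal, and each "peck"
# produces a proportional steering correction.

def simulate_guidance(initial_offset=10.0, gain=0.2, steps=30):
    """Drive the target's horizontal offset on the screen toward zero."""
    offset = initial_offset  # how far the target image sits from screen center
    for step in range(steps):
        peck_position = offset               # the bird pecks wherever the target appears
        correction = -gain * peck_position   # sensors turn the peck into a course change
        offset += correction                 # the missile turns, re-centering the target
        print(f"step {step:2d}: offset = {offset:6.3f}")
    return offset

if __name__ == "__main__":
    simulate_guidance()  # the offset decays toward zero, i.e. the target stays centered
```

Under these assumptions the offset shrinks geometrically each step, which is the same self-correcting behavior the pecking pigeons were meant to provide.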
Norbert Wiener was thus not the only one to obtain OSRD funding; Skinner received $25,000 (about $423,000 today). In October 1944, however, the OSRD decided that a more conventional technological solution was preferable to pigeon-controlled rockets, and the project was canceled. Skinner's idea was not entirely abandoned: U.S. Navy officials revisited it in 1948 under the name "Project Orcon," before the military finally gave up on the pigeon approach in 1953.
Skinner's pigeon guidance system was never deployed in battle. His other work, however, had a far greater impact on science and psychology; his studies laid the foundation for behavior therapy, for instance.
Rabbinic Judaism, Christian theology, and Islamic belief all agree that Moses existed. He was an Old Testament prophet who led the Hebrews out of Egypt toward the Promised Land and who is credited with receiving the Ten Commandments from Yahweh (God). Little information about Moses's life exists outside the canonical books, so little that in 1906 the historian Eduard Meyer declared that Moses never existed. Yet Moses, whom we know mostly from the Bible, and more specifically from the Torah, where he parts the waters of the Red Sea and brandishes the Tablets of the Law, has played a significant role in the history of the Jewish people.
Who was Moses?
The Finding of Moses. (Painting by Nicolas Poussin, 1594–1665)
Moses, a member of the Hebrew tribe of Levi, was born in the Egyptian province of Goshen sometime in the 13th century BC. Levi was one of the twelve Hebrew tribes that had settled in Egypt around the 17th century BC.
Moses was born shortly before the pharaoh (perhaps Ramses II or his successor Merneptah) began killing Hebrew infants to quell any potential insurrection. To keep him safe, Moses's mother placed him in a basket and hid it among the reeds of the Nile. Pharaoh's daughter took pity on the foundling and decided to adopt him, raising him in the palace as a prince.
Because she "drew him from the waters," the Bible says, she named him Moses. According to the most plausible theory, the name derives from the Egyptian word mosu ("son" or "child"); another theory traces it to the Hebrew verb meaning "to pull out" or "draw out" [of water].
Moses and the Burning Bush
Moses and the Burning Bush (by Arnold Friberg).
As an adult, Moses learned of his true origins and of the plight of the Hebrews while visiting a construction site. In an act of revolt, he killed an Egyptian who was mistreating his people, and after the crime he fled Egypt for the land of Midian. There the local priest, Jethro, took him in and gave him his daughter in marriage. During this time, God spoke to Moses from a "burning bush" on Mount Horeb (in Sinai), charging him with delivering the Hebrews from slavery in Egypt.
Fortified by God's message at the Burning Bush, Moses went back to Egypt to lead the Hebrews out of slavery and into Canaan, the Promised Land. With the help of his brother Aaron and the miraculous powers bestowed upon him by Yahweh (God), Moses obtained an audience with Pharaoh and asked that the Jewish people be allowed to go into the desert to celebrate Passover.
Despite the miraculous transformation of Aaron's rod into a serpent, Pharaoh refused and stepped up his persecution of the Hebrews. Undeterred, Moses appealed to the king again, only to be refused once more.
Yahweh then intervened to demonstrate his power: the Nile's water turned to blood, a plague struck the Egyptian cattle, locusts blanketed the ravaged land, darkness fell for three days, and finally all the first-born of Egypt perished in a single night. The death of his own first-born son was probably what moved Pharaoh to let the Hebrews go. So began a forty-year migration known as the Exodus.
Some scientists have suggested that a red algae bloom at the time depleted the Nile's oxygen, killing the river's fish.
The departure from Egypt and the crossing of the Red Sea
The Children of Israel Crossing the Red Sea. (Painting by Frédéric Schopin, 1804–1880; Art UK)
The Hebrews believed they had escaped Pharaoh's control, but Pharaoh changed his mind and sent his chariots after them. The Egyptian army closed in as the Hebrews stood before the Red Sea (the "Yam Suph").
Moses stretched out his hand over the sea, and the waters parted, opening a passage between two walls of water through which the people could cross.
When the Egyptians followed them in, Moses asked Yahweh to close the sea over them, drowning Pharaoh's army.
Under Moses's leadership the Israelites marched on; the Hebrews now had to cross the desert to reach their ancestral homeland. To relieve his people's hunger and thirst, Yahweh sent quail, then a dew that hardened once it evaporated (the manna of the desert, an edible substance seen as foreshadowing the Eucharist, the Lord's Supper), and finally water that Moses drew from a rock.
Tablets of Stone
On two tablets of stone, God outlines the Ten Commandments for Moses to read on Mount Sinai. (Painting by Joseph von Führich)
The Hebrews arrived in the Sinai desert three months after leaving Egypt. Moses left the people in the care of Aaron and climbed Mount Sinai. After fasting for forty days and forty nights, the prophet received the Ten Commandments from God.
These rules formed the basis of the covenant between Yahweh and his people.
When Moses returned to the Hebrews, he found that they had abandoned their trust in God and instead worshipped a golden calf that they had fashioned with the aid of Aaron.
In his rage, Moses smashed the Tablets of the Law he had received from God and then burned the idol. Yet the prophet begged Yahweh not to turn away from his people and to forgive them; Yahweh heard his plea and asked him to renew the covenant atop Mount Sinai. Forty days later, Moses came down from the mountain with two rewritten Tablets of Stone.
The Hebrew people pledged allegiance to the Law of Moses, which demanded strict monotheism and reverence and awe for a God who is both unseen and almighty. Moses then led the Israelites toward Canaan.
Moses died at the age of 120, on the threshold of the Promised Land. His legacy transcends Jewish history and has played a crucial role in the development of the early Judeo-Christian Church.