William Harvey was a fascinating individual. It is striking to think that a single published work changed the future of biomedical science and medicine more than any other publication of the preceding fifteen centuries. That work is Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus, published in 1628 by William Harvey. This thin book put an end to the physiological and medical dogmas of Galen of Pergamon, which had dominated the Western world since antiquity.
Who Was William Harvey?
William Harvey (1578-1657)
William Harvey came from a wealthy Folkestone family. He was the eldest of seven brothers, five of whom would later become merchants in London. He received his bachelor’s degree from Cambridge University in 1597 and then went on a long educational journey through France, Germany, and Italy. In 1602, he received a doctorate in medicine and philosophy from the University of Padua, where he attended the anatomy course of Fabricius ab Aquapendente (also known as Girolamo Fabrizio), who had discovered the valves of the veins. He returned to London immediately after graduation and was elected a member of the Royal College of Physicians in 1604.
Harvey enjoyed remarkable fame during his lifetime. He became the private physician of King James I and his son, Charles I. In addition to his intensive clinical practice, he served as a physician at one of the oldest hospitals in London, St. Bartholomew’s. As a scientist he was extremely patient, persistent, and careful; it took 25 years for his studies to bear fruit. He once complained to a friend that his medical practice had suffered because of his publications on blood circulation and because jealous colleagues called him “crackbrained.”
He was aware that he was leaving a mark on the history of medicine and did his best to secure his legacy. He left part of his fortune to the Royal College of Physicians and gave Cambridge’s Caius College the house where he was born and the land around it.
In On the Generation of Animals (1651), William Harvey wrote that nature should be investigated and the paths she shows us boldly followed, because only by moving from lower to higher levels, consulting the appropriate senses, can we penetrate to the very heart of nature’s mystery.
Blood and heart
William Harvey performs an experiment to prove his hypothesis of blood circulation to King Charles I of England. (Rue Des Archives/album)
According to Galen’s teaching, blood was produced in the liver and flowed to the right side of the heart; after passing into the left ventricle, it moved in a tidal, back-and-forth fashion between the left ventricle and the arteries. At the beginning of the 17th century it was still argued that the heart was the source of the body’s heat and that the lungs served to cool the blood. Diastole (the rhythmic expansion of the heart) was assumed to mix blood and air, and the warmed, revitalized blood then entered the circulatory system. The fact that blood is darker in the veins and lighter in the arteries was attributed to the different functions of the two kinds of vessels, such as nourishing the tissues and maintaining the vital spirit.
Harvey’s observations completely changed these ideas. He found that the left ventricle of the heart sends blood continuously and in one direction through the main arteries to the tissues, and that blood returning through the veins reaches the right ventricle, which sends it to the lungs. Harvey concluded that, for the system to work, the amount of blood leaving through the arteries must be the same amount that returns through the veins; in the periphery, therefore, blood had to pass from the arteries into the veins and move in a circuit. He reasoned that the same principle must apply to the circulation in the lungs: blood must flow from the right ventricle through the lungs to the left ventricle.
Ernest Board’s painting depicts William Harvey explaining his theory of blood circulation to his patron, King Charles I.
The observation that the heartbeat is synchronized with the pulse had led to the faulty idea that the heart and arteries contract and relax at the same time, and it was believed that the beat felt by the hand was the heart enlarging. Harvey refuted this idea by directly observing the beating hearts of animals through their opened chest walls: the beat felt at the chest is produced by the heart striking the ribs as it contracts to drive blood out. The pulse in the arteries, then, is caused not by diastole (expansion) but by systole (contraction).
William Harvey’s work, de Motu Cordis
A drawing from William Harvey’s de Motu Cordis, 1628, depicting the ligature experiment on the forearm.
To make the case for blood circulation in de Motu Cordis, Harvey resorted to quantitative reasoning, a fairly new approach at the time. He did not believe that the large amount of blood constantly entering the heart from the veins could come solely from the food consumed. He also noticed that the amount of blood flowing through the vessels was well above what was needed to nourish the various parts of the body. This simple reasoning led to the idea that a fixed amount of blood moves “cyclically” through the body. The contribution was truly revolutionary, and it would take many years to gain general acceptance.
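A rough modern illustration of the arithmetic (the figures below are today’s approximate values for an adult at rest, not Harvey’s own estimates):

\[
\underbrace{70\ \text{mL/beat}}_{\text{stroke volume}} \times \underbrace{72\ \text{beats/min}}_{\text{heart rate}} \approx 5\ \text{L/min} \approx 7{,}200\ \text{L/day},
\]

far more blood than food and drink could possibly supply in a day, so the same few liters must be circulating again and again.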
But Harvey not only made discoveries; he also created an experimental method that biology and medicine would follow for hundreds of years. A study always begins with questions (there are more than twenty of them in the first part of de Motu Cordis), some intended to politely expose the absurdity of the prevailing views, and the answers must be very clear.
These are followed by the questions that form the basis of the experiment. At the heart of his method was vivisection, the practice of operating on live animals. This experimental technique opened every door for Harvey. Observation at a single point in time (that is, the dissection of a dead animal) was not sufficient to answer certain functional questions; continuous, sequential observations of living animals were necessary. Ligating, cutting, and opening the vessels of different parts of the body were his means of understanding and revealing normal physiology.
William Harvey’s curious mind
Exercitationes de Generatione Animalium (Exercises on the Generation of Animals) was published by Harvey in 1651.
From the first days of his professional career, Harvey realized that every new species he worked with brought him new insight, and the animal species he studied were diverse. In the later years of his life, when he turned to studying embryogenesis in animals, his curious mind was at its most open. He always held objectivity as his maxim, but he could not help admiring creation and the creator behind it. He was interested in embryology and the earliest stages of development: What happens first? What follows next? Such questions stand out in Harvey’s 1651 book, Exercitationes de Generatione Animalium.
In addition to his intense curiosity about medicine, William Harvey was a revolutionary who pursued refined and disciplined thinking while conducting biological experiments. The result was a fundamentally new understanding of how the human body works. He showed his students how to ask the right questions, how to approach them, and how to answer them.
William Harvey quotes
“Very many maintain that all we know is still infinitely less than all that still remains unknown.”
“I profess to learn and to teach anatomy not from books but from dissections, not from the tenets of Philosophers but from the fabric of Nature.”
“The heart is the household divinity which, discharging its function, nourishes, cherishes, quickens the whole body, and is indeed the foundation of life, the source of all action.”
“Civilization is only a series of victories against nature.”
“Nature is a volume of which God is the author.”
“Moderate labor of the body conduces to the preservation of health, and cures many initial diseases.”
“I have often wondered and even laughed at those who fancied that everything had been so consummately and absolutely investigated by an Aristotle or a Galen or some other mighty name, that nothing could by any possibility be added to their knowledge.”
“Only by understanding the wisdom of natural foods and their effects on the body, shall we attain mastery of disease and pain, which shall enable us to relieve the burden of mankind.”
Who was Dmitri Ivanovich Mendeleev? All science students are familiar with the periodic table of chemical elements. The table can seem self-evident: how else would one arrange the elements if not by atomic weight? In fact, the origin of the periodic table is anything but simple; it required synthesizing a large amount of fragmented and often faulty chemical and physical data into a stable system. For this reason, many scientists prefer to speak of the “periodic law,” to emphasize the complex network of relationships behind the way the elements are arranged.
In the 1860s, several scientists were looking for a way to arrange the elements into some kind of table. However, most scholars accept the system Dmitri Ivanovich Mendeleev announced in 1869 as the first successful one, while noting that he needed many more years to perfect his table.
Who Was Dmitri Mendeleev?
Mendeleev was born in the small town of Tobolsk in western Siberia, Russia. His father was the principal of a high school there, but in 1834, the year Dmitri was born, he had to retire for health reasons on an inadequate pension. This placed the family’s livelihood on the shoulders of his mother, who came from a family of Siberian merchants and had to operate a glass factory she had inherited near Tobolsk to support the family. Even so, their financial situation gradually deteriorated.
His father died in 1847 and the factory burned down in 1848. When Dmitri graduated from high school in 1849, his mother set out to get him admitted to a university, first in Moscow and then in Saint Petersburg, but her efforts were futile. Dmitri Mendeleev finally enrolled in the St. Petersburg Main Pedagogical Institute in 1850, from which his father had graduated years earlier. Shortly after he began his studies, his mother died, but Mendeleev managed to continue his education and graduated in 1855. After a short period teaching at secondary schools in the south of Russia, he returned to Saint Petersburg and began postgraduate studies in chemistry.
In his early scientific studies, Mendeleev gained extensive knowledge of the chemical properties of the elements and of many compounds. His first published work examined the relationship between crystals and their chemical composition, and his master’s thesis investigated whether a relationship exists between the specific volumes of compounds and their chemical composition and crystallographic forms.
In 1859, Mendeleev went on a long study trip funded by a state scholarship. He traveled all over Europe but spent most of his time in Heidelberg, Germany, conducting original research for his doctoral degree. He attended the first international congress of chemists, the Karlsruhe Congress, held in Karlsruhe in 1860, which helped standardize key chemical concepts such as atomic weight and valence. The congress had a profound effect on Mendeleev’s way of thinking.
It also helped establish conditions, especially the standardization of atomic weights, that would prove important for the development of the Periodic Law, and it stimulated other scientists to develop tables for organizing the elements. In the 1860s many such tables were proposed, the most notable by Lothar Meyer and John Newlands.
When he returned to Russia in 1861, Mendeleev taught chemistry at various institutions while gradually working on his doctoral thesis and publishing various works on chemistry. After receiving his doctorate in 1865, he became a professor at the country’s most prestigious university, Saint Petersburg State University.
Systematizing the elements
The first English translation of the periodic table comes from the 1891 fifth edition of Mendeleev’s Principles of Chemistry.
In 1867, Mendeleev found the existing chemistry textbooks inadequate and decided to write one himself. This decision would prove decisive in the discovery of the Periodic Law. Mendeleev set out to organize a large amount of chemical data in a way that was useful and convenient for teaching. Textbooks of the time treated the elements in dictionary style or divided them into broad categories such as metals and non-metals. Mendeleev sought a more suitable method.
He began his book, titled Principles of Chemistry, with a broad treatment of basic chemical definitions, together with laboratory experiments for students to perform. From there he moved on to the more common elements and compounds, such as salt, oxygen, carbon, nitrogen, and hydrogen. At this point, probably in late 1868 or early 1869, he realized that he would need a different method of arrangement for the remaining elements.
He took atomic weight as the primary attribute of each element, which soon led him to the idea of the periodicity of the elements. He quickly obtained a preliminary result and published it in a Russian journal after a short presentation at a meeting of the Russian Chemical Society. A popular story holds that Mendeleev arrived at the Periodic Law after a dream on February 17, 1869; it seems far more likely that it was the product of a long period of careful thinking.
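A toy sketch of the kind of pattern he noticed (the element selection and rounded atomic weights below are purely illustrative, and Python is used only as a convenient notation): when the lighter elements are ordered by atomic weight and laid out in rows of seven, chemically similar elements such as Li/Na or F/Cl fall into the same column.

```python
# Order a few light elements by atomic weight and print them in rows
# of seven, so that chemically similar elements line up in columns.
elements = {
    "Li": 7, "Be": 9, "B": 11, "C": 12, "N": 14, "O": 16, "F": 19,
    "Na": 23, "Mg": 24, "Al": 27, "Si": 28, "P": 31, "S": 32, "Cl": 35.5,
}

ordered = sorted(elements, key=elements.get)  # lightest to heaviest
for start in range(0, len(ordered), 7):
    row = ordered[start:start + 7]
    print("  ".join(f"{sym}({elements[sym]:g})" for sym in row))
# Output:
# Li(7)  Be(9)  B(11)  C(12)  N(14)  O(16)  F(19)
# Na(23)  Mg(24)  Al(27)  Si(28)  P(31)  S(32)  Cl(35.5)
```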
Mendeleev had developed the essence of the system but still needed to consolidate it with detailed chemical and physical data showing the periodic properties of the elements. For nearly two years after the 1869 publication, he worked hard to support his initial insight with extensive data drawn from his own experiments and from a thorough search of the scientific literature. He was looking for a “natural system” in which the properties of each element would be periodically related to those of the elements surrounding it in the table. At the end of 1871, Mendeleev was confident enough to publish his results in a lengthy article in a distinguished German scientific journal, in which he predicted various chemical and physical properties of elements yet to be discovered.
Mendeleev’s manuscripts of the first periodic system of elements, February 17, 1869.
Mendeleev’s original publication of the Periodic Law attracted very little attention, except among a handful of scientists working toward the same goal. After 1875, however, and especially in the 1880s, this indifference began to change. The main reason was the discovery of new elements whose properties closely matched those of the unknown elements Mendeleev had predicted. In 1875, the French chemist Paul Emile Lecoq de Boisbaudran discovered the new element gallium, and Mendeleev soon saw that its properties coincided with those of one of his predicted elements. In 1879, the Swedish chemist Lars Fredrik Nilson discovered scandium and noted how well it matched Mendeleev’s predictions.
Many scientists declared that the periodic table fit the properties of the newly discovered elements as well as the known ones. In 1886, the German chemist Clemens Alexander Winkler discovered germanium, and once again its properties were in harmony with Mendeleev’s predictions. The Periodic Law was on its way to becoming a widely accepted scientific principle. Mendeleev did have priority quarrels with other scientists, especially Meyer, after the presentation of the Periodic Law, but thanks to his fiery and resilient personality he came to be seen by most as the principal discoverer of the Periodic Table.
Mendeleev pursued a distinguished career after his discovery of the Periodic Law. In addition to teaching and research, he actively advised the Russian government and the private sector on many economic issues, and he ended his professional life as head of the Bureau of Weights and Measures. Dmitri Mendeleev became an icon of Russian science, recognized throughout the country as a leading example of Russian scientific heroism.
Dmitri Mendeleev quotes
“There is nothing in this world that I fear to say.”
“I saw in a dream a table where all the elements fell into place as required. Awakening, I immediately wrote it down on a piece of paper.”
“Work, look for peace and calm in work; you will find it nowhere else.”
“It is the function of science to discover the existence of a general reign of order in nature and to find the causes governing this order. And this refers in equal measure to the relations of man – social and political – and to the entire universe as a whole.”
A list of famous scientists would be incomplete without Charles Darwin, who formulated the theory of evolution through the mechanism he called “natural selection.” Yet Darwin rarely used the word “scientist,” and never for himself. He was famous for his great advances in geology, but he rarely called himself a geologist; he greatly shaped our understanding of plant physiology, but he said he was not a botanist. Darwin considered himself a naturalist in the broadest sense of the word. His success rested on the way he crossed the boundaries between different fields of study and asked questions that spanned them.
Who Was Charles Darwin?
Darwin did not have a laboratory. He used Down House in Kent, where he lived with his family.
There were many factors behind Darwin’s scientific education: his father was a medical doctor; on his father’s side he was the grandson of the famous inventor and philosopher Erasmus Darwin; and on his mother’s side he was the grandson of Josiah Wedgwood, the technologist who founded the Wedgwood pottery works. Growing up in Shropshire, he and his brother became curious about chemistry and found enough money and space to set up a laboratory of their own. This was Darwin’s only laboratory; he began and continued his scientific career at home.
As the family’s younger son, Darwin needed a profession, and there were only a few respectable options. The most obvious was to become a doctor like his father, so he studied medicine for a while in Edinburgh. But he left without graduating and in 1828 went to Cambridge to read for a general undergraduate degree with a view to a career in the church, a common path for men of his social class. In Edinburgh, Charles had developed a keen interest in natural history; although he disliked the anatomy lessons and skipped them, he wrote his first known scientific work, an article on the seaweed-like sea creature Flustra, after spending hours on the beach with the zoologist Robert Grant.
At Cambridge, Darwin was a popular and sociable student who went to concerts and parties. But he was also a passionate insect collector and became friends with Adam Sedgwick, a professor of geology, and John Henslow, a professor of mineralogy and botany. Both men had a great influence on his scientific education.
Charles Darwin builds his reputation
Returning home from his post-graduation field trip with Sedgwick, Charles Darwin found a letter from Henslow proposing that he take part in a naval survey of South America. He would join HMS Beagle as an unpaid naturalist and companion to Captain Robert FitzRoy. He was proposed as much for his social position and calm temperament as for his budding scientific talent. “He is not yet a finished naturalist,” Henslow noted.
He traveled the world aboard HMS Beagle for five years, a voyage that made his scientific career.
The ship Beagle sailed on December 27, 1831. The voyage was supposed to last two years but was extended to five. For Darwin, this was a life-changing experience. When he left, he was a 22-year-old rookie with no long-term plans. When he came back from his trip around the world, however, he was a respected member of the scientific community.
Charles Darwin’s theory about Earth’s crust
Throughout the voyage, Darwin acquired practical scientific skills: observation, collection, preservation, rigorous record-keeping, classification, and microscope use. He packed samples of plants, birds, insects, fossils, and all kinds of sea creatures into crates and barrels and sent them to England. These collections were important but not unique; there were other collectors in South America too. Darwin once joked, “There are more Naturalists in the country than Carpenters or Shoemakers or any other honest trade.” What set Darwin apart was that he understood how the things he had collected fit together, making large-scale comparisons across geography and time.
He traveled hundreds of miles on horseback in the Andes and saw evidence of almost unimaginable changes in the terrain: fossilized trees that had once stood in water now stood in the highest mountain passes. Surveying the coast after an earthquake, FitzRoy showed him evidence of a slow but steady change in the relative levels of land and sea. Darwin’s observations agreed completely with Charles Lyell’s recently published theory, which claimed that the present Earth was the result not of a single great catastrophe but of the gradual operation of known causes over vast stretches of time. Darwin combined his observations from across the continent on roughly glued-together sheets of paper to create long geological cross-sections, explaining the new theory in terms of giant blocks of crust rising and falling on the molten interior below.
New hypotheses from Charles Darwin
Although Darwin said that he blindly gathered all kinds of phenomena and then drew general conclusions from them, his method was far more complex and ingenious than that. He did assemble a great deal of evidence to support his published arguments, but he was never afraid to set up an ambitious hypothesis at an early stage and then look for counter-evidence to test it. His notebooks suggest that Darwin began recording many of his most surprising ideas only after returning to England in October 1836, including the theory of the origin of species that he would publish more than two decades later. On his return to England he became famous in scientific circles, thanks to Henslow, who had circulated his writings on geology, and he was now celebrated by prominent scientists and had many research opportunities.
Geological Observations on South America / 1846.
Using the hypotheses of uplift and subsidence, Darwin proposed answers to two of geology’s most controversial questions. One was a striking success; the other was, in his own words, “one long gigantic blunder.” Darwin was aware of the puzzle posed by ocean coral reefs: since coral polyps cannot live at depths greater than about sixty meters, explaining how reefs could rise from the deep ocean floor was difficult. Lyell had recently argued that they must have grown on the rims of submerged volcanic craters.
Based on his theory of large-scale subsidence, however, Darwin suggested that corals first formed in shallow water around islands and then, generation after generation, kept pace with the islands’ gradual sinking. This elegant solution secured Darwin’s place among the world’s leading scientists. After his return to England he was elected secretary of the Geological Society in March 1838, and he quickly began to consider the possibility of a university career.
Charles Darwin and a biographical sketch of an infant
Darwin now turned his attention to a series of striking terraces known as the “parallel roads” that stretch around Glen Roy, part of the Great Glen valley system in the Scottish Highlands. Once thought to be man-made, the roads were explained by geologists as the shorelines of former lakes whose rising and falling waters had carved them. But there was no trace of the enormous sequence of dams this theory implied. Darwin suggested that the cause must instead have been the sea, which had retreated as the landmass rose. This too was problematic, since no fossils of sea creatures could be found there. Shortly after the idea was published, Louis Agassiz countered it with the argument that the barriers had been dams of ice from the Ice Age. It was a lesson to Darwin to be more prudent about what he made public.
Shortly after his field trip to Glen Roy, Darwin was elected a member of the Royal Society in January 1839. Five days later, he married his cousin Emma Wedgwood, and their first child was born in December of the same year. Ever the observer, Darwin recorded every aspect of his son William’s infancy. He used these notes first in his book The Expression of the Emotions in Man and Animals and later in his article “A Biographical Sketch of an Infant,” published in 1877 in Mind, a newly founded psychology journal.
The evolution of evolutionism
In 1842, Darwin and his family moved to the village of Downe, near London. The location was carefully chosen: close enough to London to maintain professional and personal ties, yet far enough away to deter unwanted guests. For the rest of his life this was his home and workplace, and his wife and children provided the loving, stable life he needed. Darwin was now a prominent writer who had proved himself with a best-selling book about the Beagle voyage. He also suffered from a chronic illness. Nevertheless, he made a very important friend, both personally and professionally.
Kew is still one of the most important botanical gardens in the world today.
The young botanist and explorer Joseph Hooker, who would later become director of the Royal Botanic Gardens at Kew, was appointed to identify Darwin’s Beagle plant specimens. They began a correspondence, and for the next forty years Hooker remained the touchstone for Darwin’s ideas.
Darwin was now settled, and amateur and professional naturalists around the world, including explorers, diplomats, and colonial settlers, shared their findings about plants, animals, and people with him. Hooker was one of the few people with whom Darwin shared his theory of the diversity of organic life and of how all living things descend from a single common ancestor, a theory he had been developing since the 1830s.
Darwin’s new profession: Ecologist
During his voyage on the Beagle, Darwin became increasingly aware of how difficult it was to distinguish between species and varieties, of the surprising similarity between many extinct fossils and living creatures, and of the precise fit of many organisms to their environment. The idea that species might transform over time was contested but not new; the French biologist Jean-Baptiste Lamarck had suggested that useful traits acquired by one generation could be passed on to the next. Authors such as William Paley, by contrast, explained this fit as the result of divine design in nature.
During his journey around the world, Darwin encountered an enormous diversity of species, including some obvious relatives. On his return to England he sought out dog and pigeon breeders, observing how dramatically the characteristics of domesticated animals could be changed in just a few generations by selecting from naturally occurring variation. Thomas Malthus’s An Essay on the Principle of Population, which explains how population growth is limited by competition for resources, gave Darwin the piece he needed to draft his new theory.
He argued that any trait that helps an individual, whether animal, plant, bird, or human, survive long enough to reproduce would be passed on disproportionately to subsequent generations. This principle of “natural selection” applied to every physical trait, whether coloration for camouflage, the ability to fight or flee, or the ability to reach food sources that others could not.
With this mechanism, Darwin realized that populations such as the finches he had collected from different islands of the Galapagos (though it took John Gould’s later work to make their distinctness clear) could adapt to local conditions. Given enough time, the offspring of such organisms could diversify, occupy different environmental niches, and evolve into new species. Although the word “ecology” did not enter English until 1876, Charles Darwin was already an ecologist in many respects.
The birds from Malaysia
Charles Darwin and Alfred Russel Wallace
What had happened with geology happened with biology as well: once Darwin had outlined the overarching theory, it shaped his entire research program. Even the eight-year taxonomic study of living and fossil barnacles that he finally published in 1851 was, seen in the light of his main theory, a test of how well the facts conformed to it. While working on the barnacles, Darwin was also drafting a big book to be called Natural Selection, which would never be published in that form, and he was collecting data from every area of natural history, largely through his growing network of correspondents.
One of them was the naturalist and commercial collector Alfred Russel Wallace, who sent Darwin bird specimens from Malaysia in late 1856. Both men knew they shared similar ideas, and Darwin told Wallace that he planned to publish a book on the species problem. Besides some chapters of that big book, Darwin already had drafts setting out almost the whole theory, one written in 1842 and another in 1844. In 1858, Wallace sent Darwin an essay containing an idea strikingly similar to his own theory of how species change over time. This pushed Darwin to publish sooner.
Fearing he would lose priority, but also distracted by serious illness in two of his children, Darwin sent Wallace’s text to Charles Lyell as Wallace had requested. Lyell and Hooker arranged for both Wallace’s essay and a hastily assembled extract of Darwin’s to be read at a meeting of the Linnean Society. Devastated by the death of his baby son, Darwin was not present. The reading attracted almost no notice, but Darwin did not give up. He quickly expanded the extract with material from his big book and completed On the Origin of Species by Means of Natural Selection in less than a year. Unlike Wallace, Darwin had the social position, scientific reputation, and depth of knowledge to ensure that the new book was taken seriously. It was the greatest breakthrough of Charles Darwin’s life.
His creative experiments
Darwin did not end his career with the publication of The Origin of Species. Although he suffered from attacks of chronic illness, he was only fifty years old, and most of his books were still to come. He still had no theory of how heredity worked. Gregor Mendel developed such a theory, the foundation of what is now called genetics, in the 1860s, but his work was not understood until after Darwin’s death, and Darwin never heard of it.
Ten years after the Origin of Species, he published his extensive two-volume work The Variation of Animals and Plants under Domestication, in which he described a hypothetical mechanism of inheritance he called “pangenesis.” He argued that particles he called gemmules, circulating in the body fluids and passed from parents to offspring, acted as the seeds for the development of particular organs. Few people were convinced, and his cousin Francis Galton’s experiments with blood transfusions failed to produce any evidence to support the idea.
Pangenesis and pigeon varieties.
The next two books, The Descent of Man and Selection in Relation to Sex and The Expression of the Emotions in Man and Animals, were written to show that seemingly human attributes such as aesthetic sensibility, conscience, and even religious feeling might be part of an evolutionary continuum with the rest of the animal kingdom. Natural selection, he argued, was supplemented by sexual selection: organisms not only had to live long enough to reproduce, they also had to have traits that would appeal to potential mates.
Darwin’s main struggle, however, was not with people or even animals, but with plants. He saw humans simply as a distinct variety of primate, and in his view there was no strict boundary between animals and plants. At Down, he conducted innovative experiments in his garden and greenhouse, including cross-breeding to produce greater diversity, and he studied plants that behave in animal-like ways and respond to external stimuli, such as climbing vines and insect-eating plants.
Charles Darwin’s last book
Though he was criticized later in life for the amateur-looking nature of his working methods, Darwin was a skilled experimenter and by no means out of touch with the advances of science. Thanks to his botanist son Francis, who conducted experiments for him, he was also in contact with the university laboratories being established in Germany. He examined how important controlled growing conditions were for cultivating seeds brought from one end of the world to the other, and he was among the first to use scientific surveys and to sign petitions to Parliament on how science affects public life.
Darwin’s last book, The Formation of Vegetable Mould through the Action of Worms, published in 1881, returned to a line of investigation begun decades earlier and grew out of experiments he had made with his children at home. It was also a pioneering work in showing the importance of seemingly insignificant creatures to the wider cycles of nature. Darwin lived to see this final book outsell all of his previous ones, and the next year his life came to an end. Because of his groundbreaking ideas and his fame, he was buried with great ceremony at Westminster Abbey in London.
Let’s talk about the history of surgery’s development. Columbia University’s medical school was once referred to as the “Columbia School of Physicians and Surgeons,” a name that reflects the independent, sometimes opposed, development of medicine and surgery. Even today, some surgeons in the UK proudly use the title “Mr.” rather than calling themselves “Doctor.” It is a distinction they have earned.
The surgical interventions of ancient times were still in use until the Middle Ages.
The first surgery performed on a human was probably done soon after the invention of cutting tools. Evidence of the earliest surgical interventions dates back to the Neolithic Age (10th to 6th millennium BC), and the first brain surgeries date back as early as the 8th millennium BC. Trepanation, the practice of piercing the skull with a surgical saw to reduce pressure on the brain, was already a common technique. A piece of one of the skull bones (frontal, parietal, or occipital) of a living patient was removed to expose the dura mater, the hard, fibrous membrane that forms the outermost covering of the brain. Although there was no anesthesia to prevent pain and no antisepsis, the patient’s chances of surviving these operations performed thousands of years ago were quite high, as long as the dura mater was not damaged.
As far back as the 2500s BC, Egyptians were successfully circumcising both men and women, as evidenced by many carved reliefs that survive to the present day. They also performed various amputations (the surgical removal of parts of the body), using willow bark and leaves as a medicine against infection. Ancient Egyptian medical papyri give detailed instructions on a wide range of surgical procedures, such as how to set a broken leg and sew up a large cut.
Hindus of the ancient Indian civilization used surgical methods to remove bladder stones, tumors, and even diseased tonsils, and to treat broken legs. But their greatest contribution to medical science was in the field of plastic surgery. Its history began in ancient India around 2000 BC, when such operations were in great demand because of the punishment of cutting off the nose or ears imposed on criminals.
Sushruta Samhita, a text on surgery
The surgical methods applied to people whose noses or ears had been cut off were first described in a book named the Sushruta Samhita. Sushruta focused on nasal surgery but, surprisingly, also performed eye operations such as the removal of cataracts. In his book, he divided surgical interventions into seven categories: Chedya (excision), Lekhya (scarification), Vedhya (puncturing), Esya (exploration), Ahrya (extraction), Vsraya (evacuation), and Sivya (suturing). The following passage is a section from the book:
When a person’s nose is cut off or damaged, the surgeon takes a leaf of a plant, cut to the shape and size of the damaged area, as a model. This leaf is placed on the patient’s cheek and a leaf-sized piece of skin is taken from the cheek (but during this procedure, one side of the piece of skin must remain attached to the cheek). Then the cut edges of the nose are freshened, and the piece of skin taken from the cheek is carefully laid over the damaged area and sutured at the edges. The physician then places two thin straws in the nostrils to ease breathing and to keep the sutured skin from collapsing. Powdered licorice root and barberry are applied to the area, which is then covered with cotton. When the grafted skin has grown together with the skin of the nose, the physician cuts through its remaining attachment to the cheek.
Sushruta Samhita
Corpus Hippocraticum and the Hippocratic Oath
Hippocrates refusing the gifts of Artaxerxes, king of the Persians and enemy of the Greeks, who had asked for his help in curing a plague.
In the 4th century BC, the Greek physician Hippocrates made great contributions to surgery with writings that describe various surgical operations in detail. Such operations always demanded courage and strength. The surgical instruments of the time were made of iron, copper, or copper alloy, and these rough tools were used to remove bladder stones, as described in the Corpus Hippocraticum. There is an irony here, given that the Hippocratic Oath forbids cutting for the stone with the words “I will not cut, even for the stone, but I will leave such procedures to the practitioners of that craft.” The ancient Greeks also inserted a hollow metal tube through the urethra to empty the bladder. This pipe, known today as the catheter, was made of straight copper for women and S-shaped copper or lead for men. Like all procedures of the time, it was dangerous and caused the patient great pain.
The philosopher Celsus wrote that a surgeon must be vigorous, his senses strong and sharp, and his nature brave; he must be merciful enough to want to cure his patient, yet not so moved by cries of pain that he hurries his work. He must proceed as if the screams never reached him.
The barber surgeons
Barber surgeons working on a boil on the forehead of a man.
During the Middle Ages (5th–14th centuries), surgery was considered a trade for the lower classes, and surgeons were held in far lower regard than trained physicians. Surgical interventions were performed only by barbers, and as a result the field of surgery went through a long period of stagnation. Interestingly, the barbers who wandered from town to town in this period were entrusted with bleeding patients copiously, removing tumors, pulling teeth, stitching cuts, and treating various diseases. The barber’s pole sign, a staff with a helix of colored stripes, dates from the period when barbering and surgery were not yet separated. Barbers then bled people in the belief that it protected them from disease: the red stripe stood for the patient’s blood, the white stripe for the bandage wrapped around the patient’s arm, and the pole itself for the stick the patient gripped during the procedure.
Surgery on the rise
The guide to surgery and practical medicine Chirurgia Magna enlightened the people of the Dark Ages.
There was no real improvement in surgery until the French surgeon Guy de Chauliac wrote his great work Chirurgia Magna in 1363. This work allowed surgery to regain respectability in Europe, if not the rest of the world. Over the following three hundred years, both surgery and medicine witnessed major developments. The French surgeon Ambroise Paré developed ligation, in which vessels are tied off to stop bleeding. Ligation put an end to cauterization, the ancient practice of searing a wound with a hot iron or boiling oil to stop bleeding. With a better understanding of how blood moves through the body and of the role of the capillaries, surgical procedures became much more effective.
Until the 1840s, however, surgeons rarely had the opportunity to reach deep into the human body and operate on vital organs, because of the risk of infection and the pain the patient felt. Then, on March 30, 1842, the physician Crawford Williamson Long removed one of two tumors from the neck of a patient named James Venable using ether as an anesthetic. It was the first time in the history of medicine that anesthesia was used for surgery. However, because Dr. Long did not publish his successful results until 1848, William Morton, who worked in the same field, collected the praise for this great invention.
The birth of modern surgery
Antiseptic surgical environments were pioneered by Joseph Lister.
Once the pain of surgical intervention had been minimized, the only remaining obstacle preventing surgery from revolutionizing the treatment of countless diseases was the risk of infection. Louis Pasteur’s discovery that fermentation and decay are caused by microbes in the air was the first step toward understanding infection. When the British surgeon Joseph Lister applied Pasteur’s findings in 1865 and introduced antiseptic surgery, modern surgery was born.
Lister developed a series of antiseptic measures, including a carbolic acid spray, to ensure hygiene in the operating room; today the mouthwash Listerine bears his name. The Hungarian physician Ignaz Semmelweis and the American physician Oliver Wendell Holmes also advanced hygiene in medical practice, insisting that physicians wash their hands and put on clean clothes before surgery.
When the astronomer Galileo Galilei turned his newly built telescope toward the sky on January 7, 1610, he watched a point of light travel slowly across the clear night over the city of Padua, Italy. It was the object he was looking at: the planet Jupiter. To his amazement, he realized that four “little stars” accompanied the planet. They were in fact moons orbiting the giant planet.
Galileo Galilei’s First Telescope
What he saw contradicted the traditional understanding of the Church, which held that all objects in the sky revolve around the Earth. Galileo’s discoveries would change the way the Church thought about the universe in a revolutionary way.
Traditional beliefs about the nature of the universe were rigid and long established. For 300 years, the Catholic Church and the universities had accepted the theories of the Ancient Greeks, particularly those of Aristotle and Ptolemy: the Earth sits motionless at the center of the universe, and all other objects in the sky revolve around it. Since no one could see any roughness on the Moon, the planets, or the stars, these bodies were held to be perfect, moving in perfect circles, the most flawless of shapes.
The Church adopted this ancient perspective because it supported its mission of helping sinful humanity turn toward divine perfection. Two beliefs underpinning the view were vital: first, that the heavens must always be considered perfect; second, that the world in which humans live must be the fixed center of everything, the focal point of divine creation. The theories of Aristotle and Ptolemy, which supported this understanding, gradually became religious dogma.
By Galileo’s time, this dogma had lost some of its former strength. The Polish canon and mathematician Nicolaus Copernicus had suggested in 1543 that the motion of the planets could be explained more simply if the Sun were taken to be the center of everything. Giordano Bruno was burned at the stake in 1600 for his religious convictions, one of which was his insistence that the Earth orbits the Sun. In short, Galileo knew what he was facing.
The Third Telescope
Galileo’s war with the Church could have cost him his life.
In 1582, at the age of 18, Galileo began the work that would later help shape several basic laws of motion. In 1609, while teaching mathematics in Padua, he heard of the Dutch spectacle-maker Hans Lippershey, who had created a primitive telescope by combining two lenses in a tube. Galileo built himself a telescope that magnified objects only three times, then a second within a few months, and finally a third that magnified 32 times. It was this third telescope that he used to study the sky in 1609-1610.
The discoveries quickly followed one another, each a challenge to the teachings of the Church. The lunar surface was not perfect: there were craters, mountains, and plains. The Sun, until then regarded as “untainted,” had spots; Jupiter had moons that clearly did not revolve around the Earth; and Venus showed phases that could only be explained if it revolved around the Sun rather than the Earth. And many previously unknown stars could now be observed.
In short, Galileo’s telescope provided convincing evidence that we live on a planet that revolves around the Sun, not at the center of the universe. Copernicus was right, and the Church’s doctrine on this subject was wrong, as Galileo clearly showed in Sidereus Nuncius (Starry Messenger or Sidereal Messenger), which he wrote in 1610.
The War Lost by the Church
Galileo published his views in a “dialogue” that weighs the rival systems in which either the Earth or the Sun sits at the center of our planetary system. But this clever device could not save him from the wrath of the Church.
Drawing on the detailed observations of the Danish astronomer Tycho Brahe (1546-1601), his colleague the German astronomer Johannes Kepler (1571-1630) showed that the planets follow not perfect circles but ellipses. Kepler formulated three laws describing the speed and orbit of a planet, thereby supporting the theories of Copernicus and Galileo. But it was still not easy for Galileo to prove that the Earth was moving, and in 1615 the Church responded to the challenge.
The Pope declared that the doctrine of a central, motionless Sun was “false, absurd, and formally contrary to religious belief and the Bible,” and Galileo was informed that he must change his views. He withdrew from public life until 1632, when he made a dangerous comeback with Dialogo sopra i due massimi sistemi del mondo (Dialogue Concerning the Two Chief World Systems), which supported Copernicus’ idea.
Galileo was ordered to Rome in 1633 and made to retract his words for opposing religious doctrine. He did so under threat of torture, though some rumored that he murmured “E pur si muove” (And yet it moves). Galileo remained under house arrest until his death in 1642, but that could not prevent the spread of his ideas; the religious imposition was beginning to unravel in the face of science. Still, it would take 359 years, until 1992, for Pope John Paul II to describe Galileo’s conviction as a “tragic mutual incomprehension.”
When Did We Start Observing the Sky?
Tycho Brahe rejected Copernicus and remained true to the idea that the planets turn around the Sun while the Sun turns around the Earth. Brahe mapped the stars and placed them on the 12 bands of the zodiac, which were thought to affect people’s lives.
Since the ancient Babylonian civilization, that is, since roughly 2000 BC, the sky was explored not for objective knowledge but because it was believed to control people’s lives. The Sun and the Moon were evidently linked to human life, since they governed the seasons and the tides, and so people worshiped them as gods. Other celestial bodies were likewise seen as divine beings with their own special powers over humans, and it was thought that the stars must hold knowledge of the future. Mystical belief and practical knowledge grew together: in the ancient world, astronomy and astrology were one and the same.
The Babylonians, Egyptians, Greeks, and Chinese all had long traditions of observing the sky. The Babylonians were able to calculate the length of the year to within four and a half minutes, yet in all these cultures astronomy was regarded as a means of supporting astrological prophecy. It was the Babylonians who devised the signs of the zodiac still used by astrologers today. Ptolemy’s geometric picture of the universe, based on circular motion, was the central theory of his work, the Almagest, and he argued that people’s behavior, height, appearance, and even national character could be read from the stars.
Belief in the prophetic power of the stars survived the rise of Christianity and eventually enjoyed a great revival in medieval Europe. Though his meticulously crafted star catalog would play a crucial role in showing that astrology was not a science, Tycho Brahe still believed in its reality, and Tycho’s assistant, Johannes Kepler, also made his living partly by preparing horoscopes.
But the work of Brahe, Kepler, and Galileo provided proof that astronomy was a science that looked into the universe with a purpose quite different from astrology’s.
How Did Galileo First Start Researching?
As a young man, Galileo was sent by his father to the University of Pisa to study medicine, but he turned instead to mathematics.
Galileo’s research had its origin in an event that occurred while he was studying medicine at the University of Pisa. Watching a lamp swinging from the ceiling of a church, he noticed that even as the swings grew shorter, each back-and-forth movement seemed to take the same amount of time. Seven years later, while teaching mathematics, he proved that the time a pendulum takes to complete a swing stays the same no matter how wide it swings; the period changes only when the length of the pendulum changes.
Galileo also found that altering the weight of a pendulum made no difference to its timing, in complete contradiction of Aristotle’s supposed “law” that heavy objects fall faster. In later experiments with balls rolling down inclined planes, he discovered that falling objects always accelerate in the same way, regardless of their weight.
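In modern notation (a result formulated after Galileo, not in his own terms), the period of a pendulum swinging through small angles depends only on its length L and the gravitational acceleration g, not on its mass or the width of the swing:

\[
T = 2\pi \sqrt{\frac{L}{g}}
\]

so, for example, a pendulum about one meter long completes a full swing in roughly two seconds.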
Giovanni Cassini discovered four moons of Saturn and the division in its rings.
After Galileo’s death, a new generation emerged, improved his two main scientific tools (the telescope and the pendulum clock), and greatly extended the revolution he had started. Galileo’s small telescope could magnify 32 times; by the mid-17th century, astronomers were working with instruments 11.9 ft (3.6 m) long that magnified 50 times, and some even built giant devices 200 ft (60 m) long. Meanwhile, telescopes were fitted with new devices for measuring the positions of the stars and the sizes of the planets.
Galileo had also suggested that pendulums could be used to regulate clocks, and their use brought a striking improvement in timekeeping. For the first time, clocks could measure minutes and seconds and thus time the movements of celestial bodies. More discoveries followed. Johannes Hevelius, a Polish scientist, mapped the Moon. In the Netherlands, Christiaan Huygens discovered that it was Saturn’s rings that changed the planet’s appearance. An Italian named Giovanni Cassini spotted a giant spot in the atmosphere of Jupiter and found that the planet takes 9 hours and 56 minutes to spin on its axis.
The telescope and the pendulum clock were combined in a brilliant experiment in 1670. Two astronomers, Cassini, working in Paris, and Jean Richer, in French Guiana, simultaneously measured the position of Mars. Because the two observers were about 4,000 miles (6,400 km) apart, Mars appeared in a slightly different position against the fixed background stars in each telescope. This difference, the parallax, made it possible for Cassini to calculate the distance between Mars and Earth.
Cassini thus had a measuring rod for the distance to any planet, or to the Sun itself. He calculated the Earth-Sun distance as 87 million miles (140 million km), an error of only about 6%. For the first time, people were learning not only the structure of the solar system but also its incredible dimensions.
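The geometry behind the measurement is simple; the numbers below are rough, modern illustrative values rather than Cassini’s actual figures. A baseline b seen from a distance d subtends a small parallax angle θ ≈ b/d, so measuring θ gives the distance:

\[
d \approx \frac{b}{\theta}, \qquad b = 6{,}400\ \text{km},\quad \theta \approx 24'' \approx 1.2\times 10^{-4}\ \text{rad} \;\Rightarrow\; d \approx 5\times 10^{7}\ \text{km},
\]

roughly the Earth-Mars distance at a close opposition; Kepler’s laws then fix the relative scale of all the orbits, which converts this one distance into the Earth-Sun distance.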
The Development of Science After Galileo
The telescope revealed a new world to explore, and while it allowed some questions to be answered, it raised many new ones. The most important was: why do the Earth, the other planets, and their moons move in such orbits? Galileo’s telescope and the improved models that followed showed that the heavens were in some ways like the Earth, made of material objects governed by the same rules. This discovery laid the groundwork for a major revolution in astronomy that would culminate in the work of Isaac Newton, a generation after Galileo’s death.
The atmosphere of thought that prevailed in England in the middle of the 17th century was very different from that in Italy. Since they did not have to fight the impositions of the Church, British scientists had great freedom to experiment and develop theoretical science.
In 1665, Newton analyzed the nature of light at his home at Woolsthorpe, in Lincolnshire. He then invented a powerful mathematical tool, the calculus, a method for analyzing rates of change in quantities. This was when he first began to understand the force that controls the movements of celestial bodies, but it would take another 20 years before his findings were published in his Principia Mathematica (Mathematical Principles of Natural Philosophy).
There he first described the force of gravity: objects attract each other with a force proportional to their masses, and this force decreases in inverse proportion to the square of the distance between them. In other words, doubling the distance reduces the force to a quarter of its value. Behind Newton’s mathematics lay an insight of intriguing simplicity, said to have come when he watched an apple fall in his garden under the force of gravity.
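In modern notation (with G the gravitational constant, m₁ and m₂ the two masses, and r the distance between their centers), the law and the doubling example read:

\[
F = G\,\frac{m_1 m_2}{r^2}, \qquad \text{so at distance } 2r:\quad F' = G\,\frac{m_1 m_2}{(2r)^2} = \frac{F}{4}.
\]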
The story of the apple is so well known that it is often suspected of being a myth. It appears to be true, however, even if the apple did not actually fall on the scientist’s head. The same rules govern apples, moons, pebbles, and planets: the law of gravity is universal and can be stated in the language of mathematics. This was Galileo’s true legacy.
Galileo and the Leaning Tower of Pisa
An oft-told story says that Galileo dropped two spheres of different weights and sizes from the top of the Leaning Tower of Pisa and saw them fall at the same speed and reach the ground at the same time. But there is no record that this experiment was ever actually performed, and even if it had been, it would have been difficult to observe and correctly evaluate the results.
In a vacuum, all objects fall at the same rate. In air, however, the rate of fall reflects a complex relationship between an object’s density and its size. Small objects fall more slowly than larger ones of the same material because they have more surface area relative to their weight; a mouse can survive a fall that would shatter a horse. Pendulums and inclined planes were undoubtedly more reliable tools for scientific research.
Bibliography:
Sharratt, M. (1994). Galileo: Decisive Innovator. Cambridge: Cambridge University Press. ISBN 978-0-521-56671-1.
Drake, S. (1973). “Galileo’s Discovery of the Law of Free Fall”. Scientific American. 228 (5): 84–92. Bibcode:1973SciAm.228e..84D.
Drake, S. (1990). Galileo: Pioneer Scientist. Toronto: The University of Toronto Press. ISBN 978-0-8020-2725-2.
Shapere, D. (1974). Galileo, a Philosophical Study. University of Chicago Press.
When William Harvey discovered that the blood circulates through the body in a closed system, the idea of blood transfusion followed. A few decades later, the Oxford physician Richard Lower began working on dog-to-dog and other animal-to-animal blood transfusions. In the 17th century and afterward, transfusions of blood, milk, and salt water were attempted in various countries to meet patients' need for blood, but these often ended in disaster.
Discovery of Blood Transfusion
Karl Landsteiner.
By the 20th century, when microbiological research was coming to the fore, a great deal had been learned about various infections and, more importantly, about human cellular defenses (the white blood cells) and antibody responses. Blood transfusion, and the revolutionary developments it brought with it, helped earn the 20th century the name the “age of immunology.” The biggest development in this field came in 1901.
The Austrian physician Karl Landsteiner made the key discovery while investigating failed blood transfusions. When he mixed small samples of blood from different patients, the red blood cells sometimes agglutinated (clumped together), but not always. It was this same agglutination that caused illness and death after incompatible transfusions. From these observations, Landsteiner concluded that the reaction depended on the presence or absence of two antigens, called A and B, attached to the outer layer of the red blood cell membrane.
Later, he mixed blood samples taken from his physician friends and sorted them into group A or group B according to whether they were compatible or clumped. He found one blood type that did not clump when mixed with either the A or the B group. Realizing that its red blood cells carried neither antigen on their outer membrane, Landsteiner called this blood type the 0 (zero) group (which later became the letter “O”). After further mixing and observation, he identified another group, called AB, whose cells carry both antigens.
Unfortunately, the remaining details of blood transfusion were not worked out easily or quickly. It took another forty years for transfusion to become a routine life-saving practice, with the discovery of other antigens (Rh, M, N, and P), the development of anticoagulants and storage media that keep donated blood from clotting, the establishment of blood banks, and the outbreak of World War II. As transfusion became widespread, diseases such as viral hepatitis and, later, AIDS were sometimes transmitted through transfused blood. This danger has since been largely eliminated by screening donors' blood with radioimmunoassay and other methods.
Blood Transfusion, and the Race Wars
The Nuremberg Law for the Protection of Blood and German Honor.
Today we understand how the word “value” has drifted from its meaning and, especially in the recent past, degenerated into a political tool; the examples mentioned above make this degeneration easier to see. Since before written history, even a single drop of blood has been highly valued: it was thought to reflect our personalities, determine our race, and symbolize the living species. In 1935, the German doctor Hans Serelman had a patient in need of an emergency blood transfusion. This was at a time when blood banks (apart from the first one, established in Leningrad in 1932) did not yet exist, and transfusions were made directly from the donor's vein to the recipient's vein. Because no compatible donor could be found, Serelman gave the patient his own blood. Instead of being praised for saving the patient's life, he was sent to a concentration camp: he was a Jew, and he had “contaminated” the blood of the German race.
In the years that followed, Germany set about preventing eight thousand Jewish doctors from practicing and reducing Jewish “influence” in the field of medicine, among many other measures. German immunology research focused on finding differences between pure Aryan blood and Jewish blood, a pursuit that contributed nothing to science. The Nuremberg Law for the Protection of Blood and German Honor imposed severe restrictions on blood donation, requiring that donors carry “sufficiently pure Aryan blood,” in the name of creating a pure Aryan race.
American Racism
The US started accepting blood from Black donors after the Pearl Harbor attack.
If we look at the United States, we see similar racist practices. The army was segregated, and the Red Cross refused to collect blood from Black donors. After the attack on Pearl Harbor, the need for blood was so great that the institution began to accept Black donors' blood, but it labeled and processed that blood separately. In the late 1950s, the state of Arkansas passed a law requiring the separation of blood from Black and white people, while Louisiana made it a crime for physicians to give “Black blood” to white patients without permission.
It is impossible to know today how much death and suffering widespread racial discrimination between Black and white people has caused. Some of the legislators who supported these laws, policies, and practices also lead the parties that govern us; it is they who decide who can get health care, how organs are distributed for transplants, who has the right to abortion, what becomes of stem cell research, who can access confidential medical records, who owns the human genome, and the quality of the air we breathe and the water we drink.
In 1887, when Santiago Ramón y Cajal was a young anatomist, he saw for the first time nerve tissue stained by the new method described by the Italian scientist and physician Camillo Golgi. He described this moment of enlightenment as follows:
What an unexpected sight! Sparse, smooth, and thin black filaments or thorny, thick, triangular, stellate, or fusiform black cells could be seen against a perfectly translucent yellow background! One might almost liken the images to Chinese ink drawings on transparent Japanese paper […] this is the Golgi method
Santiago Ramón y Cajal, Histologie du Système Nerveux de l'Homme et des Vertébrés (Histology of the Nervous System of Man and Vertebrates), 1909
Who Was Santiago Ramón y Cajal?
Santiago Ramón y Cajal was born on May 1, 1852, in a small village in the Spanish countryside. His father was a barber-surgeon who later, with great effort, managed to earn a medical degree and wanted his son to go to medical school. But like many talented young men, Cajal was a rebellious adolescent who disliked the school's strict discipline. He intended to become an artist, having shown talent for it from an early age. The idea horrified his father, and the father eventually won: Cajal graduated from the medical school in Zaragoza in 1873.
He immediately joined the army medical service but, in less than a year, contracted malaria and tuberculosis in Cuba and was sent home. Since anatomy was the only subject that truly interested him, Cajal decided to build a career that relied heavily on his artistic skills. He rose through increasingly prestigious academic positions in Valencia, Barcelona, and Madrid and was internationally recognized for his discoveries on the cellular architecture of the nervous system; in 1906, he shared the Nobel Prize in Physiology or Medicine with Golgi.
Seeing Inside the Brain
A neuron made visible by the Golgi staining technique, showing the fine structure of its dendrites and axon.
Until the middle of the 19th century, the use of the microscope to examine cells and tissues was limited by primitive techniques. This was a particular problem for the nervous system. The “cell theory”, the claim that all living things are made up of cells, was explicitly stated in 1839 and held to be valid for almost all organs; only the nervous system seemed to be an exception. The nervous system clearly worked by electricity, but how was that electricity produced and transmitted, and what was its relationship to the tissue's fine structure?
The brain and spinal cord seemed to consist mostly of fibers. Although cells were visible, they were hard to distinguish amid the masses of fibers, and it was not known whether they served some secondary purpose, such as nourishing the fibers, or had a greater significance. Golgi solved the problem by finding a way to stain nerve tissue with silver chromate. Because only a small fraction of the cells took up the stain, individual nerve cells and their pathways in the brain could be seen clearly for the first time.
But Golgi staining was unreliable and difficult to reproduce, and, as Golgi himself reported, it was little used in the period 1880–1885. Discouraged, Golgi moved away from neurohistology and turned to work on malaria, which brought him new fame. Santiago Ramón took over his method, improved the staining technique, and went on to make many original discoveries over the years. He observed numerous types of nerve cells, now called neurons, and their relationships with one another in different parts of the central nervous system. The depth and breadth of his observations are so impressive that today's neuroanatomists still cite him when writing up their findings.
Drawing of a Purkinje neuron (Jan Evangelista Purkinje) from the human cerebellum by Santiago Ramón y Cajal, 1899
Certain features of Cajal's personality had important effects on his success. He had the imagination to choose the most important problems to work on and to identify exactly where in the central nervous system to look for them. Since an entire neuron could rarely be seen at a single glance, and most people never saw one, he traced it patiently and meticulously, examining it from many viewpoints. He then synthesized the details of these views and described clearly what he had seen.
He had no equal in capturing the relationships between neurons, which could rarely be seen together in a single microscope field. His greatest strength was not tracing long pathways through the nervous system but rather working out the connections between cells in particular regions, such as the cerebellum, the cerebral cortex, and the retina. Even when the gap could not actually be seen, he did not hesitate to draw one between a neuron and the fibers of its neighbor, to emphasize the discontinuity he was convinced was there.
Santiago Ramón y Cajal: The Founder of Modern Neuroscience
Drawing by Cajal showing the complex connections between layers of the hippocampal region in the rat brain, emphasizing the one-way path of nerve impulses along dendrites, cell body, and axon.
His work was important in establishing the main principles that we now consider essential to understanding the nervous system. The first principle concerns the basic unit of the nervous system, the neuron: shorter dendrites and a longer axon branch out from the cell body, which nourishes these extensions. The second principle is that nerve impulses travel in only one direction within a neuron: from the dendrites to the cell body, and from the cell body along the axon and its branches, which connect to the dendrites or cell body of another neuron. The third principle is that each neuron maintains its integrity: there is a physical discontinuity between neurons, and connections do not involve fusion with the extensions of another neuron. Nerve impulses cross this discontinuity (later called the synapse) in only one direction; that is, nerve pathways are chains of neurons and synapses.
He was an uncompromising advocate of the concept of autonomous neurons following restricted conduction pathways through the nervous system, a view that became known as Cajal's neuron doctrine. He reasoned that it could explain the rapidly accumulating evidence that particular functions are localized in specific areas of the brain and spinal cord. Studying the cells of the cerebral cortex proved especially fruitful: this work became the basis for mapping the visual, tactile, and motor representations at different locations in the cortex.
The neuron theory continued to develop and showed such great explanatory and predictive power in 20th-century physiology and pathology that it came to be regarded as the foundation of modern neurobiology, and Cajal as the founder of modern neuroscience. Even so, the theory was criticized during Cajal's lifetime, and he was still defending it when he died in 1934. It was finally confirmed only in the 1950s, when synapses were visualized with the electron microscope.
Santiago Ramón y Cajal Quotes
“Any man could, if he were so inclined, be the sculptor of his own brain.”
“As long as our brain is a mystery, the universe, the reflection of the structure of the brain will also be a mystery.”
“Perseverance is a virtue of the less brilliant.”
“Nothing inspires more reverence and awe in me than an old man who knows how to change his mind.”
“Heroes and scholars represent the opposite extremes… The scholar struggles for the benefit of all humanity, sometimes to reduce physical effort, sometimes to reduce pain, and sometimes to postpone death, or at least render it more bearable. In contrast, the patriot sacrifices a rather substantial part of humanity for the sake of his own prestige. His statue is always erected on a pedestal of ruins and corpses… In contrast, all humanity crowns a scholar, love forms the pedestal of his statues, and his triumphs defy the desecration of time and the judgment of history.”
“The mediocre can be educated; geniuses educate themselves.”
Who was Edwin Powell Hubble, and what exactly did he discover about our universe? Edwin Hubble is undoubtedly one of the greatest astronomers of modern times. He proved that the Milky Way, which contains our Sun, is an ordinary galaxy, and that the objects then known as “spiral nebulae” are separate galaxies. In doing so, he revolutionized our understanding of nature and of the size of the universe.
Who Was Edwin Hubble?
Hubble's career had a turbulent start. He was born in Marshfield, Missouri, and grew up in Wheaton, Illinois; he studied mathematics and astronomy and received a bachelor's degree in science from the University of Chicago in 1910. He then spent three years at Oxford as one of the early Rhodes Scholars, studying law, and he retained the British mannerisms he adopted there for the rest of his life. Returning to the United States, he briefly turned to law, taught at a school in Indiana, and then completed a doctorate in astronomy at the University of Chicago.
During the First World War he served in the US Army and quickly rose to the rank of major; although he never saw active combat, he always liked to be addressed as “Major Hubble.” After the war, in 1919, he was invited to join the staff of the Mount Wilson Observatory near Pasadena, California. He stayed there for more than thirty years, until his death. His marriage was a happy one; his wife Grace outlived him.
He lived at an exciting time for astronomy. The great 100-inch Hooker reflecting telescope had just been installed on Mount Wilson. It was not only the largest and most powerful telescope in the world; it was unique, and Hubble took full advantage of it. It had long been known that the objects called nebulae were of two different types: some, like M-42 in Orion, were thought to be glowing clouds of gas, while others, like M-31 in Andromeda, seemed to be made of stars. (The letter M in these names honors the French astronomer Messier, who in 1781 compiled a catalog of more than a hundred nebulous objects; M numbers are still in use today.)
Edwin Hubble was sure that the gaseous nebulae belonged to the Milky Way system, but he was not sure about the starry ones. Could they be entirely separate systems, far beyond our own? They were certainly so far from the Earth that their distances could not be measured with the methods available at the time. Many of them, including M-31, were spirals resembling pinwheels. They had another distinctive feature: measurements made elsewhere, notably by Vesto Slipher in Arizona, showed that they were receding at very high speeds. Slipher used spectroscopy: the light from these starry nebulae was shifted slightly toward the red, which indicated a velocity of recession, the famous Doppler effect.
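A minimal sketch of the Doppler reasoning behind those measurements, using the small-redshift approximation v ≈ cz. The wavelengths below are invented example values, not Slipher's data.

```python
# For recession speeds much smaller than light, the fractional shift in
# wavelength equals v/c, so v = c * z where z is the redshift.
c = 299_792.458            # speed of light, km/s
lambda_rest = 656.3        # rest wavelength of a hydrogen line, nanometers
lambda_observed = 656.9    # hypothetical observed (redshifted) wavelength

z = (lambda_observed - lambda_rest) / lambda_rest   # redshift
v = c * z                                           # recession speed, small-z approximation
print(f"z = {z:.5f}, recession speed ~ {v:.0f} km/s")
```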
The Cosmic Disagreement
The 100-inch Hooker reflecting telescope at the Mount Wilson Observatory in California in 1917. Hubble used this telescope to discover, in 1929, that the Universe was expanding.
Shortly after his arrival at Mount Wilson, Hubble became convinced that the spirals were independent systems. But other famous astronomers disagreed. One was Harlow Shapley, director of the Harvard College Observatory and one of the first to measure the dimensions of our galaxy. Another was the Dutch astronomer Adriaan van Maanen, who had been at Mount Wilson since 1912.
Van Maanen had tried to measure internal motions in the spirals and believed he had found stars within them moving relative to one another. This meant the spirals could not be as far away as Hubble believed, because at a distance of even 100,000 light-years the apparent shift of individual stars would be far too small to measure. Therefore, Van Maanen argued, the spirals must lie within our own galaxy. (The disagreement was deepened by the fact that the two colleagues at the observatory disliked each other.)
Hubble decided to try a completely different method: he would use the stars known as Cepheid variables. Many stars, including the Sun, have shone at more or less the same brightness for centuries, but some are different; they brighten and fade, some at regular intervals and some unpredictably. The Cepheid variables, named after Delta Cephei, the best-known member of the class, have absolutely regular periods ranging from a few days to several weeks, so their behavior can always be predicted. Delta Cephei, easily visible to the naked eye in the northern hemisphere, has a period of 5.4 days; that is, it returns to maximum brightness every 5.4 days. The relation between the true luminosity of a Cepheid and its period was also known: the longer the period, the more luminous the star.
Another northern-hemisphere Cepheid, Eta Aquilae, has a 7.2-day period and is more luminous than Delta Cephei. If we measure the period of a Cepheid, we can therefore find its luminosity and, from that, its distance. Moreover, all Cepheids are very bright, so they can be seen even far beyond our own galaxy. (Recall that a “light-year” is the distance traveled by a ray of light in a year, more than 5.9 trillion miles (9.5 trillion kilometers). The most recent measurements give Delta Cephei's distance as 982 light-years.)
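As a sketch of that reasoning (not Hubble's actual numbers), the snippet below combines an assumed period–luminosity relation with the standard distance-modulus formula. The coefficients, period, and apparent magnitude are illustrative values only.

```python
import math

# Period -> luminosity -> distance, in outline.
def absolute_magnitude_from_period(period_days, a=-2.8, b=-1.4):
    # Longer period means brighter (more negative M): M = a*log10(P) + b.
    # a and b are rough example coefficients, not a definitive calibration.
    return a * math.log10(period_days) + b

def distance_parsecs(apparent_mag, absolute_mag):
    # Distance modulus: m - M = 5*log10(d) - 5  =>  d = 10**((m - M + 5)/5)
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

period = 31.4          # hypothetical Cepheid period, in days
apparent_mag = 18.6    # hypothetical measured apparent magnitude
M = absolute_magnitude_from_period(period)
d_pc = distance_parsecs(apparent_mag, M)
print(f"Estimated distance: {d_pc:,.0f} parsecs (~{d_pc * 3.26:,.0f} light-years)")
```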
“Further, faster”
Hubble's discoveries brought fame to both him and the Mount Wilson Observatory. In 1931, Albert Einstein, Walter Mayer, and other scientists toured the facility.
Hubble now set out to find Cepheids in the starry nebulae, the spirals. Only the 100-inch telescope on Mount Wilson was powerful enough for the task, and that telescope was at Hubble's disposal. He soon found what he was looking for. He located Cepheids in many spirals, including M-31, and showed that they were far too distant to be part of the Milky Way. These really were separate galaxies, and the discovery, announced on January 1, 1925, changed our view of the Universe. Van Maanen had made an entirely honest mistake: when measuring his photographic plates, he had not taken into account certain photographic effects that produced the appearance of motion that was not real.
Hubble, now famous, continued his research for the rest of his life and made further important discoveries. His chief assistant was Milton Humason, who had begun his career driving the mules that carried materials up the mountain while the Mount Wilson Observatory was being built, and who later became a world-renowned astronomer at the same institution. In particular, they found a link between a galaxy's distance and its speed of recession: everything obeyed the rule “the further, the faster.”
They had discovered that the entire universe is expanding, although it is not quite correct to say that all galaxies are moving away from one another. Galaxies form clusters, and it is each cluster that moves away from all the other clusters. Our galaxy and the Andromeda spiral are members of the cluster known as the Local Group; its members will eventually collide, but fortunately not for billions of years.
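The “further, faster” rule is today written as Hubble's law, v = H0 × d. The sketch below uses a rough modern value of the Hubble constant for illustration; Hubble's own 1929 estimate was several times larger.

```python
# Hubble's law: recession speed is proportional to distance.
H0 = 70.0                      # km/s per megaparsec (approximate modern value)

def recession_speed(distance_mpc):
    return H0 * distance_mpc   # km/s

for d in (10, 100, 1000):      # example distances in megaparsecs
    print(f"{d:>5} Mpc  ->  ~{recession_speed(d):>8,.0f} km/s")
```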
Universe Prior to Edwin Hubble
A Hubble Space Telescope view of the Orion Nebula (also known as Messier 42), which lies 1,300 to 1,400 light-years from Earth. It is one of the brightest nebulae and can be seen in the night sky with the naked eye. It is estimated to be 24 light-years across and about 2,000 times the mass of the Sun.
Today we can see galaxies 10,000 million light-years away, but because we cannot yet see out to 13,700 million light-years, we cannot observe the universe as it was immediately after the Big Bang, the moment when astronomers believe the entire cosmos suddenly came into being. For now, we do not know whether we will ever see that far back. Before Hubble began his research on Cepheid variables, the whole universe was believed to consist of the Milky Way.
In addition to his scientific and technical papers, Hubble found time to write some popular books, the best known being The Realm of the Nebulae. He was not the most popular astronomer on Mount Wilson, as his colleagues found him stiff and distant. Patrick Moore, however, remembered him differently: they met many times after the Second World War, and Hubble was unfailingly courteous and helpful to Moore, then a young Briton whose main interest was the Moon and who was only an amateur.
He received almost every honor the scientific world could offer. Had he not died suddenly in 1953, he would most likely have received that year's Nobel Prize in Physics. He will never be forgotten, and it was certainly a fitting decision to name the first large space telescope, carried into orbit by the shuttle Discovery in 1990, after Edwin Hubble.
Edwin Hubble Quotes
“With increasing distance, our knowledge fades, and fades rapidly. Eventually, we reach the dim boundary—the utmost limits of our telescopes. There, we measure shadows, and we search among ghostly errors of measurement for landmarks that are scarcely more substantial. The search will continue. Not until the empirical resources are exhausted, need we pass on to the dreamy realms of speculation.”
“The history of astronomy is a history of receding horizons.”
“Science is the one human activity that is truly progressive. The body of positive knowledge is transmitted from generation to generation.”
Johannes Kepler transformed the ancient cosmological tradition and laid the foundation for modern science by treating astronomy as a branch of mathematical physics, when it had until then been regarded as a branch of the liberal arts. The laws of planetary motion that bear his name laid the groundwork for Isaac Newton's law of universal gravitation; they still describe the orbits not only of planets but also of dwarf planets, comets, asteroids, trans-Neptunian objects, and even exoplanets, the planets of distant stars. He also laid the groundwork for modern optics with his work on lenses, mirrors, and the human eye.
Who Was Johannes Kepler?
Johannes Kepler was born to a German Lutheran family in Weil der Stadt, a free imperial city of the Holy Roman Empire. He received his bachelor's and master's degrees from Tübingen University, where he was a student of Michael Maestlin, one of the first astronomers to accept Copernicus' Sun-centered theory. Kepler not only believed in the physical reality of heliocentrism but also considered it religiously significant, and he defended Copernicus' views on both theoretical and theological grounds.
Although he had wanted to become a clergyman, Kepler agreed in 1594 to serve as mathematician in Graz, Austria. As district mathematician he was expected to prepare an annual calendar and make astrological predictions, and his successful predictions for 1595 made his name. A geometric idea that came to him while teaching led to the publication of his book Mysterium Cosmographicum (Cosmographic Mystery) in 1596, the first published defense of the Copernican system. It describes a divine architecture of the universe based on the relationship between the five Platonic solids, the speeds of the planets, and their distances from the Sun. Among those who read the work were the Italian mathematician Galileo Galilei, who admitted to being a supporter of Copernicanism, and the Danish nobleman Tycho Brahe, who remarked that his own observations could help improve Kepler's theory.
In 1600, Protestants were forced to leave the city of Graz. To escape the growing religious tension, Kepler went to Prague, where Tycho was serving as imperial mathematician to Emperor Rudolf II. When Tycho Brahe died in 1601, Kepler succeeded him in the post. It would take him more than 25 years to complete the Rudolphine Tables from Tycho's data. In the meantime, he worked on projects that became the basis for two of his masterpieces: Astronomiae Pars Optica (The Optical Part of Astronomy) of 1604 and Astronomia Nova (New Astronomy) of 1609.
Kepler’s First Two Laws of Planetary Motion
The prologue of Kepler’s Astronomia Nova.
The first of these works was the founding text of modern optics. It describes the refraction of light, the inverse-square law for the intensity of light (stated at the time as an intuitive assumption), reflection in plane and convex mirrors, the pinhole camera, the correction of myopia and hyperopia with lenses, and the anatomy of the human eye and of vision. The second was on the principles of planetary motion, the laws that would bear Kepler's name and secure his fame.
Based on Tycho's data on the orbit of Mars, Kepler proposed in Astronomia Nova the first two of what are now known as Kepler's three laws of planetary motion (a short numerical illustration follows the list):
All planets move about the Sun in elliptical orbits, with the Sun as one of the foci.
A radius vector joining any planet to the Sun sweeps out equal areas in equal lengths of time.
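As a small illustration of the first law, the sketch below evaluates the polar equation of an ellipse with the Sun at one focus, using approximate modern orbital elements for Mars; these are illustrative figures, not Tycho's or Kepler's own values.

```python
import math

# Kepler's first law in polar form: r(theta) = a*(1 - e**2) / (1 + e*cos(theta)),
# with the Sun at one focus of the ellipse.
a = 1.524   # semi-major axis of Mars, in astronomical units (AU), approximate
e = 0.0934  # orbital eccentricity of Mars, approximate

def heliocentric_distance(theta_rad):
    return a * (1 - e**2) / (1 + e * math.cos(theta_rad))

perihelion = heliocentric_distance(0.0)       # closest approach to the Sun
aphelion = heliocentric_distance(math.pi)     # farthest point from the Sun
print(f"Mars perihelion ~ {perihelion:.3f} AU, aphelion ~ {aphelion:.3f} AU")
```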
Astronomia Nova did not appear until 1609 because Kepler had to negotiate with Tycho's heirs, who held the legal rights to the data. Meanwhile, Tycho's son-in-law, Frans Tengnagel, wrote a preface warning Kepler's readers against the author's physical arguments. Despite this warning, Astronomia Nova is a mathematical masterpiece. Although Newton's law of gravity later disproved Kepler's claim that a magnetic force from the Sun drives the planets, Kepler's laws of planetary motion still stand.
A Message From Galileo to Kepler
Sidereus Nuncius is a short astronomical treatise published by Galileo Galilei in New Latin on March 13, 1610.
News that Galileo was studying the sky with a new invention, the telescope, soon reached Prague. Galileo sent Kepler a copy of Sidereus Nuncius (The Starry Messenger), hoping that a favorable assessment from the imperial mathematician would add to his fame. Without a telescope of his own, Kepler could not confirm Galileo's observations. Nevertheless, he wrote a booklet, Dissertatio cum Nuncio Sidereo (Conversation with the Sidereal Messenger), publicly endorsing Galileo's discovery of Jupiter's four moons.
Anti-Copernicans had argued that if the Earth revolved around the Sun it would leave its Moon behind, yet Jupiter plainly moved while keeping its moons. Kepler thus became the first prominent astronomer to publicly support Galileo. In his 1611 book Dioptrice (Refraction), Kepler explained how the arrangement of concave and convex lenses in Galileo's telescope worked, and he introduced his own instrument, built with two convex lenses and giving greater magnification than Galileo's design, now called the Keplerian telescope.
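The standard relation behind both designs is that angular magnification is the ratio of the objective's focal length to the eyepiece's. The sketch below assumes that relation; the focal lengths are arbitrary example values, not those of Galileo's or Kepler's actual instruments.

```python
# Telescope in normal (afocal) adjustment: magnification = f_objective / f_eyepiece.
def angular_magnification(f_objective_mm, f_eyepiece_mm):
    return f_objective_mm / f_eyepiece_mm

print(angular_magnification(1000, 50))  # 20x
print(angular_magnification(1000, 25))  # halving the eyepiece focal length doubles the power: 40x
```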
Kepler's years in Prague were his golden age. In 1611, Emperor Rudolf II, whose mental health was failing, was forced to cede power to his brother Matthias, and when Rudolf died a year later Matthias became Emperor. Matthias kept Kepler on as imperial mathematician, but Kepler was permitted to withdraw from political and religious affairs and move to Linz as mathematician to the Upper Austrian Estates.
The Success and Conflict in Linz
This fold-out plate appeared in Kepler's Mysterium Cosmographicum of 1596. Here Kepler suggested that the six known planets were arranged according to the five Platonic solids nested within spheres: the tetrahedron, hexahedron (cube), octahedron, icosahedron, and dodecahedron.
The fourteen years Kepler spent in Linz were as productive as his years at the imperial court. His position allowed him to continue his research and to pursue his work on the Rudolphine Tables, while he supplemented his income by selling wall calendars and tables showing the daily positions of the planets. In a 1613 work he considered the most efficient way to pack objects (stacking oranges in crates, for example), a conjecture whose correctness was proven only in the late 20th century. In 1615, his Nova Stereometria doliorum vinariorum (New Solid Geometry of Wine Barrels) became the first book printed in Linz. Despite its surprising subject, it contributed to the development of methods that would later become part of integral calculus. In 1618 and 1620 he published the first and second volumes of his Epitome Astronomiae Copernicanae, his textbook of Copernican astronomy; the third and final volume appeared in Frankfurt in 1621.
Kepler’s Third Law of Planetary Motion
Tycho Brahe showcases his Tychonic System to Emperor Rudolf II in Prague.
Alongside these professional achievements, however, Kepler faced many hardships during his years in Linz. His first wife, Barbara, had died before he left Prague, and he considered eleven candidates for a new wife before settling, in 1613, on 24-year-old Susanna. In 1615 he learned that his mother, Katharina, had been accused of poisoning another woman and brought to court on a charge of witchcraft. In 1617 and 1618, two of his daughters and his stepdaughter died in quick succession. In 1619 he came into conflict with the Lutheran Church over his unorthodox views; he held, for example, that Calvinism and Catholicism, as well as Lutheranism, contained religious truths.
Nevertheless, he continued work on the Rudolphine Tables and managed to concentrate on another important project. His Harmonices mundi libri V (Five Books on the Harmony of the World), published in 1619, contained the third law of planetary motion, which describes the relationship between a planet's orbital period and its distance from the Sun:
The square of the orbital period of the planet is directly proportional to the cube of its average distance from the sun.
This law later formed the basis of Newton's proof that the force holding the planets in their orbits, gravity, varies inversely with the square of a planet's distance from the Sun. The law applies equally to the orbit of Mercury, the innermost planet, which circles the Sun in 88 days, and to that of Neptune, discovered in 1846, which takes about 164 years. (Neptune completed its first full orbit since its discovery only in 2011.)
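A quick numerical check of the third law in the units where it takes its simplest form: with the period T in years and the semi-major axis a in astronomical units, T² = a³. The periods below are approximate modern values used purely for illustration.

```python
# Kepler's third law: T**2 = a**3 (T in years, a in AU), so a = T**(2/3).
for planet, period_years in [("Mercury", 0.241), ("Earth", 1.0), ("Neptune", 164.8)]:
    a_au = period_years ** (2 / 3)   # semi-major axis implied by the law
    print(f"{planet}: T = {period_years} yr  ->  a ~ {a_au:.2f} AU")
```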
Thirty Years’ War
Clashes between the Catholic and Protestant factions caused the Protestant Bohemian Lords to throw the Catholic representatives of the Holy Roman Emperor Matthias from the window of the Bohemian Chancellery on May 23, 1618.
Johannes Kepler spent the last twelve years of his life in the shadow of the Thirty Years' War, which began as a conflict between Catholics and Protestants in the Holy Roman Empire but spread across Europe as a struggle for political domination. Shortly after the war began, Emperor Matthias died. Months later, the Catholic Archduke Ferdinand II took his place and ordered the forced conversion of the people of Bohemia and Austria to Catholicism. Kepler's continuing work on the Rudolphine Tables obliged him to keep working for the imperial court. But would a Protestant be allowed to finish these astronomical tables?
The question was set aside in 1620, when Kepler traveled to Württemberg to help his mother defend herself. His efforts were long in vain: his mother spent fourteen months in prison before being released in 1621. After her release, Kepler returned to Linz in relative peace. His appointment as imperial mathematician was confirmed, and he was assured of his safety. But shortly before the Rudolphine Tables were ready, Emperor Ferdinand II ordered all Protestants to convert to Catholicism or leave the city, which provoked a Protestant uprising.
Although exempted from the order, Kepler was caught up in the turmoil. In the summer of 1626, peasants besieged the city and set its outer quarters on fire. The manuscript of the Rudolphine Tables was saved, but the fire destroyed the printer's press. After the siege was lifted, Kepler asked for permission to leave the city and find a refuge where the Rudolphine Tables could be printed. That refuge was the city of Ulm (where Einstein would be born 250 years later), about 100 miles (160 km) from Regensburg in Bavaria, where Kepler had by then settled his family.
Johannes Kepler and the First Science Fiction Novel
Kepler is also known as the father of science fiction: he wrote the first science fiction novel Somnium (“The Dream”).
The Rudolphine Tables made use of the recently invented logarithms, allowing users to calculate the positions of the planets thousands of years into the past or future. The tables, together with the telescope, made it possible in 1631 to observe a transit of Mercury across the face of the Sun for the first time in history. In 1629, Kepler had published a booklet drawing attention to this transit (and to a transit of Venus which, however, could not be observed from Europe). Kepler's prediction of the transit of Mercury is considered one of the most successful predictions in the history of science. (He did not, however, predict the Venus transit of 1639; that event was later predicted through refinements of his work.)
Johannes Kepler did not live to see either transit. In December 1627, he presented a copy of his tables to the Emperor, who was then celebrating the commander-in-chief of the imperial troops, Albrecht von Wallenstein, for his successful suppression of the Protestant uprising. Wallenstein was granted the Duchy of Zagan, and in 1628 Kepler moved to Zagan as Wallenstein's personal mathematician. A few months after Kepler's arrival, the Protestant majority of Zagan was forced to convert to Catholicism or go into exile; Kepler himself, however, was spared. In 1630, Wallenstein was dismissed by the electoral princes at the meeting of the imperial assembly in Regensburg.
Worried about his own future, Kepler set out for Regensburg to assess the situation. He fell ill on the road and died on November 15, 1630, in a house now known as the Kepler Museum. His last work, begun when he was a student in Tübingen and published only after his death, describes how astronomy might be practiced by the possible inhabitants of the Moon. This book, Somnium (The Dream), is considered the first example of the science fiction genre; this early thinker of the scientific revolution is thus also credited with creating a literary genre.
Johannes Kepler Quotes
“If my false figures came near to the facts, this happened merely by chance….These comments are not worth printing. Yet it gives me pleasure to remember how many detours I had to make, along how many walls I had to grope in the darkness of my ignorance until I found the door which lets in the light of truth….In such manner did I dream of the truth.”
“I used to measure the skies, now I measure the shadows of Earth. Sky-bound was the mind, earthbound the body rests.”
“I believe Divine Providence arranged matters in such a way that what I could not obtain with all my efforts was given to me through chance; I believe all the more that this is so as I have always prayed to God that he should make my plan succeed, if what Copernicus had said was the truth.”
“We do not ask for what useful purpose the birds do sing, for song is their pleasure since they were created for singing. Similarly, we ought not to ask why the human mind troubles to fathom the secrets of the heavens… The diversity of the phenomena of Nature is so great and the treasures hidden in the heavens so rich, precisely in order that the human mind shall never be lacking for fresh nourishment.”
“… not my own opinion, but my wife’s: Yesterday, when weary with writing, I was called to supper, and a salad I had asked for was set before me. ‘It seems then,’ I said, ‘if pewter dishes, leaves of lettuce, grains of salt, drops of water, vinegar, oil and slices of eggs had been flying about in the air for all eternity, it might at last happen by chance that there would come a salad.’ ‘Yes,’ responded my lovely, ‘but not so nice as this one of mine.’”
Johannes Kepler
Bibliography:
Barker and Goldstein. “Theological Foundations of Kepler’s Astronomy”, pp. 112–13.
Kepler. New Astronomy, title page, tr. Donohue, pp. 26–7
Kepler. New Astronomy, p. 48
Epitome of Copernican Astronomy in Great Books of the Western World, Vol 15, p. 845
Stephenson. Kepler’s Physical Astronomy, pp. 1–2; Dear, Revolutionizing the Sciences, pp. 74–78
Barker and Goldstein. “Theological Foundations of Kepler’s Astronomy,” pp. 99–103, 112–113.
Caspar. Kepler, pp. 65–71.
Field. Kepler’s Geometrical Cosmology, Chapter IV, p 73ff.
Dreyer, J.L.E. A History of Astronomy from Thales to Kepler, Dover Publications, 1953, pp. 331, 377–379.
Caspar. Kepler, pp. 29–36; Connor. Kepler’s Witch, pp. 23–46.
Koestler. The Sleepwalkers, p. 234 (translated from Kepler’s family horoscope).
Caspar. Kepler, pp. 36–38; Connor. Kepler’s Witch, pp. 25–27.
The scientific enterprise covers a wide range of activities. It includes research conducted in the course of discovery, the construction of new knowledge about the natural world, the application of scientific knowledge and methods to practical and technological ends, the transmission of that knowledge to others, the ideological role of science in society, the development and implementation of science policy, and the management of scientific institutions. Those engaged in science tend to concentrate on one of these areas. What distinguishes Michael Faraday and makes him one of the most renowned scientists of all time is the high quality of his work across all of them throughout his career. As a result, he became one of the most famous people in Europe. Yet even this does not fully explain where his success and achievements came from.
Who Was Michael Faraday?
Michael Faraday was born in south London, where his family had moved from Westmorland in northwestern England a few years before his birth. His father was a blacksmith and a member of a very small neo-Calvinist Christian sect called the Sandemanians, who followed a strict, literal reading of the Bible. Faraday remained fully committed to this sect throughout his life. Coming from a relatively poor family and not belonging to the Anglican Church, Faraday could not attend university. Instead, from 1805 to 1812 he was apprenticed to a bookbinder; during this period he attended scientific lectures and carried out a limited number of chemical experiments.
Towards the end of his apprenticeship, in an extraordinary decision, he chose to pursue a scientific career rather than the more secure trade of bookbinding. He expressed this wish to Sir Humphry Davy and caught his attention. Davy, then 34 years old, was about to give up his post as professor of chemistry at the Royal Institution after marrying a rich widow. Faraday was appointed as a laboratory assistant in 1813 and spent almost his entire professional life at the Royal Institution. He was put in charge of the laboratory in 1825 and, in 1833, was appointed to the position of Fullerian Professor of Chemistry, created specially for him.
During these years, Faraday's discoveries of electromagnetic rotation (1821) and electromagnetic induction (1831) in the basement laboratory of the Royal Institution led to the invention of the electric motor, the transformer, and the generator. This work, begun in the 1820s, appeared to lay the groundwork for electrical engineering and, by extension, for our modern world. Although such a view is no longer tenable, the centenary of the discovery of induction (marked by a two-week Faraday exhibition at the Royal Albert Hall and a commemorative speech by the British prime minister) and his portrait on the 20-pound note in the 1990s contributed significantly to Faraday's enduring reputation.
Electromagnetic Field Theory
Faraday discovered electromagnetic induction on August 29, 1831, using this iron ring: he wound two coils of insulated copper wire on opposite sides of it.
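To make the principle behind the ring experiment concrete, here is a minimal sketch of the law of induction that bears Faraday's name, EMF = −N·dΦ/dt. The turn count, flux values, and timing below are invented for illustration and are not Faraday's measurements.

```python
# A changing magnetic flux through a coil induces a voltage (EMF): EMF = -N * dPhi/dt.
N = 100            # number of turns in the secondary coil (assumed)
phi_start = 0.0    # magnetic flux through the ring, in webers, before switch-on
phi_end = 2e-3     # flux shortly after the primary current is switched on (assumed)
dt = 0.01          # time over which the flux changes, in seconds

emf = -N * (phi_end - phi_start) / dt
print(f"Induced EMF ~ {emf:.1f} volts while the flux is changing")
# Once the flux stops changing, dPhi/dt = 0 and the induced EMF vanishes,
# which is why a deflection appears only at switch-on and switch-off.
```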
But the most important contribution Faraday made to our understanding of the natural world was the theory of electromagnetic fields that he formulated. It grew out of his discoveries of 1845, the magneto-optic effect and diamagnetism: his experiments showed that magnetism could influence the behavior of light and that all matter is sensitive to magnetic force. From at least the early 1830s, Faraday had strongly opposed the idea that matter was made up of indivisible chemical atoms. By 1834 he had ceased to believe in the usefulness of the concept of matter itself, since the only things we can actually work with are forces: weight, electrical repulsion, and so on.
Faraday had already used lines of force to interpret some of his early experiments, such as electromagnetic rotation. In the early 1840s, he began to see matter as points where lines of force meet in space, which implied that all matter is structurally identical. But at that time only three kinds of matter were known to have magnetic properties, and magnetism therefore looked like an anomaly. During 1844 and 1845, Faraday intensified his experimental search for a solution to this problem, and his discoveries of the magneto-optic effect and of diamagnetism showed that magnetism, like gravity, is a universal property of matter.
These experimental discoveries gave Faraday the confidence to begin formulating his field theory in 1846, describing how electricity and magnetism interact. Although the field theory was initially qualitative, it helped solve the urgent engineering problems of laying a telegraph cable across the Atlantic from Ireland to Newfoundland, and it displaced, first in Britain and then on the Continent, the mathematical theories of electromagnetic action developed by scholars such as André-Marie Ampère. In the hands of the mathematicians William Thomson (later Lord Kelvin) and James Clerk Maxwell, Faraday's field theory became one of the cornerstones of modern theoretical physics.
Although he was not a mathematician and sometimes even doubted the value of mathematics in natural philosophy, Faraday was nonetheless one of the leading theorists of his time (he once even complained about the hieroglyphs that Maxwell used). Maxwell saw that Faraday's field theory was essentially geometrical and could therefore be subjected to the rigor of Cambridge mathematical analysis. Albert Einstein expressed his view of this in 1936: “Faraday and Maxwell represent probably the most profound transformation experienced by the foundations of physics since Newton's time.”
The Board of Longitude
Page 75 of Faraday's notebook, recording the results of his experiments on electromagnetic rotation, dated September 3, 1821.
Faraday's ability to formulate the field theory rested on the experiments in which he discovered the magneto-optic effect and diamagnetism, the property by which an object produces a magnetic field opposing an externally applied one. These experiments were also linked to another aspect of Faraday's career: scientific consulting. One of the distinctive functions of the Royal Institution, founded in 1799, was to provide scientific advice to those who needed it, above all the state and its institutions. Faraday fulfilled this role by advising bodies such as the East India Company, the Admiralty, the Home Office, the National Gallery and, most importantly, Trinity House, the general lighthouse authority for England and Wales. Almost a fifth of Faraday's letters from 1836 onward, when he was appointed scientific adviser to Trinity House, concern lighthouses.
In the second half of the 1820s, Faraday served on a joint committee of the Board of Longitude and the Royal Society charged with developing optical glass for telescopes. The project did not succeed, and by 1829 Faraday had lost so much hope in it that he opened negotiations for an appointment as professor of chemistry at the Royal Military Academy. Davy, who had been the driving force behind the project and who died in May of that year, had tended to exploit Faraday's abilities.
Faraday soon managed to withdraw from the glass project, and for the next fifteen years he regarded it as a complete waste of time. In 1845, however, he took a piece of lead-borate glass that he had made during the project in the 1820s and used it to discover the magneto-optic effect. The light source for that experiment was a very powerful lamp he was testing for Trinity House. These and other episodes show how closely Faraday's research was connected to his practical work.
Michael Faraday Teaches the Public
Lithograph showing Faraday delivering a lecture at the Royal Institution on December 27, 1855.
Another main function of the Royal Institution was to inform middle-class and aristocratic audiences about the wonders of science. Davy had quickly made his reputation there with highly popular lectures. Faraday inherited this role from him and was even more successful.
He started the Friday Evening Discourses, which became one of the principal channels for communicating science in the early Victorian era and continue to this day. Each week a competent scientist gave a one-hour lecture on his own subject. At these discourses Faraday could present the important discoveries made in his laboratory to the members of the Royal Institution, and from there, through the print media, to the rest of the world.
He also gave a series of lectures on the value of scientific education, prompted in part by his strong opposition to the craze for spirit-summoning and mesmerism that arose in the early 1850s, and he gave evidence supporting his views to a royal commission on education. Probably because of his deep concern for this cause, he allowed the last two of his nineteen series of Christmas Lectures for young people to be printed. One of them, The Chemical History of a Candle, became one of the most widely read science books ever written: the English edition has never been out of print since 1861, and it has been translated into at least a dozen other languages.
Through his research, his lectures, and his practical work, Michael Faraday became one of the most famous men of his time (and indeed of later times). He was a personal friend of Prince Albert, and in his honor the Prince arranged for him to be given a house at Hampton Court. Faraday spent more and more of his time there from 1858 until his death in 1867. He was also one of the eight foreign associates of the French Academy of Sciences, which, before the Nobel Prize existed, was the most important mark of scientific recognition and approval.
A depiction of Michael Faraday in the 1850s, from the tile series at the Café Royal in Edinburgh, created in 1886 by John Eyre in memory of the famous scientist.
He was twice proposed as president of the Royal Society, the highest post in British science. But unlike Davy, whose own presidency had been strikingly unsuccessful, Faraday was not seduced by such honors. On both occasions he refused, calling it a corrupting and degrading position, and on refusing the second time he said that, had he accepted, he could not have answered for the integrity of his intellect for even a single year. Although Faraday professed his powerlessness before the Sandemanian God and advocated humility in scientific research, he frequently had to keep his ego in check. It found expression nonetheless, notably in the many striking images of him that circulated: portraits in oils and pastels, marble busts, prints and, above all, photographs (he had announced the discovery of photography at a lecture in 1839).
Despite his religious beliefs and unusual approach to the world of research, Faraday wanted to be recognized by society. The tension he felt inside could have been the source of his creativity and his desire to do everything perfectly. It could also explain why he made such important contributions to how we understand the world.
Michael Faraday’s Inventions
Michael Faraday produced an electric current from a magnetic field, invented the electric motor and dynamo, demonstrated the relation between electricity and chemical bonding, discovered the effect of magnetism on light, and discovered diamagnetism—the peculiar behavior of certain substances in strong magnetic fields. The transformer and generator were his inventions, along with many other achievements, including the “Faraday cage.”
Michael Faraday Quotes
“Nothing is too wonderful to be true if it be consistent with the laws of nature.”
“There’s nothing quite as frightening as someone who knows they are right.”
“It is right that we should stand by and act on our principles; but not right to hold them in obstinate blindness, or retain them when proved to be erroneous.”
“A man who is certain he is right is almost sure to be wrong.”
“Shall we educate ourselves in what is known, and then casting away all we have acquired, turn to ignorance for aid to guide us among the unknown?”
“But still try for who knows what is possible!”
“I will simply express my strong belief, that that point of self-education which consists in teaching the mind to resist its desires and inclinations, until they are proved to be right, is the most important of all, not only in things of natural philosophy, but in every department of daily life.”
“In place of practising wholesome self-abnegation, we ever make the wish the father to the thought: we receive as friendly that which agrees with, we resist with dislike that which opposes us; whereas the very reverse is required by every dictate of common sense.”
“No matter what you look at, if you look at it closely enough, you are involved in the entire universe.”
Reiser, Anton (1930). “VI”. Albert Einstein: A Biographical Portrait. New York: Albert and Charles Boni. p. 194.
James, Frank A. J. L. (2011) [2004]. “Faraday, Michael (1791–1867)”. Oxford Dictionary of National Biography (online ed.). Oxford University Press. doi:10.1093/ref:odnb/9153. (Subscription or UK public library membership required.)
For a concise account of Faraday’s life including his childhood, see pp. 175–183 of Every Saturday: A Journal of Choice Reading, Vol III published at Cambridge in 1873 by Osgood & Co.
“Michael Faraday.” History of Science and Technology. Houghton Mifflin Company, 2004. Answers.com 4 June 2007