This is a common joke stereotype, often seen in popular culture, like in the cartoon “The Flintstones.” However, sometimes proponents of alternative history seriously claim this to be true. According to them, humans allegedly lived alongside dinosaurs, which is why legends of many peoples feature dragons and similar creatures.
Some believe humanity has existed for hundreds of millions of years and thus witnessed dinosaurs. Others, often supporters of biblical chronology, claim that the ancient reptiles went extinct quite recently. A third group argues that humans personally eradicated all dinosaurs, turning them into meat patties, which is why they no longer exist in nature today.
Just keep in mind: dinosaurs went extinct 65 million years ago, and the first hominids appeared 2-3 million years ago.
So, the idea that these creatures could have crossed paths is absurd.
That said, dinosaurs could have seen our distant ancestor, the small mammal Purgatorius, the earliest known primate. It resembled a mix between a squirrel and a mouse, was no more than 15 cm long, and most likely had no idea its descendants would launch rockets into space and dominate the planet.
As for certain ancient world artifacts where early humans are depicted alongside dinosaurs, these are all fakes, created for cheap sensationalism. For instance, on the famous Ica stones found in South America, even reptiles that never existed there are depicted — yet they’re easily recognizable.
Prehistoric Humans Loved Clubs
Another stereotype about early humans is their fondness for huge clubs. In movies, cartoons, and comics, ancient humans are always seen carrying cone-shaped heavy branches, using them to hunt or defend against predators like saber-toothed tigers (most of which, by the way, went extinct before humans appeared). When not in use, the club is slung over the shoulder or used as a walking stick.
In reality, there is no significant evidence of widespread use of clubs by early humans.
They mostly hunted with spears tipped with stone points or sharpened sticks hardened by fire. Axes could also be used for blows, but spears were the primary weapon.
A spear could inflict far more serious damage to an animal or another human than a stick. Plus, thrusting is easier, and a spear can be thrown if necessary. So, clubs were unlikely to be a common weapon, though hitting small animals with sticks wasn’t out of the question.
The stereotypical image of a hairy man with a huge club probably originated a long time ago, perhaps in the Middle Ages, and has persisted to this day.
In European mythology from the 1200s, there were forest-dwelling half-animal barbarians covered in fur who fought with heavy branches. This is how early humans are commonly depicted now, even though it’s inaccurate.
And They Lived in Caves
The very name “caveman” suggests where they supposedly lived. A related term, “troglodyte,” comes from the Greek for “cave dweller.” Ancient authors like Herodotus and Pliny used it to describe savages living on the western coast of the Red Sea.
Later, the naturalist Carl Linnaeus used the word to label the supposed wild, ape-like ancestors of humans. Today, laypeople habitually call all fossil human ancestors “cavemen” and “troglodytes.” But the term is essentially incorrect. Early humans rarely lived in caves: caves were dark, damp, and drafty.
Our ancestors were nomadic, moving from place to place in search of food and didn’t specifically settle in caves.
If a suitable cave appeared along the way, where they could set up a temporary camp, great, but people could get by without it.
Archaeological finds in caves are more common not because people lived there more frequently, but because such locations have a higher chance of preserving artifacts. Open-air camps were quickly washed away by rain, while in secluded caves, they remained untouched for thousands of years.
Moreover, caves were often homes to predators like bears and leopards, which dragged their prey there to avoid sharing it with hyenas. So, “cavemen” didn’t always enter caves voluntarily.
Early Humans Were Much Healthier Than Modern Ones
The idea of a club-wielding prehistoric human persists for a reason. For some reason, it’s believed they were much stronger and healthier than modern people: they lived in harmony with nature, ate only healthy, natural food (or were even vegans), and had constant physical activity.
In contrast, modern weaklings sit in their offices all day and only occasionally lift dumbbells.
In reality, you can’t call the life of an early human healthy. Studies of human remains from the Paleolithic, Mesolithic, and Neolithic periods show they suffered from infections, rickets, dental problems, and numerous chronic diseases.
Early humans certainly had plenty of physical activity, and it was strenuous. But due to heavy labor, our ancestors experienced spinal microfractures, spondylolysis, hyperextension, lower back twists, and osteoarthritis.
Men lived slightly better than women, as hunters received more nutritious food and didn’t risk dying in childbirth. But they more often died in encounters with wild animals. On average, people lived between 30 and 40 years, and such a life can hardly be called healthy. Although there might have been some long-livers, they were likely very few.
Medicine was rudimentary. Diseases were treated by eating clay, applying it to the body, and using various herbs — you can imagine the effectiveness of such therapy. In severe cases, they turned to a shaman, who would perform trepanation to release evil spirits, which not everyone survived.
…Because They Led a Sober Lifestyle and Followed a Paleo Diet
No, early people were certainly not fans of a healthy lifestyle because they had no idea what that was. Their diet had nothing in common with the modern paleo diet.
Ancient humans could not eat as much meat and fish as modern enthusiasts of these foods do, but they consumed roots, flowers, and herbs that no present-day vegan would touch: thistles, water lilies, and reeds. They also didn’t shy away from less exotic foods like wild olives and water chestnuts.
But no matter how much you try, you won’t be able to replicate their diet.
The fact is that not only humans but the world around them has changed over millennia. All the fruits, vegetables, and roots you have access to are the result of long-term selection, and their wild forms are long gone.
For instance, corn was once a small weedy grass called teosinte, with only 12 kernels in its ears. Tomatoes were tiny berries, and wild ancestors of bananas had seeds.
Take a look at this painting, made between 1645 and 1672. This is what watermelons used to look like. And even earlier, 6,000 years ago, they were berries no bigger than 5 centimeters, as hard as walnuts, and so bitter they would give a modern person heartburn.
The food of early people, coarse and poorly prepared (or completely raw), pales in comparison in taste and nutrition to modern food.
And even in the Stone Age, people were not fans of a sober lifestyle. There is evidence that as early as 8,600 BCE, humans were using mind-altering substances: hallucinogenic mushrooms, cacti, opium poppies, and coca leaves. The very first alcoholic beverage—a fermented mixture of rice, honey, wild grapes, and hawthorn fruit—was consumed in China during the Neolithic era, about 9,000 years ago.
This desire for such indulgences likely came from our primate ancestors, who intentionally consumed overripe, fermented fruits to get tipsy. So don’t think that people in the past were more responsible about their health than you. Considering the harsh living conditions back then, it’s hard to blame them.
The Earth Used to Be Populated by Giants
Another common pseudo-scientific hypothesis suggests that in the past, there were extraordinarily tall human ancestors—three meters (10 feet) or more in height. Sometimes, this is used to explain the existence of the Egyptian pyramids and Stonehenge, as regular people supposedly could not have lifted the massive stones during construction, but giants could have.
Then, the giants left behind monuments of ancient architecture and a few skeletons before either disappearing, going extinct, flying back to Nibiru, or degenerating into people of our height.
However, from a scientific perspective, giant human ancestors can be lumped together with massive trolls and one-eyed ogre cannibals—there’s simply no reason to believe in any of these characters.
For example, the famous photograph of a giant skeleton supposedly found in India is a photomontage. The Canadian illustrator known by the pseudonym IronKite admitted he created the image for a photo manipulation contest on Worth1000. He didn’t expect that his work would be widely circulated and that thousands of alternative history enthusiasts would use the image as evidence of ancient titans.
The origin story of this skeleton varies from version to version. Some claim it was found in India, while others say it was discovered in Saudi Arabia, confirming the existence of giants mentioned in the Quran.
But this image, like many others, is simply a fake, created for a contest and then unexpectedly going viral.
Sometimes, the remains of gigantic humans are mistakenly identified as the skeletons of Gigantopithecus—massive ancient orangutans. These creatures, which could grow up to 3 meters (10 feet) tall, did indeed exist, but they are no more related to humans than modern apes are.
And yes, if you compare the sizes of the remains of human ancestors with today’s population, you’ll notice a trend toward increasing, not decreasing, height over time. So, we are the giants compared to the people of the past, not the other way around.
The “Missing Link” Has Never Been Found
When Charles Darwin published On the Origin of Species in 1859, science had not yet discovered the intermediate forms that illustrate the possibility of one species evolving into another. Darwin considered this a weak point in his theory, but he believed that such organisms would eventually be found. And they were: a few years later, the skeleton of Archaeopteryx—a transitional form between reptiles and birds—was discovered.
Opponents of evolutionary theory argue that there are no transitional forms between ape-like creatures and modern humans. Therefore, humans did not share a common ancestor with present-day primates and must have emerged through some other means. But this isn’t true: since Darwin’s time, so many transitional forms have been found that it’s impossible to remember them all.
Cave People Had a Matriarchal Society
The theory that women ruled in primitive societies was popular in the 19th century. It was promoted by ethnographer Johann Jakob Bachofen.
In his book Mother Right, he built the following logical chain: those who possess property hold power. Since sexual relations in the Stone Age were random, determining the father of children was impossible, and they were raised solely by their mothers. Therefore, long-term intergenerational relationships were only possible between women. Mothers passed on their property to daughters, exclusively through the female line, and fathers did not participate in inheritance.
This sounds quite reasonable, but Bachofen based his ideas not on precise data, but on… ancient myths. He saw echoes of matriarchy in the tales of Homer—in the stories of Queen Arete of the Phaeacians and the warrior Amazons. Thus, Bachofen’s theory was purely speculative. Nevertheless, his works were highly regarded by Friedrich Engels, which is why Soviet science avoided disputing the theory of matriarchy in primitive societies.
However, modern studies of archaic societies show that matriarchy was extremely rare. Among the Tasmanians, Pygmies, Bushmen, Native Americans, Inuit, and other similar tribes, it was not typical. Sometimes women could hold high positions and even hunt alongside men, but there was no talk of them ruling.
So, purely matriarchal societies were rare and were unlikely to have been widespread among early humans.
Moreover, female dominance is not observed among closely related great apes.
Some scholars, like the anthropologist Marija Gimbutas, consider the widespread presence of so-called Paleolithic Venuses (stone and bone figurines of very full-figured women) to be evidence of matriarchy among early humans. These figures are associated with fertility and abundance cults.
However, the fact that early humans made figurines of women doesn’t necessarily mean that women ruled society. Future anthropologists could just as easily argue that there was matriarchy in our time, given the number of images of curvaceous women posted daily on Instagram.
Human Development Stopped in the Stone Age
Some people ask: if the theory of evolution is true, why don’t we observe the development of life forms? It seems as if changes have frozen in place—people today are no different from their great-grandparents. Even animals, birds, and plants around us are the same as centuries ago.
However, living organisms (including us, humans) continue to evolve. For example, over the past 20 years, evolution has been observed in beetles, mosquitoes, bedbugs, and other pests, as well as various species of fish, among others. The most noticeable changes occur in bacteria, viruses, and unicellular organisms since they reproduce faster than all others.
Humans also evolve, though not as rapidly, making these changes harder to observe.
Research in molecular genetics supports this. For instance, evolution has helped Tibetans adapt to life at high altitudes—a process that took 100 generations.
In short, if you want to witness human development as a biological species, you would need to live for a hundred thousand years or so. Only over such a long period will external changes become visible to the naked eye.
Darwin Renounced the Theory of Evolution at the End of His Life
The idea that Charles Darwin was the first to propose the animal origin of humans is deeply ingrained in popular consciousness. There’s also a belief that, in old age, Darwin supposedly rejected this heretical idea, but by then it was too late—his theory of evolution had already spread worldwide.
But this is completely untrue. Firstly, various theories about the evolution of living organisms existed before Darwin, proposed by figures such as Buffon, Lamarck, Haeckel, Huxley, and others. Even Leonardo da Vinci and Aristotle had hinted at such explanations for the origin of species.
Secondly, Darwin did not disavow his theory or convert to religious faith on his deathbed, as some claim. This myth was invented by the evangelist Elizabeth Cotton, better known as Lady Hope, three decades after Darwin’s death.
She fabricated a story about Darwin’s renunciation during a church service, and many believed it.
Later, Hope published her fictional account in the national Baptist magazine The Watchman-Examiner, from where it spread worldwide.
But Darwin never recanted his theory, and while he was not a militant atheist, he wasn’t particularly religious either. This was confirmed by his children, son Francis Darwin and daughter Henrietta Litchfield.
Do we really require so much water to function? For the first time, an equation quantifies how much water an average person needs every day, along with the elements that have the greatest impact on this number. When comparing men and women of the same age and body mass index (BMI), males need around 0.5 liters more water. Weight, temperature, and exercise all contribute to a rise in hydration needs. On the other hand, as you get older and your body fat percentage goes up, your water needs go down. Researchers reveal in “Science” that socioeconomic status and geographical location are also important.
It has been unclear which factors determine an individual’s unique water needs.
Without this valuable liquid, our bodies’ metabolic functions and cells would not work correctly. Certain sensors alert us when our fluid levels get dangerously low. Once this happens, our brain gives us a clear message: we’re thirsty. We need to continually replace the water in our bodies by eating and drinking since it is lost through urine, perspiration, and other bodily functions.
However, how much water does the human body really need every day? The average daily water consumption guideline is between 1.5 and 3 liters; however, these numbers are mainly derived from estimations and broad generalizations. For the first time, a multinational team headed by Yosuke Yamada of Japan’s National Research Institute for Health and Nutrition examined in detail the quantity of water the human body really processes, wastes, and absorbs on a daily basis.
Deuterium Isotopes to Detect Water Circulation
To accomplish this, almost 5,600 participants from 26 different nations took part in the study. Each participant drank 100 mL of water in which some of the ordinary hydrogen atoms had been replaced by the heavy isotope deuterium. Isotopically tagged water can be tracked as it is distributed and diluted throughout the body, which makes it possible to calculate the body’s total water content.
The individual water requirement, however, is ultimately revealed by how quickly the isotope level falls. By monitoring the rate at which the stable isotope is excreted in urine over the course of a week, researchers can calculate how much water a person’s body is replacing. They then related this water turnover to each participant’s age, gender, height, weight, and level of physical activity, in addition to the weather.
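To make the principle concrete, here is a minimal sketch in Python of the single-pool logic behind such isotope measurements: the faster the deuterium enrichment in urine declines, the more water the body is turning over. The numbers are invented, and this is only an illustration of the idea, not the study’s actual protocol, which also corrects for factors such as age, gender, activity, and weather.

```python
import math

# Illustrative single-pool model (invented numbers, not the study's protocol):
# labeled water is gradually replaced by unlabeled water, so deuterium
# enrichment in urine decays roughly exponentially.

def water_turnover_liters_per_day(enrichment_start, enrichment_end, body_water_liters, days=7):
    """Water turnover = body water pool x fractional elimination rate of deuterium."""
    k = math.log(enrichment_start / enrichment_end) / days  # elimination rate per day
    return body_water_liters * k

# Hypothetical example: enrichment halves over one week in a person
# with roughly 42 liters of body water.
print(round(water_turnover_liters_per_day(150.0, 75.0, 42.0), 1), "liters/day")  # ~4.2
```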
Age, Gender, Body Weight, and Fat Percentage
The results showed that people had substantially varying water needs. Adults require between one and six liters each day, according to the research. There are even outliers with up to 10 liters a day. Age, gender, body weight, and fat percentage are the most important characteristics. The researchers also discovered that factors such as climate, geographic location, and affluence had an effect, in addition to physical activity and fitness.
Researchers were able to quantify and integrate all of these interrelationships into a single equation based on their findings. The model estimates human water consumption in response to anthropometric, economic, and environmental parameters. For the first time, thanks to this equation, it is feasible to estimate how much water each person needs, accounting for at least some of the wide range of individual variances.
The Water Requirements of Women Are Lower Than Those of Men
When comparing men and women of similar ages and environments, the results show that males need around 0.5 liters more water each day. A moderately active 20-year-old man weighing 70 kilograms turns over around 3.2 liters of water per day. Under the same conditions, a woman of average size and the same age turns over around 2.7 liters of water every day.
Because fatty tissue retains less water than muscles and other organs, the disparities between the sexes and age groups are mostly a reflection of variances in body fat percentage. Among other things, this is why the water needs of adult males peak between the ages of 20 and 30, and then gradually decline afterwards. On the other hand, in females, it does not change much until beyond the age of 50. Their normal daily water needs only rise by around 0.7 liters during pregnancy.
Physical Activity, Muscles, and Size
There are other physical factors that can be measured as well: a trained athlete turns over around a liter more water per day than a non-athlete, even if both are equally sedentary that day. Physical activity itself also sharply increases water needs: a 50% increase in energy metabolism requires around one liter more water. Height and weight matter too; every extra 50 kg requires an additional 0.7 liters of water, because the body must supply a greater amount of water-hungry tissue.
Men need almost half a liter more water per day than women, even when both genders are in identical conditions.
As we age, the rate at which our bodies replace their water also changes. Newborns have the greatest relative needs: their rapid metabolism requires them to replace around 28% of their body fluids by drinking every day. Beyond the first few months of life, a young adult typically replaces only about nine percent of total body water per day. And the figure keeps falling with age; at 80, we need around 0.7 liters less water than we did at 30.
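Pulling the quoted figures together, a rough back-of-the-envelope estimator might look like the sketch below. The baseline values and adjustments are taken from the numbers cited above, but the way they are combined is a simplification for illustration, not the published equation from the paper.

```python
# Rough estimator built only from the figures quoted in the text:
# ~3.2 L/day for a moderately active 20-year-old, 70 kg man, ~2.7 L for a
# comparable woman, ~+0.7 L per extra 50 kg, ~+1 L for trained athletes,
# ~+0.7 L during pregnancy, and ~-0.7 L between ages 30 and 80.

def rough_daily_water_liters(sex, weight_kg, age_years, athlete=False, pregnant=False):
    liters = 3.2 if sex == "male" else 2.7             # baseline at 70 kg, age ~20
    liters += 0.7 * (weight_kg - 70) / 50.0            # body-size adjustment
    if age_years > 30:                                 # gradual decline after 30
        liters -= 0.7 * min(age_years - 30, 50) / 50.0
    if athlete:
        liters += 1.0                                  # athletes turn over more water
    if pregnant and sex == "female":
        liters += 0.7                                  # pregnancy raises needs
    return round(liters, 1)

print(rough_daily_water_liters("male", 70, 20))                  # ~3.2
print(rough_daily_water_liters("female", 60, 35, athlete=True))  # ~3.5
```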
A Low Development Index Means 200 ml More Water
The environment also has an impact on the amount of water we require. Heat, humidity, sea level, and latitude all have a demonstrable impact on daily water needs, as the team reveals, highlighting the clear relationship between climate and geography. It reaches its maximum near the equator and its minimum at around the 50th degree of latitude.
However, the economic and developmental status of a nation may have a significant effect. It has been calculated that, given identical environmental and climatic circumstances, people in nations with a low development index use 200 ml more water per day than those in highly developed countries. The researchers think that this is because individuals in wealthier nations are more likely to spend time in air-conditioned environments, even in hot weather.
These results together show that physical and environmental variables have a role in determining the water requirements of humans. According to Yosuke Yamada’s research, “there is no one-size-fits-all approach” to the quantity of water required on a daily basis. The widely accepted wisdom that one should consume two liters of water daily lacks empirical backing.
For archaeologists interested in the history of clothes and footwear, the great fragility of the remains presents a major roadblock. Even so, there are a few unusual finds, usually in the form of traces, prints, or extremely tenuous inferences about their likely existence. It appears that making and wearing clothing must have been part of our ancestors’ daily lives, even though we cannot track its evolution with much clarity. Let’s trace the evolution of prehistoric clothing.
They decompose rapidly
Clothing does more than just keep us safe; it also marks us as individuals, both in history and in the present. Beyond its practical uses, clothing also has social and symbolic purposes. The diachronic history of clothing is essential for the prehistoric anthropologist, since it documents the transition from one style to the next, the acculturation of one group by another at a specific period, and the acceptance or imitation of new clothing conventions.
Despite being ubiquitous in modern society, clothing and footwear are notoriously difficult to unearth in archaeological digs because they decompose rapidly. This is the beauty and the sorrow of Paleo-Mesolithic archaeology: with very few exceptions, we have only scant evidence of the existence of soft, worked materials derived from animals (fur, skins, leathers, ties) or plants (braided fibers, sewn threads).
Ötzi and his clothes
The so-called iceman Ötzi was discovered with a leather quiver, leather clothes, and fur headgear. Even so, it was sometimes impossible to establish the origin of the components due to their advanced state of degradation. Apart from such exceptional cases, no clothing remnants have come down to us directly, so we can only point to a few unusual finds, faint traces or prints, or extremely indirect inferences of their likely existence.
Adaptation to the cold
Near-nakedness is best known from the tropics and the equator, but this extreme lifestyle was not limited to warm regions (the Fuegians, for example, went largely unclothed in a cold climate). Covering up is primarily an adaptation to the cold that occurred simultaneously with the spread of hominins to the high latitudes of the Northern Hemisphere.
According to anthropologists, the loss of fur may have been the driving factor behind the development of clothing. Among human communities, this would have been the starting point for the evolution of more complex garments better suited to withstanding the cold, a process that went hand in hand with the development of more specialized tools.
“Paleo-lice” for the evolution of clothing
Lice are a particularly useful clue, however indirect and unexpected. According to genetic analyses of human body lice and their derivation from head lice, the systematic wearing of clothes appears to have originated in the Middle Paleolithic period in North Africa.
Genetic analyses of clothing lice allow us to estimate when humans first acquired clothing: anywhere from 84,000 to 107,000 years ago (around the start of the latest ice age, the Last Glacial Period) or perhaps during the preceding ice age cycle (about 170,000 years ago).
Miniature traces left on fabric
The Ötzi mummy, discovered to everyone’s astonishment in the Ötztal Alps on the Italian-Austrian border in 1991, is one of the most extraordinary finds of prehistory, or more precisely of the Late Neolithic. Incredibly, the fully clothed man has been dated to between 3350 and 3300 BC.
In Georgia’s Dzudzuana Cave, wild flax fibers and bovid hair dating back between 31,000 and 13,000 years were found. Weaving baskets or making clothes out of such fibers would have been possible.
Other forms of indirect evidence, such as prints or imprints of clothing, give detailed information about prehistoric sartorial habits. Although such evidence is hard to come by, when it does turn up it lets us see how people’s lives, fashions, and social mores changed over time.
Thus, textile imprints have been found on pieces of baked clay at Pavlov and Dolni Vestonice (Czech Republic), dated to between 31,000 and 30,000 years ago. American anthropologists produced exciting summaries of this topic some time ago.
Also, there are impressions of what could be fur clothing in the Wahl Gallery of the Fontanet Cave (in France) from the Magdalenian period and dated to 14,000 years ago.
The earliest prehistoric shoes
The footprints found in the Theopetra Cave.
And what about the first shoes, or possibly the sandals? Children’s shoeprints from 135,000 years ago, when Neanderthals were expanding across Europe, were preserved in Greece’s Theopetra Cave.
Scientists investigated the footprints of a group of Gravettians who visited the Dordogne cave of Cussac about 30,000 years ago and left tracks in the clay floor. Experiments led them to conclude that shoes were worn and that the tracks were not made by bare feet.
Given the technological sophistication of these cultures and the periglacial environment of the late Paleolithic, bare feet would in fact be surprising.
Multiple sites in the western United States (Oregon, Nevada, California) have yielded a large number of plant-fiber sandals dated to the late Pleistocene/early Holocene. The Paleo-Indian braided sandal, which dates back to between 10,000 and 11,000 years ago (Fort Rock Cave, Elephant Mountain Cave), has been documented by a number of scholars, some of whom have even proposed a typology and geography of the footwear.
Such examples are few in Paleolithic Europe. The Areni-1 Cave in Armenia yielded the earliest direct examples of “archaeological” shoes in Europe, dating back to between 3627 and 3377 BC (comparable in age to Ötzi, at 5,300 years old), alongside other indirect imprints.
Areni-1 Cave prehistoric shoe.
Tools for making clothes
Even when clothing fibers deteriorate and disintegrate over time, the tools used to produce them can still provide useful information. The first clothes were likely made between 120,000 and 90,000 years ago, as shown by an examination of bone tools discovered in Morocco’s Smugglers’ Cave.
To be sure, early modern humans, like their Neanderthal counterparts, understood how to work skins into leather (“proto-tanning”) or use them directly as fur, as shown by the caves at Pech de l’Azé I and Combe-Capelle in the Dordogne (60,000–45,000 years old).
While these early implements are not direct proof that real clothes were being made, they do show that skins were being worked and used in various ways to cover the body and decorate dwellings. The remains of carnivores that were killed and skinned rather than eaten point in the same general direction.
Yet one of these indirect technical hints, the eyed needle, definitively shows that clothing was made: thread can be passed through the eye, allowing the user to stitch two pieces of material together into a garment or blanket. This tool, whose shape and purpose have not changed since the Paleolithic (although the material, now steel, was then bone), is first attested in small numbers in Europe during the Solutrean period, some 24,000 years before the present.
An even earlier specimen, from the Denisova Cave in the Russian Altai, has been claimed to date back 45,000 years, but that figure is still in question.
Funerary adornments sewn onto clothes
On the other hand, the beads that were once strung on garments have survived to this day because they were made of durable materials like bone, ivory, and stone. Found with the remains of the dead in their original positions, they testify to the first sewn and embroidered clothing and bonnets.
One of the most remarkable pieces of evidence for the use of clothing comes from the lavish graves of Sungir (near Vladimir, 118 mi / 190 km east of Moscow, Russia). We can only marvel at the wealth and ingenuity of these burial clothes after seeing the intricate arrangement of thousands of beads found on the bodies of Sungir’s dead, whose dates range from 34,500 to 32,600 years ago.
Similarly, the remarkable headgear stitched with hundreds of beads and found in the double grave in Grimaldi (Children’s Grotto), Italy, dates back 14,000 to 15,000 years.
Paleolithic art with clothing, headdresses, necklaces or bracelets
Mal’ta statuettes
Numerous sculptures and engravings show the use of textiles, or what may be textiles, such as headdresses or bracelets. Certain female figurines, often referred to as “Venuses,” have been securely dated to the Gravettian era, namely its middle and later stages (31,000 to 26,000 years ago).
This includes the statuettes from Willendorf (Austria) and the Czech or Russian equivalents from Dolni Vestonice, Pavlov, and Kostyonki, as well as Brassempouy and the bas-relief sculptures from Laussel (France). In place of stylish hair, they seem to be wearing some kind of headdress, bonnet, or mesh covering on their heads.
Belts are seen on some of the female figures (at Cussac and Kostyonki), and necklaces and bracelets are also depicted. The Mal’ta and Buret’ statuettes from Siberia, west of Lake Baikal in Russia, date back just 23,000 years; they are fully clothed, suggesting the use of sophisticated clothing, if not tattoos or scarification.
Images of similar adornments, often on nude bodies, are likewise well-known to the Magdalenian culture (in the Isturitz cave, in the Basque Country, and in the Laugerie-Basse shelters, in the Dordogne, in particular).
This evidence and these finds demonstrate a fundamental truth: clothing production is a prehistoric activity, possibly practiced by several human species, and one that has evolved through time. So far, the most impressive Paleolithic artifacts relating to early clothing and footwear have been found in Russia and Siberia.
Over time, clothing has evolved to reflect various demographic, temporal, and social characteristics of a given human population, social group, or culture, beyond its original, purely protective role. Especially if we count the embellishments on the garments.
The written version of our genetic instruction manual, which has 3 billion letters, would fill many volumes in print. Yet it fits inside the tiny nucleus of each of our cells. From our sex to our physical characteristics to our susceptibility to disease, practically every aspect of our lives is shaped by the genetic material our ancestors passed down to us. For a very long time, however, no one knew what this genetic code looked like or what it contained. Scientists eventually uncovered the shape, language, and exact function of our DNA, with some unexpected findings along the way.
The genetic specifications of all known organisms and many viruses are stored in deoxyribonucleic acid (DNA), a polymer made up of two polynucleotide chains that coil around each other to form a double helix. DNA governs development, functioning, growth, and reproduction.
Two men who changed the world of science
In a statement made some 70 years ago, James Watson said that only a few discoveries have been of such exquisite beauty as DNA. Watson was referring to the double helix, a structure that is 2.5 nanometers in diameter, looks like a helically twisted rope ladder, and would stretch to 7.2 feet (2.2 meters) if fully unfolded.
On April 25, 1953, James Watson and Francis Crick published a single page in Nature proposing a model for the three-dimensional structure of deoxyribonucleic acid (DNA), the molecule that encodes human genes.
It seemed that the two researchers were confident in the long-term relevance of their model since they cited “novel features of considerable biological interest” at the start of their paper.
Zero interest in chemistry
At first glance, it did not seem likely that two “scientific clowns,” as the scientist Erwin Chargaff dubbed them, would produce such a groundbreaking discovery. James Watson, who was exceptionally talented, began studying biology at the University of Chicago when he was only 15 years old. Birds were his main focus at the time, which allowed him to avoid most chemistry and physics classes.
The young zoologist’s understanding of chemistry and physics was therefore quite limited when he arrived at the Cavendish Laboratory in Cambridge, England, in the autumn of 1951, at the tender age of 23. There he met the British scientist Francis Crick, who was 13 years older and whose loud laugh was the bane of his colleagues’ existence. Crick’s prior life as a researcher was summed up by the institute’s director, Sir Lawrence Bragg: according to him, Francis talked ceaselessly and had come up with next to nothing of decisive importance.
A scientific footrace
In 1949, Erwin Chargaff discovered that the DNA bases adenine and thymine, and likewise cytosine and guanine, always occur in DNA at a 1:1 ratio, most likely because they come in pairs. The next step was to figure out how the bases were arranged and how they fit together. Watson, who was originally uninterested, attended a presentation by Rosalind Franklin of the neighboring King’s College London in November 1951, during which she shared recent X-ray diffraction photographs of DNA.
Watson found her speculation intriguing: DNA might exist as a twisted helix made of two, three, or four chains. As soon as Watson and Crick got back to Cambridge, they set out to reproduce this structure. Based on chemical calculations, they hypothesized that it would consist of three chains joined in a helix by magnesium ions, with the molecular arms pointing outward in all directions.
Success through failure
Watson, however, had not been paying close attention, and the pair’s model turned out to be chemically incorrect. They made a disappointing showing in front of Rosalind Franklin and the London-based biophysicist Maurice Wilkins.
The colleagues were quite harsh in their criticism. Earlier X-ray images produced by these two scientists had demonstrated that the supporting chains could not lie on the inside, refuting Watson and Crick’s premise, and that magnesium ions were hardly capable of holding such a structure together.
In July of 1952, Erwin Chargaff visited Watson and Crick in the lab and delivered a similarly damning assessment of their scientific prowess: “enormous ambition and aggressiveness, coupled with an almost complete ignorance of, and a contempt for, chemistry…”
When it became known that the famous chemist Linus Pauling, on the other side of the Atlantic, shared their fascination with the structure of genetic information and had proposed a model of his own, the race intensified. Swift action was needed.
Then, toward the end of 1952, Maurice Wilkins showed Watson and Crick an X-ray image taken by his colleague Rosalind Franklin, a pivotal event that ultimately led to their triumph. It was, in fact, a picture of a newly identified form of DNA. And it was shared without Franklin’s consent.
In the end, the two scientists arrived at the answer: DNA is made up of two strands that wind around each other like the side ropes of a rope ladder, while hydrogen bonds hold their molecular appendages, the complementary bases, together like rungs. Finally, Watson and Crick assembled their metal model of the double helix like pieces of a jigsaw puzzle. This version won over even the most skeptical colleagues.
Recognition and respect
Many scientists date the beginning of molecular genetics to the publication of the “Watson-Crick model” of the structure of DNA. James Watson, Francis Crick, and Maurice Wilkins shared the 1962 Nobel Prize in Physiology or Medicine equally. Rosalind Franklin, whose research provided the last vital piece of the puzzle, came up empty. Sadly, she died of ovarian cancer in 1958, at only 37, without seeing the fruits of her labor. It’s safe to say that most people nowadays have forgotten who Franklin and Wilkins were, but the names Watson and Crick will forever be linked to the double helix model of DNA.
The exchange of information
Translation of genetic code
Since 1953, scientists have understood the fundamental nature of our DNA and that it contains the blueprints for every aspect of our identity, from physical appearance to health. It was quickly understood that each base pair represents a letter in the manual. The question was how these inscrutable instructions take shape and produce a living, breathing human being.
In every human cell there are two sets of 23 chromosomes, created when an egg from the mother and a sperm from the father fuse. Half of these roughly X-shaped structures come from the mother and half from the father.
We store and carry our DNA, or genetic information, in a compact form called chromosomes. All of our DNA, together with its protective envelope structures, is stored in a very condensed form on the many chromosomes in our bodies.
The sequence of bases as the alphabet of life
The typical RNA codon table is structured in the form of a wheel.
Adenine (A), guanine (G), thymine (T), and cytosine (C) are the four bases that make up DNA. The two strands of DNA’s framework are held together by the pairs A-T and C-G, which serve as the rungs of this hereditary ladder. Read along a single DNA strand, and you’ll see a lengthy string of base letters.
It is in this string that the genetic code is found. Three of these letters together spell out a word that specifies which amino acid should be added at a given point during protein production. A string of such words forms a sentence, which in turn becomes the blueprint for a protein. And these molecules, in turn, play the role of biochemical housekeepers, making sure everything from cell and tissue formation to signal transduction and metabolism runs smoothly.
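As a small illustration of this pairing rule, the sketch below (a toy Python example, not anything from the original research) derives the opposite strand of a DNA fragment using only the A-T and C-G pairs, ignoring orientation details.

```python
# Toy illustration of complementary base pairing: given one DNA strand,
# the opposite strand is fixed by the pairs A-T and C-G.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complementary_strand(strand: str) -> str:
    """Return the complementary DNA strand (5'/3' orientation ignored)."""
    return "".join(PAIR[base] for base in strand)

print(complementary_strand("ATGCGT"))  # TACGCA
```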
Transcription and translation of DNA
Genes and proteins
But how does this blueprint end up as a protein? The process of building proteins from scratch is called biosynthesis. First, the genome must be unpacked so that the information for a protein can be read from the DNA. Normally a double strand, the DNA separates into two single strands, making the free arms of the rope ladder accessible.
Enzymes then produce a copy of this segment of the strand by joining a complementary base to each of the free arms. This time, however, the base uracil bonds to adenine instead of thymine, and ribonucleic acid (RNA) serves as the scaffold for the newly joined bases. Once the copy is complete, enzymes detach the RNA strand from its DNA template, creating a portable copy of this region of the genome: the messenger RNA (mRNA). This process of copying and rewriting the genetic material is called transcription.
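A toy version of this copying step might look like the following sketch, in which each base on the DNA template is replaced by its RNA partner, with uracil standing in for thymine.

```python
# Illustrative transcription: copying a DNA template strand into mRNA,
# with uracil (U) pairing opposite adenine instead of thymine.
DNA_TO_MRNA = {"A": "U", "T": "A", "C": "G", "G": "C"}

def transcribe(template_strand: str) -> str:
    """Build the mRNA copy of a DNA template strand, base by base."""
    return "".join(DNA_TO_MRNA[base] for base in template_strand)

print(transcribe("TACGGT"))  # AUGCCA
```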
Translation into protein building blocks
A ribosome’s translation of mRNA and protein synthesis is shown in this diagram.
However, this is just the beginning. mRNA now transports the genetic information copy from the nucleus into the cell plasma, where it will be read by the ribosomes and used to make proteins.
The mRNA is sandwiched between the ribosome’s two subunits, one larger than the other; together they form a reading unit, rather like the head of a tape player. It decodes the genetic code by identifying which three bases, and hence which genetic code word, are present at each position.
Meanwhile, the amino acids that will make up the future protein gather at the ribosome, each attached to a small piece of RNA consisting of precisely three base letters. These letters serve as a label identifying the specific amino acid bound to this transfer RNA (tRNA). It is the ribosome’s job to dock the amino acid whose label corresponds to the next code word in the mRNA being read and to attach it to the growing chain.
Polypeptides, or chains of amino acids, are produced in this fashion and are the building blocks from which proteins are assembled. DNA can only perform its job via translation, the process by which the genetic information is converted into a chain of amino acids.
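The reading logic itself can be sketched in a few lines: the mRNA is split into three-letter codons, and each codon is looked up in a codon table. Only a handful of entries from the standard table are included here for illustration; the cell, of course, does this with tRNAs and ribosomes rather than a dictionary.

```python
# Toy translation: read mRNA three letters (one codon) at a time and look up
# the corresponding amino acid; stop at a stop codon. Partial codon table only.
CODON_TABLE = {
    "AUG": "Met",  # also the start codon
    "UUU": "Phe", "GGU": "Gly", "GCU": "Ala", "AAA": "Lys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    """Translate an mRNA string into a chain of amino acid names."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "???")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUGGUUAA"))  # ['Met', 'Phe', 'Gly']
```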
Junk DNA
Discarded material transformed into a control center
A gene is a set of instructions for making a particular protein; it consists of a specific sequence of the base pairs cytosine, adenine, guanine, and thymine. It’s the blueprint for these critical messengers of our body’s processes.
It was quickly discovered, however, that significant portions of human DNA lack any recognizable construction instructions. Sequences in an organism’s DNA that do not code for proteins are known as noncoding DNA (ncDNA). They appeared to consist of meaningless, repetitive DNA sequences with no discernible purpose, so scientists called these stretches of DNA “junk DNA.”
Only 2% of your DNA is real genes
However, scientists were baffled when they looked more closely at the breakdown of our genetic material and saw that around 44% of it is “junk” in the form of multiple copies of genes and gene fragments (repeats).
A further 52% also seems to be useless and does not code for proteins. In total, only around 2–4% of human DNA consists of genuine protein-coding genes.
It has long been a puzzle why evolution has preserved so much seemingly irrelevant DNA alongside these gene sequences. A first answer came from research in 2004. Scientists in the United States revealed a startling discovery about these “genome deserts”: many regions of DNA that do not code for proteins are far from inactive. They include sequences that can activate or silence other genes, even ones located far away.
A regulator made of “junk”?
This suggests that the genome’s so-called “junk” is playing a significant role in regulating gene activity, helping to shed light on the basic differences across species even though their genes are, on average, just a few percent different.
Also, scientists from LLNL and JGI found that different parts of junk DNA have experienced different degrees of modification during evolution. There are several non-coding regulatory elements in the “desert areas” that are resistant to rearrangement and defend themselves via repeating junk DNA patterns. It appears that genomic regions known as stable genome deserts are essentially hidden gene regulatory components that preserve the intricate function of neighboring genes.
About two-thirds of the genome deserts, and about 20% of the overall genome, could be gene segments that are completely useless for biology, indicating that much of the genome is redundant. By some estimates, 75 percent or more of our genetic material is really just junk, and only around 8–14% of our DNA is functional in some way.
Our genome is governed by junk DNA
Junk DNA and genome desert.
The term “junk DNA” refers to the 98% of the human genome that does not code for proteins, but the truth is actually more complicated.
This notion of mostly useless junk DNA was largely shattered in 2011. The international ENCODE project discovered something astounding: almost all of our junk DNA functions as a massive control panel for our genome, containing millions of molecular switches that can activate and deactivate our genes as needed, including in regions where only an “unstable desert” had been suspected.
The “junk” has millions of switches
Scientists created a detailed map of the locations and distributions of these control elements, which revealed that the switches often sit in genomic regions inconveniently far from the genes they regulate. However, thanks to the complex three-dimensional folding of the DNA strands, they can still come close enough to exert their regulatory effects.
That means our genome is only functional because of these switches: millions of buttons that control which genes are active.
Genes derived from junk DNA
But junk DNA has other, non-regulatory functions too. Using genome-wide comparisons, scientists from Europe identified a gene on mouse chromosome 10 that appeared seemingly out of nowhere between 2.5 and 3.5 million years ago. The gene sits alone in the middle of a long non-coding stretch of the chromosome. This region is present in all other mammalian genomes as well, yet the gene itself is found only in mice.
There had been speculation that a gene might emerge at a place in the genome that had never been used before, but no evidence for this had ever been found. It turned out that mutations occurring only in mice could be responsible for the formation of the new gene.
This demonstrates that the regions of DNA that do not code for proteins are an essential component of our genome and that they have long played a significant role in a variety of modern genetic analyses.
DNA and forensic science
It was all solved via a DNA analysis
You can identify a criminal by his genetic fingerprint from as little as a drop of saliva on a Coke bottle or cigarette filter, a few skin cells beneath the victim’s fingernails, or blood on his clothes.
Most of our bodily fluids also contain cells from our body, and with them our genetic information. Skin cells are always left behind when a hand is dragged along a rough surface or is scratched, and these cells carry our DNA.
However, there is a catch: crime-scene traces often contain far too little genetic material for analysis, which is precisely why DNA analysis of such remnants was unattainable for a long time. Yet these traces of genetic material can tell investigators whether or not their suspect was the perpetrator.
The polymerase chain reaction for DNA testing
But in 1983, the US scientist Kary B. Mullis came up with a way to multiply the tiny amounts of DNA obtained and, in the process, devised one of the most pivotal techniques in genetics and biotechnology: the polymerase chain reaction (PCR).
A DNA fragment of up to 3,000 base pairs in length is heated to 201 to 205 degrees Fahrenheit (94 to 96 degrees Celsius), which breaks the hydrogen bonds between the bases of the double strand, resulting in the separation of the helix into two single strands. Two primers are then added to the DNA solution.
They bind to certain places on the DNA segments (based on their structures) and signal the beginning of the copying process, which is carried out by a heat-stable enzyme called polymerase.
At a temperature of 140–160 degrees Fahrenheit (60–70 degrees Celsius), it joins DNA-building components floating in solution to produce a perfect replica of the sequence designated by the primers, leading to another double strand and doubling the original amount of sequences.
Once the PCR is finished, the few remnants from the murder scene become a solution containing millions of copies of the perpetrator’s DNA, thanks to the process of repeated cycles in which the double strands are split from one another and then supplied with new halves by the polymerase.
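The arithmetic behind this amplification is simple exponential growth, as the idealized sketch below shows. Real reactions fall somewhat short of perfect doubling, which the optional efficiency parameter (an assumption added for illustration) hints at.

```python
# Idealized PCR arithmetic: each cycle roughly doubles the number of copies
# of the target fragment; real reactions are less than perfectly efficient.

def pcr_copies(initial_copies, cycles, efficiency=1.0):
    """Copies after the given number of cycles at a fixed per-cycle efficiency."""
    return int(initial_copies * (1 + efficiency) ** cycles)

# A handful of molecules from a crime-scene trace after 30 cycles:
print(pcr_copies(10, 30))        # ~10 billion copies at perfect doubling
print(pcr_copies(10, 30, 0.9))   # noticeably fewer at 90% efficiency
```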
A unique repeat pattern
The testing phase can now begin, with researchers comparing only small fragments of the DNA rather than the full sequence (which would take too long and be too laborious).
These fragments are found in the genome’s non-coding regions and are made up of several repeating base sequences termed “short tandem repeats” (STRs), which provide a unique genetic fingerprint since their numbers vary from person to person.
In many countries, a standard DNA analysis at a criminal lab includes testing for eight STR systems over several chromosomes and one sex-differentiating characteristic, which should be more than enough to rule out the possibility of a chance match.
Estimates suggest that fewer than one in a billion people share our unique STR pattern, with the exception of identical twins. If a suspect’s genetic fingerprint matches the one found at the crime scene, it is highly likely that he or she committed the crime in question; their own DNA has, in effect, convicted them.
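Conceptually, the comparison boils down to checking that the suspect’s repeat counts agree with the crime-scene profile at every tested marker, as in the simplified sketch below. The marker names are common forensic STR loci, but the repeat counts are invented for illustration.

```python
# Simplified STR comparison: two profiles match if every tested marker carries
# the same pair of repeat counts. Repeat counts below are invented.

def profiles_match(profile_a, profile_b):
    """True if both profiles carry identical allele pairs at every marker in profile_a."""
    return all(sorted(profile_a[m]) == sorted(profile_b[m]) for m in profile_a)

crime_scene = {"D3S1358": (15, 17), "TH01": (6, 9), "FGA": (21, 24)}
suspect     = {"D3S1358": (17, 15), "TH01": (6, 9), "FGA": (21, 24)}

print(profiles_match(crime_scene, suspect))  # True
```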
Probing the paternity of a child
The mother’s identity is generally evident since she gives birth to the kid (barring surrogate moms), but the identity of the father is not always so clear.
It’s possible that the question of paternity won’t come up until the child is an adult if the woman has cheated on her partner in secret or if she gets pregnant shortly before breaking up with her partner and keeps the baby from him.
Numerous laboratories around the world have long offered such gene-based paternity tests online, and the process for those willing to take the test is very simple: just send in a saliva sample, a few hairs with a hair root, a baby’s pacifier covered with spit, or a piece of chewing gum that has been well chewed.
First, the DNA is extracted from the samples and amplified by polymerase chain reaction (PCR) in the lab; next, the DNA is compared to samples of the same genetic material from the child’s father or, ideally, the mother.
Short tandem repeats (STRs) are also used in forensic DNA analysis, and the frequency with which a given base sequence is repeated within an STR marker varies from person to person but is passed down from parents to offspring. Each person carries two STR marker variants at each gene locus, one inherited from mother and one from father.
If the alleged parent and child share a variant at every marker tested, parenthood is considered practically proven; if, in contrast, their genetic material differs at three or more STR markers, paternity or maternity is considered ruled out. The probability that two unrelated people will have the exact same pattern of repeats at these markers is just one in 100 billion, according to current estimates.
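The exclusion logic can be sketched as follows: at each marker the child should share at least one repeat-count variant with the alleged father, and three or more mismatching markers rule paternity out. All names and values here are illustrative only.

```python
# Sketch of paternity exclusion: count markers at which the child shares
# no allele with the alleged father (all values invented for illustration).

def mismatching_markers(child, alleged_father):
    """Number of markers where the child shares no allele with the alleged father."""
    return sum(
        1 for marker, alleles in child.items()
        if not set(alleles) & set(alleged_father.get(marker, ()))
    )

child  = {"TH01": (6, 9), "FGA": (21, 24), "VWA": (14, 17)}
father = {"TH01": (9, 9), "FGA": (20, 24), "VWA": (16, 18)}

mismatches = mismatching_markers(child, father)
print(mismatches, "mismatch(es):", "excluded" if mismatches >= 3 else "not excluded")
```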
The Human Genome Project (1990-2003)
Human Genome Project. Image credit: Encyclopædia Britannica, Inc.
Learning by reading life’s book
In the year 2000, US President Bill Clinton and his British counterpart, Tony Blair, arranged an unusual news conference in Washington. Nothing less than human DNA itself was at stake. The decoding of our genetic makeup was publicly announced by Clinton and, after him, by representatives of two rival research organizations, one public and one private.
And in 2022, scientists finally announced that they had finished decoding the entire human genome. According to these results, about 30,000 human genes are housed in the nucleus of each human cell, distributed across 23 pairs of chromosomes.
Humanity’s next big thing
An early version of the “Book of Life” was deciphered both by the scientists of the worldwide Human Genome Project (HGP) and by the genetic engineering pioneer Craig Venter and his company Celera. About 3.1 billion letters make up our genome, composed entirely of apparently random sequences of the four nucleotide bases (adenine, cytosine, guanine, and thymine).
From the neurons that carry impulses throughout the brain to the immune cells that help protect us from external attack, each of the trillions of cells that make up our bodies has the same 3.1 billion DNA base pairs that make up the human genome.
It’s still not fully known what words and sentences may be constructed from these letters, as well as where certain functional units of genetic material are buried.
The decipherment of the human genome paved the way for novel approaches to illness prevention, diagnosis, and treatment. But these 3.1 billion letters of sequence in one human DNA were only the beginning of the long road to deciphering the human genome.
Interesting, but impossible
Things looked very different 25 years ago. In 1985, a group of genetics experts at the University of California, Santa Cruz, were approached by biologist Robert Sinsheimer with an unusual proposal: Why not try to sequence the human genome? The response was as unanimous as it was unequivocal: bold, exciting, but simply not feasible. Decoding even small sections of DNA was still too laborious at this time.
However, one of the researchers involved, Walter Gilbert of Harvard University, did not give up on the idea. Years earlier, he and a colleague had been among the first to develop a method for reading out the genetic code, that is, DNA sequencing.
However, potential backers were still cautious, asking, “What if it turns out that the entire thing is not worth the massive effort?” and “Shouldn’t we possibly start with the genome of a small, less sophisticated creature, such as a bacterium?”
Genome arms race
Finally, in 1988, the U.S. National Institutes of Health (NIH) was convinced to organize a project to decode the human genome, led by none other than James Watson, one of the two discoverers of the double helix structure of DNA.
Understanding the disease genes
However, progress was sluggish, since researchers kept debating whether it would be more efficient to begin by searching for disease genes rather than meticulously sequencing everything.
Craig Venter, a geneticist at the National Institute of Neurological Disorders and Stroke (NINDS), stood out because he and his colleagues had created a novel approach for discovering gene fragments at an unparalleled rate, albeit without understanding their function.
Watson opposed this and publicly complained about the sellout of genetic material, an approach that NIH leaders had initially greeted with enthusiasm, since the genes, if patented, could be converted into cash. The fallout came when Watson was replaced as project head by Francis Collins in April 1992.
Not fast enough
Collins made a dismal prediction in 1993: human genome sequencing would not be finished until 2005 at the earliest if things kept moving at their current pace. Part of the reason was a lack of resources, which had so far prevented the development and widespread use of state-of-the-art DNA sequencers that would greatly facilitate the automation of genome decoding.
On top of that, an accuracy of 99.99 percent was required. Meanwhile, international research institutes were joining the effort at an increasing rate.
The HGP researchers and management were rudely awakened by Craig Venter in 1995. In his new job at a commercial corporation, Venter released the first genome of a free-living organism, that of the bacterium Haemophilus influenzae. He had accomplished it within a year thanks to the cutting-edge computing power available at the time. While Collins and the HGP researchers were making progress, it was slower than some would have liked.
Head to head
At the 1998 annual gathering of genetic experts, Venter pulled off his next move by announcing that his new firm would be able to decode the human genome in three years for a quarter of the cost of the HGP. He would be assisted by an automated sequencing system currently under development.
At this point, Collins and his group had to act. Six months later, they announced that, thanks to increased efforts, full genome sequencing was now expected to be completed in 2003 instead of 2005. They also promised a first working draft of the human genome, around 90% complete, by spring 2001.
It looked like Craig Venter and his company, Celera, were headed for a photo finish. In reality, though, efforts to reach a mutually agreeable resolution of the genome race had already begun behind the scenes.
The HGP suggested holding a combined press conference to announce the initial versions of both projects at the same time on July 26, 2000. While the HGP had been publishing their sequencing in the British journal “Nature,” Venter and his colleagues had been contributing to the rival American journal “Science.” The unveiling of the virtually entire human genome was announced two years ahead of schedule, on April 14, 2003.
Thus, in April 2003, the Human Genome Project (HGP) was announced as completed but only around 85% of the genome was actually included. 15% of the remaining human genome was sequenced only by January 2022.
Dictionary of genetics
Amino acids
20 amino acids are the fundamental building blocks of proteins, and the genes determine the order in which these amino acids are put together to create a chain.
Bases
In double-stranded DNA, the bases adenine (A) and thymine (T), and cytosine (C) and guanine (G), pair with one another according to the principle of complementary base pairing. In single-stranded RNA (ssRNA), thymine (T) is replaced by uracil (U).
Chromosomes
Chromosomes carry an organism's genetic information. Human cells each contain 46 chromosomes, arranged as two sets of 23, one set inherited from each parent.
Codon
A sequence of three bases that encodes a single amino acid.
DNA
Deoxyribonucleic acid is a double-stranded molecule consisting of a backbone of sugar (deoxyribose) and phosphate groups and a linear series of bases. The two single strands are complementary to each other, run in antiparallel directions, and are held together by base pairing.
DNA sequence
The order of the bases along a DNA molecule.
Dolly
The famous sheep cloned in 1996 from a single cell of an adult sheep.
Gene
Genes are sections of DNA. In eukaryotes, genes are often made up of coding sections (exons) and noncoding sections (introns). Coding portions (exons) carry the genetic information for creating proteins or functional RNA (e.g., tRNA).
Genetics
Molecular genetics investigates the fundamental laws of heredity at the molecular level, whereas classical genetics focuses on the inheritance of characteristics, especially in higher species. Applied genetics focuses on the breeding of economically highly productive crops and animals.
Genetic code
The genetic code is the scheme by which information is stored in DNA; in all known forms of life it is read in triplets of three bases (codons).
Genetic fingerprint
Genetic fingerprints are unique to each person and are generated using so-called restriction enzymes, followed by further analytical steps.
Genome
A genome refers to the whole set of genetic instructions for a certain organism.
Human Genome Project
An international effort funded by many agencies to investigate the DNA sequence, protein function, and regulatory mechanisms of the human genome.
Gamete
Gametes are the sex cells (eggs, sperm). In humans they are haploid, containing just one copy of each of the 23 chromosomes of the human genome.
Cloning
Producing offspring with the same genetic material by cell division or nuclear transplantation.
Mutation
Mutations are changes in the DNA sequence. They may be caused by anything from ultraviolet light and natural radioactivity to spontaneous copying errors, and they provide the raw material from which new species arise and evolve.
Nucleic acids
A collective term for DNA and RNA.
Nucleotides
Nucleotides are the building blocks of DNA; each consists of a phosphate group, a sugar, and a base.
Nucleus
The nucleus is the membrane-bound organelle that houses the cell’s chromosomes.
Peptides
Peptides are compounds made up of two or more amino acids, which can be the same or different. They are classified by length: dipeptides consist of two amino acids, tripeptides of three, oligopeptides of up to about ten, and polypeptides of roughly ten to a hundred; macropeptides of a hundred amino acids or more are considered proteins.
Polypeptide
Chain of ten or more amino acids held together by peptide bonds.
Polymerase
An enzyme that assembles new DNA or RNA strands using DNA as its template.
PCR (Polymerase Chain Reaction)
In 1985, Kary Mullis devised a method of enzymatically amplifying tiny amounts of DNA to provide enough material for genetic analysis of nucleic acid sequences.
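As a rough, back-of-the-envelope illustration (not part of the original entry, and assuming ideal doubling of the DNA in every cycle): after n cycles there are about 2^n copies, so 30 cycles turn a single DNA molecule into roughly 2^30, or about one billion, copies, which is why PCR yields enough material for analysis.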
Protein biosynthesis
Protein production takes place in two steps, transcription and translation, the latter carried out on the ribosomes of a cell (a short code sketch after this glossary illustrates the two steps). Enzymes, hormones, and antibodies are all examples of proteins, a class of molecules built predominantly from 20 distinct amino acids.
Proteome
Complete set of proteins in a cell, organ, or tissue fluid.
Purine bases
Adenine and guanine are two examples of purine bases.
Pyrimidine bases
The pyrimidine bases are cytosine, thymine, and uracil. Both DNA and RNA use cytosine, but DNA mostly uses thymine, whereas RNA uses uracil in its place.
Restriction enzymes
Also known as "DNA scissors," these enzymes recognize a particular sequence of letters on the DNA and cut the molecule at that site.
Ribonucleic acid (RNA)
Ribonucleic acid (RNA) is the "little sister" of deoxyribonucleic acid (DNA): a single-stranded nucleic acid molecule involved in protein production, in which the base uracil (U) replaces thymine.
Ribosome
The ribosome is the cell’s “protein factory,” where proteins are made by reading a copy of a gene.
Telomeres
Telomeres are the ends of the DNA that carry no genetic information. They shorten with every cell division, which is why normal cells can divide only a limited number of times before showing signs of wear and tear.
Transcription
The copying of a gene's DNA sequence into messenger RNA (mRNA).
Translation
The method carried out by ribosomes whereby a protein is synthesized from its constituent amino acids.
Virus
A pathogenic biological structure made up of proteins and nucleic acids that can infect host cells, replicate inside them, and kill them.
Viruses have no metabolism of their own and depend entirely on a "host" organism, which is what makes them dangerous.
Cell
The cell is the smallest self-reproducing unit of higher organisms; its DNA is packed into chromosomes inside the cell nucleus.
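To make the glossary's central entries a little more concrete, here is a minimal, purely illustrative Python sketch (not part of the original text) that strings together complementary base pairing, transcription, and translation for a made-up 12-base sequence. The tiny codon table is deliberately incomplete, and the sketch glosses over the fact that real transcription reads the template strand; it is a teaching toy, not a bioinformatics tool.

# Illustrative sketch: base pairing, transcription (DNA -> mRNA), translation (mRNA -> protein).
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

# A handful of real codon assignments, just enough for the example below.
CODON_TABLE = {"AUG": "Met", "UGG": "Trp", "AAA": "Lys", "UAA": "STOP"}

def complement_strand(dna: str) -> str:
    """Return the complementary DNA strand (pairing A with T and C with G)."""
    return "".join(COMPLEMENT[base] for base in dna)

def transcribe(dna: str) -> str:
    """Transcription, simplified: copy the sequence into mRNA, replacing T with U."""
    return dna.replace("T", "U")

def translate(mrna: str) -> list:
    """Translation: read the mRNA three bases (one codon) at a time until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "???")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

dna = "ATGTGGAAATAA"                # hypothetical 12-base gene fragment
print(complement_strand(dna))       # TACACCTTTATT
print(translate(transcribe(dna)))   # ['Met', 'Trp', 'Lys'] (translation stops at UAA)

Running the last three lines prints the complementary strand and the short chain of amino acids encoded by the fragment, mirroring the Transcription and Translation entries above.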
Even in the womb, unborn babies respond to flavor cues, learning what their mother eats via the amniotic fluid. Recently, scientists used ultrasonography to observe this response directly for the first time. The fetuses seemed to smile when they tasted sweet carrot, but their mouths scrunched up at the bitter-tart kale; in the image above, the taste of kale makes the fetus pull a defensive face.
The unborn child's sense of taste develops before its other senses, including hearing and sight. The first taste receptors appear in the eighth week of pregnancy, and by week 15 the fetus is able to taste the amniotic fluid it swallows. By this time, it has begun picking up on its mother's eating habits. Numerous studies with infants provide evidence that these first tastes significantly influence what children want to eat as they grow up.
Vegetable Smackdown: Carrots vs. Kale
Now, researchers led by Beyza Ustun of Durham University have used high-resolution 4D ultrasound images to show how a fetus reacts to different flavors in the amniotic fluid. The recordings provide the first clear glimpse of the unborn child's responses to various tastes.
One hundred pregnant women consumed a capsule of a test flavor on an empty stomach at 32 and 36 weeks of pregnancy. Each capsule included either 400 milligrams of sweet carrot powder, 400 milligrams of tart and bitter kale powder, or 400 milligrams of a neutral-tasting control material. In order to prevent her reaction from influencing her child, the mother was unable to tell which flavor she was receiving while swallowing. The researchers started documenting the baby’s responses through ultrasonography after the capsule had made its way through the stomach.
Unborn Babies Expressed Their Emotions Clearly
After swallowing amniotic fluid, this fetus smiles, responding favorably to the sweet carrot powder its mother had consumed shortly before. (Image: Fetal Taste Preferences Study (FETAP) / Durham University)
Indeed, the fetuses' facial expressions changed within 30 minutes of the mothers ingesting the flavor capsules. In that short time, the aroma compounds had made their way from the small intestine into the bloodstream and then, via the placenta, into the amniotic fluid. The unborn babies' mostly neutral facial expressions changed in a distinctive way depending on the flavor they were exposed to.
When their mother had ingested the sweet carrot powder, the fetuses would open their mouths wide, as if smiling, or pucker their lips, as if sucking. The expression was different when the pregnant women were given the bitter kale: their unborn children pressed their lips together and/or raised their upper lips. According to the research group, their faces mirrored the defensive expression of a newborn child.
Watching the babies’ faces light up as they smelled the sweetness of carrots or the earthiness of kale, and then sharing that moment with their moms, was a genuinely unforgettable experience, according to the team.
Perception of Taste in the Womb Has a Long-Lasting Effect
These findings provide conclusive evidence that fetuses can detect the flavor of their mothers' food while still in the womb, and that fetal perception is advanced enough to discriminate between distinct taste cues coming from the mother's diet.
Prenatal exposure to a variety of tastes helps shape a child’s food preferences. According to scientists, the potential long-term effects of these early sensory experiences are significant. This is because a mother’s diet influences her child’s food preferences from a young age via early exposure to tastes. Scientists now want to understand if the habituation effect dampens these initially adverse responses. (Psychological Science, 2022; doi: 10.1177/09567976221105460)
Every time you sneeze, it’s like an explosion with high-pressure air shooting out of your mouth and nose with droplets and other mucus fluids. Muscles in your face stiffen up without your knowledge, and you find yourself temporarily closing your eyelids. But for what purpose do you close your eyes when you sneeze?
Maybe it's to protect yourself from the bacteria and mucus released during the sneeze. Or perhaps closing your eyes protects them from the internal pressure, which might otherwise make them bulge and suffer permanent damage? Both of those explanations are widely shared on the Web, along with the claim that you simply can't keep your eyes open when sneezing because it's a reflex. Does any of this hold water?
Not a true reflex
The act of sneezing does not constitute a true reflex. The sneeze stimulus is more nuanced and is not under pure spinal-cord control. Foreign objects in the nose, infections, and allergies are only a few of the many causes of sneezing.
The irritants on the nasal mucosa are blasted out as the air is released at a speed of around 90 mph (150 kph), the head jerks forward, and we hear the characteristic "explosion." However, closing your eyes during a sneeze is not an unavoidable reflex: there are recordings of individuals sneezing with their eyes open.
How does it work?
There are two widely shared theories for why people have to automatically shut their eyes when they sneeze. To start, the whole body, not just the chest and breathing muscles, tenses up when you sneeze. That is why a drop of pee or gas may be released during a sneeze.
The face and the eyes in particular are tense during a sneeze. When you tense up, the muscles around your eyes shut your lids.
But the idea that this response is meant to shield the eyes from the resulting higher pressure is nonsense.
Nerve network joins nose and eyes
However, there is a nerve connection between the eyes and the nose. The nasociliary nerve divides into two branches, one of which travels to the top of the nose and the other to the eyelids and the sclera (the whites of the eyes). Because of this tight link, irritation in the eye region can also excite the nerve branch leading to the nose and thereby trigger a sneeze.
Some individuals, for instance, may sneeze in response to really bright light, demonstrating the intimate relationship between the eye and the nose. This is due to the proximity of the optic nerve to a branch of the nasociliary nerve. The sun’s rays stimulate the optic nerve, producing an electric current. When this current goes down the nerve fiber and across to the next nerve, it makes you sneeze.
In any event, shutting your eyes when you sneeze makes biological sense. Closing the eyes is one of the most basic forms of self-protection: any painful or threatening stimulus makes us shut them automatically, because for our ancestors this organ was too critical to survival to leave unguarded.
Around eight million years ago, our ancestors lived in Africa. Modern humans emerged only much later, developing skills such as advanced toolmaking and agriculture that allowed us to establish settlements all over the world. Today we are the only human species on the planet, but this was not always the case.
Strange human relatives once roamed the Earth, such as Nutcracker Man or Homo floresiensis, whose tiny body resembled that of a hobbit. Fossils show that Homo sapiens interacted with Neanderthals, and a recently discovered group known as the Denisovans adds to the evidence that modern humans and some of our relatives lived side by side.
Timeline of human history
7 million years ago: Hominin and chimpanzee lineages diverged.
7 – 6 million years ago: The species Sahelanthropus tchadensis may have lived before the Hominin-Chimpanzee divergence.
5.7 – 5.2 million years ago: Ardipithecus kadabba, a species similar to A. ramidus.
4.4 million years ago: Ardipithecus ramidus (“Ardi”) looked like a chimpanzee but definitely walked on two legs.
4.1 – 2 million years ago: Australopithecus afarensis had a bigger brain than modern chimpanzees, but still climbed trees.
3.6 million years ago: The date of hominin footprints found in volcanic ash at Laetoli, Tanzania.
3.5 – 2 million years ago: The first identifiable hominin fossils of Australopithecus africanus.
3.3 million years ago: The first marks made on bones with stone tools in Dikika, Ethiopia, point to the first stone tool use and meat consumption.
Skeletal cast of “Lucy”. (Credit: H. Lorren Au Jr/ZUMA Press/Corbis)
3.18 million years ago: "Lucy," the famous partial skeleton, shows what Australopithecus afarensis actually looked like; 13 males and females of different ages form the "First Family" of A. afarensis fossils.
2.6 million years ago: The first known stone tools appeared in the Gona region of Ethiopia.
2.5 – 1.2 million years ago: The “Nutcracker Man”, Paranthropus boisei, had large teeth and jaws for grinding food.
2-1 million years ago: Paranthropus robustus, the first Paranthropus discovered.
1.9 – 1.6 million years ago: Homo habilis (“handy man”) is believed to have made tools and left bone markings.
1.8 million years ago: Homo ergaster was much taller and slimmer than its ancestors.
1.7 million years ago: The first known hominin fossils outside Africa, those of Homo georgicus, discovered at Dmanisi, Georgia, in Eurasia.
1.65 million years ago: The Acheulean hand axes represent an important step toward human intelligence.
1.65 – 1 million years ago: Homo erectus remains were discovered in Java, but they were most likely from 1.5 million years ago.
1.6 million years ago: The first known hand tools in China were thought to have been made by Homo erectus, but it was later realized that they are 0.8 million years younger than the fossils recently found in the region.
1.5 million years ago: The "Turkana Boy," a nearly complete skeleton of a juvenile Homo ergaster, found near Lake Turkana in Kenya.
1.5 – 1.4 million years ago: Signs of fire sites were found in South Africa, but they could have occurred naturally.
1.2 million years ago: The emergence of the first Europeans; Homo antecessor.
0.79 million years ago: The first reliable evidence for the control of fire at Gesher Benot Ya’aqov, Israel.
0.78 million years ago: Earth's magnetic field last reversed, settling into the polarity it has today.
0.6 million years ago: Homo heidelbergensis is now widespread.
0.4 million years ago: Typical Neanderthal anatomy found across Europe.
0.4 million years ago: The Clacton Spear or Clacton Spear Point, was found at Clacton-on-Sea in 1911. It is the oldest dated piece of crafted wood, dating back 400,000 years.
0.3 million years ago: Evidence for hafting, used to make tools with multiple parts.
0.3 million years ago: Modern human skeletal features appear in African Homo heidelbergensis.
0.28 million years ago: Shaped stones found in Israel may be the first examples of art.
0.28 million years ago: The first evidence of the use of natural colors.
0.2 million years ago: "Mitochondrial Eve," the most recent common ancestor of all living humans along the maternal (mitochondrial DNA) line.
0.186 – 0.127 million years ago: Neanderthals engaged in mass hunting and killing.
0.16 million years ago: Age of the Homo sapiens idaltu; the skull has some primitive features, but shares distinctive features with modern humans.
130,000 – 115,000 years ago: Increased consumption of fish and marine mammals in South African sites.
120,000 years ago: First possible Neanderthal grave.
110,000 – 90,000 years ago: The first Homo sapiens arrived in the Levant.
75,000 years ago: Advanced "blade" technologies, shell beads, and engraved ochre from Blombos Cave in South Africa.
The eruption of Mount Toba. (Credit: Unknown artist)
73,500 years ago: The eruption of Mount Toba in Sumatra led to a global drop in temperatures.
46,000 years ago: The earliest fossils of modern humans found in South Asia.
45,000 years ago: Widespread human settlement in Australia.
45,000 years ago: The first Homo sapiens reached Europe.
40,000 years ago: The first settlements in New Guinea.
40,000 years ago: Late Homo erectus lived in China.
40,000 years ago: The first Homo sapiens appeared in China.
37,000 years ago: Campanian ignimbrite eruption in Italy; ash covers much of Europe.
36,000 – 28,000 years ago: Some tools found suggest that Neanderthals and humans interacted in Europe.
35,000 years ago: Aurignacian technology spread across Europe, including typical stone tools and examples of art.
32,000 years ago: The first Homo sapiens appeared in Japan.
The roughly 32,000-year-old Chauvet cave paintings.
32,000 years ago: Chauvet cave paintings, France.
28,000 years ago: The last known Neanderthal settlements.
28,000 – 21,000 years ago: The birth of Gravettian culture.
27,000 years ago: The date of the complex settlements of hunter-gatherers on the Russian plains.
21,000 years ago: Solutrean technologies emerged.
21,000 – 18,000 years ago: Last Glacial Maximum.
18,000 years ago: Magdalenian technologies emerged.
18,000 years ago: The age of the controversial Homo floresiensis specimen, a.k.a. the "hobbit."
17,000 years ago: The first known spear-throwers (atlatls) appear at Combe Saunière, France.
16,000 – 15,000 years ago: Beginning of re-settlement in areas of northern Europe previously abandoned due to poor climatic conditions.
15,000 years ago: Lascaux cave paintings.
The domestication of the dog. (Credit: Ettore Mazza)
15,000 years ago: Controversial early South American settlement site in Monte Verde, Chile.
14,000 years ago: The domestication of the dog.
11,500 – 9000 BC: Rapid settlement of the Americas by various peoples using Clovis stone tools.
10,800 – 9600 BC: Younger Dryas glacial period, probably caused by melting ice sheets; temperatures rose rapidly after 9600 BC.
10,500 BC: The first steps toward domesticating cereals such as rye, wheat, and barley; the earliest clear evidence comes from Syria, around 8000 BC.
9500 – 8000 BC: Construction of a temple by hunter-gatherers in Göbekli Tepe, Turkey.
9000 BC: The first domesticated animals in Western Asia.
9000 – 7000 BC: Archaeological sites in Cyprus show that the island was inhabited and that sheep, goats, cattle, and pigs were transported by ships.
9000 – 3000 BC: Increased rainfall created the “Green Sahara”; lakes, rivers, marshes, and meadows in North Africa.
8500 – 7300 BC: A stone wall built around a large village in Jericho, Israel, probably to prevent flooding, not warfare.
8000 BC: In China, early agricultural settlements grew millet in the Yellow River Valley.
“Green Sahara”, 9000 – 3000 BC. (Credit: Carl Churchill)
8000 BC: Mesoamerica was the first place where squash was domesticated, and Ecuador is where the oldest squash and beans were found.
7000 BC: The first domestication of zebu cattle, at Mehrgarh in western Pakistan, by farmers growing wheat, rye, and lentils.
7000 BC: The first domestication of cattle occurred by hunter-fisher communities in the Green Sahara.
7000 BC: In New Guinea, bananas, taro, and yams were cultivated for the first time.
6500 BC: Simple irrigated agriculture began in Central Mesopotamia.
6200 BC: Establishment of the first farming communities in the Euphrates Valley in Southern Mesopotamia.
6000 BC: Fish and rice farming, along with pig and chicken raising, in Yangzi Valley villages (China).
6000 BC: The first native corn developed in Mexico from the wild teosinte plant.
5500 BC: Independent development of copper craftsmanship in the Balkans.
5100 BC: Copper mining in the Ai Bunar region of Bulgaria.
5000 – 1000 BC: An ancient copper culture flourished in North America's Great Lakes region, built on trade and industry in locally mined, cold-worked copper.
5000 BC: In West Asia, North Africa, and Europe, the first domestic animals were kept for milk and plowing, as well as for meat.
A potter’s wheel from 3500 BC.
4000 BC: The first domestication of grapes and olives occurred in the Eastern Mediterranean.
4000 BC: In China, irrigated rice cultivation began in water-covered, ploughed fields.
3500 BC: The first wheeled transport, used for local needs and military purposes, emerged and spread across large areas of Eurasia.
3500 BC: Stamp seals began to be used for administrative and economic purposes in Western Asia.
Even when we believe we are focusing on a single target, our eyes are actually darting rapidly from one spot to the next. The same quick, jerky movements happen when we turn our heads from side to side or shift the direction of our gaze. Yet we are so used to these erratic eye movements that we hardly ever notice them. What, then, is behind them?
Jerky eye movements
There is a straightforward experiment: stand in front of a mirror and focus on your right eye, then slowly shift your gaze to your left eye. No matter how hard you try, you will not see your own eyes move, yet observers watching you can clearly see that your eyes do not stay still as your focus shifts from one point to another.
This is due to a combination of two factors: the way our eyes move, and the way our brain interprets those movements. The short, jerky and, above all, very rapid movements the eyes make are technically known as saccades. When we let our gaze wander, our eyes do not glide evenly across the scene; instead, they skip from one point to another, almost always landing on whatever captures our attention.
The brain fills in the blanks
But why don't we even seem to notice these jumps? Because saccades are so fast, the image projected onto the retina is blurred while they are underway. Since the brain cannot make any use of this blurred picture, it simply discards it. To spare us a brief moment of blindness, the brain automatically fills the gap, which lasts a small fraction of a second, with an image of the target point rather than the starting point. What we think we are seeing in that instant is therefore slightly delayed.
This can be demonstrated with another simple experiment: when you glance at a ticking clock, the first second often appears to last exceptionally long; after that, the hand moves ahead at its usual steady pace. This optical illusion is sometimes referred to as "chronostasis," which literally means "time standing still."
It happens because the brain bridges the eye movement with the still image of the clock, which can account for as much as a tenth of a second. If your gaze lands on the clock precisely when the hand has just moved, you therefore experience that first second with an extra tenth added onto it.
Why do we get tired in the afternoon? Most of the time, we arrive at work in good shape: mentally and physically fresh, we can handle even demanding tasks. After the lunch break, however, things often look quite different. We are so full that our eyes occasionally drift shut during the team meeting, and we find ourselves unable to follow the manager's lengthy explanations. But why do we hit this wall of exhaustion around lunchtime?
In many parts of the world, it is common practice to sleep for a few hours in the afternoon, which lets people rest and replenish their energy. In many other cultures, however, a midday nap is considered rather rude, and few people, especially in their working lives, get the chance to sleep enough: there is a lack of time, a lack of private space, or simply no social acceptance for it within the organization. Feeling tired in the middle of the day is nevertheless very common.
The hands on the internal clock are moving
Many modern sleep researchers work on the assumption that a nap in the middle of the day is beneficial and fulfills a natural need, because our body's internal clock makes us a little lethargic in mid-morning and in the early afternoon. Why humans need sleep at all, however, is a question even the experts cannot yet answer definitively.
Besides the internal clock, there are other plausible factors that increase sleepiness, particularly around lunchtime, and lunch itself is one of them. Digesting a carbohydrate-rich meal requires a significant amount of energy, leaving little left over for mental work. Another possibility is that the body is missing something essential, such as oxygen or movement.
Deep slumber or power napping
One thing, however, is abundantly clear: giving in to the tiredness in a healthy way pays off. A number of studies suggest that a midday nap has a beneficial effect on the cardiovascular system, while others have found that napping improves both physical and mental fitness.
It is therefore advisable to take a power nap to overcome the midday slump. This does not mean a long sleep, but a brief period of rest lasting no more than 20 minutes, which calms us down while also enhancing our performance.
It is essential not to slip into deep sleep: waking up then becomes harder, and the nap tends to have the opposite of its intended effect. A long nap can also cause problems falling or staying asleep at night.
For children, the situation is quite different. They tend to sleep more deeply and for longer at midday; in fact, their naps closely resemble nighttime sleep. Children who get enough sleep are better at consolidating what they have learned, and it contributes to their emotional stability as well.
Getting back to the awake state
Even if you are unable to take a power nap at work, there are still a few things you can do to help your body transition back into an awake state. After all, environmental factors also play a role in the development of weariness. Proper ventilation is essential, particularly in enclosed spaces like workplaces, to ensure that there is sufficient oxygen in the air.
It also helps to choose meals richer in protein rather than foods heavily loaded with calories and fat: they are easier to digest while still providing enough fuel. Exercise or a short walk can get us going again. Fresh air, physical activity, and sufficient daylight all contribute to our fitness and influence how tired we get in the afternoon.
Succeeding at public speaking, singing, or performing takes confidence. Unfortunately, just when it matters most, our faces tend to flush with excitement, which can make us feel even more vulnerable. Sometimes merely thinking about blushing is enough to set it off. Why does this rush of heat to the face always seem to come at the worst possible moment?
For what reason do we feel the need to hide our faces?
Sports and saunas are two other situations in which people often go red; for some, a warm room or a bit of alcohol is enough. These cases are easily explained: heat and physical effort increase blood circulation, so the face flushes. The phenomenon known as "social blushing," however, is different.
When we’re put in circumstances that make us feel threatened, ashamed, or furious, the muscle tension increases as a result of the adrenaline rush. The autonomic nervous system responds to stress by activating the so-called sympathetic nervous system, which speeds up many of the body’s processes.
A Response by the Autonomic Nervous System
The brain releases hormones into the body that cause blood pressure to rise. At the same time, the heart rate accelerates and more blood is pumped to the brain.
The face turns red because its blood vessels dilate and blood flow increases, and the effect is especially visible because the face has an exceptionally rich capillary bed. Sweaty palms often accompany blushing. All of this is a perfectly normal physiological response in a healthy human being.
A person's sensitivity threshold determines how, and how frequently, they blush, and an individual's susceptibility to stress also plays a role. There are doctors who specialize in helping people with erythrophobia (an excessive fear of blushing). Studies estimate that roughly 1 in 200 people has a congenital disorder of sympathetic nervous system regulation, which makes them become visibly more agitated and flushed than usual.
An Evolutionary Defense Mechanism
Mark Twain, the American novelist, once said, "Man is the only animal that blushes. Or needs to." His observation was spot-on: there is more to a blush than a mere surge of blood to the face. Blushing makes our feelings obvious, since the face flushes red mostly out of embarrassment or when we are stressed about something.
However, the exact mechanism of why a person’s face becomes red when they’re embarrassed still remains a mystery.
Many hypotheses have been proposed, but none has been conclusively demonstrated. One suggests that turning red in an embarrassing situation is an evolutionary defense mechanism designed to keep the individual from being shunned by the social group after committing a violation, since exclusion likely meant death in prehistoric times. Turning red in the face serves an "apologetic" purpose, signaling, "I realize I made a mistake, I'm sorry."
Studies have shown that people who blush after making a mistake in public, such as tripping over their feet in a store, are more likely to be met with sympathy. Observers are less inclined to believe the person did it on purpose, and the blusher is therefore less likely to be judged harshly or excluded from the group. In this sense, blushing may serve as a kind of defense against the potential social repercussions of an action.
There Is a Catch
However, it’s unlikely that blushing serves any essential purpose. Prehistoric humans, as seen through the lens of evolutionary biology, had uniformly dark skin. Since blushing is only so prominently visible in light-skinned individuals, it makes little sense for it to have a vital survival purpose.
Nor does the tendency stay with us at the same intensity forever. It is well known that infants do not blush: blushing only begins around the age of three and peaks in adolescence, when emotional and physical changes make us more vulnerable to the judgmental gaze of others. Beyond that point, it usually becomes less frequent.
Even so, blushing remains embarrassing, particularly when it draws others' attention. Experts recommend relaxation and breathing techniques for people who blush often; these methods don't eliminate blushing, but they can release inner tension. Likewise, those who focus less on their red faces feel calmer, which is why talking to others can help stop the blushing.
Extreme blushers may benefit from specialized behavioral treatments that aim to identify the causes of their anxiety and then direct them to deliberately seek out social settings where they may blush. Successful treatment usually results in less frequent blushing over time.