Tag: humans

  • 10 Misconceptions About Early Humans

    10 Misconceptions About Early Humans

    Ancient Humans and Dinosaurs Lived Side by Side

    This is a common joke stereotype, often seen in popular culture, like in the cartoon “The Flintstones.” However, sometimes proponents of alternative history seriously claim this to be true. According to them, humans allegedly lived alongside dinosaurs, which is why legends of many peoples feature dragons and similar creatures.

    Some believe humanity existed for hundreds of millions of years and thus witnessed dinosaurs. Others claim that ancient reptiles went extinct quite recently, often supporters of biblical chronology. A third group argues that humans personally eradicated all dinosaurs, turning them into meat patties, which is why they no longer exist in modern nature.

    Just keep in mind: dinosaurs went extinct about 65 million years ago, while the first representatives of the genus Homo appeared only 2-3 million years ago.

    So, the idea that these creatures could have crossed paths is absurd.

    That said, dinosaurs could have seen our distant ancestor, the small mammal Purgatorius, the earliest known primate. It resembled a mix between a squirrel and a mouse, was no more than 15 cm long, and most likely had no idea its descendants would launch rockets into space and dominate the planet.

    As for certain ancient artifacts on which early humans are supposedly depicted alongside dinosaurs, these are all fakes created for cheap sensationalism. For instance, the famous Ica stones from South America depict reptiles that never even lived on that continent, yet the creatures are easily recognizable.

    Prehistoric Humans Loved Clubs

    Another stereotype about early humans is their fondness for huge clubs. In movies, cartoons, and comics, ancient humans are always seen carrying cone-shaped heavy branches, using them to hunt or defend against predators like saber-toothed tigers (most of which, by the way, went extinct before humans appeared). When not in use, the club is slung over the shoulder or used as a walking stick.

    In reality, there is no significant evidence of widespread use of clubs by early humans.

    They mostly hunted with spears tipped with stone points or sharpened sticks hardened by fire. Axes could also be used for blows, but spears were the primary weapon.

    A spear could inflict far more serious damage to an animal or another human than a stick. Plus, thrusting is easier, and a spear can be thrown if necessary. So, clubs were unlikely to be a common weapon, though hitting small animals with sticks wasn’t out of the question.

    The stereotypical image of a hairy man with a huge club probably originated long ago, perhaps in the Middle Ages, and has persisted to this day.

    European folklore of the 1200s featured forest-dwelling, fur-covered, half-animal wild men who fought with heavy branches. This is how early humans are commonly depicted now, even though it’s inaccurate.

    And They Lived in Caves

    The very name “caveman” suggests where they supposedly lived, as does the related term “troglodyte,” which in Greek means “cave dweller.” Ancient authors like Herodotus and Pliny used it to describe savages living on the western coast of the Red Sea.

    Later, the naturalist Carl Linnaeus used this word to label the supposed wild, ape-like ancestors of humans. Today, laypeople habitually call all fossil human ancestors “cavemen” and “troglodytes.” But this term is essentially incorrect. Early humans rarely lived in caves: they were dark, damp, and drafty.

    Our ancestors were nomadic, moving from place to place in search of food and didn’t specifically settle in caves.

    If a suitable cave appeared along the way, where they could set up a temporary camp, great, but people could get by without it.

    Caves were more often used as storage or for ritual purposes — for example, to pray to spirits.

    Archaeological finds in caves are more common not because people lived there more frequently, but because such locations have a higher chance of preserving artifacts. Open-air camps were quickly washed away by rain, while in secluded caves, they remained untouched for thousands of years.

    Moreover, caves were often homes to predators like bears and leopards, which dragged their prey there to avoid sharing it with hyenas. So, “cavemen” didn’t always enter caves voluntarily.

    Early Humans Were Much Healthier Than Modern Ones

    The image of the burly, club-wielding prehistoric human persists for a reason: many believe our ancestors were much stronger and healthier than modern people, since they lived in harmony with nature, ate only healthy, natural food (or were even vegans), and got constant physical activity.

    In contrast, modern weaklings sit in their offices all day and only occasionally lift dumbbells.

    In reality, you can’t call the life of an early human healthy. Studies of human remains from the Paleolithic, Mesolithic, and Neolithic periods show they suffered from infections, rickets, dental problems, and numerous chronic diseases.

    Early humans certainly had plenty of physical activity, and it was strenuous. But heavy labor left our ancestors with spinal microfractures, spondylolysis, hyperextension and twisting injuries of the lower back, and osteoarthritis.

    Men lived slightly better than women, as hunters received more nutritious food and didn’t risk dying in childbirth. But they more often died in encounters with wild animals. On average, people lived between 30 and 40 years, and such a life can hardly be called healthy. Although there might have been some long-livers, they were likely very few.

    Medicine was rudimentary. Diseases were treated by eating clay, applying it to the body, and using various herbs — you can imagine the effectiveness of such therapy. In severe cases, they turned to a shaman, who would perform trepanation to release evil spirits, which not everyone survived.

    …Because They Led a Sober Lifestyle and Followed a Paleo Diet

    No, early people were certainly not fans of a healthy lifestyle because they had no idea what that was. Their diet had nothing in common with the modern paleo diet.

    Ancient humans could not eat as much meat and fish as modern enthusiasts of these foods do, but they consumed roots, flowers, and herbs that no present-day vegan would touch: thistles, water lilies, and reeds. They also didn’t shy away from less exotic foods like wild olives and water chestnuts.

    But no matter how much you try, you won’t be able to replicate their diet.

    The fact is that not only humans but the world around them has changed over millennia. All the fruits, vegetables, and roots you have access to are the result of long-term selection, and their wild forms are long gone.

    For instance, corn was once a small weedy grass called teosinte, with only 12 kernels in its ears. Tomatoes were tiny berries, and wild ancestors of bananas had seeds.

    A still-life painting made between 1645 and 1672 shows what watermelons used to look like. And even earlier, 6,000 years ago, they were berries no bigger than 5 centimeters (2 inches), as hard as walnuts, and so bitter they would give a modern person heartburn.

    The food of early people, coarse and poorly prepared (or completely raw), pales in comparison in taste and nutrition to modern food.

    And even in the Stone Age, people were not fans of a sober lifestyle. There is evidence that as early as 8,600 BCE, humans were using mind-altering substances: hallucinogenic mushrooms, cacti, opium poppies, and coca leaves. The very first alcoholic beverage—a fermented mixture of rice, honey, wild grapes, and hawthorn fruit—was consumed in China during the Neolithic era, about 9,000 years ago.

    This desire for such indulgences likely came from our primate ancestors, who intentionally consumed overripe, fermented fruits to get tipsy. So don’t think that people in the past were more responsible about their health than you. Considering the harsh living conditions back then, it’s hard to blame them.

    The Earth Used to Be Populated by Giants

    Another common pseudo-scientific hypothesis suggests that in the past, there were extraordinarily tall human ancestors—three meters (10 feet) or more in height. Sometimes, this is used to explain the existence of the Egyptian pyramids and Stonehenge, as regular people supposedly could not have lifted the massive stones during construction, but giants could have.

    Then, the giants left behind monuments of ancient architecture and a few skeletons before either disappearing, going extinct, flying back to Nibiru, or degenerating into people of our height.

    However, from a scientific perspective, giant human ancestors can be lumped together with massive trolls and one-eyed ogre cannibals—there’s simply no reason to believe in any of these characters.

    For example, the famous photograph of a giant skeleton supposedly found in India is a photomontage. The Canadian illustrator known by the pseudonym IronKite admitted he created the image for a photo-manipulation contest on Worth1000. He didn’t expect his work to be widely circulated, or that thousands of alternative-history enthusiasts would use it as evidence of ancient titans.

    The origin story of this skeleton varies from version to version. Some claim it was found in India, while others say it was discovered in Saudi Arabia, confirming the existence of giants mentioned in the Quran.

    But this image, like many others, is simply a fake that was created for a contest and then unexpectedly went viral.

    Sometimes, the remains of gigantic humans are mistakenly identified as the skeletons of Gigantopithecus, massive extinct apes related to orangutans. These creatures, which could grow up to 3 meters (10 feet) tall, did indeed exist, but they are no more related to humans than modern apes are.

    And yes, if you compare the sizes of the remains of human ancestors with today’s population, you’ll notice a trend toward increasing, not decreasing, height over time. So, we are the giants compared to the people of the past, not the other way around.

    The “Missing Link” Has Never Been Found

    When Charles Darwin published On the Origin of Species in 1859, science had not yet discovered the intermediate forms that illustrate the possibility of one species evolving into another. Darwin considered this a weak point in his theory, but he believed that such organisms would eventually be found. And they were: a few years later, the skeleton of Archaeopteryx—a transitional form between reptiles and birds—was discovered.

    Opponents of evolutionary theory argue that there are no transitional forms between ape-like creatures and modern humans. Therefore, humans did not share a common ancestor with present-day primates and must have emerged through some other means. But this isn’t true: since Darwin’s time, so many transitional forms have been found that it’s impossible to remember them all.

    Cave People Had a Matriarchal Society

    The theory that women ruled in primitive societies was popular in the 19th century. It was promoted by ethnographer Johann Jakob Bachofen.

    In his book Mother Right, he built the following logical chain: those who possess property hold power. Since sexual relations in the Stone Age were random, determining the father of children was impossible, and they were raised solely by their mothers. Therefore, long-term intergenerational relationships were only possible between women. Mothers passed on their property to daughters, exclusively through the female line, and fathers did not participate in inheritance.

    Thus, women held more power in the past.

    This sounds quite reasonable, but Bachofen based his ideas not on precise data, but on… ancient myths. He saw echoes of matriarchy in the tales of Homer—in the stories of Queen Arete of the Phaeacians and the warrior Amazons. Thus, Bachofen’s theory was purely speculative. Nevertheless, his works were highly regarded by Friedrich Engels, which is why Soviet science avoided disputing the theory of matriarchy in primitive societies.

    However, modern studies of archaic societies show that matriarchy was extremely rare. Among the Tasmanians, Pygmies, Bushmen, Native Americans, Inuit, and other similar tribes, it was not typical. Sometimes women could hold high positions and even hunt alongside men, but there was no talk of them ruling.

    So, purely matriarchal societies were rare and were unlikely to have been widespread among early humans.

    Moreover, female dominance is not observed among closely related great apes.

    Some scholars, like anthropologist Marija Gimbutas, consider the widespread presence of so-called Paleolithic Venuses (stone and bone figurines of very full-figured women) to be evidence of matriarchy among early humans. These figures are associated with fertility and abundance cults.

    However, the fact that early humans made figurines of women doesn’t necessarily mean that women ruled society. Future anthropologists could just as easily argue that there was matriarchy in our time, given the number of images of curvaceous women posted daily on Instagram.

    Human Development Has Stopped Since the Stone Age

    Some people ask: if the theory of evolution is true, why don’t we observe the development of life forms? It seems as if changes have frozen in place—people today are no different from their great-grandparents. Even animals, birds, and plants around us are the same as centuries ago.

    However, living organisms (including us, humans) continue to evolve. For example, over the past 20 years, evolution has been observed in beetles, mosquitoes, bedbugs, and other pests, as well as various species of fish, among others. The most noticeable changes occur in bacteria, viruses, and unicellular organisms since they reproduce faster than all others.

    Humans also evolve, though not as rapidly, making these changes harder to observe.

    Research in molecular genetics supports this. For instance, evolution has helped Tibetans adapt to life at high altitudes—a process that took 100 generations.

    In short, if you want to witness human development as a biological species, you would need to live for a hundred thousand years or so. Only over such a long period will external changes become visible to the naked eye.

    Darwin Renounced the Theory of Evolution at the End of His Life

    The idea that Charles Darwin was the first to propose the animal origin of humans is deeply ingrained in popular consciousness. There’s also a belief that, in old age, Darwin supposedly rejected this heretical idea, but by then it was too late—his theory of evolution had already spread worldwide.

    But this is completely untrue. Firstly, evolutionary ideas existed before Darwin, proposed by figures such as Buffon and Lamarck and later championed by Haeckel, Huxley, and others. Even Leonardo da Vinci and Aristotle had hinted at such explanations for the origin of species.

    Secondly, Darwin did not disavow his theory or convert to religious faith on his deathbed, as some claim. This myth was invented by the evangelist Elizabeth Hope, known as Lady Hope, three decades after Darwin’s death.

    She fabricated a story about Darwin’s renunciation during a church service, and many believed it.

    Later, Hope published her fictional account in the national Baptist magazine The Watchman-Examiner, from where it spread worldwide.

    But Darwin never recanted his theory, and while he was not a militant atheist, he wasn’t particularly religious either. This was confirmed by his children, son Francis Darwin and daughter Henrietta Litchfield.

  • The Gene Switches That Set Humans Apart From Monkeys

    The Gene Switches That Set Humans Apart From Monkeys

    Researchers have pinpointed areas of the human genome that may be accountable for our unique human abilities. The DNA regions, known as HAQERs (Human Ancestor Quickly Evolved Regions), regulate the expression of protein-coding genes and play a role in the formation of our brain, digestive system, and immune system. They arose soon after the split between human and chimpanzee ancestry. Despite their usefulness, HAQERs may also contribute to disease.

    While chimpanzees and gorillas are primates like us, there are important ways in which we humans stand apart from them. Even though there are few variations between human and great ape protein-coding genes, progress has been slow in elucidating the genetic basis of our “essentially human” traits. However, there is mounting evidence that the most significant alterations occurred in regions of our genome that do not code for proteins and were previously thought to be meaningless “junk DNA.”

    Quick Shifts

    An American research group led by Riley Mangan of Duke University has started looking for evidence of human evolution in these non-coding regions of the genome. Until recently, the most promising candidates were thought to be DNA sequences that had stayed relatively stable for long stretches of evolutionary time yet changed markedly in our ancestors, with a shift in selection assumed to explain the rapid pace of molecular evolution.

    Mangan and his team, by contrast, searched parts of the genome that had previously been volatile. Human brain size, limb length, and facial proportions have, unlike those of other animals, varied considerably over time. The scientists used high-throughput sequencing and genome comparisons to hunt for DNA segments in these genomic regions that changed very fast after the chimpanzee and human lineages separated around 7.5 million years ago.

    Inherent HAQER in Our DNA

    Indeed, the team was able to single out almost 1,500 such regions. They were given the acronym HAQER, which stands for “Human Ancestor Quickly Evolved Regions.” According to the findings, these stretches of DNA have undergone some of the most rapid changes anywhere in the human genome. But when exactly did this dramatic evolutionary leap occur on the timeline of early human development? Or did it occur before the divergence from chimpanzees?

    After human ancestry diverged from that of the chimpanzee, we subsequently developed the HAQER regions.

    This was confirmed by comparing the 13 most relevant HAQER sequences from Mangan’s study with those of Neanderthals, Denisovans, chimpanzees, and the reconstructed genome of the presumed common ancestor of humans and chimps.

    The HAQERs therefore originated after our ancestors diverged from those of the chimpanzee, but before our lineage split from the Neanderthals and Denisovans, which means these sequences were also present in those earlier humans and prehumans.

    Regulating the Nervous System and Digestive System

    So, why do we need the HAQER sequences? The researchers refer to these snippets of DNA as “regulatory DNA” because of their switch-like function: they activate specific genes in certain cell types, at certain stages of development, and sometimes only when circumstances change, as Mangan’s colleague Craig Lowe explains. In effect, the HAQERs introduced new gene switches into the human operating system.

    Research has shown that these gene switches play an important role in shaping the human nervous system, digestive system, and immune system. The gene switches empower us to fine-tune our responses to shifting environmental conditions.

    Typical Human Illnesses May Be Triggered by HAQERs

    What caused HAQERs to emerge? Rapid appearance of genomic areas often has one of two causes: either they are the result of local mutations or they are so beneficial to a species that they become established via natural selection. The scientists discovered support for both hypotheses in the newly described DNA sequences, indicating that the most diverse sections of the human genome were sculpted by a combination of these two processes.

    The researchers speculate that HAQERs, in making us humans, not only provided us with favorable qualities like huge brains, but also formed the foundation for common human disorders. Conditions including schizophrenia, bipolar disorder, and unipolar depression may fall within this category.

    Mangan and his coworkers found that, although HAQER sequences are nearly identical across all people, some differences remain, and these variations tend to correspond with certain diseases. More study is needed to determine the nature of the connection.

  • Earliest Evidence of Cooking: 780,000-Year-Old Cooked Fish Teeth

    Earliest Evidence of Cooking: 780,000-Year-Old Cooked Fish Teeth

    Cooked by fire: As early as 780,000 years ago, early humans cooked their food by fire, as findings from Israel now prove. They are the earliest clear evidence of cooking among our ancestors. These are fossil fish teeth that show changes in their structure typical of controlled heating. This suggests that the early humans living in this area caught and cooked these fish in the nearby lake – presumably in some kind of earth oven, as the archaeologists report.

    For the development of our ancestors and their increasingly large brains, nutrition and the use of fire played a crucial role. This is because cooked food is easier to digest, and the body can better tap into the nutrients. Early humans were therefore able to get more energy from cooked or roasted meat, fish, and plant foods. They thus needed less time to obtain food and had free resources for cultural development.

    However, it is unclear how long early humans have actually been cooking their food. There are, it is true, one-million-year-old traces of Homo erectus fireplaces, but it is disputed whether the bones and plant remains found in them were merely burned or cooked in a controlled manner. Until now, the earliest clear evidence of cooking was around 170,000 years old and came from Neanderthals and Homo sapiens.

    Relics of thousands of fish

    Carp skull similar to those caught by early humans.

    But now, for the first time, archaeologists have found clear traces of cooking as early as the time of Homo erectus. The fossil evidence was discovered by Irit Zohar of Tel Aviv University and her team at the Gesher Benot Ya’aqov site in northern Israel, where stone tools, traces of fire, and food remains left by hunter-gatherers around 780,000 years ago have been found. In addition to animal bones, the site contains the remains of thousands of fish that were caught in nearby Lake Hula and then consumed.

    What is striking is that the more than 40,000 fish remains come primarily from only two species, the large, particularly nutritious barbels Luciobarbus longiceps and Carasobarbus canis. Curiously, however, the research team found hardly any bones of these fish, even though bones would normally be preserved; the finds consist almost exclusively of the barbels’ pharyngeal teeth.

    Traces of moderate heat

    In search of an explanation, Zohar and her team examined the fish teeth more closely using X-ray diffraction analysis. The crystal structure of the enamel thus made visible can reveal, among other things, whether the teeth were once heated and to what extent. In fact, it showed that a large proportion of the fish teeth found near the fireplaces had been exposed to temperatures of 570 to 930 degrees Fahrenheit (300 to 500 degrees Celsius).

    “The enlargement of the apatite crystals in the enamel of the fish teeth shows us that the fish were only exposed to moderate heat and were not burned,” explains co-author Jens Najorka of the Natural History Museum in London. This suggests that early humans cooked the lake-caught fish in a controlled way in the fire. “We can refute an alternative explanation that people consumed the fish fresh or dried and then only burned the remains, because then the enamel would have been more altered,” the researchers said.

    Cooking the fish could also explain why hardly any fish bones were preserved. Cooking softened the bones, which caused them to disintegrate more quickly over time.

    First evidence of controlled cooking

    According to the researchers, their findings suggest that early humans on the shores of Lake Hula ate cooked or steamed fish as early as 780,000 years ago. It’s the earliest evidence that our ancestors cooked their food in some way. Fish prepared in this way were not only nutritious and filling, but they were also available year-round, unlike many wild foods.

    The hominids of Gesher Benot Ya’aqov thus had an abundant source of food, even in winter. The ability to cook food marked an important milestone in evolutionary development, because it enabled the optimal use of available food resources. It’s quite possible that early humans at that time cooked not only fish but also various animal and plant foods.

    Cooked in an earth oven

    Because no fossil remains of the early humans of Gesher Benot Ya’aqov have been found so far, it is still unclear whether they were representatives of Homo erectus or another species. The cooking method they used is also still puzzling: no traces of cooking utensils from this period have survived, either at this site or elsewhere. However, archaeologists suspect that the people of that time cooked their fish in a kind of earth oven, as is still common among some traditional societies today. (Nature Ecology & Evolution, 2022; doi: 10.1038/s41559-022-01910-z)

  • Does Smiling Help Against Stress?

    Does Smiling Help Against Stress?

    It’s an old saying that laughter is good for you. People report feeling healthier and stronger after a good laugh. But is there any truth behind this? What happens when you are stressed and not in the mood to laugh? What good does smiling do for you? Does it make you live longer? The healing power of humor and laughter was recognized as far back as the Bible, and laughter’s remedial powers continue to be covered extensively in both the scientific and popular press. However, only a fraction of this subject has been systematically investigated.

    Suppressing pain and stimulating the immune system

    Nonetheless, it’s widely accepted that a good belly laugh stimulates our immune system. Scientists discovered this with 52 willing participants in a 2001 study. Several types of immune cells, including killer cells crucial for warding off illness, were substantially more active in the blood when the young men had just viewed a comedic film and laughed.

    Intense bouts of laughing have also been shown to alleviate pain. British scientists found that this is because laughter prompts the body to produce its own opioids, which blunt pain signals. Participants who had just been laughing showed a greater pain tolerance than those who had not.

    Yet another way in which laughing may alleviate stress is by lowering blood levels of the stress hormone cortisol, which is often elevated when people are under pressure. The levels of cortisol, however, fall significantly following a bout of laughing. This means that laughing immediately mitigates the stress response. Laughing also counteracts the suppressive impact of stress on the immune system.

    Put on a forced smile and you’ll see results

    Okay, but what about smiling? After all, it’s not often that you can laugh heartily and at length when under pressure. According to preliminary research, even a forced laugh or smile might have an impact. In one study, 22 participants were prompted to laugh, smile, or howl like wolves at various points.

    All participants were interviewed and given tests to measure their emotional states before and after each of these occurrences. The results showed that howling did not have any effect on the test subjects’ moods, but that forced laughing and smiling had a considerable positive effect.

    The mere act of raising the corners of your mouth

    Even when you merely raise the corners of your mouth in an artificial smile, it still helps. This is what a group led by psychologist Tara Kraft from the University of Kansas discovered. Scientists were curious to know whether or not the mere act of contorting the facial muscles in a smile had any beneficial effects, regardless of the test participants being conscious that they were smiling.

    Kraft tested this by subjecting 169 volunteers to stress: they had to hold chopsticks with their lips while completing computer-based tasks under time pressure. Half of the volunteers were instructed to smile, while the other half were told to keep a neutral expression throughout the experiment. Holding the chopsticks forced everyone to flex their lips and facial muscles into a smile-like position simply to keep the sticks in their mouths.

    The findings of this study demonstrate the physiological effects of smiling even when we are unaware of it: throughout the stress tasks, the heart rates of the participants whose faces were held in a smile stayed much lower than those of the participants who kept a neutral expression. The facial muscle contractions reduced not just objective measures of stress but also subjective assessments of it.

    Try forcing a smile the next time you’re stuck in traffic or going through any other stressful situation. This helps counteract the bodily manifestations of stress, in addition to enhancing your public persona right away.

  • Why Do We Involuntarily Touch Where It Hurts?

    Why Do We Involuntarily Touch Where It Hurts?

    Ouch! When you accidentally bump into anything or strain a muscle, you feel it right away. Instinctively, you go for the source of the pain. You try to stroke the pain away by gently rubbing your fingers over the hurt spot. But, why do we do that? The question is whether or not this touch has the potential to alleviate discomfort.

    Actually, yes. Rubbing the skin really can relieve acute pain. The nerve impulses triggered by this touch lessen the experience of pain, because the slow, repeated strokes are transmitted to the brain not as ordinary tactile stimuli but as a special, competing form of input. By grasping the area in pain, this new stimulus in a sense takes the place of the actual pain stimuli.

    The skin where this grasping or touching happens contains special nerve fibers called C-fibers. These thin fibers lack a myelin sheath and therefore conduct signals only slowly. Their endings are found in all the hairy skin areas of our body, and each C-fiber collects the signals from about 0.155 square inches (1 square centimeter) of skin and passes them on to the brain.

    There are two possible pathways for the onset of pain

    But how does rubbing the painful spot work in practice? The skin’s pain receptors are among the first to go into warning mode after a bump. There are two pathways for transmitting this pain signal to the brain. Extra-fast pain-sensing nerve fibers make sure we feel the damage the moment it happens, often as a sharp stab. In the case of a hot stove top, for instance, this early warning lets us avoid injury, or reach automatically for the spot where we banged ourselves.

    At the same time, however, the slower C-fibers transmit the pain signal that, once it reaches the brain, produces the dull, constant ache. By rubbing your palm over the spot that hurts, you send competing “stroking signals” from the same location. Even though the brain keeps receiving pain signals from that spot, the stroking impulses crowd them out to some extent and so act as a barrier against the actual pain.

    What makes self-touching so effective against pain?

    It’s worth noting that self-touch enhances the efficacy of these strokings. That, at least, is what the findings of a study led by Patrick Haggard of University College London suggest. The scientists had participants rate the intensity of a heat-induced ache in a finger after touching the hand themselves or after having it touched by another person. A 64% decrease in discomfort was seen only when the test subjects touched the area with their own hand.

    The intensity of our pain sensation is determined not only by the strength of the pain impulses that make it to the brain but also by how the brain combines those signals into its picture of the body. Apparently, self-touch aids the brain in assigning and integrating information from the damaged body region. Consequently, this tends to lessen the sensitivity to touch.

  • Why Is Chocolate So Addictive?

    Why Is Chocolate So Addictive?

    What makes chocolate so addictive? Chocolate, in any of its many forms (dark, milk, or white), is a food that is consistently well-liked by a large number of people. It shouldn’t come as a surprise that, for many people, chocolate is inextricably linked to happiness. They feel the urge to consume at least one portion of this delectable delight by melting it in their mouths each day. They turn into “chocoholics” (chocolate addicts).

    According to one study, people who are susceptible to chocolate may go through physiological changes similar to those seen in people who become dependent on substances like alcohol or drugs. Likewise, people who are addicted to chocolate have an unquenchable desire for it that nothing else can satisfy.

    More than 40 percent of women and 15 percent of men have a craving for chocolate that is comparable to addiction. In more severe cases, people may consume chocolate in secret or in large quantities, similar to the way that some people do it with alcohol.

    People who are addicted to chocolate state that they become irritable when they are unable to indulge in their habit. The desire for chocolates is very comparable to these more common types of addiction. But it’s not clear if these similarities are enough to show a link between chocolate and the complex physical and mental effects of addiction.

    Simply Having a Sweet Is Not Sufficient

    So, why is it that chocolate is so hard to turn down? There are a lot of hypotheses about it, but there isn’t much evidence to back them up, and there isn’t much agreement among specialists. Some researchers believe that the addictive qualities of chocolate are caused by the high amount of sugar that is contained in it. A preference for sweet foods is hardwired into the human brain, as well as the brains of many other animals. This is likely due to the high amount of energy that is contained in sweet foods.

    However, 75 percent of people who identify as chocoholics claim that other types of sweets are unable to satisfy their cravings.

    For some people, the pull may come from chocolate’s typical flavor; for others, from its texture and the way it melts on the tongue.

    But if that were the whole story, white chocolate should satisfy chocoholics too: its consistency is practically identical to that of regular chocolate, yet it lacks cocoa’s flavor and, possibly, the cocoa components that act on the body’s physiology. Experiments testing this have found that white chocolate does reduce cravings for chocolate, but only temporarily.

    This suggests that either a biologically active component or the typical aroma of chocolate plays a role in the development of chocolate addiction.

    Effects of Chocolate Addiction Are Comparable to Those of Cannabinoids

    Researchers who specialize in the study of addiction note that chocolate contains several substances that affect a person’s body and mind. Among them are precursors of anandamide, a molecule whose action in the brain is analogous to that of the psychoactive substance found in cannabis. Chocolate also contains stimulants such as caffeine, tyramine, and phenylethylamine.

    The euphoria you experience from eating chocolate may be due to the increased levels of anandamide in your brain. On the other hand, the physiologically active components of the chocolate may work together to produce this effect. There has been no investigation into whether or not this is the case. It is still unknown whether these substances found in chocolate are indeed sufficient to cause a biological impact.

    Nevertheless, it is undeniable that the cravings that are reminiscent of chocolate addiction are real, even if the exact mechanism of action is unknown. Medical professionals and nutritionists need to take this into account when attempting to alter the eating behaviors of their patients, especially those who are overweight.

  • Why do diets fail?

    Why do diets fail?

    Many people use the spring as an incentive to shed a few pounds before the bikini season begins. Unfortunately, many people quickly put back on the weight that they had just recently lost. But, why does this happen? Why do so many people end up failing on their diet?

    In many countries, most of the population (around 80 percent) has tried a diet at least once in the past. Yet, this is often with no lasting results. Because of the “yo-yo effect,” many people who try to lose weight end up gaining as much as they lost before. Diets are effective in the long run as a single weight-control treatment only in a few rare cases. There are several obstacles that prevent this. Current therapy methods can be improved upon only if the potential reasons for diet failure are understood.

    Cutting down on food intake for a short time isn’t effective

    For people seeking to lose weight, diets are typically a temporary way to get the weight off as rapidly as possible. Many diets are so complicated from the start that they can only be followed for a limited period of time. Most of the time, the diet is over once the weight has been lost. This is equivalent to having the patient’s primary care physician withdraw the medication when the ideal blood pressure is achieved.

    The term “diet” originates from the Greek for “way of life,” and that is telling. Maintaining a healthy weight requires long-term changes in diet and lifestyle that extend well beyond the weight-loss period, something most diet plans overlook.

    The diet failure rate is further boosted by the fact that many people set themselves impossible targets. Any weight loss plan that sets a weekly target of 11 pounds (5 kilos) is certain to fail. 

    As a result of the high likelihood of failing to achieve such weight goals, motivation wanes, and the diet is abandoned. There is no universally effective diet plan that can be applied to everyone. There is only a small percentage of overweight people who may successfully lose weight by following a certain diet, exercising regularly, and changing their eating habits.

    Both restrictions and freedoms taken to the extreme might be detrimental

    Diet plans with a behavioral framework that does not proclaim categorical bans like “no more chocolate” and does not make particular foods mandatory like the “cabbage diet” are actually more effective in the long run. Those diets help people avoid binge eating, which is often driven by the simplest urges when prohibitions or absolute rules are broken.

    The availability of over 200,000 different foods in supermarkets is a boon to quality of life. However, a high-calorie intake is another consequence of a wide food selection. Numerous studies have demonstrated that people tend to consume more calories when they have a wide array of options to choose from. 

    Extra-large servings of high-calorie meals also drive up calorie intake. The “discount effect” is at play here: the customer gets a lot of food for the money, which makes the offer attractive. But this is bad news for your weight, since the extra calories are rarely balanced out. After an oversized meal, most of us don’t simply eat less at the next one; we consume the same amount we normally would.

    In certain cases, there simply is no remedy

    Some causes of diet failure can’t be sidestepped. A genetic tendency, for instance, or the outward expression of undesirable behaviors that have already taken place over decades, cannot be reversed in such a manner. However, by making certain adjustments to your way of life, you may boost the odds of long-term diet success even under such circumstances.

    In addition to lots of healthy activity and training, keeping in active contact with family, friends, and the doctor who is treating you actually aids in weight loss. 

    For a diet to be successful, establishing attainable targets and a range of monitoring options is crucial.

  • Can You Increase Your IQ?

    Can You Increase Your IQ?

    A person’s intellect is not especially noteworthy if their IQ falls between 85 and 115; that puts them in the middle of the ability spectrum. Conversely, you need an IQ of 130 or more to be counted among the truly brilliant, and only around one person in every 50 reaches that level. Is intelligence simply inherited? Or is there a way to improve it through training?
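
    To put that "one in 50" figure in perspective, here is a minimal sketch. It assumes the usual convention that IQ scores are scaled to a normal distribution with a mean of 100 and a standard deviation of 15; that scaling is an assumption of this example, not something stated above.

    ```python
    # Illustrative only: assumes IQ scores follow a normal distribution
    # with mean 100 and standard deviation 15 (the common scaling convention).
    from math import erfc, sqrt

    def share_above(iq: float, mean: float = 100.0, sd: float = 15.0) -> float:
        """Fraction of the population scoring at or above `iq`."""
        z = (iq - mean) / sd
        return 0.5 * erfc(z / sqrt(2))

    print(f"IQ >= 130: {share_above(130):.2%}")           # ~2.3%, roughly 1 person in 44
    print(f"IQ 85-115: {1 - 2 * share_above(115):.1%}")   # ~68% land in the middle band
    ```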

    Exercising the brain

    There are no solid scientific findings pointing in this direction. Intelligence is one of those traits that remains surprisingly consistent over a person’s lifespan, and there is no known way to enlarge the underlying talent pool. However, the brain is capable of adapting to new challenges: improvements in memory performance, or in the spatial-orientation skills of taxi drivers, are two examples. Greatly enhancing a skill therefore takes consistent effort and tangible benefits.

    There are several brain-exercising apps available today. But what do they even do? All of the best brain exercise programs test your patience and resolve by putting you in sticky situations where you have to come up with a solution on the fly. Most of the programs also need lateral thinking and the development of intricate answers.

    This also encourages creative thinking and prevents individuals from falling into the same mental ruts over and over. The term “brain exercise” refers to a set of techniques used to improve cognitive performance. Many specialists, however, are uncertain as to whether this also results in a rise in IQ.

    Intelligence levels among young individuals are continually evolving

    However, whereas adults’ IQs don’t seem to fluctuate much over the course of a lifetime, children’s and teens’ do. Researchers have shown that there is still room for significant IQ change at that age: between 12 and 16, individuals may experience both declines and gains of up to 20 points. These IQ changes are accompanied by structural changes in the brain.

    However, experts are still unsure what causes these IQ spikes. Since it can’t be ruled out that education plays a role, intelligence may be trainable to some degree, at least in young people.

    Prematurely dismissing those who were formerly seen as poor achievers is unwise since their intelligence may have increased dramatically in only a few short years. There is a widespread practice, common in many nations, of deciding a child’s future educational trajectory from an early age.

    Logic games on the computer are good for your brain

    The idea that playing video games might help young people develop higher intelligence is a relatively recent discovery. This, however, does not apply to all video games but rather to specialized programs designed to improve reasoning.

    According to the study, children who were trained in this way both outperformed their peers on IQ tests over time and also saw their academic performance increase dramatically during the same period.

    Are they also smarter in everyday life?

    The benefits of having a high IQ in the classroom and the workplace are obvious. Yet, are there not also practical benefits to having a powerful brain? It is not always true that people with high IQs are also “smarter” in other areas of life.

    Highly gifted individuals often do not learn to use their exceptional abilities strategically. It is of little benefit to understand the precise mechanics of how a nail must be driven into stone and what happens to the metal as it goes in; you just have to strike the nail squarely with the hammer.

  • Can You Drink Water After Eating Cherries?

    Can You Drink Water After Eating Cherries?

    After eating cherries, you shouldn’t drink water, since doing so may upset your stomach. Many children have probably heard their grandmother say this more than once, and the advice persists to this day. But is it even true? Could washing your fruit down with water really be detrimental to your health?

    Carbon Dioxide

    To this day, only a small number of scientists have addressed this issue. It is believed that microorganisms, such as a significant quantity of yeast fungus that may be found on the cherry skins, are responsible for the problem.

    These microorganisms convert the sugar in the cherries into gas, namely carbon dioxide, which causes flatulence. Yeast fungi, however, are generally killed off by stomach acid. But if significant amounts of water are consumed along with the fruit, the stomach acid may be diluted and neutralized, rendering it useless.
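
    For reference, this is the standard overall equation for alcoholic fermentation by yeast (general biochemistry, not something the experts cited here spell out): each glucose molecule is split into two molecules of ethanol and two molecules of carbon dioxide, the gas in question.

    C₆H₁₂O₆ → 2 C₂H₅OH + 2 CO₂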

    Some Disagree

    However, the vast majority of experts disagree with this theory. When eating and drinking, the stomach constantly comes into contact with fungi and other microorganisms, not just via stone fruit or cherries. And once the food pulp, or chyme (the mixture of partly digested food, stomach juices, and digestive enzymes), has been swallowed, there is only a small window of time in the stomach for fermentation to get started at all.

    With or Without Water

    Whether or not you drink water after eating stone fruit, the following still applies: eating raw fruit on its own can cause flatulence even without any water. This is due to the bacteria present in your intestines and occurs in the normal course of digestion.

    You experience flatulence as a result of the fermentation process that breaks down the fructose from the fruit. In addition to carbon dioxide, fermentation in the colon also generates other digestive gases. This gas, which is likewise produced during the digestion of raw vegetables, can cause gastrointestinal discomfort as well as flatulence, water or no water.

  • Are Mosquitos a Threat in the Near Future?

    Are Mosquitos a Threat in the Near Future?

    In many different countries, mosquito infestations occur at various times of the year. The ones that occur throughout the summer originate from the ideal breeding conditions, which end with an explosion in the mosquito population. The itching and discomfort caused by mosquito bites are indisputable. When you scratch the bite, you irritate the surrounding tissue, which leads to the development of a red, raised lump on the skin. If you are bitten by a mosquito, does it mean that you are also at risk of contracting a disease?

    The Rise in Animal Transmitted Diseases

    The incidence of diseases transmitted by animals is growing at an alarming rate around the world, especially in the Far East. In the past, residents of Central Europe have been diagnosed with malaria despite never having left the area where they live. Even so, veterinarians and other specialists believe there is currently no cause for concern about mosquito-borne infection in this region.

    Researchers did document a few cases of virus transmission in the previous year, unusual as such transmission is. The Sindbis virus, a mildly dangerous virus that can occasionally result in meningitis, is carried by the common Culex pipiens mosquito.

    However, the population of the Asian bush mosquito, also known as Aedes japonicus, is increasing at an alarming rate, especially across Central Europe. Over the past few years, areas covering approximately 2,000 square kilometers (about 770 square miles) have been plagued by this highly active vector of diseases such as the West Nile virus. It wasn’t until 2012 that researchers were able to prove that a breeding population of this particular mosquito species had become established in Europe.

    The Asian tiger mosquito, scientifically known as Stegomyia albopicta, has already established a breeding population in the area. The bloodsucker known to carry exotic diseases has been linked to the transmission of viruses, including West Nile and tropical dengue fever. In recent years, the mosquitoes that are responsible for transmitting the dengue and chikungunya viruses to people have been discovered in southern Europe.

    Dog Skin Worm Is Carried by Mosquitoes

    The larvae of the canine skin worm Dirofilaria repens, a parasitic nematode, have been found in mosquitoes in Europe for the first time. The parasite was not previously native to this part of the world. Dogs remain its most common hosts, but in rare cases mosquitoes can transmit the worms to humans. As of yet, there have been no reports of human infections acquired in the area.

    So, people are still relatively safe for the time being. However, if global temperatures continue to rise, this may change in the future.


  • Why do we get wrinkled fingers after bathing?

    Why do we get wrinkled fingers after bathing?

    The occurrence of wrinkled fingertips after a bath or shower is something everyone is familiar with. Fingers and skin that have been in water too long become wrinkled and rippled. The good news is that the phenomenon goes away as soon as we get out of the bath and pat our fingers dry; they return to their normal smoothness. Okay, but how does this actually work? Why does our skin become “pruney” when we get wet, and then smooth itself back out as it dries?

    Evolutionary purpose of wrinkled skin

    So far, there are only speculative explanations for pruney skin. The most apparent one explains the moisture response through evolution: it dates back to a time when humans lived mainly as hunters and gatherers. The uneven skin texture made wet objects or prey easier to grab.

    The theory is analogous to a vehicle tire on the road: unevenness increases traction.

    The wrinkled skin on the feet and hands boosted a hunter’s chances of success. Fish, mussels, crabs, and other aquatic prey did not slip as readily from wrinkled fingers, allowing fishermen to keep a better hold on their catch, while the wrinkled skin on their feet kept them stable in the water: our ancestors did not slide as easily on underwater stones.

    The complex structure of skin

    The skin of our hands and feet differs from the rest of our bodies. Our skin is the biggest organ in our bodies. It covers up to 20 square feet (1.8 square meters) in total. However, not all skin is created equal.

    It feels different on our hands and feet than it does on the rest of our bodies: the skin of the extremities lacks hair and instead carries more touch receptors.

    The skin is a complex structure that serves many functions, including defense against external threats, insulation from the elements (such as moisture, wind, and dryness), and protection from UV radiation (by pigmentation) for the underlying layers of tissue. These functions are performed in addition to the skin’s role as a protective barrier.

    The epidermis, the topmost layer of our skin, is responsible for much of how our bodies react when exposed to water. It consists largely of dead skin cells, but these serve a number of important functions, including protecting us from dehydration in dry environments and limiting the amount of water we take in when we bathe. After soaking up water, the outermost layer of skin cells expands; in dry conditions, the cells are forced to condense once more.

    Elastic fibers as the backbone

    The structure of the keratin fibers of the outer skin cells in the contracted (left) and expanded (right) states. Water fills the area between the fibers. (Credit: Roland Roth, Myfanwy Evans)

    Researchers discovered the inner workings of the skin by employing a computational model. First of all, the topmost layer of skin is composed of a network of keratin fibers that arrange themselves in a gridlike lattice pattern.

    Keratin, moreover, is hydrophilic, meaning the fibers attract water molecules and interact with them. As a consequence, the skin cells swell and the keratin fibers within them expand and stretch.

    This stretching, however, costs elastic energy: pulling on the keratin fibers is analogous to straining a coiled spring. (Keratin is the same protein that forms hair and nails.) The resulting tension keeps the skin cells from expanding indefinitely; because the fibers resist further stretching, the skin can hold only a limited amount of moisture at any given time.
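
    To make the spring analogy concrete, here is a purely illustrative toy model; it is not the researchers' actual computation, and the parameter values are made up. Taking up water lowers the energy roughly in proportion to the swelling, while stretching the keratin lattice costs elastic energy that grows quadratically, like a spring, so the total energy is lowest at a finite amount of swelling rather than at unlimited expansion.

    ```python
    # Toy model only: why fiber elasticity puts a cap on swelling.
    # E(x) = -a*x        energy gained by binding water (more swelling, lower energy)
    #      + 0.5*k*x**2  elastic cost of stretching the keratin "springs"
    # The constants a and k are arbitrary, chosen for illustration.
    a, k = 1.0, 4.0

    def energy(x: float) -> float:
        return -a * x + 0.5 * k * x * x

    # Scan swelling values from 0 to 2; the minimum sits at x = a/k, not at infinity.
    best_energy, best_x = min((energy(i / 100), i / 100) for i in range(201))
    print(f"Lowest energy at swelling x = {best_x:.2f} (analytic optimum a/k = {a / k:.2f})")
    ```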

    Going back to normal

    Conversely, once we return to dry conditions and no more water is being taken up, the process reverses: the water molecules escape the cell, because there is now more water inside the cell than in its surroundings. The keratin fibers contract again, much as a stretched coil spring snaps back to its original state. The water our skin absorbed is expelled, and no long-term damage is done to the skin.

    A nervous system response

    What is obvious is that the neurological system is implicated in the wrinkling of the skin. Even after prolonged contact with water, people with nerve injuries do not show wrinkles in their fingers.

    Neurologists perceive this as an indicator that the shriveled skin is not a passive process.

    The sympathetic nervous system regulates several physiological processes as part of the autonomic nervous system. When the hands and feet are in contact with water for an extended period, it responds by narrowing the very small blood vessels in the fingers, toes, and soles of the feet. The skin tightens as well, and the outer layer thins and shrinks.

  • Why does our musical scale have eight tones?

    Why does our musical scale have eight tones?

    Melodies are created by stringing together a variety of tones. The distances between tones are called intervals, and they strongly influence whether a piece of music sounds harmonic to human ears. In our culture, tones are most often represented by the notes of the musical scale. An octave comprises eight scale tones, and counting all the semitones there are even twelve steps. But where did this division into eight or twelve steps come from in the first place? And why do some intervals on the piano sound more harmonic than others?

    Tones, seen from a purely physical perspective, are nothing more than sound waves of a certain frequency, so each tone can be assigned a frequency. C0, at the lower end of the audible range with a frequency of 16.35 hertz, is considered the lowest tone used in music. A tuner generates the A4 note, the concert pitch, at 440 hertz. The tone A5 lies eight scale tones, or one octave, higher, and its frequency is exactly twice as high: 880 hertz. This relationship holds for all tones; going up an octave always means doubling the frequency.
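
    As a small illustration of that doubling rule, here is a sketch assuming twelve-tone equal temperament with A4 tuned to 440 hertz (the tuning implied by the figures above): each semitone step multiplies the frequency by the twelfth root of two, so twelve steps exactly double it.

    ```python
    # Equal-temperament sketch: every semitone multiplies the frequency by 2**(1/12),
    # so the 12 semitones of an octave double it. Reference pitch: A4 = 440 Hz.
    A4 = 440.0

    def frequency(semitones_from_a4: int) -> float:
        """Frequency in hertz of the note this many semitones above (negative = below) A4."""
        return A4 * 2 ** (semitones_from_a4 / 12)

    print(f"A5 (+12 semitones): {frequency(12):.2f} Hz")   # 880.00 Hz, one octave up
    print(f"A3 (-12 semitones): {frequency(-12):.2f} Hz")  # 220.00 Hz, one octave down
    print(f"C0 (-57 semitones): {frequency(-57):.2f} Hz")  # ~16.35 Hz, the lowest tone cited
    ```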

    A musical instrument, or the human voice, never emits only the pure fundamental frequency of a tone; if it did, every instrument would sound exactly the same. The “pure” sound of a single frequency can be heard in computer-generated beeps, known as sine tones. When someone sings or plays music, other tones almost always resonate along with the fundamental, and these make up the characteristic sound of an instrument. These extra tones are called “overtones.” The fundamental tone of a person’s voice comes from the vocal cords, while the overtones come from the rest of the vocal tract, including the trachea.

    Overtones determine perfect intervals

    Every instrument, and especially the voice, has frequency ranges in which a particularly large number of especially strong overtones occur. These ranges are called formants. Formants matter most for the voice, since they govern how we perceive vowels.

    Furthermore, whether singing or speaking, the formants almost always correlate to a perfect interval. This is something that we are familiar with, thanks to the musical scale. It is reasonable to assume that our earliest ancestors had an innate preference for these tones and intervals. Different tonal systems from other civilizations show that the scale used here is not the only way to play notes.

    Intervals such as the octave and the fifth may be found in the music of practically every civilization on the planet. This suggests that the perception of these intervals is universal, or at the very least, shared by all individuals. It is very possible that the division of the musical scale as we know it originated from this intuitive awareness of “perfect” intervals. Certain tones and intervals just seem “correct” to our ears.

  • How Can One Learn to Like an Aversive Taste?

    How Can One Learn to Like an Aversive Taste?

    How is it possible to accustom ourselves to enjoy a flavor that previously had a repulsive taste? There is a widespread consensus among coffee consumers that coffee has an unpleasant flavor, especially when first sipped. Despite this, they identify themselves as coffee enthusiasts. Then they will tell you, “You just have to get accustomed to the taste.” “You learn to enjoy the taste,” even if it’s something unpleasant like a bitter alcoholic beverage, a hot dish, or the smell of cigarettes. But how does it even work? How can we overcome our dislike for flavors that are unpleasant?

    Of all human taste receptors, those for bitterness are the most numerous: around 25 distinct ones have been identified, whereas sweetness is detected by just one receptor. Frogs have roughly 50 bitter receptors and coelacanth fish about 70, while penguins taste only salty and sour flavors.

    A defensive purpose

    To begin, the ability to detect bitter flavors serves a defensive purpose. Poisonous or inedible plants are often bitter, which almost immediately discourages humans from consuming them. This protective reflex is still very potent in children, who tend to put all sorts of things in their mouths as they explore their surroundings. Despite this, there are a lot of individuals who really like things that make other people grimace. Coffee, for example, has a taste that is at first revolting, but most people grow to appreciate its flavor after giving it some time to grow on them.

    It is about experience and time

    For one thing, it’s a question of getting used to an unpleasant taste: the more often we are exposed to it, the less it bothers us after the first few encounters. This happens mainly because the initial warning signal loses force over time, provided the bitter flavor is not followed by unpleasant experiences. If you felt queasy after drinking even a little coffee, you would probably never get used to the taste of coffee.

    Positive reinforcement is what matters

    However, the concept known as positive reinforcement is the most significant factor in shaping this behavior. The mere absence of a bad experience is not enough to make the flavor desirable on its own; if, on the other hand, the experience is followed by a beneficial effect, that reward overrides the unpleasant warning signal. With coffee, for example, the caffeine supplies an energizing effect. Reinforcement can also come from doing the activity with other people, as when you have coffee and cake with friends or family.

    How does it work?

    To put it another way, our brain learns two things: first, that the flavor isn’t all that terrible, and second, that it has pleasant aftereffects. In the end, the initial aversion is gradually transformed into something else entirely.