Whether it’s disgust, illness, or a swaying ship deck, when we feel nauseous, our appetite disappears. Neurobiologists have now deciphered what happens in the brain during this process, uncovering a previously unrecognized mechanism. According to their findings, during nausea, specific cells in the amygdala fire and send appetite-suppressing signals to areas throughout the brain. Even intense hunger struggles to override their stop signals. However, these neurons do more than just cause loss of appetite, as reported by the team in “Cell Reports.”
There are various factors that can suppress our appetite: intense stress, an infection, motion sickness, or the sight of something repulsive can make us feel nauseous. We then lose our desire to eat, even if we’re actually hungry. This break from eating gives the body time and resources to focus on immediate issues.
But what triggers the typical loss of appetite during nausea?
While the brain regions and circuits that regulate normal feelings of satiety and hunger are known, it was unclear whether they were also responsible for nausea-induced loss of appetite.
To address this question, Wenyu Ding from the Max Planck Institute for Biological Intelligence in Martinsried and her colleagues focused on a brain region: the amygdala. It is a crucial center for processing emotions, particularly fear.
The central part of this brain area, however, also contains neurons that control satiety. But which of these neurons fire during nausea? And how far do their signals reach?
For their experiment, Ding and her team induced nausea in mice with a chemical agent and then observed, using fluorescent markers and electrodes among other methods, how different groups of neurons in the central amygdala responded. They also tested the mice’s behavior: Did they eat less when nauseous, even after they had fasted? How did nausea change their behavior overall?
A Completely New Type of Neuron
They found something unexpected: in addition to the known satiety neurons in the amygdala, there is another type of brain cell in this area. These so-called Dlk1 neurons fire not during satiety but during nausea. “We found that these neurons are activated by nausea-inducing agents, bitter tastes, and gastrointestinal disturbances,” the research team reported. They receive signals from many brain regions, including those that process disgust and unpleasant odors.
When these newly discovered brain cells fire, their signals override even strong feelings of hunger. When the researchers artificially stimulated the Dlk1 neurons, hungry mice stopped eating and even drinking. Conversely, switching the Dlk1 cells in the amygdala off caused the mice to eat even when they felt nauseous. According to Ding and her team, this indicates that a dedicated circuit with its own type of brain cell underlies nausea-induced loss of appetite, in mice and presumably in humans as well.
More Than Just A Loss of Appetite
But that’s not all. The team also found that these appetite-suppressing cells are unusually widely connected. While the known satiety neurons mainly target neighboring cells within the amygdala, the Dlk1 cells extend inhibitory projections to distant brain regions. This inhibitory effect reaches even the so-called parabrachial nucleus, a central interface between the cerebrum and the brainstem.
This has consequences that go beyond mere loss of appetite and could also explain typical human behaviors during nausea: when the mice felt nauseous, they were less social than usual and sought less contact with their conspecifics. “However, these effects are not due to anxiety or altered movement,” Ding and her team explain. Instead, the nausea-activated Dlk1 neurons are responsible for these signals as well.
The discovery of this specific appetite-blocking circuit provides valuable insights into the effects nausea has on our brains, as well as which neurons and circuits are involved. At the same time, it offers new insights into the complex regulation of our appetite and eating behavior.
The history of the periodic table is the story of the search for the order of the elements, the fundamental components of all matter and the building blocks of our existence. Dmitri Mendeleev, who hailed from the far reaches of Siberia, changed the world with a simple table: what he proposed in 1869 captured the order underlying the elements, and with it the foundation of all matter. The question of whether a fundamental order or regularity underlies the elements was first posed by ancient scholars, who tried to discern order among substances after realizing that some resemble one another more than others.
But Dmitri Mendeleev and his periodic table of the elements, which others developed further in the years that followed, represented the critical advance. In arranging the known elements by their atomic weights and chemical properties, Mendeleev also identified the underlying regularities.
While entire worldviews in physics and biology have crumbled since then, the Russian chemist’s periodic table has stood the test of time. Whether nuclear fission, the discovery of new groups of elements, or the understanding of atomic structure, every advance in chemistry and atomic physics has only confirmed and extended Mendeleev’s brilliant design. But what was the secret of his discovery?
What Is An Element?
Aristotle and Alexander the Great. Aristotle envisioned four primordial elements: earth, air, water, and fire.
Finding the Primordial Substance
What is the origin of all things? Why is metal hard and water liquid? Why do coal and sulfur behave differently than copper and silver? These are by no means modern-era issues; ancient people were already troubled by them. They, too, ceased to simply take the natural world for granted and yearned to comprehend its fundamentals.
Early Greek naturalists and philosophers agreed that some kind of order or unifying principle must be at work in the universe. Perhaps a primordial substance could explain why different materials behave and appear as they do. To the Greek philosopher Thales of Miletus, around 600 BC, the answer was obvious: water. He held that everything on the Earth’s disk was merely a different manifestation of this primordial substance. Today we know that even solid, hard metals can turn liquid when heated sufficiently. At most, it was debated whether the enigmatic primordial substance was instead fire, as Heraclitus postulated, or air.
The Four Elements of Aristotle
Aristotle unified these theories into his doctrine of four elements about 300 years later. He assigned each “element” a pair of qualities: earth, for instance, is cold and dry, water cold and moist. Because all matter was a combination of these original constituents, a substance’s properties and behavior depended on the ratio in which they were mixed. Above all, though, in Aristotle’s view there was still the quintessence, the fifth element, the ether, which gave life to everything else and was the source of all existence.
What Aristotle did not know at the time was that his ideas would influence the Western worldview for almost 2,000 years. The four fundamental elements were referred to by everyone from the Greeks and Romans to the alchemists and doctors of the Middle Ages and the Renaissance.
Robert Boyle: The Irish Rebel
Robert Boyle’s “The Sceptical Chymist” (1661). Credit: Van Pelt Library, Pennsylvania University
In the 17th century, a young Irishman named Robert Boyle, the youngest son of the Earl of Cork, brought this age to an abrupt end and ushered in modern chemistry. As was customary for young aristocrats at the time, Boyle was sent to the European continent at the age of twelve. He visited France, Switzerland, and Italy, studied under the greatest scholars of the day, and also made the acquaintance of many quacks and alchemists. In 1644 he returned to the British Isles and began his own experiments, boiling, evaporating, and distilling substances and examining them under the microscope. He was particularly intrigued by the behavior of gases.
Boyle increasingly questioned the element theories of Aristotle and Paracelsus, since they rested on conjecture rather than actual experiments. He observed that it makes a difference whether substances combine chemically or merely mix, and that many substances can be broken down to a great extent while some cannot. One such substance was phosphorus, which the pharmacist and alchemist Hennig Brand had discovered. Extracted from urine by evaporation and annealing, this substance appeared remarkably stable and resisted being broken down further.
By 1661, Boyle was certain that there must be more than four elements: any substance that cannot be broken down further by chemical means must itself be elementary. With this new concept, published in his book “The Sceptical Chymist,” the Irish scientist and natural philosopher ultimately put an end to the age of alchemy; modern chemistry had arrived.
Karlsruhe Congress Lit the First Spark
The First International Congress of Chemistry
Karlsruhe, September 3, 1860: The second meeting room is humming with activity. 140 men, most dressed in festive dark suits, shake hands, talk, and gesticulate. Then it finally begins. “As provisional chairman, I have the honor of opening a congress that is unprecedented, that has never existed before. For the first time the representatives of a single and indeed the newest natural science have gathered here,” says Karl Weltzien, chemist at the Polytechnic School in Karlsruhe and organizer of the meeting.
With these words, the first-ever International Chemical Congress was officially opened, and it went down in history: the Karlsruhe Congress served as a catalyst for some of the most important developments in modern chemistry, including the periodic table. The congress brought together the brightest minds of this young field, among them well-known researchers such as Robert Bunsen, Dmitri Mendeleev, August Kekulé, Jean-Baptiste Dumas, Carl Fresenius, and Louis Pasteur.
Atomic Weight Disarray
There was, however, still a great deal of bickering, debate, and fighting, for the congress was about nothing less than the basis of all things: the atoms. Or, more precisely, the atomic weights.
Atoms were already universally acknowledged as the fundamental units of matter. However, there were six different conventions for stating an atom’s weight. Some considered hydrogen the measure of all things and assigned it the atomic weight 1; others took oxygen as the reference and based all other atomic weights on it. This not only caused confusion but also made it difficult to determine whether an analyzed substance was an element at all.
A young Russian chemist named Dmitri Mendeleev was among the congress attendees. Despite coming from the remote town of Tobolsk in Siberia, east of the Urals, Mendeleev had been able to attend the prestigious university in Saint Petersburg, and after receiving his Master’s degree he was even sent on a two-year study tour to France and Germany. Working in the Heidelberg laboratories of Robert Bunsen and later of Gustav Kirchhoff, Mendeleev focused primarily on physical chemistry; his research included measuring molecular weights and the precise volumes of liquids.
An Italian Solution
On the third and final day of the congress, the Italian chemist Stanislao Cannizzaro gave his speech. To resolve the problem of atomic and molecular weights, he proposed a solution based on Amedeo Avogadro’s gas law, according to which an ideal gas always contains the same number of particles at the same volume, pressure, and temperature. By “particles,” Avogadro meant molecules, which for elemental gases such as hydrogen or oxygen consist of two atoms. “The vapor density therefore provides us with a means of unequivocally determining the weight of molecules of different substances, whether atomic or as a compound,” Cannizzaro said in his speech.
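Cannizzaro’s reasoning can be sketched numerically. Under Avogadro’s law, equal gas volumes hold equal numbers of particles, so the ratio of two gas densities equals the ratio of their molecular weights; taking hydrogen gas (H2, molecular weight 2) as the reference yields molecular weights directly. A minimal illustration, using rounded modern density values rather than Cannizzaro’s own data:

```python
# Cannizzaro's vapor-density method, sketched with rounded modern values.
# Avogadro's law: equal volumes of gas at the same temperature and pressure
# contain equal numbers of particles, so density ratios equal
# molecular-weight ratios. With H2 (molecular weight 2) as the reference:

def molecular_weight(density_relative_to_h2):
    """Molecular weight = 2 x vapor density relative to hydrogen gas."""
    return 2 * density_relative_to_h2

# Approximate vapor densities relative to H2 (rounded):
gases = {"oxygen (O2)": 16, "nitrogen (N2)": 14, "water vapor (H2O)": 9}
for name, density in gases.items():
    print(f"{name}: molecular weight ~{molecular_weight(density)}")
```

The same ratio argument is what allowed the congress attendees to settle, for the first time, whether a measured weight belonged to an atom or to a molecule.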
Dmitri Mendeleev in April 1861.
When Cannizzaro made this proposal, the 26-year-old Mendeleev was in the room. For him, this talk became the starting point of the path that would eventually lead to the periodic table. After all, he had been preoccupied for years with the question of how to categorize the roughly 60 known elements. As Mendeleev later wrote: “It is one of the functions of science to discover the existence of a general principle of order in nature and to find the reasons that govern this order. The cathedral of science requires not only material, but a design, a harmony.”
Mendeleev was not alone in holding this opinion; the congress in Karlsruhe, which led to the correction and unification of many atomic weights, set off a veritable race to see who could explain the properties and behavior of the elements first.
Mysterious Similarity Between Various Elements
Even before the 1860 Karlsruhe Congress, many chemists had noticed that although all elements are distinct, some resemble one another more than others, suggesting that certain elements might be related in some way. Chlorine, bromine, and iodine, for instance, are all intensely colored, gaseous or readily volatile, and react vigorously with metals and hydrogen. Other elements, such as lithium, sodium, and potassium, are soft metals that react strongly with water.
Even within these related groups, however, there are clear distinctions: lithium moves leisurely across the surface of the water, reacting with it and giving off hydrogen until it disappears. A lump of sodium whizzes furiously across the surface but does not catch fire. Potassium, by contrast, ignites the moment it touches the water, burning with a pale purple flame and spraying globules of itself everywhere.
The Law of Triads
The German chemist Johann Wolfgang Döbereiner had noticed these similarities as early as 1829. When he placed the related elements calcium, strontium, and barium side by side, the atomic weight of strontium turned out to be almost exactly the average of the other two; atomic weights thus appeared to play a role in these affinities. Lithium, sodium, and potassium formed another trio with this property. In the end, Döbereiner’s “Law of Triads” allowed 30 of the 53 elements then known to be grouped. The remaining elements, however, such as the gases oxygen, hydrogen, and nitrogen, refused to fit his scheme. Nor were his contemporaries convinced: they found the whole affair “insufficiently conclusive.”
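Döbereiner’s triad rule is easy to check with modern rounded atomic weights (the numbers below are today’s values, not Döbereiner’s own measurements):

```python
# Döbereiner's triad rule: the middle element's atomic weight lies close
# to the arithmetic mean of the outer two (rounded modern weights).

def triad_mean(weight_light, weight_heavy):
    return (weight_light + weight_heavy) / 2

# (light, middle, heavy) atomic weights for two of Döbereiner's triads:
triads = {
    "Ca-Sr-Ba": (40.1, 87.6, 137.3),
    "Li-Na-K": (6.9, 23.0, 39.1),
}
for name, (light, middle, heavy) in triads.items():
    predicted = triad_mean(light, heavy)
    print(f"{name}: predicted {predicted:.1f}, actual {middle}")
```

For calcium and barium the mean comes out near 88.7, close to strontium’s 87.6; for lithium and potassium it lands almost exactly on sodium’s 23.0.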
The Law of Octaves
Octaves by Newlands. Each horizontal row indicates a different group of elements.
Thirty-five years later, the English chemist John Reina Newlands attempted an elemental sorting of his own, aided by the atomic weights “tidied up” at Karlsruhe. Newlands began by listing the known elements in ascending order of atomic weight. He quickly noticed a pattern: every eighth element repeated the fundamental characteristics of the element seven places before it. This reminded Newlands of music and the principle of the octave, in which the same note recurs at a higher pitch every eight steps. He therefore called his discovery the “Law of Octaves.”
Newlands reorganized his series and now arranged the elements in columns with seven rows each to make the parallels even more obvious. There were now a startlingly large number of comparable elements in the table’s rows. In this way, magnesium was next to calcium, strontium, and barium, while fluorine was next to chlorine, bromine, and iodine.
But there were also striking outliers in between, such as silver in the row between lithium and sodium, or nickel and palladium, which he placed between chlorine and iodine. These placements made no sense, given how different the properties of these elements are from those of their neighbors. Other rows, especially the lowest, consisted entirely of disparate elements with no hint of a system or even of similarity.
Why? Newlands believed he was close to finally ordering the elements and that his system could not be wholly wrong. But he never got the problem of the “outliers” under control.
Meanwhile, thousands of miles to the east, someone else was doggedly working toward a solution to the same problem: Dmitri Mendeleev.
Mendeleev’s Periodic Law
“It’s all formed in my head”
Early 1869 in Saint Petersburg. Dmitri Mendeleev, by then a professor of pure chemistry at the University of Saint Petersburg, had been holed up in his office for weeks. Apart from his lectures, he made time for nothing: not for his hairdresser, not for his wife and two children, not for evening outings to the ballet or the theater. He was searching for something. Like many of his contemporaries, he was searching for the fundamental arrangement of the elements, and he felt he was getting close.
Ability of an Element to Bond
How many bonds can an element have? Symbolized right here in Brussels at the Atomium.
Like John Newlands, Mendeleev had long understood that there must be a link between atomic weights and the properties of the chemical elements. Unlike Newlands, however, he also considered valence. This characteristic of the elements, identified by Edward Frankland in 1852, describes their capacity to form bonds with other atoms, such as hydrogen atoms. Put another way, it is the number of “free arms” an atom has available for bonding.
Like Newlands, Mendeleev first arranged the elements in ascending order of atomic weight. He then turned his attention to the valences and noticed that these, too, appeared to occur in cycles: from lithium to fluorine they rose from one to seven, then sodium started again at one, and the valences climbed once more up to chlorine. There, however, the scheme of seven broke down: the following period was significantly longer, and elemental behavior clearly departed from Newlands’s rigid octave scheme at this point.
Dmitry Mendeleev in his office. Credit: Serge Lachinov
Elemental Card Game
So which scheme did the elements fit? This was exactly the question Mendeleev was still wrestling with when a friend visited him in his university office in February 1869. Turning to face the hollow-cheeked, red-eyed figure with wild hair and beard, the friend asked in horror what he was working on. Mendeleev explained that he had found that the elements form a periodic system, but that he could not yet cast it into a matching table or law. “It’s all formed in my head, but I can’t express it,” the worn-out chemist lamented.
Mendeleev, however, was persistent. “When you’re looking for something – be it mushrooms or some kind of law – there’s no other way than to keep looking and trying again,” he later said. His primary tool in these experiments was not a Bunsen burner or any other chemical apparatus but a stack of plain cardboard cards. On each he noted an element’s atomic weight and its most distinctive properties, such as its valence. Then he shuffled the cards on his table until a system finally emerged.
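The card game can be sketched in a few lines of code. The data below is an illustrative subset, using rounded modern atomic weights and the highest-oxide valences described in this section, not Mendeleev’s 1869 figures:

```python
# A toy version of Mendeleev's card sorting: each "card" holds a symbol,
# an atomic weight (rounded modern values), and a valence (highest-oxide
# valences, as described in the text). Sorting the cards by weight makes
# the periodic repetition of the valences visible.
cards = [
    ("Na", 23.0, 1), ("Li", 6.9, 1), ("F", 19.0, 7), ("O", 16.0, 6),
    ("N", 14.0, 5), ("C", 12.0, 4), ("B", 10.8, 3), ("Be", 9.0, 2),
    ("Mg", 24.3, 2), ("Al", 27.0, 3), ("Si", 28.1, 4), ("P", 31.0, 5),
    ("S", 32.1, 6), ("Cl", 35.5, 7),
]
cards.sort(key=lambda card: card[1])  # ascending atomic weight

valences = [valence for _, _, valence in cards]
print(valences)  # the 1..7 cycle appears twice, once per period
```

Read off in weight order, the valences run 1 through 7 from lithium to fluorine and then 1 through 7 again from sodium to chlorine, exactly the cycle Mendeleev noticed.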
The Seven Groups
Suddenly everything seemed straightforward: the elements fell into seven groups, each combining elements with related properties. Placed side by side, these groups formed the rows of the system, its periods, along which the atomic weights and valences of the elements increased. First came the alkali metals, then the alkaline earth metals. The “earth metals,” led by boron, were followed by the carbon, nitrogen, and oxygen groups. The halogens, including fluorine, chlorine, bromine, and iodine, formed the conclusion.
Mendeleev now understood that this was fundamentally the only way the elemental order could work. Yet at first glance his system, too, seemed to contain some anomalies that disturbed the overall picture.
What Made Mendeleev’s Periodic Table Unique?
The German chemist Lothar Meyer worked on a nearly identical periodic table almost simultaneously with Mendeleev. Although Meyer also grasped the fundamental concept of groups and periods, he could only come up with six groups. And like Mendeleev, he encountered a few elements that significantly disrupted the overall picture.
How Should Errors and Gaps Be Handled?
Beryllium, for instance, then thought to be the third-lightest element, belonged near the front. But it could not fit there, because the elements at the front were alkali metals, to which beryllium clearly did not belong. Tellurium, an element with an atomic weight of 127.6 and a valence of 2, also seemed misplaced: by atomic weight it should come after iodine, but its valence clearly matched that of the oxygen group, placing it before iodine and the other halogens.
Mendeleev, unlike Meyer, had no hesitation whatsoever. He was so confident in his scheme that he could only conclude the atomic weights of these anomalies had been determined incorrectly. Disregarding its accepted atomic weight, he immediately moved beryllium to the fourth position, and he placed tellurium in the oxygen group.
In addition, Mendeleev did something else that his peers found utterly outrageous: he left gaps in the periodic table where, in his view, elements that have yet to be discovered should be included. In his paper, Mendeleev stated that “We must expect the discovery of many yet unknown elements, for example … aluminum and silicon, whose atomic weight would be between 65 and 75.”
The Basic Arrangement of the Elements
Mendeleev published an article in the “Journal of Chemistry” in 1869.
Mendeleev published his periodic table on March 6, 1869, just a few months before Meyer, under the title “On the Relationship of the Properties of the Elements to their Atomic Weights,” in which he also explained atomic weight changes and gaps and stated the laws that underlie his periodic table:
When the elements are arranged according to their atomic weight, their properties exhibit periodicity.
Elements that share similar chemical properties have atomic weights that are either nearly the same (platinum, iridium, osmium) or that increase at regular intervals (e.g., potassium, rubidium, cesium).
The order of elements and groups in increasing atomic weight corresponds to their valences and, to some extent, to their properties.
The atomic weights of the elements can be used to predict some of their distinctive properties.
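The second law above, the “regular intervals” of atomic weight within a group, can be checked directly. The figures below are rounded modern atomic weights, and the steps are only roughly equal, as Mendeleev himself observed:

```python
# Atomic-weight steps within two groups of the periodic table
# (rounded modern values). The increments are roughly regular.
groups = {
    "alkali metals": [("K", 39.1), ("Rb", 85.5), ("Cs", 132.9)],
    "halogens": [("Cl", 35.5), ("Br", 79.9), ("I", 126.9)],
}
for name, members in groups.items():
    # difference between each element and the next in the group
    steps = [round(b - a, 1) for (_, a), (_, b) in zip(members, members[1:])]
    print(name, steps)
```

For the alkali metals the steps come out as roughly 46 and 47; for the halogens, roughly 44 and 47. The spacing is regular enough to predict where a missing group member should fall, which is exactly how Mendeleev sized his gaps.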
For Mendeleev, it was obvious that his periodic system reflected the basic arrangement of the elements, effectively constituting the natural law of chemistry, rather than simply being an arbitrary order.
Mendeleev won the race when his Periodic Table was published. He was the first to be successful in both meaningfully arranging the elements and illuminating the patterns underlying their properties. But his contemporaries didn’t exactly applaud him for it. The exact opposite was the case.
“Unheard of and Unproven”
A direct rival of Mendeleev was Lothar Meyer.
The responses ranged from apathetic coolness to outright rejection. After all, this Russian chemist from the far reaches of the country had dared, in effect, to expose a number of his most illustrious contemporaries and predecessors by accusing them of producing inaccurate results. Such behavior was unacceptable. Meyer publicly chastised his rival for his “unjustified speculations.” Others predicted that his system would not last long, since it would soon be outdated in any case. No one yet suspected what Mendeleev’s periodic table was about to bring to applied science.
Concerning the “Eka” Elements
Even so, Mendeleev refused to be deterred. No law of nature, he argued, however fundamental, is established all at once; its recognition is always preceded by many presentiments. In 1870, Mendeleev went a step further and made predictions about yet-to-be-discovered elements, declaring that below each of silicon, boron, and aluminum another element remained undiscovered and waiting to be found.
Mendeleev named these elements eka-aluminum, eka-boron, and eka-silicon, after the Sanskrit word for “one,” “eka,” and described their characteristics in advance, including their atomic weights, densities, the kinds of salts they would form, and even their melting points. Eka-aluminum, for instance, would be a silvery-white metal with an atomic weight of 68 and a density of about 6. For this, too, Mendeleev earned more derision than praise.
However, Gallium Does Exist
Gallium. Credit: W. Commons.
That would not change until November 1875, five years later: the French chemist Paul-Émile Lecoq de Boisbaudran found two violet spectral lines in the emission spectrum of a zinc ore that matched none of the known element signatures. Soon afterward, Boisbaudran succeeded in isolating an unknown silvery-white metal from the zinc blende. He named it gallium, probably in tribute to his native France. Only after he published his discovery did he and others realize that his gallium corresponded precisely to the eka-aluminum predicted by Mendeleev.
In 1879, the second element predicted by Mendeleev, eka-boron, was found by the Swedish chemist Lars Fredrik Nilson, who named it scandium. The third, germanium, was discovered by the German chemist Clemens Winkler in 1886 in a mineral from a mine near Freiberg, Saxony. By then, at the very latest, Mendeleev had been not only vindicated but formally acknowledged as one of the greats of chemistry.
But a few years later, something occurred that could potentially rock his entire periodic system once more.
Discovery of Noble Gases From Mendeleev’s System
Elements to Disrupt the Periodic Table
Lord Rayleigh’s experimental design was based on Henry Cavendish’s earlier research. Credit: Encyclopaedia Britannica, 1911
In a laboratory at the University of Cambridge in 1892, the physicist John William Strutt, Lord Rayleigh, leaned intently over his workbench. In front of him, oxygen bubbled through liquid ammonia before disappearing into a red-hot copper tube. From the other end, much smaller bubbles emerged. This gas was what Rayleigh was after: nitrogen, released when the oxygen reacted with the hydrogen of the ammonia.
Rayleigh had been trying for some time to measure the density of this gas more precisely than ever before, in an effort to finally pin down the atomic weight of nitrogen. He repeated the test several times to rule out methodological flaws, and he also employed a second technique in which the nitrogen was taken straight from the air by passing it over hot copper.
An Enigmatic Deviation
William Ramsay.
In his 1904 Nobel Prize lecture, the physicist described how “again a series in good agreement with itself resulted.” The densities obtained by the two methods, he reported, differed by a thousandth, which, though small, was completely beyond possible experimental error. The nitrogen extracted from ammonia was lower in density than that from the air. And the question arose whether the difference could be attributed to known impurities.
To get to the bottom of this, Rayleigh consulted William Ramsay, a Scottish chemist conducting research at University College London. Both suspected that atmospheric nitrogen might contain an unidentified element. Indeed, in no fewer than two experimental approaches, a tiny bubble of colorless gas always remained that was denser than nitrogen and could not be coaxed into reacting with anything. But what was it? Ramsay turned to spectroscopy, a then relatively new technique, and recorded the gas’s spectral lines, its chemical fingerprint.
An Element Without a Place
The lines that emerged from the gas analysis were distinct from all known elemental signatures; it had to be an entirely new, unexplored element. Ramsay and Rayleigh named the gas “argon,” after the Greek word “argos,” meaning inactive, and made their discovery official on January 31, 1895. A closer look revealed that argon’s atomic weight was just under 40. In theory, then, it would have to be placed in the periodic table between potassium (atomic weight 39) and calcium (atomic weight 40).
But there, of all places, Mendeleev’s table had no gap between the alkali and alkaline earth metals. Was his periodic table perhaps incorrect after all? If he had truly understood the law of the elements, how could there be an element that did not fit the scheme?
Formation of a New Group
The typical purple glow from Argon gas
This discovery threatened nothing less than the overthrow of the newly established periodic order. Ramsay, however, found a solution. Since the gas refused to react with anything, he concluded that argon must have a valence of 0. It therefore did not belong between potassium and calcium, whose valences differ. Instead, it had to be a member of a separate, as yet undiscovered group in the periodic table. Ramsay accordingly positioned argon after chlorine, opening an eighth group: the noble gases. But then there had to be other elements in this group, for no element in the periodic table stands alone.
And indeed, Mendeleev’s “law of the elements” proved eerily prescient once more. In 1895, Ramsay learned that a previously unidentified gas had been isolated from a uranium mineral in the USA, and shortly afterward he examined it in the spectroscope. The lines matched those of helium, an element that had been observed in the Sun but never yet classified on Earth. Ramsay had thereby established that helium was a noble gas as well; with an atomic weight of four, it took the top position in the group. A few years later, Ramsay found three more noble gases, naming them neon, krypton, and xenon and filling out the eighth group.
The Periodic Table Today
With the discovery of the noble gases, the elemental order Mendeleev identified passed its first trial by fire. From then on, the periodic table stood unshaken; not even the later discovery of further rare earth elements could topple it. The laws the Russian chemist discovered remain in force today.
New Knowledge Reinforces the Old Order
While later discoveries have upended entire worldviews in physics and biology, Mendeleev’s “fundamental order of the elements,” the periodic table, has in fact proven universal. Indeed, a number of significant advances in chemistry, from the structure of the atomic shell to the way elements bond, have only supported his system.
Starting from the concept of valences, for instance, Niels Bohr demonstrated in 1913 that the distribution of electrons in the atomic shell underlies the bonding behavior of the elements. His discovery simultaneously gave physicochemical grounding to Mendeleev’s groups: the elements of each group in the periodic table have the same number of electrons in their outer shells. These outer electrons largely determine an element’s properties, which is why Mendeleev was able to accurately predict the properties of as-yet-undiscovered elements.
Guiding Star for the Search for Elements
The “prediction” principle developed by Mendeleev is still employed by his successors today, with Mendeleev’s periodic law as the guiding star. When the first artificial elements were created in the 1940s, the big question was how to identify them and describe their properties. The International Union of Pure and Applied Chemistry (IUPAC) rules that a newly discovered or created element is not officially recognized and named until its properties have been described, which is barely feasible for the extremely unstable, rapidly decaying transuranium elements.
Today the periodic table holds 118 recognized elements. Inferring an element’s properties from its place in the periodic system has proven remarkably reliable, even though Mendeleev and his contemporaries could not have imagined some of the exotic properties of these elements. And more keep coming, produced in huge particle accelerators under the most extreme conditions. Nobody knows exactly where the upper limit lies or how many more elements the periodic table could hold.
Ouch! When you accidentally bump into something or strain a muscle, you feel it right away. Instinctively, you reach for the source of the pain and try to stroke it away by gently rubbing your fingers over the hurt spot. But why do we do that? And can this touch actually alleviate the discomfort?
Actually, yes. Gently rubbing the skin can genuinely ease acute pain. The nerve impulses triggered by this touch dampen the sensation of pain: the slow, repeated strokes are transmitted to the brain not as ordinary tactile stimuli but as a special class of input, and by occupying the painful spot with this new stimulus, they partly displace the actual pain signals.
The skin contains special nerve fibers, called C-fibers, that register this kind of touch. These thin fibers lack a myelin sheath and therefore conduct signals only slowly. Their endings are found in all the hairy skin areas of our body, and each C-fiber collects the signals from about 0.155 square inches (1 square centimeter) of skin and passes them on to the brain.
There are two possible pathways for the onset of pain
But how does rubbing the painful spot work in practice? After a bump, the skin’s pain receptors are among the first to go into alarm mode, and there are two pathways for transmitting the pain signal to the brain. Extra-fast pain-sensing nerve fibers make sure we feel the damage the moment it happens, often as a sharp stab. In the case of a hot stove top, for instance, this early warning lets us pull away before serious injury, or reach automatically for the spot we banged.
At the same time, however, the slower C-fibers transmit their own pain signal; when it reaches the brain, it produces the dull, constant ache. By rubbing your palm over the hurt spot, you then send positive “stroking signals” from the same location. Even though the brain is receiving pain signals from that spot at the same time, these stroking impulses are not blocked; instead, they act as a barrier against the actual pain.
What makes self-touching so effective against pain?
Notably, self-touch enhances the effect of this stroking. That, at least, is what a study led by Patrick Haggard of University College London suggests. The scientists had participants rate the intensity of heat-induced pain on a finger after touching the hand themselves or after having it touched by another person. A 64% decrease in discomfort was seen only when the test subjects touched the area with their own hand.
The intensity of our pain sensation is determined not only by the strength of the pain impulses that reach the brain but also by how the brain combines those signals into its picture of the body. Apparently, self-touch helps the brain assign and integrate information from the damaged body region, and this tends to lessen the sensation of pain.
Even in the womb, unborn babies respond to taste cues, learning what their mother eats via the amniotic fluid. Recently, scientists used ultrasonography to observe this response directly for the first time. The babies seemed to smile when they tasted sweet carrot, but their mouths scrunched up at the bitter-tart kale. The taste of kale causes the fetus (in the picture above) to pull a defensive face.
The unborn child’s sense of taste develops before its other senses, including hearing and sight. The first taste receptors appear in the eighth week of pregnancy, and by the 15th week the fetus can taste the amniotic fluid it swallows. In this way, it picks up on its mother’s eating habits. Numerous studies with infants provide evidence that these first tastes significantly influence what children want to eat as they grow up.
Vegetable Smackdown: Carrots vs. Kale
Now, researchers led by Beyza Ustun of Durham University have used high-resolution 4D ultrasound images to show how a fetus reacts to different tastes in the amniotic fluid. These images provide the first clear glimpse of an unborn child’s responses to various flavors.
One hundred pregnant women consumed a capsule of a test flavor on an empty stomach at 32 and 36 weeks of pregnancy. Each capsule contained either 400 milligrams of sweet carrot powder, 400 milligrams of tart, bitter kale powder, or 400 milligrams of a neutral-tasting control substance. To prevent her own reaction from influencing her child, the mother could not tell which flavor she was swallowing. Once the capsule had passed through the stomach, the researchers began documenting the baby’s responses by ultrasonography.
Unborn Babies Expressed Their Emotions Clearly
After consuming amniotic fluid, this unborn baby smiles, responding favorably to the sweet carrot powder its mother had previously consumed. (Image: Fetal Taste Preferences Study (FETAP)/Durham University)
Indeed, fetuses’ facial expressions were seen within 30 minutes after the mothers ingested the aroma capsules. In this little time frame, the aroma compounds had made their way from the small intestine into the circulation and then through the placenta into the amniotic fluid. The unborn babies’ mostly neutral facial expressions were altered in a distinctive manner depending on the exposed aroma.
When their mother had ingested the sweet carrot powder, the babies would open their mouths wide, as if smiling, or pucker their lips, as if sucking. The expression was different when the pregnant women were given the bitter kale: their unborn children squeezed their lips together and/or raised their upper lips. According to the research group, their faces mirrored the defensive expressions of a newborn child.
Watching the babies’ faces as they reacted to the sweetness of carrots or the earthiness of kale, and then sharing those moments with their moms, was a genuinely unforgettable experience, according to the team.
Perception of Taste in the Womb Has a Long-Lasting Effect
These findings provide strong evidence that fetuses can detect the flavors of their mother’s foods while still in the womb, and that fetal perception is advanced enough to discriminate between distinct taste cues from the mother’s diet.
Prenatal exposure to a variety of tastes helps shape a child’s food preferences, and according to the scientists, the potential long-term effects of these early sensory experiences are significant: a mother’s diet influences her child’s food preferences from a young age via this early exposure to flavors. The scientists now want to find out whether repeated exposure dampens the initially adverse responses through habituation. (Psychological Science, 2022; doi: 10.1177/09567976221105460)
When the weather turns colder, it’s time to fire up the fireplaces and stoves. We love the calming warmth of the fire and the crackling of burning logs all the more when it is chilly and dark outside. But why does wood crackle when it burns in the first place? And what produces the sparking mini-explosions that sometimes shoot out of the logs?
Tension, heat, and contraction
The cracks happen because tension builds up in the wood until a fracture forms. That tension arises because the heat makes the wood try to contract.
The cracked beams in an old alpine hut originate from the same principle. The moisture in the beam’s wood fibers gradually evaporates as the wood adjusts to the humidity of its surroundings.
As a direct consequence, the beam progressively shrinks as it dries, and it contracts more across the grain than along it. Because solid wood is not elastic enough to follow this movement, it tears, and over time visible fractures appear in the wood.
The situation is the same in a fire, except that everything happens much faster. The crackling and popping sounds arise whenever the wood structure tries to shrink but is held back by the strength of the wood itself. Eventually it gives in to the strain and fractures, with a noise reminiscent of a branch being snapped in two.
The flying sparks are caused by the resin
Sometimes, however, the crackling is especially loud, and sparks fly as if in a miniature explosion.
These loud bangs have another explanation: the bursting of resin pockets, cavities in the wood that hold the liquid resin the tree produces. The oily resin protects the tree from microbial infections because it contains compounds that kill microorganisms and seal wounds.
The heat of the fire causes the oils in the resin to evaporate and expand. The surrounding wood structure cannot contain the growing vapor pressure, so the resin pockets eventually rupture explosively. Because the escaping oils are combustible, they often ignite on contact with the flames, producing sparks and small explosions or pops.
Crackling is less common in hardwoods and fir
But why do different types of wood crackle to different degrees? One reason is the varying amount of resin in the wood. Pine, for example, has a high resin content and therefore produces a lot of crackling as it burns. Hardwoods, on the other hand, rarely produce these pops, since they contain virtually no resin.
Fir and spruce also sound noticeably different as they burn: fir wood contains practically no resin, while spruce wood is rich in oily resin compounds.
The structure and form of the wood
The structure and density of the wood also play a role in the crackles and pops. Light woods like spruce are more likely to crackle than heavier woods, because their wood is less sturdy and cracks more readily under tension. Heavy hardwoods like oak or beech, with their much denser and more robust structure, are less prone to it.
The shape of the log and the way it was cut from the trunk can also affect the crackling. Wood cut into smaller pieces is less likely to crack: smaller pieces warp less, the resulting tensions are lower, and the wood is therefore less likely to crackle.
If you keep reading in the dark, you’ll ruin your eyes. Many people probably heard this reprimand as children. But is there really anything to it? Can reading in poor light truly cause nearsightedness or other refractive errors?
As late as fifty years ago, people believed that genetics was the primary factor in nearsightedness and that environmental factors played only a very small role. Experiments with monkeys and birds showed, however, that this kind of impaired eyesight could be induced deliberately.
For instance, chicks were fitted with specially designed matte goggles that blurred their vision. As a direct consequence, the chicks’ eyeballs began to elongate. The image formed by the eye’s lens was then no longer projected precisely onto the retina, and the chicks became nearsighted.
Vision Impairment Leads to an Increase in Eyeball Size
The trials demonstrated that the retina needs a sharply focused image of fine details to prevent excessive growth of the eyeball. This also applies to human beings. If a child’s eye lens is cloudy at a young age, for instance, there is a chance they will become nearsighted as they grow older, because the eye attempts to remedy the apparent farsightedness.
A lack of light can have the same effect. To study it, scientists fitted chicks with a kind of sunglasses, forcing them to live in perpetual dim light. These chicks also became myopic, though to a far lesser degree than their counterparts wearing the matte goggles.
Myopia Is Spreading Like Wildfire Among the Student Population
What does this imply for humans? Does reading in a dark corner of the room or beneath the blankets cause long-term harm to the eyes? The consensus answer is “no.” Yet several studies have shown that the prevalence of myopia, or nearsightedness, has grown substantially over recent years and decades, particularly among students.
There is a strong connection between the total number of hours spent in education and impaired vision. For instance, there has been a substantial increase in the number of youngsters suffering from myopia in Asia as the education levels in that region increased and children spent more time in school and on their homework.
Dopamine Triggered by Sunlight
But to what extent is reading itself to blame for impaired vision like myopia? The results are inconclusive, but experts believe the more likely culprit is that youngsters spend more time sitting indoors, not the reading as such. Recent research indicates that encouraging children to spend more time outside may help reduce the risk of childhood nearsightedness.
The reason: exposure to strong sunlight stimulates the production of the neurotransmitter dopamine in the eye, which in turn stops the eyeball from elongating excessively. The more time children spend outside rather than indoors, the smaller the negative impact.
People From Cities vs. Rural Areas
According to the study, this connection also explains why children living in cities have a higher risk of having nearsightedness than children living in rural regions: children living in cities spend less time playing outdoors than their counterparts living in rural areas.
Experimental initiatives are already underway in China and Singapore to encourage families to take part in more outdoor leisure activities. The findings indicate that time outdoors can at least partly compensate for the strain that close-up vision places on the eyes, so bookworms who spend more time outside need not worry about damaging their eyes.
A single snowflake drifts gently to the ground, where it may join the others of its kind. One snowflake follows another, and then a few million more, until the whole rooftop is blanketed in white. Snowflakes may fall from the sky by the billions, yet, as the old saying goes, no two snow crystals are ever the same. Like people, each snowflake is supposed to be unique. But how accurate is that?
It is indeed exceedingly rare for two complex snow crystals to look precisely the same; so improbable, in fact, that you probably wouldn’t find an exact duplicate even if you examined every crystal ever formed. But that is the crux of the question: it depends on what you mean by “the same” and what you mean by “snow crystal.” The matter is a lot more complicated than it first appears.
When does a snowflake form?
An ordinary hexagonal dendrite.
Two snow crystals made of just a few water molecules can actually be identical. At that stage, however, the crystals are still far too small to be seen with the naked eye or even a light microscope.
As more water molecules attach to the two mini-crystals and they grow on their way from cloud to ground, the odds are that one crystal will grow differently from the other. Even a slight change in the environment, such as a shift in temperature or humidity, produces a snowflake with entirely new characteristics.
Different crystals also form even under exactly the same conditions, because the atoms never align with perfect regularity and are therefore prone to producing differences.
The development of a snowflake can be compared to stocking 10 items on a rack: there are 10 possible positions for the first item, 9 for the second, 8 for the third, and so on.
That gives more than three and a half million possible arrangements for only 10 items, and as the number of items grows, the total number of combinations climbs astronomically. The same holds for snow crystals: the bigger they get, that is, the more molecules they gather, the less probable it is that two identical forms will ever be generated.
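The rack arithmetic above can be checked directly. A minimal sketch (Python, for illustration only; the rack analogy is the article's own, and `arrangements` is just a thin wrapper around the factorial):

```python
import math

def arrangements(n: int) -> int:
    """Number of distinct ways to order n items on a rack:
    n choices for the first slot, n - 1 for the second, and so on,
    i.e. n! orderings in total."""
    return math.factorial(n)

# 10 items already allow 3,628,800 orderings; with 15 items the
# count exceeds a trillion. A growing snow crystal gathers vastly
# more molecules than that, so the possibilities explode.
for n in (3, 10, 15):
    print(n, "items ->", arrangements(n), "arrangements")
```

Running it shows how quickly the count outgrows intuition: 10 items give 3,628,800 orderings, and 15 items already give over 1.3 trillion.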
Do identical snow crystals exist?
All snowflakes have the familiar hexagonal symmetry shown in these 1902 photos. (Image: Wilson Bentley, “Monthly Weather Review” for 1902)
Almost a century and a half ago, the farmer Wilson Bentley (1865–1931) planted the idea that no two snow crystals are alike. Over his lifetime he examined innumerable snowflakes and took hundreds of photographs of these crystals under a microscope. Reflecting on them in 1922, he said: “Every crystal was a masterpiece of design, and no one design was ever repeated. When a snowflake melted, that design was forever lost.” It seems Bentley never came across two identical snowflakes in his lifetime.
For a long time, nobody ventured to cast doubt on this plausible idea. Then, in the 1980s, Nancy Knight of the U.S. National Center for Atmospheric Research published photos of two snow crystals that appeared exactly alike under the microscope. A dogma seemed to have been debunked.
But does likeness in appearance imply true identity? An optical microscope cannot resolve atomic detail, and if you sift through enough snow crystals, it is not hard to imagine finding two that are indistinguishable under the microscope. Nearly identical snow crystals can even be grown artificially, yet however similar they look, they will not be identical down to the atomic level.
When colorful, hard-boiled Easter eggs are everywhere, it’s that time of year again. But if you want to get at their insides, you first have to peel them, and that’s where the problem begins: all too often, large parts of the “egg white,” or albumen, stick to the shell, leaving your egg looking more like a ruin than a smooth, appetizing Easter snack. But why do some eggs peel so badly? Did they lack the cold shower after boiling? Or is that just a myth after all?
It is indeed a myth that cold water alone improves the peeling of an egg. Freshly laid eggs, only a few hours or at most two days old, do not peel any better after a cold-water bath than before.
Older is better for eggs – at least when it comes to peeling
The decisive factor for the peelability of eggs is not the cold water but the age of the egg, because age brings important chemical changes inside the shell. The egg contains many proteins, and it is these molecules that bind the shell membrane to both the shell and the albumen.
In freshly laid eggs this bond is still very strong; when you peel them, you therefore tear off pieces of the albumen along with the shell and shell membrane.
How does it work?
As the egg ages, it loses carbon dioxide, which slowly escapes through the fine pores in the eggshell. This changes the acidity level inside the egg, much as it does in sparkling water that has gone flat. The pH rises from near-neutral levels to 8 or 9, and the egg becomes more basic. (A pH of 7 is considered neutral, neither acidic nor basic.)
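Because the pH scale is logarithmic, the rise from about 7 to 9 is larger than it sounds: each pH unit corresponds to a tenfold change in hydrogen-ion concentration. A quick sketch (illustrative values only, not measurements from egg studies):

```python
def h_ion_concentration(ph: float) -> float:
    """Hydrogen-ion concentration in mol/L for a given pH,
    using the standard definition pH = -log10([H+])."""
    return 10.0 ** (-ph)

fresh = h_ion_concentration(7.0)  # near-neutral fresh egg
aged = h_ion_concentration(9.0)   # more basic interior of an aged egg

# Two pH units correspond to a hundredfold drop in
# hydrogen-ion concentration.
print(round(fresh / aged))  # -> 100
```

So by the time the egg reaches pH 8 to 9, its interior chemistry has shifted by one to two orders of magnitude, enough to change how the proteins bind.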
Binding of the shell skin to the protein weakens
When the pH in the egg changes, this in turn affects how the proteins interact with each other. The proteins change their binding properties; in the more basic range, their binding power is no longer as strong. As a result, the shell skin is also no longer as strongly bound to the protein. If you try to peel such an older egg, the shell skin easily separates together with the shell from the solid albumen – the egg remains intact and smooth.
With an already older egg, cold water can then actually help. If you plunge the egg into cold water while the boiled egg white is still hot, the shell contracts, creating tension that can help loosen the shell membrane from the egg white. But this is only a secondary effect; the aging of the egg matters more.
Cold water makes eggs spoil faster
In fact, the cold water can even do harm. The cold shower makes the egg white cool abruptly and contract, creating a partial vacuum under the shell that sucks air, water, and also bacteria through the porous calcareous shell into the interior.
As a result, eggs rinsed in cold water spoil faster, with the shelf life dropping to just a few days. Normally, a hard-boiled egg keeps for up to a month, and in the refrigerator for as long as six weeks.
Every sneeze is like a small explosion: high-pressure air shoots out of your mouth and nose together with droplets and mucus. The muscles in your face tense involuntarily, and you find yourself briefly closing your eyelids. But why do you close your eyes when you sneeze?
Maybe it’s to protect them from the bacteria and mucus released during the sneeze. Or does closing them shield the eyes from the internal pressure, which might otherwise make them bulge and suffer permanent damage? Both explanations circulate widely on the Web, along with the claim that you simply can’t keep your eyes open when sneezing because it’s a reflex. Does any of this hold water?
Not a true reflex
Strictly speaking, sneezing is not a true reflex; the sneeze stimulus is more nuanced and is not under pure spinal-cord control. Foreign objects in the nose, infections, and allergies are only a few of the many triggers of sneezing.
The irritants on the nasal mucosa are blasted out as the air is expelled at around 90 mph (150 kph), the head jerks forward, and we hear the explosive sound. Closing your eyes during a sneeze, however, is not strictly a reflex either: there are recordings of individuals sneezing with their eyes open.
How does it work?
There are two widely shared explanations for why people automatically shut their eyes when they sneeze. First, the whole body, not just the chest and breathing muscles, tenses up during a sneeze, which is why a drop of urine or some gas may escape in the process.
The face, and the eyes in particular, also tense during a sneeze, and as the muscles around the eyes contract, they pull the lids shut.
The idea that this response is meant to shield the eyes from the increased pressure, however, is nonsense.
Nerve network joins nose and eyes
However, there is indeed a nerve connection between the eyes and the nose. The nasociliary nerve divides into branches, one traveling to the top of the nose and another to the eyelids and the sclera (the whites of the eyes), creating a tight link between nose and eye. Because of this connection, an irritation in the eye region can also excite the branch leading to the nose and thereby trigger a sneeze.
Some individuals, for instance, sneeze in response to very bright light, demonstrating this intimate relationship between eye and nose. The phenomenon is attributed to the proximity of the optic nerve to a branch of the nasociliary nerve: the sun’s rays stimulate the optic nerve, and the resulting signal can spill over to the neighboring nerve fiber and make you sneeze.
In any event, shutting your eyes when you sneeze makes biological sense. Closing the eyes is one of our most basic forms of self-defense: any painful stimulus makes you shut them automatically, an instinct that protected this vital organ in our ancestors.