Category: Science

The scope of scientific study is vast, and it encompasses fascinating and intricate disciplines. Learn more about the most interesting topics in science.

  • History of Astronomy: The Discovery of Stargazing

    History of Astronomy: The Discovery of Stargazing

    Astronomy developed under the influence of two main factors: the invention of the telescope, which revealed previously undetectable celestial objects, and advances in mathematics, physics, chemistry, and computing, which were crucial in understanding astronomical observations. Early astronomy was closely linked to mythology, religion, and prophecy.

    Sky observations were used to measure time, organize calendars, determine the dates of religious holidays and make astrological predictions. For millennia, the Earth was believed to be the center of the universe. However, this approach did not fully explain the movements of the Moon, Sun, and the planets.

    Modern Astronomy

    In 1543, Nicolaus Copernicus published his heliocentric model, which placed the Sun, not the Earth, at the center of the universe and is widely held to mark the birth of modern astronomy. The telescope, invented in 1608, then revealed a host of new astronomical objects. In the 17th century, Johannes Kepler established the laws of planetary motion, and Isaac Newton explained the force of gravity that governs these motions.

    In the 19th century, the distances to the Sun and nearby stars were accurately measured, spectroscopy was introduced, and advances in theoretical physics provided explanations for such things as how stars generate their energy (through nuclear reactions at their centers).

    Before 1920, many thought that the universe consisted only of the Milky Way. But Edwin Hubble measured the speeds at which distant "star clouds" were receding, and it became clear that these star clouds were independent galaxies. Not only were these galaxies moving, but the speed at which they receded from one another increased with distance. This suggested that the universe had a beginning in which everything was packed together, and that the expansion was set off by an enormous explosion called the Big Bang.

    The findings of modern space astronomy support the Big Bang theory, but they also reveal that most of the universe is composed of dark matter and dark energy, whose nature and origin are still unknown.

    Historical Development of Astronomy

    2000 BC – Solar and lunar calendars

    The Babylonians produced the first calendars by combining the 365.25 days of the solar year with the 29.53 days of the lunar month. Similar calendars were used in ancient Egypt.

    Astronomical ceiling of Senenmut's tomb.

    1400 BC – Gods and zodiac signs

    The ancient Egyptians produced the earliest known zodiac symbols depicting the stars, planets, and their associated deities. Zodiac signs also appear in Babylonian artifacts.

    AD 90–168 – Ptolemy's universe

    The Greek astronomer Claudius Ptolemy formalized the Earth-centered view of the universe, which remained the norm until the 16th century.

    Ptolemy's model of the universe.

    1420 – Ulūgh Beg

    The Timurid ruler Ulūgh Beg established an observatory in Samarkand. A skilled mathematician, he measured the tilt of the Earth's axis with remarkable accuracy for his time.

    1543 – Heliocentric universe

    Copernicus argued that the Earth revolves around the Sun, not the other way around. This view made the Earth only one of the six known planets and undermined religious authority.

    Copernican model of the universe.

    1608-1668 – First telescopes

    German-born Dutch lens-maker Hans Lippershey builds the first refracting telescope, "for seeing things far away as if they were nearby". English scientist Isaac Newton builds the first reflecting telescope in 1668.

    1780s – William Herschel

    Herschel discovers Uranus using a homemade telescope (1781). He builds more than 400 telescopes, including a 1.26-meter reflector.

    William Herschel's telescope.

    1920s – Edwin Hubble

    Using the 2.5-meter Hooker telescope in the US, Hubble shows that the Milky Way is only one of a vast number of galaxies and that the universe is expanding.

    1930s – Radio telescopes

    Radio astronomy, a new branch of astronomy, began when the first radio telescopes detected radio waves from the Sun and distant galaxies.

    Grote Reber's radio telescope.

    1960s – present – Discovery of other planets

    Spacecraft are used to explore the Solar System. They orbit and land on other planets, moons, asteroids, and comets.

    1990 – present – Space telescopes

    Telescopes are placed in orbit above the Earth's atmosphere, from where they explore space by observing different wavelengths of light.

    Hubble Space Telescope
  • Oceanography: How Has It Developed Over Time?

    Oceanography: How Has It Developed Over Time?

    What is the historical development of ocean science, also known as oceanography? For a long time, the oceans were among the least understood parts of the natural world.

    But as knowledge of marine life improved and the topography of the ocean floor was mapped, new techniques were developed that led to a steady series of ocean discoveries.

    Origin of oceanography

    The first records of marine exploration date back 3,000 years to the Phoenicians, who made charts to locate places and used weighted lines to sound the depths of the ocean. The ancient Greek philosopher Aristotle was one of the first thinkers to study marine life, and other ancient Greeks developed instruments to help ships navigate when they were far from shore.

    But the open oceans remained unexplored by westerners until Christopher Columbus sailed westward in the 1400s, hoping to find land on the far side of the Atlantic. This paved the way for further voyages of discovery, such as the circumnavigation of the globe by Ferdinand Magellan's expedition, which eventually revealed the size of the world's oceans and allowed cartographers to make new and better maps.

    Scientific attempts to study what lies beneath the ocean's surface began in the 19th century. The first surveys were made with sounding lines and samples dredged up in nets. After the Second World War, sonar made it possible to map the ocean floor. More recently, advanced sonars, satellite techniques, and submersible fleets have helped us learn about marine life and ocean currents as well as the geography of the ocean.

    Historical timeline of oceanography

    1200-250 BC – Phoenician merchants

    The first Phoenician sailors dive to the seabed and chart safe channels. They use some of the earliest coins to make trade possible.

    500-200 BC – Greek marine science

    During this time, Aristotle describes many species, such as crustaceans, mollusks, echinoderms (spiny-skinned animals), and fish.

    80 BC – Antikythera mechanism

    The Greeks developed instruments such as the clockwork-like Antikythera mechanism to navigate at sea and plot the movements of the heavens.

    Antikythera mechanism. Image: Wikimedia.

    1492 – Columbus’ voyage

    The voyage of the Italian navigator Christopher Columbus to America shows that it is possible to cross the Atlantic by ship, opening the way to later voyages that circumnavigated the globe.

    1519-22 – Strait of Magellan

    Portuguese explorer Ferdinand Magellan leads the first expedition to sail from the Atlantic into the Pacific, discovering the Strait of Magellan on the way.

    A map of the Strait of Magellan.

    1769-71 – Captain Cook's Endeavour

    British navigator James Cook voyages to the southern oceans aboard HMS Endeavour. He charts New Zealand and becomes the first European to reach the east coast of Australia.


    1842 – Matthew Maury

    Considered the “father of oceanography”, the American naval officer Maury compiled nautical charts of the world’s oceans.

    Matthew Maury

    1872-76 – HMS Challenger

    HMS Challenger collects an enormous amount of data about the ocean during its journey around the world.

    1956 – Mid-ocean ridge

    American oceanographers Marie Tharp and Bruce Heezen map the Mid-Atlantic Ridge – the underwater mountain range running along the floor of the Atlantic.

    Marie Tharp

    1960 – Diving to the bottom

    The Trieste bathyscaphe (submersible ship) dives 10,911 meters down into the Mariana Trench in the Pacific to make the first dive to the deepest part of the ocean.

    1968 – Deep sea drilling

    Rock samples from the Mid-Atlantic Ridge show magnetic striping, confirming that the ocean floor is actively spreading.

    Deep-sea drilling, 1968.

    1977 – Ocean floor

    Marie Tharp and Bruce Heezen make the first accurate relief map of the floors of all the world’s oceans using data recorded by sonar.

    1984 – Nautile

    The Nautile submersible was used to film the wreck of the RMS Titanic and to search for the flight recorders of Air France Flight 447, which crashed in the Atlantic Ocean in 2009.

    Nautile

    2000-10 – Marine census

    The Census of Marine Life, cataloging the diversity of life across the oceans, was completed in 2010, ten years after it began. Estimates of known marine species rose from about 230,000 at the start of the census to at least 244,000 by its end. The Megaleledone setebos octopus is one of its many interesting discoveries.

    2012 – Deepsea Challenger

    Canadian filmmaker James Cameron descends to the floor of the Mariana Trench in the submersible Deepsea Challenger and films the life there.

    A deep-sea exploration submarine.
  • History of Sound Recording: The Inventors and the Devices

    History of Sound Recording: The Inventors and the Devices

    What is the story of the invention of sound recording and the first sound devices? Until just over a century ago, the only music people listened to was live performances. The development of technology for recording and replaying sound has not only changed the way we listen to music but has also made broadcasting, filming, and audio archiving applications possible.

    The Invention of Audio Recording

    Frenchman Edouard-Leon Scott's phonautograph of 1857 was the first instrument capable of recording sound: a moving needle traced sound vibrations onto a soot-blackened surface. In 1877, the American Thomas Edison invented the phonograph, the first instrument that could both record and play back sound. These primitive sound recorders worked mechanically: sound vibrations collected by a horn moved a needle that scratched a groove into a disc or cylinder.

    In the 1920s, the invention of the microphone ushered in the electrical age of sound recording. Soon after, sound was reproduced in high quality and volume through powerful loudspeakers driven by electromagnets.

    After 1945, music was recorded on vinyl records that rotated at 33 or 45 revolutions per minute (rpm); earlier records were played at 78 rpm. Magnetic instruments were also developed that recorded sound as varying magnetic patterns rather than physical grooves on a disc.

    The next milestone was digital sound recording, which enabled more powerful and flexible systems. Later, compact discs and digital audio formats such as MP3 made it possible to store large amounts of music on small devices and to download music via the internet.

    From the Music Box to Digital Audio

    1815 – Multi-Cylinder Music Box

    First produced in Switzerland in 1815, this music box contains a rotating cylinder with pins that pluck the teeth of a steel comb. In 1862, a system with interchangeable cylinders was invented to play different tunes.

    A music box.

    1857 – Phonautograph

    Edouard-Leon Scott invented the first instrument capable of recording sound, but it could not play it back.

    1876 – Automatic Piano

    The automatic piano became famous when it was shown at an exhibition. This instrument contained an electromagnet and a paper music roll.

    1877 – Edison’s Phonograph

    Thomas Edison's phonograph was the first instrument that could both record and play back sound. Sound vibrations are picked up by a horn and recorded on a cylinder covered with tin foil.

    1888 – Gramophone

    The gramophone, invented by Emile Berliner, uses flat shellac discs. Unlike cylinders, these discs can be copied many times over by pressing them from a metal master.

    An early gramophone.

    1898 – Magnetic Voice Recorder

    Danish engineer Valdemar Poulsen invented the telegraphone, the first instrument to record and play back sound magnetically. Using a steel wire wound around a cylinder, it recorded the changes in the magnetic field caused by sound vibrations.

    1925 – First Microphone

    Microphones that pick up sound vibrations replaced recording horns. The vibrations are converted into electrical signals, and those changing signals drive the recording needle.

    1931 – Magnetic Tape Recorder

    German inventor Fritz Pfleumer devised magnetic tape for recording sound. A tape recorder registers fluctuations in the electrical signal on the magnetic coating of a moving tape; the AEG company developed Pfleumer's invention into the Magnetophon.

    1948 – Vinyl Records

    Vinyl records are introduced that can hold longer recordings. Spinning at 33 or 45 revolutions per minute instead of 78, they offer much longer playing time and better sound quality.

    1970s – Portable Cassette Players

    German-Brazilian inventor Andreas Pavel's 1972 Stereobelt was a small, portable, battery-powered cassette player with headphones. In 1979, Sony introduced the Walkman, the portable music player that made the idea mainstream.

    1982 – Compact Disc (CD)

    CDs store large amounts of audio data that is read back with a laser. They soon replaced vinyl records, which scratched easily.

    1999 – MP3 Player

    This device uses digital recordings stored as computer data, so music can be transferred instantly from a computer to a personal player.

  • Invention of the Clock and the History of Time

    Invention of the Clock and the History of Time

    The modern perception of standardized time is shared across the most distant lands. It combines knowledge of the astronomical calendar, based on the movements of the stars and planets, with state-of-the-art clockwork and the latest technology for recording and measuring relatively short intervals of time.

    Humankind was probably aware of the passage of time at the dawn of reason, but it was only with the beginning of settled agriculture around 8000 BC that a proper understanding of the seasons and daily changes over the course of a year became important. Prehistoric monuments around the world, including Stonehenge in England, clearly show that the seasons could be tracked by the setting and rising of the sun.

    The Invention of Time and the Clock

    The need to measure small intervals of time only emerged in the advanced civilization of ancient Mesopotamia around 2000 BC, probably due to religious, ritual, and administrative requirements. Sundials were used to track the time of day roughly, while shorter intervals were measured by the dripping of water or the flow of sand through a fine gap.

    The earliest weight-driven mechanical clocks probably appeared in Europe in the 2nd millennium AD. A single clock mounted on a public structure, such as a church, was sufficient for an entire community. Mechanical clocks became portable after the invention of the mainspring around 1500, and their precision was greatly improved in the late 17th century. The Industrial Revolution, which brought faster travel and telegraphic communication, eventually made it imperative to keep a common record of time across large areas.

    2000 BC: First Calendars

    The ancient Babylonian civilization developed the earliest known form of calendar. The year was divided into 12 months based on the lunar cycle, with an extra month added periodically to align the lunar and solar cycles. Other civilizations developed similar calendars.
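
    The arithmetic behind such lunisolar calendars is simple to check; here is a minimal sketch using round modern values (365.25-day solar year, 29.53-day lunar month), which are not from the source:

    ```python
    # Why lunisolar calendars need an occasional 13th month.
    SOLAR_YEAR = 365.25    # days in a solar year
    LUNAR_MONTH = 29.53    # days in a synodic (lunar) month

    lunar_year = 12 * LUNAR_MONTH        # ~354.4 days
    shortfall = SOLAR_YEAR - lunar_year  # ~10.9 days of drift per year

    # After roughly three years the drift adds up to a whole lunar month,
    # so an extra month realigns the lunar and solar cycles.
    print(f"12 lunar months = {lunar_year:.1f} days")
    print(f"extra month needed every ~{LUNAR_MONTH / shortfall:.1f} years")
    ```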

    1600 BC: Water Clock

    Although probably developed in Mesopotamia, water clocks (clepsydra) were popular in Greece and Rome. Graduated marks kept track of the level of water in a vessel with a small hole in the bottom.

    1500 BC: First Shadow Clocks

    The first shadow clocks, developed in Babylon and Egypt, tracked time by the shadow cast by an upright stick called a gnomon.

    520 BC: Candle Clocks

    As mentioned in a Chinese poem, the first candle clocks told the time, even at night, through the slow, steady burning of a candle or incense stick.

    AD 800: Hourglass

    These sand-based clocks work on the same principle as water clocks. Their first written mention dates to the 14th century, but they were probably invented in Europe, or at least introduced there, as early as the 9th century.

    1088: Clock Tower of Su Song

    Using a series of complex gears that tracked astronomical cycles, the Chinese scholar Su Song built a clock tower whose mechanism is believed to have anticipated later advances in European clockmaking.

    Su Song's clock tower.

    13th Century: Weight-Driven Mechanical Clocks

    The earliest known mechanical clocks, found in English cathedrals such as those in Salisbury and Norwich, used a falling weight on a chain to drive gears whose rotation was regulated by an escapement and an oscillating mechanism.

    1430: Spring-Driven Clocks

    Clocks powered by an unwinding coiled spring could be made much smaller than weight-driven ones. The locksmith Peter Henlein used this technique in his early portable watches.

    1656: Huygens’ Pendulum Clock

    Dutch inventor Christiaan Huygens used the regular oscillations of a weighted pendulum to make a clock that kept time to within a few seconds per day.
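
    The regularity Huygens exploited follows from the period of a simple pendulum, T = 2π√(L/g), which depends only on the pendulum's length and local gravity. A short sketch (assuming standard gravity; the numbers are illustrative, not from the source) shows why a pendulum beating seconds is about a meter long:

    ```python
    import math

    # Length of a simple pendulum with a given period: L = g * (T / (2*pi))^2.
    g = 9.81  # standard gravity, m/s^2

    def pendulum_length(period_s: float) -> float:
        """Length in meters of a simple pendulum with the given period."""
        return g * (period_s / (2 * math.pi)) ** 2

    # A "seconds pendulum" completes a full swing in 2 seconds.
    print(f"{pendulum_length(2.0):.3f} m")  # ~0.994 m
    ```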

    1759: Marine Chronometer

    British clockmaker John Harrison perfects a spring-driven chronometer capable of keeping precise time over long periods at sea, allowing precise longitude calculations on board a ship for the first time.

    1927: Quartz Clock

    The first electronic quartz clock kept time using the rapid, regular vibrations of an electrically driven quartz crystal. It could measure time to within a fraction of a second per day.

    1947: Atomic Clock

    This device uses transitions in the internal structure of atoms of elements such as cesium, which occur at an extremely stable frequency, to measure time with great precision.

    1967: Defining the Second

    One second was redefined as the duration of 9,192,631,770 periods of the radiation corresponding to the transition between two energy levels of the cesium-133 atom.

    The 1970s: Digital Timekeeping

    The use of liquid crystal displays to show changing digits in digital devices represented a revolution in everyday timekeeping.


    Sources:

    • Landes, David S. Revolution in Time: Clocks and the Making of the Modern World. Cambridge: Harvard University Press, 1983.
    • Landes, David S. "Clocks & the Wealth of Nations." Daedalus, Spring 2003.
    • Lloyd, Alan H. "Mechanical Timekeepers." A History of Technology, Vol. III, edited by Charles Joseph Singer et al. Oxford: Clarendon Press, 1957, pp. 648–675.
  • History of Anatomy: Its Origins and the Timeline

    History of Anatomy: Its Origins and the Timeline

    The study of biological structure, or "anatomy," is fundamental to understanding how the body works. Early anatomists dissected dead bodies to satisfy their curiosity about even the smallest structures. Later, technology like the microscope helped doctors make detailed maps of the body.

    Origins of Anatomy

    In the ancient world, anatomists dissected and studied animals, because human bodies were thought to be sacred and not to be touched. As a result, even Galen (129–200 AD), Rome's most successful physician, circulated false ideas about human anatomy, because his accounts were based on animal anatomy. Only centuries later, when anatomists were permitted to examine human cadavers, could Galen's ideas be tested directly.

    Once artists such as Leonardo da Vinci could depict bodies in precise, realistic drawings, each new anatomical publication could present newly named and sketched structures. The Flemish anatomist Andreas Vesalius defined the era with his work De Humani Corporis Fabrica.

    After the invention of the microscope around 1600, anatomists could eventually see that organs were made up of cells. The discovery of X-rays around 1900 started a new era in the study of anatomy. Today, powerful electron microscopes can display minute structures, and new imaging techniques can produce 3D images of internal structures without the need to cut open the body.

    Historical Timeline of Anatomy

    1600 BC – Mummification

    In ancient Egypt, bodies were mummified. For religious reasons, the organs were removed and placed in canopic jars to preserve them for the afterlife.


    500 BC – Early Greek Anatomy

    The Greek physician Hippocrates held that dissecting and examining dead animals was a good way to learn about the human body.

    AD 180 – Galen on Circulation

    Galen, a physician born in Greece, thought that the body constantly produced new blood; his view was not overturned until William Harvey demonstrated the circulation of the blood in the 17th century.

    12th Century – Islamic Physicians Refute Galen’s Wisdom

    In the Islamic world of the Middle Ages, there were no rules against dissecting and studying the human body. In fact, physicians like Ibn Zuhr performed dissections and autopsies regularly.

    Ibn Zuhr confirmed some of Galen’s work on human anatomy (which was based on the Barbary macaque).

    The 1300s – Mondino de Luzzi

    In 1315, the physician Mondino de Luzzi performed the first public dissection of a human cadaver. However, his catalog of anatomy was full of old, erroneous ideas.

    Late 15th Century – New Observations

    Leonardo da Vinci began his study of the human body alongside physicians who argued against Galen. Italian physician Berengario da Carpi's Anatomia Carpi ushered in a new era of original anatomical observations.

    1543 – Father of Anatomy

    The Flemish anatomist Andreas Vesalius performed his own cadaver dissections in order to draw accurate pictures for his De Humani Corporis Fabrica.

    1665 – Compound Microscopes

    Anatomists such as Marcello Malpighi, Jan Swammerdam, and Robert Hooke used sophisticated microscopes to record cells, capillaries, and tissues.

    1770 – Microtome

    The microtome was invented to cut tissue into very thin, almost transparent slices. This process allows tissue samples to be examined under powerful light microscopes.


    19th Century – Comparative Anatomy

    After Charles Darwin's theory of evolution was published in 1859, many anatomists compared the structures of different species in search of evidence of common descent.

    1895 – X-ray

    German physicist Wilhelm Röntgen used his newly discovered X-rays to image the bones of his wife's hand, demonstrating a way to examine internal bone structures without dissecting the body.

    The 1940s – Origins of the MRI Scan

    In 1946, American physicists discovered nuclear magnetic resonance, a way to detect signals from atomic nuclei that later made it possible to image the soft internal structures of the body.

  • History of Measuring Instruments

    History of Measuring Instruments

    In everyday life, exact measurement is rarely essential: a wooden bowl may be enough to divide grain evenly. But scientists interested in the size of microscopic objects have needed precise instruments. Simple or complex, measuring instruments have been made for all sorts of purposes throughout history. The oldest standard measuring instrument may be the grain measure: fixed quantities of grains such as wheat or barley were used as standard units of mass in ancient times. So let's take a look at the invention and historical development of measuring instruments.

    — Did you know?
    The earliest known measuring instruments used by humans date back thousands of years and include tools like cubits, which were used by ancient Egyptians for measuring length, and balances for weighing objects. These early instruments laid the foundation for more sophisticated measuring tools.

    Global Measurement Standard

    The concept of standardized units of measurement became more widespread during the late 18th century. The metric system, which introduced a decimal-based system of measurement, was officially adopted in France in 1795 and later spread to many parts of the world, providing a universal standard for measurement.

    In scientific experiments or studies, measurements must be made with an appropriate level of rigor and accuracy to guarantee that the results are reliable. Scientists need measuring instruments that use universally defined standard units to get acceptable margins of error. Today, almost all countries use the International System of Units (SI) – the modern form of the metric system – which was introduced in 1960.

    Measuring Rod – 2650 BC

    A copper-alloy bar discovered in Nippur is the oldest known measuring rod and one of the oldest known measuring instruments. The bar is thought to have served as a measuring standard and to have been made around 2650 BC. This graduated rule was based on the Sumerian cubit of around 518.5 mm (20.4 in).

    Lead Weight – 250 BC

    Ancient Greek lead weights.

    Greek merchants traditionally used rectangular lead pieces as standard weights. These varied from a few millimeters to a few centimeters in size and were inscribed with Greek letters.

    — Did you know?
    Ancient civilizations made significant contributions to measuring instruments. The Greeks improved the accuracy of measurements and introduced concepts like the water clock (clepsydra) for measuring time.

    Roman Set Square – 1st Century BC

    The Roman set square was an important measuring instrument for Roman builders, enabling them to create perfectly square blocks. The example shown comes from the ancient city of Herculaneum in Italy. Builders would place the square against a surface and draw a line, then flip the square and draw a second line; if the two lines coincided, the angle was a true 90 degrees.

    Jade Weight – 100 BC

    Chinese jade weight. Science Museum Group, CC BY 4.0

    In early Chinese society, precious minerals such as jade were used as a standard of weight. The ancient jade sample below was a standard unit of weight.

    Circumferentor – 1676

    Simple theodolite, Italian, 1676 | Science Museum Group Collection, CC BY 4.0

    Before the invention of the more precise theodolite, surveyors and architects used the circumferentor to measure angles and vertical and horizontal distances.

    Vernier Caliper – 17th Century

    In 1631, Pierre Vernier invented a sliding scale for making small measurements with high precision. The principle of the Vernier scale is used unchanged in its modern counterparts.
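
    The idea can be illustrated with a hypothetical reading (the numbers below are illustrative, assuming a 10-division vernier with 0.1 mm resolution):

    ```python
    # A vernier reading combines a coarse main scale with a fine vernier scale.
    main_scale_mm = 23       # last main-scale mark before the vernier's zero
    aligned_division = 7     # vernier line that lines up with a main-scale line
    least_count_mm = 0.1     # resolution of a 10-division vernier

    reading_mm = main_scale_mm + aligned_division * least_count_mm
    print(f"{reading_mm:.1f} mm")  # 23.7 mm
    ```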

    Spring Scale – 18th Century

    A spring scale. Amada44, CC BY-SA 3.0

    Originally introduced in the 18th century, the spring scale works with a spring that stretches or compresses in proportion to the applied force, i.e. the weight. The dial can be graduated in units of mass (e.g. kilograms) or force (newtons).

    Cased Balance – 18th Century

    In the 18th century, the beam balance was used in medicine and science, but small portable scales called cased balances were also used to weigh coins.

    Scale – 18th Century

    The equal-arm balance still used today appeared in the 18th century. It is based on the logic that an unknown weight is balanced against known weights until an equilibrium point is reached.

    Micrometer – 18th Century

    Watt's micrometer measuring device with a screw tip, 1776. Probably the first screw micrometer ever made. (Science Museum Group)

    Invented in the 18th century, the first micrometers ushered in the era of precision engineering; these adjustable screw devices enabled the precise measurement of small lengths, read off at the point where the screw met the object being measured. Watt's micrometer dates from 1776 and is probably the first screw micrometer ever made.

    Standard Weights – 19th Century

    In the past, many nations used the pound as the standard unit of weight, but after the 19th century, countries switched from pounds to kilograms.

    Brass Half-Circle Theodolite – 19th Century

    A theodolite is used to measure vertical and horizontal angles and is an important tool for surveyors. The instrument's telescope is focused on a distant object, whose position is defined by vertical and horizontal scales.

    A theodolite of the transit type, 1910. Colgill, CC BY-SA 4.0

    Surveyor’s Chain – 19th Century

    Land surveyors began using the measuring chain invented by Edmund Gunter around 1620. In a surveyor's chain, links of fixed length are attached to one another. A Gunter's chain has 100 links joined by rings and measures 66 feet (about 20 meters) in length; each link is 7.92 inches (201 mm) long.

    Gunter's surveying chain.

    Nesting Cups – 19th Century

    Standard cup-shaped weights were used as balancing weights in mechanical balances; each cup nests inside the next larger one.

    Laser Distance Meter – 21st Century

    A laser distance meter fires a laser beam at a distant object and measures the time the beam takes to reflect off the object and return. In 1993, Leica Geosystems introduced the first handheld laser distance meter, the Leica DISTO, revolutionizing distance measurement.
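
    The underlying principle is time-of-flight ranging: distance equals the speed of light times the round-trip time, divided by two. A minimal sketch (the timing value is hypothetical, chosen for illustration):

    ```python
    # Time-of-flight ranging as used by a laser distance meter.
    C = 299_792_458.0  # speed of light, m/s

    def distance_m(round_trip_s: float) -> float:
        """Distance to the target from the measured round-trip time."""
        return C * round_trip_s / 2

    # A target ~15 m away returns the beam in about 100 nanoseconds.
    print(f"{distance_m(100e-9):.2f} m")
    ```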

    Laser Spirit Level – 21st Century

    This construction instrument projects a laser beam that defines a level plane, indicating true horizontal (or vertical) lines along the beam.

    Modern Micrometer – 21st Century

    A modern micrometer closes around the object being measured and functions as a caliper for very small distances. A graduated scale runs around a movable screw, with a measuring rod at the front.

    Analytical Balance – 21st Century

    Analytical balances, the most sophisticated modern digital scales, can measure tiny fractions of a gram; they are highly sensitive and must be shielded from air movement, dust, and vibration.

    Conical glass container – 21st century

    The conical glass flask (Erlenmeyer flask) is used as a hand-held vessel in which chemical reactions are carried out in experiments where the total volume does not need to be known precisely.

    Graduated pipette – 21st century

    Glass pipettes graduated in fractions of a milliliter can measure liquid volumes precisely, drop by drop. They are used to measure and transfer an exact volume of liquid between containers.

    Historical measurement standards include the platinum-iridium cylinder, known as the International Prototype of the Kilogram, which served as the standard for the kilogram until the redefinition of the kilogram based on fundamental constants. The meter was originally defined as one ten-millionth of the distance from the North Pole to the equator, but it is now defined in terms of the speed of light.

  • Timeline of Nuclear Technology: 1895 to 1954

    Timeline of Nuclear Technology: 1895 to 1954

    The term nuclear is used as a prefix for anything related to or constituted by the atomic nucleus, such as nuclear physics, nuclear fission, or nuclear forces. Nuclear weapons derive their destructive force from the energy of the atomic nucleus, as in the atomic bomb. The timeline below covers the history of nuclear technology.

    How Did It All Begin?

    Lise Meitner and Otto Frisch made a shocking discovery over Christmas 1938 that instantly transformed nuclear physics and eventually led to the atomic bomb: a uranium nucleus really had split in two, something previously considered impossible.

    The Timeline of Nuclear Technology

    • 1895: The cloud chamber, later used to track charged particles, was first developed, and Wilhelm Röntgen discovered X-rays. The medical world was quick to exploit the potential of the new discovery: within five years, the British Army was using a mobile X-ray unit to find bullets and shrapnel in wounded soldiers in Sudan.
    • 1898: Marie Curie discovers the radioactive elements radium and polonium.
    • 1905: Albert Einstein developed the theory of the relationship between mass and energy.

    • 1911: Georg von Hevesy comes up with the idea of using radioactive tracers. The idea is later applied to medical diagnostic devices, and von Hevesy wins the Nobel Prize in 1943.

    • 1913: The radiation detector is invented.
    • 1925: The first cloud chamber photographs of nuclear reactions are recorded.
    • 1927: Herman Blumgart, a Boston doctor, used radioactive tracers for the first time to diagnose heart disease.
    • 1931: Harold Urey discovered deuterium, the heavy hydrogen found in all natural hydrogen compounds, including water.

    • 1932: James Chadwick proved the existence of the neutron.
    • 1934: On July 4, 1934, Leo Szilard filed the first patent application for a method of producing a nuclear chain reaction, the principle behind the atomic bomb.
    • December 1938: Two German scientists, Otto Hahn and Fritz Strassmann, demonstrated nuclear fission.
    • August 1939: Albert Einstein sent a letter to President Roosevelt describing German atomic research and the possibility of an atomic bomb. The letter encouraged Roosevelt to establish a special committee to investigate the military implications of atomic research.

    • September 1942: The Manhattan Project is launched in great secrecy to create the atomic bomb before the Germans do.
    • December 1942: Enrico Fermi and Leo Szilard achieved the first self-sustaining nuclear chain reaction in a laboratory under the squash court at the University of Chicago.
    • July 1945: The United States detonated the first atomic bomb at a site near Alamogordo, New Mexico.
    • August 1945: The United States dropped atomic bombs on Hiroshima and Nagasaki.

    • December 1951: The first usable electricity from nuclear fission is generated at the National Reactor Testing Station, later named the Idaho National Engineering Laboratory.
    • 1952: Edward Teller and his team build the hydrogen bomb.
    • January 1954: The first nuclear submarine, the USS Nautilus, is launched. Nuclear energy makes submarines true "divers," since they can operate underwater for months at a time. The development of this nuclear vessel was the work of a team of Navy, government, and contractor engineers led by Captain Hyman G. Rickover.
  • Why Is Earth the Only Planet with Continents?

    Why Is Earth the Only Planet with Continents?

    The Earth is to date the only known planet to have continents. How they formed and evolved, however, is not yet clear. A new study conducted by a team from the Geoscience Research Institute at Curtin University in Australia fills this gap: Giant meteorites could be at the origin of this terrestrial “shaping”.

    The theory is not new. For decades, meteorite impacts have been suspected of having contributed to the formation of the Earth’s continents. And for good reason: These impacts were particularly frequent during the first billion years of our planet’s history. But until now, there was little evidence to support this hypothesis, which remains subject to debate. By examining zircon crystals in some of Earth’s oldest rocks in Western Australia, researchers have finally been able to find an answer.

    The scientists discovered a piece of evidence for these massive meteorite impacts by closely examining microscopic crystals of the zircon mineral in rocks from the Pilbara Craton in Western Australia, which is the best-preserved piece of the planet’s ancient crust. Isotopic analysis of the oxygen in the zircons revealed that the Archean Pilbara Craton (4 to 2.5 billion years ago) was formed in three stages.

    Initiated by the Melting of the Surface

    Zircons are very often used for dating because they are very resistant minerals, withstanding even erosion and metamorphism; they thus preserve a faithful record of ancient geological processes. In particular, the proportions of the different oxygen isotopes make it possible to estimate past temperatures. As the meteorites hit the Earth's surface, a "top-down process" took place: the melting of rocks started near the surface and continued deeper, creating "geological scars" along the way.

    The researchers identified three groups of zircons, each corresponding to one of the stages that led to the formation of the continents.

    This would have begun with giant meteorite impacts, similar to the one responsible for the extinction of the dinosaurs. The giant impacts set off mechanisms that fractured the Earth's crust and established prolonged hydrothermal alteration through interaction with the ocean.

    How Did It All Happen?

    • Stage 1 zircons form two distinct age groups: A giant impact around 3.6 billion years ago – coinciding with the oldest, oxygen-18-poor zircons – triggered massive mantle melting to produce a thick mafic-ultramafic core (a core very rich in magnesium and iron). A second cluster of oxygen-18-poor zircons, dating to 3.4 billion years ago, is contemporary with impact spherules identified as the earliest physical evidence of these giant impacts on Earth.
    • Stage 2 zircons (3.4 to 3 billion years ago) mostly have oxygen-18 content similar to that of the mantle; this low oxygen-18 content indicates that they crystallized from magmas formed near the base of the evolving continental core.
    • Stage 3 zircons (from 3 billion years ago) have a higher oxygen-18 content than the mantle, indicating “efficient recycling” of rocks that were deposited on top of the crustal rocks.

    In summary, the continents formed on Earth because giant meteorites struck the planet, melting its outer shell. The impacts also released the pressure in the underlying mantle, melting it and creating an oceanic plateau. Once it reached a sufficiently large size, this oceanic plateau in turn melted at its base to form granite, which is what all the continents are made of.

    Why Don't Other Planets Have Continents?

    Between 4.1 and 3.9 billion years ago, the Earth (like the other terrestrial planets) underwent a notable increase in meteorite impacts. This hypothetical period in the history of the Solar System is called the "Late Heavy Bombardment". Why is the formation of continents exclusive to the Earth when the other rocky planets, as well as the Moon, were also bombarded? Those other bodies had little or no water left by the time the flow of impacts waned, and the granite that makes up continents needs water, along with energy, to form.

    Our planet was originally just a huge ocean of magma. It is essential to understand the stages of formation and evolution of the continents since they are now home to the majority of the Earth’s biomass, all the humans, and almost all the important mineral deposits of the planet.

    These deposits are the result of a process called crustal differentiation, which began when the first landmasses were formed. The continents are home to critical metals such as lithium, tin, and nickel, products that are essential to the emerging green technologies needed to mitigate climate change.

    The data associated with other areas of the ancient continental crust appear to reflect processes similar to those identified at the Pilbara Craton.

  • Nuclear Fission: Everything You Need to Know

    Nuclear Fission: Everything You Need to Know

    Nuclear fission is the splitting of a heavy atomic nucleus into two medium-heavy atomic nuclei by bombardment with neutrons. In this process, further neutrons are released and energy is given off, known as nuclear energy. Nuclear fission is a special form of nuclear transformation. It was discovered in 1938 by Otto Hahn, Fritz Strassmann, and Lise Meitner.

    Nuclear fission converts an atomic nucleus into new nuclei. For example, when a neutron hits uranium-235, it is transformed into uranium-236, which splits into two nuclei within fractions of a second. There are many possibilities: a uranium nucleus can decay into pairs such as lanthanum and bromine, selenium and cesium, or antimony and niobium. In total, more than 200 fission products of uranium are known.

    Each nuclear fission releases two or three neutrons. In general, bombardment with slow neutrons can split heavy atomic nuclei (such as uranium and plutonium) into medium-heavy nuclei, releasing a tremendous amount of energy.

    Why Does Nuclear Fission Release Energy?

    Nuclear fission releases energy because the fission reaction converts mass into energy. Unlike the combustion of fossil fuels, nuclear fission involves no chemical reaction, so no CO2 is released into the atmosphere.

    In nuclear fission, the mass of the initial nucleus plus the absorbed neutron is greater than the total mass of the nuclei created, including the released neutrons: a mass defect occurs. According to the relation E = mc², published by Albert Einstein in 1905, this reduction in mass corresponds to the energy released.

    When a uranium nucleus fissions, an energy of about 3 × 10⁻¹¹ joules is released. This may seem little, but it refers to the decay of a single atomic nucleus. A kilogram of uranium-235 contains about 2.6 × 10²⁴ atoms; if all of them fissioned, the energy released would be roughly 8 × 10¹³ J, about 2.5 million times the energy released when a kilogram of hard coal is burned.
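
    As a rough consistency check on these figures, here is a short calculation (a sketch with approximate values: ~200 MeV per fission and ~29 MJ/kg for hard coal, neither taken from the source):

    ```python
    # Back-of-envelope check of the fission-energy figures above.
    AVOGADRO = 6.022e23
    MOLAR_MASS_U235 = 235.0    # g/mol
    E_PER_FISSION = 3.2e-11    # J, roughly 200 MeV per fission

    atoms_per_kg = 1000.0 / MOLAR_MASS_U235 * AVOGADRO  # ~2.6e24 atoms
    energy_per_kg = atoms_per_kg * E_PER_FISSION        # ~8e13 J

    COAL_J_PER_KG = 2.9e7      # ~29 MJ per kg of hard coal
    print(f"{atoms_per_kg:.2e} atoms per kg")
    print(f"{energy_per_kg:.1e} J per kg of U-235")
    print(f"~{energy_per_kg / COAL_J_PER_KG:.1e} times hard coal, per kg")
    ```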

    The Chain Reaction in Fission

    Nuclear fission.

    If the neutrons released during nuclear fission strike other fissile nuclei at the "right" speed, they can cause further fissions. The result is a reaction that sustains itself, called a chain reaction. If the chain reaction is not moderated, it is called an uncontrolled chain reaction; uncontrolled chain reactions occur in atomic bombs.

    However, certain materials can be used to limit the number of neutrons and so moderate the chain reaction. Such a moderated chain reaction is called a controlled chain reaction; controlled chain reactions take place in the reactors of nuclear power plants. The enormous amounts of energy released during nuclear fission make nuclear power a genuine energy source.
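
    The difference between the two regimes comes down to the average number of released neutrons that go on to cause another fission, often written k. A minimal sketch with illustrative numbers (not reactor data):

    ```python
    # Toy model: each generation causes k times as many fissions as the last.
    # k > 1 grows exponentially (bomb); k = 1 stays steady (controlled reactor).
    def fissions_after(k: float, generations: int, start: float = 1.0) -> float:
        n = start
        for _ in range(generations):
            n *= k
        return n

    print(fissions_after(2.5, 10))  # ~9537: runaway growth in 10 generations
    print(fissions_after(1.0, 10))  # 1.0: a steady, controlled chain
    ```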

    The Energy Efficiency of Nuclear Fission

    The process of nuclear fission is very efficient. When 1 kilogram (2.2 pounds) of U-235 is fissioned, only about one gram of mass is lost (one part in a thousand), and that mass is converted into heat energy. Applying Einstein's relation E = mc², this gives a value of about 25 million kilowatt-hours, corresponding to the combustion energy of about 2,500,000 kilograms (5,500,000 pounds) of hard coal with an energy content of 7,000 kilocalories per kilogram (3,200 kilocalories per pound). Per unit mass of fuel, the energy yield of nuclear fission is thus about 2.5 million times that of burning hard coal.
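
    The 25-million-kilowatt-hour figure can be verified directly from E = mc² (a quick sketch using round values):

    ```python
    # Energy from the ~1 gram of mass converted when 1 kg of U-235 fissions.
    c = 3.0e8                       # speed of light, m/s
    m = 1.0e-3                      # kg, about one gram

    energy_j = m * c ** 2           # ~9e13 J
    energy_kwh = energy_j / 3.6e6   # 1 kWh = 3.6e6 J
    print(f"{energy_kwh:.2e} kWh")  # ~2.5e7, i.e. about 25 million kWh
    ```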

    The reason for these enormous efficiency differences lies in the use of two natural forces with very different strengths. In combustion, the underlying chemical processes take place in the electron shells of atoms and rely on the electromagnetic interaction. In nuclear energy, the far stronger strong interaction, which binds the nucleons together, is at work; the nuclei of the atoms therefore play the decisive role in the energy released.

    The decisive factor here is the magnitude of the binding energy per nucleon in the nucleus. It is not constant across the elements: it rises from the lightest element, hydrogen, at first very steeply and then more slowly, up to elements of medium mass around iron, and then drops slightly toward the heavy elements. When heavy nuclei split into two medium-heavy ones, the difference in binding energies is released as heat through the motion of the fission products.

    The difference in the strength of the interactions is also expressed in another figure: the decomposition of a heavy atomic nucleus into two medium-heavy nuclei yields an amount of energy about 400,000 times greater than chemical reactions between whole atoms. These enormous differences explain why nuclear energy is highly attractive in terms of energy economics; on the other hand, the enormous energy density that must be controlled demands a particularly high degree of responsibility and care concerning the safety of nuclear power plants.

    The binding energy curve.

    The binding energy curve above shows the binding energy per nucleon as a function of nuclear mass: for hydrogen it is just over one megaelectronvolt. The curve then rises steeply as the size of the atomic nucleus increases: helium sits at a local peak of about 7 megaelectronvolts, lithium at about 5.5. The rise continues through oxygen, at about 8 megaelectronvolts, to a maximum of just under 9 near iron. From there the curve declines slowly, reaching about 7.5 megaelectronvolts at uranium.
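
    Reading roughly 7.6 MeV per nucleon near uranium and about 8.5 MeV for medium-mass fragments off the curve, the energy released per fission can be estimated directly (approximate curve values, assumed for illustration):

    ```python
    # Fission energy from the binding-energy difference per nucleon.
    A_URANIUM = 236     # nucleons in the fissioning U-236 nucleus
    B_HEAVY = 7.6       # MeV per nucleon near uranium (read off the curve)
    B_MEDIUM = 8.5      # MeV per nucleon for medium-mass fragments

    energy_mev = A_URANIUM * (B_MEDIUM - B_HEAVY)
    print(f"~{energy_mev:.0f} MeV per fission")  # ~212 MeV, near the usual ~200
    ```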

    The Discovery of Nuclear Fission

    In the 1930s, many physicists and chemists wanted to learn more about radioactive radiation.

    The Italian physicist Enrico Fermi bombarded numerous elements with neutrons and found that almost all substances on Earth could be transformed in this way. He called the newly formed substances transuranium elements, since he at first assumed that they all lay beyond uranium in the periodic table, that is, that they had atomic numbers greater than 92.

    Later, Ernest Rutherford in England and the Joliot-Curies in France also experimented with nuclear transformations. Marie Curie's daughter Irène Joliot-Curie and her husband Frédéric Joliot-Curie discovered artificial radioactivity in 1934.

    A few years later in Germany, the chemist Otto Hahn, the physicist Lise Meitner, and Fritz Strassmann worked together on transuranium elements at the Institute of Chemistry in Berlin-Dahlem. Lise Meitner had to emigrate from Germany in 1938. Hahn and Strassmann irradiated uranium with neutrons and studied the resulting nuclides, which were present only in minute quantities. In December 1938, they made a discovery that seemed improbable even to them.

    Reluctant as "nuclear chemists" to publish their "strange results", they concluded that their supposed radium isotopes had the properties of barium: the new substances were not radium but barium, which contradicted all previous knowledge in nuclear physics.

    The new fission products were unambiguously identified a short time later. Bombardment of uranium with neutrons produced krypton and barium, and each fission also released two to three neutrons and energy. Soon further fission products of uranium were detected. Thus nuclear fission was discovered, for which Otto Hahn (1879–1968) received the Nobel Prize in Chemistry for 1944, awarded in 1945 after the end of World War II.

    The possibility of generating energy from nuclear fission was already debated in 1939 and was known to the leading nuclear physicists of the time, as was the great technical effort it would require. With the beginning of World War II, the question of whether nuclear energy could also be used for military purposes increasingly came to the fore. In 1942, intensive work on nuclear weapons began in the USA. The first nuclear explosion, an experimental bomb, took place on July 16, 1945, in the desert of New Mexico. On August 6 and 9, 1945, the first US atomic bombs exploded over the Japanese cities of Hiroshima and Nagasaki, killing hundreds of thousands of people.

    Relation Between Fission and Structure of Atomic Nuclei

    The atomic nuclei of the chemical elements are composed of two kinds of particles: electrically positive protons and electrically neutral neutrons, together called nucleons. They have almost the same mass and are held together by the so-called nuclear force, which physicists also call the strong force. It is by far the strongest of the four fundamental forces of nature: the gravitational, electromagnetic, weak, and strong forces. Unlike the electromagnetic force, the nuclear force is purely attractive, but it acts only between nearest neighbors.

    The positively charged protons in the nucleus repel each other because of their like electric charge. As long as this electrical repulsion, called the Coulomb force, is compensated by the much stronger nuclear forces, the nucleus remains stable and does not become radioactive. The nuclear forces of the electrically neutral neutrons also help in this.

    The number of protons determines the chemical element. The higher the number of protons and the heavier the element becomes, the more neutrons are needed, up to a significant surplus over the protons, to compensate for the repulsive Coulomb forces. These forces can become very large because of the very small distances between the protons, since the Coulomb force is inversely proportional to the square of the distance. At high proton numbers, as in uranium with its 92 protons, the nucleus sits at the edge of the balance between Coulomb repulsion and binding nuclear forces, and the nuclei begin to become unstable and thus radioactive.

    But nuclei of elements with smaller numbers of protons can also become unstable. In general, what is decisive is the interplay between the Coulomb repulsion of the protons and the attractive strong nuclear forces between all particles. Because of its long range, the Coulomb force lets all the protons interact with each other, so the repulsion grows quadratically with the number of protons. The attractive strong force, because of its shorter range, acts only between nearest neighbors and grows only linearly with the number of nucleons. Depending on the ratio of protons to nucleons, the repulsion can prevail and the nucleus becomes unstable.
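
    The quadratic-versus-linear scaling can be made concrete by counting interacting pairs; a simple illustration, not a nuclear model:

    ```python
    # Every proton pair repels (grows ~Z^2); the short-range strong force
    # binds each nucleon only to its neighbors (grows ~A).
    def repelling_pairs(z: int) -> int:
        """Number of mutually repelling proton pairs in a nucleus with Z protons."""
        return z * (z - 1) // 2

    for z, a in [(8, 16), (26, 56), (92, 238)]:  # oxygen, iron, uranium
        print(f"Z={z:3d}: {repelling_pairs(z):5d} proton pairs vs ~{a} nucleons")
    ```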

    What Is the Significance of Uranium-235 for Fission?

    The process of neutron-induced nuclear fission of uranium-235. (Image: MikeRun, Wikimedia Commons, CC BY-SA 4.0)

    Atomic nuclei of the same element with different numbers of neutrons are called isotopes. For the use of nuclear energy in power generation, the uranium isotope U-235 is the important one. It contains 235 nucleons, of which 92 are protons and 143 are neutrons. In nature, it occurs in a proportion of only 0.7 percent. The most abundant isotope, at over 99 percent, is U-238 with 146 neutrons; a tiny fraction of 0.005 percent is the isotope U-234 with 142 neutrons.

    What makes U-235 special is that it splits into two lighter atomic nuclei (fission products) as soon as it absorbs another neutron. First an intermediate uranium nucleus is formed, the isotope U-236, in a highly excited state. The excitation energy corresponds to the binding energy released by the neutron capture (the kinetic energy of the neutron can be neglected) and is relatively high because of the magnitude of the strong interaction.

    U-236 is unstable and sheds its excitation energy within a tiny fraction of a second, mainly by fissioning into two medium-heavy nuclei. These fission products are positively charged, so they repel each other through the Coulomb force and are accelerated to full speed almost instantly. Their kinetic energy, which is converted into heat, accounts for the bulk of the energy released during nuclear fission, while roughly another ten percent is carried by the radioactivity of the newly formed medium-weight nuclei.

    The Cause of Radioactivity in Fission

    The nuclei of the fission products are not always the same. Statistically, they have different charge numbers and thus belong to different chemical elements. When the abundance of the elements formed during fission is plotted against their charge numbers, a saddle-shaped curve with two maxima is obtained. The first maximum contains elements such as strontium, krypton, and yttrium; the second, xenon, cesium, and barium. Most of these fission products are radioactive due to an excess of neutrons and only turn into stable end products after long decay series. In total, about 200 different fission products are known today.

    In addition to the two fission nuclei, two to three neutrons are also produced, which can fission further U-235 nuclei and release more energy and neutrons. This is called a chain reaction, and it is crucial for maintaining the fission process and using nuclear energy.

    The probability that a neutron attaches itself to U-235 depends on its speed: the lower the speed, the higher the probability. Since the neutrons released during fission are too fast to attach, they must be slowed down to so-called thermal speeds by collisions with the atomic nuclei of a moderator. For the collisions to be effective, the moderator's nuclei should have a mass as close as possible to that of the neutron. Water is a suitable moderator, because the nuclei of the two hydrogen atoms in each H2O molecule consist of single protons with practically the same mass as the neutrons to be decelerated.
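
    Elastic-collision mechanics explains why hydrogen works so well: in a head-on collision with a nucleus of mass A (in neutron masses), a neutron can lose at most the fraction 4A/(1 + A)² of its kinetic energy. A quick sketch of this standard result:

    ```python
    # Maximum energy a neutron can lose in one head-on elastic collision
    # with a nucleus of relative mass A: 4A / (1 + A)^2.
    def max_energy_loss(a: float) -> float:
        return 4 * a / (1 + a) ** 2

    for name, a in [("hydrogen (proton)", 1.0), ("carbon-12", 12.0), ("uranium-238", 238.0)]:
        print(f"{name}: {max_energy_loss(a):.1%}")
    # hydrogen: 100.0%, carbon-12: 28.4%, uranium-238: 1.7%
    ```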

    Structure of the Nuclear Reactors

    In a nuclear reactor, uranium and moderator are arranged in a way that allows a continuous series of fission reactions. When this is maintained with control devices, the fission reactions deliver nuclear energy as heat. The ratio of the two uranium isotopes 235 and 238 is a critical variable. In the light-water reactors operated predominantly around the world today, the 0.7 percent of uranium-235 in natural uranium is not sufficient to maintain a continuous series of fissions; the uranium-235 must be enriched to about three percent in special enrichment facilities before it is used in such reactors.

    The released neutrons do not only fission uranium-235; they can also be captured by the predominant uranium-238. This creates the so-called transuranium elements, such as the isotope plutonium-239, which forms by radioactive decay after neutron capture and is itself a fissile material that can release more energy. Depending on the type of nuclear reactor, plutonium can contribute about 30 percent of the energy generated by the fission process. The transuranium elements also include chemical elements such as neptunium, americium, and curium.

    The transuranium elements include isotopes whose radioactivity is very long-lived, which is why the total radioactivity in spent nuclear fuel assemblies largely decays only after several hundred thousand years. Overall, after about three years of operation (in pressurized water reactors), the spent fuel elements of light-water reactors contain close to one percent uranium-235, half a percent uranium-236, 95 percent uranium-238, and close to one percent plutonium isotopes. The remainder is made up of fission products (about three percent) and other transuranium elements (less than 0.05 percent), the so-called actinides. In the end, the percentage of U-235 is almost back to that of natural uranium.

    The closed fuel cycle concept aims to chemically separate the uranium and plutonium by reprocessing the spent fuel elements and to recover them for energy production, as well as to safely store the radioactive residue for a very long time. This would allow about 97 percent of the spent fuel to be reused. More advanced reactor concepts add the separation of the long-lived radioactive materials and their conversion, through interaction with fast neutrons, into much shorter-lived chemical elements before final storage, a process called transmutation.

    In an open fuel cycle, the spent fuel elements and other highly radioactive waste from nuclear power plants are stored directly. Countries with nuclear power plants generate hundreds of tons of spent fuel each year, enough to fill several Olympic pools. Besides uranium, the chemical element thorium is the second most important element for the production of nuclear energy.

