The transportation industry was radically altered by the invention of the first automobile. The history of the automobile spans several transformations, from early attempts at human-powered transportation to fully autonomous and networked electric cars. A key development of the industrial revolution, this breakthrough has had far-reaching social and economic effects.
Never before have so many people, on both an individual and a societal level, come to embrace and even idolize a technological object. Manufacturers nowadays are developing smaller vehicles with cutting-edge safety features and engines (electric, solar, hydrogen, etc.).
The First Car in History
Joseph Cugnot’s “fardier à vapeur” was the first automobile in history.
Carriages were the means of transportation before the invention of the first automobile. Carriages were either sleds or carts with wheels, drawn by a person or a pack animal. As early as the beginning of the 18th century, scientists and engineers attempted to fit a steam engine inside a vehicle, but it wasn’t until the late 18th century that they saw any real success. In 1770, Frenchman Joseph Cugnot created the first car in history.
The military engineer Cugnot decided to mechanize the artillery cart (“fardier”) by replacing the horse with a third wheel powered by a steam engine. The aim was to use the cart to transport heavy parts like cannon barrels. These trials were not definitive, but they did serve to highlight the potential of high-pressure steam as driving power.
In 1769, Joseph Cugnot first created a small-scale version of his project which he called “fardier à vapeur.” In 1770, he constructed a full-scale version of the fardier à vapeur, with the intended capabilities of carrying four tons and traveling 4.8 miles (7.8 km) in one hour. The car weighed 2.5 tons and was controlled by a pair of handles.
The fardier had to stop every 15 minutes to refill the machine with wood. The steam engine was too cumbersome and inefficient to be really useful, so Joseph Cugnot’s efforts ultimately failed, along with those of his French, American, and British successors.
Engineers focused on bettering the train instead, setting aside the automobile until the development of the internal combustion engine, which was lighter, more efficient, and less bulky.
First Car with an Internal Combustion Engine
1885, the Benz Patent-Motorwagen, the first car with an internal combustion engine. (Photo by Nick.Pr, CC BY)
The first car in history powered by an internal combustion engine, a three-wheeled vehicle, was built in 1885 and is credited to German inventor Carl Benz. The car was called the Benz Patent-Motorwagen (“patent motorcar”), and it was also the first production car in history. It was shown to the public in 1886, after Benz received a patent for it.
In mid-1888, the Patent-Motorwagen became the world’s first commercially available car. It sold for the equivalent of about $4,750 in today’s money. This first Motorwagen used the Benz 954 cc (58.2 cu in) single-cylinder four-stroke engine and produced 0.75 hp (0.55 kW).
At its debut in Mannheim, Germany, Carl Benz drove the car at a peak speed of 10 mph (16 km/h). The car’s internal combustion engine helped establish Carl Benz as the father of the modern car. He built 25 units of the Patent-Motorwagen in total.
First Auto Show
A 1910 poster of the Paris Motor Show. (Image, CC0)
In 1898, Paris hosted the world’s first motor show, the “Paris Motor Show,” which later moved to the Grand Palais. Even after 125 years, it remains a major event in the automotive industry. The retrospective exhibition at the tenth edition of the show in 1907 was even more magnificent and comprehensive, with artifacts ranging from the Cugnot cart, a true “prehistoric” relic from 1770, to a De Dion-Bouton tricycle from 1885, a German Daimler draisine from 1887, and a De Dion-Bouton steam omnibus from 1896.
Brasier and Renault automobiles were also on exhibit, along with electric vehicles, but sadly absent was Etienne Lenoir’s gas-powered vehicle, which was said to have been driven in Paris in the early 1860s and to have inspired Jules Verne.
First Car to Break 100 km/h
La Jamais Contente reached 105.9 km/h (65.8 mph) in 1899. (Image, CC0)
While the question of motorization remained unsettled, the overall conception of the vehicle established by this first success changed little over the following decades in terms of transmission, steering, or bodywork. And although the electric automobile would later prove to be a near dead end, it was an electric car that, at the century’s close, won the competition between rival forms of propulsion and first passed 62 miles per hour (100 km/h).
On April 29, 1899, Belgian engineer Camille Jenatzy finally crossed this psychological boundary of a “speed in three digits” in his La Jamais Contente (“The Never Contented”). It was the first car that broke the 100 km/h barrier (62 mph) and reached 105.9 km/h (65.8 mph).
The First Affordable Car
Ford Model T runabout, probably 1913–1914. (Photo, Pierre Poschadel, CC BY SA)
Henry Ford’s goal was to mass-produce an automobile that was easy to build, reliable, and affordable. He was successful in his gamble by introducing assembly line production, which had a profound impact on the automobile industry across the world.
When Henry Ford unveiled his vision for the Model T in 1908, Ford Motor Company was already five years old. He aimed for a simple design that would allow the vehicle to be assembled quickly and a solid build quality that would demonstrate its reliability; both contributed to the Model T’s minimal maintenance costs.
After-sale service was first introduced by Ford with the launch of the Model T. By 1908, Ford had established a network of carefully selected agents, at a time when other automakers still treated regular maintenance as an afterthought. This raised awareness of the brand.
Ford’s Highland Park plant, located outside of Detroit, produced 3,900 vehicles daily in 1914. Furthermore, between the years of 1908 and 1927, 16.5 million Model Ts were sold. In the early 1920s, the Model T was the world’s most popular car, with production facilities in the United Kingdom, Germany, France, Denmark, South Africa, and Japan.
The Model T lacked modern conveniences such as a wiper (since the windshield was optional), a radio, and even a door on the driver’s side. The accelerator was a lever on the steering column. In 1972, the Volkswagen Beetle surpassed the Ford Model T to become the best-selling automobile of all time.
The Automobile Becomes an Industry
Adolf Hitler ordered the production of the Volkswagen to ease the transportation burden on the general population. The ultimate design of the automobile that would become the best-selling vehicle of all time may be seen in this 1935 V-series prototype. (VW)
Already in 1900, more than 9,500 automobiles were manufactured in the United States, France, and Germany. Henry Ford came up with the Model T eight years later and had millions of them mass-produced using his assembly lines. He is the one responsible for the birth of fast, personal transportation for everyone. At the turn of the 20th century, Ford standardized manufacturing throughout the automotive industry and beyond.
Several million Volkswagen “Beetles,” Renault 4CVs, and Citroen 2CVs became mass-market successes after World War II, making vehicle ownership more accessible to the average European. Because of the dramatic growth in the number of cars on the road and their average speed, legislation mandating driving licenses and establishing highway codes became necessary.
The automobile caused major shifts in society, especially in how people saw and used their personal space. It encouraged the growth of trade and communication between nations, as well as the construction of several new facilities (roads and highways, parking lots). Everyone still recognizes the names of the great pioneers of the car industry, from Ferdinand Porsche to André Citroen and Louis Renault.
Contemporary Challenges of the Car
The car industry faced several difficult changes in the 1970s. Rising gas prices and concerns about supply were a direct result of the 1973 and 1979 oil shocks. Automobile companies started making more compact models. They improved the engines so that vehicles could go longer on the same quantity of fuel. They made the bodies more aerodynamic by altering their shapes.
As the number of drivers increased and automobiles became faster, public concern about road safety grew with them: both the frequency and severity of accidents increased.
Another major issue is exacerbated by the growing number of automobiles on the road: air pollution. Although catalytic converters are effective at reducing a variety of harmful emissions, carbon dioxide (CO2), a major contributor to the greenhouse effect and, by extension, global warming, is one of the few gases they cannot remove. Manufacturers like Tesla are shifting their attention to electric vehicle research and development in order to combat this environmental threat.
Industrial robots are advancing at a rapid rate on a global scale. More than 3.5 million of them were in operation in 2021, and over 500,000 were newly installed in that year alone. China, whose robot density per employee has for the first time surpassed that of the USA, is seeing a particularly strong increase in automation. Within Europe, Germany has the most industrial robots in operation, ahead of Italy and France.
It has long been difficult to picture many industries, whether automotive, metal processing, or chemicals, without robots. The majority of tasks that assembly-line employees traditionally completed are now carried out by adaptive, machine-based assistants. Along with traditional robot arms, autonomous transport robots, and computerized manufacturing lines, these also include mobile 3D printers and robotic recycling assistants.
Country-specific robot density per 10,000 workers. (Credit: World Robotics Report 2022, IFR)
Record expansion despite the pandemic
According to Marina Bill, president of the International Federation of Robotics (IFR), robot density is a critical sign of how automation is developing in industrial sectors globally. Her team assessed how the overall number of industrial robots and their density, computed per 10,000 workers, changed in 2021 compared to earlier years for the World Robotics 2022 Report.
The result: the surge in robots and automation has not halted, despite the coronavirus pandemic. A record 517,385 new industrial robots were deployed globally in 2021, a 31 percent increase over the year before. Annual robot deployments have more than quadrupled during the last six years. Globally, robot density has reached a new record average of 141 robots per 10,000 employees.
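The headline metric is straightforward to reproduce. The short Python sketch below shows how robot density is computed from the installed robot stock and the number of manufacturing employees; the employee figure is a placeholder back-calculated from the numbers quoted above, not an IFR statistic.

```python
# Robot density as used in the World Robotics report: installed robots per
# 10,000 manufacturing employees. The employee count below is a placeholder
# chosen to reproduce the quoted global average, not an official figure.
def robot_density(robots_in_operation: int, employees: int) -> float:
    return robots_in_operation / employees * 10_000

print(robot_density(3_500_000, 248_000_000))  # ~141 robots per 10,000 employees
```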
China in the top five
One in every two new industrial robots installed worldwide in 2021 went to China, where the number of robots is growing especially quickly. Nearly 270,000 new robots were installed in the country, bringing the total in China’s industries to more than a million industrial robots in use. The rapid expansion reflects the high level of investment in China, and there is still a lot of room for automation.
China has now officially become one of the top five most automated nations in the world, surpassing even the USA in terms of robot density. In 2021, there were 322 operating robots for every 10,000 workers in China’s industrial sector. China now ranks fifth in terms of the density of robots, after South Korea, Singapore, Japan, and Germany. With 274 robots per 10,000 workers, the United States comes in ninth place.
Despite this, South Korea continues to have the most automated sector, with 1,000 industrial robots for every 10,000 workers. The electronics industry and a robust automobile sector are the main drivers of this.
Germany has the highest robot density in Europe, and despite the pandemic another 23,000 robots were installed in its companies in 2021. This is the second-highest figure after the record year 2018, when the automobile sector made significant investments, and it represents a 6% rise in new installations. In Germany’s industries, there are now more than 345,000 industrial robots in use.
This gives Germany the fourth-highest robot density globally, and no other nation in Europe has as many industrial robots in operation. The industrial sites between Flensburg and Munich are home to around 33% of the industrial robots in Europe. Around 38% of these robots are employed in the automotive sector, followed by the metalworking sector and the plastics and chemical products sector.
Instead of using satellites, a new form of positioning system based on mobile communications might one day replace GPS, allowing for more accurate and dependable navigation in urban and indoor environments. The new TNPS system can achieve positioning precision of up to 4 inches (10 cm), even among high-rise buildings, by combining local mobile communications with the fiber-optic network. According to a paper in Nature, this is made feasible by using time-synchronized gigahertz transmissions.
We’ve become used to using GPS systems on a regular basis, whether on our phones or in our cars. Positioning works by comparing how long signals take to travel from the different GPS satellites to the receiver. Modern satellite navigation systems, however, have a number of drawbacks: their accuracy is limited to around 80 inches (2 m), and they typically don’t operate at all in densely built-up areas, valleys, or indoors, because the satellite signals are scattered or blocked.
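For readers who want to see the time-of-flight idea in action, here is a minimal Python sketch (not the TNPS method itself) that recovers a 2-D position from signal travel times to three transmitters at known locations. The transmitter coordinates and travel times are invented for illustration, and the receiver clock bias that real GPS must also solve for is omitted.

```python
# Minimal time-of-flight positioning sketch: solve for the receiver position
# whose modelled travel times best match the "measured" ones.
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # speed of light, m/s

# Hypothetical transmitter positions (m) and a hypothetical true receiver position
transmitters = np.array([[0.0, 0.0], [1_000.0, 0.0], [0.0, 1_000.0]])
true_position = np.array([420.0, 310.0])
measured_times = np.linalg.norm(transmitters - true_position, axis=1) / C

def residuals(p):
    # difference between modelled and measured travel times for candidate position p
    return np.linalg.norm(transmitters - p, axis=1) / C - measured_times

estimate = least_squares(residuals, x0=np.array([500.0, 500.0])).x
print(estimate)  # ~[420., 310.]
```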
As opposed to relying on satellites, a positioning system that uses mobile radio and optical fiber would not be affected by disruptions in the signal.
Therefore, scientists have been toiling away at locating systems that don’t rely on satellites and can be used everywhere, even indoors and in subterranean parking garages. Some use specialized radio sticks, while others depend on mobile communications or a wireless local area network (WLAN).
Combining wireless and optical networks
There is now a way to navigate without using satellites. The Terrestrial Networked Positioning System (TNPS), developed by a team led by Jeroen Koelemeij of the Free (Vrije) University of Amsterdam, combines the pervasiveness of cellular signals with a travel-time-based positioning approach similar to that of GPS, but without satellites. Until now, mobile radio transmissions could not reach this level of precision because the base stations, unlike GPS satellites, lack atomic clocks to synchronize their signals.
The approach is to use the existing fiber-optic network in many cities and areas to reinforce the mobile signals. Using this, the mobile communications antennas may be kept in perfect temporal sync with the new positioning system. This is made possible with the help of a supplementary device that syncs its internal clock with a centralized reference clock through the fiber optic connections. Delays in the fiber optic network are continuously detected using specialized software and measurement signals, and then accounted for in the synchronization.
The end result is very precise, nanosecond-level synchronization of the mobile radio signals. Koelemeij and his group report a timing uncertainty of around 0.2 nanoseconds at an on-campus test facility. Such accuracy is essential for precise positioning, since a timing error of only one nanosecond translates into a position error of about 12 inches (30 cm).
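That conversion between timing error and position error is easy to check: the short sketch below simply multiplies a timing error by the speed of light to reproduce the figures quoted above.

```python
# Distance light travels during a given timing error: a 1 ns error corresponds
# to roughly 30 cm, and the reported 0.2 ns uncertainty to about 6 cm.
C = 299_792_458.0  # speed of light, m/s

for error_ns in (1.0, 0.2):
    error_cm = C * error_ns * 1e-9 * 100
    print(f"{error_ns} ns -> {error_cm:.0f} cm")
```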
The mobile radio signals themselves are the second part of the new location system. In contrast to the megahertz bands used by GPS satellites, 4G and 5G networks transmit and receive their signals in the gigahertz range. TNPS therefore uses a cellular carrier wave at 3.6 gigahertz and a method known as OFDM multiplexing to spread the actual signal across several frequency bands within a 160-megahertz range, in effect sending a sequence of narrowband signals over a wide virtual bandwidth.
Receiver stations can identify and account for interference from buildings and other obstacles because of the increased bandwidth compared to GPS signals. The researchers can separate the various signal reflections, allowing for more accurate positioning. In a practical test, the team was able to pinpoint a vehicle’s location in an urban setting to within 4 inches (10 cm).
Urban areas should have a backup to GPS
Researchers claim this technique may greatly enhance positioning, particularly in highly crowded metropolitan regions, while building on top of preexisting infrastructure. According to Koelemeij, they have implemented a system using TNPS that is as accessible as cellular and WiFi networks but provides precise location and time synchronization in the same way as GPS does. Thus, the system might function as a backup and supplement to existing satellite navigation systems.
According to the study, the TNPS technology may be easily incorporated into existing cellular infrastructure. This is due to the fact that the transmissions are sent in underutilized frequency bands of the mobile radio spectrum. In addition, the time synchronization protocols utilized are widely adopted, easily accessible, and work with the wavelengths of conventional fiber-optic transmissions. According to Koelemeij, such signals are already a part of trans-regional optical networks capable of transmitting massive volumes of data in tandem.
The findings hint at a future in which telecommunications networks would provide not just data transfers, but also precise and reliable time and location determination that is not reliant on the Global Positioning System (GPS).
Incredible simplicity: one experiment shows how the range of electric automobiles can be increased simply by altering the rear axle. A novel rear axle design for compact electric vehicles frees up room for larger battery packs; thanks to its rearward positioning, the range of electric vehicles can grow by around 71 miles (115 km). The first discussions with automakers have already begun.
So far, however, electric vehicles have not been widely adopted. A major factor is that smaller, lower-priced electric models have a shorter range, alongside the shortage of charging stations and long charging times. Scientists are developing smaller batteries and in-car charging technologies, but these are still at the research and development stage and hence prohibitively expensive.
Better battery storage
Researchers led by Xiangfan Fang are showing, however, that this need not be so complicated. Working with Ford, VW, and other project partners, they have been looking for simple ways to provide extra room for the vehicle’s battery. In compact electric vehicles, the space available for battery packs is a major factor in determining performance and driving range, and since that space is constrained by the rear axle, that is where the team directed its attention.
The basic idea of the study was to turn the rear axle around, shifting its cross member back toward the trunk. This creates more room beneath the vehicle, so a larger battery pack can be installed. Thanks to the revised rear axle design, the vehicles’ range increases by 35 percent, or around 71 miles (115 km).
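As a rough plausibility check of those two figures, the sketch below back-calculates the implied base range, assuming the 35 percent gain is measured against the original range (the article does not state the baseline).

```python
# Back-of-the-envelope reading of the quoted figures: if a 115 km gain equals a
# 35 percent improvement over the original range, the base range follows directly.
gain_km = 115          # reported range increase
gain_fraction = 0.35   # reported relative increase

base_range_km = gain_km / gain_fraction
print(f"implied original range: {base_range_km:.0f} km")       # ~329 km
print(f"implied new range: {base_range_km + gain_km:.0f} km")  # ~444 km
```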
Vehicle performance criteria
It was necessary for the vehicle engineers in the study to make further changes to the rear axle to keep the car’s driving characteristics unchanged. The engineers initially created the new axle digitally and incorporated it virtually into the body to exactly calculate and replicate its characteristics. The first steel axle prototype was made using this approach.
The new electric vehicle rear axle has many joints that guarantee, among other things, that the car performs properly during braking and does not rise up at the rear. The next stage was to fit the prototype axle to a test vehicle, a Ford Fiesta. Heavy metal plates were installed beneath the floor of the gas-powered vehicle to represent the weight of the battery.
Discussions with automakers are now ongoing
The vehicle was then fitted with sophisticated measuring equipment and put through its paces on a test bench and a track in Belgium by industry professionals. The result: the altered rear axle hardly compromised the vehicle’s safety or comfort. The test vehicle was somewhat less dynamic than vehicles with standard rear axles in several respects, but the gap is so narrow that fine-tuning should be able to iron it out.
The team is now working on refining the new axle concept. Meanwhile, discussions are under way with several manufacturers to make the rear axle a standard feature of compact electric vehicles. The researchers would be delighted to see electric cars driving around on their axle within a few years.
Finding adequate conductors wasn’t an issue in the early days of electrical engineering (the early 19th century); the problem was the insulation around the wires. The German Samuel Sömmering, for instance, described his tinkering with sealing wax, varnish, and even rubber, and both Sömmering and the Russian Pavel Schilling tested a rubber-coated wire cable back in 1811. Some experimenters tried silk threads as insulation. Later, a firm rubber-like material called gutta-percha, obtained from a plant in Malaysia, was used instead.
As long as the voltage is not too high, insulating conductors in modern electrical systems is a breeze thanks to the broad variety of plastics at our disposal. Here, you’ll learn about the many cable types available. Some may be familiar to you from around the house, while others take really unique and unusual shapes.
The fundamental aspects of all cables:
To carry greater currents, the cable cross-section must be enlarged (the sketch after this list illustrates why).
The higher the voltage, the more elaborate the insulation must be.
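The cross-section rule follows from simple resistive heating. The following Python sketch, using the textbook formulas R = ρL/A and P = I²R with example values that are not from the article, shows how the power dissipated per metre of copper conductor grows with current and shrinks with cross-section.

```python
# Resistive loss per metre of copper conductor for a few example currents and
# cross-sections; larger currents call for larger cross-sections to keep the
# cable from overheating.
RHO_COPPER = 1.68e-8  # resistivity of copper, ohm*metre

def loss_per_metre(current_a: float, cross_section_mm2: float) -> float:
    area_m2 = cross_section_mm2 * 1e-6
    resistance_per_metre = RHO_COPPER / area_m2   # ohms per metre
    return current_a ** 2 * resistance_per_metre  # watts per metre

for amps in (10, 16, 32):
    for mm2 in (1.5, 2.5, 6.0):
        print(f"{amps:>2} A, {mm2:>3} mm^2 -> {loss_per_metre(amps, mm2):5.2f} W/m")
```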
Cable types
Flexible cables.
Power cords are the flexible cables that go from an electrical outlet to a household appliance. For this kind of connection, three cores of stranded copper wire (thin twisted copper wires) are encased in their own insulation tube, making for an extremely flexible and durable cable.
There are also relatively rigid cables made of thicker copper wire. These lines are often concealed inside an insulating tube that is either set into the wall (flush-mounted) or installed on its surface (surface-mounted).
In another variant, several such lines are housed together in a single plastic sheath; this configuration, too, is frequently flush-mounted.
Telephone lines carry very little current, so telephone wires can be quite thin. Typically, many separate lines are bundled together into one larger cable.
A cable is made up of several individual conductors, whereas a wire only has one. Wires are often exposed and coiled.
These cables are used to transport high voltage currents. Because of this, they need to have big diameters and good insulation. Most of the time, air is employed as an insulator, and thus the wires dangle freely between ceramic insulators mounted on the poles (overhead lines). These high-voltage cables also run below ground, but only in densely populated regions or places where the massive pylons would cause significant visual disruption. However, this approach comes at a high cost because of the complicated insulation required.
Copper cables would be too heavy for overhead lines. Instead, thick aluminum wires are used, strung around a steel core that gives the lines their stability.
In 1857, C. W. Field commissioned the construction of a telegraph cable between Europe and North America that was around 2,500 miles (4,000 km) long and weighed 2,500 tons. At the time, getting a message across the Atlantic still took more than a week, since it had to travel by ship and wireless telegraphy had not yet been invented. The telegraph cable was intended as the solution, but its first implementation was a disaster. The engineers persisted, creating new and better cables as well as vessels designed specifically for laying them.
Despite the rise of satellite communications, numerous undersea cables are being actively maintained today for many reasons, including:
Extremely large volumes of information can be sent over undersea cables.
They’re built to last.
Transmission times using deep-sea cables are far lower than those via satellite (about a factor of 3).
Undersea copper connections are being phased out in favor of submarine fiber-optic cables, because far more data can be transmitted with light than with electrical signals. Several tens of glass fibers, and sometimes even a hundred, are wrapped around a narrow copper tube. For further durability, the cables are reinforced with steel wires.
One of the most outstanding achievements of the human brain is its ability to learn. Neuromorphic computing attempts to transfer this fundamental brain function to machines. In this way, these computers learn in a similar way to us. But how does neuromorphic computing work? And what are the advantages of such AI systems modeled on the brain?
From the medieval Golem to the Terminator, from Metropolis to “Wall-E,” understanding the human brain, harnessing its properties, and endowing inanimate matter with human capabilities is a frequently addressed, usually threatening, often useful, but always fascinating topic. But how realistic are these ideas in today’s world? And what possibilities do we have at all to explore and imitate such a complex system as the brain?
Researchers are trying to do just that: they are working on constructing computers modeled on the brain. These neuromorphic systems also learn similarly to us at the level of their function and structure. Initial successes have already been achieved.
The brain as a model
Only at first glance does the brain appear to be an easy research subject: it is a manageable size, and as long as scientists are content with mouse brains, there is no shortage of material to study. But the reality for brain researchers looks different.
With more than 100 billion nerve cells, the brain is the most complex organ in the human body; only in a living organism can it be observed at work. One of its most essential characteristics quickly becomes clear: the whole is more than the sum of its parts. The brain is not just a collection of tens of billions of nerve cells; it is the complex interconnections between the nerve cells that allow us to think and learn.
20,000 genes for the brain
At the beginning of development, the brain emerges from a single cell. So there must be rules that govern how the nerve cells interact during embryonic development. According to current knowledge, these rules can only be stored in our genetic material. It is assumed that about one-third of the approximately 20,000 genes in the human genome are required for brain development; this alone indicates the extraordinary complexity of the organ.
So how can we approach this extraordinary biological complexity with the goal of having machines mimic thought processes, i.e., create “artificial intelligence” (AI)?
Artificial intelligence is already producing some impressive results, such as the victories of machines over the best human chess players. However, not only traditional board games such as chess or Go, but also modern computer-based real-time strategy games can be successfully contested by machines. Researchers demonstrated this using the online game “Starcraft,” among others.
AI systems have also already mastered poker, including bluffs. AI systems can steer vehicles through heavy traffic with virtually no accidents or identify people in the live video data of thousands of surveillance cameras. These are all examples of machine learning achievements that already exist today.
Principles adopted from nature
What helped artificial intelligence achieve this breakthrough was the adoption of principles from nature. These include the concept of the multilayer nerve network, the imitation of which makes it possible to approximate even the most complicated relationships. It also includes the observation that the calculations must be closely linked to the necessary memory; otherwise, the transport of the data becomes a bottleneck.
Seen in this light, the progress of artificial intelligence is thus based primarily on the imitation of biomorphic design principles.
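To make the multilayer network idea concrete, here is a minimal Python sketch of a tiny two-layer feed-forward network. The weights are random placeholders rather than anything trained; the point is only that stacking simple layers with a nonlinearity is what lets such networks approximate complicated input-output relationships.

```python
# A tiny two-layer feed-forward network: input -> hidden layer -> output.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 3)), np.zeros(16)  # hidden layer: 3 inputs -> 16 units
W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)   # output layer: 16 units -> 1 output

def forward(x):
    hidden = np.tanh(W1 @ x + b1)  # the nonlinearity gives the network its expressive power
    return W2 @ hidden + b2

print(forward(np.array([0.2, -1.0, 0.5])))
```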
How does neuromorphic computing work?
Our brain learns by flexibly adapting its synapses – the contact points between neurons.
When the first scientific foundations for the machine learning methods currently in use were developed in the 1950s, people had only rough ideas about how learning in the brain worked at the neuron level. The algorithms that resulted are undoubtedly very powerful, but it is equally undeniable that these procedures do not occur in nature in the way they are implemented today.
Learning from weaknesses
One could speculate that many of the limitations of artificial intelligence today are based on this shortcoming. For example, a typical weakness of AI systems is their dependence on a vast amount of learning examples; another weakness is their lack of ability to abstract or generalize correctly. This can lead, for example, to AI systems developing “biases.”
Previous AI systems required enormous amounts of data to learn.
The lack of embedding in a continuous time sequence is another deficit of current AI systems. Only with such embedding can learning, adaptation to the environment, and action be closely interwoven, and determined and coordinated by a common internal state. And only when machines possess these capabilities will they be able to take on complex tasks independently in a natural environment.
Taking a cue from reality
How can these weaknesses be addressed and solved? Scientists try to rely on “neuromorphic computing,” a research direction whose basic assumption is that one only needs to study the natural model closely enough and understand the mechanisms of nature well enough to obtain answers.
The goal of neuromorphic computing is to transfer complete knowledge about the function of the natural nervous system to artificial neural systems; such a maximally biologically inspired artificial intelligence should ideally show superior results.
However, there are also good reasons why other AI researchers rely more on conventional methods instead of incorporating the knowledge of neuroscience. One of these reasons is the complex way nerve cells communicate with each other in nature: Each individual nerve cell makes contact with about a thousand, and sometimes even millions, of other cells. If you wanted to reproduce this natural behavior with computer systems, you would have to send at least a thousand messages for each signal from each nerve cell and distribute them to the corresponding recipient cells.
Flexible connections
To make matters worse, the natural connections between nerve cells are not static but are constantly changing. Every day, about 10 percent of all neuronal connections in our brains are broken and replaced by new ones. Many external conditions determine which connections are dissolved and which are weakened or strengthened; we only have a rudimentary understanding of these conditions.
What we currently know, however, is that the signals of complete populations of nerve cells form spatial and temporal patterns and that the targeted rewiring of connections is learned. With this in mind, then, the broader question is: How can neuromorphic computation help us understand the mechanisms of learning and the assembly, disassembly, and reassembly of neuronal connections?
Why neuromorphic computing has a future
A neuromorphic chip. Such chips contain hundreds of artificial nerve cells and tens of thousands of synapses. Credit: UZH, ETHZ, USZ
Some scientists simply consider neuromorphic computing to be a redundant path. With the steady increase in the power of mainframe computers, one of their arguments goes, the deficits described would disappear by themselves. The last decade, however, has shown that the expectations placed on mainframes have far exceeded what is actually technically possible.
The miniaturization of electronics, the basis of our computer technology, has slowed down considerably; in expert circles, the current discussion is no longer whether miniaturization will ever come to an end but when it will. For some time now, the energy consumption of circuits has also not been decreasing as fast as would be necessary if we wanted to match the performance increases of the past decades.
Artificial synapses on a silicon chip
Neuromorphic computing appears to be a pioneering approach to realizing biologically inspired artificial intelligence. Neuromorphic computing is about transferring the currently known biological structures of the nervous system as directly as possible to electronic circuits.
For example, scientists have succeeded in reproducing individual neurons together with their synapses—the contact points between nerve cells at which impulses are transmitted—as microelectronic circuits on silicon chips. These circuits have as many properties as possible in common with their natural counterparts; the physical model incorporates all the biological knowledge available at the current state of research. Other research teams have constructed artificial synapses based on magnetic circuits or photonic devices.
The limitations of what is currently feasible are partly imposed by neuroscience and partly by microelectronics: Microelectronics, for example, does not allow the nerve cell circuits responsible for learning to be reproduced in their full complexity, but it does make it possible not only to imitate the speed of natural processes but even to accelerate them significantly.
Computing with the hybrid plasticity model
Neuromorphic systems are at their best when it comes to learning. The findings that neuroscientists have gained in researching the brain’s ability to learn and the interconnection of neurons can be directly transferred to electronic models and tested.
A first model
The new “hybrid plasticity model” incorporates insights from neuroscience, electronics, and computer science in equal measure. For each possible connection between electronic neurons, the plasticity model holds a circuit ready to measure the signal flow. A conventional computer system could never perform this task nearly as efficiently and compactly. All signals must be monitored simultaneously, a task that nature easily masters with quadrillions of synapses firing simultaneously.
Compared to the natural model, our current electronic systems are only modest miniature versions. In the future, neuromorphic systems with up to a trillion connections will be possible. This would make it possible to test the learning of complex functions, such as the movements of humanoid robots.
The crucial factor here is that the parallel measurements of all signal flow between nerve cells must be processed by a special computer core within the same microchip. Only then can the computer core quickly and directly evaluate the measurement results for all signal flows without having to exchange information over distances of more than a few millimeters.
Fast learning, even with simple rules
Because the hybrid plasticity model is a freely programmable microprocessor, scientists can determine the rules according to which the connections between the nerve cells are to be changed.
Experiments have shown that even relatively simple rules lead to stable learning results and efficient use of existing connections in a short time if they take into account the different temporal and spatial structures in the signals according to their biological models.
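As an illustration of what a “relatively simple rule” might look like, here is a minimal Python sketch of a Hebbian-style local plasticity rule; it is a generic textbook example, not the team’s actual hybrid plasticity circuitry. Each connection is strengthened when its two neurons are active together and slowly decays otherwise.

```python
# A simple local plasticity rule: strengthen connections between co-active
# neurons, let unused connections fade.
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 8
weights = np.zeros((n_neurons, n_neurons))
learning_rate, decay = 0.1, 0.01

for _ in range(200):
    activity = (rng.random(n_neurons) < 0.2).astype(float)   # which neurons fire this step
    weights += learning_rate * np.outer(activity, activity)  # strengthen co-active pairs
    weights -= decay * weights                                # gradual forgetting
    np.fill_diagonal(weights, 0.0)                            # no self-connections

print(weights.round(2))
```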
This learning is “hybrid” in the sense of a mixture of a physical and a virtual replica of nature. Mathematics and engineering, two fundamental cultural techniques of humans, are needed to get one step closer to understanding one of the most fundamental abilities of animate nature—the principles of learning of nervous systems. It is precisely this ability to learn that makes the phenomenon of “culture” possible in the first place.
Photons instead of electrons could shape the computer technology of the future. This is because light can be used to make circuits even smaller and computers even faster. The optical hardware alternatives open up the possibility for fascinating applications, for example, in the field of artificial intelligence and high-performance computers that come close to the way the human brain works.
Scientists, mainly experimental physicists, all over the world are researching nanophotonics and working to realize integrated optical circuits for artificial intelligence and optical quantum computing.
How exposure shapes semiconductor structures
In photography, the effects of hard and soft light are well known. Hard light can be used to illuminate the surroundings with clearly defined, sharp contours in the cast shadow. Soft light makes the image more diffuse and blurs outlines. The decisive factor for the respective effect is the size of the light source in comparison to the illuminated object: a relatively small light source leads to hard shadows, and an extended light source leads to softer, blurred contours.
Down to the nanoscale
Hybrid nanowires with the polarization of light. (Credit: June Sang Lee, University of Oxford)
Playing with sharpness and blur does not only allow for impressive design possibilities in visual art. Sharp contours are also desirable when it comes to creating defined structures for semiconductor technology on very small scales. However, diffraction of light at very small structures causes clear patterns to soften, which always happens when the scales are in the range of the optical wavelength and thus on the nanoscale.
These diffraction effects dictate the minimum resolution that can be achieved when exposed to a given wavelength or color of light. To achieve very fine nanostructures or very densely packed patterns, really hard light would be ideal. Especially since light affects not only the contours but also the properties of the exposed materials, such as in photolithography, an important method for manufacturing semiconductor devices.
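A common rule of thumb for that minimum resolution is the Abbe diffraction limit, d = λ / (2·NA), where NA is the numerical aperture of the imaging optics. The short sketch below evaluates it for two illustrative lithography wavelengths; the specific values are examples, not taken from the article.

```python
# Abbe diffraction limit: smallest resolvable feature for a given wavelength
# and numerical aperture.
def abbe_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    return wavelength_nm / (2 * numerical_aperture)

print(abbe_limit_nm(193.0, 1.35))  # deep-UV immersion lithography: ~71 nm
print(abbe_limit_nm(13.5, 0.33))   # EUV lithography: ~20 nm
```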
The importance of coating
To create nanopatterns, photoresists are used in semiconductor technology. The solubility of photoresists depends on how they are exposed. There are resists that cure or chemically crosslink upon exposure, and there are resists that soften when exposed to the light of a certain wavelength. Curing coatings are suitable for protecting materials, while softening coatings are used, for example, to define areas where further processing steps are to take place.
The sophisticated sequence of exposures with both types of photoresists is the basis for producing nanostructures that process electrical or optical signals and can be used as integrated building blocks (chips) for electronics and optics. For so-called integrated optics, i.e., optics on a chip basis, scientists usually aim for nanostructures that are smaller than one micrometer. For comparison, hair has a diameter of about 50 micrometers.
Light can be introduced into very small nanostructures with the help of so-called waveguides. This makes circuits possible in which light particles, called photons, act instead of electrons.
Advantages of photonic computers: Light instead of electrons
Since photons are massless and travel at the speed of light, they always beat out electrons, which are massive and therefore slower particles.
Photonic circuits are interesting for all applications that require data to be processed very quickly and without consuming large amounts of energy. This is because light particles work in them instead of electrons. This opens the door to completely new optical computer architectures that do not work like traditional electronic computers but are modeled on the way the human brain works.
Modeled on the brain
The focus is on the development and improvement of these computing architectures. Neuromorphic computers inspired by the construction and operation of the human brain are important for providing hardware for artificial intelligence (AI).
AI applications have already arrived in many areas of our everyday lives. For example, they help with speech recognition, encrypt data on smartphones, support search functions, are used for pattern recognition on the Internet, and are essential for safety in autonomous driving.
All these applications pose enormous challenges for conventional electronic computers because data storage and processing are carried out separately. This type of electronic architecture makes it necessary to exchange data continuously via special systems known as bus systems, which leads to sequential clocking and limits data throughput.
Analog computing
The human brain handles data completely differently. They are processed locally with very high parallelism, and processing is often analog, i.e., continuous, rather than digital, i.e., in steps. This is important for all computational operations required in AI. “Photonic neuromorphic computing” promises to meet these high demands.
Photonic neuromorphic computers use light to perform addition and multiplication as elementary mathematical operations. The actual computational task is translated into how the numbers to be combined are encoded and how the result is read out. Photonic neuromorphic computers can do this in a single step because light particles, unlike electrons, have many degrees of freedom that allow for inherent, simultaneous data processing; this is the special charm of optical computing methods.
How photonic chips calculate
Our brains – and also artificial neural networks – process information using multi-layered networks of parallel nodes. (Image: ScienceDirect)
In electronic systems, it is necessary to perform many individual operations one after the other. For data transmission using optical fibers, for example, the wavelength of light is exploited so that data can be exchanged in many colors simultaneously over the same optical fiber.
Faster through parallel processing
This principle can also be used for photonic computers. Different computing operations are encoded on different colors and performed in parallel in the same optical computing system. Parallel processing offers enormous speed increases due to increased clock frequency; the calculations are performed at the speed of light, as it were. Photonic computers can not only compute much faster than electronic systems, but they can also scale in a dimension that is fundamentally unavailable to traditional computers.
Thanks to advances in materials science and integrated optics, photonic circuits can now be designed and simulated on the computer and then manufactured in factories. Fortunately, silicon can be used as a waveguide material, which is also highly compatible with the methods used in the semiconductor industry.
With matrices and vectors
All of these capabilities make it possible to combine individual components into large-scale systems. For optical computing, this means that small circuits are interconnected to form powerful arithmetic units that perform many multiplications and additions in parallel. This is done with the aid of arithmetic grids, or matrices; the numbers to be multiplied are fed in, in parallel, as vectors.
Based on these, the photonic chips perform matrix-vector calculations using light, with very high data throughput and very low energy consumption. Synthetic structures that mimic nerve cell networks are already being used today in artificial intelligence for cognitive processes. Matrix-vector multiplications are central computations in these neural networks. However, they have to be performed repeatedly, which takes a lot of time and energy.
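To make this concrete, here is a minimal NumPy sketch of the operation in question: a matrix-vector multiplication as it appears in one layer of a neural network. A photonic accelerator would carry out the same weighted sums with light, for example by encoding inputs on different wavelengths so that all outputs are produced simultaneously.

```python
# Matrix-vector multiplication as the core operation of one neural-network layer.
import numpy as np

rng = np.random.default_rng(42)
weights = rng.normal(size=(4, 8))     # 8 inputs feeding 4 output neurons
inputs = rng.normal(size=8)

outputs = weights @ inputs            # four weighted sums, computed "in parallel"
activations = np.maximum(outputs, 0)  # simple nonlinearity applied afterwards
print(activations)
```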
To save time and energy, scientists try to have the central computational steps performed by special accelerator systems that are optimized for this kind of computational operation and can make them efficiently available for further computations. This is where photonic hardware accelerators come into play: they allow very fast data processing and provide a very large computing capacity in the long term.
Hybrid computing with electrons and light
Photonic computers are very well suited for special operations, while other operations can be better mapped to electronic hardware. The ideal would be “hybrid computers” that work electronically and, at the same time, contain photonic accelerators. So far, it is still difficult to integrate photonic accelerators into existing electronic computing systems. However, these approaches are attractive models for future high-performance computers that physicists are exploring to satisfy the computational appetite of new AI applications, even in the long term.
Where photonic modules have an advantage
The very high processing speeds make photonic computing modules interesting, for example, for object recognition in autonomous driving, an essential safety requirement. For this purpose, camera and sensor systems record the environment, and the computer must interpret the data so that it can react dynamically to what is happening on the road. AI processes, especially neural networks, are a central component of this.
How fast the overall system reacts is determined by the computing time of the system components. The faster the object detection, the faster the vehicle can be corrected; the accelerator systems thus directly influence the safety of the overall system.
Other interesting applications include training neural networks, which is very time- and energy-consuming.
Photonic matrix-vector multipliers can be used for this purpose. They offer the advantage of high data throughput, they can be efficiently adapted to different requirements, and they contribute significantly to a reduction in energy consumption.
Phase change as a chip designer
Photonic computers can also be reprogrammed in a variety of ways. One elegant solution is via phase-change materials, such as those used on rewritable DVDs. If these materials exist as hard crystals, they are metal-like. If the crystal structure is disordered or amorphous, the materials are glass-like and transparent. Both states can be very accurately adjusted by exposure to short laser pulses.
The change in material state is used as the basic principle for optical data storage on DVDs, with the two material states encoding the bits: metal-like, crystalline states represent a logical zero, and glass-like, amorphous states represent a logical one. Once set, the states are maintained over very long periods of time, up to decades, without the need for external energy input.
Last but not least, it is attractive for practical applications that even very small structures of a phase change material have very large effects on the light via the waveguide used. This allows scientists to reduce the size of the devices and effectively use the space on a chip.
The softness and hardness of light outline a fascinating field of tension: on the one hand, the precise formation of structures with the help of the properties of light, and on the other hand, the adjustment of the degree of hardness of materials through light modifications. Skillful exploitation of this interplay makes it possible to design new lithography processes and new optical methods for data processing.
The transition regions between the two extremes are particularly exciting, such as mixed states of phase-change materials that contain both ordered crystalline and disordered components. Equipped with this toolbox, it is certainly possible to solve even the hardest problems of computer architecture design.
Every day, a fleet of satellites of every possible size and form completes numerous orbits around the Earth. The variety of duties they do mirrors the diversity of their appearances. Above the turbulent atmosphere, scientific satellites probe the depths of space, seeking answers to the astronomical community’s many unresolved problems. Television, long-distance telephone conversations, and global information exchange as we know them today would not exist if not for the fleet of communications satellites.
However, the artificial celestial bodies employed for Earth monitoring are particularly important for life on Earth. They can see well even in the dark, and even through thick clouds. They aid in mapping the Earth, forecasting the weather, monitoring volcanoes, and tracking pack ice. In short, these scouts in space have become crucial partners in mapping our home planet.
Keeping an eye on Earth
When conditions are right, satellites can be glimpsed with the naked eye even though they are tiny and travel at a great distance from Earth. This only works if their orbits are quite low, between 185 and 500 miles (300 and 800 kilometers) above the surface of the Earth; at these altitudes, they complete an orbit of the Earth in around 90 minutes.
The optimum conditions for seeing satellites occur when the satellite is bathed in sunlight but the observer is in Earth’s shadow, that is, when it is night for the observer. This is especially the case during May, June, and July, when the days are long and the nights short. In this season, the sun dips only shallowly below the horizon at night, so Earth’s shadow does not reach very high into the sky. In other words, at the satellite’s altitude it is still daytime even in the middle of the night. As long as the satellite catches sunlight, we on Earth see it as a bright, swiftly moving point in the sky. If it moves too far into Earth’s shadow, where it is dark as well, its light suddenly goes out.
The larger these manmade stars are and the lower they orbit, the brighter they shine. The ISS, itself a satellite, is so large that it is plainly visible to the unaided eye, although only because it reflects sunlight. You’ll need a telescope to see satellites that are too small or orbit too high.
It’s interesting to note that hardly any satellites can be seen crossing the sky from east to west; the ISS, for example, travels from southwest to northeast. The Earth’s rotation explains why. The majority of satellites are launched into west-to-east (prograde) orbits, because the Earth’s rotation adds to their speed and less thrust is needed to reach orbit. A satellite flying against the Earth’s rotation would gain nothing and would only need more fuel for acceleration.
So-called Iridium flares are magnificent celestial events produced by satellites. To create a mobile communications network that could be used all over the world, the original Iridium company, now defunct, launched 66 communications satellites into space. Years later, 75 new Iridium satellites were launched into orbit on SpaceX Falcon 9 rockets to replace the old constellation. The satellites are still circling up there, and they are the ones producing all those bright flashes.
Flares are transient light phenomena with a quick onset and rapid decay. Each satellite’s three enormous antenna panels are responsible for them. When the sun shines on the panels, some of the light is reflected back to Earth, where it spreads into a light spot roughly 60 miles (100 km) wide. An observer standing in the beam of this light sees the following: at first, a dim object whizzes across the sky. Over the next few seconds it brightens, gradually at first and then rapidly, until it is almost blinding. Just as quickly, the brilliance fades again until the satellite apparition is over.
Satellites and their uses
According to the Union of Concerned Scientists (UCS), more than 5,500 man-made satellites performing a broad range of functions orbit the Earth today, and about 1,500 new satellites are launched every year. Larger objects such as the International Space Station (ISS), and formerly the Space Shuttle, also travel in orbit above Earth, but unmanned satellites are by far the most common. But what exactly are satellites anyway?
A “satellite” is the scientific term for any smaller body in orbit around a bigger one: the moon is thus a satellite of Earth, and Earth a satellite of the sun. In the narrower sense, however, the term refers to any artificial flying object in orbit around a celestial body. There are manned and unmanned spacecraft, with the manned ones more often referred to as space stations or space shuttles. A great deal of space debris, such as retired satellites and spent rocket stages, also orbits among these artificial satellites; as of 2022, their number is around 27,000.
Satellites are launched into orbit for three major purposes: Earth observation, communication, and space exploration. The last of these investigate the solar system or deep space in different bands of the electromagnetic spectrum, including X-rays. Their great advantage is that they observe from above the disruptive effects of the planet’s atmosphere.
Satellites designed for Earth observation perform a wide range of functions: they aid in weather prediction, volcano monitoring, Earth mapping, iceberg tracking, and much more. Without communications satellites, modern society would be a long way from its goal of globalization. They allow for global communication via smartphones, worldwide television broadcasting, and the transmission of scientific data from orbiting satellites to ground stations on Earth.
The development of satellite technology has substantially increased humanity’s knowledge of the planet and the cosmos. Much of the data satellites acquire concerns Earthly phenomena. The Landsat satellites (the newest, Landsat 9, launched in 2021) provide such a detailed map of the Earth that it can be used to check the accuracy of older, less reliable maps. Scientists can even use them to check whether the vegetation in an area is healthy or has been harmed, and these scouts in space can also help locate pollutants in the environment. Images from the Hubble Space Telescope, which is technically a satellite, have shed light on fundamental aspects of the cosmos; the James Webb Space Telescope has now taken up the torch.
The specific orbits of satellites
The International Space Station orbits Earth once every 90 minutes.
Scientists use the term “orbit” to describe the specific route each satellite takes as it circles the planet. All satellites must fly well above the tropopause (11 mi/17 km above the equator, 5.6 mi/9 km above the polar regions) to avoid being slowed by air friction. In practice, this means altitudes above 185 miles (300 km); below that, satellites cannot be placed into a permanent orbit.
An artificial celestial body’s orbit is determined primarily by two factors: its speed and the angle of its orbit with respect to the Earth’s equator. A satellite’s speed is its circular orbital velocity, at which centrifugal force and Earth’s gravity balance each other, keeping the satellite in a stable orbit. This velocity depends on the satellite’s altitude: it is about 5 miles per second (7.9 km/s) close to the surface and decreases with increasing altitude.
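For readers who want to check those numbers, here is a small Python sketch using the standard formula for circular orbital velocity, v = sqrt(GM/r), with r measured from Earth’s centre; the altitudes are illustrative.

```python
# Circular orbital velocity at a few altitudes above Earth's surface.
import math

GM_EARTH = 3.986e14    # gravitational parameter of Earth, m^3/s^2
R_EARTH = 6_371_000.0  # mean Earth radius, m

def orbital_velocity_kms(altitude_km: float) -> float:
    r = R_EARTH + altitude_km * 1_000
    return math.sqrt(GM_EARTH / r) / 1_000

print(orbital_velocity_kms(0))       # ~7.9 km/s just above the surface
print(orbital_velocity_kms(300))     # ~7.7 km/s in low Earth orbit
print(orbital_velocity_kms(35_786))  # ~3.07 km/s at geostationary altitude
```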
Depending on the satellite’s purpose, various orbits may be more or less appropriate. A polar orbit’s inclination angle is around 90 degrees with respect to the equator. Satellites in these orbits circle the Earth above the poles, providing a birds-eye view of the whole planet as it spins beneath them. In order to track storms, for instance, some weather satellites are placed in such orbits.
The geostationary orbit is used by other satellites. To achieve it, they need to be about 22,300 miles (35,888 km) above the equator. At this altitude, a satellite travels at about 3.07 km/s and completes one orbit in exactly the time it takes the Earth to rotate once. Because of this, the satellite always stays above the same point on the Earth’s surface.
Such orbits are often used by meteorological and broadcast satellites. Otherwise, a television satellite's receiving antenna would have to be continually re-aimed to follow the satellite. Because a geostationary satellite appears fixed in the sky, a satellite dish only needs to be aligned once to receive its signals.
The equipment of the artificial satellites
Equipment for measuring and solar cells
The appearance of satellites, their size, and weight can be rather variable. The first satellite of the United States, Explorer 1, launched in 1958, was 6.5 feet (2 meters) in length and weighed 17.5 pounds (8 kilos). The Compton Gamma-Ray Observatory, built and launched by NASA in 1991, was 70 feet (21.3 meters) in length and weighed 17 tons. The largest artificial satellite is the International Space Station (ISS), which weighs about 444 tons (980,000 lb) and measures about 109 m (357 ft) across.
Although satellites vary widely in size and mass, their core design features are much the same across the board. Large panels of solar cells are the energy source for most satellites. Satellites also carry small correction thrusters to counteract orbital disturbances and the very small amount of residual air friction experienced in orbit; with them, the experts at the control center can adjust the satellite's orbit when needed.
In addition to the measurement equipment that is essential for the satellite’s mission, they also carry additional instruments that are used for command and control. These tools can be used to identify power outages, temperature fluctuations, and pressure fluctuations, among other things. To keep the satellite on track, control sensors measure the spacecraft’s distance from Earth and its angle with respect to the horizon and the stars. The ground station can make course adjustments in response to erroneous readings from this equipment.
The vast majority of satellites are deployed in Earth-observation roles. Weather-monitoring satellites and other Earth-observation spacecraft, above all, have become crucial to life on Earth. But how do we make use of space-based observations?
Hurricanes, typhoons, floods, cyclones, tidal waves, and even wildfires can’t be predicted without the help of weather satellites. Understanding these occurrences allows for better catastrophe prediction and prevention.
It would be a mistake to discount the value of weather satellites in agriculture. They inform farmers of impending weather events like hail and snow, allowing them to better plan planting and harvesting. When a weather change is imminent, crops that are vulnerable to frost or wetness can be moved to a safe location with the aid of satellites. Planning large-scale projects like the building of bridges, roadways, and dams relies heavily on accurate weather predictions.
The Television Infrared Observation Satellites (TIROS) were the first satellites dedicated to studying Earth from space. The program was crucial in observing Earth's cloud cover and in proving the usefulness of satellites for meteorology. The first in the series, TIROS 1, was launched in 1960, and data from the program has greatly improved our understanding of the Earth's cloud cover ever since.
NASA's Nimbus program, started in 1964, takes its name from a type of cloud. Its satellites were outfitted with both visible-light and infrared imaging sensors to create the first worldwide meteorological satellite system. The ozone spectrometer aboard Nimbus 7 was crucial to the investigation of the ozone hole over Antarctica and of the worldwide distribution of ozone.
The newest weather satellites can be found in the GOES (Geostationary Operational Environmental Satellite) fleet. The first satellite in this series, GOES-1, ushered in a whole new era in weather monitoring. GOES-18 was launched on March 1, 2022, and GOES-U is scheduled for launch in April 2024. They are able to capture detailed photos throughout the visible and infrared spectrums and give maps of the Earth’s temperature and humidity.
Many more satellites than just those dedicated to weather monitoring circle the planet. The Global Positioning System (GPS) fleet, used for satellite navigation, is one example: it allows positions on Earth to be pinpointed to within a few meters. The GPS constellation nominally consists of 24 satellites spaced around the planet in several orbital planes, and the system can operate with up to 32.
Because their orbits are inclined to the equator, sweeping between roughly 60 degrees north and 60 degrees south, their signals can be received everywhere on Earth, and they work in any weather. Whenever a GPS satellite transmits, it broadcasts data about itself, its position, and the exact current time.
A GPS receiver uses the difference between the time a signal was broadcast and the time it was received to work out its location: the time difference, multiplied by the speed of light, gives the distance to the satellite. If the receiver processes signals from three separate satellites, it can calculate a so-called 2D position, i.e. geographical longitude and latitude; with four or more satellites it can also determine its altitude.
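As a rough illustration of the principle, the following Python sketch turns simulated signal travel times into a position fix. The satellite coordinates, the receiver location, and the simple Gauss-Newton solver are invented for this example; a real GPS receiver also has to solve for its own clock error, which is omitted here.

```python
# A minimal sketch of how travel times can be turned into a position.
# Everything below (satellite positions, receiver position, solver) is
# illustrative and not the actual GPS algorithm.
import numpy as np

C = 299_792_458.0  # speed of light in m/s

# Hypothetical satellite positions in a local Cartesian frame (metres).
sats = np.array([
    [ 15_000_000.0,   5_000_000.0, 20_000_000.0],
    [-10_000_000.0,  12_000_000.0, 21_000_000.0],
    [  4_000_000.0, -14_000_000.0, 19_000_000.0],
    [ -6_000_000.0,  -7_000_000.0, 22_000_000.0],
])
true_receiver = np.array([1_200_000.0, -300_000.0, 0.0])

# The receiver only observes travel times; distance = c * travel time.
travel_times = np.linalg.norm(sats - true_receiver, axis=1) / C
ranges = C * travel_times

# Gauss-Newton iteration: refine a guess until the predicted distances
# match the measured ones.
x = np.zeros(3)
for _ in range(10):
    diff = x - sats
    predicted = np.linalg.norm(diff, axis=1)
    residual = ranges - predicted
    jacobian = diff / predicted[:, None]   # d(range)/d(position)
    dx, *_ = np.linalg.lstsq(jacobian, residual, rcond=None)
    x += dx

print(x)  # converges to roughly the true receiver position
```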
As of 2022, more than 140 million drivers in the United States alone rely on the GPS system to help them navigate unfamiliar places.
Satellites investigating space
The Upper Atmosphere Research Satellite (UARS), 1991, (Image credit: NASA Marshall Space Flight Center)
The first satellites in history were scientific satellites, which not only monitored the Earth but also looked out into the void of space. Scientists have learned a great deal about the Earth and the cosmos from the data they have provided.
The Upper Atmosphere Research Satellite (UARS), which was launched in 1991, was the first research satellite to gather information about Earth from orbit. Its goal was to identify the systems in charge of the operations in the upper atmosphere. For instance, UARS created the first worldwide map of the atmospheric dispersion of chlorine monoxide. This demonstrated a clear correlation between the substance’s presence and the drop in ozone levels.
From orbits around the Earth, several additional satellites study the universe, the Sun, and other celestial bodies. The International Ultraviolet Explorer (IUE), launched in 1978, observed Supernova 1987A, for instance, and found that the exploded star cooled surprisingly swiftly.
The Hubble Space Telescope (1990–today) and the Compton Gamma Ray Observatory (1991–2000) likewise produced important astronomical discoveries. Compton, which examined gamma rays in space, showed that the so-called gamma-ray bursts, celestial phenomena that continue to puzzle scientists, are far more energetic than previously believed and originate from regions well beyond the Milky Way. At that time, these gamma-ray bursts had only recently been detected.
The United States launched a number of satellites in 1963 to keep an eye out for any nuclear weapon explosions in the atmosphere or in space during the Cold War. Instead of finding nuclear explosions on Earth, scientists were able to detect gamma bursts from space, a phenomenon that is still disputed among astrophysicists today.
We have discovered a great deal about the universe’s beginning thanks to these research satellites. A NASA satellite discovered slight temperature changes in cosmic background radiation in 1992. This radiation is a remnant of the Big Bang, the explosion that gave rise to the universe 13.8 billion years ago. The temperature changes that have been seen are consistent with the idea that the universe’s structure was formed when it was less than a trillionth of a second old.
One scientific satellite whose instruments peer deep into outer space is XMM-Newton, launched by ESA in 1999. It carries the highly sensitive X-ray telescope XMM (X-ray Multi-Mirror), which hunts out and investigates previously unknown celestial objects. It does so by observing countless stars in the Milky Way, other galaxies, galaxy clusters, and quasars, which are thought to harbor black holes. One of the most powerful X-ray satellites in the world, XMM-Newton can detect minute amounts of X-ray radiation, probing the cosmos to depths never reached before.
XMM-Newton travels in an elliptical orbit around the Earth at altitudes between 5,700 kilometers (3,540 mi) and 113,000 kilometers (70,200 mi). From there it searches for X-ray sources beyond the Earth's atmosphere, which filters out X-rays coming from space. Researchers can then use these measurements to draw conclusions about the physical and chemical properties of the sources. As of 2018, at least 5,600 papers had been published on XMM-Newton and its scientific results.
Satellites simplify life
Today, the ability to connect to a phone anywhere on the globe and make calls is taken for granted, even in the most isolated areas. In a similar vein, the news can spread like wildfire around the world. Without the use of communications satellites, none of this would be feasible. This would otherwise only be possible via the time-consuming installation of cables in several parts of the globe; just like the first transatlantic cable.
The earliest commercial satellites were communications satellites. While some satellites are effectively administered by government entities, the majority are run by commercial businesses.
In 1960, aluminum-coated balloons served as the first communications satellites. These were passive satellites that merely reflected radio signals in order to relay them. Because of their limited usefulness, they were quickly replaced by active satellites, which receive signals, amplify them, and then retransmit them to another location on the Earth's surface.
Telstar 1, which was run by the American telecommunications corporation AT&T, was the first privately launched communications satellite. It was the first satellite to carry both black-and-white and color television across two continents, and it was also capable of switching telephone calls between America and Europe.
The International Telecommunication Satellite Consortium (Intelsat), a global organization made up of 65 nations that significantly enlarged the commercial communications network, was created in response to the enormous need for new communication channels. Only two stations could connect at once with the first Intelsat satellite. Intelsat now runs one of the world’s biggest fleets of communications satellites, consisting of 52 satellites that provide service to 200 nations.
The NASA-developed Advanced Communications Technology Satellite (ACTS) was sent into orbit in 1993. The technology was far more cost-effective, offering three times the communications capacity of other satellites of the same weight. It also enabled faster communication, paving the way for commercial use of the upgraded technology. In fact, ACTS was the first high-speed, all-digital communications satellite.
To forecast the weather, meteorologists need to know the vertical distribution of temperature and water vapor in the Earth's atmosphere. Until now, these profiles have usually been measured with weather balloons. But because balloons are launched from only a few sites on Earth and at long intervals, the gaps in our knowledge of the atmosphere's vertical structure remain enormous.
LANDSAT
Landsat 9 in orbit, NASA.
The LANDSAT series, launched in 1972, is one of NASA and the U.S. Geological Survey’s (USGS) most adaptable Earth observational tools. Earth’s surface has been observed by LANDSAT satellites for more than 50 years, which has aided in our understanding of the intricate interactions that cause many of the world’s changes.
In 1972, the first of the LANDSAT satellites was sent into orbit, paving the way for detailed, high-resolution monitoring of the planet's land and coastal surfaces ever since. The latest in the series, LANDSAT 9, entered Earth's orbit in 2021 with a planned mission of at least five years. LANDSAT is unlike any other Earth-observing program in the breadth of its possible uses: its imagery has served purposes ranging from tracking the growth and shrinkage of glaciers to checking the cleanliness of lakes and coastlines, charting the distribution of pack ice, and mapping forest coverage.
LANDSAT is used by researchers to keep an eye on the land surface and nearshore water regions and analyze the effects of climate change on various ecosystems. Important natural processes and human-induced changes, including deforestation, agricultural usage, erosion, and water levels in drinking water reservoirs, are all documented by LANDSAT. Repeated observations of the same spot over the course of a year can reveal seasonal variations.
The repeated eruptions of Hawaii’s Kilauea have been studied in part via the use of LANDSAT imagery. Mapping active lava flows is critical so that locals may be warned in a timely manner. The delicate equipment on LANDSAT is capable of distinguishing between fresh lava flows and those that have previously cooled.
In recent years, forest fires have played a significant role in the degradation of ecosystems across the world. Knowledge of the volume and wetness of biomass on the ground, which supplies fuel for the flames, is crucial in preventing and putting out these types of natural catastrophes. For safer firefighting, this data may be used to identify potentially hazardous places and minimize dry biomass there.
Methods for recognizing various kinds of dry biomass have been developed with the use of LANDSAT images. Scientists may use spectral analysis to tell whether the flora they’re studying consists of lush meadows or dry trees that might spark the next forest fire.
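As an illustration of how such a spectral analysis can work, the sketch below computes two widely used indices from hypothetical reflectance values. The band assignment follows the common Landsat 8/9 convention (red = band 4, near-infrared = band 5, shortwave infrared = band 6); the pixel values and thresholds are made up for illustration and are not taken from any real study.

```python
# Toy example of spectral indices used to flag dry, fire-prone vegetation.
import numpy as np

red  = np.array([[0.08, 0.25], [0.10, 0.30]])   # red reflectance (band 4)
nir  = np.array([[0.45, 0.28], [0.50, 0.32]])   # near-infrared reflectance (band 5)
swir = np.array([[0.18, 0.40], [0.20, 0.45]])   # shortwave-infrared reflectance (band 6)

# Normalized Difference Vegetation Index: high for lush, green vegetation.
ndvi = (nir - red) / (nir + red)

# Normalized Difference Moisture Index: low values hint at dry biomass.
ndmi = (nir - swir) / (nir + swir)

dry_and_fire_prone = (ndvi < 0.3) & (ndmi < 0.1)
print(ndvi, ndmi, dry_and_fire_prone, sep="\n")
```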
Radar satellites
Tracing the surface of the planet
Scientists utilize radar satellites for the specific purpose of surveying the Earth’s surface. In the course of their development, radar systems have become the most potent remote sensing tools available today.
Radar waves, which have wavelengths in the centimeter range, are simply radio waves that are reflected by solid objects and liquids. The systems are versatile: they can be used not only to locate objects but also to characterize surfaces through their distinctive reflective properties. Radar systems excel because they work in darkness and can see through cloud cover.
The military first used radar as a means of surveillance in order to locate and track hostile aircraft and ships. After the war, the technology was adopted by the general public. It was rapidly recognized for its use in cartography, oceanography, and land-use research.
To map larger regions, radar systems have to be carried far higher than aircraft can fly. NASA began experimenting with radar systems in orbit as early as 1962, and in 1972 Apollo 17 carried an upgraded instrument to the Moon for a detailed study of the lunar surface and its underlying geology. These results prompted the addition of a radar system to the SEASAT satellite for ocean monitoring. In its roughly 100 days of operation, SEASAT collected more data about the oceans than had been gathered in the previous 100 years of ship-based research combined.
In the 1980s, radar equipment was flown regularly on the space shuttles, and these missions yielded new insights into how geological features can be scanned from radar images. Today a whole fleet of radar satellites circles the Earth, used to study the planet's surface in new ways and to keep tabs on noteworthy events. The ERS-1 satellite, for instance, was reactivated after its successor ERS-2 had taken over its duties so that it could observe the volcanic eruption beneath Iceland's Vatnajökull glacier.
ERS satellites (European Remote-Sensing Satellites) have a dual capability for observation. The first setting is utilized for land surveys, while the second is for ocean research. Applications for satellite radar data are expanding in the field of environmental monitoring. To better understand oceanic phenomena like wavefronts and sea conditions, for instance, researchers can monitor oil spills, track sediment intake from rivers, and track the movement of ice sheets. The satellites collect information on human activities on the ground, such as farming and logging, as well as natural phenomena, such as earthquake hotspots and geological formations.
Radar satellites can also be used to prospect for mineral, oil, and gas deposits as well as underground water supplies. Archaeology has benefited greatly from radar technology, too: scientists have used it to survey the entire Angkor temple complex in Cambodia, and vestiges of an even earlier wall, barely visible from the ground, were uncovered close to the modern Great Wall of China.
Networks of thousands of mini-satellites are currently being placed in Earth orbit. They promise broadband Internet access even in the remotest corners of the planet, data links for autonomous cars and drones, and communications everywhere. These mega-constellations, which herald nothing less than a new era in global communications, have both advantages and drawbacks.
Competition in the satellite Internet industry has begun. SpaceX with Starlink, Amazon with the Kuiper project, and OneWeb, a corporation formed particularly for this purpose, are the three private firms now in direct rivalry. However, both China and the EU have stated their ambition to launch their own satellite networks into space in the near future. Big bucks and total control of the digital communications market are in sight.
However, some people have a negative opinion about putting tens of thousands of satellites into Earth’s orbit. Astronomers are concerned that this may cause light pollution and interference with their telescope observations. The increased risk of accidents and space debris is a major concern for space organizations.
What does satellite internet bring?
The issue is not new: although individuals in major cities and urban regions benefit from continual connectivity and relatively fast data transfer rates made possible by broadband Internet and extensive cellular coverage, those living in rural areas are left in the digital darkness. Fiber optic connections are the stuff of fantasy for those living in rural areas of even highly developed nations like Germany.
Almost half of the world’s population is still offline
Distribution of ping-accessible Internet connections in 2012. Cody Hofstetter/CC-BY-SA 4.0
And on a worldwide basis, this digital divide is much more severe: Approximately half of the world’s population uses the Internet, with estimates putting the number of users at slightly under four billion. When it comes to network availability, developing nations and sparsely populated areas like Asia’s steppes do the worst. According to the International Telecommunications Union (ITU), a full 72% of Africa and slightly more than half of Asia lacked access to the internet in 2020.
This is because installing data connections, whether fiber-optic or even copper cables, is a difficult and costly process, and the same is true for building cell towers. For commercial network providers, the investment pays off only if enough users live in the region. Where that is not the case, laying cables or erecting antennas is a loss-making business, and so it has simply not been done to date.
Satellite network instead of cable
This is where the mega-constellations come in: orbiting networks of communications satellites are meant to bring Internet and cell phone service to regions where it was previously unavailable or painfully slow. The constellations provide continuous service because their hundreds to thousands of mini-satellites are spread evenly around the globe. A single satellite in SpaceX's Starlink constellation, for example, can cover an area roughly 1,000 kilometers across.
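How large that footprint is follows from simple geometry: the satellite's altitude and the minimum elevation angle at which a user terminal can still see it. The Python sketch below illustrates this with an assumed minimum elevation of 40 degrees, which is an illustrative value rather than an official Starlink parameter.

```python
# A rough sketch of the geometry behind a satellite's coverage footprint:
# the lower the orbit and the higher the required elevation angle, the
# smaller the area served. The 40-degree minimum elevation is an assumption.
import math

R_EARTH = 6371.0  # km

def footprint_radius_km(altitude_km: float, min_elevation_deg: float) -> float:
    """Ground radius of the area from which the satellite is visible above
    the given elevation angle."""
    eps = math.radians(min_elevation_deg)
    # Central angle between the sub-satellite point and the edge of coverage.
    central_angle = math.acos(R_EARTH / (R_EARTH + altitude_km) * math.cos(eps)) - eps
    return R_EARTH * central_angle

for h in (550, 1200, 35_786):
    r = footprint_radius_km(h, 40)
    print(f"altitude {h:>6} km -> footprint about {2 * r:,.0f} km across")
```

Under these assumptions a satellite at Starlink-like altitude serves a circle on the order of a thousand kilometers across, which matches the figure quoted above.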
The satellites used for telecommunications and broadcasting are located in geostationary orbit at an altitude of 36,000 kilometers; the mini-satellites of the mega-constellations, in contrast, travel in low earth orbit (LEO) at an altitude of 500 to 1,500 kilometers. In these constellations, many satellites should be able to cover any given area, resulting in a reliable connection. One satellite’s loss of communication won’t affect the operation of the system as a whole.
Each satellite weighs around 250 kilograms and carries only minimal equipment, making it far cheaper to manufacture than a traditional communications satellite. A constellation mini-satellite typically carries just a radio antenna and transmitter, a pair of small solar panels, a computer, star and sun trackers for navigation, and an ion propulsion engine. Because they are cheap to build in bulk, these satellites can be launched in batches of 60–75 at a time.
Increased data transfer speed and lower latency
The new mega-constellations’ increased capacity and transmission speeds are the main benefits over earlier satellite broadcasts. The Ka and Ku bands of the microwave spectrum are used by satellites in low Earth orbit, and their frequencies range from around 12 to 40 gigahertz. Providers of the new satellite Internet have made claims that the short-wave Ka band is capable of very high data transfer speeds, rivaling or even surpassing those of terrestrial broadband services.
The latency of up to 700 milliseconds experienced by a signal sent by a typical communications satellite renders real-time remote control of equipment, video telephony, and other real-time applications impractical over such links. In low Earth orbit, communications travel a fraction of the distance, bringing latency down to 20 to 40 ms, which is practically on par with fiber-optic and DSL speeds.
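These latency figures are easy to sanity-check: signal travel time is simply distance divided by the speed of light. The sketch below assumes an idealized path (user to satellite to ground station and back again, with purely vertical legs) and ignores processing delays and slant geometry, so it gives lower bounds rather than real-world values.

```python
# Back-of-the-envelope propagation delay for geostationary vs. low Earth orbit.
# Four legs: user -> satellite -> ground station -> satellite -> user,
# all legs idealized as vertical; real links are longer and add processing time.
C_KM_S = 299_792.458  # speed of light in km/s

def round_trip_ms(altitude_km: float, legs: int = 4) -> float:
    return legs * altitude_km / C_KM_S * 1000.0

print(f"GEO (~35,786 km): about {round_trip_ms(35_786):.0f} ms of pure travel time")
print(f"LEO (~550 km):    about {round_trip_ms(550):.0f} ms of pure travel time")
```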
New application possibilities
The Internet via satellite gains a wealth of new uses because of these characteristics. The mega-constellations may be useful for a variety of emerging technologies, as well as the possibility of bringing high-speed Internet to previously unconnected locations. One potential use of this lightning-fast satellite link is the remote operation of automobiles, aerial vehicles, and other robotic companions in real-time. This might make remote monitoring of pipelines, disaster zones, or even agricultural fields and plantations simpler than ever before. In the future, satellite transmissions may potentially be useful for autonomous cars.
Internet connectivity for ships, planes, and moving vehicles could also improve. Until now, such users have had to rely on cellular networks or on older types of satellite links. Mega-constellations, however, may be able to provide broadband connectivity even to moving targets such as ships at sea or aircraft in the air. In the spring of 2021, SpaceX submitted license applications to the U.S. Federal Communications Commission for such mobile uses of its Starlink system.
However, only those who can afford the monthly fees and the price of the satellite dish and router will be able to take advantage of this new world of the Internet. Starlink's beta service costs $99 per month plus an initial equipment charge of $499. Even though Starlink is hardly making a profit on these basic bundles at the moment, they are still likely to be out of reach for many people in developing nations.
Starlink – the pioneer
What is the purpose of SpaceX’s constellation?
In September 2022, SpaceX launched 52 Starlink satellites into space.
Three privately held corporations owned by some of the world’s wealthiest individuals own the most technologically sophisticated mega-constellations. Elon Musk, the creator of Tesla and CEO of SpaceX, is at the helm of the Starlink project. The only satellite network that is up and operating at the moment is his, and it is still in beta.
There are now around 2,200 operational satellites
In 2018, SpaceX tested just two prototype satellites; in May 2019, the company began building up its orbital network. The rollout started slowly, but the company now sends a Falcon 9 rocket carrying 50 to 70 mini-satellites into orbit roughly every two weeks. In September 2021 there were around 1,800 Starlink satellites in orbit, and the number keeps growing; today it is around 2,200.
Most of the Starlink satellites launched so far belong to the mega-constellation's first “shell” of 1,440 satellites, which sits at a height of 550 kilometers in orbits inclined at 53 degrees to the Earth's equator. Further shells with thousands of additional satellites are planned at varying inclinations and altitudes between 540 and 570 kilometers. Under the current approvals, 4,408 satellites will make up the Starlink constellation.
Future plans call for around 7,000 more Starlink satellites to be placed in an even lower orbit, between 335 and 345 kilometers above the earth. They will make use of the unused portion of the communications spectrum that lies between 40 and 52 gigahertz. All systems are expected to be finished by 2027.
Currently, 40 countries have access
Starlink's beta network is already demonstrating how well Internet access via satellite can work. Access was initially restricted to users in the United States, Canada, and the United Kingdom, but since the spring of 2021 several European countries, including Germany, have been given limited access. In 2022, Starlink went live in Japan and India. More than 100,000 users and 500,000 pre-orders have been reported for the network's beta edition.
Initial tests conducted in the USA give an idea of how well the satellite connections perform: Starlink reportedly reached peak download rates of 168 Mbit/s and average download speeds of 97 Mbit/s. Orbital data transfer thus already approaches the 115 Mbit/s average of US broadband connections, and in some places even significantly outperforms it. In the UK, Starlink's average download speed of 108 Mbit/s is nearly twice as fast as the typical broadband connection. In terms of uploads, Starlink is reportedly only slightly slower than wired Internet.
However, while Starlink works to enhance the system’s hardware and software, beta customers are still forced to deal with sporadic satellite connection failures. Transmission rates are anticipated to rise even more as the number of satellites in orbit rises: “Data rates, latency, and network availability will improve considerably as we launch more satellites, deploy more ground stations, and upgrade our network software,” according to a Starlink announcement.
Data congestion in metropolitan areas
There is a caveat, though: satellite Internet becomes slower the more people use it in a single area. As SpaceX CEO Elon Musk put it in a tweet, “Starlink is built for low to medium population densities,” which means the maximum user capacity in some regions could quickly be reached. In practice, sparsely populated areas are where satellite constellations can play to their strengths, while fiber-optic providers are likely to keep the lead in urban regions.
The rivals
Besides promising faster Internet and better data transfer, mega-constellations in Earth orbit could turn out to be a gold mine for their operators, and whoever wins here may gain a decisive lead over the competition. A fierce race is under way over who can expand orbital Internet capacity the fastest, and state players such as China and the European Union are getting ready to join in.
OneWeb: Getting closer
To get at least some of its remaining low-Earth-orbit (LEO) broadband satellites into space, OneWeb has booked launches on India's biggest launch vehicle.
With its Starlink constellation, SpaceX was the first to launch, but two rivals are right behind it. The closest is OneWeb, a company created specifically for satellite communications. It sent its first six test satellites into orbit in February 2019 aboard a Soyuz rocket. However, OneWeb only narrowly avoided a pandemic-related bankruptcy in the summer of 2020 and regained the necessary funding only after Bharti Airtel and the British government stepped in. The satellite hardware is co-manufactured by the European aerospace group Airbus.
Some 358 OneWeb spacecraft, more than half of the 648 satellites planned for the first expansion stage, are already in orbit, and the constellation is expected to be completed and put into service in 2022. According to billionaire businessman Sunil Mittal, who heads Bharti Enterprises, the OneWeb constellation was to cover the whole planet by May or June 2022. OneWeb has applied for permission to launch 6,372 satellites in total, and further expansion phases of the mega-constellation are already planned.
Yet a different target audience
Unlike Starlink, the OneWeb satellites orbit at a height of 1,200 kilometers, in near-polar orbits inclined at 98 degrees to the equator. As a result, fewer satellites are needed for comprehensive coverage, and even high latitudes can be reached. By the end of 2021 there were enough satellites in orbit to provide Internet service north of 50 degrees latitude.
OneWeb partner Airbus said in July 2021 that “this will allow coverage of Northern Europe, the UK, Canada, Alaska, Greenland, Iceland, and the Arctic Ocean.” This would be beneficial for the region’s ships and planes, in addition to the nations there. However, since the signals must go a little farther than with Starlink satellites, which soar just half as high, latency may be a little greater.
OneWeb is not designed to give direct access to individual users; rather, it targets organizations such as enterprises, telecommunications companies, governmental bodies, or whole towns as clients.
Amazon is the third and currently least developed commercial satellite-Internet rival. As part of “Project Kuiper,” the company intends to place a mega-constellation of 3,236 satellites in Earth orbit, half of which should be flying by 2026. Similar to Starlink, the constellation will be spread across three shells at altitudes of 560 to 630 kilometers and will transmit data mostly in the Ka-band.
However, Amazon is currently far behind its rivals. While Amazon is still hiring experts for the project, Starlink already has the beta version of its satellite Internet up and running. According to reports made public in July 2021, Amazon also took over staff from Facebook's satellite Internet team for the project.
Better synergy effects
However, the Amazon constellation has one advantage that might compensate for this weakness: unlike its rivals, Amazon can seamlessly combine Project Kuiper's transmission services with its existing Internet offerings, most notably its Amazon Web Services (AWS) cloud. “Data may be sent from point A to point B by SpaceX. However, Amazon is able to provide data to customers and its cloud services through the satellite network,” says Zac Manchester of Stanford University.
For many Amazon cloud customers, this would mean being able to reach their data quickly over a fast broadband link wherever they are, while continuing to offload computationally intensive, storage-hungry applications to the cloud as before.
Government initiatives
Even while commercial businesses are the most advanced in mega-constellations, state players have also woken up in the meantime. They don’t want to take the chance of being excluded from the lucrative and maybe crucial satellite Internet market in the future.
China: 13,000 satellites requested
Launched in China in August 2021, this Long March 2 launcher had two test satellites on board for a future Chinese mega-constellation in addition to additional payloads. China Aerospace Science and Technology Corporation (CASC)
The first state player to get started is China. Several reports indicate that the country began work on its own satellite constellation in 2018. According to Bai Weimin of the China Aerospace Science and Technology Corporation (CASC), among others, the first prototypes have already been built and tested in orbit. “We are planning and developing Internet satellites and have already launched test satellites,” he said in an interview with Shanghai Security News. Satellite Internet is also officially part of the five-year plan for 2021 to 2026 drawn up by President Xi Jinping and his administration.
According to a proposal submitted to the International Telecommunication Union (ITU) in September 2020, the Chinese mega-constellation would comprise up to 13,000 satellites spread across a variety of orbits between 500 and 1,145 kilometers in altitude. A national network company has reportedly been founded expressly to build these satellites, and the first wave of 60 test satellites was to be launched by 2022 at the latest.
It is not yet known whether the Chinese satellite Internet will be available worldwide or will serve only Chinese citizens. “The domestic market seems to be the present emphasis. However, similar to previous technologies, such as high-speed rail, it is also plausible that China would first work out any flaws domestically before marketing the service internationally,” American analyst Bhavya Lal stated in October.
EU: A feasibility study is in progress
The European Union arrived relatively late to the party. Only in 2020 did it commission a group of aerospace, telecommunications, and satellite-manufacturing companies to study the feasibility of, and the need for, an orbital communications system. According to consortium member Airbus, the project “will examine how a space-based system may complement and link essential infrastructures.” The benefits of connecting cloud services will also be examined. Preliminary findings of the review are expected by the end of 2021.
“We can observe that certain constellations are still in the process of formation. However, they are not European, which might present a problem for European member states as we consider safe connection inside and beyond the continent,” said Dominic Hayes of the Space and Defense Division of the EU Commission. The goal is to prevent future reliance on commercial suppliers by European governments, agencies, and even military users.
Hoping to benefit from a “late birth”
“We don't have the benefit of being first to market,” Hayes admits. But Europe could benefit from the fact that certain components and technologies are being developed by the commercial pioneers and may later be manufactured more cheaply. The EU expert points to the Starlink system's receivers as an example: so far, SpaceX has been taking a loss by selling its antennas and routers below cost. In the future, mass production could make such receivers substantially less expensive.
Who will be engaged in building and developing a European satellite constellation if that option is chosen? This is a valid concern. Along with the businesses already participating in the feasibility study, another potential applicant would have been the French satellite operator Eutelsat. But in April 2021, the latter made a $550 million investment in OneWeb, making it a direct rival to a proposed European constellation.
EU Internal Market Commissioner Thierry Breton said, “I don’t understand how one participant can be participating in two rival initiatives.” According to him, the new EU structure is critically necessary for the bloc’s autonomy, sovereignty, and future. Breton said, “We won’t give up on this.”
Space debris and collisions
The drawback of massive constellations
Earth’s orbit is becoming increasingly crowded: If all the mega-constellations planned so far come to pass, there may eventually be as many as 100,000 satellites orbiting the planet, which is significantly more than the roughly 2,500 “normal” military and civilian satellites that have been in place up to this point. Astronomers and space agencies alike have serious worries about this in a number of ways.
Near-collision averted
ESA’s Aeolus satellite had to dodge a Starlink satellite in 2019. ESA
How legitimate such worries are became clear on September 2, 2019, when the European Space Agency (ESA) had to perform an evasive maneuver with its Aeolus Earth-observation satellite because a collision with a Starlink mini-satellite was imminent. Aeolus fired its thrusters, raising its orbit by 350 meters and averting the collision at the last moment.
The Starlink satellites are actually designed to dodge other objects automatically, but a flaw prevented that from happening in this case, so ESA had to react. The maneuver did prevent damage to both satellites and kept further space junk and collision debris out of Earth's orbit. However, such evasive actions consume fuel and only work when the threat is identified in time. Each ESA satellite in low Earth orbit already receives two collision warnings per week.
Such warnings also raise the question of when a costly and time-consuming evasive maneuver becomes essential. If operators reacted to every collision probability of 1 in 10,000, such maneuvers would become routine; but waiting until the risk reaches 1 in 50 makes an impact quite likely, as Hugh Lewis of the University of Southampton recently explained. Monitoring systems are currently not precise enough to distinguish in advance between an actual collision and a near miss.
Who has to dodge?
ESA satellite Aeolus and a Starlink satellite on a collision course on September 2nd, 2019. ESA
Another issue is that there are as yet no defined traffic rules in orbit: it is not immediately clear who has to give way, and there are no automatic communication procedures between the operators involved. Holger Krag, head of ESA's Space Safety Programme, sees a pressing need to catch up in this area. After all, the more congested Earth's orbit becomes, the greater the danger of collision, and every impact sends an avalanche of new debris racing around the planet.
Future satellite and space-mission launches would also benefit from such coordination: the more satellites there are in orbit, the greater the chance that a newly launched rocket will strike one of them. As the ESA specialist put it, “the spacecraft operators need to come together to develop automated maneuver coordination.”
In the legal gray area
A ring of debris from the Chinese satellite Fengyun-1C a month after it was shot down by a Chinese missile. NASA Orbital Debris Program Office
How satellite-constellation operators should deal with malfunctioning or completely failed satellites in orbit is also still a matter of legal debate. According to Corinne Baudouin and her colleagues at the University of Paris-Saclay, there is currently no internationally enforceable law regulating the disposal of defunct hardware in orbit. “SpaceX is not doing anything that violates the rules, because there simply aren't any yet,” the researchers say.
True, several nations and space agencies have agreed on non-binding guidelines. These provide, among other things, for sharing data on satellite positions, potential collision hazards, and impending close approaches. They also call for inoperative satellites to be removed from low Earth orbit within a certain time frame, for example by letting them re-enter and burn up in the atmosphere.
Simple goodwill is unlikely to be enough, however, as Baudouin and her colleagues point out. “Even if an operator does not abide by the regulations, there is a possibility of producing new space debris,” they write. The most egregious example of disregard for all such rules came from China in 2007, when it used a medium-range missile to shoot down one of its defunct weather satellites. About 40,000 additional pieces of junk are now in orbit as a consequence.
What is the rate of failure?
So far, the only requirements imposed on mega-constellations concern satellite failure rates. The U.S. Federal Communications Commission (FCC), for instance, requires operators to report every six months on the number of satellite failures, close flybys, and evasive maneuvers. A supplemental report is required if there are more than three or four satellite failures in a calendar year.
However, the information on Starlink's satellite failure rate is contradictory: SpaceX currently puts it at no more than 1.45 percent, yet five percent of the constellation's first generation has already failed. In light of the heightened collision risk posed by dead mini-satellites, the FCC is reportedly already debating stricter requirements, including limiting the failure risk to a maximum of 0.1 percent. What this would mean for the satellites already in orbit remains unclear.
Light pollution and large constellations
Streaks of light from 19 Starlink satellites captured by the four-meter telescope at the Cerro Tololo Inter-American Observatory.
Astronomy may see dramatic changes in the future if thousands of mini-satellites are in low Earth orbit. This is due to the fact that, in telescopic photos, the satellite reflections look like unsettlingly brilliant points of light. Additionally, the night is already becoming noticeably brighter due to flare or stray light from artificial Earth satellites.
Luminous spots in the sky
As chains of dazzling points of light, the first 60 Starlink satellites from SpaceX that were launched in May 2019 immediately generated a sensation around the globe. Due to their low height, sunlight reflected off of their flat, glossy metal surfaces was easily seen. When the Earth’s surface is already completely black but the satellites are still being lighted by the sun, the brilliant reflections mostly happen soon before dawn and immediately after sunset.
This worried astronomers: tens of thousands of such satellites are expected in orbit within the next few years, raising the risk of negative effects on astronomy conducted both from the ground and from space. The American Astronomical Society (AAS) issued a warning to that effect.
Under bad circumstances, even the reflections of regular satellites may obstruct or even make telescope photos useless, and the danger rises proportionately for constellations.
Up to 40 percent unusable recordings
Olivier Hainaut and his colleagues at the European Southern Observatory (ESO) explain that a strong satellite reflection “might overwhelm the detector and destroy the whole picture with a big, powerful telescope.” The researchers predict that reflections from satellite constellations could render 30 to 40 percent of the wide-angle images taken in the first and last hours of the night with such a telescope unusable.
For telescopes with narrow fields of view, or for spectroscopic studies in the visible and near-infrared, the share of spoiled observations is estimated at less than one percent. Even so, the researchers consider it essential that satellite operators, astronomers, and government bodies agree on how to limit the unintended consequences for astronomy.
One precaution is to avoid glossy surfaces on the satellites, something SpaceX is now testing on a number of its Starlink satellites. Astronomers also recommend managing the satellites' position and orientation so that flashes of light directed at Earth, for example from their solar panels, are minimized.
More orbital flare
Around 90 minutes before dawn, the night sky above the European Southern Observatory’s (ESO) Very Large Telescope in Chile. Future constellation satellites that might be observable as interfering objects in astronomical observations are indicated by the green dots. ESO/ Y. Beletsky, L. Calçada
This is crucial because, as recently discovered by Miroslav Kocifaj of Comenius University in Bratislava and his colleagues, satellite constellations may affect nighttime light pollution and astronomical imagery in ways other than the direct interference effects of bright light spots. They used a model to determine how much diffuse dispersed light is already produced by reflections from the tens of thousands of bigger space debris objects in orbit and the almost 3,400 operational satellites.
As a consequence, the night is illuminated by the orbital debris and active satellites alone to the tune of 16 to 20 microcandela per square meter. The researchers explain that this amount of light is above the critical threshold that the International Astronomical Union has designated as an acceptable upper limit for light pollution at astronomical sites. This amount of light corresponds to about 10% of the natural brightness of the night.
In other words, even at remote locations, this orbital glow already brightens the night, and it could get worse once tens of thousands more satellites, with their solar panels, are deployed.
Drawing its energy from vibrations in the water, a tiny camera developed by scientists in the United States can take photographs underwater without recharging or any other maintenance. The device manages this thanks to piezo elements, which convert the energy of sound vibrations in the water into electricity, and to its low power consumption compared with conventional cameras. It transmits its data passively, by backscattering an incoming sound wave.
Only a very small percentage of the oceans has been surveyed and investigated so far, and even massive censuses like the Census of Marine Life have not changed that much. One contributing factor is the difficulty of deploying large numbers of sensors and cameras in the water without an external power source. To date, such equipment has relied either on batteries, which have a finite lifespan, or on cables from ships, which can supply power only for a limited time.
Potential energy from vibrations
The underwater camera’s construction without a battery or cable. (Afzal et al./Nature Communications, CC-BY 4.0)
But now MIT graduate student Sayed Saad Afzal and his colleagues have developed an underwater camera that needs no external power source to operate. Two technologies work together to make this possible. The first is the use of piezoelectric elements, which convert mechanical vibrations into electricity: the vibrations shift electrical charges within the element, generating a voltage.
A ship's horn, a marine mammal's call, or even a sonar ping sets the water vibrating; when those pressure waves hit the piezoelectric transducer, they produce electrical energy that charges a tiny supercapacitor, and this stored charge powers the camera. Because ordinary color cameras are not very power-efficient, particular attention was paid to this aspect of the design.
Image captured by a monochrome camera sensor
The battery-free camera prototype's first shots. (Afzal et al./Nature Communications, CC-BY 4.0)
The researchers had to be creative to reduce the hardware footprint as much as feasible. Color photos were preferred, but the most cost-effective digital image sensors only create monochrome (black and white) photos. To see anything at all in the dim underwater environment, the camera has to be able to shine a light on its targets, which also demands electricity.
The researchers solved this issue by integrating a black-and-white image sensor with red, green, and blue light-emitting diodes. The sensor takes one picture of an item as each of the three colored LEDs lights up in succession. The three monochrome pictures are distinct from one another because the color elements are absorbed and reflected differently depending on the color of the object. Recombining them using specialized software allows for the recreation of a full-color picture, conceptually analogous to that of an LED television.
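Conceptually, the recombination step amounts to stacking the three exposures into the red, green, and blue channels of a single image. The sketch below illustrates that idea with invented pixel values; it is not the team's actual processing code, which also has to align and calibrate the frames.

```python
# Toy illustration: three monochrome exposures taken under red, green, and
# blue LED light are stacked into one RGB image. The pixel values are invented.
import numpy as np

# Brightness recorded by the monochrome sensor under each LED (values 0..1).
under_red   = np.array([[0.9, 0.1], [0.4, 0.0]])
under_green = np.array([[0.2, 0.8], [0.4, 0.0]])
under_blue  = np.array([[0.1, 0.1], [0.4, 0.9]])

# Stacking the three exposures along a third axis yields an RGB image:
# each pixel gets the red, green, and blue reflectance measured separately.
color_image = np.stack([under_red, under_green, under_blue], axis=-1)

print(color_image.shape)   # (2, 2, 3) -> height x width x RGB
print(color_image[0, 0])   # a reddish pixel: [0.9, 0.2, 0.1]
```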
Backscattering is used to send information
Getting the data from the underwater camera to the ocean surface was another obstacle. The MIT team employed a method already used in battery-free mobile phones and LED billboards: backscatter technology. Rather than actively generating radio waves or other signals to carry the data, the camera encodes its data by absorbing or reflecting an acoustic signal aimed at it.
A receiver, which can be a buoy floating on the water's surface, sends an acoustic signal down to the camera in the depths. The camera's piezoelectric module imprints the zeroes and ones of the digital image data onto this signal by reflecting it back for a 0 and absorbing it for a 1. The reflected signal is picked up by the buoy's underwater microphone and decoded.
A single switch is all that’s needed to toggle between absorption and reflection in this setup. The underwater camera without battery or cable consumes just one-hundred-thousandth of the power required by conventional submerged communication systems.
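The principle can be illustrated with a toy simulation: the camera only toggles between reflecting and absorbing the buoy's pings, and the buoy recovers the bits by measuring echo strength. The 0 = reflect / 1 = absorb convention follows the description above; the signal levels, noise, and threshold are invented for illustration.

```python
# Toy simulation of acoustic backscatter communication.
import random

def camera_backscatter(bits):
    """For each bit, either reflect the incoming ping (strong echo) or absorb it
    (weak echo)."""
    echoes = []
    for bit in bits:
        level = 0.1 if bit == 1 else 1.0                    # absorb -> faint, reflect -> strong
        echoes.append(level + random.uniform(-0.05, 0.05))  # a little noise
    return echoes

def buoy_decode(echoes, threshold=0.5):
    """The buoy's hydrophone recovers the bits by thresholding echo strength."""
    return [0 if level > threshold else 1 for level in echoes]

message = [1, 0, 1, 1, 0, 0, 1, 0]
received = buoy_decode(camera_backscatter(message))
print(received == message)  # True: data arrives without any transmitter on the camera
```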
Successful results from the first round of testing
In initial field tests, the scientists used the new battery-free camera to document plastic debris lying on the bottom of a pond. It also captured high-resolution photographs of a starfish and recorded the growth of the aquatic plant Aponogeton ulvaceus over the course of a week. All of these tests were carried out with the prototype camera fully underwater, operating independently, without a battery or power cord.
Researchers think that autonomous and low-cost underwater cameras will open up new avenues for studying the ocean. In addition to monitoring fish in aquaculture, they might be used to investigate marine pollution and look at uncommon species. The researchers are already working on increasing the battery-less camera’s storage capacity and range (which is now just 130 feet or 40 meters) so that it can be used in such applications. Source: Nature Communications, 2022; doi: 10.1038/s41467-022-33223-x.
Sunglasses are more than a fashion accessory: they protect the wearer's eyes from the potentially harmful effects of ultraviolet (UV) rays and help the wearer see better in bright sunlight. Certain models have lenses that darken automatically in response to sunlight. But how do these glasses manage to tint themselves on their own?
Bright light is not always pleasant, whether in the blazing heat of summer or in a snow-covered winter landscape. Too much sunlight makes it hard to see clearly and forces you to squint involuntarily to keep the rays out. The sun's ultraviolet (UV) rays are also dangerous: the cornea can get sunburned just like any other part of the body, and in the worst case the damage, up to and including blindness, can be permanent. A quality pair of sunglasses with UV protection is the most effective line of defense. Sunglasses that darken automatically in bright sunlight are an excellent choice for changing light conditions.
Similar to the idea behind black and white photographs
Amazingly, self-tinting lenses rely on the same chemical process that takes place when a conventional black-and-white photograph is exposed. The lens contains silver bromide or silver chloride, both of which are unstable under ultraviolet light and decompose. The resulting silver forms tiny crystals which, depending on the compound used and how finely the silver is dispersed, give the glass a gray, brown, or almost black tint.
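In simplified form, using silver chloride as the example and leaving out the additives that real photochromic glass typically contains to assist the reverse step, the reaction can be sketched as follows:

```latex
% Simplified, schematic form of the photochromic reaction in the lens.
\[
\mathrm{AgCl}
\;\overset{\text{UV light}}{\underset{\text{darkness / warmth}}{\rightleftharpoons}}\;
\mathrm{Ag} + \mathrm{Cl}
\]
```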
Unlike in a photograph, the reaction is not fixed once the lens has been exposed: remove the light, and the process runs in reverse. This lightening is slower than the darkening, although warmth, such as a bath in warm water, speeds it up considerably.
The intricacy of the smart glasses design
So-called “smart glass,” on the other hand, works differently: an electrical voltage applied to the glass makes it darker, brighter, or even completely opaque, depending on the desired degree of transparency. The same method can also change the color of a glass pane. These panes operate much like the liquid crystal displays in televisions or smartphones: when a voltage is applied, the crystals align in the electric field, absorb more light, and darken the pane.
The significant advantage of such glass is that it can be controlled electrically and therefore used in any lighting conditions. For sunglasses, however, the technology is far too complex; its potential lies in other products, such as window panes.
Even with sunglasses, it is never a good idea to look directly at the sun, not even during a total solar eclipse; special solar filters and eclipse-viewing glasses are made for that purpose.