Category: History

Witness the transformation of human societies across time and interpret their past, shedding light on the most prominent events.

  • 11 Facts About Islam

    In Arabic, Islam means “submission to God,” and a Muslim is a person “devoted to God.” Like Jews and Christians, Muslims are monotheists. They believe there is only one God who created the world and everything in it. In Arabic, God is called Allah, and this is also how Arab Christians refer to God. According to Muslim tradition, God has 99 names—here are just a few: The Merciful, The Compassionate, The King, The Holy, The Peaceful, The Faithful, The Protector, The Great, The Mighty, The Exalted, The Creator, The Maker, The Shaper, The Wise.

    Muslims believe in angels and demons, whom God created before humans, and in prophets, most of whom are identified with biblical figures (Ibrahim is Abraham, Musa is Moses, and Isa is Jesus Christ). Like Jews and Christians, they also await the coming of a Messiah (the Mahdi) at the end of time. Islam is the fastest-growing religion in the world: today it has more than one and a half billion followers globally.


    What Is the Quran?

    Koran manuscript, written in Persian and Arabic, c. 1860s. Image: auctionet

    “For Muslims, God became a book”—this statement perfectly captures the status of the Holy Scripture in Islam. In Arabic, al-Quran means “recitation aloud,” because this is how the Prophet Muhammad received revelations from God. One day, someone with a scroll appeared to him in a dream and commanded, “Read!” Muhammad, who was illiterate, replied that he could not read, but the stranger continued to insist, squeezing the Prophet’s chest.

    Frightened, Muhammad asked what exactly he should read, and then the first words of the Quran were spoken: “Read in the name of your Lord.” The mysterious stranger who appeared to Muhammad is usually identified with the archangel Gabriel. According to Muslim tradition, divine revelation to Muhammad began on the 27th of the month of Ramadan and continued for 22 years, being fully recorded only after the Prophet’s death.

    The Quran states that Muhammad received the revelation in Arabic, hence the belief that only the Arabic original is the Holy Text, and any translations are just interpretations. The Quran is divided into chapters—Surahs, which in turn are divided into verses—Ayahs. The sequence of the Surahs in the Quran, with a few exceptions, is determined by their length—long ones at the beginning and shorter ones at the end. The longer Surahs are considered later and were received by Muhammad after his migration from Mecca to Medina.

    Who Is Muhammad?

    Prophet Muhammad on Mount Hira. Miniature from the Siyer-i Nebi manuscript, c. 1595. Image: Bilkent University, Public Domain

    Muhammad is a key figure in the history of Islam, the last prophet, or the “seal of the prophets.” When mentioning his name, Muslims always add the phrase “May God bless him and grant him peace.” The main source for reconstructing Muhammad’s biography is the Muslim tradition—Sunnah (from Arabic, meaning “example” or “model”), which is a collection of stories—Hadiths—about the words and deeds of the Prophet. There are several authoritative collections of Hadiths (from the 8th to 11th centuries), whose compilers carefully studied the history of the oral transmission of all Hadiths: the chain of transmitters (from a companion of Muhammad onwards) had to be uninterrupted, and their biographies impeccable.

    What do we know about Muhammad? He was born in 570 in Mecca, a city in the west of the Arabian Peninsula, where the main Arabian shrine, the Kaaba, was located. Apart from the main local shrine—the black stone (possibly of meteoric origin) embedded in the corner of the Kaaba—gods of various tribes were revered there: the Kaaba was an intertribal cult center. Muhammad was orphaned early, and his upbringing was taken over by his uncle, Abu Talib. Muhammad’s first wife was a wealthy widow named Khadijah. She and Muhammad’s cousin Ali were the first to recognize him as a prophet and accept Islam.

    Around 610, Muhammad began his preaching, urging Arabs to reject false gods and worship the one true God. He reminded them that God had sent prophets—Abraham, Moses, Jesus—to other peoples as well, and those who rejected them faced severe punishment. However, his preaching was not successful, and after 12 years, the Prophet, along with his few supporters, had to leave Mecca.

    He then settled in the city of Yathrib, where he already had more followers. This migration (Arabic: Hijra) marks the beginning of the Islamic calendar (thus, the year 2016 is 1437–1438 in the Hijri calendar), and Yathrib was first renamed the City of the Prophet—Madinat an-Nabi, and later simply the City—Medina.
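
    As a rough aside (our illustration, not part of the original article): because the Hijri year follows the Moon and is about eleven days shorter than the solar year, 33 Hijri years span roughly the same time as 32 Gregorian years, so a Gregorian year converts to the Hijri era as approximately (AD − 622) × 33/32. A minimal sketch in Python, assuming only this rule of thumb:

    ```python
    # Rule-of-thumb Gregorian-to-Hijri year conversion.
    # Real Hijri dates depend on lunar observation and the exact day
    # of the year, so this is an approximation only.
    def approx_hijri_year(gregorian_year: int) -> float:
        # 33 lunar (Hijri) years span roughly the same time as 32 solar years
        return (gregorian_year - 622) * 33 / 32

    print(approx_hijri_year(2016))  # 1437.5625 -> 2016 overlaps AH 1437 and 1438
    ```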

    Muhammad returned to Mecca a few years later, accompanied by a large army as the leader—both religious and political—of a large community. The pagan shrine, the Kaaba, was cleansed of idols and became the main sanctuary of Islam, and it remains so to this day. In 632, having almost unified the entire Arabian Peninsula, Muhammad died without having time to carry out plans for further spreading the new religion. His followers took on this task.

    How Do Shiites Differ From Sunnis?

    Islam is not monolithic; it consists of several branches. The most well-known and numerous branches of Islam are Sunnis (meaning “followers of the Sunnah”—although the Shiites have their own Sunnah) and Shiites (from the Arabic shia, “followers”).

    The schism that gave rise to Sunnism and Shiism occurred shortly after the death of the Prophet Muhammad and was connected with the issue of succession of power. Future Shiites believed that the successor (called Imam in Shiism) should be Muhammad’s cousin and son-in-law, Ali, and that power should be passed strictly by inheritance from him. However, the majority of Muslims—future Sunnis—decided to elect their leader, the Caliph, independently.

    Nevertheless, after the death of the first three elected caliphs, they chose Ali as the next leader, creating a chance to overcome disagreements. But in 661, Ali was assassinated, and the community split permanently. Ali is the only Muslim leader after Muhammad recognized by both Sunnis and Shiites. The former consider him the fourth caliph (the last of the “righteous”), while the latter consider him the first Imam and a saint. The fifth caliph of the future Sunnis was the governor of Syria, Muawiya ibn Abi Sufyan, while the second Imam of the Shiites was Ali’s son and Muhammad’s grandson, Hasan.


    The majority of Shiites recognize twelve Imams. After the mysterious disappearance of the last of them, Muhammad ibn al-Hasan, in 872, the doctrine of the “Hidden Imam” took shape. It holds that Muhammad ibn al-Hasan was the Mahdi—the Messiah—and will return at the end of time. The modern spiritual leaders of the Shiites are considered deputies of the Hidden Imam. Shiites make up about 10% of all Muslims worldwide and are predominant in Iran, Iraq, Azerbaijan, and Bahrain.

    Sunnis, who are opponents of the Shiites, make up just under 90% of all Muslims. The last Sunni caliph was Hussein ibn Ali al-Hashimi, King of Hejaz, who claimed the title in 1924 after the Ottoman caliphate was abolished. He proved, almost literally, a “caliph for an hour”: he was overthrown later that same year.

    Within both Shiism and Sunnism, there are different schools. Moreover, in Islam, there is a special mystical trend—Sufism. Sufis, who strive to come closer to God through special spiritual practices, can be either Sunnis or Shiites. The name Sufism likely comes from the Arabic word “suf”—wool, from which clothes worn by Muslim mystic ascetics—Sufis—were made. In Sufism, the role of a mentor—Sheikh—is very important, and his students—Murids—follow his example and guidance.

    Sufi orders—Tariqas—developed in the 11th–12th centuries, each with its own practices and distinguishing signs. Around the same time, a distinctive Sufi philosophy took shape, finding expression in, among other works, the poetry of Omar Khayyam and Umar ibn al-Farid. The outstanding Sufi thinker of the 13th century, Ibn Arabi, argued in his book “The Meccan Revelations” that all religions contain an element of faith in the One God, but only a Sufi worships God in His entirety.

    What Are the Duties of Muslims?

    The five fundamental principles, or pillars, of Islam trace back to the Prophet Muhammad and are known from the Islamic tradition — the Sunnah. They are called pillars because they form the foundation of Islam.

    • The first pillar is Shahada (Arabic: “testimony”). “There is no deity but God, and Muhammad is the messenger of God” (Shia Muslims add “and Ali is the friend of God”). Uttering these words in the presence of two Muslim men is the primary condition for accepting Islam.
    • The second pillar is Salat, the five daily prayers that must be performed facing Mecca. Prayers can be performed individually or collectively. On Fridays, it is customary to perform the afternoon prayer in a mosque.
    • The third pillar is Zakat, almsgiving, which adult Muslims with sufficient means must pay to help the needy.
    • The fourth pillar is Sawm, fasting during the month of Ramadan, during which eating and drinking during daylight hours are prohibited.
    • The fifth pillar is Hajj, the pilgrimage to Mecca and the Kaaba, which every Muslim must undertake at least once in their lifetime if they are physically and financially able.

    Another duty for a Muslim is Jihad (Arabic: “effort”), which does not necessarily mean a holy war for the spread of Islam but can also refer to any struggle for faith, such as against one’s own sins.

    What Are Sharia and Adat?

    Sharia — Islamic law based on the Quran and Sunnah — encompasses all aspects of Muslim life: family and criminal law, doctrine, religious rituals, and ethics. Islam does not distinguish between secular and religious spheres of life, so the legal system of most Muslim countries is based on Sharia.

    In some Muslim communities, there are also pre-Islamic local customs — Adat. Sometimes these customs — such as blood feuds, female circumcision, or bride kidnapping — contradict Sharia norms. Islamic scholars have fought against Adat, but this struggle has often been unsuccessful, and the norms of Adat and Sharia have influenced each other. Some forms of Adat have disappeared over time, but others continue to play a significant role in various Muslim regions, particularly in the Caucasus.

    How Is a Mosque Organized?

    Mosque buildings are architecturally diverse, but there are essential elements: typically, every mosque has one or more minarets — towers from which the call to prayer is announced five times a day. During the Islamic conquests of Christian countries, Christian churches were converted into mosques by adding minarets (consider, for example, the Hagia Sophia, converted into the Aya Sofya Mosque in Istanbul).

    Inside, the prayer hall of a mosque always contains a special niche in the wall — the mihrab — which indicates the direction of Mecca for worshippers. Believers must enter the mosque after performing ablution and removing their shoes. Women and men pray separately. The duties of a religious leader are performed by a mullah (Arabic: “master”).

    Are Images Forbidden in Islam?

    It is a widespread belief that Islam prohibits images of people and animals. However, there is no such prohibition in the Quran itself. In the hadiths, it is said that angels do not enter a house where such images are present, and God considers it a sin for someone to create images, akin to trying to emulate the Almighty: “Indeed, those who create images will suffer torment on the Day of Judgment. They will be told, ‘Bring to life what you have created.’” In one Muslim parable, the sage Ibn Abbas was asked, “Can I draw animals?” He replied, “You can, but remove their heads so that they do not resemble living beings, or try to make them look like flowers.”

    In some Islamic regions, such as Iran and Central Asia, people and animals have traditionally been depicted in book miniatures illustrating famous poetic works or historical texts. A striking example is Persian miniatures, where images of not only ordinary people but even the Prophet Muhammad can be seen. These miniatures were considered sufficiently abstract not to contradict the ban on realistic images.

    In Arab countries, where the prohibition on images was generally interpreted more strictly, calligraphy flourished — calligraphic images adorn mosque walls and book pages. Nonetheless, a few illustrated Arabic manuscripts depicting living beings have survived to this day.

    Regarding photographs, most Muslim theologians permit photography when necessary — for example, for a passport. There is no unified opinion on amateur photography, but even liberal religious authorities recommend keeping family photos in albums rather than displaying them.

    What Is Halal?

    In a broad sense, halal refers to everything permissible for Muslims; its antonym is haram, “forbidden.” In everyday speech, however, “halal” usually refers to permitted food. Prohibited foods include pork, as well as the meat of otherwise permissible animals (cattle, sheep, and so on) that were not slaughtered according to the rules (for example, strangled, beaten to death, or killed without the name of God being pronounced) or that died of natural causes.

    In Muslim communities belonging to different legal schools, some rules regarding halal food may differ. Additionally, Muslims are forbidden to consume alcohol. In cases of life-threatening hunger, consuming forbidden foods is allowed, but only in the minimal quantity necessary to sustain life.

    How Do Muslim Women Dress?

    It is easy to notice that the degree of coverage in women’s clothing varies significantly in different Muslim regions. Paranja, burqa, and niqab are designed to completely conceal a woman’s body and face from prying eyes, while the hijab leaves the face uncovered. The tradition of wearing a particular type of clothing is more influenced by local customs than by the prescriptions of the Quran, which are quite general.

    For example, in the 24th surah of the Quran (“The Light”), it reads: “Tell believing women to lower their gaze and maintain their modesty. And not to display their beauty; not to dress up or beautify themselves to attract the attention of unrelated men, except for what is obvious. And let them draw their veils over their chests. Let them not display their beauty except to their husbands, relatives, servants, or small children.”

    Burkinis, the swimwear that recently caused an uproar in France, were developed in the 2000s by Aheda Zanetti, an Australian designer of Lebanese origin, who began producing sportswear for Muslim women under her own brand in 2004. According to Zanetti, burkinis are popular not only among Muslim women but also among followers of Judaism, Hinduism, and even some Christian denominations.

    What Holidays Do Muslims Celebrate?

    Muslims use a lunar calendar, so the dates of holidays shift each year (moving about eleven days earlier) relative to the commonly accepted Gregorian calendar. One of the main holidays is the Feast of Sacrifice, known in Russia by its Turkic name, Kurban Bayramı. On this day, Muslims sacrifice animals in memory of the prophet Ibrahim (Abraham) and his willingness to sacrifice his own son to God.

    The Feast of Breaking the Fast (in Turkic, Ramazan Bayramı or Şeker Bayramı) marks the end of the fasting month of Ramadan. Muslims believe that on the 27th night of Ramadan, known as the Night of Destiny, God decides the fate of every person. It is believed that it was on this night that Muhammad received the first divine revelation. In addition to the general Islamic holidays, Shiites also celebrate the birthday of Imam Ali and some other dates.

    How Do Muslims View Other Religions?

    In the Muslim tradition, all people are divided into the faithful (Muslims); People of the Book, who should not be forcibly converted to Islam; and polytheists, who must be converted. The People of the Book originally included Jews and Christians, because they too had been given Revelation (although, according to Muslim tradition, they later distorted it).

    Later, during the Islamic conquests, the status of “dhimmi” (Arabic: “protected people”) was also granted to Mandaeans, Zoroastrians, Hindus, Buddhists, Sikhs, and others. The Muslim government allowed the dhimmi to freely practice their faith on the condition of paying a special tax and maintaining loyalty to Muslims.

    Over time, the list of dhimmi obligations expanded: they were required to wear distinctive clothing, were prohibited from raising pigs, holding noisy religious processions, and so on. Some restrictions could be temporary—introduced and lifted in different regions. Today, in most Muslim-majority countries, people of different religions are equal before the law.

  • A Brief History of Armament

    Throughout all ages, war has been a complex and costly endeavor. The outcome and characteristics of confrontations between organized groups of armed people to resolve issues of power, territory, and resources have always depended on the means and skills they possessed. As a result, technology development, social organization, and knowledge of the surrounding world have always gone hand in hand with war and directly influenced its nature.

    18th–15th Centuries BC: Invention of the Chariot

    Tutankhamun on a chariot. Image: Egyptian Museum, Cairo

    From the beginnings of bronze smelting, producing sturdy wood-and-metal chariots that could be maneuvered easily in battle was a significant technical achievement of its time, and it required large amounts of metal. Maintaining these combat units, complete with horses and two-man crews, was expensive as well.

    For this reason, war during the Bronze Age was a luxury affordable only to prosperous centers of civilization, such as Egypt. Chariots played a crucial role in the rise and fall of early states in the Middle East; it was difficult to counter fast-moving, well-protected chariots from which streams of arrows rained down on enemies.

    Interestingly, in the “Iliad,” which provides a detailed account of Bronze Age warfare, heroes use chariots not in battle but only to reach the battlefield or return to camp quickly. Yet this, too, is an indicator of the chariot’s significance: even where, for some reason, chariots were not used to their full potential, they served as a recognized attribute of power and prestige, and kings and heroes went to war on them.

    Armor Production

    In the same “Iliad,” the “helmet-flashing” heroes, clad in armor and armed with heavy spears with bronze tips, are the rulers of individual lands. Armor was so rare that the making of some suits was attributed to the gods, and after killing an opponent, the victor would first of all try to seize his armor, a rare and precious prize.

    Hector, leading the Trojan army, after killing Patroclus, who wore Achilles’ armor, leaves the army in the middle of the battle and returns to Troy to don the unique armor. The rulers of the Mycenaean civilization, in whose era the events of Homer’s poems are set, largely secured their power over their lands through possession of rare and expensive but, for their time, extremely effective weapons and armor.

    13th Century BC: Mastery of Iron

    The gradual spread of iron ore processing technology across the Near East and Southern Europe starting from around the 13th century BC led to iron competing with bronze as a relatively cheaper and much more abundant metal. It became possible to arm a much larger number of warriors with metal weapons and armor.

    The reduction in the cost of warfare, combined with the use of metal weapons, led to significant changes in the “geopolitics” of the Ancient world: new tribes equipped with iron weapons crushed the aristocratic states of chariot owners and bronze armor wearers. Many states in the Middle East perished this way; Achaean Greece was conquered by Dorian tribes. Thus, the Kingdom of Israel rose to prominence, and simultaneously, the Assyrian Empire became the most powerful entity in the Near East during the early Iron Age.

    10th Century BC: Warriors Ride Horses

    Mongol horsemen attacking each other. From the “Jami at-tavarikh” (“Compendium of Chronicles”) of Rashid ad-Din Fazlullah Hamadani, first quarter of the 14th century. State Library, Berlin

    Before the invention of bridles and saddles, riding a horse or other hoofed animal was a precarious business that demanded constant attention to balance, making the rider practically useless in combat. Once riding with a bridle had been mastered, cavalry emerged as a type of military unit (in Assyria, in the 10th century BC) and spread quickly. The primary beneficiaries of the new skill were the Asian nomads, who had previously bred horses for food: horseback riding, which allowed the use of weapons and above all the bow, gave them a new source of military power and let them cover great distances at previously unattainable speeds.

    From around the 8th century BC, a lasting pattern of confrontation developed between the nomadic “steppe” and sedentary agricultural societies. Successive waves of nomads, equipped with the resource of mounted troops, could raid, exact tribute from, or serve the more advanced and wealthier agricultural communities. This pattern remained virtually unchanged for many centuries, until the collapse of the Mongol Empire.

    7th Century BC: The Art of Battle Formations

    Macedonian phalanx. Illustration by Angus McBride

    When it became possible to equip a large number of able-bodied men with armor and heavy weapons, there arose a special need to organize and manage such armed masses. It was during this time that specific types of battle formations, such as the Greek phalanx, emerged. This formation, consisting of dense ranks of heavily armed soldiers arranged in several rows, first appeared in Sparta in the 7th century BC.

    Maintaining such a formation became a key to victory against armies lacking similar organization. Many military metaphors, such as “feeling the elbow,” are believed to originate from the phalanx formation (where a soldier indeed felt the elbows of his neighbors). The victories of Roman legions were also owed to a complex system of formations, allowing for maneuvers and reorganization during battle, and the solid training of fighters who understood the need to maintain formation.

    5th–6th Centuries AD: Invention of the Stirrup

    The Battle of Crécy in 1346. From Jean Froissart’s “Chronicles”

    The stirrup transformed mounted combat. Standing in the stirrups, an archer became much more stable and could aim more accurately. Even greater changes came to cavalry combat techniques that required contact with the enemy: the stirrup turned rider and horse into a single mechanism, allowing the combined mass of the horseman and his mount to be delivered to the opponent along with a lance or sword strike, and turning cavalry into the “living combat machines” of their time.

    In Western Europe during the Middle Ages, this advantage was further developed by heavy cavalry, leading to the emergence of heavily armored knights. A knight, armored and sitting in stirrups, attacking with a heavy lance at full gallop, concentrated an unprecedented amount of power at the tip of the lance during the moment of attack. This led to a new aristocratization of war, as the wielders of such effective and expensive weapons were a narrow layer of feudal lords, shaping the nature of warfare in the Middle Ages.

    12th–15th Centuries: Professionalization of Armies

    The effectiveness of the crossbow as a ranged weapon so amazed medieval society that in 1139, the Second Lateran Council deemed it necessary to prohibit crossbows and bows in wars between Christians. This ban had little effect, especially in the case of the bow. The experience of the Hundred Years’ War between England and France—one of the pivotal medieval conflicts marking the crisis of the classical Middle Ages—showed that units of English archers recruited from peasants and armed with longbows could deal devastating blows to the flower of French knighthood in several key battles.

    The confrontation among Italian cities, local feudal lords, and the Holy Roman Empire led to new forms of resistance against knightly forces: militias armed with long pikes, which, with organized cohesion and skillful use, could stop cavalry charges. The actions of these units (as well as of crossbowmen and archers) required ever greater coordination and mastery of complex weaponry, leading to the gradual professionalization of warfare: the emergence of mercenary companies offering their services, their skill with weapons, and their command of complex combat techniques. Warfare, especially in Italy, gradually became the domain of professional teams. Fierce competition fueled the rise of an arms market: Italian cities offered increasingly advanced models of crossbows, armor, and edged weapons of every kind, from which mercenary units could choose.

    14th Century: Use of Gunpowder and Improvement of Cannons

    The Dardanelles Gun, an Ottoman bombard cast in 1464, soon after the fall of Constantinople in 1453. Image: Gaius Cornelius, Public Domain

    It is believed that gunpowder was invented in China and used in warfare there from the 12th century, though at first, as initially in Europe as well, it served to launch giant arrows. From the 14th century, however, bronze cannons using gunpowder began hurling stone cannonballs. Each of these cannons required tons of metal, and only monarchs could afford their production. Later, with the invention of cast-iron cannonballs, the need for enormous stone-throwing cannons disappeared: metal cannonballs of smaller diameter had a more destructive effect.

    With the invention of the wheeled carriage, allowing cannons to be transported to the required location, artillery became an almost unbeatable force, capable of destroying any stone fortifications within hours. In some sense, it became the “last argument of kings.” Possession of siege cannons was, in most cases, indeed the privilege of centralized monarchies capable of paying for their production and maintenance.

    If the opponent lacked artillery, the outcome of the confrontation was almost predetermined. This factor played a significant role in the expansion of the Muscovite Tsardom to the east and south under Ivan the Terrible; cannons were equally significant during the era of the Great Geographical Discoveries and the establishment of European dominance in various world regions.

    16th Century: Development of Handheld Firearms

    Portable firearms that could be used by infantry revolutionized the combat capabilities of foot soldiers and changed the nature of warfare. However, the firearms of that era were still quite heavy and required time to load and use. Effective use in battle demanded the development of specific methods for coordinating with other units.

    One successful experiment was the formation of the Spanish tercios—a square formation of pikemen that protected musketeers positioned in the center. This tactic turned the Spanish infantry into one of the most formidable forces on the European battlefield for most of the 16th century.

    17th Century: Invention of Drill

    Battle at Nieuwpoort, 1600, with two phases of the battle depicted

    One of the most significant innovations in army management, shaping it into the form we recognize today, came from Maurice of Orange, the ruler of the Netherlands from 1585 to 1625. He was the first to approach military operations as a series of fundamental maneuvers that soldiers must perform. His innovations resulted in the division of the army into smaller units, such as companies and platoons. Each unit had to precisely execute commands for formation and regularly conduct drills on marching and weapon handling—essentially inventing the military drill.

    Soldiers were required to master the movements involved in maneuvering their units, which could be employed in battle. Similarly, Maurice of Orange meticulously developed and clearly described methods of handling the musket, focusing on practicality and efficiency. These innovations created a unique military mechanism.

    Soldiers integrated into this mechanism performed any command with precision and without error, and their automatic movements allowed them to maintain formations even under enemy fire. Like any automation with a well-developed protocol of actions, this led to a shift in the perception of military service. Essentially, Maurice’s system implied that any “human material” could be transformed into a soldier through rigorous drill.

    In the second half of the 17th century, a book expounding Maurice’s system reached Russia, where it sparked the formation of regiments organized on the foreign model and later influenced Peter the Great’s military reforms. The ideal of an army in which the soldier is above all an instrument for executing a commander’s precise orders effectively lasted until the end of the 18th century.

    Mid-19th Century: Industrialization of Warfare

    The French Revolution brought forth mass armies recruited through national conscription. However, these armies, despite changes in management methods and tactics, were still equipped with weapons that had remained nearly unchanged since the 17th century (excluding advancements in artillery, whose range and accuracy had significantly improved during the revolutionary and Napoleonic wars). The fact that Napoleon Bonaparte was ultimately defeated by a coalition of conservative European powers temporarily halted major changes in the armed forces.

    A new impetus for progress was the spread of rifled muskets. Their mass use by the French and British forces that landed in Crimea in 1854, against a Russian army armed mostly with smoothbore muskets, gave the anti-Russian coalition victories in open battle and forced the Russians to retreat to Sevastopol. The Crimean War, in which even a slight lag of the Russian armed forces in adopting newly emerging inventions, such as steam-powered fleets and rifled muskets, proved a critical factor, effectively accelerated the arms race.

    One stage of this race was the rearming of armies with new breech-loading rifles. It was then that firearms were first produced not by hand but on the new milling machines, invented in the United States, which manufactured identical parts. In essence, only after this did firearms become industrial products; previously, gunsmiths had made each musket by hand, fitting every component individually.

    When Colonel Samuel Colt first demonstrated the advantages of machine-manufactured revolvers at the 1851 Great Exhibition in London by disassembling several revolvers, mixing their parts, and then reassembling them, it caused a sensation.

    Artillery advanced similarly. The development of the steel industry allowed the creation of new breech-loading cannons, which demonstrated new destructive capabilities. The fundamental design of artillery pieces, developed in the 1860s–1870s, remains largely unchanged to this day.

    Second Half of the 19th Century: Use of Railways

    Mass armies (often formed by conscription) became a reality of new wars, armed with new types of weapons. The rapid movement and supply of such large forces using traditional horse-drawn transport became an impossible task. Although the first railways began to be built in Europe in the 1830s, their use in war pertains to a later period. One of the first wars where the construction of a railway played a crucial role in its outcome was the Crimean War. The 23-kilometer railway built between the Balaklava base of Anglo-French troops in Crimea and their combat positions in front of besieged Sevastopol solved the problem of supplying the invaders’ positions with ammunition.

    The rapid delivery of supplies and equally swift deployment of large troop contingents transformed perceptions of the speed of army mobilization. Now, within a few weeks, a country with a railway network could switch to a war footing and deploy an army with the necessary resources to the desired front. Europe literally entered World War I on railways that transported troop trains to the borders of the warring states, according to carefully developed mobilization plans.

    Early 20th Century: Invention of World Wars

    The acceleration of technological progress brought new discoveries and inventions into the service of war. Internal combustion engine vehicles, aviation, poisonous gases, and barbed wire—all found military applications during World War I and clearly indicated that future wars would bear little resemblance to the technological understanding of warfare in previous eras.

    During World War II, all these technologies were further developed and refined, becoming even more lethal. The mastery of radar, rocket technology, the first steps in computing, and the advent of nuclear weapons made wars even more complex and brutal. It is still challenging to judge how the latest technological inventions, such as precision weapons, information systems capable of processing vast amounts of data, unmanned aerial vehicles, and other significant technological innovations, will affect warfare.

    It is possible that the changes of recent decades will again turn warfare conducted by technologically advanced nations into the domain of specialists requiring thorough preparation while simultaneously making the weapons used in wars and victories extremely expensive—even for wealthy states.

  • 7 Myths About Antiquity

    300 Spartans Saved Greece

    Perhaps the most famous battle in ancient Greek history is the Battle of Thermopylae, which took place in 480 BCE. During this battle, Spartan King Leonidas and his 300 warriors heroically resisted the attacks of a vast Persian army led by Xerxes, thereby saving Greece from destruction and enslavement. “300 Spartans” and “Thermopylae” have symbolized heroic resistance against overwhelming enemy forces for several centuries. This narrative was most recently dramatized in the blockbuster film “300” by Zack Snyder (2007).

    However, both Herodotus and another ancient Greek historian, Ephorus of Cyme, whose accounts form the primary sources of information about this battle (Ephorus’s version was preserved in a retelling by Diodorus Siculus), described the event somewhat differently. Firstly, the battle was lost—the Greeks only managed to delay Xerxes briefly. In 480 BCE, the Persian king and his allies conquered most of Hellas, and only a month later, in September 480 BCE, the Greeks defeated them at the Battle of Salamis (at sea) and a year later at the Battle of Plataea (on land).

    Secondly, the defenders were not only Spartans: various Greek city-states, including Mantinea, Arcadia, Corinth, Thespiae, and Phocis, also sent troops to the pass, so the initial force resisting the enemy’s first assault numbered 5,000 to 7,000 warriors. Even after Ephialtes, a resident of the nearby city of Trachis, showed the Persians a path by which to encircle the Greeks, and Leonidas dismissed most of the warriors to spare them inevitable death, the remaining force still numbered around a thousand.

    Hoplites from the Boeotian city-states of Thebes and Thespiae chose to stay, as the Persian army was bound to pass through Boeotia (the Peloponnesians—Mantineans, Arcadians, and others—hoped that Xerxes would not reach their peninsula). However, it is possible that the Boeotians acted not out of rational considerations but chose to die a heroic death, just like Leonidas’s warriors.

    So why has the legend only remembered the 300 Spartans, even though ancient historians documented all the members of the Greek army? Perhaps it is because people tend to focus on the main characters and forget the secondary ones. However, modern Greeks have decided to rectify this: in 1997, near the monument to the Spartans (a bronze statue of Leonidas), they erected a monument in honor of the 700 Thespians.

    The Library of Alexandria Was Burned by Barbarians

    The Library of Alexandria was one of the largest libraries in human history, housing between 50,000 and 700,000 volumes. It was founded by the rulers of Egypt during the Hellenistic period in the 3rd century BCE. It is often believed that the library—a symbol of ancient scholarship—was completely destroyed by barbarians who hated classical culture. This idea is depicted, for example, in the 2009 film “Agora,” directed by Alejandro Amenábar and dedicated to the fate of the Alexandrian scholar Hypatia.

    In reality, barbarians were not responsible for the library’s demise, nor did it vanish due to a fire. Some sources (such as Plutarch in “Life of Caesar”) do mention that books were damaged by fire during Caesar’s siege of the city in 48 BCE—but modern historians tend to believe that it was not books but papyrus scrolls near the port (containing accounting records of goods) that were burned.

    The library may have also been damaged during Emperor Aurelian’s conflict with Zenobia, the queen of Palmyra, who seized Egypt from 269 to 274 CE. However, there is no direct evidence of any grand fire that completely destroyed the library.

    Most likely, the Library of Alexandria disappeared due to budget cuts that continued over several centuries. Initially, the Ptolemaic dynasty (which ruled Egypt during the Hellenistic period) ensured significant privileges for the library’s staff and provided the necessary funds for acquiring and copying tens of thousands of scrolls. These privileges were maintained even after the Roman conquest.

    However, during the troubled 3rd century CE, Emperor Caracalla abolished the scholars’ stipends and forbade foreigners to work in the library, turning many of the books into dead weight, unread and of interest to no one. Gradually, the library simply ceased to exist: its books were either destroyed or decayed naturally.

    Modern Democracy Was Invented in Athens

    The system of governance that existed in Athens from around 500 to 321 BCE is considered the world’s first democratic system and is seen as a precursor to the modern political structures of Western countries. However, Athenian democracy had little in common with modern democracy. It was not representative (where citizens’ political decision-making is carried out through elected representatives) but direct: all citizens were required to regularly participate in the work of the Assembly—the highest governing body.

    Additionally, Athens was far from the ideal of full political participation by the entire “people.” Slaves, metics (foreigners and freed slaves), and women, who made up the majority of the population, had no citizen rights and could not participate in state governance. By some estimates, there were three times as many slaves as free citizens in democratic-era Athens.

    In practice, even poor citizens were often excluded from the political process; they could not afford to spend an entire day at Assembly meetings (though there were periods when Athenian citizens were paid for their attendance).

    The word “democracy” (like many other concepts) acquired a new meaning in the late 18th century when the idea of representative democracy emerged in France (the people exercise their power through elected representatives). At the same time, there was a struggle to expand voting rights, and today, most restrictions on voting rights are considered anti-democratic.

    Amazon Women Did Not Exist

    Amazon with barbarian and Greek. Roman copy after a Greek original, detail, c. 160 AD, marble. Galleria Borghese. Image: Public Domain

    Among the Greeks, there were widespread legends about the Amazons—a warrior people consisting solely of women who wielded bows and even cut off one breast to handle the bow more easily. The Amazons met with men from neighboring tribes only for the purpose of conceiving children; boys were either returned or killed.

    For a long time, scholars considered the Amazons to be fictional beings, especially since Greek authors placed them in various distant regions of the inhabited world (Scythia, Anatolia, or Libya). This put them on a par with the monsters and exotic creatures of faraway lands that, for various reasons, differed from “normal” society.

    However, while excavating Scythian burial mounds in the steppes of the Black Sea region, archaeologists discovered graves of female warriors, buried with bows and arrows. Most likely, women who shot bows and rode horses alongside men did not fit into the Greek worldview, prompting the Greeks to set such women apart as a separate people.

    Scythian women indeed could defend themselves—it was necessary when men migrated over long distances—and they may have initiated battles by shooting at opponents from a safe distance. However, they were unlikely to kill their sons, avoid men, or cut off their breasts—military historians are confident that this is entirely unnecessary for accurate archery.

    Ancient Art Is White Stone

    We imagine the Parthenon and ancient statues as white: that is how they have survived to this day, since they were made of white marble.

    However, the actual statues and public buildings were originally colored; the paint simply peeled off over time. The pigments used in these paints were mineral-based (vermilion, red ochre, copper azure, copper green, yellow ochre, etc.), while the binder that “glued” the paint to the surface was organic. Organic materials decompose over time due to bacteria, causing the paint to flake off easily.

    The original appearance of ancient statues could be seen at the traveling exhibition “Gods in Color: Painted Sculpture in Classical Antiquity,” organized by American and German scholars in 2007. Besides showing that the statues were colorful, the exhibition revealed that many of them had bronze inserts and that their eyes featured convex pupils made of black stone.

    Spartans Threw Children into a Chasm

    Young Spartans Exercising, also known as Young Spartans. Painting by Edgar Degas, circa 1860. Image: Wikimedia

    One of the most famous tales about Sparta states that when a boy was born into a Spartan family, he would be taken to the edge of the Apothetai chasm (on the slopes of Mount Taygetus). There, the elders would carefully examine him, and if the boy was sickly or weak, he would be thrown into the chasm. We know this story from Plutarch’s “Life of Lycurgus,” and it remains vivid and popular today, as evidenced in the 2008 parody film “Meet the Spartans.”

    Recently, Greek archaeologists have proven this story to be a myth. They analyzed the bones extracted from the Apothetai gorge and found that the remains belonged only to adults—specifically, forty-six men aged 18 to 55 years. This finding aligns with other ancient sources, which state that the Spartans threw traitors, prisoners, and criminals into the chasm, not children.

    Pandora’s Box

    The myth of Pandora’s box is known to us from Hesiod’s retelling in his poem “Works and Days.” In Greek mythology, Pandora is the first woman on Earth, molded from clay by Hephaestus to bring misfortune to mankind. He did this at Zeus’s request: Zeus wanted to punish humanity, through Pandora, for Prometheus’s theft of fire from the gods.

    Pandora became the wife of Prometheus’ younger brother, Epimetheus. One day, she discovered in their house a container that it was forbidden to open. Driven by curiosity, Pandora opened it, releasing countless misfortunes and calamities into the world. In horror, she tried to close the dangerous container, but it was too late: the evils had already spread across the world, and only hope remained at the bottom, thus depriving people of it.

    In Russian, the term “Pandora’s box” has become a common expression referring to someone who has done something irreparable with massive negative consequences: “He opened Pandora’s box.”

    However, in Hesiod’s account, it is not a box or casket but a pithos—a large storage jar that could be as tall as a person. Unlike the “clay” Pandora, the container of evils was made of sturdy metal—Hesiod describes it as indestructible.

    Where, then, did the “box” come from? It probably owes its existence to the humanist Erasmus of Rotterdam, who translated Hesiod into Latin in the 16th century. He rendered “pithos” as “pyxis” (Greek for “box”), perhaps inopportunely recalling the myth of Psyche, who brought a box back from the underworld. The mistranslation was later fixed in place by famous artists of the 18th–19th centuries (such as Dante Gabriel Rossetti), who depicted Pandora with a box.

  • A Very Brief History of Architecture

    The Post-and-Lintel System

    The simplest architectural structure, known since the Neolithic era. From ancient times to the present day, it has been used in all buildings covered with flat or gabled roofs. In the past, wooden or stone beams were laid on posts made of the same material; today, natural stone has given way to metal and reinforced concrete.

    Around 2500 BC: The Beginning of Column Design

    Poulnabrone Dolmen, County Clare, Ireland. Image: Jon Sullivan, Public Domain

    Ancient Egyptian architects remained faithful to the post-and-lintel system but gave meaning to architectural forms. The columns of their temples began to depict a palm tree, a lotus, or a bundle of papyrus. These stone “thickets” symbolize the afterlife forest through which the souls of the deceased must pass to a new life. Thus architecture became a visual art. Later, in Mesopotamia, architecture was likewise used to create large sculptures, though there the preference was for bulls, griffins, and other creatures of the animal world.

    Around 700 BC: Formation of the Classical Order

    Greek Classical Orders: Doric, Ionic, Corinthian. Image: Exploring Art

    The Greeks made architecture itself the theme of architecture as an art form, specifically focusing on the work of its structures. From this point forward, the supports of the post-and-lintel system not only decorated buildings but also visually demonstrated that they were supporting weight. These elements sought to evoke sympathy from viewers and, for greater credibility, mimicked the structure and proportions of human figures—male, female, or maiden.

    This strictly logical system of supporting elements is called an order. Typically, three main orders are distinguished (Doric, Ionic, and Corinthian), along with two supplementary ones (Tuscan and Composite). The development of these architectural orders marks the birth of European architecture.

    Around 70 AD: The Beginning of the Widespread Use of Arched Structures

    The Grange, near Northington, England, designed by William Wilkins in 1804: Europe’s first house designed with all the external detail of a Greek temple. Image: Wikimedia

    The Romans began to make wide use of arches and arched structures (vaults and domes). While a horizontal beam can crack if it is too long, the wedge-shaped parts of an arch under load do not break but compress, and stone is difficult to destroy by compression. Consequently, arched structures can span much larger spaces and bear significantly heavier loads.
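
    A rough back-of-the-envelope illustration of this point (ours, not the article’s): for a simply supported beam of span $L$ carrying a uniformly distributed load $w$, the peak bending moment at midspan is

    \[ M_{\max} = \frac{w L^{2}}{8}, \]

    so the tensile stress on the beam’s underside grows with the square of the span, and stone is weak in tension. An arch of the same span instead channels the load into compression along its curve, which stone resists far better.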

    However, despite mastering the arch, Roman architects did not invent a new architectural language to replace the ancient Greek one. The post-and-lintel system (i.e., columns and the elements they support) remained on the facades, but often it no longer served a structural purpose, functioning solely as decoration. In this way, the Romans transformed the classical order into mere decor.

    318: The Return of Early Christian Architects to Wooden Roof Trusses

    The Colosseum, from “Speculum Romanae Magnificentiae”. Image: Met Museum

    The fall of the Western Roman Empire brought down the economy of the territories we today call Western Europe. There was not enough money to build stone roofs, although large buildings, churches above all, were still needed. Early Christian builders therefore had to return to wood and, with it, to the post-and-lintel system. The rafters—the structures under the roof, in which some elements (the braces), by the laws of geometry, work not in bending but in tension or compression—were made of wood.

    532: The Beginning of the Use of Domes on Pendentives by Byzantine Architects

    St. Peter’s Basilica in Rome. Engraving by Stefan du Pérac, published in 1569, five years after the death of Michelangelo

    A technological breakthrough of Byzantine architecture was the placing of a dome, invented back in Ancient Rome, not on round walls enclosing the inner space but on four arches, with only four points of support. Between the arches and the ring of the dome, double-curved triangles—pendentives—were formed. (In churches they often bear images of the evangelists Matthew, Luke, Mark, and John—the four pillars of the church.) It is in particular thanks to this construction that Orthodox churches have the appearance familiar to us.

    Around 1030: The Return to Arched Vault Construction in Romanesque Architecture

    The dome of St. Sophia Cathedral (Hagia Sophia) in Istanbul. Image: A.Savin, Wikipedia

    By the beginning of the second millennium AD, powerful empires were emerging in Europe, each considering itself the heir of Rome. The traditions of Roman architecture were revived: magnificent Romanesque cathedrals were again covered with arched structures similar to the ancient ones—stone and brick vaults.

    1135: Gothic Architects Give Arched Structures a Pointed Shape

    East front of Speyer Cathedral, Germany. Image: Immanuel Giel, CC BY-SA 3.0

    Arches and arched structures have a serious drawback: they tend to “spread out.” Before Gothic architecture, architects fought this effect by building thick walls. Then a new technique emerged: arches and vaults began to be made pointed. A structure of this shape exerts more of its force downward onto the supports and presses outward less. In addition, the system was propped from the sides by special “bridges”—flying buttresses—which ran from free-standing piers called buttresses. The walls were thus freed from all loads and made lighter, or even eliminated entirely, giving way to the glass paintings known as stained-glass windows.

    1419: In the Renaissance, Baroque, and Classicism, Styles Form Independently of New Structural Innovations

    Orléans Cathedral: choir and nave seen from the choir. Image: Wikimedia, CC BY-SA 4.0

    The Renaissance gave the world the greatest domes, but from this moment on, great styles no longer arose primarily from construction innovations; they grew out of changes in worldview. Renaissance, Mannerism, Baroque, Rococo, Classicism, and Empire owed their birth more to philosophers, theologians, mathematicians, and historians (and to some extent to the arbiters of fashionable manners) than to the inventors of new roof structures. From then until the Industrial Revolution, innovation in construction technology ceased to be the determining factor in the change of styles.

    1830: The “Railroad Fever” Brings Metal Structures into Widespread Use in Construction

    View of the Piazza della Santissima Annunziata. Painting by Giuseppe Zocchi

    Rails, initially intended only for railroads, turned out to be an ideal building material from which strong metal structures are easily assembled. The rapid development of land steam transport spurred the growth of rolled-metal production capacity, ready to supply engineers with any quantity of channels and I-beams. The frames of high-rise buildings are still made from such parts today.

    1850: Glass Becomes a Full-fledged Building Material

    Opening of the Liverpool and Manchester Railway: the Duke of Wellington’s train and other locomotives being readied for departure from Liverpool, 15 September 1830

    The factory production of large window glass made it possible to develop construction technologies first for large greenhouses and then for grandiose buildings of other purposes, in which all the walls or the roofs were made of glass. Fairy-tale “crystal palaces” began to come to life.

    1861: The Beginning of the Industrial Use of Reinforced Concrete

    The Crystal Palace at Sydenham Hill, London. Designed by Sir Joseph Paxton for the Great Exhibition of 1851 and rebuilt in 1852–54 at Sydenham Hill; destroyed in 1936. Image: BBC Hulton Picture Library

    Attempts to reinforce concrete date back to Ancient Rome, and metal rods for reinforcing roofs came into active use from the beginning of the 19th century. In the 1860s, a gardener named Joseph Monier, searching for a way to make garden tubs more durable, discovered by accident that embedding metal reinforcement in concrete significantly increases the strength of the resulting element. In 1867 the invention was patented and subsequently sold to professional engineers, who developed methods for applying the new technology.

    The enterprising gardener, however, was only one of several pioneers of this construction technology. In 1853, for instance, the French engineer François Coignet built a house entirely of reinforced concrete, and in 1861 he published a book on its applications.

    1919: The Integration of All Technological Capabilities in a New “Modern” Style

    Le Corbusier’s Esprit Nouveau pavilion in Paris, 1925. Image: Public Domain

    In his manifesto published in the magazine “L’Esprit Nouveau,” Le Corbusier, one of the leading modernist architects, formulated five principles of modern architecture. These principles returned architecture to ancient ideals—not outwardly but in essence: the image of the building once again truthfully reflected the work of its structures and the functional purpose of its volumes.

    By the beginning of the 20th century, facade decoration had come to be perceived as deceit, and there was a desire to return to the origins, drawing inspiration from ancient Greek temples, which honestly displayed the work of their structures. But modern roofs were now made of reinforced concrete, whose strength lies in its ability, thanks to the embedded reinforcement, to resist tension where a member is bent. Consequently, modern structures could span almost any width.

    As a result, buildings could be entirely devoid of columns and decoration, receive continuous glazing, and thus acquire the “modern look” familiar to us today.

  • 14 Facts About Ancient Greece

        Why Were the Ancient Greeks So Educated and Enlightened?

        Like any society, ancient Greek culture was diverse, encompassing more than just the distinctions between citizens, slaves, and foreigners living in Greek cities. The majority of citizens were not highly educated, and their interest in enlightenment was not particularly broad. Truly educated individuals belonged to a narrow circle of thinkers and wealthy citizens, primarily aristocrats. However, there were also plenty of ordinary laborers, crude soldiers, artisans, and citizens focused solely on their personal affairs.

        What distinguished the Greeks was their near-universal literacy. It was impossible to exist in Greek society without knowing how to read and write, so all citizens either hired private tutors to teach their children literacy or taught them at home. Additionally, children became familiar with various poetic works. A deeper education required money and time.

        Why, then, do we tend to think that all Greeks, or at least all Greek men, were universally educated thinkers? This misconception stems from the fact that European education has, since the Roman Empire, been based on the texts of Greek and Roman authors, who often made famous philosophers, scholars, and public figures their heroes. This can create the impression that Greeks were constantly engaged in deep conversations. In reality, such discussions were confined to a narrow circle of intellectuals, and the topics ordinary residents of Greek cities discussed at a banquet or on a walk were not nearly as elevated.

        Did They Look Like Their Statues?

        The Old Woman and the Wine-jar
        The Old Woman and the Wine-jar, A Roman copy after a Greek original of the 2nd century BCE. Image: Wikimedia, CC BY-SA 3.0

        It depends on which statues we are talking about; Greek statues are very diverse. Some depicted gods or perfect athletes of the classical era; others showed orators and philosophers of the Hellenistic period; and still others were characteristic representations of ordinary people, such as an old fisherman or a drunken old woman. Faces in Greek art were conventionally rendered symmetrical, and bodies anatomically perfect, adjusted only for the subject’s age (though there were exceptions).

        Variety was achieved through the individual features of the faces (symmetry was maintained, and unattractive details were omitted) and through the canons of depiction used by different sculptors. For example, the Argive sculptor Polykleitos in the 5th century BCE depicted young men with barely suggested anatomical detail, while his contemporary Myron showed them more developed, as in mature men.

        To answer the question of which statues the ancient Greeks resembled, we first need to know whether modern Greeks resemble their ancestors. Physical anthropology suggests that the differences between ancient and modern Greeks are not very significant. However, while the faces may have been only slightly idealized, the bodies of the actual models were certainly not as perfect or as physically developed as those of the statues. Statues aimed to depict the ideal human form, but reality was often quite different. Of course, one might occasionally encounter an athlete with the physique of Apollo or a beauty with the body of Aphrodite, but such cases were rare. Drunken old women and elderly fishermen, on the other hand, were common, especially in the poorer neighborhoods of ancient Greek cities.

        By the Way, Why Are All the Statues Naked? Were the Greeks Not Ashamed of Anything?

        This is a misconception. First, not all Greek statues are naked, though many are. Second, this does not mean that the Greeks did not experience feelings of shame. They were as modest as modern people.

        The question of why the Greeks favored depicting the naked body is quite complex. In ancient times, men worked naked in the fields—this custom was recorded by the poet Hesiod in the 8th century BCE. In the same century, runners began competing naked, and other athletes followed suit. According to legend, during the Olympic Games in 720 BCE, a loincloth fell off the runner Orsippos, and he finished naked. Another version of the legend states that the Spartan Akanthos started the race naked from the outset.

        In any case, only men could be naked, and only under specific circumstances within their own circles: during sports, some religious rituals, etc. One hypothesis explains this shared nudity as a reflection of democratic thinking, which suggested that equal citizens should hide nothing from each other, including the intimate parts of their bodies. Statues emphasized the social role of the citizen, which is why their authors began depicting Greeks naked.

        However, respectable women, including goddesses, were not allowed to be depicted naked. A female citizen of a polis could not appear in public without clothes. In artworks, only courtesans, dancers, flutists, and other women whose profession involved entertaining men during banquets were depicted as nude. In the 4th century BCE, the Athenian sculptor Praxiteles made a statue of a naked Aphrodite, which initially was not well received by the Greeks. It was said that he used his lover, the courtesan Phryne, as a model. However, over time, statues of a naked Aphrodite became popular.

        Did They Have Free Love, and Were All Men Homosexuals?

        Homosexuality in ancient Greece
        The Greek Youthening: Assessing the Iconographic Changes within Courtship during the Late Archaic Period. Image: Louvre

        No, and no. Rather, sexual relations among the Greeks were organized somewhat differently than in our society. In modern terms, Greek men could be described as bisexual, albeit with certain restrictions. For a married Greek man, it was unacceptable to engage in relationships with a free female citizen of the polis (although relations with a courtesan were possible). According to Athenian law, a lover caught in the act could be killed by the husband, and such precedents did occur. This does not sound much like free love.

        However, outside the family, a man was not restricted in his sexual relations with women. Relationships with slave women were generally not taken seriously, and sexual relations with free non-citizens (metics) were not considered shameful. Courtesans, hired by men for entertainment during banquets or intimate encounters, often came from these non-citizen groups.

        Homosexuality is more complex. For Greek men it was considered normal to form relationships with beardless youths, typically up to the age of 15 or 16, and such relationships were even encouraged as a form of mentorship. Relationships between adult men, however, were considered shameful and were often the subject of ridicule. Nonetheless, this behavior was viewed more tolerantly than in many traditional societies.

        As for women, female citizens of Greek cities were restricted in their sexual relationships: they could sleep only with their husbands, and other relationships were strictly forbidden. At the same time, there existed a female counterpart to male homosexual relationships: adult women formed relationships with young girls. The famous poet Sappho, who lived on the island of Lesbos, was known for such relationships, which is why homosexual relationships between women are called lesbian. However, Sappho’s verses speak more of platonic than of sexual love between women.

        Did Ancient Greeks Wear Underwear?

        No, the Greeks did not wear anything like modern underwear. Loincloths sometimes served as undergarments: slaves, craftsmen, and peasants wore them while working. However, loincloths were not always worn under clothes; they were also used simply to keep warm or to protect the groin. True underwear, resembling modern bikinis or thongs with side ties, appeared in Roman times, but it was likely worn only by women serving at feasts and perhaps during sports.

        Did They Really Dilute Wine with Water?

        Wine boy at a symposium
        Wine boy at a symposium. Image: Public Domain

        Yes, indeed, and sometimes quite heavily. A common practice was to dilute one part wine with two parts water, but it could go up to ten parts water. Considering that Greeks did not fortify their wine—that is, add alcohol to it—and it rarely contained more than 15% alcohol, diluted wine often became weaker than modern beer.
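
        A rough calculation shows why (assuming, per the figure above, undiluted wine of about 15% alcohol, with the dilution ratios just mentioned):

        $$15\%\times\frac{1}{1+2}=5\%,\qquad 15\%\times\frac{1}{1+10}\approx 1.4\%$$

        At the common one-to-two dilution the drink is about the strength of modern beer; at heavier dilutions it is far weaker.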

        There are different explanations for the origin of this tradition: Greek authors merely state that diluting wine distinguished Greeks from barbarians. One might assume that diluting wine prevented participants at feasts from getting too drunk, but Greek vases often depict very drunk people, even vomiting. There is a theory that water from wells and cisterns could be of poor quality, and wine was used to disinfect it. However, further dilution reduced the alcohol content, lowering the antiseptic properties of the wine.

        Most likely, Greek wine was significantly different from modern wine in its production technology. To increase the sugar content, grapes were slightly dried, resulting in a very sweet and thick drink. Additionally, they added seawater, crushed stone, and other ingredients to the wine; these helped prevent spoilage but made it cloudy. A strainer through which the drink was filtered was an essential attribute of a Greek feast.

        However, in some circumstances, wine was not diluted, such as during cold weather when it was drunk as a medicine, at the beginning of a feast, or simply because someone liked its taste. Barbarians, on the other hand, drank undiluted wine to get drunk as quickly as possible, which Greeks considered inappropriate behavior.

        What Did They Eat Besides Olives? Buckwheat?

        Processed olives or olive oil were certainly not the only dishes on the Greek table. However, their cuisine was much less diverse compared to our modern diet. The most important staples in Greek nutrition were grains—wheat and barley. Wheat was used to make bread, and it was also used, like other grains such as millet or emmer, to make porridge. Unfortunately, the Greeks did not know about buckwheat; otherwise, they would have surely used it.

        Another important food base was legumes. The common bean was unknown in Ancient Greece (it was brought from the Americas much later), but the Greeks grew peas, lentils, and various types of vetch, which is now considered livestock feed. Legumes were eaten on their own and used as a base for thick stews. They also ate onions, garlic, cucumbers (which were very bitter at the time), and other vegetables.

        Meat, as in other ancient cultures, did not appear on the Greek table every day; its main sources were sheep and goats, which were raised in abundance. They were often used as sacrificial animals and, after the sacrifice, were eaten at specially organized feasts. Beef was eaten rarely, since only the wealthy could afford it, and pork was not very popular. Among poultry, chicken was often prepared, and of course chicken eggs were eaten. The Greeks hunted boars and deer, as well as various birds, including small songbirds. However, large animals were not found near cities, so game rarely appeared on the table. Cheeses made from sheep, goat, and cow milk were also common products.

        While in earlier times the aristocracy disdained eating fish, as Homer’s poems suggest, in archaic and classical Greece all kinds of seafood were consumed. Tuna and sturgeon, caught at the mouths of the major rivers of the Black Sea region, were especially valued. Fish was salted and dried; crabs, lobsters, octopuses, and cuttlefish were also eaten, and mollusks, primarily oysters and mussels, were consumed in large quantities.

        The Greeks ate fruits like apples, pears, figs, and grapes. For dessert, they had dried figs, raisins, and honey, which was also added to salads and bread.

        They drank water, milk (often goat milk), and wine. Beer was known but not particularly popular.

        Did They Really Believe That Thunder and Lightning Were Zeus’s Anger, and That a Three-Headed Dog Awaited Them Underground?

        Some did, and some didn’t. Since ancient times, myths have explained the world’s structure, and most Greeks certainly did not question them: they saw the gods’ will behind natural phenomena, with Zeus throwing lightning bolts and Poseidon controlling the waves. However, with the development of philosophy and science, by the 6th century BCE, educated Greeks began to doubt such a worldview. On one hand, the world of gods seemed insufficiently structured and logical; on the other, the concept of Hades, where everyone became bodiless, memoryless shadows, seemed hopeless.

        In some new philosophical ideas, powerful elements took on the primary role, while in others the gods were absent altogether. Nevertheless, intellectual musings about the world’s structure were typical of a narrow educated stratum, while ordinary people continued to believe in omnipotent gods, mighty heroes, and fearsome monsters, accepting these stories as a given, though they worshiped the gods more out of tradition than out of genuine faith.

        How Much Larger Was the Poor Population Compared to the Rich, and What Were the Relations Between Them?

        We do not know the exact ratio of wealthy to poor in Ancient Greece, as it varied greatly from city to city and era to era. Even for cities like Athens and Sparta, the figures are very approximate. In Athens in the 5th century BCE there were about 300,000 inhabitants. Among them were around 40,000 adult male citizens, about 10–15%; adding their family members brings the citizen population to roughly 150,000. Around 100,000 were slaves, and the remaining 50,000 or so were free non-citizens, known as metics. Slaves were not just poor; they had no property at all.

        The Solonian reforms of the 6th century BCE divided the free citizens of Athens into four property classes. The wealthiest, whose annual income exceeded 500 medimnoi (roughly 26,000 liters) of grain, wine, or oil, were called pentacosiomedimnoi (“500-medimnoi men”). Such a Greek could own large lands and gardens and possibly even his own ships; in case of war, he was required to equip a warship at his own expense. Those whose income ranged from 300 to 500 medimnoi were called hippeis, or horsemen, meaning they could afford to maintain a horse.

        In Greece, where fertile land was scarce, this required either a piece of uncultivated land or purchased feed, which was quite expensive. These citizens owned substantial land and spacious, wealthy homes. During wartime, they naturally served in the cavalry.

        Citizens with an income between 200 and 300 medimnoi were called zeugitai. These were sturdy landowners with enough land to maintain a normal household. They formed the backbone of the Greek city-states and initially constituted the majority; in wartime they served as heavy infantry, hoplites, since their income allowed them to buy armor and weapons. Finally, those with lower incomes were called thetes: people barely making ends meet, who were exempted from civic duties, since these could undermine their already modest budgets.

        Essentially, thetes were poor farmers and the working poor. In wartime they served as lightly armed infantrymen, whose role in battle was minimal, and as rowers in the fleet; the Greeks did not use slaves on their galleys, and the rowers were poor but free men. Initially there were relatively few thetes, but gradually the number of poor in the large cities grew. From these thresholds we can see that the wealthiest citizens were at least two and a half times richer than the poorest class and about twice as wealthy as the middle class, the zeugitai (see the quick calculation below). The gap between rich and poor in Ancient Greece was far smaller than in modern society.
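
        A quick check of those ratios against the Solonian thresholds above, comparing the pentacosiomedimnoi minimum (500 medimnoi) with the thetes’ upper bound (under 200) and the zeugitai midpoint (about 250); the 26,000-liter figure likewise assumes a medimnos of roughly 52 liters:

        $$\frac{500}{200}=2.5,\qquad \frac{500}{250}=2,\qquad 500\ \text{medimnoi}\times 52\ \tfrac{\text{L}}{\text{medimnos}}\approx 26{,}000\ \text{L}$$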

        Naturally, poor citizens and free non-citizens did not particularly like the rich, but since any issue could be resolved in court (and for citizens, also in the assembly or other governmental bodies), open conflicts were rare. In some cases, citizens were paid for participating in the assembly, serving in court, or were even given funds to attend the theater during festivals. One could also turn to the authorities for social support. Such measures prevented social explosions.

        The situation of slaves varied in Greek states, and there were occasional small uprisings. The hardest work in Athens was in the Laurium silver mines, from where slaves would escape at the first opportunity. Moreover, the positions of private slaves and state slaves differed; the latter could serve as policemen, clerks, jailers, heralds, and so on. In fact, they were state employees and were in a better position than other slaves. In Athens, where slaves were treated fairly decently (which sometimes caused discontent among citizens who considered them too bold), there were no uprisings.

        Were the Greeks as Warlike as the Vikings? Did They Often Engage in Warfare?

        The Greeks were very good warriors and often fought, but their martial nature was somewhat different from that of the Vikings. Essentially, the Vikings were professional pirates and mercenaries who roamed the world for plunder and profit. They posed a threat even to their own compatriots. Sometimes the Vikings managed to conquer significant territories and even establish kingdoms, but these entities often survived only due to the strength of small military bands that did not exert significant cultural influence on the subjugated people.

        The Greeks were good warriors because they knew what they were fighting for and were well-prepared for battle. Ideally, every citizen of a Greek city-state would arm himself at his own expense and, if necessary, join the militia. Usually, such a “militia,” a force of armed citizens, had a good fighting spirit but could not compete on the battlefield with professional soldiers. However, constant physical exercises and competitions made the Greeks strong, accurate, and ready for combat.
        Moreover, the Greeks possessed brilliant battle tactics. They formed long lines, shoulder to shoulder, covered themselves with shields, closed to within arm’s length of the enemy, and fought hand-to-hand with short spears and swords. This style of fighting required composure and gave each man a sense of support from the comrades standing close on either side and behind him. To learn to move in this formation, called a phalanx, a Greek soldier underwent military training. The best were the Spartans, who practiced military skills almost daily.

        The constant military conflicts that arose between Greek states and their neighbors were also important training. These wars were swift and did not lead to heavy losses. If one phalanx overturned another, the losers would flee, abandoning their shields; the victors could not leave their own heavy shields on the ground, as that was considered unworthy of a warrior, and running with a shield in hand was inconvenient, so pursuit was impractical. The defeated therefore suffered few losses in retreat, and a truce was usually declared immediately after the battle to bury the dead.

        Greek farmers did not like to fight unnecessarily, as it distracted them from farming, but they were always ready for war and well-prepared. This played a huge role in the defeat of such dangerous and experienced warriors as the Persians, who conquered vast territories but were defeated by the small Greek cities.

        Did They Invent the Olympic Games?

        Greek vase depicting runners at the Panathenaic Games c. 530 BC
        Greek vase depicting runners at the Panathenaic Games c. 530 BC. Image: Wikimedia, CC BY 2.5

        Yes, there is no doubt about it. The Greeks had a tradition of holding competitions in honor of the gods, as it was believed that the gods took pleasure in the process of choosing a winner (naturally, only their chosen one could win). The most important competitions were held under the auspices of major sanctuaries at regular intervals. For example, in Athens, there were the Panathenaic Games in honor of Athena, in Corinth, the Isthmian Games in honor of Poseidon, and in Delphi, the Pythian Games in honor of Apollo. However, the most prestigious and significant were the Olympic Games, held at the sanctuary of Zeus in Olympia, on the Peloponnesian Peninsula.

        According to Aristotle, the first games took place in 776 BCE, although this may be a legend. The Olympic Games were held every four years, and to ensure that athletes could travel to Olympia and return home without fearing for their lives and freedom, a pan-Hellenic truce was declared for the duration of the games. Initially the only event was a footrace, but other sports were gradually added. Victory in these competitions was highly honorable and glorified not only the athlete, the Olympic victor, but also his city. The games continued until the end of the 4th century CE, when they were banned along with other pagan festivals.

        In the modern era, various European competitions were from time to time called “Olympic games”; in 1896 the Olympic Games were formally revived and, as before, have been held every four years since.

        Why Is Everything Ancient Still Considered the Most Beautiful by Many?

        To answer this question, one needs to understand what people consider beautiful in general. Scientists suggest that the sense of beauty is partly biological: for example, people tend to like an object divided along its length in a ratio of roughly 62 to 38%, the so-called golden ratio (the calculation below shows where these figures come from). On the other hand, something is considered beautiful if it is associated with the cultural tradition of a particular people.
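
        The percentages follow from the standard definition of the golden ratio, in which the whole relates to the larger part as the larger part relates to the smaller:

        $$\frac{a+b}{a}=\frac{a}{b}=\varphi=\frac{1+\sqrt{5}}{2}\approx 1.618,\qquad \frac{a}{a+b}=\frac{1}{\varphi}\approx 0.618$$

        So the larger part makes up about 61.8% of the whole and the smaller part about 38.2%.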

        Ancient Greek sculptors, artists, and architects were known for their high artistic sensibility and often intuitively found proportions pleasing to the human eye. They placed great importance on accurately copying human anatomy while maintaining strict symmetry in the depiction of the face and body, which corresponds to beauty standards accepted in various cultures.

        However, the most significant reason is that Greek art, slightly modified by the Romans, became the foundation for the art of several European peoples. In the Middle Ages, much changed, but during the Renaissance, artists returned to Antiquity, actively using ancient images, and sometimes copying works of art. From this period, classical education, which included the study of Ancient Greek and Latin, as well as acquaintance with ancient art, began to develop. From the 18th century, this became standard, and Ancient Greek art was considered the ideal and foundation of European art. Despite the significant changes in art today, the ancient foundations of European culture are well-known, continue to influence modern culture, and, because of this, ancient art seems beautiful to many.

        Why Did Greece, With All Its Advantages, Not Become a Great Empire Like Rome?

        The Greeks were prevented from establishing a large empire by the very structure of their society. The ancient Greeks lived in independent city-states—poleis, which usually controlled nearby, easily visible lands. Colonies founded in distant lands usually immediately became independent states, maintaining friendly relations with their metropolises but remaining independent from them. To address various issues, such as waging war against a common enemy, the poleis formed alliances: some lasted for centuries, while others existed briefly.

        If one city gained strength, a coalition of opponents would immediately form against it, attempting to weaken the neighbor. Some states, particularly Athens and Sparta, managed to control fairly large territories by Greek standards, but they extended their influence mainly through allied relations. The Roman state, however, was essentially the rule of one city that managed to subjugate vast territories.

        The Greeks managed to establish a large state only under the Macedonians—a people possibly related to them but considered barbarians. Under the unified rule of King Philip II, the Macedonians conquered and effectively united Greece, and then, thanks to the exceptional military talent of his son, Alexander III, known as the Great, they defeated the Persian Empire and conquered some of its neighboring territories.

        Despite the leading role of the Macedonians, the Greeks played a huge part in this conquest, and their language and culture became unifying forces for the conquered territories. This state barely outlived its founder, but on its ruins emerged large Greek-ruled states: Ptolemaic Egypt, the Seleucid Empire, and the Kingdom of Pergamon. Gradually, the Romans subjugated most of them.

        How Did Such a Vast Civilization Disappear?

        The Greek civilization did not disappear like the states of the American Maya or the mythical Atlantis. The Romans, who conquered Greek territories, treated Greek culture with respect, recognizing the taste and education of the Greeks, and they preserved local self-government in the cities, allowing the ancient traditions to be maintained and developed. Thus, Greek culture did not disappear but continued to develop within the Roman state. After the Roman Empire split into the Western and Eastern Empires, the Byzantine Empire gradually formed in the latter’s territory, with the Greek population at its core, speaking Middle Greek—a later form of Ancient Greek.

        From the end of the 7th century CE, Greek became the official language of the empire. Orthodoxy became the main religion of Byzantium, but its culture was inherited from the ancient Greeks and Romans. The Greeks did not lose their cultural identity even under the rule of the Ottoman Empire, preserving it into the early 19th century, when Greek statehood was revived. Modern Greece is thus a rightful cultural successor of the ancient Greek states, connected to them by an unbroken line of historical development: Ancient Greece did not perish but transformed into modern Greece.

      2. The Naked Truth: Why Artists Depicted Christ Unclothed in Art History?

        The Naked Truth: Why Artists Depicted Christ Unclothed in Art History?

        In Hans Baldung Grien’s engraving, we see familiar figures: the infant Jesus, the Virgin Mary, her mother St. Anne, and St. Joseph, Mary’s husband. Yet, something strange is happening. Mary is holding Jesus on her lap, while Anne, under Joseph’s gaze, leans in and touches the child’s genitals.

        Art historians explain this scene as either a “domestic vignette” or a magical ritual. But why would an artist depict such a medical examination when it involves not just an ordinary child, but the newborn Savior? Why show the Holy Family under the sway of superstitions?

        The Holy Family, woodcut by Hans Baldung
        The Holy Family, woodcut by Hans Baldung, circa 1511

        Medieval people were much more frank in discussing sexual matters than we might assume today. They understood very well that if Jesus was incarnated as a male and not a female, it raised certain questions: Was he truly a man in every sense? Did he experience desire? Did he remain a virgin? Could he become a father?

        Art does not shy away from these doubts. The scene where someone examines the genitals of the young Jesus is not merely the product of Baldung Grien’s twisted imagination. His engraving is just one of many works where artists aimed to emphasize the sex of the Divine Infant.

        Christ as a Man

        Madonna and Child with Two Angels (Crevole Madonna)
        Madonna and Child with Two Angels (Crevole Madonna). Image: Museo dell’Opera del Duomo

        In 1983, American professor Leo Steinberg published a book with the intriguing title “The Sexuality of Christ in Renaissance Art and in Modern Oblivion.” In it, he drew attention to something long known but so familiar that it had gone unnoticed: after 1260 in Italy, artists began to depict the infant Jesus nude. Some would lift the hems of his garments to reveal his legs, others would dress him in a transparent tunic, or fully expose his torso.

        By the early 15th century, a naked infant, whether in a Nativity scene or in the arms of the Madonna, had become quite commonplace; in Byzantine art, by contrast, only the infant’s hands and feet were left exposed, and even that rarely. And that was not all: by the latter half of the same century, European art had developed numerous devices for drawing the viewer’s attention to the sexual organs of the young Jesus.

        Artists were inventive: sometimes, for no apparent reason, the infant’s tunic would ride up or end abruptly, or Jesus himself would pull aside his covering or clothing to show us that he was a boy and not a sexless being. Sometimes the Virgin Mary assists him in this; in other cases she covers his genitals, but even this gesture directs the viewer’s attention to that part of the body.

        Although in the Middle Ages, as today, small children were often (especially in summer) left naked, the number of naked infant Jesuses in European art from the 14th to the 16th century is too large for such a motif to be explained merely by observations of reality. Artists don’t depict everything they observe—they choose. For instance, in no medieval depiction do we see the infant Jesus crawling, although infants usually move that way, and no one ever saw anything wrong with it.

        Therefore, the origins of the images of the naked Christ child should likely be sought not in daily life but in theology. The popularity of such images may have been influenced by the preaching of the Franciscans—an order that was incredibly influential in the 14th and 15th centuries. They constantly emphasized that Christ was not only God but also man, and their slogan was “nudus sequi nudum Christum”—”naked follow the naked Christ.”

        In the religious life of the late Middle Ages, the suffering Savior’s human aspect comes to the forefront—first as a man, then as God. The humanity of Christ was constantly discussed in theological treatises intended for learned clerics and sermons directed at the laity. They reminded people that the path to salvation for every believer was opened by Jesus’ crucifixion. However, to die, God had to fully become a man. Thus, artists strove to show not only the divine but also the earthly nature of Christ, demonstrating that, like other people, he possessed gender and the ability to reproduce.

        The connection between human mortality and the ability to reproduce was repeatedly noted by Christian theologians. The eternal God is not subject to death and does not engage in reproduction. However, upon becoming human, He had to be capable of dying and leaving descendants.

        For when our first parents sinned in Paradise, they forfeited the immortality which they had received, by the just judgement of God. Because, therefore, Almighty God would not for their fault wholly destroy the human race, he both deprived man of immortality for his sin, and, at the same time, of his great goodness and loving-kindness, reserved to him the power of propagating his race after him.

        The Venerable Bede. “Ecclesiastical History of the English People.” Circa 731 [Link]
        Joos van Cleve - The Holy Family
        Joos van Cleve – The Holy Family, circa 1512. Image: Metropolitan Museum of Art

        Of course, before the Fall, Adam could also have children. But this ability only became crucial for the preservation and subsequent salvation of humanity after he was expelled from paradise. Jesus assumed precisely this fallen human nature with all its possibilities and limitations, which likely explains some of the unusual details in his depictions. Of course, unlike all other people, he is not tainted by original sin. But he willingly accepted its consequences—carnal desire, pain, and death.

        Giovanni Bellini. Madonna and Child
        Giovanni Bellini, Madonna and Child, Late 1480s. Image: The Met

        Adam and Eve, before the Fall and before they became mortal, were naked and unashamed of their nakedness. Similarly, Christ, free from original sin, may not be ashamed of his nakedness. If so, the common gesture of the Madonna covering Jesus’ genitals with her hand may not have been a concession to propriety, but rather an attempt by the mother to protect her human son from impending suffering and death. In some depictions, the infant himself covers his genitals or touches them.

        Doubting Shepherds

        One of the most famous female mystics of the late Middle Ages was Bridget of Sweden (1303–1373). Among her numerous revelations was a vision of the Nativity emphasizing the importance of Christ’s incarnation in a male body. According to Bridget, the shepherds who came to worship the Divine Infant wanted to know whether a boy or a girl had been born (the angels had announced the birth of the savior of the world, not a savioress). Upon finding out, they left, praising the Lord.

        According to the Gospel of Luke (2:21), on the eighth day after the Nativity, the Divine Infant was circumcised, and “he was named Jesus, the name given by the angel before he was conceived in the womb.” Already in the 6th century, the Church established a feast in honor of this event, the Feast of the Circumcision of the Lord (January 1), and theologians began to teach that the cutting of Jesus’ foreskin was his first “installment” in the redemption of all Christians from the power of sin and death. The Church Fathers argued that, unlike other infants, who undergo circumcision uncomprehendingly, Jesus on that day shed blood for people voluntarily for the first time: he allowed himself to be circumcised to set an example of obedience to the Law and, according to Bernard of Clairvaux (1090–1153), to obtain “proof of true humanity” on his body. Thus, circumcision became the first step toward the Passion of Christ, the beginning of the sacrifice.

        Scenes of Christ’s circumcision, rare in early medieval art, became increasingly popular in the 14th and 15th centuries. In many cases, this event (through details emphasizing the Child’s suffering: blood and a large knife) visually corresponded with the Crucifixion. The Virgin Mary is always present in depictions of circumcision. This was not mentioned in the Gospel text. Her involvement directly contradicted medieval Jewish traditions—according to which, mothers were forbidden to be present at their sons’ circumcision (Christian theologians believed the same rules applied to Jews in the Gospel times). There is even a miniature where the Virgin Mary performs Jesus’ circumcision herself.

        The active role of the Mother of God can be explained either by the fact that artists, depicting the infant, almost “automatically” included his mother in the frame (after all, someone had to hold him), or by the fact that they (together with their theologian “consultants”) were guided by the scene of the Crucifixion, in which Mary always stood at the foot of the cross. The first sacrifice of Christ was depicted after the model of the last.

        Betrothal with God

        Ludwig Krug, Christ as the Man of Sorrows
        Ludwig Krug, Christ as the Man of Sorrows, Circa 1510–1532.

        In one of the visions of Catherine of Siena (1347–1380), an Italian religious figure later recognized as a Doctor of the Church (only four women have received such an honor), Jesus spiritually betrothed her by placing on her finger a ring made from the cut-off part of his foreskin.

        Your unworthy servant and the slave of Christ’s servants is writing to you in the precious blood of God’s Son.  I long to see you a true daughter and spouse consecrated to our dear God.  You are called daughter by First Truth because we were created by God and came forth from him.  This is what he said: “Let us make humankind in our image and likeness.”  And his creature was made his spouse when God assumed our human nature.  Oh Jesus, gentlest love, as a sign that you had espoused us you gave us the ring of your most holy and tender flesh at the time of your holy circumcision on the eighth day. 

        You know, my reverend mother, that on the eighth day just enough flesh was taken from him to make a circlet of a ring. To give us a sure hope of payment in full he began by paying this pledge. And we received the full payment on the wood of the most holy cross, when this Bridegroom, the spotless Lamb, poured out his blood freely from every member and with it washed away the filth and sin of humankind his spouse.

        Catherine of Siena. Letter to Joanna, Queen of Naples, August 4, 1375

        Images depicting the circumcision of Christ demonstrate the connection between redemption and masculinity. However, it is very important to note that although Jesus was a man, like his mother, he remained a virgin. But this was not the virginity of a eunuch; it was virginity as a triumph over sin, just as the resurrection is a triumph over death. The surprising and even shocking details in the iconography of the infant Jesus may have a quite canonical theological foundation.

        This more perfect Adam, Christ—more perfect because more pure—coming in the flesh to set an example of your weakness, offers Himself to you in the flesh, if only you accept Him, a man completely virginal.

        Tertullian, On Monogamy, around 213 AD

        “So what did the Lord, the truth and the light, do when He came [into the world]? He, having taken on flesh, kept it incorrupt—in virginity.”

        Methodius of Olympus, The Banquet of the Ten Virgins, late 3rd century [Link]

        This, says he, I wish, this I desire that you be imitators of me, as I also am of Christ, who was a Virgin born of a Virgin, uncorrupt of her who was uncorrupt. We, because we are men, cannot imitate our Lord’s nativity; but we may at least imitate His life… When difference of sex is done away, and we are putting off the old man, and putting on the new, then we are being born again into Christ a virgin, who was both born of a virgin, and is born again through virginity.

        Jerome of Stridon, Two Books Against Jovinian, around 393 AD [Link]

        It is not accidental that Jesus was depicted naked not only in infancy but also in death. This, however, required greater boldness from the artist (after all, this is the nudity of an adult, not an infant) and often resulted in public disapproval.

        But let us return to the engraving by Hans Baldung Grien. Before us is the embodied symbol of faith—God truly became man. The guarantee of complete incarnation is St. Anne, Jesus’ grandmother by flesh. As she touches her grandson from below, Jesus touches his mother’s chin. This almost imperceptible gesture had quite a specific meaning for the medieval person. But what was it?

      3. History of Cancer

        History of Cancer

        Before the Common Era

        Traces of Cancer in the Remains of Ancient People

        In 1990, during excavations of the Chiribaya tribe cemetery in Peru on the northern border of the Atacama Desert, American professor Arthur Aufderheide discovered the mummy of a young woman with osteosarcoma — a malignant bone tumor. These remains were well-preserved thanks to the local climate: clay extracted all the liquid from the body, and the desert wind dried the tissues. However, scientists do not always find remains of tumor tissues. Sometimes, they come across traces of the disease left in the body of an ancient person, such as small holes in the skull and shoulder bones — results of metastases from skin melanoma or breast cancer.

        Incurable “Lumps in the Breast” from the Edwin Smith Papyrus

        One of the earliest written records of the disease is the Edwin Smith Papyrus, named after the American archaeologist who bought the artifact in an Egyptian market in 1862. This ancient Egyptian manuscript, dated to around 1600 BCE, is likely an incomplete copy of an earlier medical treatise created in the 27th century BCE, whose authorship is attributed to the famous Imhotep, an architect and healer of the Old Kingdom period. The surviving document describes 48 cases of various injuries, but the forty-fifth case, unexpectedly, is devoted not to an injury but to a grave illness, presumably breast cancer:

        When you examine the swelling lumps in the breast and find that they have spread throughout the breast, and if you place your hands on the breast, you find no heat and the tissues feel cool, with no graininess, internal fluid, or liquid discharge, but they protrude when touched, then you can say of the patient: ‘Now I treat the growth of tissues… the enlarged lumps of the breast mean the presence of swellings in the breast, large, spreading, and firm, and touching them is like feeling a ball of bandages, or they may be compared to an unripe fruit, hard and cool to the touch…’.

        For each injury described in the papyrus, the author suggests a treatment: poultices for wounds, for example, and balms for burns. But the mysterious “swelling lumps in the breast” baffled the ancient physician; unable to find a cure for this ailment, he wrote in the “Treatment” section simply: “None exists.”

        The Persian Queen Atossa and the Successful Surgical Treatment of Breast Cancer from Herodotus’ “Histories”

        Two millennia after the Imhotep Papyrus, a similar disease reappears in written sources. Herodotus, in his “Histories,” tells of the Persian queen Atossa, who suffered from a bleeding breast tumor. Some researchers believe she had inflammatory breast cancer. None of the remedies helped Atossa, so the Greek physician Democedes proposed surgically removing the malignant tumor. The queen agreed, and the operation ultimately saved her life. Thus, Atossa’s story can be considered the first recorded example of a successful mastectomy — a surgical procedure to remove the breast.

        Ancient Rome

        Black Bile

        The four humors. Illustration from Leonard Tourneisser's book Quintessence. 1570
        The four humors. Illustration from Leonard Tourneisser’s book Quintessence. 1570

        The Roman physician Galen, who lived from 129 to 216 CE, following Hippocrates, believed that in a healthy person’s body four humors (fluids) were in balance: blood, phlegm, yellow bile, and black bile. An excess of any humor inevitably led to illness, and treatment involved removing the excess fluid from the body, for example through bloodletting, emetics, and laxatives.

        Galen believed that malignant tumors were caused by an excess of black bile, which became a dense mass in the body. Interestingly, medieval physicians believed that an excess of black bile also caused melancholy: μελαγχολία in ancient Greek means “black bile.” Galen, whose authority remained unchallenged for over a thousand years, argued that the disease could not be defeated by surgery: the black bile would remain and pose a threat of new tumors. According to him, a patient could only be supported with general therapeutic measures.

        Middle Ages

        Boar Tusk, Fox Lungs, and Ground Elephant Bone

        Galen’s views predetermined the medieval medical approach to tumors. Surgery was generally considered more harmful than beneficial, and the absence of radical treatment was seen as the best possible remedy. An alternative to the surgeon’s knife was a wide range of rather exotic remedies: arsenic tincture, lead tincture, boar tusk, fox lungs, ground elephant bone, crushed white coral, castor bean seeds, senna plant, and more. Alcohol and opium tincture were used to relieve unbearable pain.

        Image from Andreas Vesalius' De humani corporis fabrica (1543), page 163
        Image from Andreas Vesalius’ De humani corporis fabrica (1543), page 163

        In the Renaissance, medicine put to the test Galen’s idea that an excess of black bile in the body caused malignant tumors. Andreas Vesalius, the founder of scientific anatomy and author of “On the Fabric of the Human Body in Seven Books” (1543), who dissected corpses himself, could not find even the slightest trace of the infamous substance supposedly responsible for the development of tumors.

        Enlightenment: Cancer from an Excess of Lymph

        Another suspected cause of malignant tumors was lymph (also known as phlegm in humoral theory). According to this new understanding, tumors were no longer attributed to mysterious black bile but to a clear fluid that permeates the human body and is responsible for a calm temperament. German physicians of the 17th–18th centuries, Georg Ernst Stahl and Friedrich Hoffmann, suggested that tumors consisted of fermented lymph, which could have different densities, acidities, and alkalinities in each specific case.

        Cancer as Contagion

        While Hippocrates, Galen, Stahl, and Hoffmann sought the cause of the disease within the body, two 17th-century Dutch physicians, Zacutus Lusitanus and Nicolaas Tulp, found it outside. Independently, they reached the same conclusion: cancer is contagious. The doctors proposed moving all patients outside major cities to isolate them and thus prevent the spread of the dangerous disease. The idea of the contagiousness of cancer, prevalent in Europe during the 17th and 18th centuries, is now considered mistaken. However, it is known that certain viruses, bacteria, and parasites can indeed increase the risk of developing tumors, such as the human papillomavirus or the bacterium Helicobacter pylori.

        Cancer from Lifestyle

        In the 18th century, scientists made three important discoveries that laid the foundation for the epidemiology of cancer. In 1713, Italian physician Bernardino Ramazzini noted that cervical cancer was almost nonexistent among nuns, while the incidence of breast cancer was relatively high. Ramazzini concluded that lifestyle (in the case of nuns — lack of sexual relations) could directly influence the development of this disease in women.

        In 1775, English surgeon Percivall Pott discovered that scrotal cancer, often found in chimney sweeps, had an occupational nature: tumors arose due to the accumulation of soot in the folds of the scrotum. Pott’s discovery later led to the study of carcinogens associated with specific professions and the gradual adoption of occupational health and safety measures.

        By the 20th century, it was established that carcinogens could include benzene, used in the production of medicines, plastics, and rubber, as well as the fine-fiber mineral asbestos, used in construction, among others.

        It also became clear in the 18th century that bad habits could cause this disease. In 1761, English scientist John Hill, in his book “Cautions against the Immoderate Use of Snuff…,” first directly linked tobacco use and the development of tumors. However, lung cancer caused by smoking only became a subject of intensive research in the 20th century.

        Surgery as the Only Way to Defeat Cancer

        The medicine of the Enlightenment era drew a final line under Galen’s teachings. While Vesalius created a scheme of the “healthy” human body, the English anatomist Matthew Baillie managed to describe the “pathological” body in minute detail. In his work “The Morbid Anatomy of Some of the Most Important Parts of the Human Body” (1793), Baillie thoroughly examined various malignant formations, but in none of them could he find any hint of black bile.

        Having “buried” black bile as a scientific error, medicine paved the way for surgery as possibly the only effective method of fighting malignant tumors. However, as before, patients could only rely on the skill of doctors. Indeed, in the absence of means to alleviate their condition during operations, it was difficult to count on a favorable outcome: people who agreed to go under the surgeon’s knife typically died from pain shock, heavy blood loss, and various infections.

        19th Century

        Ether and Carbolic Acid

        The situation changed dramatically in the middle to late 19th century with the invention of anesthesia and antiseptics. American dentist William Morton began using the organic substance ether as a general anesthetic, while English surgeon Joseph Lister started using carbolic acid to disinfect wounds. After these important discoveries, doctors were able to resort to radical surgery when it was necessary to remove a tumor along with lymph nodes.

        Surgeons such as Theodor Billroth and William Stewart Halsted became famous in this field. Billroth performed the first esophagectomy (removal of part of the esophagus), laryngectomy (removal of the larynx), and gastrectomy (removal of the stomach) in history, while Halsted performed radical mastectomy, which is the complete removal of breast tissue.

        Cell Theory

        Illustration of Virchow's cell theory
        Illustration of Virchow’s cell theory

        In the 19th century, as the microscope was improved, scientists gradually began to approach an understanding of the true nature of cancer. In 1838, German biologist Johannes Peter Müller, in his work “On the Finer Structure and Form of Morbid Tumors,” showed that malignant formations consist not of lymph but of cells. However, he still believed that tumor cells arose not from normal cells but from a formless “blastema” between them. His student Rudolf Virchow later proved that all cells, including tumor cells, arise from other similar cells: omnis cellula e cellula (“all cells come from cells”).

        However, even Virchow was mistaken: he was sure that cancer spread through the body like a fluid. In the 1860s, German surgeon Karl Thiersch refuted him, showing that tumors consist of epithelial rather than connective tissue and that metastases arise from the spread of malignant cells, not of some unknown fluid.

        20th Century

        Hormone Therapy

        In the late 1870s, Scottish physician George Thomas Beatson discovered a direct connection between the ovaries and milk production in the breast: after removing the ovaries in rabbits, he noticed that they stopped producing milk. This discovery led Beatson to wonder whether oophorectomy, the removal of the ovaries, might have a positive effect on advanced breast cancer. Indeed, his experiments showed that the operation often improved the condition of women with this type of cancer.

        The scientist also suggested that the ovaries themselves might be the main cause of breast cancer development. Even before estrogen was discovered, Beatson determined that the female hormone from the ovaries had a stimulating effect on breast cancer. Beatson’s discoveries laid the foundation for modern hormone therapy, where substances like tamoxifen, which suppresses the effects of estrogen, and aromatase inhibitors, which inhibit enzymes that convert male hormones (androgens) into female hormones (estrogens), are used to treat and prevent breast cancer.

        In the 20th century, a hormonal method was also found to treat a male disease—prostate cancer. In the 1940s, American scientist Charles Huggins discovered that after castration, patients experienced a sharp regression of metastatic prostate cancer. In addition, it was found that simultaneously decreasing testosterone levels and increasing estrogen levels helps in treating “male” cancer.

        Radiation Therapy

        Radiation therapy at MSK circa 1949
        Radiation therapy at MSK circa 1949. Image: Memorial Sloan Kettering Cancer Center

        At the end of the 19th century, Wilhelm Röntgen discovered X-rays, and Marie Curie and Pierre Curie discovered radium. Along with this came a new direction in tumor treatment: radiation therapy. Scientists found that radium could damage tumor cells in the body to the point of their complete destruction. However, in the early stages of radiation therapy, many doctors did not fully realize that radiation attacks healthy cells just as successfully as diseased ones. If the necessary dose is miscalculated, the rays can be deadly.

        It took almost a century to fully control unmanageable radiation. At the end of the 20th century, conformal radiation therapy was invented, in which the beam is precisely directed at tumor tissues thanks to detailed three-dimensional models created using computed tomography.

        Chemotherapy

        World War II claimed millions of lives, but it unexpectedly helped find a new weapon in the fight against malignant tumors. In the 1940s, as part of the development of more effective weapons, American scientists Louis Goodman and Alfred Gilman, commissioned by the US Department of Defense, studied chemical compounds related to mustard gas, a poisonous chemical agent. During these studies they accidentally found that nitrogen mustard helps in treating cancer of the lymph nodes (lymphoma). This was one of the first steps toward introducing chemotherapy into medical practice.

        Sidney Farber. 1960
        Sidney Farber, 1960. Image: Public Domain

        The real era of chemotherapy began with Sidney Farber, an American oncologist who, in the late 1940s, proved that a substance called aminopterin could cause remission in children suffering from acute leukemia: it blocks the division of leukocytes. Subsequently, adjuvant therapy became widely used in medical practice, which is a special chemotherapy aimed at destroying all tumor cells remaining in the body after surgery. Such therapy was first tested in the treatment of breast cancer, then in the treatment of colon cancer, testicular cancer, and other diseases. Another important innovation was combination chemotherapy, in which several different drugs are used simultaneously for more effective treatment.

        Immunotherapy and Targeted Drugs

        One of the most effective modern means in the fight against malignant tumors is immunotherapy. It has been actively used since the 1970s. Immunotherapy involves introducing special biological agents into the patient’s body that can both mimic the natural immune response and help the body’s own immune cells fight the tumor. The first targeted drugs—rituximab and trastuzumab—were created in the late 1990s; since then, they have been used to treat lymphoma and breast cancer, respectively.

        Lymph node with mantle cell lymphoma
        Lymph node with mantle cell lymphoma. Image: Gabriel Caponetti, CC BY-SA 3.0

        A promising direction in modern immunotherapy is the development of cancer vaccines. In 2010, for example, the first such vaccine was approved in the USA for a form of prostate cancer that no longer responds to hormone treatment. The drug unfortunately cannot cure the disease, but it helps the immune system fight tumor cells, prolonging the lives of patients with a seemingly fatal diagnosis.

        Another important direction in modern immuno-oncology is the development of checkpoint inhibitors that regulate mechanisms that block the body’s immune response. The first of these drugs, ipilimumab, was approved in the USA in 2011. Now patients with diseases that were previously considered incurable have a chance for effective therapy. For example, former US President Jimmy Carter was able to get rid of metastatic melanoma when he was already over 90 years old.

        Quite recently, cell therapy has also begun to spread: lymphocytes are taken from the patient and genetically modified to attack the tumor. It was first used, successfully, against acute lymphoblastic leukemia in a young American girl, Emily Whitehead. Research in the field of gene therapy also continues. In general, modern medicine strives for ever greater personalization of treatment, which in some cases makes it possible to cure the disease completely.

      4. How Long Did It Take to Deliver Goods Along the Silk Road?

        How Long Did It Take to Deliver Goods Along the Silk Road?

        The Silk Road is a conventional name for the trade routes of Antiquity and the Middle Ages that connected China with Western Asia, the Black Sea region, and the Mediterranean. The name is conventional because silk was merely the most famous of the many goods carried along them. Moreover, it was not a single road but three different routes:

        • The southern land route from China through Bactria or Sogdiana (Samarkand), and through the Parthian trading center Merv to Ecbatana, Seleucia on the Tigris, and Syria;
        • The northern route from China through the steppes of Central Asia, the southern Urals, to the northern Black Sea region;
        • The maritime route that connected the countries of India and Southeast Asia across the Indian Ocean with Arabia, the Persian Gulf, and Egypt.

Sea navigation in ancient times was coastal, meaning ships sailed along the shore. How long these voyages took is known to us from the peripli — navigation guides that describe in detail the coastlines, the distances between settlements and their sequence, the character and customs of the local population, and how they treated foreigners. The waters from the modern Red Sea to the eastern regions of India are described in the Periplus of the Erythraean Sea. This route was fraught with many difficulties and risks, chief among them storms and pirate attacks.

        In ancient times, no attempt to sail continuously around Arabia succeeded, so goods traveling from China to Europe were transferred to other ships at intermediate points. It could take several years for Chinese goods to reach European markets.


A figurine of a Western merchant made in the Chinese state of Northern Wei (386–534 CE)
A figurine of a Western merchant made in the Chinese state of Northern Wei (386–534 CE). Image: Wikimedia Commons

        An alternative was overland routes. One way for Chinese imports to reach Europe was through caravan trade across the steppes. Valuable goods (such as silk, jewelry, and lacquerware) were typically accompanied by more ordinary and practical items (such as dishes and agricultural products). Additionally, the caravan did not travel directly from the point of departure to the final destination; instead, it stopped at several caravanserais where some goods were sold or exchanged.


Camels were the primary pack animals. The ancient historian Strabo wrote that they were used for transporting commercial goods, a fact confirmed by archaeological findings. Camel bones have been found in layers dating back to the 1st century CE in Greek cities of the northern Black Sea region, including Olbia, Panticapaeum, Phanagoria, and Tanais. A loaded camel could cover up to 100 kilometers per day. With the total distance of the route from Southeast Asia to Crimea being about 12,000 kilometers, such a journey could take around six months.
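
A rough check of that estimate, using only the figures quoted above (the allowance for halts is an assumption, not a number from the sources):

\[
\frac{12\,000\ \text{km}}{100\ \text{km/day}} \approx 120\ \text{days}
\]

That is roughly four months of continuous marching; adding halts at caravanserais for rest, trade, and reloading brings the total to about six months.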

        The existence of trade routes between China and Europe in ancient times has been confirmed by archaeological discoveries of Chinese mirrors, silk, and lacquerware found in the northern Black Sea region. In the late 1990s, remains of Chinese lacquered boxes were discovered in tombs from the end of the 1st century CE at the necropolis of the late Scythian Ust-Almin settlement in southwestern Crimea.


        Together with gold jewelry, Roman bronze utensils, and other luxury items, they were placed in the graves of local elite women, serving primarily as markers of the high status of the deceased.

      5. Buddhism in 9 Questions

        Buddhism in 9 Questions

        Who Invented Buddhism?

        Unlike the two other major world religions (Christianity and Islam), Buddhism is a nontheistic religion, which denies the existence of a creator God and an eternal soul. The founder of Buddhism, Siddhartha Gautama of the Shakya clan, who belonged to the kshatriya varna (the warrior caste), was born in northern India, presumably in the mid-6th century BCE. His biography quickly became surrounded by various legends, and the historical facts became firmly intertwined with myths, beginning with the circumstances of his birth, which were quite unusual. The future mother of the prince dreamed that a white elephant entered her body, which was interpreted as a prophecy of the arrival of a great person, a future ruler of the universe.

        Siddhartha’s childhood and youth were cloudless: he knew neither illness, nor sorrow, nor need. But one day, when he left the palace, he encountered a sick person, an old man, and a funeral procession. This shocked him so deeply that he left his home and became an ascetic.

        At the age of 35, during a long meditation, Siddhartha attained enlightenment, becoming the Buddha, and began to preach his teachings — the Dharma. The essence of this teaching is contained in the Four Noble Truths. First, the world is imperfect and full of suffering. Second, the source of suffering is desire and the thirst for life, which keep the wheel of samsara — the cycle of life, death, and rebirth — in motion.

Third, it is possible to break free from the cycle of samsara by attaining enlightenment (bodhi) and ultimately nirvana, a state of blissful non-existence. Fourth, there is a path to liberation consisting of eight steps, which includes following ethical norms, meditation, and salvific wisdom. It is called the Eightfold Path, and also the Middle Path, because it is equidistant from both strict asceticism and a life full of pleasures (which ultimately lead to suffering).

        How Is Buddhism Different From Hinduism?

Buddhism is a world religion, which means that people of any nationality can become Buddhists. This is one of the radical differences between Buddhism and Hinduism — a national religion that is closed to outsiders.


          The social structure of Indian society was formed by four classes, or varnas — Brahmins (priests and scholars), Kshatriyas (warriors), Vaishyas (farmers and merchants), and Shudras (artisans and laborers). Belonging to a varna was determined solely by birth, just like belonging to Hinduism as a whole.

Buddhism, which initially was one of the many movements opposed to Hinduism, became a radical reformist doctrine both intellectually and spiritually, as well as socially. Buddhists placed a person’s ethical merits above their birth status, rejecting the varna system and the authority of the Brahmins. Over time, this small movement developed its own social structure, a body of sacred texts, and ritual practices. As a world religion, it spread far beyond the Indian subcontinent.

However, in India, Buddhism gradually declined. Today, less than 1% of Indians identify as Buddhists. Buddhism ranks only fifth among religions in India, significantly trailing Hinduism, Islam, Christianity, and Sikhism. Nevertheless, the founder of Buddhism, Buddha Shakyamuni, is revered in Hinduism as one of the incarnations (avatars) of the god Vishnu. On the world stage, Buddhism ranks fourth among religions: it is practiced by about 7% of the global population.

What Does It Mean to Be a Buddhist?

For several centuries, the teachings of the Buddha were transmitted orally, and in the 1st century BCE, they were written down on palm leaves, which were stored in three baskets. Hence the name of the Buddhist canon — the Tripitaka (“Three Baskets”). There are several branches and many schools within Buddhism, but all Buddhists share a belief in the “Three Jewels” — the Buddha, the Dharma (the teachings of the Buddha), and the Sangha (the monastic community).

The ritual of joining the Buddhist community involves reciting a short formula mentioning the “Three Jewels”: “I take refuge in the Buddha, I take refuge in the Dharma, I take refuge in the Sangha.”

In addition, all Buddhists must follow five precepts established by the Buddha: do not harm living beings, do not steal, do not engage in sexual misconduct, do not lie, and do not use alcohol or drugs.

Are There Branches in Buddhism (Like in Christianity)?

              There are three main branches in Buddhism: Theravada — “the teaching of the elders,” Mahayana — “the Great Vehicle,” and Vajrayana — “the Diamond Vehicle.” Theravada, which is primarily practiced in Sri Lanka and Southeast Asia, is considered the oldest branch, directly tracing back to Buddha Shakyamuni and his circle of disciples.

              From the Mahayana followers’ point of view, Theravada is an overly elitist teaching, which they disdainfully call Hinayana, or the “Lesser Vehicle,” because it suggests that nirvana can only be achieved through the monastic path. Mahayana followers, however, assert that laypeople can also attain enlightenment. A special role for them is played by the doctrine of Bodhisattvas — enlightened beings who voluntarily remain in samsara to help others escape the cycle of rebirth. For instance, in the Tibetan tradition, the spiritual leader of the Tibetans, the 14th Dalai Lama, is considered the incarnation of Avalokiteshvara, the Bodhisattva of Compassion. Mahayana is prevalent in China, Tibet, Nepal, Japan, Korea, Mongolia, and southern Siberia.

              Finally, Vajrayana emerged within Mahayana at the end of the first millennium CE and reached its peak in Tibet. Followers of this tradition claim that enlightenment can be achieved within a single lifetime by adhering to Buddhist virtues and employing special meditative practices.


              It is currently practiced mainly in Mongolia, Tibet, Buryatia, Tuva, and Kalmykia.

              Is There Only One Buddha, or Are There Many?

              Buddhism postulates the existence of countless “awakened ones” — buddhas, with Shakyamuni being the most famous among them. However, Buddhist texts also mention the names of his predecessors, ranging from 7 to 28 in number. Additionally, the future arrival of another buddha, Maitreya, is expected. Currently, as Buddhists believe, the bodhisattva Maitreya resides in the Tushita heaven (the “Garden of Joy”), and he will later appear on earth, attain enlightenment to become a buddha, and begin preaching the “pure dharma.”

              Is Buddha a God or Not?

              As mentioned earlier, Buddhism is a non-theistic religion. However, in Buddhist mythology, the “human” aspects of the life of Buddha Shakyamuni coexist with descriptions of his supernatural abilities, as well as cosmic-scale phenomena that accompanied various stages of his life journey.


              He is described as a being who has existed eternally, capable of creating special worlds — “Buddha fields.”

              The Buddha’s relics are perceived as evidence of his mystical presence in our world and are surrounded by special reverence. According to tradition, his remains were divided into eight parts and stored in the first Buddhist worship structures — stupas (which translates from Sanskrit as “top” or “earthen mound”). Additionally, Mahayana Buddhism introduced the concept of the eternal “dharmic body” of the Buddha, which he possessed alongside his ordinary, physical body. This body is identified both with the dharma and with the universe as a whole. It is evident that the Buddha is revered not only as a “great person” but also as a deity, especially in Mahayana and Vajrayana Buddhism.

              Furthermore, Hindu deities have not been completely expelled from the Buddhist pantheon — the figure of the Buddha has merely pushed them into the background. According to Buddhist teaching, gods, like all other living beings, are subject to the cycle of samsara, and in order to escape from it, they must be reborn in the human world — since only there are buddhas born. Incidentally, before his final birth, Buddha Shakyamuni, according to legends, was reborn more than five hundred times, having lived as a king, a frog, a saint, and a monkey.

              Do Buddhists Celebrate the New Year?

              In popular Buddhism, there are many holidays — quite popular, though often only loosely related to religion. One of these is the New Year, which is celebrated differently in various regions. In general, the Buddhist festive cycle is based on the lunar calendar (everywhere except Japan). One of the main specifically Buddhist holidays is Vesak, which in different countries is associated with one to three key events in the life of Buddha Shakyamuni (birth, enlightenment, nirvana).

              Other holidays include the Day of the Sangha, commemorating the Buddha’s meeting with his disciples, and the Day of Dharma, commemorating the Buddha’s first sermon. Additionally, in Buddhist countries, there is a Day of the Dead: pre-Buddhist ancestor worship is very persistent and plays a significant role.

              Do Buddhists Have Temples?

              The most well-known Buddhist religious structure is the stupa. Initially, stupas were built as reliquaries where the remains of Buddha Shakyamuni were kept and venerated, later as memorials to important events. There are several varieties of stupas, and their architectural appearance largely depends on regional traditions: they can be hemispherical, square-stepped, or have the shape of pagodas. To earn good karma, Buddhists practice ritual circumambulation of the stupa.

              There are also temples with even more diverse architectural styles. It is believed that these temples house the three treasures of Buddhism — the Buddha (his statues and other images), the dharma, embodied in the texts of the Buddhist canon, and the sangha, represented by monks living in the temple or monastery.

              Are Buddhists Vegetarians or Not?

One might think that one of the most important Buddhist principles — ahimsa, the non-harming of living beings — implies abstaining from eating meat.


              However, in reality, dietary restrictions in different regions are mostly determined by local customs. Among Buddhists, there are both proponents and opponents of vegetarianism, and both sides cite legendary sayings of the Buddha to support their positions. For example, there is a Buddhist parable about a deer and a tiger, in which the deer ends up in hell for boasting about its vegetarianism while unknowingly killing small insects by eating grass, whereas the tiger, a predator, purified its karma because it suffered and repented throughout its life.

6. 7 Questions About Korea

              7 Questions About Korea

Do Koreans Use Chinese Characters?

              Replica of Hunmin Jeongeum Haerye
Replica of Hunmin Jeongeum Haerye, the book in which the creation of Hangul is explained. Picture taken at the National Museum of Korea in Seoul

Both no and yes. No, because in both the South and the North of the Korean Peninsula people use their own alphabet. In the Republic of Korea it is called Hangul, and in the DPRK, Chosongul; both countries consider it an outstanding achievement of their national culture.

Korean writing has several unique features. For example, the shape of the letters representing consonant sounds schematically reflects the position of the speech organs when pronouncing the corresponding sound: the letter ㄱ (read as “k” or “g” depending on its position in the syllable) resembles the position of the tongue when pronouncing this sound.


              The shape of the letter ㅇ represents the position of the vocal cords when pronouncing the nasal sound “ng.”

              The letters representing vowel sounds use symbols of the three fundamental elements of Korean natural philosophy — heaven, earth, and man. Their combinations demonstrate the central idea of Korean traditional culture — harmony and balanced coexistence of different principles.


Hangul was created in the mid-15th century by King Sejong, who ruled Korea at the time (his royal title, “wang,” means “king”). Before that, Korean noblemen used Chinese characters for reading and writing: at that time, China was the cultural hegemon not only in Korea but throughout the Far East.

King Sejong decided that writing was needed not only by the aristocracy but also by ordinary people, and an alphabet is far easier to learn than Chinese characters, which take years to master. Still, Hangul letters are grouped into square syllable blocks that do resemble Chinese characters, so it is not surprising that the two are often confused. This can be explained by the fact that medieval Koreans lived under Chinese cultural influence and, when inventing their own writing system, were guided by familiar patterns.
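
Incidentally, the grouping of letters into square syllable blocks is so regular that Unicode encodes it with a simple formula. The short sketch below (an illustration added here, not part of the original article) composes the syllable 한 (“han,” as in “Hangul”) from its three letters:

```python
# Unicode lays out precomposed Hangul syllables algorithmically:
# syllable = 0xAC00 + (initial_index * 21 + medial_index) * 28 + final_index
INITIALS = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"        # 19 initial consonants
MEDIALS = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"    # 21 vowels
FINALS = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")  # 27 finals + "no final"

def compose(initial: str, medial: str, final: str = "") -> str:
    """Combine individual Hangul letters (jamo) into one syllable block."""
    code = 0xAC00 + (INITIALS.index(initial) * 21 + MEDIALS.index(medial)) * 28 + FINALS.index(final)
    return chr(code)

print(compose("ㅎ", "ㅏ", "ㄴ"))  # prints 한
```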

Despite having their own alphabet, Koreans did not completely abandon Chinese characters, continuing to study them in school and use them in everyday life.


Any educated person is expected to know a minimum (for today’s high school graduates, that minimum is 1,800 characters). Additionally, Chinese characters, alongside the national alphabet, are widely used in science, jurisprudence, and some media.

              Are All Koreans Named Kim?

              Of course, not all of them, but Kim is indeed the most common Korean surname. According to the 2015 census in South Korea, 21.5% of residents (that is, one-fifth of the country) had this surname. The second most common is Lee (14.7%), followed by Park (8.4%), and then Choi (4.7%). Less common surnames are Jung, Kang, Cho, and Yoon.

All Korean surnames have a Chinese-character spelling, meaning they can be written both in the Korean alphabet and in Chinese characters. There are slightly fewer than three hundred such “family” characters — and accordingly about the same number of Korean surnames. But this does not mean that all Kims are related to each other. Each clan has its “bon” — the birthplace of its founder, which is documented for modern Koreans. For example, there are Andong Kims, whose ancestor came from Andong, and Gimhae Kims, who originated from Gimhae.

              The same applies to other surnames. If a couple planning to marry has the same surname but different bons, there will be no problems. If the bons coincide, there may be difficulties — even if the family trees of these people crossed several hundred years ago. Until 1997, such marriages were legally prohibited; now they are still avoided, although the ban has been officially lifted.

Kim Jong-un — Which Is the Surname, Which Is the Name, and Is There a Patronymic?

Kim Jong-un
              Kim Jong-un. Image: Kremlin.ru, CC BY 4.0

              We already mentioned that Kim is the surname. Therefore, the name is Jong-un. Koreans do not have patronymics. Most often, Korean personal names consist of three elements, each read as a syllable. The first syllable is the surname, and the following two are the given name. Occasionally, there are personal names consisting of a single syllable.

In Russian Korean studies, there is a rule that Korean names should be written in two words: first the surname, then the given name as a single word. This method was proposed in the last century by the linguist and orientalist Alexander Alekseevich Kholodovich and his follower Lev Rafailovich Kontsevich. However, despite the rules, many write Korean names in three words.

              Confusion is further compounded by the Koreans themselves, especially when they write their names in foreign languages. Adapting to Western norms, they write their given name first, followed by the surname. Additionally, there are several different systems for Romanizing Hangul, which also leads to misunderstandings and errors.

For example, the name of the founder and first leader of the DPRK sounds in his native language like Kim Ilsong. However, due to certain features of Korean phonetics, dialects, and discrepancies between transliteration systems, it entered Russian as “Kim Ir Sen.” His son’s name, pronounced Kim Jongil in Korean, became “Kim Chen Ir” in Russian. The third Kim — the current leader of North Korea — is known in Russian as “Kim Chen Yn,” whereas in his homeland his name is pronounced Kim Jongun (the first character of his given name is the same as his father’s).

              Why Are There Two Koreas?

              Colonel-level discussions between the US and North Korean militaries on 11 October 1951
              Colonel-level discussions between the US and North Korean militaries on 11 October 1951. Image: U.S. National Archives and Records Administration, Public Domain

This was the result of World War II and the subsequent division of the world into two camps — capitalist and socialist. Japan’s defeat in World War II restored national independence to the Koreans — from 1910 to 1945 Korea was a Japanese colony — but the victorious countries could not agree on an acceptable future for Korea and divided their spheres of influence along the infamous 38th parallel.

In 1948, the Democratic People’s Republic of Korea (DPRK), supported by the USSR, was established in the north, and the Republic of Korea (ROK), allied with the United States, was established in the south. Each Korea wanted to be the only Korea, and in the summer of 1950 the Korean War broke out, nearly escalating into World War III. The conflict lasted three years and ended with a temporary truce that remains in place to this day; the two countries have developed independently ever since, each still striving to prove to itself and the world that it is the true Korea.

              Are Koreans Buddhists?

              According to 2016 statistics, only 16% of the population of the Republic of Korea are Buddhists. There are more Christians (28%), of which 20% are Protestants and only 8% are Catholics. 56% of South Korea’s residents do not adhere to any religious teachings. In the DPRK, freedom of religion is officially declared, but the cult of personality surrounding the country’s leaders and a generally materialistic worldview make any religious activity impossible. The few existing Buddhist monasteries in the country function more like museums.

However, statistics on faith are not the most reliable source. A Western missionary in Korea once said that in their minds Koreans are Christians, in their hearts they are Buddhists, and in their stomachs they are shamanists. Indeed, in the Republic of Korea, shamans (more often shamanesses) and their followers still exist. In big cities, many secular people, when opening a new restaurant or other small business, set up a special ritual table, place on it a pig’s head with a bundle of cash in its mouth, and invite a shamaness to perform a ritual to ensure the project’s success.

              Moreover, on a personal level, people turn to shamans to restore peace in the family, overcome illness, help a child get into a good university, and the like. Shamans are respected at the state level — they are treated as carriers of traditional culture, paid stipends, and famous shamans go on tours, including abroad. Their dances and chants are studied by ethnographers and musicologists.

              Why Is the Carrot “Korean”?

              Morkovcha and other salads at Tolkuchka Bazaar
              Morkovcha and other salads at Tolkuchka Bazaar, Turkmenistan. Image: Kerri-Jo Stewart, CC BY 2.0

In fact, the carrot salad and the other salads sold in Russian markets and stores as Korean are not quite Korean: they are not prepared or eaten in either South or North Korea. All this is the cuisine of the Central Asian Koreans who found themselves in the USSR, mainly on the territory of modern Kazakhstan and Uzbekistan, as a result of the deportation of 1937.

In the new, difficult conditions, people tried to cook familiar food from whatever products were at hand. This is how the Korean pickles and snacks popular in Russia appeared. Only the method of preparation connects them to the native food tradition: in Korea, the basic meal consists of a bowl of rice and soup, necessarily accompanied by several panchan — side dishes of pickled vegetables, herbs, and salted seafood.

                The most famous Korean panchan is the spicy fermented cabbage kimchi. In general, kimchi is made from a variety of ingredients — radish, cucumbers, garlic shoots, sesame leaves, bamboo, and much more. The main thing is that the product is well fermented — otherwise it will not be kimchi.

                Do Koreans Really Eat Dogs?

Yes, but not everyone, and only on special occasions. Dog meat dishes have never been, and are not now, part of the everyday Korean diet; they are considered a special healing food and are eaten for recovery after illness or severe fatigue, as well as traditionally on the three hottest summer days, determined by the lunar calendar: this is thought to help one withstand the heat. However, today this remedy is resorted to mainly by older men, while the young prefer to fight the heat with samgyetang, a chicken soup with ginseng root.

With the spread of Western values and lifestyles in Korea, the practice of eating dog meat is losing popularity, though it has not been abandoned entirely. Society and the authorities consider it more important to introduce reasonable ethical and sanitary standards for the keeping and slaughter of dogs, an area in which there is currently little order.

7. 13 Questions About the Inquisition

                  13 Questions About the Inquisition

                  What Does the Word “Inquisition” Mean, and Who Invented It?

                  Lucius III
                  Lucius III, 1879

The word “inquisition” comes from the Latin inquisitio, which means “investigation,” “inquiry,” or “search.” We know the Inquisition as a church institution, but initially the term referred to a type of criminal procedure. Unlike an accusation (accusatio) or denunciation (denunciatio), where a case was initiated by an open accusation or a secret denunciation, in an inquisitio the court itself opened the process on the basis of existing suspicions and gathered confirming information from the population. The term was coined by lawyers of the late Roman Empire and became established in the Middle Ages in connection with the reception of Roman law — the rediscovery, study, and assimilation of its main monuments in the 12th century.

                  Judicial investigations were practiced by both the royal court — for example, in England — and the Church, in their fight not only against heresy but also against other crimes within the jurisdiction of ecclesiastical courts, including fornication and bigamy. However, the most powerful, stable, and well-known form of church inquisitio became inquisitio hereticae pravitatis, or the search for heretical depravity. In this sense, the term “inquisition” was invented by Pope Lucius III, who, at the end of the 12th century, obliged bishops to search for heretics by traveling around their diocese several times a year and questioning trustworthy local residents about the suspicious behavior of their neighbors.

                  Why Is It Called “Holy”?

                  The Inquisition was not always and everywhere called “holy.” This epithet does not appear in the term “search for heretical depravity,” nor in the official name of the highest body of the Spanish Inquisition — the Council of the Supreme and General Inquisition. The central authority of the papal Inquisition, established during the reform of the papal curia in the mid-16th century, was indeed called the Supreme Sacred Congregation of the Roman and Universal Inquisition. However, the word “sacred” was similarly included in the full names of other congregations or departments of the curia — for example, the Sacred Congregation of Rites or the Sacred Congregation of the Index.

                  At the same time, in common usage and in various documents, the Inquisition began to be called Sanctum Officium — in Spain, Santo Oficio — which translates as “holy office,” or “department,” or “service.” In the first half of the 20th century, this phrase entered the name of the Roman congregation, and in this context, the epithet does not cause surprise: the Inquisition was subordinated to the Holy See and was engaged in the protection of the holy Catholic faith, a mission that was considered not just holy but almost divine.

                  For example, the first historian of the Inquisition, himself a Sicilian inquisitor — Luis de Paramo — began the history of religious investigation with the expulsion from Paradise, making God himself the first inquisitor: he investigated the sin of Adam and punished him accordingly.


                  Who Became Inquisitors, and to Whom Were They Subordinate?

                  Exile from Paradise. Painting by Giovanni di Paolo.
                  Exile from Paradise. Image: Giovanni di Paolo.

                  Initially, for several decades, the popes tried to entrust the Inquisition to the bishops and even threatened to remove those who were negligent in cleansing their dioceses of heretical contagion. However, the bishops were not well-suited to this task: they were busy with their routine duties, and, more importantly, their efforts to combat heresy were hindered by established social ties, primarily with the local nobility, who sometimes openly supported the heretics. Thus, in the early 1230s, the pope entrusted the search for heretics to monks of the mendicant orders — Dominicans and Franciscans.

They had several qualities necessary for this work: they were loyal to the pope, independent of local clergy and lords, and popular among the people for their demonstrative poverty and asceticism. The monks competed with heretical preachers and ensured public cooperation in the search for heretics. Inquisitors were given extensive powers and were independent of local ecclesiastical authorities and papal legates. They were directly subordinate only to the pope, held their powers for life, and in exceptional situations could travel to Rome to appeal to him. Moreover, inquisitors could acquit one another, which made it almost impossible to remove an inquisitor, let alone excommunicate one from the Church.


                  Where Did the Inquisition Exist?

                  The Inquisition — episcopal from the late 12th century and papal, or Dominican, from the 1230s — appeared in southern France. Around the same time, it was introduced in the neighboring Crown of Aragon. In both places, the problem was the eradication of the Cathar heresy: this dualistic doctrine, which came from the Balkans and spread across much of Western Europe, was especially popular on both sides of the Pyrenees.


After the Albigensian Crusade against the heretics (1209–1229), the Cathars went underground; the sword had proved ineffective, and what was needed was the long and tenacious hand of church investigation.

                  Throughout the 13th century, at the papal initiative, the Inquisition was introduced in various Italian states: in Lombardy and Genoa, the Inquisition was managed by the Dominicans, while in Central and Southern Italy, the Franciscans were in charge. Towards the end of the century, the Inquisition was established in the Kingdom of Naples, Sicily, and Venice. In the 16th century, during the Counter-Reformation, the Italian Inquisition, led by the first congregation of the papal curia, gained new momentum in combating Protestants and various free-thinkers.

In the Holy Roman Empire, Dominican inquisitors operated from time to time, but there were no permanent tribunals due to the centuries-long conflict between emperors and popes and the administrative fragmentation of the empire, which made any initiatives at the state level difficult. In Bohemia, there was an episcopal Inquisition, but it was not particularly effective: at least when it came to eradicating the heresy of the Hussites — followers of Jan Hus, the Czech Church reformer burned in 1415 — specialists had to be sent from Italy.

                  At the end of the 15th century, a new, or royal, Inquisition emerged in unified Spain — first in Castile and then again in Aragon. In the early 16th century, it appeared in Portugal, and in the 1570s, in the colonies — Peru, Mexico, Brazil, and Goa.

                  Why Is the Spanish Inquisition the Most Famous?

                  The Inquisition Tribunal. A painting by Francisco Goya
                  The Inquisition Tribunal. Image: Francisco Goya

                  Probably because of its negative publicity. The fact is that the Inquisition became a central element of the so-called “black legend” of Habsburg Spain as a backward and obscurantist country, ruled by arrogant nobles and fanatical Dominicans. The “black legend” was spread by both political opponents of the Habsburgs and the victims — or potential victims — of the Inquisition.

                  Among them were converted Jews — Marranos, who emigrated from the Iberian Peninsula, for example, to the Netherlands, where they cultivated the memory of their fellow martyrs of the Inquisition; Spanish Protestant emigrants and foreign Protestants; residents of non-Spanish possessions of the Spanish crown: Sicily, Naples, the Netherlands, and England during the marriage of Mary Tudor and Philip II, who either resented the introduction of the Inquisition by the Spanish model or only feared it; and French Enlightenment thinkers, who saw the Inquisition as the embodiment of medieval obscurantism and Catholic domination.

                  All of them, in their numerous writings — from newspaper pamphlets to historical treatises — worked long and hard to create the image of the Spanish Inquisition as a terrible monster threatening all of Europe.

                  Finally, by the end of the 19th century, after the abolition of the Inquisition and during the collapse of the colonial empire and deep crisis in the country, the demonic image of the Holy Office was adopted by the Spaniards themselves, who began to blame the Inquisition for all their problems.

                  The conservative Catholic thinker Marcelino Menéndez y Pelayo parodied this line of liberal thought: “Why is there no industry in Spain? Because of the Inquisition. Why are Spaniards lazy? Because of the Inquisition. Why the siesta? Because of the Inquisition. Why the bullfight? Because of the Inquisition.”

                  Who Were They Hunting, and How Was It Determined Who to Execute?

                  At different times and in different countries, the Inquisition was interested in different groups of people. What united them was that they all, in one way or another, deviated from the Catholic faith, thereby damaging their souls and causing “harm and insult” to this very faith. In southern France, these were the Cathars, or Albigensians; in the north, the Waldensians, or Lyonese Poor, another anticlerical heresy striving for apostolic poverty and righteousness.

                  In addition, the French Inquisition persecuted Jewish converts who returned to Judaism and Spirituals — radical Franciscans who took the vow of poverty seriously and were critical of the Church. Sometimes the Inquisition was involved in political cases like the trial of the Knights Templar, accused of heresy and worshiping the devil, or Joan of Arc, accused of roughly the same; in reality, both represented a political obstacle or threat to the king and the English occupiers, respectively. In Italy, there were Cathars, Waldensians, and Spirituals; later, the heresy of the Dolcinists, or Apostolic Brethren, was persecuted. During the Counter-Reformation, they sought out various types of reformers, from the real Protestants who joined Luther and Calvin to the German merchants in Venice who smuggled heretical literature into Italy.

                  The Spanish Inquisition initially opposed the conversos, descendants of converted Jews who were suspected of continuing to practice their former religion in secret. Later, the Inquisition spread to the Moriscos, converts from Islam. In the 16th century, it hunted Protestants and various free-thinkers, including the supporters of Erasmus of Rotterdam. In the 17th century, it prosecuted Jews who migrated from Portugal to Spain and those who were only accused of Judaizing. In the 18th century, it focused on enlightened intellectuals and secret societies like the Masons.

                  In the New World, the tribunals hunted conversos — mainly immigrants from Portugal — and Protestant foreigners.

The Inquisition identified its suspects in several ways. There were public confessions — an annual ceremony of denunciations — and regular purges of monasteries. There were denunciations by neighbors and by hired informants, who received a percentage of the confiscated property or even a salary. There were also denunciations by slaves against their masters, and vice versa; a denunciation, once made, could be withdrawn only by another denunciation. Inquisitors themselves traveled around the country, summoning people for interviews and interrogations. Finally, inquisitors read books and decided whether to initiate a trial based on what they found.

                  Why Were Witches Hunted?

                  Bartolomé Esteban Murillo: The martyrdom of San Pedro de Arbués (1664).
                  Bartolomé Esteban Murillo: The martyrdom of San Pedro de Arbués (1664)

At the turn of the 16th and 17th centuries, a strong belief arose in society in the existence of a secret conspiracy of witches — first in the areas most affected by the religious wars of the 16th century, then in other parts of Europe. It was believed that witches and sorcerers made pacts with the devil and periodically gathered at witches’ sabbaths to hold orgies, kill and eat babies, and commit other atrocities. Against the backdrop of the Protestant Reformation and the Catholic Counter-Reformation, secular and ecclesiastical authorities alike felt the need for uncompromising methods to restore and maintain order — hence the witch hunts.

                  Interestingly, the witch hunt was carried out primarily by secular authorities; the Inquisition, although not opposed to persecuting witches, was more interested in combating heresy and did not have the same fervor or resources for a mass hunt. The Spanish and Portuguese Inquisitions were generally skeptical of witches and sorcerers, considering many of the accusations to be exaggerations.

                  Moreover, in several cases, inquisitors opposed the local authorities, who were eager to execute witches. For example, in 1610, the Spanish Inquisition intervened in the witch trials in the Basque region, where local officials had already executed several people. The inquisitors argued that there was insufficient evidence and released most of the suspects.

                  Why Did the Inquisition Torture People?

                  Torture was used to obtain confessions from suspects. The justification for using torture was based on the Roman legal tradition, where it was an accepted practice for extracting confessions in cases of serious crimes. The medieval Inquisition adopted this practice, considering heresy to be a severe crime that endangered both the individual soul and the Christian community. The Inquisition believed that it was better to extract a confession under torture to save the soul of the accused, who might otherwise be condemned to eternal damnation.

                  However, the use of torture by the Inquisition was regulated by a set of rules. Torture could not be applied without sufficient evidence or indications of guilt, and it could not be used repeatedly on the same individual. Moreover, the torture methods were generally less severe than those used by secular authorities at the time. The Inquisition was also concerned with the possibility of false confessions obtained under torture, so it often sought corroborating evidence before proceeding with punishment.

                  While the use of torture by the Inquisition is often highlighted in historical accounts, it is essential to note that it was a common practice in many judicial systems during the Middle Ages and the early modern period, both secular and religious.

                  Could Inquisitors Accuse a King or, for Example, a Cardinal?

                  Diego Mateo Lopez Zapata in his cell before his trial by the Inquisition
                  Diego Mateo López Zapata in his cell before his trial by the Inquisition Court of Cuenca. (An engraving by Goya)

                  Everyone was subject to the Inquisition: in cases of suspicion of heresy, the immunity of monarchs or church hierarchs did not apply. However, only the Pope himself could convict people of such a rank. There are known cases of high-ranking defendants appealing to the Pope and attempting to remove their case from the jurisdiction of the Inquisition.

                  For example, Don Sancho de la Caballería, an Aragonese grandee of Jewish origin known for his hostility toward the Inquisition, which violated noble immunities, was arrested on charges of sodomy. He enlisted the support of the Archbishop of Zaragoza and complained about the Aragonese Inquisition to the Suprema — the supreme council of the Spanish Inquisition, and then to Rome.

                  Don Sancho insisted that sodomy did not fall under the jurisdiction of the Inquisition and tried to transfer his case to the Archbishop’s court, but the Inquisition had received appropriate authority from the Pope and would not release him. The trial lasted several years and ended in nothing — Don Sancho died in confinement.

                  Did Witches Really Exist, or Were They Just Burning Beautiful Women?

                  The question of the reality of witchcraft obviously goes beyond the historian’s competence.


Let’s put it this way: many people — the persecutors, the victims, and their contemporaries alike — believed in the reality and effectiveness of sorcery. Renaissance misogyny considered it a typically female activity. The most famous anti-witchcraft treatise, the Malleus Maleficarum (“Hammer of Witches”), explains why: women are overly emotional and insufficiently rational. First, they often deviate from the faith and succumb to the devil’s influence; second, they easily get involved in quarrels and brawls and, owing to their physical and legal weakness, resort to witchcraft for protection.

                  Women were not necessarily “appointed” as witches simply because they were young and beautiful, though young and beautiful women were also targeted — in such cases, the accusation of witchcraft often reflected men’s (particularly monks’) fear of female allure. Elderly midwives and healers were also accused of consorting with the devil, likely due to the clergy’s fear of their unfamiliar knowledge and the influence these women held in their communities. Furthermore, single and poor women — the most vulnerable members of society — were frequently accused of witchcraft.

                  According to the theory of British anthropologist Alan Macfarlane, witch hunts in England during the Tudor and Stuart periods (16th–17th centuries) were driven by social changes, such as the breakdown of traditional communities, rising individualism, and increasing economic inequality in rural areas. Wealthy individuals, seeking to justify their prosperity amidst the poverty of their neighbors — particularly single women — began accusing them of witchcraft. The witch hunts thus served as a means of managing communal conflicts and alleviating social tensions.

                  In contrast, the Spanish Inquisition pursued witches far less frequently. Instead, “new Christians” often became scapegoats, especially “new Christian women,” who were sometimes accused of quarrelsomeness or witchcraft in addition to Judaizing.

                  Why Were Witches Burned?

                  Galileo before the trial of the Inquisition. Painting by Joseph-Nicolas Robert-Fleury. 1847
                  Galileo before the trial of the Inquisition. Painting by Joseph-Nicolas Robert-Fleury. 1847

The Church, as is known, should not shed blood; therefore, the bloodless death by fire (sometimes preceded by strangulation as a mercy) seemed preferable. Moreover, it illustrated the Gospel verse: “If anyone does not remain in Me, he is thrown away like a branch and withers; such branches are gathered, thrown into the fire, and burned.” In reality, the Inquisition did not carry out executions itself but “handed over” unrepentant heretics to the secular authorities. According to secular laws adopted in Italy and then in Germany and France during the 13th century, heresy was punishable by loss of rights, confiscation of property, and burning at the stake.

                  Is It True That the Accused Were Constantly Tortured Until They Confessed?

                  There was some truth to this. Although canon law prohibited the use of torture in ecclesiastical court proceedings, in the mid-13th century, Pope Innocent IV legitimized torture in the investigation of heresy through a special bull, equating heretics with robbers who were tortured in secular courts. As we have said, the Church should not shed blood, and inflicting severe injuries was also forbidden, so methods of torture were chosen that involved stretching the body and tearing muscles, crushing certain parts of the body, breaking joints, as well as torture with water, fire, and red-hot iron. Torture was allowed to be applied only once, but this rule was circumvented by declaring each new torture a continuation of the previous one.

                  How Many People Were Burned in Total?

                  Apparently, not as many as one might think, but it is difficult to calculate the number of victims. Speaking of the Spanish Inquisition, its first historian, Juan Antonio Llorente, who was himself the general secretary of the Madrid Inquisition, calculated that in over three centuries of its existence, the Holy Office accused 340,000 people and sent 30,000 to be burned — about 10%. These figures have been revised many times, mainly downward.

Statistical research is complicated by the fact that tribunal archives were damaged: not all of them have survived, and those that did are often incomplete. The archive of the Suprema, to which all tribunals sent annual reports on the cases they had reviewed, is better preserved. Usually, data are available for certain tribunals over certain periods, and these data are extrapolated to other tribunals and other times. This reduces accuracy, however, because the level of severity likely decreased over time.

                  Based on reports sent to the Suprema, it is estimated that from the mid-16th to the end of the 17th century, inquisitors in Castile and Aragon, Sicily and Sardinia, Peru and Mexico considered 45,000 cases and burned at least 1,500 people — about 3%, but half of them in effigy. “At least” because information for many tribunals is available only for part of this period, but one can form an idea of the scale.

                  Even if this figure is doubled and it is assumed that in the first 60 and last 130 years of its activity, the Inquisition killed as many, it would still be far from the 30,000 named by Llorente.
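
The arithmetic behind this comparison, using only the figures quoted above:

\[
\frac{30\,000}{340\,000} \approx 8.8\%, \qquad
\frac{1\,500}{45\,000} \approx 3.3\%, \qquad
1\,500 \times 2 = 3\,000 \ll 30\,000
\]

Even on the doubled count, the documented scale remains an order of magnitude below Llorente’s figure.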

                  The Roman Inquisition of the early modern period is believed to have considered 50,000–70,000 cases and executed around 1,300 people. Witch hunts were more destructive — with several tens of thousands burned. But in general, inquisitors tried to “reconcile” rather than “hand over.”

                  How Did Ordinary People View the Inquisition?

                  Critics of the Inquisition, of course, believed that it enslaved the people, binding them with fear, while the people, in turn, hated it. “In Spain, where fear muted voices, / Ruled Ferdinand and Isabella, / And the Grand Inquisitor reigned with an iron hand over the land,” wrote American poet Henry Longfellow.

                  Modern revisionist researchers refute such a view of the Inquisition, including the idea of violence against the Spanish people, pointing out that in terms of its bloodthirstiness, it was noticeably inferior to German and English secular courts dealing with heretics and witches, or French persecutors of Huguenots, and also that the Spanish themselves never seemed to have anything against the Inquisition until the Revolution of 1820.

There are known cases where people tried to bring themselves under its jurisdiction, considering it preferable to a secular court. Indeed, if one looks at cases involving not Marranos and Moriscos but “Old Christians” from among the common people — accused, for example, of blasphemy out of ignorance, coarseness, or drunkenness — the punishments were relatively mild: a number of lashes, expulsion from the diocese for several years, or imprisonment in a monastery.

                  When Did the Inquisition End?

                  It hasn’t really ended — it just changed its name. The Congregation of the Inquisition (in the first half of the 20th century — the Congregation of the Holy Office) was renamed the Congregation for the Doctrine of the Faith at the Second Vatican Council in 1965, and it still exists today, dealing with the protection of faith and morality among Catholics, in particular investigating sexual crimes by clergy and censoring works by Catholic theologians that contradict Church doctrine.

                  As for the Spanish Inquisition, its activity declined in the 18th century, and it was abolished in 1808 by Joseph Bonaparte. During the restoration of the Spanish Bourbons after the French occupation, it was reinstated, abolished during the “Trienio Liberal” of 1820–1823, reinstated again by the king returned on French bayonets, and finally abolished in 1834.

8. 6 Victims of a Medieval Trial

                  6 Victims of a Medieval Trial

                  Desiderius (Died 607)

                  Desiderius, Archbishop of Vienne, was not only a well-educated man but also a righteous one. However, because he advocated for teaching grammar based on secular literature, the church authorities accused him of an inappropriate love for pagan poets. Additionally, Desiderius openly condemned what he saw as the debauched lifestyles of King Theuderic II, his grandmother Queen Brunhilda, and other secular authorities.

                  The archbishop’s real troubles began after he quarreled with Prothadius, an official with connections to the royal family (it is believed that Prothadius was the lover of the elderly queen). In 603, a noblewoman named Justa, one of the parishioners, suddenly accused Desiderius of rape. Not long before, several similar scandals had erupted in the Frankish kingdoms, and Vienne was the capital of their religious life, so Queen Brunhilda could not ignore this case. Desiderius did not deny his guilt or comment on the incident at all, yet historians believe the accusation was completely fabricated.


                  A church council was convened to judge Desiderius. The case was examined in Cabillonum (modern-day Chalon-sur-Saône), the royal residence of Burgundy. The presiding bishop was Aredius of Lyon, who did not like Desiderius. In the presence of King Theuderic II and Queen Brunhilda, the council quickly found the archbishop guilty and sentenced him to removal from office, deprivation of civil rights, and exile to a monastery on the island of Lérins.

Four years later, Prothadius died, and doubts were raised about the truthfulness of the accusations against Desiderius. Theuderic and Brunhilda restored the exiled archbishop to his post — but not for long. Desiderius again turned the queen against himself, was arrested, and was taken to a village near Lyon (which still bears his name). There, royal soldiers beat him to death without trial or investigation. The church later canonized Desiderius.

                  Conradin (1252–1268)

                  Execution of Conradin by Giovanni Villani
                  Execution of Conradin by Giovanni Villani, Nuova Cronica, 14th century. Image: Public Domain

Conradin, Duke of Swabia and King of Jerusalem and Sicily, lost his father at the age of two and inherited his thrones. In Sicily, his father’s brother Manfred became regent. Upon receiving a false report of his nephew’s death, Manfred declared himself king. When it was revealed that Conradin was still alive, he decided not to return the throne to his nephew but promised to make him his heir.

                  Meanwhile, Pope Clement IV did not recognize the right to Sicily for either of them and blessed Charles I of Anjou to the throne. Charles defeated Manfred’s forces and took possession of the Kingdom of Sicily. Then, Conradin, who was already fifteen years old, won several important battles, but his army was eventually defeated, and he was captured. Most of his supporters were immediately executed, while Conradin and his close associate, Frederick of Baden, were taken to Charles in Naples to be executed following a court ruling.

                  In Naples, a public and allegedly impartial hearing was organized, where the prosecution’s lawyers presented Conradin’s invasion as an act of treason and robbery. They argued that the accused had gone against the Pope, resulting in the deaths of civilians in the area of Tagliacozzo (an Italian town near the site of Conradin’s decisive battle), which was outside the disputed territory of Sicily. They characterized these deaths as murders and crimes against divine and human laws.

                  The defense was represented by a Neapolitan lawyer who claimed that Conradin was the legitimate heir to the throne, and thus his actions should not be seen as defiance of the Pope’s will or as aggression. As for the civilian deaths, he argued they were the result of a military conflict, and Conradin should not be held responsible.

                  Ultimately, four judges found Conradin guilty, adding the charge of insulting the Pope, for which Conradin was first excommunicated. Additionally, the court sentenced Conradin and Frederick of Baden to death. The execution was carried out in 1268: Conradin and several of his associates were beheaded. This execution shocked all of Europe. Conradin was later mentioned as an innocent victim by Dante and Heinrich Heine.

                  Jacques de Molay (1244–1314)

                  Jacques de Molay sentenced to the stake in 1314
                  Jacques de Molay sentenced to the stake in 1314, from the Chronicle of France or of St Denis (fourteenth century). Image: Public Domain

                  In the early 14th century, rumors spread throughout France that the Templars, members of one of the largest and wealthiest military-monastic orders, were forcing new members during initiation to spit on the cross, worship idols, encourage “unnatural lust,” and commit other indecencies and blasphemies. Jacques de Molay, the 23rd and last Grand Master of the order, first appealed to King Philip the Fair and then to Pope Clement V to investigate the defamatory rumors about the order.

                  Both promised to comply with his request, but instead, the king secretly instructed his officials to begin mass arrests of Templars. De Molay was among those arrested. The order’s property was confiscated and added to the royal treasury. Furthermore, Philip sent letters to other European monarchs demanding the extradition of knights who were outside France.


                  At first, they refused, but after the Pope, who was trying to lead the persecution of the Templars, issued a bull essentially requiring compliance with the king’s request, they handed over a few people.

                  This was followed by a lengthy trial during which the original five charges grew to an unprecedented number.

                  During the investigation, the defendants were subjected to the harshest torture, explaining why Molay changed his testimony several times. In October 1307, he admitted that there was indeed a custom in the order to renounce Christ and spit on the cross. He wrote a letter to the members of the order, urging them to follow his example and confess to the offenses described in the letter; many knights followed his advice. But at the first hearing held by the papal commission, Molay recanted his testimony, and Clement V suspended the investigation.

                  Soon, Philip forced the Pope to reopen the hearings. The process was now divided into two parts: the papal court dealt with the cases of individual members, while the fate of the order as a whole was to be determined by a specially convened Council of Vienne. In August 1308, Molay was interrogated again, this time in the presence of royal agents, and returned to his confessional statements, but in 1309, before the papal commission, he denied the charges against the order.

                  Other members of the order began to join him, claiming that their confessions had been extracted under torture. In response, Philip IV executed 54 knights of the order, burning them at the stake as relapsed heretics. In 1312, the Pope finally dissolved the order.

The hearings continued for another two years. Finally, Molay and his closest associates were sentenced to life imprisonment. Most Templars heard the verdict in silence, but Molay and another member of the order, Geoffrey de Charney, declared that they were guilty not of what they had been accused of, but of renouncing the order in an attempt to save their own lives. The Grand Master insisted that the order was holy and innocent. This enraged Philip: without further investigation, Molay and de Charney were condemned as relapsed heretics and swiftly burned at the stake. Before the execution, Molay was offered a quicker death if he confessed his guilt, but he refused.

                  William Wallace (1270–1305)

                  William Wallace's trial in Westminster Hall
                  Wallace’s trial in Westminster Hall. Painting by Daniel Maclise

                  In 1296, King Edward I of England proclaimed himself King of Scotland. The following year marked the beginning of the First War of Scottish Independence. The war began with the murder of the English sheriff of Lanark by William Wallace, a member of the lower Scottish nobility. According to legend, Wallace was avenging his wife’s death. The war gradually spread throughout Scotland, and the Scottish nobility began to join the rebels. The struggle proceeded with varying success, and at one point, only a few castles remained in English hands.

                  However, in 1305, Wallace was captured near Glasgow and taken to London.

His trial took place in Westminster Hall before justices appointed by King Edward I. Wallace was accused of treason, the murders of royal officials and civilians, sacrilege, and the burning of churches and relics. He was not allowed to defend himself; instead, he had to listen in silence to the full list of acts of which he was accused. Nevertheless, in response to the accusations, he uttered the famous phrase that he could not be a traitor because he had never been Edward’s subject nor sworn allegiance to him.

                  This remark, essentially true, changed nothing: in front of a large crowd, it was announced that Wallace was sentenced to the punishment reserved for all traitors to the crown. He was tied to a horse and dragged through the streets from the Tower to Smithfield, the execution ground in London. There, he was hanged, drawn, and quartered, and his head was cut off. Parts of his body were displayed in major Scottish cities to intimidate their inhabitants.

                  John Ball (1330–1381)

                  Medieval drawing of John Ball giving hope to Wat Tyler's rebels
                  Medieval drawing of John Ball giving hope to Wat Tyler’s rebels

The English priest John Ball preached social equality and sympathized with ideas that anticipated Protestantism. His radical views saw him repeatedly imprisoned for heresy. At one point, the Archbishop of Canterbury banned him from preaching and forbade congregations to listen to his sermons, apparently excommunicating him as well. None of this stopped Ball: he became a wandering preacher without a parish and continued to enjoy popularity among the common people.

                  In the spring of 1381, the first major peasant uprising began, led by a village craftsman named Wat Tyler. According to one version, at this time Ball was once again in prison, where he was to remain for three months, but was freed by the rebels. The preacher then joined the uprising and became its ideological leader, proclaiming that slavery was unnatural because all people are born equal, and therefore, serfdom in England should be abolished. In his most famous sermon, he said, “When Adam delved and Eve span, who was then the gentleman?”

By the summer, the rebellion had reached London. The young King Richard II, alarmed by the scale of the revolt, sought to appease the rebels, assuring them that he would meet their demands. However, during one of the king’s meetings with Tyler, the latter was killed. Deprived of its leader, the rebellion gradually subsided.

                  The peasants, trusting the king’s promises, dispersed and returned home.

After this, John Ball was arrested and brought to trial the very next day. Many of the rebels were merely fined or given short prison sentences. Ball, however, did not deny the charges against him, refused to renounce his ideas or express any regret, and fully acknowledged his authorship of the speeches and letters attributed to him; he was sentenced to death as a traitor. In the presence of King Richard II, he was hanged, drawn, and quartered, and his head was displayed on London Bridge as a reminder of the fate that awaited rebels.

                  Girolamo Savonarola (1452–1498)

                  Savonarola's execution in the Piazza della Signoria
                  Savonarola’s execution in the Piazza della Signoria, painting by Filippo Dolciati (1498). Image: Public Domain

                  At the height of humanism, with its cult of classical literature, pursuit of spiritual pleasures, and proclaimed freedom of human will, Savonarola, a Florentine Dominican priest, delivered sermons at the church of San Marco, denouncing vice and predicting divine punishment that would befall Italy for its sins. He openly criticized Lorenzo de’ Medici, the ruler of Florence, and later his son Piero the Unfortunate, accusing them of debauchery, lack of piety, and passion for luxury. He also spoke out against the Pope and prelates who were excessively fond of Greek poetry and the arts.

                  Savonarola became an extraordinarily influential preacher. He claimed that God’s words descended directly to him from heaven, thus placing himself on par with Old Testament prophets. His credibility was strengthened after several of his prophecies came true.

In 1494, the French King Charles VIII invaded Italy and ousted Piero de’ Medici. Savonarola became the de facto ruler of Florence and began its purification. He reformed the governance of the republic and punished impious citizens: dandies, gamblers, blasphemers, and the depraved. In February 1497, a massive “Bonfire of the Vanities” took place in the city, in which expensive clothes, mirrors, playing cards, books, musical instruments, paintings, and sculptures were burned; several of Botticelli’s works are believed to have perished in this fire.

His fanatical zeal and uncompromising sermons earned Savonarola many enemies. In May 1497, Pope Alexander VI excommunicated Savonarola and demanded that the Florentine authorities either imprison him immediately or send him to Rome, threatening Florence with an interdict, a ban on religious rites, if they disobeyed, and excommunication for anyone who listened to the preacher.

                  Eventually, under pressure from Rome, the city authorities banned Savonarola from preaching. In response, the priest wrote a letter to Charles VIII, proposing to depose the Pope, but the letter never reached the king as it was intercepted and delivered to the Pope himself.

                  Savonarola was challenged to prove his righteousness through an ordeal: he and a Franciscan monk were to pass through several fires—it was assumed that the one on God’s side would remain unharmed. Without notifying Savonarola, one of his followers volunteered to undergo the ordeal. On the day of the test, the whole city gathered in the square, but rain prevented the ordeal from taking place. The absence of the promised miracle and the fact that another person was to undergo the trial instead of Savonarola led to a loss of popular support. He had to take refuge from the angry crowd in the monastery of San Marco. The next day, the crowd besieged the monastery, and, choosing between the enraged people and the authorities, Savonarola chose the latter.

Contrary to the Pope’s wishes, the trial took place not in Rome but in Florence; the Pope, however, appointed a special commission of seventeen of Savonarola’s opponents to review the case. During the investigation, Savonarola was tortured fourteen times a day; under torture he repeatedly confessed that all his teachings, visions, and sermons were false, but as soon as it stopped, he retracted his words. Since this could not serve as convincing evidence, the defendant’s testimony was eventually falsified and published in a more consistent form. The court sentenced him to death. Before a large crowd, Savonarola and his followers were hanged, and their bodies were then burned. Two centuries later, the church exonerated Savonarola, and today his canonization is under discussion.

                3. History of Japan in 20 Points

                  History of Japan in 20 Points

                  The Mythical First Emperor Ascended to the Throne – February 11, 660 BC

                  Emperor Jimmu, ukiyo-e by Tsukioka Yoshitoshi (1880)
                  Emperor Jimmu, ukiyo-e by Tsukioka Yoshitoshi (1880)

Information found in ancient Japanese mythological and historical chronicles has made it possible to establish the date of the accession of the mythical first emperor Jimmu, from whom the imperial family of Japan is said to descend. On this day, Jimmu, a descendant of the sun goddess Amaterasu, underwent an enthronement ceremony in the capital he had founded, at a place called Kashihara.

In reality, there was no statehood in Japan at that time, nor did Jimmu or even the Japanese people as such yet exist. The myth, however, was integrated into everyday life and became part of history. In the first half of the 20th century, the day of Jimmu’s enthronement was a national holiday on which the reigning emperor participated in prayers for the well-being of the country. In 1940, Japan celebrated the 2,600th anniversary of the founding of the empire.

Because of the worsening international situation, however, Japan had to abandon its plans to host the Olympic Games and the World Expo that year. The symbol of the latter was to be Jimmu’s bow and the golden kite that appears in the myth:

                  “The army of Jimmu fought the enemy but could not defeat them. Suddenly, the sky was covered with clouds, and hail began to fall. Then, a miraculous golden kite appeared and perched on the upper edge of the imperial bow. The kite glowed and sparkled like lightning. The enemies saw this, were thrown into complete confusion, and lost all will to fight.”

Nihongi: Chronicles of Japan from the Earliest Times to A.D. 697

                  After Japan’s defeat in World War II in 1945, references to Jimmu became rare and cautious due to his strong association with militarism.

                  The First Legal Code – 701

In the early 8th century, Japan continued its active efforts to build institutions of power and to establish norms governing relations between the state and its subjects. The Japanese state model was based on the Chinese one. Japan’s first legal code, compiled in 701 and enacted in 702, was called the Taihō Code.

Its structure and specific provisions were based on Chinese legal thought, although there were significant differences. For example, the norms of criminal law were elaborated in far less detail in the Japanese code, reflecting a cultural peculiarity of the Japanese state: it preferred to delegate the punishment of offenders and to replace physical punishment with exile, so as to avoid the ritual impurity (kegare) caused by death. Thanks to the Taihō Code, historians describe the Japan of the 8th–9th centuries as a “state founded on laws.” Although some of its provisions were obsolete even at the time of its creation, the code formally remained in force until the adoption of Japan’s first constitution in 1889.

                  The Founding of Japan’s First Permanent Capital – 710

                  Plan of Nara Yamato Province
                  Plan of Nara, Yamato Province, 1844. Image: UBC Library Digitization Centre

                  The development of statehood required the concentration of the court elite and the establishment of a permanent capital. Until that time, each new ruler built a new residence, as staying in a palace defiled by the death of a previous ruler was considered dangerous. However, by the 8th century, the model of a nomadic capital no longer matched the scale of the state. Nara became Japan’s first permanent capital.

                  The site for its construction was chosen based on geomantic principles of spatial protection: a river to the east, a pond and plains to the south, roads to the west, and mountains to the north. These principles would later be used for selecting sites for the construction of cities and aristocratic estates.

                  Nara was laid out as a rectangle covering 25 square kilometers, copying the structure of the Chinese capital, Chang’an. Nine vertical and ten horizontal streets divided the area into equal blocks. The central Suzaku Avenue stretched from south to north, ending at the gates of the imperial residence. The title of the Japanese emperor, Tennō, also referred to the Pole Star, positioned immovably in the northern sky. Like the star, the emperor surveyed his domains from the north of the capital. The districts adjacent to the palace complex were the most prestigious; exile from the capital to the provinces could be a severe punishment for an official.

                  Attempted Soft Coup – 769

Political struggle in Japan took various forms during different historical periods, but a common feature was the absence of attempts to seize the throne by those outside the imperial family. The sole exception was the monk Dōkyō. Coming from the impoverished provincial Yuge family, he rose from ordinary monk to powerful ruler of the country. His ascent was all the more surprising given that the social structure of Japanese society strictly determined a person’s fate: ancestry played the decisive role in the assignment of court ranks and government positions. Dōkyō joined the staff of court monks in the early 750s.

                  Monks of that time were not only trained in Chinese literacy, which was necessary for reading Buddhist texts translated from Sanskrit in China, but also possessed other useful skills, particularly in medicine. Dōkyō earned a reputation as a skilled healer, which likely led to his being sent to the ailing former Empress Kōken in 761. Not only did he manage to cure the former empress, but he also became her closest advisor.

                  According to the collection of Buddhist legends “Nihon Ryōiki,” Dōkyō shared a pillow with the empress and ruled the empire. Kōken ascended the throne again under the name Shōtoku and introduced new positions specifically for Dōkyō, which were not provided for by law and granted him extensive powers.

The empress’s trust in Dōkyō was boundless until 769, when, exploiting the faith in prophecies, he declared that the god Hachiman of the Usa shrine wished Dōkyō to become the new emperor. The empress demanded confirmation from the oracle, and this time Hachiman declared: “From the beginning of our state to this day, it has been determined who should be the ruler and who should be the subject. Never has a subject become the ruler. The throne of the sun in the heavens must be inherited by the imperial house. The unrighteous should be exiled.”

After the empress’s death in 770, Dōkyō was stripped of all ranks and positions and expelled from the capital. A cautious attitude toward the Buddhist church persisted for several decades afterward. It is believed that the transfer of the capital from Nara to Heian, completed in 794, was partly motivated by the state’s desire to free itself from the influence of the Buddhist schools: not one of the Buddhist temples was moved from Nara to the new capital.

                  Establishing Control Over the Imperial Family – 866

                  The most effective tool for political struggle in traditional Japan was establishing kinship ties with the imperial family and holding positions that allowed one to dictate their will to the ruler. The Fujiwara family excelled in this more than others, supplying brides to emperors for a long time and, from 866, securing a monopoly on appointments to the positions of regents (Sesshō) and, slightly later (from 887), chancellors (Kampaku). In 866, Fujiwara no Yoshifusa became the first regent in Japanese history not of imperial descent.

                  Regents acted on behalf of underage emperors, who lacked their own political will, while chancellors represented adult rulers. They not only controlled current affairs but also determined the order of succession to the throne, forcing the most active rulers to abdicate in favor of minor heirs, who typically had kinship ties to the Fujiwara.

                  The regents and chancellors reached the height of their power by 967. The period from 967 to 1068 is known in historiography as the “Sekkan Period”—the “Era of Regents and Chancellors.” Over time, they lost influence, but their positions were not abolished. Japanese political culture is characterized by the nominal preservation of old institutions of power while creating new ones that duplicate their functions.

                  Cessation of Official Relations Between Japan and China – 894

                  Foreign contacts of ancient and early medieval Japan with continental powers were limited. They mainly consisted of exchanges of embassies with the states of the Korean Peninsula, the Bohai state, and China. In 894, Emperor Uda summoned officials to discuss the details of the next embassy to the Middle Kingdom. However, the officials advised against sending an embassy at all.

The influential politician and renowned poet Sugawara no Michizane particularly insisted on this, his main argument being the unstable political situation in China. From this time on, official relations between Japan and China ceased for a long period. In historical perspective, this decision had many consequences: the absence of direct cultural influence from abroad forced a rethinking of earlier borrowings and the development of uniquely Japanese cultural forms.

                  This process was reflected in almost all aspects of life, from architecture to fine literature. China ceased to be considered a model state, and later Japanese thinkers would often point to the political instability on the continent and the frequent change of ruling dynasties to justify Japan’s uniqueness and superiority over the Middle Kingdom.

                  Introduction of the Mechanism of Abdication – 1087

A system of direct imperial governance is not typical of Japan. Real politics was conducted by the emperor’s advisors, regents, chancellors, and ministers. This, on the one hand, stripped the reigning emperor of many powers, but on the other, made it impossible to criticize him personally: the emperor was occupied with the sacred governance of the state. There were, however, exceptions.

                  One of the ways emperors regained political authority was through the mechanism of abdication, which allowed a ruler to govern without being bound by ritual obligations when transferring power to a loyal heir. In 1087, Emperor Shirakawa abdicated in favor of his eight-year-old son Horikawa, then took monastic vows, but continued to manage court affairs as an ex-emperor. Until his death in 1129, Shirakawa dictated his will to both the reigning emperors and the regents and chancellors from the Fujiwara clan.

                  This form of governance by abdicated emperors became known as insei — “rule from the cloister.” Despite the reigning emperor holding sacred status, the ex-emperor was considered the head of the family, and according to Confucian teachings, his will had to be obeyed by all junior family members. The Confucian type of hierarchical relations was also prevalent among the descendants of Shinto deities.

                  Establishment of Dual Governance in Japan – 1192

                  In traditional Japan, military professions and forceful methods of conflict resolution were not particularly prestigious. Civil officials, who were literate and able to compose poetry, were preferred. However, in the 12th century, the situation changed. Representatives of provincial military houses, particularly the Taira and Minamoto, emerged on the political scene with significant influence. The Taira achieved what was previously impossible — Taira no Kiyomori occupied the position of chief minister and managed to place his grandson on the imperial throne.

                  By 1180, dissatisfaction with the Taira from other military houses and members of the imperial family reached its peak, leading to a protracted military conflict known as the “Genpei War.” In 1185, the Minamoto, under the leadership of the talented administrator and ruthless politician Minamoto no Yoritomo, achieved victory.

                  However, instead of restoring power to the court aristocrats and members of the imperial family, Minamoto no Yoritomo systematically eliminated his competitors, established himself as the sole leader of the military houses, and in 1192 received the title of Seii Taishogun — “Great General who Subdues the Barbarians” — from the emperor.

                  From this time until the Meiji Restoration in 1867–1868, a system of dual governance was established in Japan. Emperors continued to perform rituals, while the shoguns, as military rulers, conducted actual political governance, managed foreign relations, and often interfered in the internal affairs of the imperial family.

                  Attempted Mongol Invasion of Japan – 1281

                  The defeat of the Mongols in 1281.
                  The defeat of the Mongols in 1281.

                  In 1266, Kublai Khan, who had conquered China and founded the Yuan Dynasty, sent a message to Japan demanding recognition of Japan’s vassal status. He received no response. Later, several similar messages were sent without success. Kublai Khan began preparing a military expedition to the shores of Japan, and in the fall of 1274, the Yuan fleet, which included Korean units and totaled 30,000 men, pillaged the islands of Tsushima and Iki and reached Hakata Bay.

                  The Japanese forces were inferior in both numbers and weaponry, but direct military confrontation was largely avoided. A storm scattered the Mongol ships, forcing them to retreat. Kublai Khan made a second attempt to conquer Japan in 1281. The military campaign lasted just over a week before events repeated: a typhoon destroyed much of the massive Mongol fleet and thwarted plans to subjugate Japan.

                  These campaigns are linked to the emergence of the concept of kamikaze, which literally means “divine wind.” To the modern person, kamikaze is primarily associated with suicide pilots, but the term is much older. According to medieval beliefs, Japan was the “land of the gods.” Shinto deities, who inhabited the archipelago, protected it from external harmful influences. This was confirmed by the “divine wind” that twice prevented Kublai Khan from conquering Japan.

                  Schism within the Imperial House – 1336

                  Traditionally, it is believed that the Japanese imperial line has never been interrupted, which allows us to speak of the Japanese monarchy as the oldest in the world. However, there were periods in history when the ruling dynasty experienced schisms. The most serious and prolonged crisis, during which two sovereigns ruled Japan simultaneously, was triggered by Emperor Go-Daigo. In 1333, the position of the Ashikaga military house, led by Ashikaga Takauji, strengthened. The emperor sought their assistance in his struggle against the shogunate.

                  In return, Takauji desired the position of shogun and aimed to control Go-Daigo’s actions. The political struggle turned into open military conflict, and in 1336, Ashikaga’s forces defeated the imperial army. Go-Daigo was forced to abdicate in favor of a new emperor favored by Ashikaga. Unwilling to accept these circumstances, Go-Daigo fled to Yoshino in the Yamato Province, where he founded the so-called Southern Court. Until 1392, two centers of power existed in Japan — the Northern Court in Kyoto and the Southern Court in Yoshino.

                  Both courts had their own emperors and appointed their own shoguns, making it virtually impossible to determine the legitimate ruler. In 1391, Shogun Ashikaga Yoshimitsu proposed a truce to the Southern Court, promising that henceforth, the throne would alternate between representatives of the two lines of the imperial family. The proposal was accepted, ending the schism, but the shogunate did not keep its promise: the throne was occupied by representatives of the Northern Court. Historically, these events were perceived very negatively.

                  For example, in history textbooks written during the Meiji period, the Northern Court was often ignored, and the period from 1336 to 1392 was referred to as the Yoshino Period. Ashikaga Takauji was portrayed as a usurper and an enemy of the emperor, while Go-Daigo was described as an ideal ruler. The schism within the ruling house was seen as an unacceptable event, one that should not be remembered unnecessarily.

                  Beginning of the Period of Feudal Fragmentation – 1467

                  Neither the shoguns from the Minamoto dynasty nor those from the Ashikaga family were sole rulers who commanded all the military houses of Japan. Often, the shogun acted as an arbitrator in disputes arising between provincial warriors. Another prerogative of the shogun was the appointment of military governors in the provinces. These positions became hereditary, which contributed to the enrichment of certain clans. The rivalry between military houses for positions, as well as the struggle for the right to be called the head of a clan, did not bypass the Ashikaga family.

                  The shogunate’s inability to resolve accumulated contradictions led to major military clashes that lasted 10 years. The events of 1467–1477 are known as the “Ōnin-Bunmei War.” Kyoto, then the capital of Japan, was almost completely destroyed, the Ashikaga shogunate lost its authority, and the country was left without a central government. The period from 1467 to 1573 is called the “Era of Warring Provinces.” The absence of a real political center and the strengthening of provincial military houses, which began issuing their own laws and introducing new systems of ranks and positions within their domains, led to a period of feudal fragmentation in Japan.

                  Arrival of the First Europeans – 1543

                  Maps of Japan by Luis Teixeira Iaponia nova discriptio. 1636.
                  Maps of Japan (Teixeira Iaponia nova discriptio, 1636). Image: Luis Teixeira

                  The first Europeans to set foot on Japanese soil were two Portuguese traders. On the 25th day of the 8th month in the 12th year of Tenbun (1543), a Chinese junk with two Portuguese on board was driven to the southern tip of the island of Tanegashima. The negotiations between the newcomers and the Japanese were conducted in writing. Japanese officials could write in Chinese, but they did not understand the spoken language.

                  The characters were drawn directly in the sand. It was established that the junk had been accidentally driven ashore by a storm, and these strange people were traders. Soon they were received at the residence of Tokitaka, the ruler of the island. Among the various exotic items they brought were muskets. The Portuguese demonstrated the capabilities of firearms. The Japanese were struck by the noise, smoke, and firepower: the target was hit from a distance of 100 paces. Two muskets were immediately purchased, and Japanese blacksmiths were tasked with setting up their own firearm production.

By 1544, several gunsmith workshops were operating in Japan. Contacts with Europeans soon intensified. Besides weapons, the newcomers spread Christian teachings throughout the archipelago. In 1549, the Jesuit missionary Francis Xavier arrived in Japan. He and his followers engaged in active proselytizing and converted many Japanese lords (daimyo) to Christianity. The character of Japanese religious consciousness encouraged a calm attitude toward the new faith: adopting Christianity did not necessarily mean renouncing Buddhism or belief in the Shinto deities. Christianity was later banned in Japan under threat of the death penalty, as it was judged to undermine state authority and to provoke unrest and uprisings against the shogunate.

                  The Beginning of the Unification of Japan – 1573

                  Among the historical figures of Japan, perhaps the most recognizable are the military leaders known as the Three Great Unifiers: Oda Nobunaga, Toyotomi Hideyoshi, and Tokugawa Ieyasu. Their actions are believed to have overcome feudal fragmentation and unified the country under a new shogunate, founded by Tokugawa Ieyasu. The unification began with Oda Nobunaga, an outstanding military commander who managed to subdue many provinces thanks to the talents of his generals and the skilled use of European weapons in battle.

                  In 1573, he expelled Ashikaga Yoshiaki, the last shogun of the Ashikaga dynasty, from Kyoto, making it possible to establish a new military government. According to a proverb from the 17th century: “Nobunaga kneaded the dough, Hideyoshi baked the cake, and Ieyasu ate it.” Neither Nobunaga nor his successor, Hideyoshi, were shoguns. It was only Tokugawa Ieyasu who obtained the title and secured its hereditary transfer, but this would have been impossible without the actions of his predecessors.

                  Attempts at Military Expansion on the Mainland – 1592

                  Kiyomasa hunting a tiger in Korea
                  Kiyomasa hunting a tiger in Korea. Tiger hunting was a common pastime for the samurai during the war. Image: Public Domain

                  Toyotomi Hideyoshi was not of noble birth, but his military achievements and political intrigue allowed him to become the most influential person in Japan. After Oda Nobunaga’s death in 1582, Hideyoshi dealt with General Akechi Mitsuhide, who betrayed Oda. Revenge for his lord greatly increased Toyotomi’s authority among his allies, who rallied under his command. He managed to subdue the remaining provinces and strengthen his ties not only with the military houses but also with the imperial family.

                  In 1585, he was appointed as Kampaku (Chancellor), a position traditionally held only by aristocrats from the Fujiwara clan. His legitimacy was now based not only on arms but also on the Emperor’s will. After unifying Japan, Hideyoshi attempted external expansion on the mainland. The last time Japanese troops participated in mainland military campaigns was in 663. Hideyoshi planned to conquer China, Korea, and India. However, these plans were not realized.

                  The events from 1592 to 1598 are known as the Imjin War, during which Toyotomi’s forces fought unsuccessful battles in Korea. After Hideyoshi’s death in 1598, the expeditionary corps was urgently recalled to Japan. Japan would not attempt any further military expansion on the mainland until the end of the 19th century.

                  Completion of Japan’s Unification – October 21, 1600

                  The founder of the third and final shogunate dynasty in Japanese history was the military leader Tokugawa Ieyasu. He was granted the title of Seii Taishogun by the Emperor in 1603. Tokugawa secured his position as the head of the military houses through his victory at the Battle of Sekigahara on October 21, 1600. All military houses that fought on Tokugawa’s side were called fudai daimyo, while his opponents were called tozama daimyo.

                  The former received fertile lands and the opportunity to hold government positions in the new shogunate. The lands of the latter were confiscated and redistributed. The tozama daimyo were also excluded from participating in government, leading to dissatisfaction with Tokugawa’s policies. Members of the tozama daimyo would later become the main force of the anti-shogunate coalition that brought about the Meiji Restoration in 1867-1868. The Battle of Sekigahara marked the end of Japan’s unification and paved the way for the establishment of the Tokugawa shogunate.

                  The Edict on National Isolation – 1639

                  The period of Tokugawa shoguns’ rule, also known as the Edo period (1603-1867) after the name of the city (Edo, now Tokyo) where the shogunate’s residence was located, was characterized by relative stability and the absence of serious military conflicts. This stability was partly achieved through the rejection of foreign contacts. Beginning with Toyotomi Hideyoshi, Japanese military rulers implemented a consistent policy to limit European activities on the archipelago: Christianity was banned, and the number of ships allowed to enter Japan was restricted.

Under the Tokugawa shoguns, the process of national isolation was completed. In 1639, an edict was issued prohibiting any Europeans, except for a limited number of Dutch traders, from entering Japan. The year before, the shogunate had struggled to suppress a peasant uprising in Shimabara that had been waged under Christian slogans. From then on, the Japanese were also forbidden to leave the archipelago. The seriousness of the shogunate’s intentions was confirmed in 1640, when the crew of a ship that had arrived in Nagasaki from Macau to renew relations was arrested: sixty-one people were executed, and the remaining thirteen were sent back. The policy of self-isolation lasted until the mid-19th century.

                  The Beginning of Japan’s Cultural Flourishing – 1688

                  Location of Tokugawa Shogunate japan map
                  Japan in Provinces in the time of Tokugawa Ieyasu. Image: Wikimedia, CC BY-SA 3.0

                  During the Tokugawa shogunate’s rule, urban culture and entertainment flourished. The peak of creative activity occurred during the Genroku era (1688-1704). It was during this time that the playwright Chikamatsu Monzaemon, later called the “Japanese Shakespeare,” the poet Matsuo Basho, a reformer of the haiku genre, and the writer Ihara Saikaku, known to Europeans as the “Japanese Boccaccio,” created their works. Saikaku’s works were secular in nature and often humorously depicted the everyday life of townspeople. The Genroku years are considered the golden age of Kabuki theater and the Bunraku puppet theater. This period also saw the active development of crafts as well as literature.

                  Meiji Restoration and Modernization of Japan – 1868

                  The end of the military houses’ rule, which lasted over six centuries, came during the events known as the “Meiji Restoration.” A coalition of warriors from the Satsuma, Choshu, and Tosa domains forced Tokugawa Yoshinobu, the last shogun in Japanese history, to return supreme power to the Emperor. This marked the beginning of Japan’s active modernization, accompanied by reforms in all areas of life. Western ideas and technologies were quickly adopted. Japan embarked on a path of Westernization and industrialization.

                  The reforms during Emperor Meiji’s reign were carried out under the slogan “Japanese spirit, Western technology,” reflecting the specifics of how the Japanese incorporated Western ideas. Universities were opened, compulsory elementary education was introduced, the army was modernized, and a Constitution was adopted. During Emperor Meiji’s reign, Japan became an active political player: it annexed the Ryukyu Archipelago, colonized Hokkaido, won the Sino-Japanese and Russo-Japanese wars, and annexed Korea. After the restoration of imperial power, Japan participated in more military conflicts than during the entire period of the military houses’ rule.

                  Surrender in World War II and the Beginning of American Occupation – September 2, 1945

                  World War II ended on September 2, 1945, after the act of full and unconditional surrender of Japan was signed aboard the American battleship “Missouri.” The American military occupation of Japan continued until 1951.

                  During this time, there was a complete reassessment of the values that had taken hold in the Japanese consciousness since the beginning of the century. Even such a once unshakable truth as the divine origin of the imperial lineage was subject to reconsideration.

On January 1, 1946, a decree on the building of a new Japan was issued in the name of Emperor Shōwa; it contained a provision that became known as the “Declaration of Humanity,” in which the emperor renounced his divine status. The decree also formulated the concept of Japan’s democratic transformation and renounced the idea that “the Japanese people are superior to other peoples and destined to rule the world.” On November 3, 1946, a new Constitution of Japan was adopted, coming into force on May 3, 1947. Under Article 9, Japan renounced “war as the sovereign right of the nation” for all time and declared that it would not maintain armed forces.

                  The Beginning of Post-War Reconstruction of Japan – 1964

                  Post-war Japanese identity was built not on the idea of superiority, but on the idea of the uniqueness of the Japanese people. In the 1960s, a phenomenon known as nihonjinron — “discussions about the Japanese” — began to develop. Numerous articles written within this framework demonstrated the uniqueness of Japanese culture, the peculiarities of Japanese thinking, and admired the beauty of Japanese art.

                  The rise of national consciousness and reassessment of values were accompanied by world-scale events held in Japan.

In 1964, Japan hosted the Summer Olympics, the first ever held in Asia. Preparations for the games included urban infrastructure projects that became a source of national pride, among them the high-speed Shinkansen trains launched between Tokyo and Osaka, now famous worldwide. The Olympics became a symbol of the return of a changed Japan to the global community.