Category: History

Witness the transformation across time and interpret the past of human societies while shedding light on the most prominent events.

  • Books in the Middle Ages

    Books in the Middle Ages

    During the Middle Ages, books were primarily the labor of monk scribes, tasked with transcribing manuscripts. The scribe initially prepares the parchment by drawing lines and reserving margins and spaces for illumination.

    These illuminations, found within the works, serve more than just decorative purposes; they often have precise functions, aiding in the comprehension of the text for those unable to read. Most works comprise excerpts from the Bible, liturgical texts, or copies of classical works. They feature wooden covers, frequently reinforced with metal, and are fastened together with clasps. Alternatively, some are bound with leather covers, occasionally embellished with gold and silver, enamels, and precious stones.


    It is necessary to keep in mind that the vast majority of men and women in the Middle Ages could not read and did not have the material means to access culture, which was the prerogative of wealthy lords and ecclesiastics. The book then served as a support for the monk’s sacred meditation on scripture, as entertainment for princes in the form of novels or hunting treatises, and later as a tool for diligent students struggling with a Latin grammar manual. The book is not only a text that takes increasingly varied forms but also a fabulous repertoire of images.

    The illustration of devotional books or secular works acquired particular importance during this period: the image accompanies and enriches the text, with the greatest artists participating in the decoration of manuscripts. Painting is in the books!

    The history of the book evolved significantly before reaching its definitive form in the Middle Ages. This history fits between two major technical developments: the appearance of the codex in the first century AD and the invention of printing around 1450. In antiquity, writing materials were as varied as they were ingenious: wooden tablets coated with wax, clay tablets, tree bark, silk fabric strips in China, and papyrus scrolls in Egypt, Greece, and Rome. These materials remained in use for ephemeral documents, such as the “beresty,” rough drafts scribbled on birch bark by Russian merchants.

    Writing Materials in the Middle Ages

    What were the three main writing materials in the Middle Ages? Papyrus, parchment, and paper. Papyrus, associated with ancient Egypt, from which it originates, remained widely used in the Mediterranean world, particularly by the pontifical chancellery. Around 1051, it was supplanted by parchment (which takes its name from the city of Pergamon in Asia Minor). Parchment had spread during the 3rd and 4th centuries thanks to technical improvements. All kinds of animals could provide skins for its production: goat and sheep skins yield an ordinary quality called “basane,” while calfskin yields “vellum,” a fine and prized quality but also the most expensive.

    The parchment makers settled in cities or near monasteries. The production of parchment is long and meticulous. The skins are sold in bundles, folded in two or four (the fold determines the formats). They can be dyed red or black, with gold or silver letters for luxury manuscripts. The skin is stronger and more resistant to fire; it can be used for bindings or scraped and rewritten.

    Paper, which appeared in the West at the end of the Middle Ages, was invented in China around 105 AD, and its dissemination followed the Silk Road. Made from rags soaked in a lime bath, it consists of fibers crossed and stretched on frames. The paper mill and the press improved its manufacture. Paper eventually became popular thanks to its very competitive price (thirteen times cheaper than parchment in the 15th century).

    Writings intended to last were transcribed on scrolls of papyrus or parchment. The codex (a book of parallelepiped shape, first mentioned around 84–86 AD) quickly became a real success: more practical than the scroll, it allows writing on a table or desk. Bibles in the form of codices are mentioned as early as the 2nd century.

    The Scribes and Their Tools

    The scribe is a great specialist in writing, a slow and tedious task. They practice on wax tablets that they engrave with a metal, bone, or ivory stylus. To trace letters on parchment or paper, they have three essential tools: the metal point (a “mine” of lead, silver, or tin) used for drafts and for ruling lines to present homogeneous pages; the “calamus” (cut reed); and finally the bird-feather pen.

    Duck, raven, swan, vulture, or pelican feathers are used for writing, the best being the goose feather! The scribe cuts the feather with a knife; the cut determines the rhythm of the writing: emphasized verticals, finer horizontals, and alternations of full and thin strokes.

    Black ink is obtained by boiling plant substances such as gall nuts and adding lead or iron sulfates.


    Red ink is reserved for the titles of works and chapters (this custom gave its name to “rubrics,” a term derived from the Latin “ruber,” meaning red). In the absence of a table of contents, these rubrics allow the reader to navigate the manuscript more quickly. A manuscript can also be divided into sections and distributed to several scribes who share the work to speed up copying.

    Illuminations and Miniatures in Medieval Books

    Works with illustrations are rare due to their high cost. Illumination serves a dual function: decorative, enhancing the work, and pedagogical, shedding light on the text. The illuminator receives a sheet of parchment already written on, with spaces delineated by the scribe for painting. Several hands contribute to the manuscript’s decoration: the illuminator of letters, the illuminator of borders, and the “historian,” or painter of histories, who creates the illustrated scenes.

    Note

    Illumination refers to the decorative elements found in medieval manuscripts, including elaborate borders, initials, and illustrations. These illuminations were often embellished with vibrant colors, gold leaf, and intricate designs.

    During the Romanesque period (11th and 12th centuries), capital letters could also serve as frames for genuine compositions, with the ascenders of the initial letter allowing the decoration to develop. In the 14th century, margins became populated with vegetal motifs, acanthus leaves or flower bouquets, real or fantastical animals, characters, coats of arms, and sometimes small scenes in medallions.

    The goose feather is the main tool of illuminators. It is cut with a knife, and the width of its flat end determines the line’s width. Pointed reeds, called calamus, are also used. The knife is used to cut feathers or scrape the parchment.

    The illuminator uses pigments to paint manuscripts. Some, like red, brown, or yellow ochres, are simple earth pigments. Others, such as orange, red, or brown, come from natural metal deposits. Still others are extracted from stones, such as lapis lazuli, which yields blue.

    White is obtained from chalk, lead, or bird-bone ashes. The illuminator grinds the pigments into powder, then fixes them on the parchment with glair (beaten egg white). The drawing is traced with a dry point and then inked over. In luxury manuscripts, illuminations are highlighted with gold: gold powder, mixed with glue, is applied to the parchment and carefully polished.

    From Monasteries to Urban Workshops

    Concentrated in monasteries during the early centuries, manuscript production (carried out in a workshop called a scriptorium) moved into the cities, giving rise to a genuine book market. Punctuation and word separation appeared in Northern France in the middle of the 11th century, along with the practice of silent reading. Episcopal schools, promoted by Charlemagne, developed during the 12th century alongside the growth of cities. Booksellers emerged in the early 13th century; they commissioned manuscripts from scribes and sold them to schoolmasters and the university.

    Note

    A scriptorium was a dedicated room or area within a monastery where scribes worked on copying manuscripts. It was typically equipped with writing desks, parchment, ink, and other tools necessary for manuscript production.

    Booksellers, or stationers, dominated the four trades related to book production: scribes, parchment makers, illuminators, and binders. While the first libraries appeared in monasteries, they later became public or private.


    Even if they were not illuminated, books were expensive. After purchasing parchment, one had to pay for copying, a slow and tedious task, and then for binding. Some improvements in manufacturing toward the end of the Middle Ages helped reduce the cost of books: smaller formats, the use of paper, simplification of decoration, and more modest bindings. Booksellers also offered second-hand books.

    University Books

    Très Riches Heures du Duc de Berry

    A new readership emerged as a result of the expansion of urban schools in the 12th century and the establishment of universities in the following century. Masters and students alike considered books to be the primary tools of knowledge. Not particularly wealthy, intellectuals of the Middle Ages managed to possess fundamental works; some even assembled a small private library, but most settled for second-hand copies or copied borrowed manuscripts.

    The most well-known collection of university books is that founded by Robert de Sorbon (confessor of Louis IX in 1250) for poor students destined for theological studies at the University of Paris (about a thousand volumes). The diversity of images, the richness and whimsy of the decorations, and the world of unalterable colors that time and wear could not tarnish are all elements that explain the fascination that medieval books exert on us.

    The distance separating us from their creation and their miraculous preservation make them almost sacred objects, jealously kept by libraries or private collectors. Occasionally, exhibitions reveal to a dazzled public the richness of this heritage. These works have indelibly marked our vision of this period.

    From the elegance and fantasy of the “Très Riches Heures du Duc de Berry” to the imagination of the “Mozarabic Apocalypses” and Romanesque bibles, all medieval manuscripts introduce us to a dream world as they did centuries ago to their first readers.

    University works focused on theology, law, or medicine, while kings, princes, and lords collected volumes dedicated to religious and moral edification, political knowledge, and entertainment (novels, poetry).

  • Knights in the Middle Ages

    Knights in the Middle Ages

    In the Middle Ages, the knight was a horseback warrior, most often in the service of a king or a great feudal lord. The term “chivalry” evokes in our minds a whole dreamlike and fantastic universe that speaks to us of self-transcendence, honor, loyalty, generosity, and courtesy, which literature and then cinema have widely echoed. Mounted on a powerful steed, wearing a helm, and clad in steel armor, the knight, wielding the sword “of thrust and cut,” proudly displays his colors. Beautiful, loyal, valiant, and courageous, chivalry still testifies today to what the Middle Ages truly were.

    Germanic Origins of Chivalry

    A 14th-century depiction of the 13th-century German knight Hartmann von Aue, from the Codex Manesse

    The cult of weapons asserted itself within Germanic societies, which provided numerous recruits to the declining Roman Empire. For the Germanic peoples, to be free is to be armed, and the transition from youth to manhood is marked by a ritual described in a famous text by the Latin writer Tacitus:

    Custom dictates that no one should take up arms until the city has recognized him as capable. Then, one of the leaders, his father, or his close relatives adorn the young man with a shield and a “framea”: this is their toga; these are the first honors of their youth.

    Marc Bloch identifies the roots of medieval chivalry (initiatory warrior brotherhood) in the practices of Germanic societies of the early Middle Ages.

    Keeps and the Castral Revolution

    The terms “castrum” and “castellum” initially denoted structures of modest size until the close of the 10th century. Basic wooden keeps were erected on rocky elevations, river bends, marshy islets, or, on plains, atop mounds of earth. With the adoption of stone circa 1050, the keep gained resilience, reinforced with square towers featuring arrow slits. Typically comprising three levels, these keeps housed a cellar for provisions on the ground floor, a spacious chamber for the lord’s valuables above, and a roofed platform where sentinels kept vigil.

    While the keep offered sanctuary during perilous times, the lord and his household resided in adjacent edifices, encircled by a defensive palisade and a moat. Adjoining the master’s abode were stables, workshops, kitchens, and quarters for servants. The term “donjon” stems from “dungio,” derived from “dominus,” signifying the lord.

    The governance of the medieval castle rested with a lord castellan endowed with the right of ban (military, policing, and judicial authority), which he exercised through a retinue of warriors stationed in the garrison. These “milites” constituted a corps of permanent professional combatants, marking an innovation in 11th-century chivalry.

    Castles came to punctuate the landscape in ever greater numbers. Maine, boasting eleven castles in 1050, burgeoned to sixty-two by 1100; Poitou escalated from three to thirty-nine within the 11th century; meanwhile, Catalonia harbored eight hundred identifiable fortifications by 1050, a phenomenon historians term the “castral revolution.” The number of motte castles in France is estimated at around ten thousand.

    Despite attempts by Charles the Bald to proscribe these constructions in 864, citing detriments to neighboring inhabitants, such communities, beleaguered by insecurity, opted to endure the impositions of lordly authority in exchange for the protection afforded by fortified strongholds and their armed defenders.

    The Knights of the Middle Ages: A Warrior Aristocracy

    In medieval society, the knight is the bearer of the sword, the one who has the right and duty to be armed. He is the protector of the men and women of his community so that they may go about their occupations in peace. In Europe, the bearing of arms has been perceived since ancient times as the mark of those who claim their dignity by shedding their blood and risking their lives. The prestige of the weapon makes the one who carries it a separate being with specific rights and duties.

    Among the knights, there are princes, dukes, and counts, but also men of modest origins: serfs and peasant commoners who have distinguished themselves because of their courage and loyalty to a noble in danger. Many epic poems recount these deeds. The lord looks after and feeds these “milites castri”; they are a part of his household.

    Others are “chased” (enfeoffed): they receive lands intended to provide for their maintenance. Ministerials, identifiable serf knights, can achieve social ascent (for example, through advantageous marriages). The younger sons of minor nobility must seek their fortune at the point of the sword, as they cannot claim their father’s inheritance.

    From the 11th century on, knights were expected to integrate into the ranks of the nobility, except for those already belonging to it. The merger between knights and nobility occurs later; it is necessary to wait until the 13th century in Lorraine and the 14th in Alsace to observe it. However, from the 13th century onward, chivalry closes in on itself, as the aristocracy wants to reserve the privilege for its sons. Chivalry then presents itself as the community of noble warriors opposing the “foot soldiers” without faith or law.

    A professionalization of the warrior emerges, with changes in combat techniques requiring specialization. In heavy cavalry, tactics are based on breaking through the enemy’s front. The charge is made at a gallop, with the lance wedged under the arm and lowered horizontally, unlike the thrown lance, which can be used only once.

    Knight’s Weapons in the Middle Ages

    Hungarian knights routing Ottoman spahi cavalry during the Battle of Mohács in 1526

    If javelins and pikes continue to be used by infantry, the knight’s lance is frequently cited in the literature, including epic poems, lays, and novels, extolling chivalric life. This lance, equipped with a wooden shaft, gradually extends to reach four meters and weighs nearly twenty kilograms. A stopper prevents the hand from slipping during impact.

    In the 15th century, a hook was affixed to the armor to secure the lance to the cuirass, easing the burden on the lance bearer, as the weight could increase due to the pennant, standard, or even the banner (carried by a knight-banneret), which identifies the fighter and serves as a rallying point in the midst of battle. When the lance breaks, the sword must be drawn!

    The most commonly used offensive weapons are the lance and the sword, followed by axes, war hammers, flails, and daggers. Among the latter, the “misericorde” (mercy stroke) bears an eloquent name: its short, thin blade can penetrate the gaps between the metal pieces of the hauberk and helmet. The crossbow is such a fearsome weapon (its bolt pierces armor through and through) that the Second Lateran Council of 1139 forbade its use among Christians, to no avail. The great Welsh longbow, with an even faster rate of fire, wreaked havoc on French armies during the Hundred Years’ War.

    A close combat weapon (used for face-to-face combat), the sword of the 11th and 12th centuries is massive, measuring a meter long and weighing over a kilogram. It is called a thrusting and cutting sword because it strikes with both the tip and the double-edged blade. The handle, made of wood or horn covered with leather, and the round pommel designed to improve balance vary in elaborateness depending on the wealth of the owner.

    Crafting a good and beautiful sword that is elastic and resistant requires up to 200 hours of work. This sheds light on the blacksmith’s status.

    The brogne, a sturdy leather tunic reinforced with metal scales, served as the most typical form of protection until the middle of the eleventh century. Then the mail coat, or hauberk, became highly valued. Made of interlocking iron rings of varying thickness and tightness (depending on cost), it protects the body down to the knees, with limbs covered by mail leggings and sleeves. Under the hauberk, a padded gambeson is worn to cushion blows and friction; over it, a fabric coat of arms displays the fighter’s heraldry.

    Warriors in Armor

    Page from King René’s Tournament Book (BnF Ms Fr 2695)

    From the 13th century onwards, the protection of the body was reinforced by applying metal plates to the chest, arms, and back, intended to make it more difficult for weapons to penetrate (a blow from an axe or a crossbow bolt could pierce a hauberk). This assembly became more rigid, leading in the 15th century to the grand white harness, a complete armor made of more effective, heavier, and more expensive articulated pieces!

    The knight’s head is protected by a helmet, the “heaume” (from the Germanic “helm”), at first a simple hemispherical cap reinforced with a nasal guard from the 11th century onwards, then with a ventail or visor pierced with eye slits. In the 12th century, the helmet became closed and cylindrical, with two narrow horizontal openings for vision and ventilation holes beneath. With the articulated visor, it evolved towards the “bassinet.” Atop the helmet, a crest bears the knight’s heraldic symbol, adding weight to a helmet that is only put on for combat.

    The shield completes the protective equipment. The Norman model, almond-shaped and made of wood covered with leather, is cumbersome; it is replaced by the variously shaped targe, on which the knight’s arms are painted.

    Role of the Horse

    The warhorse, the destrier (so called because the squire led it with his right hand), must be sturdy and resilient, capable of charging at a gallop and withstanding the press of the melee. It stands above the palfrey, used for traveling, and the roncin, a pack horse carrying the warriors’ gear. A knight must possess several destriers, because it is not uncommon for his mount to be killed in battle despite the mail coverings intended to protect it. The knight’s complete equipment costs considerable sums, and many knights cannot afford these expenses, so they seek the aid of a powerful lord by entering his service.

    The warrior’s training begins with hunting, as well as with horsemanship and horse care. Hunting in the Middle Ages was seen as training for war, both psychological and physical, as the wild fauna of medieval forests could severely test even the most determined hunters, providing an opportunity to assess their mastery and endurance.

    The Knighthood Ceremony

    After a long and rigorous apprenticeship alongside aspirants of his own age, the young squire is welcomed into the community of knights. It is the greatest day of his life: that of “dubbing” (in medieval French, “adouber” means to equip).

    During this ceremony, the young boy, thanks to the weapons he receives, crosses the threshold that separates the status of a child from that of a man. This ritual is described in epic poems:

    Then they dressed him in very beautiful attire.
    And they laced a green helmet on his head.
    Guillaume girds the sword on his left side.
    He takes a great shield by the handle.
    He had a good horse, one of the best on earth.

    Before receiving the weapons, he undergoes a sanctifying gesture known as the accolade: a blow delivered with the flat of the right hand by the person performing the dubbing. This is a symbolic test to see whether the young man can withstand a blow without flinching. Thus dubbed, the new knight must demonstrate his horsemanship and then, at a gallop, strike with his lance the center of a dummy mounted on a pivot, meant to represent the enemy.

    Next comes the banquet, where the father, uncle, or lord shows generosity, which is a sign of chivalrous spirit, by entertaining their guests, not forgetting the poor, the jugglers, and the jesters who will extol the virtues of their benefactor.

    The Spirit of Chivalry in the Middle Ages

    Chivalry has its own code of honor, based on loyalty, courage, and often devotion to a lady (referred to as courtly love). Courageous in combat and faithful to his lord (or his king, or his lady): such is the image of the valiant knight.

    However, the knight remains primarily a warrior, and the Church (like the people) often bears the brunt of private wars. That is why it tries to moralize the lives and actions of knights, who soon receive the mission to protect those who pray and those who work. The knights thus transform into soldiers of God.

    Knights’ Tournaments

    Tournament from the Codex Manesse, depicting the mêlée

    The freshly dubbed knight must travel the world to gain experience and demonstrate his valor. He will find in the practice of tournaments the opportunity to distinguish himself and make a name for himself (vital for knights of modest origins) to find a protector to rise within feudal society. These tournaments are highlights of chivalrous life; they serve as grand maneuvers during which one trains for war.

    Two camps form according to affinities, family ties, and provincial origins. At the signal, the two troops launch against each other in combat whose rules resemble those of a real battle; wounded and dead are collected at the end of the confrontation, while prisoners are ransomed.

    In these tournaments, beautiful ladies and noble maidens crowd the stands, adorned in their finest attire, to witness the battles. If one of them entrusts her colors to a fighter, he must win or die. Life is hard for the knight!

    The Christianization of Chivalry

    Originally, the Church unequivocally condemned arms, relying on scripture (Matthew 26:52: “All who take the sword will perish by the sword”) and on early canons (“if a catechumen or one of the faithful wants to become a soldier, let him be sent away, for he has despised God”). This condemnation persisted over the centuries, imposing severe sanctions on any man who killed one of his own.

    However, the Church had to take into account the necessities implied by an increasingly intimate coexistence with the state. When the Germanic invasions raised concerns about the future of the Empire, the clergy had to abandon the declared antimilitarism it had professed. Thus, through the words of Saint Augustine, the theory of “just war” emerged.

    “The soldier who kills the enemy is like the executioner who executes a criminal; it is not a sin to obey the law. He must, to defend his fellow citizens, oppose force with force.”

    Just war (and the mission to conduct it) became justified because the duty of the Christian prince was to impose by terror and discipline what priests were unable to assert through words. In practice, the demands of Christian doctrine became, against the pagan or the infidel, a holy war.

    At the end of the 11th century, a formula was established, leading to the adherence of men of war: the Crusade. Its ideology was already present in Spain and Italy in the 9th and 10th centuries in the struggle between Islam and Christianity, but it reached its full extent when the Holy See announced a new objective: Jerusalem and the liberation of Christ’s tomb. The Christianization of chivalry is a phenomenon that has affected all of Christianity from the East to Northern Europe.

    Bayard: A Model Knight

    Entering the service of King Charles VIII, the knight Bayard distinguished himself as early as 1495 at the Battle of Fornovo, during the Italian Wars. In 1503, his heroic defense of the Garigliano Bridge, alone against 200 Spaniards to secure the French retreat, earned him universal renown. His name also remains associated with the victory at Agnadello in 1509. After taking Bologna with Gaston de Foix, he besieged Brescia, where a pike blow during the assault seriously injured him.

    He managed to stop the enemy army before Pavia for two hours with 36 men, despite the superior forces of the Venetians and the Swiss, sustaining a serious shoulder wound. He was in Artois in 1513 when the English invaded, and at Guinegate (the Battle of the Spurs) they captured him. Released shortly after, he was appointed lieutenant-general of Dauphiné. After the Battle of Marignano in 1515, King Francis I wished to be knighted by his hand.

    In 1522, despite prodigies of valor, he could not prevent the defeat of La Bicocca in Lombardy. After the defeat at Romagnano, he succeeded in getting the French army across the Sesia, but on April 30, 1524, an arquebus shot mortally wounded him. The incarnation of courage and chivalrous spirit, he passed into posterity as “the knight without fear and beyond reproach.”

    The Decline of the Knights

    The fortress linked to the history of chivalry would disappear, powerless to resist repeated battery fire for long, and all military architecture evolved. Proud walls had to be abandoned in favor of low defenses “à la Vauban.”

    The setbacks of French chivalry during the major defeats of the Hundred Years’ War (Crécy, Poitiers, and Agincourt) demonstrate the increasing power of artillery and infantry.

    Time and history have done their work; chivalry has disappeared as an institution, but its ideals and models are still present. If chivalry is absent from society, is it also absent from the hearts of men?

  • Jousting and Knightly Tournaments in the Middle Ages

    Jousting and Knightly Tournaments in the Middle Ages

    Gradually organized into tournaments, jousting was medieval combat between men on horseback fighting with lances. Tournaments, a preferred pastime of nobles in the Middle Ages, experienced an extraordinary boom in France in the 12th century before spreading to Germany and England. Originally war games, and sometimes deadly, they evolved into spectacles: first ritualized simulations of violent combat between two teams in open fields, then jousts pitting knights against each other in pairs during chivalrous festivities, given in the 14th century “in honor of the ladies” during ceremonies, princely weddings, and other receptions…

    Jousting and Knightly Weaponry

    The nobles enjoy distinguishing themselves with weapons in hand, engaging in daily training from a young age. As a formative exercise for jousting, ring tilting sees skillful knights aiming for a ring fixed to a post. Lords also endorse games like shooting at the papegaut or wrestling, which serve as interludes in martial festivities. For “the safety and defense of the kingdom,” a decree by King Charles V in 1369 prohibited dice and other games in favor of archery and crossbow exercises, more suitable due to their military nature.

    Learning warfare while having fun is what young boys of the aristocracy do as they practice behourd (mounted combat) and the handling of wooden swords. Castle courtyards serve as schools where young athletes are trained to acquire flexibility, agility, and strength. They engage in running, stone or javelin throwing, high jumps, either armed or unarmed, and fencing (stick or sword fighting), robust pastimes preparing them for military art.

    The quintain, a challenging trial, is an articulated wooden dummy placed atop a pole called an “estache.” Galloping towards it, the jouster must deliver a forceful lance blow squarely in the middle of the target, which is equipped with a hauberk and a shield, to knock it down. If the jouster fails to strike squarely or to break his lance, he risks being unseated, ridiculing himself in front of the assembly.

    While nobles handle swords, lances, and maces, bourgeois and peasants practice with sticks or fists and shoot with bows or crossbows. Skill in archery is paramount in the case of a siege. The papegaut (parrot) is a bird painted green placed atop a pole or on a rampart to serve as a target. Papegaut brotherhoods gather the best shooters and distribute prizes. A formidable weapon in the hands of the Bretons, the ferruled stick, or “estoc,” already features in a poem by the troubadour Marcabrun in the 12th century.

    Tournaments and Simulated Battles

    Practiced regularly on St. John’s Day, at Pentecost, or on major occasions (royal weddings, plenary courts), the games of arms took place in the 12th and 13th centuries on a vast exercise field involving two armed groups with their leaders and soldiers. During these pitched battles, they clashed with real weapons, such as swords, lances, and maces, in teams from province to province. Horses and riders lined up in two lines for the maneuver to be executed.

    The troops charged with a great noise of weapons and swords at the signal from trumpets or the tournament bell. Sometimes the jousters put so much zeal into the confrontation that they forgot about the sporting dimension, and fighters lost their lives. The Duke of Brittany, Geoffroy Plantagenet, died at the age of twenty-eight from a wound received at a tournament held in his honor in 1186!

    In these mixed tournaments, the stake is not only sporting: prisoners are taken (and their ransoms dearly paid), and the richly adorned horses as well as the weapons of the vanquished belong to the victor, a highly profitable trade that gives rise to disputes on the field. Some unscrupulous knights take advantage of the confusion to enrich themselves, and many barons and lords have been ruined parading at these martial feasts! That is why the Council of Clermont condemned these detestable and mercantile games in 1130.

    Tournament Shows in the 14th and 15th Centuries

    The troubadour Jacques Bretel evokes in his work “The Tournament of Chauvency” the evolution of chivalrous society. The battles fought with bridles pulled tight in open fields transformed into an “elegant” sport practiced in enclosed spaces beneath the spectator stands, the “hourds.” These stands, magnificently decorated with tapestries, coats of arms, banners, and pennants, welcomed princes, ladies, and maidens adorned in their finest attire.

    The fighters, heralds, and squires made a solemn entrance with their emblems and extravagant helmets of exaggerated dimensions. The crest, a kind of plume topping the helmet, was adorned with various motifs: heraldic animals, horns, branches, peacock or ostrich feathers, and embellished with a banner, the “lambrequin,” fluttering in the wind. The knights displayed vivid colors—red, green, or blue—on their shields, banners, or horse covers. They didn’t joust to enrich themselves but to showcase their skill and status with all the necessary panache. The staging resembled that of courtly romances cherished by the nostalgic nobility.

    The day before the tournament, the swords, banners, and helmets were reviewed and the chivalrous laws recalled (see Chivalry in the Middle Ages). On the scheduled day, the knights appeared, their squires following them and their minstrels playing trumpets before them. The heraldic banners of the challengers were brought and planted in the lists. The two teams then fought until the trumpets signaled the retreat. The queen of the competition, her ladies-in-waiting, the herald, and the judges presented the winners with their prizes.

    Jousting for the Ladies’ Love

    The desire to please ladies is not foreign to the staging of tournaments. Already in the time of the troubadours, knights engaged in the games of courtly love. The champions would whirl around in the hope of seducing a beautiful heiress. The sporting encounter becomes a place of seduction. According to the chronicler Jean d’Authon, the ladies were so adorned at a tournament held in 1507 in Milan in the presence of King Louis XII that “it was like a fairy tale.”

    The erotic element is evident in the custom of ladies offering their favors to a preferred knight. This could be a scarf, a veil, a sleeve (some dresses had sleeves sewn so as to detach for this purpose), or another adornment with which the chosen one decorated the top of his helmet, shield, or coat of arms. In the frenzy of combat, the ladies offer so many adornments to the knights that in the end they find themselves bare-headed, in sleeveless coats, without a shirt or surcoat, and laugh at their adventure, “not having noticed their undressing!”

    Knights’ Weapons and Armor

    Very regulated, tournaments require specific equipment quite different from that of war, necessitating the wearing of a light breastplate under which lies a padded corset of canvas and tow to cushion the blows of mace and sword. The tournament helm is gridded with large lozenges on the front for breathing and visibility.

    For jousting and single combat, armorers reinforce the helm by eliminating the wide openings and replacing them with a narrow slit at eye level. This helm, called the “toad’s head” because of its shape (and weighing up to 9 kg), is attached to a steel corselet by large hinges. The jouster’s armor is of considerable weight, to give more power to the lance blow and stability to the rider. The breastplate is reinforced on the left side by a gauntlet on the forearm and a steel plate protecting the shoulder. Attached to the armor by a strap, the shield or targe is made of wood covered with leather or deer horn, with a relief grid that helps deflect lance blows.

    The so-called courteous lance, equipped with a rochet (a point with three rounded ends to distribute the impact and avoid piercing the armor), is light and fragile enough to break easily on the opponent’s helm or shield. The rider must brace himself on his mount to not “wobble in the saddle.” It takes a lot of dexterity to direct the blow. Following this combat, there is a foot joust “at the barrier” and a clash with an axe or war mace.

    Henry II’s Fatal Joust

    In June 1559, splendid chivalrous festivities were held on the occasion of the marriages of Marguerite, the king’s sister, to the Duke of Savoy and Elisabeth of France to Philip II of Spain. The jousts were set up in the Saint-Antoine district, in front of the royal residence of the Tournelles.

    On the 30th, after taking part in several jousts, the king, wearing the colors of his mistress Diane de Poitiers, decided (despite the predictions of the queen’s astrologer) to ride one last rematch against the Count of Montgomery, who had “unseated” him. Unfortunately, his opponent’s lance broke, and a splinter pierced the king’s visor, passing through his eye. The king agonized for ten days in great suffering.

    The tragic death of Henry II hastened the decline of these games so prized by the nobility.

  • Palaeolithic: Summary of the Stone Age

    Palaeolithic: Summary of the Stone Age

    The Paleolithic period commenced over 3 million years ago with the emergence of the earliest humans, who initiated the crafting of stone into tools and hunting implements. As nomads, they adopted a hunter-gatherer lifestyle, trailing herds and relocating due to seasonal changes to fulfill their dietary requirements. The conclusion of the Paleolithic era coincided with behavioral shifts prompted by climate warming, signaling the onset of the Mesolithic era approximately 11,000 years ago (subject to regional variation).

    Spanning an extensive period of prehistory, the Paleolithic era is categorized into four sub-periods: the Archaic, Lower, Middle, and Upper Paleolithic. Each of these phases marks significant advancements, such as the mastery of fire, rudimentary burial practices, progress in tool technology, and the inception of artistic expression.

    This epoch witnessed the emergence of various members of the Homo genus (including Rudolfensis, Habilis, Ergaster, Erectus, and Neanderthalensis, among others), ultimately replacing the Australopithecines. Numerous hominids coexisted and succeeded one another within the region.

    How Did the Palaeolithic Begin?

    The Paleolithic period began approximately 3.3 million years ago and saw the emergence of the genus Homo, with the use of lithic tools being its defining characteristic. While the Australopithecines, contemporaries of early Homo species such as Homo rudolfensis, Homo habilis, and Homo ergaster, also used tools, theirs were rudimentary.

    This underscores the significance of the Paleolithic era, commonly known as the “age of ancient stone.” These tools typically comprised pebbles with one or two flakes and were termed “choppers” or “flaked pebbles.”

    At the outset of the Paleolithic, also known as the Archaic Paleolithic, early humans inhabited Africa. Notably, two prominent centers of lithic culture emerged: the Olduvai Gorge in Tanzania and Lake Turkana in northern Kenya. During this period, early humans engaged in sporadic hunting of small animals, scavenging, and gathering.

    What’s the Difference Between the Paleolithic and the Neolithic?

    The Paleolithic and the Neolithic are the two major periods of the Stone Age. There are many differences between them, but the primary one is how humans obtained food. In the Paleolithic, hunting and gathering were the foundation of sustenance, while the Neolithic witnessed the emergence of agriculture and animal husbandry. Consequently, Paleolithic humans moved with the seasons to follow the herds, while those of the Neolithic settled down.

    Mastery of certain stone-cutting techniques also constitutes a major distinguishing factor between the two periods. In the Paleolithic, which means the “age of ancient stone,” only flaked stone tools are known (chopper, then handaxe, retouched point, etc.), while the Neolithic, the “age of new stone,” sees the emergence of polished stone tools.

    Who Were the People and Civilizations of This Period?

    The people of the Paleolithic era were nomadic hunter-gatherers. In other words, they lived according to the seasons and moved based on the food they could find in a given territory; the migrations of herds were important factors in their movements. Humans remained in Africa for a long time. Homo sapiens arrived in Europe 45,000 years ago via the Mediterranean basin, where Neanderthals already lived and with whom they would coexist.

    Flores man lived in Indonesia, and the Denisovans on the high plateaus of Tibet. Human population density was very low during the Paleolithic era: including hot and cold deserts, it is estimated at 0.01 inhabitants per square kilometer, compared with about 50 today. The American continent remained devoid of human presence until the Upper Paleolithic.

    What Were Palaeolithic Habitats and Lifestyles Like?

    The nomadic life of the Paleolithic period pushed people to diversify their habitats. Remnants survive of hunting stops, bivouacs, and more durable installations. When possible, people took shelter under rock overhangs. In the plains the hut dominated, but dry stone walls were also built for protection against the wind, as at Orangia 1 in South Africa. The habitat remained temporary, however, and rarely lasted more than one season.

    The length of habitation depended on the resources available on-site, but a camp could be reused the following year. The evolution of tools and techniques allowed the people of this period to move from scavenging to hunting, and then to hunting ever larger animals. The domestication of the dog was also a major asset. The Paleolithic diet was omnivorous, a mixture of meat and plants (leaves, berries, and roots).

    What Are the Paleolithic Periods?

    The Paleolithic era can be divided into four main periods. During the Archaic Paleolithic, between 3.3 and 1.76 million years ago, humans were content to hunt small animals or eat carrion in addition to a largely plant-based diet, using the flaked pebble or chopper. From the Lower Paleolithic onwards, bifaces, picks, and cleavers were used to hunt larger animals. The control of fire, around 400,000 years ago, improved living conditions.

    The Middle Paleolithic, which began around 350,000 years ago, is characterized by the hunting of large animals, the use of ochre, and the advances of Homo sapiens in Africa: the first burials and aesthetic creations. Between around 40,000 and 9,500 years ago, we enter the Upper Paleolithic, a period propelled by major innovations in hunting as well as the domestication of the dog. It was during this period that Neanderthal man became extinct in Eurasia.

    What Tools Were Used in the Palaeolithic?

    The first carved stone tool, used by humans during the Archaic Paleolithic, was the chopper. Its edges were sharpened by percussion with another stone and served to skin carcasses and perhaps also to clean animal hides. Among the major lithic tools of the Paleolithic is the hand axe: a pebble carved on both faces to improve its sharpness. The use of soft hammers of wood or bone allowed for finer knapping.

    At the end of the Middle Paleolithic, the hand axe becomes scarce to make way for finer-carved stone tools: blades, scrapers, points, burins, etc. They are used in the production of hunting weapons, such as spears and arrows. In the Upper Paleolithic, bone carving allowed for even more finesse in the design of hooks, spear throwers, needles, etc.

    What Role Did Painting and Handicrafts Play in This Period?

    Artistic expression is one of the hallmarks of the later Paleolithic. The Middle Paleolithic saw the use of ochre and the first aesthetic creations, while the Upper Paleolithic brought cave painting and engraving as well as finely crafted bone objects such as needles, spear throwers, and harpoons.

    How Does the Paleolithic Period End?

    The Paleolithic era ended about 11,000 years ago with the beginning of the Holocene, the interglacial period still ongoing today. This marks the end of the great glacial periods and paves the way for the Mesolithic, a transitional period in which humans were still semi-nomadic hunter-gatherers before settling down in the Neolithic. Homo sapiens spread across the world, eventually supplanting the other hominid species. Mastering fire and continually refining their tools, humans began to bury their dead (Neanderthals were the first) and to develop art.

  • Treaty of Frankfurt (1871)

    Treaty of Frankfurt (1871)

    The Treaty of Frankfurt, signed on May 10, 1871, between the German Empire and the French Republic, signifies the conclusion of the Franco-Prussian War that commenced in July 1870. The conflict arose following a diplomatic maneuver orchestrated by Otto von Bismarck, Prime Minister of Prussia, who was the architect of German unification centered around the Kingdom of Prussia. His objective was to acquire new territories from the French Empire of Napoleon III.

    France, ill-prepared for the conflict, suffered defeat, resulting in the collapse of the Second French Empire and the inception of the Third Republic. The Franco-German armistice stipulated the election of a new National Assembly. After the defeat, the fledgling French Republic ratified the Treaty of Frankfurt, obliging it to relinquish Alsace-Lorraine to the newly proclaimed German Empire.


    Additionally, Germany imposed the payment of 5 billion francs in gold. The defeat left France humiliated and fostered a lasting animosity toward Germany.

    The Treaty of Frankfurt was the result of the Franco-Prussian War, which began in 1870 following tensions between France and the German states, particularly Prussia. The war was a key factor in the unification of Germany.

    What Were the Causes of the Frankfurt Treaty?

    The Treaty of Frankfurt signifies the culmination of the Franco-Prussian War, also recognized as the “War of 1870.” This conflict arose from the nationalist movements that reverberated across Europe during the 19th century. Germany’s trajectory towards unification was evident from the onset of the century. In the 1860s, Otto von Bismarck, the Prime Minister of Prussia, endeavored to forge a new nation-state centered around the Kingdom of Prussia.

    To achieve this goal, he needed to neutralize Austria’s influence within the German Confederation. Anticipating the coming conflict between Prussia and Austria, Napoleon III, Emperor of the French, sought compensation for his neutrality in the form of territorial concessions, but his efforts proved futile.

    Following Austria’s defeat in 1866, Bismarck maneuvered diplomatically to ensnare France. On July 19, 1870, France declared war on Prussia, officially targeting the North German Confederation and its allies, in response to the Ems Dispatch. Ill-prepared, the French army suffered defeat, culminating in the Prussian victory at the Battle of Sedan in September 1870, where Napoleon III was captured. This event marked the demise of the Second Empire and the onset of the Third Republic.

    On January 28, 1871, the French Republic signed an armistice, paving the way for the election of a new National Assembly, which occurred on February 8, 1871. On February 26, Adolphe Thiers, who was the “head of the executive power of the French Republic,” signed the preliminary peace treaty in Versailles, which the Treaty of Frankfurt later confirmed on May 10.

    Who Signed the Treaty of Frankfurt in 1871?

    The Treaty of Frankfurt was signed in Frankfurt am Main (German Empire) on May 10, 1871, by the French Republic and the German Empire. The signatories were Jules Favre, representing France, and Otto von Bismarck, representing the German Empire.

    Jules Favre (1809–1888) affixed his seal as Minister of Foreign Affairs. A lawyer by profession, he was one of the republican opponents of the Second Empire. Elected as a deputy for Paris in 1858, he opposed the war against Prussia. Following the defeat at Sedan in 1870, he demanded the removal of Napoleon III. As Minister of Foreign Affairs in the government of Adolphe Thiers, he was tasked with negotiating peace with Germany. A poor diplomat, he allowed Bismarck to dictate the terms.

    Favre also signed the preliminary peace treaty at Versailles with the new German Empire on February 26, 1871.

    Otto von Bismarck (1815–1898) signed the treaty as the Imperial Chancellor of Germany. Coming from a noble family, Bismarck abandoned law to focus on managing the family estates.

    In the 1840s, he entered politics and became a prominent figure in the conservative movement. As a deputy and then a diplomat, he worked towards Prussia’s rise in power at the expense of Austria. In 1867, he was appointed Chancellor of the North German Confederation. As France faced defeat, he became Chancellor of the German Empire, proclaimed at Versailles on January 18, 1871.

    Which Territories Were Ceded to Germany by the Treaty of Frankfurt?

    The Frankfurt Treaty at the Bismarck Stiftung und Archiv in Friedrichsruh.

    By the Treaty of Frankfurt, France must cede nearly 14,500 km² of territory to the German Empire:

    • In Alsace, the departments of Bas-Rhin and Haut-Rhin;
    • In Moselle, the districts of Sarreguemines, Metz, Thionville, and 11 communes of the Briey district (territories of Lorraine);
    • In Meurthe, the districts of Sarrebourg and Château-Salins;
    • In the Vosges, the cantons of Saales and Schirmeck.

    The treaty stipulates that the inhabitants of Alsace-Lorraine have the option to retain French nationality. Germany required those wishing to remain French to leave the territory before October 1, 1872. Approximately 130,000 inhabitants chose this option and left Alsace-Lorraine; those who remained became German nationals.

    France must also pay 5 billion gold francs to Germany over three years. Until this sum is fully paid, German troops occupy six departments (Ardennes, Marne, Haute-Marne, Vosges, Meuse, and Meurthe-et-Moselle) as well as Belfort. The indemnity is paid in full within the specified time frame.

    What Were the Consequences of the Frankfurt Treaty?

    To pay the 5 billion gold francs demanded by Germany, France had to borrow. It launched several loans, including one on July 15, 1872, open to international subscribers, which was massively oversubscribed, attracting some 44 billion francs in pledges. The French state was thus able to meet Germany’s deadline, and in exchange Germany evacuated its remaining troops from French territory.

    The loss of Alsace-Lorraine constitutes a trauma for the French. These “lost provinces,” which had indeed been part of France since the 17th century, are considered subject to the oppressive regime of the German Empire. The annexation of Alsace-Lorraine gives rise to a spirit of revenge in France.

    Nevertheless, the idea of recovering these lost territories gradually disappears from the speeches of politicians. In public opinion, revanchism persists thanks to public education: schoolchildren learn that the loss of Alsace-Lorraine is an attack on the integrity of France. Meanwhile, Francophobia is at work on the other side of the Rhine. The spirit of revenge was revived at the dawn of World War I.

    The rise of nationalism and rivalries between the European powers marked the beginning of the 20th century. Pacifist deputies such as Jean Jaurès fought against the idea of war; he was assassinated by a nationalist on the eve of World War I. In 1919, France retook Alsace-Lorraine.

    Was the Treaty of Versailles Revenge for the Humiliation of Frankfurt?

    Signed on June 28, 1919, the Treaty of Versailles marked the end of World War I. Having defeated Germany with the help of its allies, France sought to erase the humiliation suffered during the Treaty of Frankfurt. Initially, the French demanded that the signing take place in the Hall of Mirrors at the Palace of Versailles. This location was symbolic since the proclamation of the German Empire occurred there on January 18, 1871. The date of June 28 is also significant: five years earlier, in 1914, on that exact day, Archduke Franz Ferdinand had been assassinated in Sarajevo.

    The clauses of the Treaty of Versailles were particularly harsh on Germany, which was required to return Alsace-Lorraine to France. It lost other territories to Poland, Belgium, and Denmark. Furthermore, the German colonial empire was liquidated. France regained part of Cameroon and Togo, while other German colonies in Africa and Asia came under the control of the Allies.

    Germany was also condemned to pay 132 billion gold marks in reparations and had to deliver goods to the Allies. Facing significant financial difficulties, Germany struggled to pay its debt and fulfill the deliveries, leading to the occupation of the Ruhr by France and Belgium from 1923 to 1925.

    For many Germans, the Treaty of Versailles was a humiliation that fueled nationalist rhetoric. Indeed, one of Adolf Hitler’s obsessions was to make France pay for this affront.

  • Anglo-Boer Wars: Summary of the Two South African Wars

    Anglo-Boer Wars: Summary of the Two South African Wars

    The Boer War is a conflict that pits the Boers, Dutch-speaking settlers of South Africa, against the British. In the late 18th century, the two communities competed for the colonization of southern Africa. After the annexation of the Cape Colony by the British Crown, the Boers settled further north, where they founded the Transvaal Republic and the Orange Free State. Following the United Kingdom’s attempt to annex the Boer Republic of Transvaal, the first Boer War broke out in 1880 and ended in a defeat for the British.

    In 1886, tensions resurfaced: many British from the Cape Colony were drawn to the newly discovered gold fields of the Transvaal. The Boers refused to grant rights to these foreigners, providing a pretext for the British Crown to attempt a new annexation of the Transvaal. In 1899, the Transvaal and its ally, the Orange Free State, went to war against the United Kingdom. The British, being more numerous, emerged victorious from this Second Boer War in 1902.

    What Were the Causes of the Boer War?

    European settlers had been present in South Africa since the 17th century. By the end of the 19th century, two main communities coexisted more or less peacefully: on one side the Boers (the future Afrikaners), pioneers of Dutch, German, and French origin, and on the other the British.

    When revolutionary France created the Batavian Republic, the Cape Colony, a Dutch possession, came under British control in 1795. The colony was returned to the Dutch under the Peace of Amiens in 1803, and the British retook it in 1806. In 1814, it was definitively ceded to the United Kingdom.

    The British colonization of South Africa continued, to the detriment of the Boers. They decided to leave the Cape Colony to settle further north and founded the independent Republic of Natal (1839). In 1843, the British Empire annexed it. The Boers then created the Transvaal and the Orange Free State. The discovery of diamonds and gold attracted many British settlers. In 1877, the British sought to annex the Transvaal. The Boer government accepted the annexation, but Paul Kruger, the vice president, opposed it and organized armed resistance.

    Who Fought in the Boer War?

    The First Boer War pits the Republic of Transvaal against the United Kingdom. The Boer leaders are Paul Kruger, Piet Joubert, and Marthinus Wessel Pretorius. Kruger, vice president of the Transvaal, becomes a prominent figure of the resistance against the British. The Boer forces, drawn from the inhabitants of the Boer republics (of Dutch, German, and French descent), numbered about 3,000 men; the British, commanded by Major-General Sir George Pomeroy Colley, around 1,200.

    Where and When Did the Boer War Take Place?

    The Boer War took place in the European colonies located in what is now South Africa. A significant conflict of the late 19th century, it began on December 20, 1880, and ended on March 23, 1881. A second Boer War occurred from October 11, 1899, to May 31, 1902.

    What Were the Circumstances Surrounding the First Boer War?

    The First Boer War, also known as the “Transvaal War,” resulted from British incursions into the Transvaal and the British Empire’s desire to control the entire region of South Africa. Indeed, the British Crown aimed to secure the sea route to India, passing through the Cape Colony. Furthermore, it aimed to seize the gold and diamond mines of the region while strengthening its presence on the African continent.

    On December 16, 1880, the Boers launched an armed rebellion against the British, attacking garrisons and convoys. The conflict primarily took place in the eastern part of present-day South Africa. Skilled marksmen and adept horsemen, the Boers won most of the battles. On February 27, 1881, the British suffered a severe defeat in the disaster at Majuba (92 British deaths compared to only 1 among the Boers). The British government did not wish to continue hostilities and signed the Pretoria Convention on August 3, 1881. It recognized the independence of the Transvaal, a victory for the Boers.

    Why Did the Second Boer War Take Place?

    The Second Boer War took place on the territory of present-day South Africa, pitting the Republic of Transvaal and the Orange Free State against the United Kingdom. In 1886, a vast gold deposit was discovered in the Transvaal, attracting British settlers from the Cape Colony. Alarmed, the Boers restricted the rights of these increasingly numerous foreigners. The United Kingdom demanded equal rights for the Boers and British in the Transvaal and stationed troops on the border. On October 11, 1899, Paul Kruger, president of the Republic of Transvaal, declared war on the United Kingdom.

    The Boers attacked first. They invaded the Cape and Natal colonies and besieged several towns, including Mafeking, defended by Robert Baden-Powell. On November 15, they captured Winston Churchill, then 24 years old. Gandhi also took part in the war: serving as a stretcher-bearer in the British camp, he campaigned for the rights of Indians, although he showed racism toward blacks.

    From 1900 on, the British offensive pushed the Boers back, and the Boers turned to guerrilla warfare. In response, British soldiers burned farms and deported the elderly, women, and children. The British finally defeated the Boers, who were weakened by low morale, lack of supplies, and inferior numbers. On May 31, 1902, the Treaty of Vereeniging ended the Second Boer War.

    How Many Dead Were There in the Boer Wars?

    The First Boer War resulted in 41 deaths and 47 wounded among the Boer forces, and 408 deaths and 315 wounded among the British forces. The Second Boer War was more deadly. Among the Boers, approximately 6,000 soldiers and 26,000 civilians were killed. The British lost 22,000 men, with 14,000 succumbing to diseases, and suffered over 22,800 wounded.

    What Happened in the Concentration Camps?

    In March 1901, the British implemented the scorched earth strategy: they destroyed Boer farms to prevent the enemy from resupplying. Approximately 120,000 women, children, and elderly individuals, driven from their farms, were interned in concentration camps. Living conditions were unsanitary, and prisoners suffered from malnutrition. More than 22,000 children under the age of 16 died in British camps. By early 1902, conditions improved due to Emily Hobhouse, a British nurse who exposed the treatment of Boer prisoners.

    What Were the Consequences of the Boer War?

    Peace conference at Vereeniging. Anglo-Boer Wars
    Peace conference at Vereeniging.

    Following the Treaty of Vereeniging, signed on May 31, 1902, in Pretoria, the Republic of Transvaal and the Orange Free State were annexed by the British Empire. The Boers agreed to submit to the authority of the Crown in exchange for a financial compensation of 3 million pounds sterling and self-governance. In 1910, the Union of South Africa, the predecessor to modern-day South Africa, was established. It was led until the early 1990s by Afrikaner prime ministers.

    What Books and Films Tell the Story of the Boer War?

    Among films set during the Boer War, notable examples include “Breaker Morant” (1980) and “Blood and Glory” (2016). Wilbur Smith’s “The Sound of Thunder” is one of the best-known novels about the conflict. The song “De la Rey” by Bok van Blerk, released in 2006 as a tribute to the Boer general Koos de la Rey, became very popular among Afrikaners.

    Key Dates in the Boer Wars

    April 12, 1877: The Transvaal is annexed by the United Kingdom

    The United Kingdom annexed the Transvaal in South Africa on April 12, 1877. The British Colonial Secretary, Lord Carnarvon, was particularly interested in the newly discovered diamond deposits. The Boer Vice President, Paul Kruger, organized resistance, which took shape in 1880. The Boer revolt only ended in 1881 with the recognition of their autonomy, although they remained under British sovereignty.

    January 22, 1879: Battle of Isandlwana

    On January 22, 1879, a contingent of the roughly 15,000-strong British invasion force was severely defeated at the Battle of Isandlwana: about 1,300 British and colonial soldiers were killed by the Zulu army of King Cetshwayo, in one of Britain’s greatest colonial defeats. Despite this victory, the Anglo-Zulu War ultimately ended with the defeat of the Zulus and the loss of their kingdom’s independence.

    December 20, 1880: Start of the First Boer War

    December 20, 1880, marked the beginning of the First Boer War with the Battle of Bronkhorstspruit. The Boers, discontented with Britain’s desire to annex the Transvaal to access the newly discovered diamond deposits, attacked a military convoy near the town of Bronkhorstspruit. The British soldiers suffered a severe defeat with significant losses. The war only concluded on March 23, 1881.

    February 8, 1881: Battle of Schuinshoogte

    The Battle of Schuinshoogte or Ingogo, where British troops were defeated by Boer fighters, occurred on February 8, 1881. It was part of the First Boer War that began in 1880 when the UK expressed the desire to annex the Transvaal region to access newly found diamond deposits. The war ended by the end of March.

    February 27, 1881: Battle of Majuba Hill

    On February 27, 1881, during the First Boer War, fought over the UK’s annexation of the Transvaal, the Boers defeated British troops under General George Pomeroy Colley, who was killed in the fighting. The Battle of Majuba Hill led to an armistice on March 6 and a peace treaty on March 23, 1881.

    1895: Jameson Raid

    Leander Starr Jameson, siding with his friend Cecil Rhodes, then Prime Minister of the Cape Colony, planned to overthrow the Afrikaner government of Paul Kruger. Organizing a private force, he attacked the Transvaal Republic. The raid set out from Bechuanaland on December 29, 1895, but was defeated by General Cronjé at the Battle of Doornkop on January 2, 1896. Jameson was sent back to England for trial, and the event foreshadowed the Second Boer War.

    October 11, 1899: Start of the Second Boer War

    The Second Boer War broke out on October 11, 1899, following rising tensions between the British and the Boers of the South African Republic of the Transvaal. British settlers from the Cape Colony had increasingly moved to the Transvaal after the discovery of gold, demanding the same rights as the Boers, and relations had already been poisoned by the Jameson Raid of 1895–1896. The war ended with the victory of the British Empire and the signing of the Treaty of Vereeniging on May 31, 1902. The defeated Transvaal and its ally, the Orange Free State, were annexed as British colonies.

    October 13, 1899: Siege of Mafeking

    The Siege of Mafeking began on October 13, 1899, during the Second Boer War. The Mafeking garrison, led by British Colonel Baden-Powell, was besieged for 217 days by some 7,500 Boers. A Boer assault on May 12, 1900, failed, and the town was relieved on May 17 by British reinforcements. The Boers lost about 2,000 soldiers, the British 812.

    January 23, 1900: Battle of Spion Kop

    On January 23 and 24, 1900, the Battle of Spion Kop took place in South Africa as part of the Second Boer War. British forces attempting to relieve the besieged town of Ladysmith clashed with the Boers and suffered a severe defeat, but General Buller and his troops finally relieved the town a few weeks later.

    May 17, 1900: End of the Siege of Mafeking

    The Siege of Mafeking in South Africa ended on May 17, 1900. During the Second Boer War, the Boers had besieged the town since October 1899. British reinforcements under Colonel B. T. Mahon lifted the siege on May 17. Baden-Powell’s use of local youths as messengers during the siege played a crucial role in the inception of Scouting.

    September 1900: Opening of the first British concentration camps in South Africa

    In September 1900, during the Second Boer War, the British established the first concentration camps in South Africa. Boer civilians and captured fighters were interned there, and nearly 25,000 of them died in the camps. Initially kept secret, news of the camps spread to France and Germany, generating fresh hostility toward the British Empire and its actions.

    May 31, 1902: End of the Boer War

    The war between the British and the Boer states of the Transvaal and the Orange Free State (southern Africa) concluded with the signing of the Treaty of Vereeniging on May 31, 1902. The treaty acknowledged the annexation of the Orange Free State and the Transvaal to the British Empire. The Boers (“farmers” in Dutch), descendants of Dutch settlers in the Cape Colony, received financial compensation and had their political rights recognized. Eight years to the day after the treaty, the Union of South Africa, an autonomous dominion with a federal structure, was established, sealing the reconciliation between the British and the Boers.

    May 11, 1909: Foundation of the Union of South Africa

    Following the Second Boer War (1899–1902), Britain annexed the Boer republics. However, the Orange River Colony and the Transvaal retained autonomous governments, prompting the British to consider forming a dominion (a self-governing state under British control), similar to Australia, by merging these provinces with Natal and the Cape Province. The deliberations concluded on May 11, 1909. Approved unanimously, the Union of South Africa came into effect the following year.

  • Garden Hermit: Origin and History

    Garden Hermit: Origin and History

    Garden hermits, also known as ornamental hermits, were recluses employed to inhabit English landscape parks during the 18th and 19th centuries. They lived for a contractually defined period in specially built hermitages and were required to appear at certain times of the day to entertain the owners of the parks and their guests with their presence.

    The Life of a Garden Hermit

    The requirements of a garden hermit’s life are known from newspaper advertisements. The most famous example of the employment of a garden hermit comes from Painshill Park, the estate of the nobleman Charles Hamilton (1704–1786), which was transformed at great expense into a landscape garden featuring a grotto, neo-Gothic and Chinese architecture, winding paths, and a treehouse serving as a hermitage. Hamilton allegedly placed an advertisement offering £700 to anyone willing to “stay seven years in the hermitage, where he should be provided with a Bible, spectacles, a foot mat, a straw mattress for a pillow, an hourglass as a timepiece, water for drinking, and food from the house. He must wear a woolen garment and under no circumstances cut his hair, beard, or nails, roam beyond the boundaries of Mr. Hamilton’s property, or even exchange a word with the servant.”

    The garden hermit as an attraction: Diogenes by John William Waterhouse, 1882.

    The long contract duration and the peculiar conditions of personal hygiene were not isolated cases. The lifestyle of the garden hermit was likely also influenced by earth houses, which remained common in rural areas until the early 20th century and were banned in Britain by law only in 1915.


    Thus, a landowner near Preston advertised the position of a garden hermit for someone “willing to live underground for seven years without ever seeing a human being and without cutting his hair, beard, fingernails, or toenails.” However, recent studies have shown that this advertisement, with its alleged working conditions, was a media fabrication without concrete source evidence; it spread as a sensationalist trope through the literature and, through repeated citation, hardened over time into a kind of purported “truth.”

    Candidates were evidently not only sought out; some offered their own services. In an advertisement from 1810, a young man (garden hermits generally had to be of advanced age) stated that he wanted to “withdraw from the world and live as a hermit in any place in England” and was willing to “connect with a nobleman or gentleman who desires to have such a hermit.” In Hawkstone Park, a landscape garden visited by more than ten thousand people in the late 18th and early 19th centuries, a mechanical doll, equipped with an hourglass, skull, and spectacles on a table, took the place of the garden hermit. The doll sat in a hermitage and was operated by an employee, who supplied its voice in time with its mouth movements.

    Cultural and Historical Aspects

    Hermit in Flotbek. Sepia drawing by Johann B. Th. Schmitt, 1795.

    During the 18th century, the English landscape garden, conceived as a walkable landscape painting, replaced the geometrically ordered Baroque garden. This development was related to the discussion, ongoing in Europe since the 17th century, about the natural state of humanity as opposed to civilization and communal living. The interest in garden hermits in this context corresponded to the interest in the “noble savage,” who embodied an unspoiled nature untouched by the constraints of communal life. In general, the garden hermit combined elements of various civilization-rejecting traditions with society’s fascination with them. The furnishing of the hermitage with a Bible alluded to Christian eremitism; the spectacles, to the scholar. Behind this lay a long tradition reaching from the accounts of the Greek philosopher Diogenes of Sinope (said to have lived in a tub as a despiser of civilization) to Jonathan Swift’s Gulliver (whose third voyage, to Laputa, features utterly absent-minded and unkempt scientists).

    Hermitage with Memento mori above the door, Universal Architecture, 1755.

    The phenomenon of garden hermits in the 18th century was accompanied by an increased interest in hermits in English literature. A significant source of inspiration is considered to be the work of John Milton, especially his highly influential poem Il Penseroso (“The Thoughtful One”), in which a forest wanderer spends his days in solitary study and speaks the concluding words:

    “And may at last my weary age
    Find out the peaceful hermitage,
    The hairy gown and mossy cell
    Where I may sit and rightly spell
    Of every star that heav’n doth show,
    And every herb that sips the dew;
    Till old experience do attain
    To something like prophetic strain.
    These pleasures, Melancholy, give,
    And I with thee will choose to live.”

    In numerous hermitages, which had already been designed for Baroque gardens as places of secular contemplation, the closing lines of Milton’s poem appeared, as did the Arcadian motif of transience, expressed through bones and skulls as memento mori. The garden hermit’s hourglass (apart from its usefulness in keeping the schedule of regular appearances on the grounds) also referred to this aspect of his role. The image of the garden hermit, who raised fundamental questions about the individual’s attitude toward society and life by abandoning such typical marks of civilization as refined clothing and personal hygiene, oscillated between seriousness and humor. This ambivalence was also often expressed in the follies (“architectural follies”) of landscape gardens, which, like garden hermits, were widespread in 18th-century England.

    The employment of garden hermits ceased in the first half of the 19th century, as the colonial expansion of the European nation-states shifted the focus of interest: ethnographic exhibitions took over the task of displaying images of people far removed from European civilization. The term garden hermit has nonetheless remained current in the English-speaking world to this day, though it need no longer be understood in its original sense and can stand generally for an eccentric way of life. Recent years have also seen increased artistic treatment of the subject in various media (literature, film, photography, and performance).

    History

    The trend of ornamental hermits spread in the eighteenth and nineteenth centuries among the English aristocracy, when some nobles began to “decorate” their residences with hermits living in rudimentary dwellings. The task of the ornamental hermit was to appear at certain times of the day to be observed by his master and guests; although he was usually forbidden to speak to anyone, on some occasions he was required to engage guests in philosophical discourse. Ornamental hermits are mentioned in many sources for their eccentricity and are considered precursors of garden gnomes.

    Professor Gordon Campbell of the University of Leicester reported in his book “The Hermit in the Garden” (2013) several precedents of ornamental hermits. Among them was the religious figure Francesco da Paola, who reportedly lived as a hermit in a cave on his father’s property in the early fifteenth century, later becoming the confidant and advisor to King Charles VIII of France.

    In the following century, many estates of dukes and other lords in France included small chapels or other buildings where a religious hermit resided. According to Campbell, the first estate with a famous hermit (comprising a small house, a chapel, and a garden) was the château of Gaillon, renovated by Charles de Bourbon-Vendôme. At the beginning of the eighteenth century, Louis XIV built a garden retreat at Marly, a few miles north of Versailles, which initially served as his hermitage. However, true garden hermits only became widespread in England in the late eighteenth century.

    With the spread of Romanticism and the interest in mysticism among the English intellectual elite, hundreds of nobles hired decorative hermits, housing them in small dwellings built in the parks and gardens of their estates. Among the many accounts of the time, the Weld family employed an ornamental hermit who lived on the Lulworth estate in Dorset, while others were found in Painshill Park and Hawkstone Park. Charles Hamilton also hired a hermit for his residence. In a letter written by Hamilton, it is reported:

    “… he will be provided with a Bible, optical glasses, a mat for his feet, a hassock for his pillow, an hourglass for timekeeping, water for his drink, and food from the house. He must wear a camlet robe and never, under any circumstances, cut his hair, beard, or nails, wander beyond the limits of Mr. Hamilton’s property, or exchange a single word with the servant.”

    However, the hermit was dismissed only three weeks after taking up his post, having disappeared from the estate and been found in a local pub. During the mid-nineteenth century, the spread in England of garden gnomes imported from Germany gradually brought the trend to an end.

    Characteristics

    Ornamental hermits led a secluded and silent existence, living in small, rudimentary dwellings such as caves, huts, follies, or rock gardens decorated with scenic objects. They were not allowed to shave their hair or beard or to trim their nails, and they were usually disinclined to wash. They dressed like druids, wearing tunics, and like real hermits they possessed very few belongings.

    According to Edith Sitwell in her book “English Eccentrics” (1933), they could never leave the estate or converse with guests, and they appeared only at certain times of the day, when they could be observed by their masters and guests. Their employment contract lasted seven years, during which they were compensated with one meal a day.

    Usually, if the hermit resigned from the position before his term ended, he would be deprived of payment for his services.


    Garden hermits were sometimes assigned other tasks: in particular circumstances, they participated in noble dinners to entertain guests, served as waiters during parties, performed agricultural work, and engaged in philosophical discussions with guests.

    Literature

    Nonfiction

    • Gordon Campbell: The Hermit in the Garden. From Imperial Rome to Ornamental Gnome. Oxford University Press, Oxford 2013, ISBN 978-0-19-969699-4.
    • Isabel Colegate: A Pelican in the Wilderness. Hermits, Solitaries, and Recluses. Harper-Collins, London 2002, ISBN 0-00-257142-0.
    • Edith Sitwell: English Eccentrics. A gallery of most curious and remarkable ladies and gentlemen. Reprint. Wagenbach, Berlin 2000, ISBN 3-8031-1192-7, pp. 38–43.

    Fiction

    • Dieter Bachmann: The shorter breath. Novel. Residenz-Verlag, Salzburg 1998, ISBN 3-7017-1113-5.
    • Matthew Francis: The Ornamental Hermit (archived 14 May 2008 at the Internet Archive); winner of first prize in the 2000 poetry competition of the Times Literary Supplement and Blackwell’s bookstore.
  • Namahage: The Straw-Clothed Messengers of the Gods

    Namahage: The Straw-Clothed Messengers of the Gods

    Namahage refers to the masked, straw-clad messengers of the gods (visiting deities) who appear in annual events held on and around the Oga Peninsula in Akita Prefecture.

    Overview

    Namahage is a traditional folk event found on the Oga Peninsula in Akita Prefecture (Oga City), as well as in some areas at its base (Mitsushima Town in Yamamoto District and Katagami City). It has a history of over 200 years. According to surveys conducted by Oga City and others, Namahage events were held between 2012 and 2015 in about 80 of the city’s 148 districts. The custom is designated an Important Intangible Folk Cultural Property of Japan as “Namahage of Oga” and is registered on UNESCO’s Intangible Cultural Heritage list as one of the “Visiting Gods: Masked and Costumed Gods.” The Namahage, wearing grotesque masks and costumes made of straw, visit households to drive away evil spirits and to admonish the lazy.

    At Shinzan Shrine in Oga City, the appearance of the Namahage forms part of a ritual known as the Namahage Sedo Festival.

    Namahage masks have various shapes depending on the district.

    Events similar to Namahage are widely distributed throughout Japan, but Namahage in particular has gained overwhelming popularity and become a symbol of Akita Prefecture. Thanks to this appeal, Namahage is used not only in the prefecture’s tourism PR but also as a motif by private companies connected with Akita, and it frequently appears as decoration or entertainment in shops and restaurants associated with the prefecture.

    Earlier Timing of the Event

    During the Edo period, it was held on January 15th, the Little New Year of the lunar calendar, but with the calendar reform in the Meiji era, examples of holding it a month earlier, on January 15th of the Gregorian calendar, have also been seen.

    After World War II, it was further moved about two weeks earlier to December 31st of the Gregorian calendar. Note that the lunar New Year’s Eve is December 30th or 29th.

    Decline of Traditional Customs and Countermeasures

    Costumes other than the Namahage mask also differ depending on the district, but they are unified at the museum.

    Communities that conduct Namahage as an annual household-visiting event were once found throughout the Oga Peninsula, but due to the aging population and declining birth rate, their number has now almost halved. According to surveys in Oga City, about 35 districts discontinued the event in the 25 years up to 2015.

    Originally, unmarried men from the community would act as Namahage, but with the aging and shrinking of the population, the number of young people taking on this role has decreased; in some districts even high school students now perform it. There are also now cases of outsiders, such as relatives visiting from outside the community, taking on the role. In the Sugoroku district of Oga City, foreign exchange students from Akita University and the International University of Japan have also acted as Namahage. Furthermore, as the number of households with children, the primary targets of Namahage visits, decreases with the declining birth rate, motivation to carry out the event is waning. Other factors contributing to the decline include changes in residents’ lifestyles, such as working or being away on travel during the New Year holidays.


    As a countermeasure, since the fiscal year 2012, Oga City has been providing subsidies to neighborhood associations that conduct Namahage events. However, of the 148 neighborhood associations in the city in that year, only six resumed Namahage events, while nearly half, 71 associations, did not. This trend continued in the fiscal year 2015, with 69 associations not implementing the event.

    In the Hana-date station area of Oga City, there was consideration given to women dressing up as Namahage on New Year’s Eve in 2018, but this idea was shelved.

    “Tourism” Development

    On the Oga Peninsula, there is a facility called the “Oga Shinzan Tradition Hall” where visitors can experience Namahage year-round for tourism purposes. Additionally, beyond just the Oga region, Namahage is utilized in Akita Prefecture’s tourism and product PR activities, and statues are installed not only permanently but also temporarily in various places, becoming a symbol of Akita Prefecture. In districts where there is a tradition that Namahage are unmarried men, it is common for married men from those districts to dress up as Namahage for tourist events.

    New entertainments featuring Namahage as a motif have also been created for tourists. The leisure boom of the high-growth Showa era, centered on group tours, produced the “Namahage Dance,” and the resort boom at the beginning of the Heisei era produced the “Namahage Taiko” drum performance. These performances cross seasonal and regional boundaries, appearing not only at events such as the Akita Kanto Festival but also at various product exhibitions, and they are even staged as stand-alone shows. Unlike traditional Namahage, the performers wear masks resembling demons and costumes made not of straw but of durable materials such as yarn or hemp rope.

    In rural-style restaurants and eateries, folk-performance props are used as interior decoration, and employees perform folk entertainment to attract customers and increase their satisfaction. In establishments following this business model that feature Akita’s local cuisine and specialty products, it is common to decorate the interior with Namahage masks and for male and female employees to perform “Namahage shows” year-round according to the store’s schedule, a practice frequently seen not only in the Oga region but also elsewhere in and outside Akita Prefecture.

    “Transformation into Oni”

    Because Namahage have horns, they are sometimes mistaken for oni (ogres), but they are not oni. One theory holds that Namahage were originally visiting gods unrelated to oni, but that in the course of modernization they became conflated with oni and were mistakenly absorbed as a type of oni, a transformation that can no longer be disentangled. Works of children’s literature such as Hirosuke Hamada’s “The Red Oni Who Cried” (1933) depict pairs of red and blue oni (compare the red Jiji Namahage and the blue Baba Namahage), but it is unclear when such conventions originated.

    Naming

    During winter, when warming oneself by the irori (hearth), there is a condition called “Namomi” or “Ama” where low-temperature burns (thermal erythema) may appear on the hands and feet. From the practice of “peeling” these off to discipline the lazy and drive away misfortune while bestowing blessings, names like “Namahage,” “Amahage,” “Amamehagi,” and “Namomihagi” emerged. Therefore, attributing the character for “life” (生) to “nama” and interpreting it as “peeling life” is incorrect.

    The shape of Namahage masks varies by region, but a standardized pair of red and blue faces is common, where the red face is called Jiji Namahage and the blue face is called Baba Namahage.

    Customs

    Namahage

    Namahage are visiting gods who admonish laziness and discord and drive away misfortune. The visit was once an event of the Little New Year (later transferred from the lunar calendar to the Gregorian calendar) but became a New Year’s Eve event. At the end of the year, villagers dressed as Namahage, carrying large deba knives (or hatchets) and wearing oni masks and straw garments such as keran, mino, or habaki, visit households. They go about shouting cries such as “Are there any crybabies?!” and “Are there any naughty children?!” and enter homes to reprimand lazy individuals, children, and newlyweds. Household members respectfully welcome them, and after the head of the household explains any misdeeds committed by the family over the past year, the Namahage are treated to sake before being sent off. Finally, they depart saying “hebana” (meaning “see you later” or “goodbye” in the local dialect) and “Take care!”

    Originally, the masks were painted vermilion (red) and made of wood, but in recent years, they have been made of materials such as papier-mâché using bamboo sieves as a base, or cardboard. While the straw garments are often described as keran or mino, they are actually specific costumes called kede (or kende, kedashi) unique to the local area.

    Educational Function

    Namahage is not only a traditional folk event but is also understood as a means of educating young children in the Tohoku region. Parents first instill in their children a healthy fear of the Namahage, and then use verbal reminders of that fear to discourage undesirable behavior.

    Similar Events

    Along the Japan Sea coast of northern Honshu, there are events like Nagometakure in Nishitsugaru, Aomori Prefecture, Nagomehagi in Noshiro City, Akita Prefecture, Yamahage in Akita City, Namomihagi in southern coastal Akita Prefecture, and Amahage in Yuza Town, Yamagata Prefecture. There are also events like Amamehagi, mainly in Murakami City, Niigata Prefecture, and the Noto region of Ishikawa Prefecture. While the origin of the word differs, similar events are also found in Fukui Prefecture, where they are called “appossha,” among others.

    Similar events also exist along the Pacific coast of the Tohoku region (Sanriku Coast). In Iwate Prefecture, there are events like Nagami in Kuji City, Namomi in Noda Village, Fudai Village, and Yamada Town, Nanamitakuri in Kamaishi City, Suneka in Yoshizaki, Sanriku-cho, Ofunato City, Abladataki in Hirosaki City, Momidari in Kurihara City, and Taranagane in Otsuchi, Sanriku-cho. Further inland, there are events like Namomitakuri and Hikatatakuri in Tono City, among others.

    History

    Origin

    As with yokai (supernatural beings) and other folk traditions, the exact origins are unknown. In Akita there is a legend that the Emperor Wu of Han visited Oga and put five oni to work every day, but that on January 15th the oni were given their liberty and rampaged through the village; this legend is considered one account of the origin of Namahage.

    Chronology

    • In the fifth year of Bunsei (1822), the “Sugae Masumi Yuranki,” written by Sugae Masumi, was dedicated to the Akita Clan’s Meitokukan. In the section “Okano Kanakaze,” the author recorded Akita’s Little New Year event “Namomihagi” as observed on January 15th of the 8th year of Bunka (February 8th, 1811, in the Gregorian calendar). This “Namomihagi” is considered equivalent to the present-day “Namahage,” and the book is regarded as the earliest known literary reference.
    • In 1873 (Meiji 6),
      • On January 15th, the Little New Year was observed on the new-calendar date for the first time; this day corresponded to December 17th in the old lunar calendar.
      • On February 12th, the Little New Year of the old lunar calendar fell for the first time under the new calendar system.
    • In 1961 (Showa 36), the contemporary dancer, Baku Ishii (born in Misato, Yamamoto District, Akita Prefecture), choreographed a dance, and his son, Koki Ishii, composed music for the creation of the “Namahage Dance.”
    • In February 1964 (Showa 39), the “Namahage Sedo Festival,” which combines the Namahage rituals performed throughout the Oga Peninsula on New Year’s Eve with the sedo (sacred bonfire) festival held at Hoshitsuji Shrine on January 3rd, was first held. The event was later relocated to Shinzan Shrine.
    • On May 22, 1978 (Showa 53), “Namahage of Oga” was designated as an Important Intangible Folk Cultural Property of Japan.
    • In 1988 (Showa 63), the “Namahage Taiko” was created.
    • In 1995 (Heisei 7), the Meguro family residence (completed in 1907), known as the Kuriki family house, was relocated and opened to the public as the “Oga Shinzan Tradition Hall,” exhibiting folklore materials. It is also used as a place where visitors can experience the Namahage customs.
    • On July 23, 1999 (Heisei 11), the Namahage Ohashi Bridge was constructed on the Oga Central Wide-Area Agricultural Road (commonly known as the Namahage Line).
    • On the same day, July 23, 1999, the “Namahage House” was opened on a site adjacent to the Oga Shinzan Tradition Hall.
    • In 2004 (Heisei 16),
      • From October 16th, the JR Ou Main Line between Akita Station and Oiwake Station, as well as the JR Oga Line between Oiwake Station and Oga Station, were given the nickname “Oga Namahage Line.”
      • On November 27th, the first “Namahage Guide” certification exam was held.
    • In 2007 (Heisei 19),
      • In May, along National Route 101 in Oga City, Akita Prefecture, two Namahage standing statues, 15 meters and 12 meters tall, with red and blue faces, respectively, were installed. Made of reinforced plastic, the production cost was approximately 40 million yen, and the city claims them to be the “world’s largest Namahage statues.”
      • On June 1st, the Oga Comprehensive Tourist Information Center opened adjacent to the pair of Namahage statues.
      • On December 31st, a man dressed as a Namahage became drunk and caused a disturbance by intruding into the women’s bath at a hot-spring inn. In January of the following year, the Oga city administration, led by Deputy Mayor Masataka Ito, and district representatives discussed establishing guidelines for Namahage behavior. The creation of a manual was postponed, however, and the matter was reported as settled by “returning to the original tradition,” with no subsequent guidance from the administration.
    • On March 30, 2013 (Heisei 25), the Namahage House was reopened after renovation.
    • On October 3, 2014 (Heisei 26), preservation groups for the “Visiting God Rituals” designated as Important Intangible Folk Cultural Properties of Japan, found in nine cities and towns across the country, established the “National Conference for the Preservation and Promotion of Visiting God Rituals” with the aim of having the rituals collectively registered as UNESCO Intangible Cultural Heritage.
    • On November 29, 2018 (Heisei 30), the “Visiting Gods: Masked and Costumed Deities” (including Namahage from ten regions across Japan) were officially registered on the UNESCO Representative List of the Intangible Cultural Heritage of Humanity during the 13th session of the Intergovernmental Committee held in Port Louis, Mauritius.

    Tourism

    Currently, as the Namahage ritual involves visiting households in the Oga region only on New Year’s Eve, it is difficult for outsiders to encounter this custom. Therefore, various tourism developments have been made, including facilities and statues where visitors can experience Namahage, the creation of performances and events in the guise of Namahage, and the production of souvenirs and characters. Tours allowing tourists to dress up as Namahage on New Year’s Eve are also organized by the Oga City Tourist Association.

    Namahage Shibatou Festival

    The Namahage ritual was originally held on Little New Year (January 14th/15th, whether in the old lunar calendar or the new calendar). When the old lunar calendar is converted to the new calendar, the date varies each year but usually falls on the second Friday, Saturday, and Sunday of February, close to the lunar calendar’s Little New Year. The “Namahage Shibatou Festival” is held annually on these days. The first event was held in 1964 (Showa 39) at Hoshitsuji Shrine in the Oga Onsen area, but in later years the venue was moved to Myama Shrine.

    It is mainly enjoyed as a tourist event. It is said that wrapping straw, which falls from the Namahage’s coat, around one’s head and body brings good health.

    Statues

    Two statues, 15 and 12 meters tall (Aomorikuma, Colossus of Namahage, 2008)

    In Akita Prefecture, there is a custom of erecting a straw-bodied, wooden-faced doll called “Kashima-sama” at the entrance of villages, with some reaching heights of up to 4 meters. Although the appearances of Kashima-sama and Namahage are similar, there is no mention of religious significance in the installation of Namahage statues like with Kashima-sama, and their locations are not limited to the entrance of settlements.

    In recent years, there has been a trend of installing a pair of red and blue Namahage statues, but it is unclear whether this red and blue pair is a standard tradition in the dozens of villages that have inherited the tradition.

    Location | Body length/height | Year of installation
    Oga General Tourist Information Center | 49 feet / 39 feet | 2007
    Monzen District | 33 feet |
    Oga Onsen | |
    JR Oga Station | | 2012
    Namahage Bridge | | 1997

    Related artifacts

    Items using Namahage

    • Air Self-Defense Force Akita Rescue Team Unit Mark
    • Emblem of Blaublitz Akita
      • On January 18, 2010, a new emblem was announced, featuring a diagram of Namahage placed in the center of the emblem.
    • Unicode characters
      • On October 11, 2010, as part of Unicode 6.0.0, the emoji for “JAPANESE OGRE” (U+1F479), representing Namahage, was registered.

    Items inspired by Namahage

    • Super God Nega
      • A character from Akita Prefecture inspired by Namahage. The creator is a former professional wrestler residing in Nikaho City. The theme song is sung by Ichiro Mizuki. His vehicles include a combine harvester with two rows of teeth and a hatahata-shaped motorcycle.
    • Namy Hagie
      • The mascot of the Akita World Games 2001 held in 2001. After the event, it was transferred to the Akita Shinkin Bank and used as a mascot for the bank’s distribution materials.
    • Oga Namahageez
      • A three-member female local idol unit from Akita Prefecture that appeared in the anime “Wake Up, Girls!”. All three members wear costumes resembling a “Jijinamahage” with a demon-like flushed face.
    • Gosha Hagi
      • A monster of the Fang Beast species that appears in Monster Hunter Rise. Also known as “Yuki Oni-jū.” It freezes its arms with its cold breath and uses them as weapons like ice knives.

    Named Namahage but not actual Namahage

    • Akita Namahage Agricultural Cooperative
      • After its establishment, a character inspired by Namahage was created, and as a result of the name selection, it became “Onimaru-kun.”
    • During the period from May 1985 (Showa 60) to September 2006 (Heisei 18) and from May 2012 (Heisei 24) to March 2015 (Heisei 27), a special edition car called “Namahage,” based on the 1300cc model of the Toyota Corolla, was sold exclusively in Akita Prefecture.
    • In episode 38 of “Ultraman A” titled “Resurrection! Father of Ultra,” a “Legendary Monster Namahage” appears.
    • In the game and anime “Yo-kai Watch,” there is a character named “Namahage.”
    • In the anime “K-On!” during the “Live House!” special, there is a band named “Namaha-ge” listed on the band lineup. The characters cannot be confirmed on screen.
    • In the rakugo segment of “Shouten” (a Japanese TV program), the term often refers to the comedian Kaoru Katsura (along with skeleton, mummy, ghost, and monster).

  • Gerousia: The Spartan Equivalent of the Senate

    Gerousia: The Spartan Equivalent of the Senate

    The gerousia (from the ancient Greek γερουσία / gerousía, derived from γέρων / gérôn, meaning “elder”) is the Spartan equivalent of the Senate: it is an aristocratic and oligarchic element, as opposed to the assembly of the people. It was the name given by the Spartans to their Council of Elders.

    History

    Established by the Spartan lawgiver Lycurgus, the gerousia is exceptional both in its recruitment and in its powers. It consists mainly of an assembly of 28 elders aged at least 60, the age at which Spartans were released from military service. According to Polybius, the gerousia ensures the balance of the regime while protecting the weakest. According to Isocrates, “the gerontes are placed at the head of all affairs.” They thus hold a power comparable to that of the Athenian Areopagus, which Lycurgus is said to have imitated. The gerontes (γέροντες), after presenting their candidacy, are elected for life by acclamation of the assembly of the people, the apella. Once in office, they are not accountable and can sometimes engage in favoritism.

    The gerousia implicitly continues the council of elders found in the Homeric period as the king’s council, and also in certain oligarchic states, where the elders of the great families formed restricted councils, as in Sparta and several other Dorian cities such as Cnidos. In democratic states, by contrast, such councils were open to all. The gerontes hold the most important function in Sparta, as they judge crimes and important legal matters, and their power is superior even to that of the ephors. Indeed, the account of the conspiracy of Cinadon attests that before making major decisions, such as moving against suspected conspirators, the ephors first consulted the gerontes.

    The function of the gerousia was probably overestimated in antiquity. Already by the mid-4th century BC, Plato described this Senate as a power comparable to that of the kings. According to Herodotus, some gerontes had very close family ties with the kings, to the extent that when the kings were absent they could vote on their behalf. Among these gerontes one can cite Chilon, who appears in the list of the Seven Sages of Greece. In reality, they have the power to bring the kings to justice, to present motions, and to participate in the affairs of the state.

    Method of Election

    When a geronte died, the candidates for his succession presented themselves before the assembled people in an order determined by lot. It is the apella that elects them by acclamation; the candidate does not have the right to speak. There was a concern for objective voting, to avoid the haranguing of crowds practiced in Athens or Rome: those who recorded the shouts were shut away out of sight of the candidates, thus avoiding any favoritism. Aristotle nevertheless described the acclamation process as childish, and since the procedure was entirely original, it was the subject of much criticism.

    Gerontes are appointed for life; theirs is the only magistracy, along with the royal one, exercised without a time limit. This shields their actions from any reprisal: since death is the only end of their magistracy, they are never held accountable, and thus risk no legal prosecution for engaging in clientelism. This is illustrated by the recurrent appointment of new gerontes chosen from among the relatives of sitting gerontes to ensure political stability. The office of geronte was held for life not only in Sparta but also in the Cretan cities and in Marseille, Elis, and Cnidos. Election to the gerousia was regarded by the Spartans as a particularly great honor.

    Privileges

    The gerousia constitutes the supreme court. It judges serious crimes, for example, cases of the murder of a citizen or by a citizen. In association with the ephors, it can judge the kings. It also arbitrates rivals during a royal succession, as in the case of Agesilaus II and Leotychidas.

    From a political point of view, it is the gerousia that prepares decisions. Bills are submitted to it, and it can block them by exercising its right of veto. Plutarch, to resolve the problem of this redundancy of powers, suggests that the right of veto was actually used to block amendments voted by the Assembly. According to Edmond Lévy (“La grande Rhêtra,” Ktèma 2, 1977), it is rather an evolution of their power. The exercise of the veto is only attested once, regarding the reforms of Agis IV.

    In fact, the importance of the gerousia seems quite exaggerated by the Ancients who, from Pindar to Plato (Laws, III), through Demosthenes (Against Leptines) and Aeschines (Against Timarchus), insist on the power and authority of the gerontes. The authors of the 4th century BC, great admirers of Sparta, undoubtedly identified more easily with the gerousia than with the ephorate or the dual kingship. Later authors, such as Polybius or Plutarch, may have been influenced by the model of the Roman Senate, which Plutarch calls “gerousia” in the Life of Romulus. Apart from these discussions, however, ancient texts mention the gerousia little.

    In the political domain, their power therefore seems quite weak. However, their authority, magnified by their judicial prerogatives, is unquestionable. Both kings and ephors sought to conciliate them to pursue their policies.

    References

    • Edmond Lévy, Sparta: Political and Social History to the Roman Conquest, Paris, Seuil, coll. “Points Histoire”, 2003 (ISBN 2-02-032453-9)
    • Fabian Schulz, The Homeric Councils and the Spartan Gerusia, Düsseldorf, Wellem Verlag, 2011 (ISBN 978-3-941820-06-7)
    • Guy Rachet, Dictionary of Greek Civilization, Paris, Larousse, 1992
  • Principle of Double Effect: Characteristics and Examples

    Principle of Double Effect: Characteristics and Examples

    The principle of double effect is a principle applied in bioethics to assess the moral dimension of an action with dual (positive and negative) effects.

    General Characteristics

    This principle is associated with the teachings of the Catholic Church, which professes moral absolutism, forbidding the commission of evil regardless of how much good might come of the evil act. Sometimes, however, an action produces both a good effect, intended by the agent, and an evil effect that is not desired.

    Beauchamp and Childress, in their Principles of Biomedical Ethics, formulate the following conditions for the permissibility of the analyzed action:

    • The action in question must be inherently good (Tadeusz Biesaga formulates this differently, stating that an action can be undertaken if its natural consequence is good, but it cannot be undertaken if it is bad).
    • The intention of the actor committing the act must also be good; thus, the actor must directly desire the good outcome of the action, while he may only indirectly allow and tolerate the bad outcome, but he cannot intend to commit it.
    • The bad effect cannot be the means of achieving the good.
    • The good effect must at least balance the bad, if not outweigh it; that is, the good and bad effects caused by the action must be proportionate.

    There is still one more condition:

    • There are no other ways to achieve the good effect.

    Biesaga also notes that according to this principle, the actor committing the act is responsible for its natural consequences, if known. However, regarding a consequence that was not known at the time of performing the action but was discovered by science later, the responsibility for its occurrence does not lie with the actor.

    Abortion

    The principle of double effect justifies certain methods of treating ectopic pregnancy, depicted in the above figure from Graaf’s work of 1669

    An example of applying moral absolutism to the issue of the permissibility of abortion is provided by Father Jacek Woroniecki. He writes in his Catholic Educational Ethics that in a situation where the life of a pregnant woman can only be saved by sacrificing the life of the fetus, it is not permissible to do so, even if the fetus is destined to die shortly afterward. In such a situation, both should be allowed to die.

    On the other hand, the application of the principle of double effect allows for the termination of pregnancy under certain conditions. For example, it justifies a salpingectomy along with an ectopic pregnancy if the termination of the pregnancy is only a foreseeable but not desired outcome of the surgery. A debatable case is when, instead of salpingectomy, a small resection of the fallopian tube is performed, involving only the site of implantation, or when the embryo itself is removed (salpingostomy or salpingotomy). Here, the death of the embryo becomes, according to some, a means to achieve the desired outcome, while for others, it is an unavoidable but incidental effect of its removal. The proportionality of such actions and their necessity are also questioned, especially if a larger resection can be performed. A similar issue arises with the pharmacotherapy of ectopic pregnancy using methotrexate, often negatively evaluated from the perspective of the principle of double effect, although it is also subject to debate.

    Another situation in which the death of the child is an unintended consequence of an action aimed at saving the mother’s life is the treatment of cancer with chemotherapy. Although such chemotherapy leads to the death of the fetus, its aim is to combat the tumor; it is therefore a permissible treatment for a pregnant woman with cancer. Some ethicists also accept the induction of labor at an early stage, when the fetus cannot yet survive outside the mother’s body.

    Analgesia

    The relief of pain accompanying illness is approved by both medical and church authorities. It is the duty of medical personnel, and a patient can waive this right only through an informed decision. Typically, a pain ladder is used, starting with non-steroidal anti-inflammatory drugs and ending, if necessary, with strong opioids. Sometimes, however, especially in terminally ill cancer patients, even the usual doses of strong opioids prove insufficient, because the patient develops tolerance and the medication gradually loses its effectiveness. Pain relief is then achieved only by administering lethal doses, i.e., doses that cause death in some patients and may cause the death of this patient as well.

    Administering a lethal dose carries the risk of being accused of euthanasia, which is illegal in many countries. This accusation is refuted using the principle of double effect. The administration of a lethal dose of an analgesic allows for the fulfillment of the following conditions:

    • Alleviating the patient’s suffering is inherently good and demonstrates concern for the patient. Refusing analgesia would result in immense suffering for the dying patient.
    • The intention of administering the medication must be good – the person must intend to alleviate the patient’s suffering. They cannot desire the death of the patient, although they should be aware of the possibility.
    • However, death cannot be the means. This means that actions aiming to eliminate pain by killing the patient (as in euthanasia) are not acceptable. Death cannot serve the desired outcome; it cannot lead to it.

      It can only be an unintended but tolerated or foreseeable consequence.
    • The desired effect, in the form of pain relief, must be proportionate to the threat posed to the patient. In cancer, severe, all-encompassing pain often occurs, involving not only physical but also spiritual and psychological suffering. Such pain has no moral value in itself and does not fulfill its biological function. Alleviating the patient’s suffering in such cases is considered proportionate to the risk associated with the treatment.

    Such high doses of strong opioids are only used when weaker analgesics and smaller doses are ineffective.

    This reasoning is subject to criticism. Firstly, it is difficult to distinguish between intended and foreseeable consequences.


    However, in ethics, there is usually no certainty. Critics of the above argument point out that there is no moral difference between gradually increasing opioid doses to meet the last of the aforementioned conditions (ineffectiveness of smaller doses) and administering a one-time lethal dose, which is not approved by the principle of double effect.

    Others argue that previous attempts to use smaller doses increase the patient’s suffering and that their treatment should immediately begin with effective opioid doses. However, a positive aspect of applying this principle to the analgesia of terminally ill patients is reducing the fear of medical personnel of being accused of euthanasia, which often leads to their administration of too small, ineffective opioid doses.

    There are also legal doubts. Some lawyers regard actions consistent with the principle of double effect as prohibited acts committed with eventual intent (dolus eventualis); on this reading, the penal codes of many countries would reject the principle of double effect. In Poland, however, the Medical Code of Ethics, in Article 29, mandates the alleviation of a patient’s suffering until the end of life. There is also a view that, where the threat to life is uncertain, a doctor is exempt from liability for administering opioids, unlike for administering a dose that will certainly cause the patient’s death. This view rests on the erroneous premise that properly conducted analgesia never includes lethal doses. Such views reinforce, within the medical community, both the fear of criminal liability for conducting effective analgesia and the inappropriate therapy of dying patients with doses that do not affect them. Administering effective doses is defended by invoking a state of higher necessity, the concept of the primary legality of the action, and a countertype (in this case extra-legal, but derived from the Medical Code of Ethics, the Act on Healthcare Facilities, and the Act on the Profession of Physician and Dentist).

    Transplantology

    The principle of double effect has also been invoked in transplantology. The issue concerns maintaining the dead donor rule in the case of donors with non-beating hearts.


    The cessation of circulation is not a reliable predictor of brain death, nor, therefore, of a person’s death.

    Consequently, accepting such donors, motivated by the shortage of organs for transplantation, may transform the doctor-patient relationship into a doctor-organ-source relationship.

    Veatch counters this threat precisely through the principle of double effect. Szewczyk, however, criticizes this approach, considering the last of the aforementioned criteria of the principle unmet, citing living donors and donors diagnosed with brain death.

    However, even the retrieval of organs from a living donor, which indeed worsens their physical health, is justified by the principle of double effect, although the principle of totality is also applied here. The physical suffering of the donor caused by the procedure is offset by a spiritual benefit. Such an approach was supported by Pope Pius XII during his speech at the International Neuro-Pharmacological Colloquium.

  • Lesser of Two Evils: Meaning and Examples

    Lesser of Two Evils: Meaning and Examples

    The lesser evil, or the lesser of two evils, is an ethical principle that justifies choosing one evil in order to avoid a greater one. The principle presupposes a strictly binary dilemma, with no possibility of a third option (tertium non datur); for example, the choice offered may be between an action that implies one evil and an omission that implies a different one, which requires evaluating which of the two evils is lesser.

    The impossibility of choosing between two equally attractive options is the paradox known as “Buridan’s ass” (which would die of hunger and thirst if placed equidistant from food and water, unable to move in either direction). Examples of ethical dilemmas are the plank of Carneades (where a shipwrecked man saves himself at the cost of causing another to drown) and the trolley problem (where one must choose to save a greater number of people at the cost of the lives of a smaller number, or vice versa).

    Scylla and Charybdis

    The myth of Scylla and Charybdis presents the choice between two evils that Ulysses had to face; by choosing to approach Scylla, he lost six companions, but if he had sailed alongside Charybdis, they all would have perished.

    Therein dwells Scylla, yelping terribly. Her voice is indeed but as the voice of a new-born whelp, but she herself is an evil monster, nor would anyone be glad at sight of her, no, not though it were a god that met her. Verily she has twelve feet, all misshapen, and six necks, exceeding long, and on each one an awful head, and therein three rows of teeth, thick and close, and full of black death. Up to her middle she is hidden in the hollow cave, but she holds her head out beyond the dread chasm, and fishes there, eagerly searching around the rock for dolphins and sea-dogs and whatever greater beast she may haply catch, such creatures as deep-moaning Amphitrite rears in multitudes past counting. By her no sailors yet may boast that they have fled unscathed in their ship, for with each head she carries off a man, snatching him from the dark-prowed ship. “‘But the other cliff, thou wilt note, Odysseus, is lower—they are close to each other; thou couldst even shoot an arrow across—and on it is a great fig tree with rich foliage, but beneath this divine Charybdis sucks down the black water. Thrice a day she belches it forth, and thrice she sucks it down terribly. Mayest thou not be there when she sucks it down, for no one could save thee from ruin, no, not the Earth-shaker. Nay, draw very close to Scylla’s cliff, and drive thy ship past quickly; for it is better far to mourn six comrades in thy ship than all together.’

    Homer, Odyssey, Book XII. The words are spoken by Circe.

    The Latin phrase incidit in Scyllam cupiens vitare Charybdim (“he fell upon Scylla while wishing to avoid Charybdis”) became proverbial, with a meaning similar to expressions like “out of the frying pan into the fire,” “escape the fire only to fall into the coals,” or “between a rock and a hard place.” Erasmus includes it as an ancient proverb in his Adages, although its earliest appearance is in the Alexandreis, a 12th-century Latin epic poem by Gautier de Châtillon.

    History of philosophy and law

    The principle of the lesser evil apparently contradicts other ethical principles which hold that it is never permissible to commit any evil, such as those Plato, through the mouth of Socrates, poses about injustice (αδικία, adikía) in the dialogues Gorgias (it is preferable to suffer injustice than to commit it) and Crito (injustice should not be committed even to avoid a greater injustice). In contrast, the principle is clearly defended by Aristotle in Book II of his Ethics, whose Latin version spread in 13th-century Western Europe, and was also adopted by Thomas à Kempis: De duobus malis, minor est semper eligendum (of two evils, the lesser must always be chosen).

    Aristotle defined virtue as a mean between two vices; proposing that it is advisable to fall into the less erroneous vice rather than the more erroneous one when one cannot hit the virtue (ta elachista lepteon ton kakon -“one must take the lesser of two evils”-).​ Cicero, in De officiis, gives as an example of choosing the lesser evil (minima de malis) not an easy way out but an example of heroism: the physical suffering that Marcus Atilius Regulus chose to endure in order to avoid breaking his oath.​

    In the Middle Ages, the principle is reflected in Ivo of Chartres and in the Decree of Gratian, which relies on texts such as this one from the Eighth Council of Toledo: si periculi necessitas unum ex his temperare compulerit, id debemus resoluere quod minori nexu noscitur obligari (“if an inexcusable danger compels us to perpetrate one of two evils, we must choose the one that makes us less guilty”). With that wording it is evident that, although there is a moral obligation to inflict the lesser evil, this obligation does not exempt one from guilt or responsibility, since it is still an evil. As for how to distinguish which of the two evils is lesser, there is an inconsistency among the manuscripts that transmit the text of the Council: one reads purae rationis acumine (“by the sharpness of pure reason”) and another orationis acumine (“by the sharpness of prayer”).

    Electoral systems and psychological techniques

    In political elections, especially in two-party systems, the options offered to the voter can both be seen as bad, so casting a vote usually does not mean identifying with a candidate considered optimal, but rather avoiding the candidate considered worse.​ In such contexts, expressions like “strategic voting” or “holding one’s nose while voting” are used.​​

    There are theories and psychological techniques used in marketing and political campaigns (decision theory, neuroeconomics, and neuropolitics) that seek to steer consumer or voter preference by presenting comparative advantages or disadvantages. When, instead of only two options, a third, instrumental option is presented that lies closer to one of the others, the preference can be made to lean in the desired direction (the so-called decoy effect).

    Bioethics and medical ethics

    The amputation of a foot to avoid greater harm (losing the entire leg or one’s life) is a lesser evil accepted in medical practice, expressed in a commonplace applicable to any other circumstance: “cutting out the unhealthy part.”

    An important question is whether the principle or criterion of the lesser evil can be applied in bioethical and medical-ethical decision-making.

    To admit techniques like amputation, even those who refuse to accept the principle of the lesser evil resort to distinguishing between “physical evils” and “moral evils.” According to moral rigorists, only physical evils can be weighed against each other to determine which is lesser, something impossible between two moral evils (absolute evils), while between a physical evil and a moral evil the physical evil should always be preferred.

    Justification of War as the Lesser Evil

    While some aspects of ancient civilizations manifest a consideration of war as a good in itself or an honorable activity, others show it as an evil to be avoided as much as possible, but one must be aware that it cannot always be avoided (si vis pacem, para bellum). This is the case in the Greco-Roman world: according to Titus Livius, the Roman king Tullus Hostilius, despite his warlike nature, “calls the gods to witness that the calamities of this conflict [against the Albanians] will fall upon the people who first refused to listen to the demands of the ambassadors.” Cicero condemned the formalism that the rules of the ius fetiale imposed regarding the justification of the causes of war, proposing defense and revenge as the only just causes;​ although in some cases, he also proposes that war can be undertaken to “enlarge the boundaries of peace, order, and justice.”​

    Illa injusta bella sunt quae sunt sine causa suscepta, nam extra ulciscendi aut propulsandorum hostium causam bellum geri justum nullum potest (“wars undertaken without reason are unjust, for no war can be waged justly unless for the purpose of repelling or avenging an enemy”).

    Cicero, De Republica, III, XXIII.​

    … if the iniquity of the opposing party is what leads the wise to uphold a just war, this iniquity should cause regret, since it is characteristic of human hearts to sympathize… So anyone who considers with pain these calamities so great, so horrendous, so inhuman, must confess to misery; and whoever suffers them, or considers them without feeling in his soul, is erroneously and miserably deemed blessed, for he has erased from his heart all human sentiment.

    Augustine of Hippo, City of God, XIX.​

    Christianity went from proposing radical pacifism in the time of the persecutions to justifying wars once the Roman Empire was Christianized in the 4th century. From then on, the causes of just war (bellum iustum) were theorized, always to restore peace and repair an injustice received: thus in Augustine of Hippo (just wars avenge injustices) or Isidore of Seville (“to recover lost goods or repel and punish enemies for the unjust war initiated by madness or without legitimate cause”), whose ideas were collected in the Decree of Gratian. Other medieval authors who theorized about just war were Accursius, the decretists and decretalists, and Thomas Aquinas. For the latter, three requirements are necessary: legitimate authority (auctoritas principis), just cause, and right intention (intentio recta) to promote good and avoid evil. Thomism developed in a new context (that of humanism and the nation-states of the Modern Age), notably with the School of Salamanca (Francisco de Vitoria). With these Catholic thinkers, and with Protestants like Hugo Grotius and Alberico Gentili, the ius in bello (in war, damage should be limited to combatants, with proportionality between the lives destroyed and those saved) was added to the ius ad bellum (to wage war, there must be a just cause).

    President Harry Truman justified the use of atomic bombs on Japan based on a calculation of lives, according to which they saved more lives than they cost, assuming that any other option to end the war would have meant a greater number of casualties on both sides, although such calculations and motivations are disputed.​

    … the truth is very simple: to survive, we often must fight, and to do so, we must get our hands dirty. War is evil, and sometimes it is the lesser evil.

    George Orwell, Homage to Catalonia.​

    How can you say that a good cause sanctifies even a war? I tell you: the good war sanctifies every cause!

    Friedrich Nietzsche, Thus Spoke Zarathustra, 1883-1885.

    Collaborationism as the Lesser Evil

    Those who denounce the moral fallacy of this argument are generally accused of aseptic moralism unrelated to political circumstances, of not wanting to dirty their hands; and it must be admitted that it is not so much political or moral philosophy (with the sole exception of Kant, who is usually accused of moral rigorism), but religious thought that has most unequivocally rejected all compromises with lesser evils. Thus, for example, the Talmud argues…: If they ask you to sacrifice a man for the security of the whole community, do not give him up; if they ask you to let a woman be raped for the sake of all women, do not let her be raped. … John XXIII … “never to act in collusion with evil in the hope that, by so doing, they can be of use to anyone.”

    Politically speaking, the weakness of the argument has always been that those who choose the lesser evil quickly forget that they are choosing evil. … the extermination of the Jews was preceded by a very gradual series of anti-Jewish measures, each of which was accepted with the argument that refusing to cooperate would make things worse, until a stage was reached where nothing worse could have happened.

    Hannah Arendt, Personal Responsibility Under Dictatorship (1964).

    Tolerance as the Lesser Evil

    For those who conceive of political power as a means to achieve the "moral maximum," whether from religious perspectives ("to ordain good and forbid evil," "heresy must be punished") or from secular totalitarianism, tolerance is a concession to evil, since it allows evil to exist. In the context of the religious wars of 16th-century France, the politiques saw tolerance as the only way to achieve social peace, and Henry of Navarre agreed to convert to Catholicism in order to reign over a united country ("Paris is well worth a mass"), granting the Protestants guarantees of security with the Edict of Nantes. In the field of international relations, from the Peace of Westphalia (1648) onwards, the imposition of religion ceased to be a priority for European states, as it had been in the previous century.

    Realism in international politics requires tolerating moral differences with allies and enemies. The "balance of terror" of the mid-20th-century Cold War implied avoiding direct confrontation between the superpowers if "mutual assured destruction" was to be avoided, forcing a peaceful coexistence under so-called realpolitik (renouncing the imposition of one's own principles in order to reach an understanding with the adversary). Game theory was even developed to weigh the logical consequences of confrontation between adversaries with opposing interests: in the "prisoner's dilemma," a player may voluntarily accept a punishment in order to avoid a greater one by cooperating with an adversary who faces the same prospect.
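    The logic of the prisoner's dilemma can be sketched numerically. The payoff values below are the standard textbook sentences (years in prison, lower is better) and are an illustrative assumption, not taken from the source:

```python
# Minimal sketch of the prisoner's dilemma: for each player, confessing
# ("defect") is the best response no matter what the other does, yet mutual
# defection leaves both worse off than mutual silence would have.

# payoffs[(my_move, their_move)] = (my_years, their_years)
payoffs = {
    ("cooperate", "cooperate"): (1, 1),   # both stay silent: light sentence
    ("cooperate", "defect"):    (10, 0),  # I stay silent, they confess
    ("defect",    "cooperate"): (0, 10),  # I confess, they stay silent
    ("defect",    "defect"):    (5, 5),   # both confess
}

def best_response(their_move):
    """Return the move that minimizes my sentence, given the other's move."""
    return min(("cooperate", "defect"),
               key=lambda mine: payoffs[(mine, their_move)][0])

# Defecting dominates whatever the adversary does...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...yet mutual defection (5 + 5 years) is collectively worse than mutual
# cooperation (1 + 1 years): the individually rational choice does not
# yield the collectively lesser evil.
print(sum(payoffs[("defect", "defect")]),        # 10
      sum(payoffs[("cooperate", "cooperate")]))  # 2
```

    The dilemma's bite is visible in the last two lines: each player's dominant strategy leads jointly to the outcome both would have ranked below mutual cooperation.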

    United with the principle of lesser evil, and as a consequence of it, we can state the principle of tolerance. This has great relevance due to the pluralistic environment of our society. In a certain sense, we can understand it as “not preventing an evil, although capable of doing so, but without expressly approving it.” Consequently, regarding truth and opinion, the stance is one of respect, while in the face of evil and error, tolerance is appropriate. We can distinguish two types of tolerance: one is dogmatic, which permits everything, as for it, everything is equal. This position cannot be sustained… [but] it continues to be supported by many thinkers, for whom the relativistic view of tolerance (the dogmatic one), often presented as ethical pluralism or the neutrality of the State, is the condition for the possibility of peaceful democratic coexistence. The other type of tolerance is practical, allowing error without approving it. That is, it focuses on fundamental ethical values for coexistence: peace, freedom, and justice to legally guarantee the demands of human dignity. This second type, with some nuances, can be admitted. It is what has been called “negative permission of evil.” Tolerance, in the strict sense, that is, as a moral principle, can be defined as follows: “in some circumstances, it is morally permissible not to prevent an evil – although capable of preventing it – in consideration of a higher good or to avoid more serious disorders.”

    Traditionally, the argument of the lesser evil has been applied in social ethics within the framework of the permission by public authorities of certain evils to avoid worse ones ["But when one must choose between two dangers, in each of which there is an imminent peril, it is better to choose that one from which less evil follows." Thomas Aquinas, On Kingship, I, 6]. From the 16th century onwards, this notion was extended to the distinction between the sphere of personal ethics and the sphere of legal order and governmental action. It was then that the contemporary concept of "tolerance" arose, understood as the attitude of states that accept the primacy of individual decisions over public authorities in the less relevant aspects of the common good, gradually opening an ever more marked dissociation between individual morality and public order, as in John Locke's Letter Concerning Toleration (1689) or Voltaire's Treatise on Tolerance (1763). However, this gradual process has never ceased to reveal its contradictions. The sophistical uses of the argument of the lesser evil, and its use in contexts detached from the absolute ethical demands of the truth about the human person and their dignity, have been denounced by contemporary thought, especially since the Second World War. Hannah Arendt made a sharp critique of the aberrant and ideological uses of the argument of the lesser evil to numb moral consciousness during the National Socialist period in her essay Personal Responsibility Under Dictatorship (1964): "The acceptance of the lesser evil was consciously used to accustom officials and the population to generally accept evil itself."

    Justification of Torture as the Lesser Evil

    Since at least Jeremy Bentham, the use of torture has purportedly been justified as a lesser evil through the so-called "ticking time bomb scenario" (TBS): the authorities have caught a terrorist who has just planted a bomb, and he refuses to reveal its location, so there is no way to prevent the death of many people unless he is forced to do so. Alan Dershowitz argues that both evils must be weighed and that one must be willing to accept the consequences of the choice, concluding that torturing the detainee is the lesser evil. Michael Ignatieff agrees with the approach, warning that it could be applied only as a last resort, when it is impossible to resolve the emergency by other methods; but he insists on the danger of its becoming a greater evil if it is disconnected from the assumptions of the scenario (a real and immediate threat) and turns into a systematic recourse (applied to mere suspects or to potential events), as arguably happened after the attacks of September 11, 2001.

    References

    • Spinoza, Benedict de (2017) [1677]. "Of Human Bondage, or of the Strength of the Affects". Ethics. Translated by W. H. White. New York: Penguin Classics. p. 424. ASIN B00DO8NRDC.
    • "Jill Stein cost Hillary dearly in 2016. Democrats are still writing off her successor". Politico.
    • "Chirac's new challenge". The Economist. 6 May 2002. Retrieved 15 April 2011.
    • Schneider, William (18 September 1988). "The Evil of Two Lessers". Los Angeles Times. Retrieved 12 September 2020.
    • Keinon, Herb (6 November 2016). "Clinton vs. Trump: 'The evil of two lessers'". The Jerusalem Post. Retrieved 12 September 2020.
    • Noted by Edward Charles Harington in Notes and Queries, 5th Series, 8 (7 July 1877): 14.

    Morton’s Fork: Meaning and Origin

    A Morton's Fork is a type of false dilemma in which contradictory observations lead to the same conclusion. It is said to have originated with the tax-collecting practices of John Morton. The first known use of the term dates to the mid-19th century; the only earlier known mention is a claim by Francis Bacon that the technique was an established tradition.


    Dilemma

    Under Henry VII, John Morton was appointed Archbishop of Canterbury in 1486 and Lord Chancellor in 1487. He raised tax funds for his king by arguing that someone living modestly must be saving money and could therefore afford to pay taxes, while someone living extravagantly was obviously wealthy and could therefore also pay. Morton's Fork may actually have been invented by another of Henry's supporters, Richard Foxe.

    Henry VII

    In some cases, as in Morton's original use, one of the two observations may well be valid, but the other is purely sophistic.

    In other cases, it may be that neither observation can be trusted to properly support the conclusion.


    For instance, one might assert that a person suspected of a crime who acts nervously must have something to feel guilty about, while one who acts calmly and confidently must be practiced or skilled at hiding guilt. Either observation thus has little or no probative value, since each could equally be evidence for the opposite conclusion.
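    The point that such an observation carries no evidential weight can be made quantitative with a small Bayesian sketch (the probabilities below are illustrative assumptions, not from the source):

```python
# If a Morton's Fork treats BOTH "nervous" and "calm" as pointing to guilt,
# then guilt and innocence predict the observed behaviour equally well,
# the likelihood ratio is 1, and the observation cannot shift our belief.

def posterior(prior, p_obs_given_guilty, p_obs_given_innocent):
    """Bayes' rule: P(guilty | observation)."""
    num = p_obs_given_guilty * prior
    den = num + p_obs_given_innocent * (1 - prior)
    return num / den

prior = 0.5  # initial belief that the suspect is guilty (assumed)

# The fork makes either behaviour equally compatible with guilt and with
# innocence, so the posterior always equals the prior: zero probative value.
for p in (0.7, 0.3):  # P(behaviour | guilty) == P(behaviour | innocent) == p
    assert abs(posterior(prior, p, p) - prior) < 1e-12  # no update at all
```

    Only when the two conditional probabilities differ does the observation move the posterior, which is precisely what the fork's rhetorical framing prevents.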

    A Morton’s Fork can lead a person not to make a choice at all, possibly with undesirable consequences. A more thorough consideration of the alternatives may sometimes reveal an additional option or show that one of the available options does produce a less unfavorable result than the other. Sometimes a Morton’s Fork can also be solved by finding an exception to the rule.

    Concepts similar to Morton's Fork include being caught "between hammer and anvil" or "between Scylla and Charybdis": a choice between two or more equally undesirable alternatives. Morton's Fork is the logical counterpart of Buridan's ass, in which the alternatives are instead equally desirable.

    Examples

    The “Morton’s Fork coup” is a maneuver in the game of contract bridge that uses the principle of Morton’s Fork.

    An episode of the television series Fargo is titled “Morton’s Fork,” referencing the dilemma.


    It is also mentioned in episode 16 of season 5 of NCIS: Los Angeles, "Fish Out of Water."