
Fact of the Day


DarkRavie


Fact of the Day - CINCO DE MAYO


Did you know... Don’t confuse it with Mexican Independence Day—Cinco de Mayo originated with a 19th-century battle.

 

Cinco de Mayo, or May 5, is recognized around the United States as a time to celebrate Mexico’s cultural heritage (though it’s not Mexican Independence Day, which is celebrated September 16). Like a lot of days earmarked to commemorate a specific idea or event, its origins can be a little murky. Who started it, and why?

 

The History of Cinco de Mayo
The holiday was originally set aside to commemorate Mexico’s victory over France at the Battle of Puebla in 1862. The two had gotten into a dispute after newly elected Mexican president Benito Juárez tried to help ease the country’s financial woes by defaulting on European loans. Unmoved by their plight, France attempted to seize control of their land. The Napoleon III-led country sent 6000 troops to Puebla de Los Angeles, a small town en route to Mexico City, and anticipated an easy victory.

 


 

After an entire day of battle that saw 2000 Mexican soldiers take 500 enemy lives against only 100 casualties, France retreated. That May 5, Mexico had proven itself to be a formidable and durable opponent. (The victory would be short-lived, as the French would eventually conquer Mexico City. In 1866, Mexican and U.S. forces were able to drive them out.)

 

To celebrate, Juárez declared May 5, or Cinco de Mayo, to be a national holiday. Puebla began acknowledging the date, with recognition spreading throughout Mexico and among the Latino population of California, which celebrated victory over the same kind of oppressive regime facing minorities in Civil War-era America. In fact, University of California at Los Angeles professor David Hayes-Bautista cites his research into newspapers of the era as evidence that Cinco de Mayo really took off in the U.S. due to the parallels between the Confederacy and the monarchy Napoleon III had planned to install.

 

Cinco de Mayo in the U.S.
Cinco de Mayo gained greater visibility in the U.S. in the middle part of the 20th century, thanks to the Good Neighbor Policy, a political movement promoted by Franklin Roosevelt beginning in 1933, which encouraged friendly relations between countries.

 

There’s a difference between a day of remembrance and a corporate clothesline, however. Cinco de Mayo was co-opted for the latter beginning in the 1970s, when beer and liquor companies decided to promote consumption of their products while enjoying the party atmosphere of the date—hence the flowing margaritas. And while it may surprise some Americans, Cinco de Mayo isn’t quite as big a deal in Mexico as it can be in the States. While Mexican citizens recognize it, it’s not a federal holiday: Celebrants can still get to post offices and banks.

 

 

Source: What’s the Story Behind Cinco de Mayo?


Fact of the Day - LET THERE BE LIGHT!


Did you know.... Without light, there would be nothing - or at least we wouldn’t be able to tell the difference. A form of electromagnetic radiation that we evolved to perceive through our eyes, light is as mysterious a phenomenon as it is universal.

 

1. Light is Both a Particle and a Wave


Light constantly refuses to behave as we expect it to. Defying classification, it can exhibit characteristics of both particles and waves, a concept known as wave-particle duality. The famous double-slit experiment is one of the best demonstrations of this weird phenomenon, which seems to challenge our understanding of the fundamental nature of our reality.

 

2. Light Can Push Objects

Light sails are devices that harness the power of photons from sunlight or directed lasers to propel spacecraft. These sails utilize the momentum generated by photons striking their reflective surfaces, providing a potential means for interstellar travel without the need for conventional fuel. Although the phenomenon was known for centuries - even Johannes Kepler suggested that it could be exploited to navigate the void of space - it was successfully demonstrated for the first time in 2010 by the IKAROS experimental spacecraft.

 

3. Focusing Light


Lasers, short for “Light Amplification by Stimulated Emission of Radiation,” are concentrated beams of coherent light with numerous practical applications. From cutting-edge technologies like laser surgery and laser printing to everyday devices like barcode scanners and DVD players, lasers have revolutionized various industries since their invention in the 1960s.

 

4. Even Light Can Be Slowed Down

While the speed of light in a vacuum (about 186,282 miles per second) is considered a universal physical constant, it can still vary wildly when passing through different mediums. For instance, light slows down when passing through transparent substances such as water or glass, which is why objects underwater appear distorted.
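To put a rough number on that slowdown: the speed of light in a material is the vacuum speed divided by the material’s refractive index (v = c/n). The short Python sketch below illustrates the idea using commonly cited approximate index values for water and glass (about 1.33 and 1.5); these figures are assumptions for illustration, not numbers from the article.

# Rough illustration: light's speed in a medium is v = c / n,
# where n is the medium's refractive index (approximate assumed values).
C_VACUUM = 186_282  # speed of light in a vacuum, in miles per second

media = {"vacuum": 1.0, "water": 1.33, "glass": 1.5}

for name, n in media.items():
    v = C_VACUUM / n
    print(f"Light in {name}: about {v:,.0f} miles per second")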

 

5. Light Can be Both Absorbed and Emitted


When light interacts with matter, it can be absorbed, typically heating the material up. Some materials, however, re-emit part of that absorbed energy as light, usually at a longer wavelength than the light they took in. This phenomenon, known as fluorescence, is observed in all sorts of natural materials and even in some living organisms.

 

6. The Oldest Light in the Universe

Cosmic microwave background radiation (CMB) is the oldest light in the universe, dating back to just 380,000 years after the Big Bang. While you can't see the CMB with your naked eye, its faint glow permeates the cosmos and provides crucial insights into the early universe's structure and composition. To appreciate it, scientists have to tune into the microwave part of the electromagnetic spectrum.

 

7. Light Can Actually Heal


Light is extensively used in medicine, and for a wide range of applications. Phototherapy, the therapeutic use of light, has been employed for centuries to treat various medical conditions. From UV light for skin disorders to laser therapy for surgical procedures, light-based treatments are way more common than you would think.

 

8. Light Can Be Used As Nano-Tweezers

Optical trapping, or laser tweezers, is a technique that uses super-focused laser beams to trap and manipulate microscopic particles. This groundbreaking method has a myriad of applications in both physics and biological research, allowing scientists to study individual cells or molecules with unprecedented precision.

 

9. Polarized Light Is Weirder Than You Think


Polarized light waves vibrate in a specific orientation, filtering out light waves oscillating in other directions. This property is harnessed in polarized sunglasses to reduce glare and improve visibility. Additionally, polarized light plays a vital role in technologies such as liquid crystal displays (LCDs), and can be even used for orientation in navigation. In fact, some researchers believe that Vikings made use of a polarizing device (a "sunstone") to find the location of the sun even in a completely overcast sky.

 

10. Modern Telecommunications Need Light to Work

Where would we be without optical fiber? Not on the Internet, most likely. These flexible strands of glass or plastic allow us to transmit light signals over long distances with minimal loss of signal quality, and they form the backbone of modern telecommunications networks, allowing for rapid and efficient communication across the globe. Also, optical fibers play a crucial role in medical imaging techniques like endoscopy, providing minimally invasive means of visualizing internal structures within the human body.

 

 

Source: Mind-Blowing Facts About Light That Will Illuminate Your Mind


Fact of the Day - BEES


Did you know... Bees are one of nature’s most effective collaborators, constantly working for the betterment of the hive. But the extent of their ability to work in tandem is perhaps more significant than originally thought. In a strange environment, bees can learn teamwork to conquer obstacles. Even LEGO bricks.

 

In new research published in Proceedings of the Royal Society B (presumably no pun intended), scientists at the University of Oulu in Finland corralled bumblebees to observe how they tackled challenges that required cooperation to obtain a reward—in this case, nectar. Pairs of bees were presented with two obstacles. In one test, a LEGO brick had to be pushed forward to retrieve the nectar underneath. In the second, bees had to push a door in a bee-sized corridor at the same time to get to their treat.

 

The researchers started by having the bees move Styrofoam blocks; once the insects comprehended that moving the lightweight obstacle was key to getting the nectar, a hollowed-out LEGO brick was used.

 

The bees were trained both as teams and as single participants, the latter serving as the control group. Bees trained in teams, when temporarily left without a partner, tended to wait longer before starting to push an obstacle than the control bees did, suggesting they understood that a partner was needed to complete the task successfully.

 

In the case of the door, bees would even turn around and return to their starting point. Upon seeing their teammate arrive, they would reverse direction and begin heading back toward the door.

 


 

 

 

 

The study’s findings challenge conventional notions of insects. “The ability to work together towards a common goal is present even in the miniature brain of bumblebees,” lead researcher and associate professor Dr. Olli Loukola said in a statement. “Our findings show for the first time that bumblebees can learn to solve novel cooperative tasks outside the hive. But the coolest part of this work is that it clearly demonstrates that bumblebee cooperation is socially influenced, and not just driven by individual efforts.”

 

Plenty of animals have demonstrated collaboration, including dolphins, but it’s not as easily documented in creatures with what the Oulu paper dubbed a “miniature brain,” which is a bit of a backhanded compliment.

 

The researchers were quick to caution that further investigation is needed to determine the extent of how social factors influence bees. Still, waiting for bee back-up to power through a LEGO piece is compelling evidence that they know the value of teamwork.

 

 

Source: Bumblebees Can Cooperate To Push LEGO Bricks, According to Science


Fact of the Day - SCOTLAND'S NATIONAL ANIMAL


Did you know.... When it comes to national animals, the chosen creatures usually have a strong tie to their country. Tanzania went with the giraffe, Indonesia chose the Komodo dragon, and the United States picked both the bald eagle and the bison. By that logic, you might guess that Scotland selected the ginger-haired Highland cow or the Shetland pony, but its emblematic animal is actually a more surprising horned beast: the mythical unicorn.

 

The unicorn rears its horned head in films and TV shows from around the world and is a staple children’s toy. As Scotland’s national animal, the magical equine also appears on castles, ships, and mercat crosses. The reason for this seemingly odd selection isn’t because Scotland is a nation of unicorn lovers; rather, it’s thanks to the country’s bygone kings.

 

Rainbows and Unicorns and … Royal Arms?
The first King of Scots who supposedly showed an affinity for unicorns was William I, often known as William the Lion, who ruled from 1165 to 1214. William is said to have had a unicorn added to the Scottish royal coat of arms—the shield of which is a red Lion Rampant on a yellow background—but no evidence of it has survived.

 

The oldest extant version of the royal arms with unicorn supporters can be found carved into a stone above Rothesay Castle’s gateway. It’s believed that this now weather-worn engraving was created no later than the reign of Robert III, which ended in 1406. Unicorns then leapt onto Scotland’s currency around 1484, with James III issuing gold coins called the unicorn and half-unicorn (worth 18 and 9 shillings respectively). One side of the coin was stamped with a wavy sun or star, while the other was emblazoned with a rather fierce-looking unicorn supporting the Lion Rampant shield.

 


 

A clear depiction of the two unicorn supporters can be seen on the coat of arms used by James V, who was king during the first half of the 16th century. James VI made a big change to the royal arms when he became James I of England in 1603 and decided to combine the Scottish and English coats of arms. The Scottish version has the unicorn standing in the dexter position—the dominant position on the right (from the POV of the shield bearer)—while the English version has the English lion on that side. To this day, the UK’s royal arms still feature a lion and a unicorn flanking the shield.

 

A Unicorn of a Different Color
Those with eagle eyes may be wondering why the unicorns on the royal arms are often wrapped in chains. That question gets to the heart of why Scottish kings likely chose the unicorn as their emblem in the first place.

 

Early accounts of unicorns depict them as real animals that were wild and fierce—qualities that kings would almost certainly like to be associated with. The first written description of a unicorn comes from ancient Greek historian Ctesias in his Indica, which was based on stories from traders. He describes it as a horse-like creature with a poison-curing horn—which is white at the base, black in the middle, and red at the tip—on its forehead. He states that “to take them alive is in no way possible,” because they will fight to the death rather than be captured.

 

A myth then developed that the only way to trap a unicorn was by using a virgin as bait. The first known reference to this tactic goes back to the 7th century, with scholar Isidore of Seville claiming that a unicorn can be “lulled to sleep” in a young girl’s lap, “having laid aside all ferocity.” This is likely where the association between unicorns and virtue started; this combination of power and purity may be why Scottish kings chose the unicorn as their symbol. It’s thought that the chains around the unicorns on the royal arms might symbolize Scottish kings having the strength to capture such a dangerous beast. 

 


 

Another possibility is that the unicorn was chosen because in folklore it was the enemy of the lion, the symbol of England, and so may have been a nod to hostilities with Scotland’s neighbor to the south. A nursery rhyme titled “The Lion and the Unicorn” was written about the creatures battling for dominance and was likely inspired by James VI and I inheriting the English throne. The oldest surviving copy of the rhyme dates back to 1776, but it almost certainly circulated orally before then.

 

While it’s known that the unicorn became Scotland’s national animal because of its prominent place in the iconography of Scotland’s kings, what isn’t known is why and when the unicorn first pranced onto Scottish heraldry. Along with the theories above, it’s just as possible that William I (or whichever king started the tradition) simply thought that the horned horse looked cool standing next to the Lion Rampant shield.

 

 

Source: Why Is the Unicorn Scotland’s National Animal?


Fact of the Day - BURGER KING'S INFAMOUS CAMPAIGN


Did you know... For 25 days in the winter of 1986, a nondescript guy named Jon Menick traveled the country. He would be ushered into a Burger King franchise location by his handlers, loitering until someone recognized his olive-green jacket and high-water pants. He’d wait for them to say hello, at which point he’d stick out his hand and tell them they’d just won $5000.

 

Menick repeated this process for all 50 states and the District of Columbia. He was appearing in character as Herb, Burger King’s latest pitchman. Aside from his outmoded fashion sense, Herb was notable for being just about the only man in the country who had never eaten a Whopper. Months of print ads and television commercials had teased Herb’s existence; his “family” and “friends” were interviewed, discussing this blight on their existence. The idea of a man who had never succumbed to the pleasures of a grilled fast-service burger was presented as comparable to a man who had never tasted an orange or experienced a full moon.

 

Burger King was certain Herb would help cut into the market share held by their perennial rivals at McDonald’s. And while he was, for a time, one of the most easily identifiable faces on television thanks to that cash reward, he would also prove to be what Advertising Age would later declare the biggest promotional flop of the decade. This was because recognizing Herb was not quite the same as liking him.

 

Beefing With McDonald’s
 In 1985, McDonald’s saw more than 15 million customers a day, who handed over a total of $9 billion annually for their hamburgers, fries, Happy Meals, and McNuggets. While their advertising budget was substantial, it was only in an effort to retain their incredible 37 percent market share of burger joints. Burger King and Wendy’s, in contrast, had to fight for every scrap left over.

 

With the merits of their food a subjective discussion, both franchises leaned heavily on ad campaigns to try and pull in more stomachs. Wendy’s hit big with their “Where’s the Beef?” campaign of 1984, in which an elderly woman named Clara seemed disappointed by the lack of meat in the competition’s burgers.

 

Burger King wanted a Clara of their own. Ad agency J. Walter Thompson pitched them on the idea of a man who had committed the mortal sin of never tasting a Whopper. A pariah, he’d be spoken of in hushed tones by his associates. After toying with names like Oscar and Mitch, the agency settled on Herb. “Who’s Herb?” was slated to become the company’s campaign focus for late 1985.

 

The ad agency began by putting cryptic ads in newspapers that didn’t name Burger King or offer much of a hint of the direction they were taking. “It’s not too late, Herb,” read one; “What are you waiting for, Herb?” read another. (In one instance, a man with the same first name who owed money to loan sharks saw the ads and thought he was being personally targeted.)

 

 

 

From there, J. Walter Thompson rolled out a series of television spots featuring Herb’s shamed relatives. The spots were a kind of viral ad before the concept of viral marketing existed, and people began to speculate about Herb: his likes, dislikes, what he looked like, and why he had never delighted his intestines with a Whopper. People who marched into a Burger King and announced “I’m not Herb” could get a burger for 99 cents. Overall store sales spiked by 10 percent.

 

Though Burger King never openly discussed it, plans were already underway to cast an actor as Herb for phase two of the campaign. After spending two months and $40 million on the ads, America would finally get to see the real thing.

 

A Commercial Disappointment

A trained stage performer, Jon Menick was plucked out of a pool of 75 actors to portray the character in ad spots that would debut with the January 1986 Super Bowl. Menick traveled to Wisconsin on Burger King’s dime to visit a cheese factory and “find” Herb’s essence. MTV agreed to let him be a guest VJ for a day. He earned a spot as guest timekeeper for WrestleMania 2. After months of going incognito, Herb was everywhere.

 

 

 

When he debuted during Super Bowl XX, however, there was a collective sigh of disappointment. Herb was a nerd who didn’t appear to possess many charming qualities. During a “press conference,” he admitted he tried a burger at Burger King and loved it. It wasn’t exactly a startling plot twist. Two months of pent-up curiosity resulted in a mass exodus of interest on the part of burger aficionados.

 

Burger King leaned on bribery, offering a $5000 reward for anyone who spotted Menick-as-Herb during his nationwide tour. (Local franchisees could kick in more if they wanted: some witnesses scored $10,000.) But the chain suffered further criticism when a series of episodes involving underage winners undermined their generosity. To discourage kids from cutting class to brood in Burger Kings all day waiting for Herb to show, the company insisted on a minimum age of 16 for winners.

 

One adolescent, Jason Hallman of Alabama, was 15 when he spotted Herb in March 1986. Burger King gave his 16-year-old friend the $5000 instead. Hallman’s parents complained, and the Alabama state senate weighed in, labeling Burger King’s actions as approaching “consumer fraud” because the company had failed to make the age minimum a prominent part of the rules. Another juvenile, disqualified from the prize in Reno, was awarded the $5000 by the local operator.

 

That May, Burger King ended any further mention of Herb, turning their advertising focus to “real people” who enjoyed their menu items. Then-company president Jay Darling admitted Herb “did not work nearly as well” as he had expected.

 

The following year, patrons were no longer on the hunt for Herb, but falling over themselves to locate a far more popular attraction. Burger King had just shipped eight million puppets based on the popular sitcom ALF to stores.

 

 

Source: Remembering Burger King’s Infamous “Where’s Herb?” Commercial


Fact of the Day - ORIGIN OF IN A PICKLE


Did you know... There’s something adorable about the expression in a pickle. Whatever trouble or predicament you’re in, once it’s described in that way, it sounds kind of cute. You can even apply it to all kinds of scenarios (i.e. “The Mafia has discovered I’ve been stealing from them, so now I’m in a bit of a pickle!”).

 

At a little over 4 feet long, even the largest pickle jar in the world isn’t big enough for most to actually enter. So where exactly does this idiom come from? It shows up in William Shakespeare’s The Tempest (written around 1610), spoken by Alonso: “And Trinculo is reeling ripe: Where should they find this grand liquor that hath gilded ’em? How camest thou in this pickle?”

 

Generally speaking, if somebody’s camest in your pickle, it’s time to get new roommates. But in this context, Alonso is referring to Trinculo being drunk—according to Grammarist, pickled in early 17th century England was a colloquialism for “being heavily intoxicated.” Alonso’s also asking how Trinculo managed to get so drunk, given that they’re both on an island with no booze.

 

While the idiom appears in The Tempest, it wasn’t the first instance where the word pickle was used in print. It appears in John Heywood’s 1562 collection Proverbs and Epigrams (“Freilties pickell”), although the meaning is somewhat ambiguous and doesn’t seem to suggest drunkenness. The term pickle itself is thought to come from the Dutch pekel, referring to brine rather than its contents, hence pickling. The leap from “preserved due to being submerged in liquid” to “drunk” isn’t a huge one, especially given alcohol’s preservative qualities.

 

In the Brine vs. In a Bind
But how did being in one come to mean a tricky situation? Fifty years after Shakespeare, diarist Samuel Pepys appeared to be using it that way, writing in 1660 of being “at home with the workmen all the afternoon, our house being in the most sad pickle.” So at some point in that intervening half-century, it acquired that second meaning.

 

Some point to the Dutch phrase in de pekel zijn (meaning “to sit in the pickle brine”) as the ultimate root of the idiom. However, certain Dutch etymological dictionaries maintain that the expression may have been more literal (directly pertaining to brine) or along the lines of Shakespeare’s usage.

 

According to a theory by food writer Sam Dean, the expression makes a lot more sense when you remember that the word pickle means something completely different in the UK than it does in America, where it generally refers to the dill variety—a cucumber rendered more delicious by the cunning application of brine, herbs, and the passage of many months. However, in Britain, that’s known as a gherkin, and the term pickle usually indicates a condiment made from a mishmash of vegetables, spices, and vinegar, a deliciously tangy-sweet brown slop that turns a cheese sandwich from a mundane experience into a culinary adventure. 

 

This pickle, while glorious, is also all over the place. It’s sticky, slimy, and isn’t entirely texturally dissimilar to the penultimate vomit that comes from a bout of food poisoning. What a jumbled mess. Therefore, what a pickle indeed.

 

 

Source: Why Do We Say ‘In a Pickle’?


Fact of the Day - LINGUISTICS TERMS


Did you know... Grade school English teachers do their best to send you off into the world with at least a cursory understanding of how language works. Maybe you can tell your dependent clauses from your independent ones and your transitive verbs from your intransitive ones. Maybe you’re even pretty savvy at distinguishing between basic rhetorical devices—hyperbole versus oxymoron, simile versus metaphor, and that sort of thing. But unless you majored in linguistics in college or routinely spend your free time reading grammar blogs, there’s a whole world of words to describe language mechanics that you’re probably not aware of. Here are 15 of our favorites, from formal terms like amphiboly to colloquial ones like snowclone.

 

Amphiboly

 

Amphiboly, or amphibology, occurs when a sentence or phrase’s grammatical structure lends itself to multiple interpretations. There are countless ways this kind of ambiguity can happen. Maybe the placement of a prepositional phrase makes it unclear what that phrase is modifying, as Groucho Marx exploited in this classic joke: “One morning, I shot an elephant in my pajamas. How he got in my pajamas I don’t know.” Or maybe it’s not obvious which part of speech a certain word is functioning as, which happens fairly often (and sometimes to hilarious effect) in headlines. In “Eye Drops Off Shelves,” for example, drops is a noun—but the headline takes on a different meaning if you mistake it for a verb. (Ambiguous headlines are their own subset of amphiboly, colloquially called “crash blossoms.”)

 

Back-formation
We usually think of word formation as taking a root word and adding affixes (like prefixes and suffixes) so the resulting word is longer than what you had before. From friend, you can make friendly, friendship, and befriend. But it doesn’t always work that way: Back-formation is the process of creating a new word by removing affixes. English is full of surprising back-formations. Burglar, for example, didn’t arise from burgle. Burglar came first, and people then created burgle as a verb to describe what a burglar does. And legislate isn’t the stem for legislation, legislator, or legislative; all three actually predate it.

 

Cutthroat compound
Plenty of compound words include the subject (also known as the head) within the compound itself. Watermelons are melons, bluebirds are birds, and bedrooms are rooms. But there are also exocentric compounds, in which the head isn’t part of the actual term. A specific class of these compounds involves an action (verb) being performed on an object (noun). A cutthroat, for example, isn’t an actual cut throat; it’s a person who cuts a throat, literally or figuratively. Scarecrows scare crows, daredevils dare the devil, and so on. Though they’re formally called “agentive and instrumental exocentric verb-noun (V-N) compounds,” historical linguist Brianne Hughes gave them a much catchier nickname: cutthroat compounds. And while they’re not super common in English, you might start noticing them in unexpected places. Technically, William Shakespeare’s surname counts as a cutthroat compound: “one who shakes a spear.”

 

Dysphemism


You’ve probably heard of euphemisms: expressions that use “agreeable or inoffensive” language in place of terms “that may offend or suggest something unpleasant,” per Merriam-Webster. Pass away is a euphemism for “die,” and do it is a euphemism for “have sex.” Dysphemisms are the exact opposite of that: expressions that intentionally use harsh language to describe something more or less innocuous. Rug rat is a dysphemism for a “young child who’s still crawling on the carpet,” for example, and ambulance chaser is a dysphemism for “personal injury attorney.”

 

Eggcorn
Eggcorns are misheard expressions that actually make sense—e.g. deep-seeded instead of the technically correct version, deep-seated, and free reign rather than free rein. The term, coined by linguist Geoff Pullum, is a nod to acorn’s history of being misheard as eggcorn.

 

Epenthesis and Syncope
You might find it irksome that so many people pronounce realtor as “REEL-uh-ter” instead of “REEL-ter,” but they’re not disregarding letter order for no reason. It’s not uncommon for us to add an extra sound (often, but not always, a vowel sound) to a word to make it easier to pronounce—a phenomenon known as epenthesis. Athlete is another example: “ATH-uh-leet” rolls off the tongue better than “ATH-leet.” Some linguists even consider the “n” sound in the article an to be epenthetic: It neutralizes the difficulty of uttering two vowel sounds back to back, as we’d otherwise have to when talking about, say, a archer shooting a arrow at a apple. We drop sounds to make words easier to pronounce, too. This type of contraction within a single word is called a “syncope”—you can find examples in vegetable, whose second “e” sound is often omitted, and family, widely pronounced “FAM-lee.” (Syncope typically refers to dropped vowels, but some linguists do also use it for dropped consonants. The dropped-sound phenomenon overall is known as deletion.)

 

Kangaroo word


Recreational linguists have a name for words that contain their own synonyms: kangaroo words (because kangaroos carry their joeys in pouches). Rambunctious harbors raucous, respite has rest, and there’s ruin in destruction. In order to count as a true kangaroo word, the letters of the joey word must be ordered correctly in the parent word—i.e. you can’t do any unscrambling. You do have to remove letters from between the letters of the joey word, though; if there aren’t any, it doesn’t count. (E.g. belated and late and action and act are disqualified.)

 

Mondegreen
A cousin of the eggcorn is the mondegreen, “a word or phrase that results from a mishearing especially of something recited or sung,” per Merriam-Webster. Mondegreen is a mondegreen: Sylvia Wright coined the term in a 1954 Harper’s Magazine article in reference to Lady Mondegreen, a mishearing of “laid him on the green” from the Scottish ballad “The Bonny Earl of Murray.” One of the most famous modern mondegreens is ’Scuse me while I kiss this guy from Jimi Hendrix’s “Purple Haze.” (The actual lyric is “’Scuse me while I kiss the sky.”) And Taylor Swift’s “Blank Space” gave us All the lonely Starbucks lovers, which is really “Got a long list of ex-lovers.”

 

Nonce word 
A nonce word is a word that was coined for one occasion only. They’re not uncommon in linguistics studies on language acquisition, as researchers need to use words that participants won’t already be familiar with. (Psycholinguist Jean Berko Gleason memorably made up wug, gutch, and many other nonce words for this purpose.) Sometimes, people create nonce words to fill the need for a term that simply doesn’t exist, like puzz to describe the puzzle fuzz you find in the bottom of a puzzle box. But other times, writers are just making up words for fun—looking at you, Lewis Carroll. Some nonce words do end up filtering into the general lexicon, at which point they lose their nonce-word status. (But it’s hard to identify exactly how common a nonce word needs to become in order for it to stop being a nonce word.) Carroll is an interesting case because some of his nonce words did catch on, like chortle, while others are still nonces (e.g. slithy, a portmanteau of lithe and slimy).

 

RAS syndrome


Since PIN stands for personal identification number, saying “PIN number” is redundant. The same goes for the phrase ATM machine, as ATM stands for automated teller machine. In 2001, New Scientist gave this variety of redundancy its own tongue-in-cheek title: RAS syndrome, for redundant acronym syndrome syndrome. Even DC Comics is an example of RAS syndrome—DC stands for Detective Comics. (Strictly speaking, though, DC and ATM are initialisms, not acronyms. A more apt title would be redundant abbreviation syndrome syndrome.)

 

Rebracketing
Rebracketing occurs when we break up a word into different parts than were used when putting it together, a concept much easier to understand through real-world examples. Take hamburger: The term comprises Hamburg, the city in Germany, and the suffix -er. But as hamburgers gained popularity, people inadvertently rebracketed it as ham and burger—and burger became its own customizable term (cheeseburger, bacon burger, veggie burger, etc.). Alcoholic is another excellent example: It’s a fusion of alcohol and -ic, but we rebracketed it as alco- and -holic, appropriating -holic as a suffix to refer to other (mainly unofficial) addictions, e.g. chocoholic and workaholic. Blog is technically the result of rebracketing, too—it began as weblog (web and log), but we shifted the b from web onto log in shortening it.

 

Snowclone
Snowclones, as Geoff Pullum described them in 2004, are “some-assembly-required adaptable cliché frames for lazy journalists.” In other words, they’re clichés that you can customize for whatever you’re writing (or saying) by swapping out a couple of operative words—like Hamlet’s “To be or not to be,” wherein you can fill in be and be with whatever verb you want. X is the new Y and In space, no one can hear you X (from Alien’s tagline “In space, no one can hear you scream”) are a couple of other examples. The term snowclone, coined by economics professor Glen Whitman, is a nod to another snowclone: X have [a number of] words for Y, after the complicated but common claim that the Inuit people have 50 words for snow.

 

Spoonerism


A spoonerism is a phrase in which phonemes of two words have been switched, e.g. half-warmed fish instead of half-formed wish and blushing crow instead of crushing blow. They’re named for British clergyman William Archibald Spooner, who gained a reputation for absent-mindedness and lexical errors while serving as the warden of New College, Oxford, in the early 20th century [PDF]. It’s unclear how many spoonerisms Spooner actually uttered, but the number is probably far lower than what’s been attributed to him.

 

Tmesis
Tmesis involves shoehorning a whole nother word between two parts of a word or phrase—like abso-freakin’-lutely. Knowing where exactly to insert the word is one of those grammar rules that most native English speakers follow without even realizing it: As James Harbeck explained for The Week in 2015, it goes “right before a stressed syllable, usually the syllable with the strongest stress, and most often the last stressed syllable.”

 

Source: Fascinating Linguistics Terms You Didn’t Learn in School


Fact of the Day - LIGHTNING ROD


Did you know... Ben Franklin’s famous experiment with the kite and key gave him a better understanding of the nature of electricity. But did that event lead to the lightning rod? 

 

On September 13, 2021, a severe thunderstorm pelted New York City with heavy rain, strong winds, and a wild lightning show. During the tempest, One World Trade Center—the Western Hemisphere’s tallest building at 1792 feet, including its antennae—was struck by several impressive bolts.

 

Fortunately, the lightning strikes produced amazing photos and videos rather than catastrophic, fiery destruction—thanks to the skyscraper’s sophisticated lightning protection system based on the designs of Benjamin Franklin. But did Franklin really originate the concept of a lightning rod with his famous experiment?

 

Kite and Key
In the late 1740s, Franklin—Founding Father, inventor, and storm chaser—began investigating whether lightning was a form of electricity, as other scientists had suggested. To test the idea, he procured a kite and attached a metal key to it with an insulating silk cord. In 1752, he flew the kite during a thunderstorm and witnessed the key attract an electrical charge, proving the theory.

 

Franklin started advocating for metal rods to protect buildings and the people inside them from the destructive forces of lightning. He hypothesized that an iron spire on top of a building or ship could protect it from fire by attracting the lightning’s energy and dispersing it safely. In a letter to a friend, he theorized that “the electrical fire would, I think, be drawn out of a cloud silently before it could come near enough to strike.”

 

For his efforts, Franklin is often thought of as the father of the lightning rod. But he might have been beaten to the idea.

 


 

The possible pre-Franklin origins of the lightning rod are vigorously debated. Around 1730, a Russian industrialist named Akinfiy Demidov built the 189-foot-tall Leaning Tower of Nevyansk in Sverdlovsk Oblast, north of Yekaterinburg. It’s topped with a metal spire that connects to metal within the building’s structure, grounding what might be considered the first known lightning rod.

 

It’s not clear whether Demidov intended the spire to act as such, but it could be an instance of nearly simultaneous invention.

 

 

Source: Did Ben Franklin Really Invent the Lightning Rod?


Fact of the Day - JAWS: MATT HOOPER INSPIRATION


Did you know... Dr. Donald “Reef” Nelson was part of the inspiration for Matt Hooper, Richard Dreyfuss’s character in the iconic 1975 summer blockbuster.

 

In 1975, Jaws changed several things forever: It created the modern-day summer blockbuster, made millions of people terrified of the ocean, and did a pretty terrible PR job for the great white shark, all things considered.

 

Someone less than thrilled with how Jaws led people to feel about sharks was Dr. Donald “Reef” Nelson, science advisor on both the original film and its 1978 sequel, and part of the inspiration for Richard Dreyfuss’s character, oceanographer and double-denim enthusiast Matt Hooper.

 

When Nelson had his first encounter with a shark back in 1959, scientists knew very little about their behavior. They’d looked at dead ones and spied live ones from afar, but firsthand encounters tended to be brief and often slightly more frenzied than the conditions science tends to favor. 

 

Nelson had finished a degree in biology at Rutgers University in 1958 and subsequently moved to Florida to—among other things—join the awesomely-named spear-fishing team, the Glug Glugs. He had an epiphany after spearing a grunt, a small but surprisingly loud fish, which reacted to being speared by making a lot of noise. A tiger shark immediately appeared, which Nelson then also proceeded to spear.

 

But he took more home with him than just the fish and shark: He also had an idea. Were sharks attracted to sound? Nobody had investigated that before, so along with his research partner Samuel “Sonny” Gruber, he looked into it. The pair recorded fake sounds of struggling fish like his screaming grunt and played them underwater from a speaker that had been developed by the Navy. Those low-frequency vibrations drew in an astonishing 22 sharks, and the pair published their findings in Science in 1963 while still grad students.

 

Diving Deeper
In the years leading up to Nelson’s research, there had been several high-profile incidents in which the U.S. Navy had suffered huge casualties thanks to sharks. The story of the USS Indianapolis that Quint (Robert Shaw) famously tells in the original Jaws was based on a real event, and several similar incidents had occurred in the Pacific, as well as the South Atlantic Ocean and Caribbean Sea.

 

What Nelson and Gruber had uncovered during their research would help to save lives. They found that the same kind of sounds made by an injured, thrashing fish can be made by swimmers, something that can work out pretty sub-optimally for those swimmers. By 1965, Nelson was in California, working as a biology professor at Cal State Long Beach and trying to develop ways to repel sharks (including a cattle prod-style device intended to stun one if it drew too close), although none of them worked out. Perhaps coincidentally, Adam West’s Batman had a can of shark repellent in 1966’s Batman: The Movie, which was shot in California.

 


 

More interesting than repelling sharks, though, was getting as close to them as possible. For a while, Nelson did this in the most absurdly badass way possible, by free-diving up to 60 feet and chasing reef sharks until they got angry, something known as the Kamikaze technique. By doing this, he observed reef sharks’ “agonistic display,” meaning the behavior they performed when under threat. 

 

Around this time in the early 1970s, a young Steven Spielberg made a visit to the Shark Lab at Cal State, which Nelson had founded in 1966. Nelson’s lifelong habit of drawing on napkins and scraps of paper meant his office was a messy, shark-filled dream, especially for a filmmaker. This wasn’t a side of scientists that viewers were used to seeing, but it worked—Spielberg made photocopies of all the maps, scribbled napkin notes, and photos he saw and his team ultimately recreated everything for Hooper’s office in Jaws.

 

“An Ultimate Marine Biologist”
Although Nelson was involved with the making of Jaws and Jaws 2, he didn’t let Hollywood go to his head. Even after Jaws, he was still using the Kamikaze technique, and he only retired it in 1976 after a close call with a very combative shark.

 

He swiftly invented a one-person, fiberglass submarine known as the SOS, or Shark Observation Submersible, and got right back to studying sharks up-close in their underwater environment. He even took video footage, and along with his team, eventually developed methods for tracking sharks using ultrasonic transmitters. This kind of acoustic transmitter technology served as a precursor to the technologically advanced tracking methods used today.

 

In the process, he learned so, so much about sharks. His team was the second in the world to put a transmitter on a shark in the wild, which opened up a whole new world of discovery. Everyone had assumed they were solitary, mindless killing machines, but Nelson and other scientists at the Shark Lab learned that they were in fact much more social than suspected, far less aggressive (except when threatened), and some species had complex, mutually beneficial relationships with other ocean-dwellers. There was vastly more to them than just being the sharp-toothed murderers-in-waiting depicted in those films.

 

Unfortunately, post-Jaws, there were a great many people who simply didn’t want to know. 

 


 

Both Spielberg and Peter Benchley, co-screenwriter on Jaws and author of the original novel, expressed regret over rendering the public so terrified of sharks, as well as disappointment in the shark killings that followed. In 1978, Nelson himself had endorsed a publicity poster accompanying Jaws 2 that scared the crap out of a lot of people, as it was claimed that the “seas off our shores are aprowl with many killers” and that sharks were capable of attacking in freshwater and causing boats to sink, so it was essential for audience-goers to “know their enemy.”

 

But by the time Nelson died in 1997, humanity had a far more detailed knowledge of the world of sharks—knowledge used to keep people safe from them, but also to protect them from people. With all the video footage of sharks that he had captured with his team, Nelson was able to make over 20 documentaries between 1968 and 1994, most of which were shown in school classrooms or aired on TV. A glowing memorial published in 2001 by the journal Environmental Biology of Fishes [PDF] praised Nelson as “an ultimate marine biologist.”

 

Over the course of his career, which had spanned more than three decades, Nelson influenced multiple generations of scientists to follow in his (wet) footsteps and explore the worlds of these fascinating creatures. Nelson also produced nearly 50 papers about sharks, which means if you’re planning on reading his output ... you’re gonna need a bigger shelf.

 

 

Source: The Real-Life Marine Biologist Who Helped Inspire ‘Jaws’


Fact of the Day - FRÈRE JACQUES


Did you know.... This simple nursery rhyme comes with a number of unanswered questions about everything from its authorship to who inspired it.

 

The nursery rhyme “Frère Jacques,” also known as “Brother Jacques” or “Brother John” in English, tells the tale of a monk who is being summoned to ring the bells, which he seems not to have done yet because he’s still asleep. The French lyrics are:

 

“Frère Jacques, Frère Jacques,
Dormez-vous? Dormez-vous?
Sonnez les matines! Sonnez les matines!
Din, din, don. Din, din, don.”

 

While the song seems to tell a simple story, “Frère Jacques” has actually been the source of more debate over the years than you might expect, from disagreements over the most accurate translation of the lyrics to speculation over whether the central character was inspired by a real-life figure to the possibility that the true author of the music was one of the most important French composers of the 18th century. Here’s a look at the story—and some of the unanswered questions—behind the nursery rhyme we know today as “Frère Jacques.”

 

Who Wrote “Frère Jacques”?
The origin of the song “Frère Jacques” isn’t entirely clear. According to American Songwriter, the melody seems to have first appeared under the title “Frère Blaise” in a manuscript called “Recueil de Timbres de Vaudevilles” that dates back to around 1780.

 


Jean-Philippe Rameau.

 

Research into who composed that music has identified one of the most notable 18th century composers as a candidate. In 2014, the classical music scholar Sylvie Bouisseau presented a research paper arguing that the music was written by the French composer Jean-Philippe Rameau. Among other evidence, Bouisseau notes that Jacques Joseph Marie Decroix—a collector of scores who compiled a number of Rameau’s works that were given to the Bibliothèque Nationale de France—included it in a manuscript of canons and attributed the music to the composer.

 

In addition, Bouisseau pointed out that the first time the music appeared in print (as opposed to the handwritten form of the 1780s manuscript) was when it was published in 1811 by the Société du Caveau, a group of composers that counted Rameau as a member. This added further credibility to the idea that he may have been the composer of “Frère Jacques”; his membership would explain why the group was in possession of the music. That said, Rameau died in 1764, and the music wasn’t published until almost half a century after his death—raising the question of why the society would have kept a piece by such a renowned composer under its hat for so long.

 

The Meaning of “Frère Jacques”
In addition to the questions surrounding its authorship, the meaning of “Frère Jacques” has been muddled over the years thanks to translations of the lyrics from French to English.

 

Some early English versions of the rhyme, for example, turned Frère Jacques into Brother John. But John isn’t the direct equivalent of Jacques in English; it’s a closer match for the name Jean, with Jacques having more similarity to Jack.

 

The third line, “Sonnez les matines,” was also once translated as “Morning bells are ringing”—but that doesn’t accurately communicate the situation to which the poem is alluding. This is because matines has sometimes been mistakenly translated as a reference to “morning,” due to the word’s similarity to matin, which translates to “morning” in French. But matines actually refers to a canonical hour in Christianity which takes place in the early hours of the morning. The summons to “ring the matins” is therefore a call to ring the bells to usher in this period for prayers. A more recent English language version translates the line as “ring the matins,” which is a more accurate version of the original.

 

Who Was the Real Frère Jacques?
There has also been speculation about whether the title character of the nursery rhyme was inspired by a real person. Some have theorized that the real Frère Jacques was Jacques Beaulieu, a pioneering lithotomist who sometimes wore a monk’s habit and referred to himself as Frère Jacques, even though he wasn’t actually a monk. He was one of the first to take a lateral approach to perineal lithotomy (a surgical operation for the removal of calcium deposits like kidney and bladder stones via the perineum) and performed around 5000 lithotomies over the course of his career.

 

But a 1999 research paper into the history of the real Beaulieu didn’t find a direct connection between him and the nursery rhyme character, instead concluding that the rhyme was more likely to refer to a number of monks who were prone to oversleeping.

 

The Influence of “Frère Jacques”
Whatever the truth about the origins of the rhyme, “Frère Jacques” has been influential in a number of ways.

 

The melody has been important to the legacy of classical music—and that would be true whether Rameau composed the music or not.  “Frère Jacques” became an inspiration to the composer Gustav Mahler, who knew it by its German name, “Bruder Martin”; he used the melody in the third movement of his Symphony No. 1.

 

It also has its place in music history outside of the classical genre: George Harrison and John Lennon slipped “Frère Jacques” into the Beatles’ song “Paperback Writer” (listen carefully during the third verse).

 

 

 

It’s in its nursery rhyme form that “Frère Jacques” is most frequently heard today, and it’s sometimes cited as a good choice to use for educational purposes. Leonard Bernstein, for example, asked the audience at one of his Young People’s Concerts to sing the rhyme as a way to illustrate how sequence can be used in composition.

 

 

 

A 2019 survey even suggested using “a musical mnemonic” based on the song, which, according to researchers, “can help learning and remembering of the proper [handwashing] technique.”

 

Source: “Frère Jacques”: The History of the Classic Nursery Rhyme


Fact of the Day - MEDIEVAL PAINTINGS


Did you know... Medieval artists are not known for their life-like accuracy. They doodled killer bunnies in the margins of their manuscripts and painted lions as goofy, grimacing felines. But if you’ve ever found yourself chuckling at the angry man-heads on human babies in medieval art, the joke is actually on you: These painters wanted the babies to look like Boomers.

 

Vox spoke to Matthew Averett, an art history professor at Creighton University, to find out why this trend toward intentionally old-looking babies abounded during the Middle Ages—and what caused the shift during the Renaissance toward the cherubic faces we recognize as babies.

 


 

The reasoning, like all things artistic in the Middle Ages, has to do with Jesus. Back then, the church commissioned most of the portraits of babies and children. And they didn’t want just any old baby—they wanted the baby Jesus (or other biblical kids). Medieval artists subscribed to the concept of homunculus, which literally means “little man,” or the belief that Jesus was born “perfectly formed and unchanged,” Averett said. Therefore, paintings of Jesus showed him with adult features and physiques, even when the purported child is sitting in his mother’s lap, playing with her robes, or breastfeeding.

 

This homuncular, adult-looking baby Jesus became the standard for all children, an exemplar that stuck in the Middle Ages because artists at the time had, according to Averett, a “lack of interest in naturalism, and they veered more toward expressionistic conventions.”

 

 

The ugly baby trend faded during the Renaissance, when artists rediscovered realism and applied scientific precision to their figurative works. Non-religious art also flourished at the time as the rising middle and upper classes could afford portraits of their family members. The wealthy patrons wanted representations of their darling children that reflected well on the parents, with little boys and girls who were cute—not Benjamin Button-esque. Depictions of babies shifted away from the hyper-stylized homunculus and never looked back.

 

Source: Why Do Babies in Medieval Paintings Look So Scary?


Fact of the Day - ACRONYM vs. INITIALISM 


Did you know.... It has to do with how you say the abbreviation in question.

Acronym is often used to describe any collection of first letters of words in a phrase: NASA for National Aeronautics and Space Administration, CEO for chief executive officer, and so on. But not all acronyms are really acronyms: Some are just initialisms. Here’s how to tell the two apart—and how abbreviations fit into the picture.

 

Acronym vs. Initialism
Before we get pedantic, though, let’s be clear: So many people consider acronym and initialism to be direct synonyms that dictionaries do list them as such, and it’s not “wrong” to use them interchangeably. But if you want to differentiate between the categories, it’s not hard to learn the distinction.

 

Basically, an acronym is any word formed by taking the first letters (or first parts of syllables) of each word in a phrase, title, or group of words—and then pronouncing that new word as a word. FOMO, for fear of missing out, is “FOH-moh.” BAFTA, for British Academy of Film and Television Arts, is “BAFF-tuh.”

 

For all the debate surrounding whether GIF (graphics interchange format) should be said with a soft or hard “g” sound, everybody can at least agree that it should be pronounced as a single-syllable word. Nobody says “G-I-F.” If we did, GIF wouldn’t be an acronym—it would be an initialism. 

 

Initialisms are terms made by taking each first letter in a phrase and then pronouncing each of those letters. Often, that’s because it’s not phonetically possible to pronounce a group of letters as a word—like TBD for to be determined or CNN for Cable News Network. But other times, we all seemingly decide it just sounds better or more appropriate to say the letters: CEO is “C-E-O,” not “SEE-oh,” and USA is “U-S-A,” not “OOH-sah.”

 

To make matters more complicated, certain terms are technically used as acronyms and initialisms. Take LOL. If you pronounce each letter individually, it’s an initialism. But people sometimes say it as a word—“LAWL” or “LOHL”—in which case it’s an acronym. The same goes for ASAP: “A-S-A-P” is an initialism, while “AY-sap” is an acronym.

 

And some acronyms have become so common that they’re widely written in lowercase, and you might not even realize they were acronyms to begin with. Laser, for example, stands for light amplification by stimulated emission of radiation, and snafu started out as military slang for situation normal: all fucked up.

 

Acronym vs. Abbreviation
Abbreviation is a catch-all term for any shortening of a word or phrase. This means that all acronyms and initialisms are abbreviations, but not all abbreviations are acronyms or initialisms.

 

Abbreviations include things like contractions (e.g. could’ve for could have, won’t for will not) and nicknames (Beth for Elizabeth, Alex for Alexander, etc.). The category also includes words that have literally just been shortened. You might write mgmt instead of management, or assn for association. Cali is an abbreviation for California, and math (or maths, if you’re British) for mathematics. Abbreviations are all shortcuts, but there aren’t any overarching rules to create or pronounce them like there are with acronyms and initialisms.

 

 

Source: Acronym vs. Initialism: What’s the Difference?

  • Like 1
Link to comment
Share on other sites

Fact of the Day - IVY LEAGUE?


Did you know... The Ivy League seems aptly titled, evoking visions of stately old academic buildings covered in ivy. And that is partially how the group of elite eastern schools got its name—but it’s not the whole story.

 

Planting the Ivy


It’s a long-held American tradition for colleges to observe Class Day: a day near commencement on which graduating seniors “[celebrate] the completion of their course, typically with formal festivities, prize-giving, etc.,” per the Oxford English Dictionary. Though the first written reference to the phrase class day is only from 1833, the custom itself was around before its name. 

 

Harvard’s class day grew out of an attempt in 1754 by administrators “to improve the elocution of students by requiring public recitations of dialogues, translated from Latin,” according to an 1893 article in The Harvard Crimson. Apparently, that particular function never really caught on, but students embraced the opportunity to gather and give speeches, and the event snowballed into a fun-filled day of activities.

 

One such activity, called “planting the ivy,” involved seniors planting ivy at the base of a building or wall on campus and often installing a stone tablet engraved with their graduation year. It’s unclear exactly when or where this practice began: The 1893 Crimson article says that Harvard’s seniors started doing it “around” 1850; at Bowdoin College, it was actually the junior class that kickstarted the ritual in 1865. Planting the ivy was well-known enough by the 1870s that when Maine’s Colby College got in on the game in 1877, The Portland Daily Press described it as “The old ceremonial of Ivy Day or planting the ivy.”

 

[Image: Ivy Day at Bates College]

 

What we do know—as already evidenced by the mentions of Bowdoin and Colby—is that planting the ivy was never specific to the group of schools now called the “Ivy League.” Another defining factor helped solidify the list: sports.

 

Let the Games Begin
The origins of the Ivy League date back to October 14, 1933, when sportswriter Stanley Woodward referred to “ivy colleges” in a New York Herald Tribune article on college football match-ups.

 

“A proportion of our Eastern ivy colleges are meeting little fellows another Saturday before plunging into the strife and the turmoil. In this classification are Columbia, which will meet a weak Virginia team; Harvard, which will engage New Hampshire; Dartmouth, which is playing Bates; Brown, which is meeting Springfield; Princeton, which will strive against Williams; Army, which is paired with Delaware, and Penn, which is opening its season belatedly against Franklin and Marshall,” he wrote.

 


 

Woodward’s list of ivy colleges differs slightly from today’s Ivy League, which includes Yale and Cornell, but not Army (a.k.a. the United States Military Academy or West Point). Then again, Woodward’s rundown was neither official nor comprehensive; he only named the “proportion” of ivy colleges—i.e. old, esteemed universities—slated to face weak opponents that coming Saturday. Cornell and Yale were both discussed elsewhere in the article: Cornell got a shout-out in the very first sentence for its highly anticipated game against the University of Michigan, and Woodward later noted that Yale “may find the going cobbly against Washington and Lee.”

 

Woodward mentioned the “ivy colleges” again in another article just two days later. “The fates which govern play among the ivy colleges and the academic boiler-factories alike seem to be going around the circuit these bright autumn days cracking heads whenever they are raised above the crowd,” he wrote.

 

The earliest written reference to the phrase Ivy League didn’t appear until February 7, 1935, when Associated Press writer Alan Gould reported that “The so-called ‘Ivy League’ which is in the process of formation among a group of the older eastern universities now seems to have welcomed Brown into the fold and automatically assumed the proportions of a ‘big eight.’” 

 


 

Gould’s wording makes it clear he wasn’t coining a term, but merely reiterating one that was already in colloquial use, at least among those involved in forming this new conference. And while Gould did point out that the Ivy League schools had old age in common (they were all founded in the 17th or 18th centuries except Cornell, established in 1865), inclusion in the conference seems to have initially been based on existing sports schedules. In short, the athletic teams of the eight schools—Harvard, Yale, Penn, Princeton, Columbia, Brown, Dartmouth, and Cornell—were already playing each other, so why not make the league official?

 

Ironically, they wouldn’t actually make the league official for another two decades—and this part of the story dovetails with our modern conception of the Ivy League as academically elite. In 1945, the presidents of all eight schools drafted an agreement aimed at preventing their football players from letting the sport eclipse their focus on school. Teams couldn’t practice during the spring, for example, and athletic scholarships were forbidden. (That ban on athletic scholarships is currently the subject of a class-action lawsuit.) In other words, you came to an Ivy League school to be a student, and you could play football for fun in your free time.

 

In 1954, the presidents expanded the agreement to apply to all student athletes. Its ratification is considered the point at which the Ivy League became an official athletic conference, though its first competition year wasn’t until 1956.

 

The Full Lists: Ivy League vs. “Ivy Plus” Schools
Since then, the Ivy League’s original list of eight schools hasn’t changed at all. But that hasn’t stopped people from adding other academically rigorous schools to an informal “Ivy Plus” list, which is less restrictive in terms of foundation year and location. You can see breakdowns of both lists below.

 

Ivy League Schools

  • Harvard University: Cambridge, Massachusetts; founded 1636
  • Yale University: New Haven, Connecticut; founded 1701
  • University of Pennsylvania: Philadelphia, Pennsylvania; founded 1740
  • Princeton University: Princeton, New Jersey; founded 1746
  • Columbia University: New York, New York; founded 1754
  • Brown University: Providence, Rhode Island; founded 1764
  • Dartmouth College: Hanover, New Hampshire; founded 1769
  • Cornell University: Ithaca, New York; founded 1865

 

Ivy Plus Schools

  • Duke University: Durham, North Carolina; founded 1838
  • Northwestern University: Evanston, Illinois; founded 1851
  • Massachusetts Institute of Technology (MIT): Cambridge, Massachusetts; founded 1861
  • Johns Hopkins University: Baltimore, Maryland; founded 1876
  • Stanford University: Stanford, California; founded 1885
  • University of Chicago: Chicago, Illinois; founded 1890
  • California Institute of Technology (Caltech): Pasadena, California; founded 1891

 

Source: Why Is It Called the “Ivy League”?

Link to comment
Share on other sites

Fact of the Day - GIZA PYRAMIDS AND LOST PORTIONS OF THE NILE RIVER


Did you know... A recent study published in the academic journal Communications Earth & Environment claims that the pyramids of Giza were constructed alongside an almost 40-mile-long artery of the Nile River that no longer exists today, having since been buried underneath desert and farmland. 

 

The towering monuments at the Giza pyramid complex were built over a period of nearly 1000 years; some are more than 4500 years old. The complex is located between the ancient cities of Giza and Lisht in an area that now rests on the edge of the country’s Western Desert. The region’s inhospitable environment has long puzzled archeologists, who for centuries wondered how Egyptian workers managed to move the 2.5-ton stones that make up the pyramids. 

 

In 2014, researchers from the University of Amsterdam suggested that the workers relied on a simple physical trick. Assuming they slid the stones over the desert surface, as depicted in a wall painting in the tomb of Djehutihotep (built around 1900 BCE), they speculated that the workers—172 per stone, according to the painting—may have wetted the sand in order to reduce friction, making the building blocks easier to pull. 
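To get a rough sense of why damp sand would matter, here’s a minimal back-of-the-envelope sketch in Python. The simple sliding model (F = μmg), the friction coefficients, and the 2500 kg block mass are illustrative assumptions for this post, not figures taken from the study or the tomb painting.

```python
# Back-of-the-envelope sketch: pulling a pyramid block over dry vs. dampened sand.
# All numbers below are illustrative assumptions, not measurements from the
# Amsterdam study or the Djehutihotep painting.

G = 9.81         # gravitational acceleration, m/s^2
MASS_KG = 2500   # one building block, roughly 2.5 metric tons
MU_DRY = 0.55    # assumed sliding-friction coefficient on dry sand
MU_WET = 0.30    # assumed sliding-friction coefficient on wetted sand
HAULERS = 172    # workers per stone, per the tomb painting

def pull_force(mu: float, mass_kg: float = MASS_KG) -> float:
    """Horizontal force (newtons) needed to keep the load sliding: F = mu * m * g."""
    return mu * mass_kg * G

dry = pull_force(MU_DRY)
wet = pull_force(MU_WET)
print(f"Dry sand:    ~{dry / 1000:.1f} kN")
print(f"Wetted sand: ~{wet / 1000:.1f} kN ({(1 - wet / dry):.0%} less force)")
print(f"Per hauler on wet sand: ~{wet / HAULERS:.0f} N each")
```

Under these made-up numbers, damp sand cuts the required pull by a bit under half, consistent with the article’s point that wetting the sand would have made the blocks easier to drag.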

 

Even though most of the stones are thought to have come from a quarry less than a mile away from the Great Pyramid of Giza, transporting them that way would have been a Herculean task—almost too Herculean for construction to have relied on it alone.  

 


 

The study from Communications Earth & Environment proposes that the Egyptians used water to lighten their load. Investigating a theory that the Nile used to branch out into arteries that no longer exist today, study author Eman Ghoneim—an Egyptian-American geomorphologist at the University of North Carolina Wilmington—looked at satellite images of the Western Desert and conducted geological surveys, discovering a waterway that used to stretch alongside the now isolated pyramid complex.

 

Dr. Ghoneim and her co-authors speculate that the waterway, which they proposed to name “Ahramat,” after the Arabic word for pyramids, was buried under sands swept up by a major drought around 4200 years ago. 

 

To make the building process even easier, Egyptian workers appear to have dug causeways connecting the Ahramat to different construction sites, minimizing the distance the stones had to be hauled overland by hand. 

 

In addition to inspiring researchers to look for other dried-up branches of the Nile, the Communications Earth & Environment paper provides the clearest answer as to why so many of Egypt’s pyramids are located in one specific area—an area that, back in the day, was much more hospitable and easily navigable than it is now. 

 

 

Source: Egypt’s Giza Pyramids Might Have Been Built Next To A Now-Vanished River

Link to comment
Share on other sites
