
Fact of the Day


DarkRavie


Fact of the Day - WEATHER MYTHS


Did you know... The meteorological conditions we refer to as the weather can be the source of some pretty serious myths and misconceptions. Some are simply funny superstitions (like using onions to predict the severity of the coming winter). Others thought to be hoaxes or hallucinations (like ball lightning) are now proven to be actual phenomena. Here are eight common myths about the weather — including some that actually have a grain of truth to them.

 

1. Lightning Never Strikes the Same Place Twice


While everyone wishes it were true, this weather “fact” is false. Unfortunately, lightning can strike in the same location repeatedly — even during the same thunderstorm. This is especially true when it comes to tall objects, like TV antennas. For example, the Empire State Building is struck by lightning about 25 times per year. Other common lightning myths include the idea that trees can provide safe shelter (your best bet is always to go indoors) and that touching a lightning victim might get you electrocuted. Fortunately, the human body does not store electricity — which means you can perform first aid on someone struck by lightning without that particular fear.

 

2. Waterspouts Turn Into Tornadoes on Land


This one is both true and false. That’s because there are actually two types of waterspouts — those thin, rapidly swirling columns of air above water, sometimes seen in the Gulf of Mexico, Gulf Stream, and elsewhere. The first is a “fair weather waterspout.” These form from the water up, move very little, and are typically almost complete by the time they’re visible. If they do move to land, they generally dissipate very quickly. “Tornadic waterspouts,” on the other hand, are exactly what their name suggests: tornadoes that form over water, or move from land to water. Associated with severe thunderstorms, tornadic waterspouts can produce large hail and dangerous lightning. If they move to dry land, the funnel will pick up dirt and debris, just as a land-formed tornado would.

 

3. It’s Not Safe to Use Your Cellphone During a Thunderstorm


It’s not safe to use a landline when thunder and lightning are making the skies dramatic, just like it’s not safe to use any other appliances that are plugged in. But an (unplugged) cellphone should be fine, so long as you’re safely indoors. This myth may have arisen from situations in which people were struck by lightning and their cellphones melted, but it’s not because their cellphone “attracted” the lightning in any way. Of course, plugging in your cellphone (or laptop) to charge may present a danger.

 

4. Groundhogs Can Predict the Weather


The Groundhog Day tradition continues every February 2, when the members of the Punxsutawney Groundhog Club trek to Gobbler’s Knob, seeking weather wisdom from a series of woodchucks, all named “Punxsutawney Phil.” If Phil emerges from his burrow and sees his shadow (in bright sunshine), supposedly winter will hang around for six more weeks. If the day is overcast: Yay, early spring! The whole event is based on old Celtic superstitions, though, and Phil’s “predictions” are only correct about 40% of the time — but at least he’s no longer eaten after making the call.

 

5. A Green Sky Means a Tornado Is Coming


It’s a pretty rare event, but deep storm clouds filled with raindrops later in the day may scatter light in a way that makes the sky look green. Such storm clouds likely mean severe weather — thunder, lightning, hail, or even a tornado — is on its way, but it’s no guarantee of a twister per se. One thing’s for sure: It’s definitely not a sign that frogs or grasshoppers have been sucked into the sky by the storm, as people used to think.

 

6. Car Tires Protect Us From Lightning


It isn’t the rubber tires that can keep a person inside a car safe from a direct lightning strike; it’s the metal cage of the vehicle, which conducts the strike’s current (as much as 300 million volts) around the occupants and into the ground. If you can’t get to shelter during a thunderstorm and must be in your (hard-topped) car, keep the windows rolled up and your hands off the car’s exterior frame.

 

7. Spiders Spin Webs, Dry Weather Ahead


This saying has some truth to it. Spider webs are sensitive to humidity, absorbing moisture that can eventually cause their delicate strands to break. For this reason, most spiders will remain in place when rain is imminent. So it stands to reason (at least according to folklore) that if spiders are busily spinning their webs, they may know something that we don’t. In other words: Prepare for a beautiful day! (It’s also true that most spiders seek out damp places, so if you don’t want them taking up residence in your house, a dry home is less hospitable.)

 

8. Doors Are the Best Place to Be in an Earthquake


It’s not “weather” in the sense of atmospheric conditions, but earthquakes can be a pretty dramatic show of the Earth’s forces. Many of us learned this “tip” in school. However, the advice applied mainly to older, unreinforced structures. Today, doorways generally aren’t stronger than other parts of the house, and the door itself may hit you in an earthquake. You’re far safer underneath a table or desk, particularly if it’s away from a window. (The CDC offers more earthquake safety tips.)

 

 

Source: Common Weather Myths, Explained


Fact of the Day - OLDEST CITIES


Did you know.... The United States won its independence in 1776, but, of course, Indigenous populations and colonial settlers were here long before then. That means some cities in the nation were founded well before 1776, giving them a long, rich history that predates the country, by centuries in some cases. Here are 10 of the oldest continuously inhabited cities in the United States that you can still visit today.

 

1. Weymouth, Massachusetts


Weymouth is the second-oldest settlement in Massachusetts, and dates back to 1622. London merchant Thomas Weston sent 60 men there to run a trading post colony to send goods back to England, and named the place Wessagusset. However, the colony was unsuccessful, the settlers soon began to starve, and most opted to move to nearby Plymouth. Six months later, British navy captain Robert Gorges brought 120 men to the Wessagusset site, settled there, and renamed it Weymouth. It became part of the Massachusetts Bay Colony in the 1630s. Located about 10 miles southeast of Boston, the city is now home to approximately 56,000 people.

 

2. Plymouth, Massachusetts


You’re likely familiar with the story of the pilgrims landing at Plymouth Rock from elementary school classes — but the real history isn’t exactly how many of us have heard it. The pilgrims did arrive there and began to explore their new home in 1620, but the site had already been named Plymouth since at least 1614, following Captain John Smith’s voyage. There are no actual accounts of the pilgrims landing on the exact spot where Plymouth Rock sits today (you can go see it at the downtown oceanfront), and they had likely planned to move on from the spot, but many of the pilgrims were sick and winter was looming. They built their settlement on a former Indigenous cornfield, established their own government under a document called the Mayflower Compact, and operated as an independent province until 1691, when the place was absorbed by Massachusetts.

 

3. Jersey City, New Jersey


The European history of Jersey City, New Jersey — located directly across the Hudson River from New York City — dates back to explorer Henry Hudson and the colony of New Netherland, which was established on Lenape Indigenous land in 1623. Michael Reyniersz Pauw, a knight, was given a land grant in what’s now Jersey City, with the stipulation that he bring in at least 50 people — which he didn’t do. In 1633, he had to sell his land back to the Dutch West India Company. Relations with the Lenape began to falter immediately because of the colonists’ treatment of the Indigenous people, and the settlement was almost completely destroyed within 10 years. The British remained in control of the land until the Revolutionary War. In 1779, Alexander Hamilton joined other leaders from New York and New Jersey to lay out and develop the city. Now, the city is home to nearly 262,000 people — and sits just across the water from the most well-known woman of them all, the Statue of Liberty (though her address is officially in New York).

 

4. Albany, New York


The European history of Albany, New York, starts with a simple trading post and fort built by the Dutch in 1614 on Iroquois land. Explorer Henry Hudson couldn’t sail any farther up the present-day Hudson River, so his Dutch sponsors opened an administrative outpost for the Dutch West India Company at the point he stopped, on Westerlo Island, called Fort Nassau. Due to ice and flood damage, the fort was moved north in 1615 and then replaced by the new Fort Orange in 1623, a little farther north. The Dutch West India Company established a town called Beverwijck in 1652; it became known as Albany (after the Duke of Albany) when the Dutch surrendered to the British in 1664. Albany stayed under British power until the Revolution, and it was named New York’s capital in 1797.

 

5. Newport News, Virginia


Although it was officially founded in 1896, Newport News, Virginia, had been a city long before then. It first showed up in print as “Newportes Newes” in the Virginia Company’s 1619 records. Newport News is named after Captain Christopher Newport, who led settlers to Jamestown in 1607. When the colonists left Jamestown in 1610 after a period of starvation, they reunited with Captain Newport on the James River. He told them supplies and reinforcements had arrived, so they returned to Jamestown to await them. From then on, the point at which they met Captain Newport — at the junction of the James River and the Chesapeake Bay — was referred to as Newport’s News. It was eventually shortened to just Newport News. The city now has about 180,000 residents and fans out nearly 70 square miles from that initial meeting point.

 


 

Source: Oldest Cities in the U.S.


Fact of the Day - AIRPORTS


Did you know... The CIA estimates there are more than 41,000 airports worldwide. Atlanta's Hartsfield-Jackson is the world’s busiest airport, with 75 million travelers in 2021, and Qatar's Hamad International in Doha was voted the world's best for 2022, but they only scratch the surface of noteworthy airports around the globe. From runways on ice to unexpected amenities and white-knuckle approaches, take a look at 10 of the most fascinating and extreme airports in the world.

 

1. Tenzing-Hillary Airport (Lukla, Nepal)


Aerodynamics make this domestic airport in the high Himalayas one of the most dangerous in the world. Air density lessens at higher altitudes, forcing pilots to land at higher speeds. Named after Sir Edmund Hillary and Tenzing Norgay, the first climbers confirmed to have summited Mount Everest, the airport has just one short (1,729-foot) runway, which is made riskier by treacherous winds and the surrounding mountains. In spite of the danger, the "gateway to Everest" is visited every year by thousands of tourists and climbers.

 

2. Changi Airport (Singapore)


Routinely named the "world's best" in airport rankings, Singapore's futuristic Changi Airport looks like something out of the movie Avatar. And for good reason: Architect Moshe Safdie, who designed Jewel Changi Airport — the entertainment and retail complex within the facilities — was inspired by the otherworldly landscapes of the 2009 film when he was designing the nature-themed space. Airplanes aren't the only things that fly at Changi: The airport has a butterfly garden with more than 1,000 of the ethereal winged creatures, as well as botanical gardens, myriad sculptures, a suspended trampoline, a hedge maze, and the world's largest indoor waterfall.

 

3. Ice Runway (McMurdo Station, Antarctica)


Antarctica's McMurdo Station requires a considerable amount of cargo to support the scientists and crew conducting research at the bottom of the world. Asphalt is impossible to install, but ice is in plentiful supply. As a result, large aircraft such as the Lockheed C-130 Hercules land on a runway of groomed snow that is packed atop a layer of sea ice over deep and dangerous waters. The runway is reconstructed each summer and remains in operation until December, when the ice becomes unstable. Besides the main Ice Runway, two other nearby runways made of compacted snow and ice, Phoenix Runway and Williams Field, also serve McMurdo Station.

4. Kansai International Airport (Osaka, Japan)


Occupying an artificial island in Osaka Bay, the floating airport of Kansai is an engineering marvel. Serving the cities of Osaka, Kyoto, and Kobe, Kansai was the first airport built on an entirely human-made landmass. Constructed beginning in 1987, the 2.5-mile ocean airport was the largest civil engineering project in the world at the time. It's connected to the mainland by a six-mile bridge, which itself cost around $1 billion USD. (The entire project was more than $20 billion.) One of the busiest airports in Japan today, Kansai was built to withstand typhoons, waves, and earthquakes; however, it is now imperiled by rising sea levels.

 

5. Barra Airport (Eoligarry, Scotland)


Flying off to a beach destination has never been more literal than at this airport in Scotland's windswept Outer Hebrides islands. On Barra island, the hard-packed sands on the bay of Tràigh Mhòr are the runway — the only one of its kind in the world. Since there's no asphalt, planes use the beach for takeoffs and landings, keeping a close eye on the tides and ever-changing weather conditions. The airport offers regularly scheduled flights to Glasgow.

 


 

 

Source: World’s Most Interesting Airports


Fact of the Day - ALEXANDER THE GREAT


Did you know... When studying history, a few big military names come to mind — Julius Caesar, Attila the Hun, Genghis Khan — but none eclipse the conqueror known as Alexander the Great. After becoming king of Macedonia at age 20 in 336 BCE, Alexander completely redrew the world map with his conquests. His empire eventually stretched around 2 million square miles, from Greece to Egypt to India, and Alexander proved himself to be one of the greatest military commanders in history — if not the greatest. Although he only sat on the throne for 13 years, his life forever changed the course of history. These six facts highlight his extraordinary, yet brief, life.

 

1. Alexander the Great Wasn’t a Self-Made Conqueror


Although Alexander the Great is known for his impressive military achievements, the young conqueror got a huge assist from his father. Known to history as Philip II of Macedon, this king of Macedonia subdued the Greek city-states of Athens and Thebes and established a new federation of Greek states known as the League of Corinth before turning his attention toward Persia. He was assassinated by a royal bodyguard in 336 BCE before he could launch the invasion. His son Alexander, after violently eliminating his rivals, inherited a war machine ready to conquer the known world.

 

2. Aristotle Was Young Alexander’s Teacher

In 343 BCE, Philip II summoned Aristotle to be the tutor for his son Alexander. The great Greek philosopher taught the young prince for seven years, until Alexander’s ascension to the throne in 336 BCE. Aristotle then returned to Athens, but Alexander brought the great thinker’s works with him on his conquests, and the two remained in touch through letters. Today, historians believe that the relationship between Aristotle and Alexander — along with the latter’s successful conquests — helped spread Aristotelian ideas throughout the conquered regions.

 

3. Alexander the Great Never Lost a Battle


Although Alexander inherited a well-oiled war machine and was taught by arguably the greatest mind of his age, the young king more than earned his eventual fame. During 13 years of war, Alexander the Great never lost a battle, making him the most successful military commander in human history. In fact, Alexander was so impressive that some military academies still teach his tactics to this day. Alexander’s strength as a leader came from the unwavering loyalty of his army, as well as his ability to leverage terrain to gain the advantage over his enemies. Even when facing superior numbers, Alexander’s strong, decisive, and unrelenting leadership always led his forces to victory.

 

4. An Ancient City Was Named After His Favorite Horse

Alexander was the greatest general who ever lived, but some of that glory is shared with the horse he rode in on. Described as a black horse with a white star on its forehead, Bucephalus was Alexander’s war horse. One famous account states that the Macedonian prince was able to tame the animal after he realized the creature was afraid of its own shadow. Alexander rode the horse into every battle until its death after the Battle of Hydaspes in 326 BCE. The king subsequently named a town near the battle, in modern-day India, Bucephala. Scholars believe that Bucephalus is likely the horse depicted in the Alexander Mosaic, a famous Roman artwork that shows Alexander’s clash with Persian king Darius III.

 

5. Many Theories Surround the Death of Alexander the Great


As much as Alexander changed the trajectory of history by creating one of the largest empires ever known (then or since), so did his death at the age of 32. There are various versions of his demise, which suggest a days-long paralysis or an agonizing drawn-out poisoning. Modern theories posit that Alexander was done in by typhoid fever, or perhaps a rare neurological disorder known as Guillain-Barré syndrome, which would explain reports of his paralysis. Many doctors and historians have explored his death, yet mystery still remains about what finally put an end to the greatest warrior the world had ever seen.

 

6. Alexander’s Vast Empire Did Not Last for Long

While Alexander the Great fashioned an impressive empire, his death sent the region into a tailspin of war and uncertainty for four decades as his generals vied for power. The Hellenistic world eventually settled into four kingdoms, each ruled as a successor state by one of his companions or generals: Lysimachus, Cassander, Ptolemy I, and Seleucus I Nicator. The Ptolemaic Dynasty in Egypt was the last to fall, in 30 BCE, when Cleopatra (an Egyptian pharaoh of Macedonian heritage) died after losing in battle to Octavian, later known as Caesar Augustus of Rome.
 

 

Source: Amazing Facts About the Legendary Conqueror Alexander the Great


Fact of the Day - SERVE IN CONGRESS


Did you know... You’d be forgiven for thinking this distinction belongs to the members of the Bush or Kennedy clans, but it’s actually claimed by the lesser-known Dingell family, which has served southeast Michigan for 90 years and counting.

 


The political dynasty began with the election of Democrat John Dingell Sr. from Michigan’s 15th District in 1932. Along with co-authoring legislation that led to the Social Security Act of 1935, the paterfamilias was best known for introducing a national health insurance bill before his death in 1955. John Dingell Jr. picked up the fight after winning a special election to fill his father’s seat, notching a victory with the passage of the Medicare and Medicaid Act in 1965. He went on to craft a legacy that dwarfed that of John Sr. and nearly all of his colleagues, by way of his longtime chairmanship of the powerful House Energy and Commerce Committee. He retired in 2015 after a record 59 years in the House.

 


The seat then passed into the hands of his wife, Debbie, who set about making her own mark as a sponsor of environmental and health care legislation en route to winning a fifth term in 2022 in the brand-new 6th District. Not yet 70, Debbie could keep the uninterrupted lineage going for several more years, though she’ll likely need help from a yet-to-be-determined successor if the Dingells hope to push past the century mark as representatives of the Great Lakes State. 

 

Just one mother-son pair has served concurrently in Congress.

That would be Frances and Oliver Bolton, Ohio Republicans who shared the chamber over three terms between 1953 and 1965. Frances, who began her congressional career in 1940 by replacing her deceased husband, Chester, went on to earn reelection 14 times, along the way authoring the Bolton Act to establish the U.S. Cadet Nurse Corps. Oliver had the less distinguished career of the two, though both mother and son insisted that he was his own person. When Frances asked if there was anything she could do to help his congressional campaign in 1952, he reportedly replied, “Sure there is — stay the hell out of my district.”

 

 

Source: One family has served in Congress continuously since 1933.


Fact of the Day - ELEPHANT


Did you know.... Newborn elephants suck their trunks.
Other than being members of the class Mammalia, humans and elephants might seem to have little in common. But these seemingly disparate creatures, separated by 80 million years of evolution, have some stunning similarities. One of the most intimate (and adorable) is a behavior shared between newborn human babies and elephant calves. Just like a human infant sucks their thumb, a newborn elephant will do the same with its trunk, and for the same reason — comfort. 

 


 

During the first six months of life, human babies are biologically wired to suck on things, since that’s the primary way infants receive sustenance from their mothers. Thumb-sucking is also a way for babies to self-soothe during times of stress. For elephants, it’s a very similar situation. Since sucking is associated with food and their mothers, elephant calves will suck their trunks much like a natural pacifier — a pacifier with more than 40,000 muscles. An elephant calf also sucks its trunk to learn how to subtly manipulate this immensely important protuberance, and uses the technique as an enhanced form of smelling. So while much has changed since humans and elephants parted ways during the Late Cretaceous, there’s at least one stunning (and very cute) similarity.

 

Elephants have the longest gestation period of any mammal.

Humans have a relatively long gestation period for mammals (especially compared to the Virginia opossum, which is pregnant for only 12 days), but a few animals outlast even us Homo sapiens. Manatees remain pregnant for 13 months, and giraffes can carry their young for two months beyond that, but all mammals pale in comparison to the African elephant, which has a gestation period of 22 months. There are two reasons for this nearly two-year-long pregnancy — one obvious, the other less so. The first is size. The African elephant is the largest land-dwelling mammal on Earth, and it takes time to grow such an enormous creature from a small clump of cells into a calf that weighs more than an average adult man. The second reason relates to an elephant’s amazing intellect, which includes a brain that is shaped similarly to our own but is three times larger. An elephant’s brain contains some 250 billion neurons, and the temporal lobe is particularly well developed, which allows elephants to create complex mental maps stretching hundreds of miles. Without this impressive memory, elephants couldn’t find their way back to life-sustaining watering holes year after year. So while an elephant pregnancy might seem incredibly long, it’s definitely time well spent.

 

 

Source: Interesting Facts About Elephants


Fact of the Day - FAMOUS BRANDS-REAL PEOPLE


Did you know.... Some mascots are born in boardrooms — like Betty Crocker, who started as a fictional advice columnist, or the Geico Gecko, who came about during an actors strike. Others grew more organically, or at least were based in reality. Sometimes the inspiration is a heavily branded version of a company founder, and sometimes it’s a family member. Some mascots have nothing to do with the company at all. How did the little boy get on the Cracker Jack box — and what does Captain Morgan have to do with rum? These eight brand icons have origin stories based in real life.

 

1. Wendy Thomas Beat Out Her Siblings for Burger Stardom

Melinda “Wendy” Thomas, the pigtailed, redheaded girl who graces the logo for burger chain Wendy’s, was a real person — the daughter of the chain’s founder, Dave Thomas. The illustration is based on a photograph taken at a Columbus, Ohio, photo studio in 1969, when she was just 8 years old. It turns out that Wendy wasn’t the only kid up for the role. The Thomas family had four other children: her older sisters Pam, Lori, and Molly, and a brother, Kenny. Wendy, with her red hair and dusting of freckles, had, as she told People in 1990, the “all-American mug” her dad was looking for. The elder Thomas, who died in 2002, eventually regretted naming his restaurant chain after his kid because, as a de facto spokesperson, she “lost some of her privacy.” Wendy did use her face to sell Wendy’s burgers — this time as an adult — one more time in 2011, promoting a burger named after her father.

 

2. The Real Duncan Hines Couldn’t Cook


Before he lent his name to baking mixes, Duncan Hines, born in Kentucky in 1880, was a traveling office supply salesman, a profession that didn’t allow much time for home cooking. Instead, Hines became a discerning patron of local restaurants, taking notes on food quality and even food safety. Car trips were becoming more a part of everyday life during his days on the road in the 1920s through the 1940s, so when he self-published a guidebook called Adventures in Good Eating in 1936, it was a big hit. He updated and re-released the guide each year until he retired in 1954. His favored restaurants started displaying “Recommended by Duncan Hines” in their windows — kind of like Zagat today. After releasing a couple of popular sequels, Hines teamed up with advertising exec Roy H. Park in 1949 to form Hines-Park Foods. The company merged with Procter & Gamble in 1957.

 

3. Captain Morgan Was a 17th-Century Welsh Buccaneer

Buccaneers were a very specific kind of quasi-legal seafarer who sailed around the Caribbean agitating the Spanish empire, typically with financial backing from the English. Captain Morgan is perhaps the most famous of them all, and made a tidy fortune; he invested in sugar plantations and amassed a fleet of 36 ships. But in 1671, he made a crucial error when he attacked Spanish-held Panama City after England had signed a treaty with Spain. England made a show of arresting him, but when he got back to England he was knighted by King Charles II. He eventually returned to Jamaica, was appointed lieutenant governor, and lived the rest of his life in the Caribbean. Morgan didn’t have any input on the spiced rum bearing his name, as far as anyone knows, but when the distillery Seagram’s purchased the recipe from a Jamaican pharmacy in 1944, he apparently seemed like a fitting mascot.

 

4. “Boyardee” Is a Phonetic Spelling of “Boiardi”


Chef Boyardee was a real person, and he was famous before he lent his face to one of America’s most popular canned food brands. He started his career in his native Italy at age 11 and, after settling in Cleveland, Ohio, with his family in the late 1920s, opened an Italian restaurant. Customers loved the food so much that they asked how to make it at home, so he started selling pasta, sauce, and cheese, helping to bring Italian cooking into the mainstream in American households. His company was originally called Chef Boiardi, but it was hard for many Americans of the day to pronounce, so he changed the brand’s name to Chef Boy-ar-Dee. The company is best known for premade canned meals today, but when the products first hit the grocery store, Chef Boy-ar-Dee was the largest American importer of Parmesan cheese. Boiardi sold his company to American Home Foods Company in 1946, but continued consulting and appearing in commercials until around 1979.

 

5. Ronald McDonald Was Played by a Famous Weatherman

When McDonald’s began in the 1940s, it was started by two real McDonalds: Maurice and Richard, a pair of brothers in San Bernardino, California. The company grew throughout the 1950s through franchising, meaning that business owners would operate McDonald's locations under national brand guidelines. One branch in Washington, D.C., owned by John Gibson and Oscar Goldstein, sponsored a local broadcast of the incredibly popular children’s show Bozo’s Circus, starring the red-haired Bozo the Clown. The character was one of the first national celebrities for children, and even before he hit the TV screen, he lent his face to books, records, and other kids’ products. In D.C., Bozo was played by longtime Today show weatherman Willard Scott. The sponsorship was so profitable that after the series ended in 1963, the franchise hired Scott to create his own clown for advertisements: “the silliest and hamburger-eatingest clown, Ronald McDonald.” The original design featured a food tray for a hat and a styrofoam cup for a nose, a far cry from the Ronald McDonald character that would eventually become synonymous with the brand. The idea was a success, and other franchise owners started following suit with their own clowns. Eventually, Goldstein pitched their clown idea to the parent company, but the latter was reluctant at first. It only agreed because of the D.C. branch’s sales numbers, and debuted Ronald McDonald nationally at the 1965 Macy’s Thanksgiving Day Parade. McDonald’s did, however, fire Scott for being too rotund the next year, hiring a new performer and redesigning the character into something closer to what we know today.

 

6. The Cracker Jack Boy (and His Dog) Were a Founder’s Family Members 


The story of Cracker Jack started in the 1870s, when German immigrant Frederick W. Rueckheim and his younger brother Louis started selling bricks of popcorn out of a small office in Chicago. They began selling their caramel corn in the 1890s, and by the end of the century, their creation — and the innovative waxed box that kept its contents fresh — was a sensation. The brand was immortalized in the song “Take Me Out to the Ballgame” in 1908, cementing it as an American icon. As World War I rolled around and anti-German sentiment started rising in America, Cracker Jack needed a show of patriotism to boost declining sales. The company redesigned the boxes in red, white, and blue colors, and for an extra wholesome touch, the elder Rueckheim added an illustration of his grandson, which first appeared in ads and then on packages around 1918. Sadly, the boy died of pneumonia at age 8, so his image stayed on the box as a memorial.

 

7. A 14-Year-Old Boy Designed Mr. Peanut

The Planters Peanut Company was only a decade old when it held a contest in 1916 for the company’s trademark, with a $5 prize (about $150 today). Antonio Gentile, a boy in his early teens from Suffolk, Virginia — part of the same Italian immigrant community as Planters founder Amedeo Obici — drew an anthropomorphic peanut serving hot peanuts, exercising, and walking with a gentleman’s cane. A graphic designer spruced up Mr. Peanut, giving him his trademark monocle and top hat, and in 1918, he appeared in a full-page ad in the Saturday Evening Post. It was the first national ad campaign for not just Planters, but any peanut brand. Mr. Peanut still graces Planters products and ad campaigns, sometimes with some trendy adjustments, like roller blades in the 1990s or, more recently, his death and resurrection as a little baby nut.

 

8. The Michelin Man Was Originally “the Road Drunkard”


The Michelin Man is pretty cuddly-looking for a stack of tires, but at the turn of the 20th century, he was a little scarier, or at least more of a lush. In fact, he was originally born from a design created for a brewery. When the poster artist hired by Michelin showed co-founder Andre Michelin a rejected brewery poster design of a burly human man raising a beer mug, the co-founder wondered what he would look like if he were made of tires. The final ad showed a man-shaped stack of tires with human hands and pince-nez spectacles, raising a champagne glass full of road debris in a toast as a couple of other tire-men looked on. It read “nunc est bibendum,” a Latin quote meaning “now it is time to drink,” and, in French, “the Michelin tire drinks up obstacles.” The company called the man “Bibendum,” and after this poster, people started calling him “the road drunkard.” It would be a while before the tire took on the role of a friendly helper. Until the 1920s, he kept his glass and accessories, and often smoked a cigar; back when only the wealthy could afford cars, this helped him reach the company’s target audience. Today, he’s a more wholesome character.

 

 

Source: The Stories — and Real People — Behind 8 Famous Brand Icons


Fact of the Day - IT'S A WONDERFUL LIFE


Did you know... Frank Capra’s 1946 film It’s a Wonderful Life is a certified American classic. The story follows George Bailey (James Stewart), a small-town banker and family man on the brink of a breakdown. When George is visited by a bumbling second-class guardian angel named Clarence (Henry Travers), he learns the error of his ways and discovers that life is, in fact, wonderful. Before you settle in for a viewing, get to know the film better with these 10 facts.

 

1. The Story Idea Came to Its Writer “Complete From Start to Finish”

In 1938, a writer named Philip Van Doren Stern had an idea for a story while shaving: a Christmas tale about a man on the brink of suicide, saved by his guardian angel. The author quickly sketched out the idea and, over the next five years, slowly transformed it into a short story. In 1943, he mailed about 200 copies of his yarn, called “The Greatest Gift,” as his annual Christmas card.

 

2. The Script Employed a Dream-Team of Writers


Eventually, a draft of “The Greatest Gift” fell into the hands of an agent at RKO Pictures, who paid the author $10,000 for the motion-picture rights. Attempts to transform the story into a screenplay fizzled until director Frank Capra stepped in. Capra’s team of writers — which included Dorothy Parker and the future Pulitzer Prize-winner Frances Goodrich — turned it into a viable script. Filming began in April 1946.

 

3. The Film Was Never Intended for Christmas

Amazingly, It’s a Wonderful Life — whose entire plot happens on Christmas Eve — was originally scheduled for a late January 1947 release. The studio intended its Douglas Fairbanks Jr. vehicle Sinbad the Sailor to be the holiday release, but when production problems with Sinbad’s Technicolor caused a delay, the black-and-white movie got bumped to the earlier Christmas slot.

 

4. Jimmy Stewart Was the Real War Hero


In the movie, George Bailey's brother, Harry (Todd Karns), is a well-decorated war hero. But, in reality, that honor belonged to Jimmy Stewart. The leading man was one of the first Hollywood stars to enlist in the military after the United States entered World War II. He spent the war with the Army Air Corps and flew nearly two dozen combat bombing missions over Europe. Stewart remained active in the military for decades and eventually retired in 1968 as a brigadier general — making him America’s highest-ranking actor.

 

5. The Set of Bedford Falls Was Enormous

Filmed mostly at RKO’s movie ranch in Encino, California, the fictional town of Bedford Falls covered about four acres. The Main Street stretched three city blocks and the town itself contained dozens of buildings — and even 20 fully grown oak trees. (The buildings weren’t all newly constructed, though. Many of them had been used in the 1931 Oscar-winning film Cimarron.)

 


 

 

Source: Facts About “It’s a Wonderful Life”


Fact of the Day - AZTEC EMPIRE


Did you know.... More than five centuries after the Aztec empire’s fall to Spanish conquistadors in 1521, history buffs can’t seem to learn enough about the legendary civilization. In fact, secrets are still being unearthed below the streets of Mexico City. Here are some fascinating facts about the daily life of this once-thriving society.

 

1. They Didn’t Call Themselves Aztecs

As with many ancient societies, much of what we know about the Aztecs comes from written accounts from outside their culture — in this case, descriptions from Spanish conquistadors who arrived in modern-day Mexico around 1519. However, the community that modern historians call “the Aztecs” actually referred to themselves as the Mexica or Tenochca people. Both names come from the region where the empire once flourished — southern and central Mexico, along with the capital city of Tenochtitlan (modern-day Mexico City). The Aztec name likely comes from the Mexica origin story describing their homeland of Aztlan (the location of which remains unknown).

 

2. The Aztec Language Is Still Alive Today


At the height of the Aztec empire’s reign, Nahuatl was the primary language used throughout Mexico, and had been for centuries. Colonists arriving from Spain around the early 16th century introduced Spanish, which would eventually replace Nahuatl. But the Indigenous language isn’t at all dead; more than 1.5 million people speak Nahuatl in communities throughout Mexico, plus there are efforts in the southern U.S. to teach and revive the language. Spanish and English speakers who’ve never heard Nahuatl still know a few words with Aztec origins, such as tomato (“tomatl”), coyote (“coyōtl”), and chili (“chīlli”).

 

3. The Aztec Empire Had Vast Libraries

Surviving accounts from Spanish colonists describe the voluminous libraries of the Aztecs, filled with thousands of books on medicine, law, and religion. But early historians didn’t give the Aztecs enough credit when it came to written language skills, once dismissing the hieroglyphic style used by scribes as primitive. Few written documents have survived the centuries since the Aztec empire’s fall, most destroyed by Spanish conquistadors. But more recent evaluation of the last remaining texts shows that the Mexica people had a sophisticated writing system on par with Japanese that may have been the most advanced in the early Americas.

 

4. The Mexica People Were Highly Educated


Aztec society had a rigid caste system dividing communities into four main classes: nobility, commoners, laborers, and enslaved people. Regardless of social standing, every child in the community attended school to receive specialized education, often for a role that was chosen at birth. Schools were divided by gender and social standing, though all Mexica children learned about religion, language, and acceptable social behavior. Children of nobility often received law, religion, and ethics training to prepare them for future leadership positions, and schools for commoners taught trade skills like sculpting, architecture, and medicine. Because Aztec culture centered on expansion and advancement through military strategy, teenage boys of all social classes received military and combat training, while girls were educated in cooking, domestic tasks, and midwifery.

 

5. Aztecs Used Two Calendars

Mesoamerican calendars from societies of old have remained an interest to many people, especially those who speculate about astrological events or end-of-the-world scenarios. But calendars used by the Aztecs weren’t too dissimilar from our own. The Mexica people relied on two simultaneous calendars: one 365-day solar calendar called the Xiuhpōhualli and a 260-day religious almanac called the Tōnalpōhualli. The solar calendar consisted of 18 months of 20 days each, plus five leftover days at year’s end, with each month named for a significant festival or event. The religious calendar dictated auspicious times for weddings, crop plantings, and other events, pairing a number from 1 to 13 with one of 20 recurring day signs drawn from animals and natural elements.
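Because the two cycles ran side by side, any given day occupied a position in both at once, and the same combination only recurs when the cycles realign. Here is a minimal Python sketch of that interlocking arithmetic; the ordering of the 20 day signs follows commonly published lists, and the day-zero alignment is an illustrative assumption rather than a historically anchored correlation.

# Minimal sketch: map one running day count onto both Aztec cycles.
# Day-sign order follows common references; day-zero alignment is assumed.

DAY_SIGNS = [
    "Crocodile", "Wind", "House", "Lizard", "Serpent",
    "Death", "Deer", "Rabbit", "Water", "Dog",
    "Monkey", "Grass", "Reed", "Jaguar", "Eagle",
    "Vulture", "Movement", "Flint", "Rain", "Flower",
]

def calendar_positions(day: int) -> dict:
    """Locate an abstract day number in both cycles."""
    solar_day = day % 365                        # 365-day solar calendar
    month, day_of_month = divmod(solar_day, 20)  # 18 months of 20 days; "month 18" holds the 5 leftover days
    number = day % 13 + 1                        # religious numbers cycle 1..13...
    sign = DAY_SIGNS[day % 20]                   # ...alongside 20 day signs (13 * 20 = 260 combinations)
    return {
        "solar_month": month,
        "solar_day": day_of_month,
        "tonalpohualli": f"{number} {sign}",
    }

print(calendar_positions(0))        # {'solar_month': 0, 'solar_day': 0, 'tonalpohualli': '1 Crocodile'}
print(calendar_positions(18980))    # identical combination to day 0

Running it shows the same pairing recurring only after lcm(365, 260) = 18,980 days, i.e., every 52 solar years, the span known as the Calendar Round.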

 

6. Aztecs Wore the First Rubber-Soled Shoes


Centuries before rubber became an everyday mainstay in modern products, the ancient Mexica people were harvesting and collecting rubber tree sap for a variety of uses. Archaeological digs throughout Mesoamerica have excavated rubber balls likely used in ceremonial games or for religious offerings, but historians in the early 2000s found that Aztecs also created rubber soles for more comfortable and protective shoes. Researchers believe that Mexica artisans blended and heated rubber tree sap and extract from plants to create the rubber mixture, which could then be shaped and used for shoes, rubber bands, statues, and more.

 

7. Farmers Created Floating Fields

Constructing the city of Tenochtitlan was no small feat for the early Aztec settlers, mostly because the city was built on water. Centered on an island within Lake Texcoco, the city was connected to the lakeshore by causeways, while aqueducts and canals supplied Tenochtitlan with fresh water. Farmland was scarce on an island home to more than 400,000 people, leading Mexica farmers to create floating fields called chinampas. Gardens were constructed by weaving tree branches, reeds, and sticks between poles to create an anchored base covered with mud and dead plants that broke down into nutritious soil. Chinampas doubled as a sanitation system using human waste as fertilizer, which helped crops grow vigorously while protecting drinking water from contamination.

 

 

Source: Facts You Might Not Know About the Aztec Empire


Fact of the Day - JUDGE JUDY


Did you know.... For more than two decades, Judy Sheindlin — known to her adoring audience as Judge Judy — delivered famously withering verdicts from the bench in her daytime TV show of the same name. Although Judge Judy wrapped its cases in courtroom-esque fiction, Sheindlin is a real judge (originally appointed to family court by NYC Mayor Ed Koch in 1982), and her sharp-tongued legal smackdowns are evidence of her genuine judicial style.

 


 

While Sheindlin herself is the real deal, her cases are not decided in a real court of law. Most of the cases that appeared on the serialized juggernaut Judge Judy (which began in 1996) were real disputes sourced from small claims courts, but instead of playing out in court, they went through a process known as arbitration — a method for settling disputes outside the actual legal system. (“Arbiter Judy” doesn’t have quite the same ring to it.)

 


 

Even though the show didn't take place in a real courtroom, Judge Judy still earned some serious bucks. In fact, during the tail end of the show’s tenure, from 2012 to 2020, Sheindlin made an estimated $47 million per year. She was also the highest-paid TV show host in 2018, after she sold the show’s 5,200-episode catalog for a cool $100 million to CBS. Judge Judy wrapped its final season in 2021, but that wasn’t the end for Sheindlin, who launched a brand-new show, Judy Justice, on Amazon Freevee and is currently constructing her very own “Judy-Verse” on the streaming platform. Even as an octogenarian, Sheindlin isn’t ready to hang up her robe just yet. 

 

The Supreme Court doesn’t allow video recordings of its proceedings.


While cameras in the courtroom make for good television, much of the U.S. court system, especially the highest court in the land — the U.S. Supreme Court — doesn’t allow any visual recording of court proceedings. Enacted in 1946, Federal Rule 53 states that “the court must not permit the taking of photographs in the courtroom during judicial proceedings or the broadcasting of judicial proceedings from the courtroom.” In 1972, the government doubled down and banned television cameras as well. (Oral arguments have been recorded since 1955.) With the Supreme Court making groundbreaking decisions on a regular basis, there has been growing pressure to allow visual recording to help inform the American public — and it seems the Supreme Court’s camera shyness could soon come to an end. C-SPAN’s reporting on eight of the current justices’ views on the matter (excluding the latest addition, Ketanji Brown Jackson) shows that the court is open to exploring the idea. It may not have the entertainment value of The People’s Court or Judge Judy, but it’d give Americans a front-row seat to some of the most consequential legal decisions in the country’s history.

 

 

Source: Judge Judy earned $47 million per year.


Fact of the Day - I LOVE LUCY


Did you know.... 

When I Love Lucy first hit the airwaves in 1951, its multicultural plot and unconventional filming techniques broke the television mold both on-screen and behind the scenes. But those daring moves paid off, turning the CBS sitcom, starring Lucille Ball and her real-life husband Desi Arnaz, into one of the most successful shows of a generation — and one that transformed the standards of the television industry forever. With moments made memorable by Ball’s knack for physical comedy, evident in scenes where she's struggling with a candy factory conveyor belt or stomping grapes in a giant barrel, the show went on to win five Emmys, mega-ratings, and the hearts of fans across the country. Here are 13 surprising facts that make the show even more lovable.

 

1. "I Love Lucy" Was a Breakthrough for Interracial Marriage


Ball married Cuban American bandleader and actor Arnaz in 1940, and together they came up with the concept for I Love Lucy. But the idea of them playing a couple on TV was immediately met with resistance from her talent agency. “The people there said the public wouldn’t believe I was married to Desi,” she told the Saturday Evening Post. “He talked with a Cuban accent, and, after all, what typical American girl is married to a Latin? American girls marry them all the time, of course, but not on TV.”

 

2. The Pilot for "I Love Lucy" Was Lost for Four Decades

I Love Lucy’s pilot episode, shot March 2, 1951, couldn’t be found for about 40 years, until one of Arnaz’s collaborators, Pepito Perez, found a 35-millimeter version of it in his house. Though some of it was damaged, most of the footage aired as part of a 1990 CBS special.

 

3. Ball and Arnaz Insisted on Filming in Los Angeles 


At the time, New York City was the place to be for live television productions, because it had the proper facilities. But Ball and Arnaz were insistent on filming in Los Angeles, both for personal reasons and because they wanted to take advantage of using the movie industry’s facilities for heightened quality. As part of the deal, they had to take the additional production costs upon themselves, so their production company, Desilu, was given full ownership of the series, which eventually made the couple the TV industry’s first millionaires.

 

4. The Original Opening Sequence Was Animated by Hanna-Barbera

An agent called up famed Tom and Jerry animators William Hanna and Joseph Barbera to create both the opening credits and interstitials for the show in 1951. They originally drew up stick-figure versions of Ball and Arnaz for the opening sequence, but the stick figures were removed in later airings of the show.

 

5. Ball Was Only the Second Woman to Appear Pregnant on Network TV


When Ball became pregnant in real life, she and Arnaz considered taking a hiatus from the show — but then thought it would be an opportunity to break the mold again. “We think the American people will buy Lucy’s having a baby if it’s done with taste,” Arnaz said. “Pregnant women are not kept off the streets, so why should she be kept off television? There’s nothing disgraceful about a wife becoming a mother.” She ended up being one of the first women to appear pregnant on a major television network and received more than 30,000 supportive letters from fans, despite the fact that the cast wasn’t allowed to say the word “pregnant” on-screen.

 

6. The Show Was Perfectly Scripted

So much of the appeal of the show comes from the natural dialogue that often seemed ad-libbed, especially in scenes between Ball and Vivian Vance, who played Ethel Mertz. Yet none of the words ever went off-script. “We knew what we were going to say and because we were thinking, we were listening to each other, and then reacting and then acting, it came out like maybe we’d made it up,” Ball said. “We never ad-libbed on the set when we were putting it together. It was there.”

 


 

 

Source:  Surprising Facts About “I Love Lucy”

 

 


Fact of the Day - AUDIO INVASION


Did you know... The eerie, futuristic tone of the theremin is unmistakable. In horror and science fiction, it may signal the arrival of a flying saucer, a character’s impending psychotic break, or a twisted science experiment gone wrong. The electronic nature of the noise suggests otherworldly origins, and its use in films like The Day the Earth Stood Still (1951) has made it synonymous with the uncanny and bizarre. The instrument itself, however, is hard to recognize on sight. Consisting of a box with dials and two antennae—a vertical one extending upwards from the right side and a horizontal loop antenna sticking out from the left—it looks like a gadget built for experiments rather than musical compositions. In fact, that was the original intention when it was designed as part of a Soviet research program in 1919. Despite being invented in a laboratory by a physicist turned Soviet spy, it was only a matter of time before the theremin made it big in Hollywood.

 

1. It Came From Russia

Electricity was just starting to transform daily life in the early 1900s. Prior to the 1920s, less than half of U.S. homes had electrical power. Recordings of songs played on the radio, but the instruments used to play them were strictly acoustic. Silent pictures were still the norm at moviehouses. It was in this climate that Leon Theremin created what would become the world’s first mass-produced electronic instrument.

Born Lev Sergeyevich Termen in St. Petersburg, Russia, in 1896, he was a tinkerer from a young age. By 7 he could take apart a watch and put it back together, and by 15 he had built his own astronomical observatory. In his early twenties, the budding physicist was recruited by the newly founded Physical Technical Institute in Petrograd. As a student at the institution, Theremin conducted research into proximity sensors for the Soviet government in the wake of the October Revolution in 1917. His goal was to build a device that used electromagnetic waves to measure the density of gases, and could thus detect incoming objects. In trying to create that instrument, he instead created one that produced a whining sound similar to the thinner strings of a violin. When he moved his hand close to the machine, the pitch leapt higher, and it dropped when he pulled his hand away.

Theremin was an experienced cellist as well as a physicist, and immediately saw the musical potential of his accidental invention. The burgeoning Soviet government saw its value as well, though it lacked military applications.

 

2. Electronic Espionage

 

Vladimir Lenin invited Theremin to the Kremlin to demonstrate his instrument—then known as the etherophone—in 1922. By moving his right hand along the vertical antenna to control the pitch and his left hand along the horizontal one to adjust the volume, Theremin performed Camille Saint-Saëns’s “The Swan” and other pieces for the Russian leader. Lenin was impressed enough to send him on a concert tour around the country. The tour eventually extended to Western Europe. Theremin performed for Albert Einstein in Berlin in 1927, and the following year he brought the instrument to the United States, filling venues like Carnegie Hall and the Metropolitan Opera House with its ethereal music.

The Soviet Union presented the world tour as a chance to show off its mastery of electric technology, but that wasn’t its only motive. Theremin was sent to the U.S. as a spy first and a musician second. His high status in his field granted him access to major American tech corporations like RCA, which signed a contract to manufacture his instrument for the mass market in 1929. The company paid him $100,000 for the rights, but it would be a while before that investment paid off. The first commercial theremins cost $220—worth about $3,700 today, and a prohibitively high price for many hobbyists. Because players controlled it by moving their hands through empty air, the learning curve was also steep. The Great Depression killed any hopes of it becoming an overnight sensation, and RCA suspended production.

The instrument’s inventor, meanwhile, was facing his own hardships. Despite the intelligence he gathered for his home country, Theremin wasn’t welcomed as a hero upon his return. The Soviet Union was in the midst of Joseph Stalin’s political purges, and in 1939, Theremin was arrested for alleged treason and sentenced to eight years in a Gulag for scientists, where he invented bugging devices and aircraft technology for the military. Theremin’s days as a world-renowned concert performer were over, but the instrument that bore his name was just starting to take off on the other side of the globe.

 

3. The Sound of Science

 

The theremin made its cinematic debut roughly a decade after its invention. Dmitri Shostakovich became the first film composer to use it when he scored the 1930 Russian film Odna, or Alone. Instead of leaning into the instrument’s modern sound, he used it to evoke the howling Siberian winds the main character faces at the end of the movie.

In the 1940s, the unusual instrument was first featured in Hollywood film scores. Its association with sci-fi wasn’t immediate. In this decade, it was more often utilized in thrillers and mysteries to add an unsettling effect to scenes where a character experienced psychological distress. In Alfred Hitchcock’s 1945 movie Spellbound, composer Miklós Rózsa used the instrument to suggest mental instability when a white bathroom triggers the protagonist’s repressed memories of a skiing accident. The same composer also included it in his score for Billy Wilder’s The Lost Weekend, released that same year.

The theremin didn’t find its true niche until the 1950s. The decade was characterized by advancements in space exploration and anxieties about nuclear war—both of which helped fuel a golden age of science fiction in cinema. One of the earliest and most famous examples of the instrument in sci-fi is Bernard Herrmann’s score for The Day the Earth Stood Still.

 

In the 1951 movie, the electric squeal helped create a threatening and uncanny atmosphere around the alien invaders that couldn’t be achieved through costumes and special effects alone; countless alien and monster flicks that followed would borrow this same musical trick. By the end of the 1950s, the American public no longer viewed the theremin as a Soviet curio; it had become the official sound of outer space.

 

4. Retro Reputation

 

Though its cultural impact peaked in the mid-20th century, the theremin saw a brief resurgence in the 1990s. This was largely thanks to Tim Burton; the instrument adds a retro, B-movie vibe to the scores for his movies Ed Wood (1994), composed by Howard Shore, and Mars Attacks! (1996), composed by Danny Elfman. In Joel Schumacher’s Batman Forever (1995), Elliot Goldenthal used a theremin to compose the Riddler’s kooky mad scientist theme. In each of these cases, the sound carried connotations it didn’t have 40 years prior. Instead of inspiring sincere dread, the high-pitched tone calls to mind an antiquated vision of the future popularized by low-budget sci-fi movies. A theremin can add a layer of nostalgia—or even ironic cheesiness—to modern media, but filmmakers can’t use it the way Alfred Hitchcock did in the 1940s and expect viewers to take them seriously. More than a century after its invention, it’s clear that the instrument will always be associated with the 1950s sci-fi boom, even if its origins as a tool for Soviet espionage are just as interesting and bizarre.

 

 

Source: Audio Invasion: How the Theremin Went From Soviet Labs to Hollywood


Fact of the Day - KOALAS NOT BEARS?


Did you know... If you—with no prior knowledge of koalas or pouched animals in general—spotted a tree-climbing, leaf-munching, fur-covered creature in the wild, you might assume it was a small bear. That’s essentially what happened in the 18th century, and it’s the reason we still call koalas “bears” today, even when we know better. In the late 1700s, English-speaking settlers happened upon a small animal in Australia that looked like a small, gray bear with a pouch. It was soon given the scientific name Phascolarctos cinereus, which is derived from Greek words meaning “ash-gray pouched bear.” Essentially, naturalists had named the unknown animal based on its appearance and behavior, and people didn’t realize until later that the presence of a pouch is a dead giveaway that an animal is definitely not a bear.

 


Eighteenth-century print of a koala bear

 

According to Live Science, koalas and bears both belong to the same class, Mammalia (i.e. they’re mammals). Then their taxonomic branches diverge: Koalas belong to an infraclass called Marsupialia. Marsupials, unlike bears, give birth to their offspring when they’re still underdeveloped, and then carry them around in pouches. Even if koalas look just as cuddly as bear cubs, they’re much more closely related to other marsupials like kangaroos and wombats. Over time, people adopted a name that the Aboriginal Darug people in Australia used for the animal, koala. (The Oxford English Dictionary’s first citation for the word dates back to the writings of Sir Everard Home in 1808: “The koala is another species of the wombat,” Home wrote. The OED notes that “koala was perhaps originally a misreading of koola.”) But bear still stuck as a modifier, and scientists never went back and replaced arctos (from arktos, Greek for bear) in its genus Phascolarctos with something more accurate. So, technically speaking, koalas are still called bears, even by scientists. Now that you know why we call koalas “bears,” check out what they sound like—we promise you’ll be surprised.

 

 

 

 

Source: Koalas Aren’t Bears, So Why Do People Call Them “Koala Bears”?


Fact of the Day - BOO


‘Boo’ didn’t originate as a ghostly saying. 

 

Did you know..... People have screamed “boo,” or at least some version of it, to startle others since the mid-16th century. (One of the earliest examples documented by the Oxford English Dictionary appeared in that 1560s poetic thriller, Smyth Whych that Forged Hym a New Dame.) But ghosts? They’ve only been using the word boo for less than two centuries.

 


 

The Mysterious Origins of the Word Boo

The etymology of boo is uncertain. The OED compares it with the Latin boare or the Greek βοᾶν, meaning to “cry aloud, roar, [or] shout.” Older dictionaries suggest it could be an onomatopoeia mimicking the lowing of a cow.

 

 


Could cows have something to do with the origin of the word ‘boo’?

 

Whatever the origins, the word had a slightly different shade of meaning a few hundred years ago: Boo (or, in the olden days, bo or bu) was not used to frighten others but to assert your presence. Take the traditional Scottish proverb “He can’t say bo to a goose,” which for centuries has been a slick way to call somebody “timid” or “sheepish.” Or consider the 1565 story Smyth Whych that Forged Hym a New Dame, in which an overconfident blacksmith tries to hammer a woman back into her youth, and the main character demands of his dying experiment: “Speke now, let me se / and say ones bo!”

 

Or, as Donatello would put it: “Speak, damn you, speak!”

 

Boo Gets Scarier
But boo became scarier with time. After all, as the OED notes, the word is phonetically suited “to produce a loud and startling sound.” And by 1738, Gilbert Crokatt was writing in Presbyterian Eloquence Display’d that “Boo is a Word that’s used in the North of Scotland to frighten crying children.”

 

In 18th-century Scotland, bo, boo, and bu would latch onto plenty of words describing things that went bump in the night. According to the Dictionary of the Scots Language, the term bu-kow applied to hobgoblins and “anything frightful,” such as scarecrows. The word bogey, for “evil one,” would evolve into bogeyman. And there’s bu-man, or boo-man, a terrifying goblin that haunted man:

 

Kings, counsellors, and princes fair,
As weel’s the common ploughman,
Hae maist their pleasures mix’d wi’ care,
An’ dread some muckle boo-man.

 

It was only a matter of time until ghosts got lumped into this creepy “muckle boo-man” crowd. Which is too bad. Before the early 1800s, ghosts were believed to be eloquent, sometimes charming, and very often literary speakers. The spirits that appeared in the works of the classical playwrights Euripides and Seneca held the important job of reciting the play’s prologue. The apparitions in Shakespeare’s plays conversed in the same swaying iambic pentameter as the living. But by the mid-1800s, most literary ghosts had apparently lost interest in speaking in complete sentences. Take this articulate exchange with a specter from an 1863 Punch and Judy script:

 

Ghost: Boo-o-o-oh!
Punch: A-a-a-ah!
Ghost: Boo-o-o-o-oh!
Punch: Oh dear! oh dear! It wants’t me!
Ghost: Boo-o-o-o-oh!

 

The Influence of Spiritualism

It’s no surprise that boo’s popularity rose in the mid-19th century. This was the age of spiritualism, a widespread cultural obsession with paranormal phenomena that sent scores of people flocking to mediums and clairvoyants in hopes of communicating with the dead. Serious scientists were sending electrical shocks through the bodies of corpses to see if reanimating the dead was possible; readers were engrossed in terrifying Gothic fiction (think Frankenstein, Zastrozzi, and The Vampyre); British police departments were reporting a heightened number of ghost sightings as graveyards were plagued by “ghost impersonators,” hoaxsters who camped out in cemeteries covered in white robes and pale chalk. It’s probably no coincidence that ghosts began to develop their own vocabulary—limited as it may be—during a period when everybody was curious about the goings-on within the spirit realm.

 

It may also help that boo was Scottish. Many of our Halloween traditions, such as the carving of jack-o’-lanterns, were carried overseas by Celtic immigrants. Scotland was a great exporter of people in the middle of the 1800s, and perhaps it’s thanks to the Scots-Irish diaspora that boo became every ghost’s go-to greeting. Now that you know why ghosts say “boo,” find out a few regional terms for spirits and haunts that you might want to work into conversation, and learn about “ghost words”—nonexistent words that somehow found their way into the dictionary.

 

 

Source: Why Do Ghosts Say “Boo”?

 

 


Fact of the Day - FRIDAY THE 13TH


Did you know.... There are plenty of superstitions out there, but none have woven themselves into the fabric of our culture quite like Friday the 13th. It’s inspired books, songs, and one of the most successful horror movie franchises of all time. But despite giving us anxiety, the origins of this notorious date on the calendar remain largely unknown to most. Where did it start? Does it really stretch back to the 14th century? And how does Loki figure into all of it? There are a lot of urban legends and half-truths out there, so let’s dive a bit deeper into the history of this most terrifying of days with 13 facts about Friday the 13th.

 

1. The Bible is partly responsible for the phobia surrounding Friday the 13th.


Part of the superstition surrounding Friday the 13th comes from the Christian Bible. During the Last Supper, there were 13 guests: Jesus and his 12 apostles—one of whom, Judas, would eventually betray him. Since then, some have believed in a superstition regarding 13 guests at a dinner table, and that belief slowly extended to an overall feeling that the number itself was bad luck. And of course, when Jesus was crucified, it took place on a Friday, leading some to view the day with an anxious eye. Taken separately, both the number 13 and Friday have since made their way into modern superstitions.

 

2. Loki also played a part in inspiring fear of Friday the 13th.
The Last Supper is one view on the origins of our fear of 13. Another comes from Norse mythology—more specifically in the form of the trickster god Loki. In those stories, Loki tricked the blind god Höðr into killing his brother Baldr with a dart of mistletoe. Baldr’s mother, Frigg, had previously ordered everything in existence to never harm her son, except the mistletoe, which she viewed as incapable of harm. How does 13 figure into this? Some accounts say Baldr’s death took place at a dinner held for 12 gods before it was interrupted by Loki—the 13th (and most unwanted) guest.

 

3. Some point to the Knights Templar as the source of why people fear Friday the 13th (but it’s probably not true).


Contrary to what The Da Vinci Code told you, the reason people fear Friday the 13th isn’t because of the Knights Templar. On the very unlucky Friday of October 13, 1307, Philip IV of France had members of the order arrested—he was growing uneasy with their power and covetous of their riches. There were trials and torture, and many of the Knights were burned at the stake, supposedly leading to the superstition of Friday the 13th as a cursed and evil day. That’s not quite true, though. The idea has been drummed up in recent years, most visibly in Dan Brown’s best-selling novel, but in reality, the unlucky combination of Friday and 13 didn’t appear until around the turn of the 20th century.

 

4. A 1907 novel played a big part in creating the superstition.

We know a good deal about the history of our fear of 13 and of Fridays, but combined? Well, that’s less clear. One popular theory, though, points to a 1907 book by a stockbroker named Thomas Lawson. Titled Friday, the Thirteenth, it tells the tale of a stockbroker who picks that particular day to manipulate the stock market and bring all of Wall Street down. The book sold fairly well at the time, moving 28,000 copies in its first week. And it must have struck a chord with early 20th-century society, as it’s said to have caused a real-life superstition among stockbrokers about trading and buying stocks on the 13th. While not the first to combine the dates, Lawson’s book is credited with popularizing the notion that Friday the 13th is bad news. The fear among brokers was so real that a 1923 New York Times article noted they “would no more buy or sell a share of stock today than they would walk under a ladder or kick a black cat out of their path.”

 

5. Stockbrokers have reason to be nervous on Friday the 13th.


Lawson’s book was pure fiction, but the history of the stock market on Friday the 13th can be either profitable or absolutely terrifying, depending on the month. On most Friday the 13ths, stocks have actually risen—according to Time, they go up about 57 percent of the time, compared to 52 percent on any other given date. However, if it’s a Friday the 13th in October … be warned. There’s an average S&P drop of about 0.5 percent on those unlucky Fridays in October. And on Friday, October 13, 1989, the S&P fell 6.1 percent—to this day, it’s still referred to as a “mini crash.”

 

6. Good things happen on Friday the 13th, too.

On Friday, July 13, 1923, the United States got a brand new landmark: The famed Hollywood sign was officially christened on that day as a promotional tool for a new housing development. But before the sign took on its familiar image, it initially read “Hollywoodland”—the full name of the development that was being built on the hills above Los Angeles. The sign took on its current “Hollywood” look in 1949 when, after two decades of disrepair, the Hollywood Chamber of Commerce decided to remove the last four letters and just maintain the first nine.

 

 

Click below ⏬ to read more about Friday the 13th.

 

Source: Fascinating Facts About Friday the 13th


Fact of the Day - TRICK OR TREAT, WHY?


Did you know... Each Halloween, hordes of costumed kids trudge from door to door exclaiming the same phrase at each stop: “Trick or treat!” It’s really a treat-only affair, since adults always shell out candy and children rarely have tricks up their sleeves (except perhaps for those dressed as magicians). In other words, they may as well save half a breath and simply shout “Treat!” So, where did the term come from?

 

Halloween Hijinks
Halloween wasn’t always about cosplay and chocolate bars. During the 19th century, Irish and Scottish children celebrated the holiday by wreaking (mostly harmless) havoc on their neighbors—blowing cabbage smoke through a keyhole to stink up someone’s house, frightening passersby with turnips carved to look ghoulish, etc. 
According to History.com, kids didn’t give up that annual mischief when they immigrated to the U.S., and Americans happily co-opted the tradition. Toppled outhouses and trampled vegetable gardens soon gave way to more violent hijinks—like the time a Kansas woman almost died in a car crash after kids rubbed candle wax on streetcar tracks—and these pranks escalated during the Great Depression.

 


 

In short, tricks were a huge part of Halloween throughout the early 20th century. So, too, were treats. For All Souls’ Day in the Middle Ages, people went door-to-door offering prayers for the dead in exchange for food or money, a tradition known as souling. A similar custom from 19th-century Scotland, called guising, entailed exchanging jokes or songs for goodies. While it’s not proven that modern treat-begging is directly derived from either souling or guising, the practice of visiting your neighbors for an edible handout around Halloween has existed in some form or another for centuries.

 

Canada Coins a Catchphrase
With tricks and treats on everyone’s minds come October, it was only a matter of time before someone combined them into a single catchphrase. Based on the earliest known written references to trick or treat, this may have happened in Canada during the 1910s or 1920s. As Merriam-Webster reports, a Saskatchewan newspaper mentioned the words together in an article from 1923. “Hallowe’en passed off very quietly here,” it read. “‘Treats’ not ‘tricks’ were the order of the evening.” By 1927, young trick-or-treaters had adopted the phrase themselves.

“Hallowe’en provided an opportunity for real strenuous fun,” Alberta’s Lethbridge Herald reported in 1927. “No real damage was done except to the temper of some who had to hunt for wagon wheels, gates, wagons, barrels, etc., much of which decorated the front street. The youthful tormentors were at back door and front demanding edible plunder by the word ‘trick or treat,’ to which the inmates gladly responded and sent the robbers away rejoicing.” The phrase appeared in Michigan’s Bay City Times the following year, describing how children uttered “the fatal ultimatum ‘Tricks or treats!’” to blackmail their neighbors into handing out sweets.

 

Donald Duck’s Endorsement
Sugar rationing brought trick-or-treating to a temporary halt during World War II, but the tradition (and the phrase itself) had gained popularity once again by the early 1950s—with some help from candy companies and a few beloved pop culture characters. Charles Schulz depicted the Peanuts gang cavorting around town in costume for a Halloween comic strip in 1951; and Huey, Dewey, and Louie got to go trick-or-treating in a 1952 Donald Duck cartoon titled Trick or Treat.

 

 

 

Fortunately, the treat part of the phrase has thoroughly overtaken the trick part. But if you stuff rank cabbage in your neighbor’s keyhole this Halloween, we won’t tell.

 

 

Source: Why Do We Say “Trick or Treat” on Halloween?


Fact of the Day - DR. JEKYLL AND MR. HYDE


Did you know.... “I am pouring forth a penny dreadful,” Robert Louis Stevenson wrote to a friend in the autumn of 1885. “It is dam dreadful.” The pulp piece the Treasure Island author was referring to was the Strange Case of Dr. Jekyll and Mr. Hyde, a novella about a man with a (now notorious) split personality: the good Dr. Jekyll and the terrible Mr. Hyde. The book taps into fundamental truths about human nature and has influenced everything from the detective story to the Incredible Hulk. Here’s what you should know.

 

1. The story for Dr. Jekyll and Mr. Hyde came to Robert Louis Stevenson from a dream ...


Stevenson had long been fascinated with split personalities but couldn’t figure out how to write about them. Then one night, he had a dream about Dr. Jekyll and Mr. Hyde. “In the small hours of one morning ... I was awakened by cries of horror from Louis,” his wife Fanny said. “Thinking he had a nightmare, I awakened him. He said angrily: ‘Why did you wake me? I was dreaming a fine bogey tale.’” Stevenson later elaborated on the dream in an essay called “A Chapter on Dreams”: “For two days I went about racking my brains for a plot of any sort,” he wrote, “and on the second night I dreamed the scene at the window, and a scene afterward split in two, in which Hyde, pursued for some crime, took the powder and underwent the change in the presence of his pursuers. All the rest was made awake, and consciously[.]”

 

2. ... and it may have been influenced by a cabinet from his childhood.

Many historians speculate that the duality of Dr. Jekyll and Mr. Hyde was inspired by an 18th-century Edinburgh cabinetmaker named Deacon Brodie, a respectable town councilor and an extremely successful craftsman. Brodie’s job gave him access to the keys of the rich and famous, which he copied in order to rob their homes at night. After a string of heists, he was eventually caught and hanged (according to legend, on a gallows that he had helped design). Brodie’s story fascinated the people of Edinburgh, including Stevenson—even though the thief died more than 60 years before Stevenson was born. The future writer grew up with a Brodie cabinet in his room, and in 1880, he cowrote a play called Deacon Brodie, or the Double Life. But the cabinet, and the man who built it, may have influenced Jekyll and Hyde, too: In 1887, Stevenson told an interviewer that the dream that inspired his story involved a man “being pressed into a cabinet, when he swallowed a drug and changed into another being.”

 

3. Dr. Jekyll and Mr. Hyde was written in just a few days.

A lifelong invalid, Stevenson was sick with tuberculosis when he wrote the famous tale. He’d recently suffered a lung hemorrhage and was under doctor’s orders to rest and avoid excitement. Still, that didn’t stop him from cranking out the first draft of the 30,000-word novella in somewhere between three and six days flat, and then a second, rewritten draft in another meager three days (more about that in a minute).

 

4. Stevenson may have been on cocaine when he wrote it.
In the book, Dr. Jekyll takes a drug from a chemist that turns him into another person. He likes it—until he loses control of the drug. Stevenson may have been drawing from personal experience. It's been reported that he was prescribed medicinal cocaine to treat his hemorrhage (it was discovered in the 1880s that cocaine tightens blood vessels), and that the inspired dream for the story occurred during a cocaine-fueled slumber. Stevenson later professed an affection for the drug, and his wild writing stint is consistent with someone on cocaine. Then again, it’s also consistent with a man faced with financial problems (as Stevenson was) and his own mortality, swept up by inspiration and a great idea.

 

5. The first draft of Dr. Jekyll and Mr. Hyde was destroyed ...

According to one version of events, after reading the manuscript for Jekyll and Hyde, Fanny criticized its failure to successfully execute the story’s moral allegory (among other things). Fanny later recounted that she then found her husband sitting in bed with a thermometer in his mouth. He pointed to a pile of ashes in the fireplace, revealing he’d burned the draft. “I nearly fainted away with misery and horror when I saw all was gone,” she wrote.

 

6. ... possibly by Stevenson’s wife, Fanny. 

There are actually several theories as to how the first draft went up in flames. In 2000, a letter found in an attic revealed more of Fanny’s thoughts on the book, and her mysterious role in the manuscript’s burning. “He wrote nearly a quire of utter nonsense,” she wrote to friend and poet W.E. Henley. “Fortunately he has forgotten all about it now, and I shall burn it after I show it to you. He said it was his greatest work.” The artifact contradicts Fanny’s previous recounting, as well as the one her son told of Stevenson burning the manuscript after he and Fanny got into a fight. In any case, Stevenson spent six weeks revising the book before it was ready for publication.

 

 

Click below ⏬ to read more about Jekyll and Hyde.

 

Source: Strange Facts About ‘Dr. Jekyll and Mr Hyde’


Fact of the Day - REDUPLICATIVE WORDS


Did you know... You may not like jibber-jabber or when life turns helter-skelter, but it’s hard not to like words created by what linguists call “reduplication.” Sadly, not all reduplicative words, despite their charm, catch on. Here’s a look at 12 that deserve to be rescued from their mostly forgotten place in lexical history.

 

1.  Pribble-prabble 
This word, which has been around since the 1500s, has the same meaning as its root, pribble, which is defined as “an argument or quarrel, especially one that’s petty or insignificant.” The expression pribbles and prabbles means the same. Needless to say, every comment section in the multiverse is full of pribble-prabble.

 

2. Curly-murly


This word from the 1700s basically means “really curly,” so feel free to use it the next time you see someone with next-level curls. Curly-murly could also come in handy when making coiffure requests of well-read hairdressers.

 

3. Evo-devo
Evo-devo first appeared in a 1997 issue of Science magazine, and it has a more scientific sense than the rest of the list: “Rudolf Raff and other pioneers have joined forces to create a young field called evolutionary developmental biology, or ‘evo-devo.’” So technically, evo-devo is an abbreviation, but it walks, talks, and looks like a reduplication.

 

4. Fingle-fangle
This term is related to newfangled, which conveys a dismissive attitude toward new stuff, suggesting it’s a bunch of bells, whistles, and crapola. A fingle-fangle is either a piece of junk or an idea so whimsical and insubstantial that it’s barely worth discussing. The OED’s oldest example—from 1652—includes the phrase fingle-fangle fashion, which is fitting. Anything fashionable is probably not going to last.

 

5. Flaunt-a-flaunt


Resembling words like rub-a-dub and pit-a-pat, the 16th-century term flaunt-a-flaunt was often applied to birds—or people who strutted like birds. An excessive touchdown celebration could be considered a flaunt-a-flaunt display.

 

6. Gibble-gabble 
This word for meaningless babbling dates back to the 1600s and is related to gabble, meaning “rapid, unintelligible speech.” It can be an adjective as well as a noun, as seen in a 1693 reference to “Gibble gabble Gibbrish.”

 

Click the link below ⏬ to read more about rare reduplicative words.

 

 

Source: A Higgledy-Piggledy Look at 12 Rare Reduplicative Words

 

 


Fact of the Day - KING TUT AND POP CULTURE


Did you know.... As far as rulers go, King Tutankhamun wasn’t a particularly significant figure in Ancient Egypt. The young pharaoh assumed the throne at age 9 (around 1332 BCE) and died just a decade later following a lifetime of health struggles. But despite his brief reign, King Tut is one of the best-known rulers of Ancient Egypt, and the gold mask of his face is recognized around the world. Tut’s fame has less to do with the life he lived than with what he left behind. When British archaeologist Howard Carter and his team cracked open his burial site in the Valley of the Kings near Luxor, Egypt, in 1922, it was untouched by grave robbers. Never before in modern times had a pharaoh’s tomb been found in such a pristine state, and the discovery became a sensation. Tutankhamun wasn’t a famous figure previously, but he quickly became one through movies, music, and magazines. Even parodies of “Tutmania”—like Steve Martin’s catchy Saturday Night Live song—have become cultural giants in their own right. Here are more ways King Tut has shaped pop culture over the last century.

 

1. “Old King Tut” got people on the dance floor.

 

Few archaeological discoveries are sensational enough to inspire hit songs, but that was the case with “Old King Tut” in 1923. The jaunty tune from songwriters Harry Von Tilzer and William Jerome depicted the Egyptian ruler as a ladies’ man with a tomb loaded with “gold and silver ware” and “souvenirs.” Its rise coincided with the Charleston craze, and it was a popular number to dance to in the 1920s.

 

2. King Tut inspired a horror movie franchise.


Unlike other classic monsters like Dracula and Frankenstein, the Mummy didn’t come from literature. The Universal Pictures horror movie from 1932 was instead inspired by the real-life discovery of Tutankhamun’s tomb. After Howard Carter located the mummy and his riches in 1922, a series of misfortunes befell prominent figures connected to the expedition. Lord Carnarvon, who helped fund the mission, died of sepsis from an infected mosquito bite the following year. His secretary Richard Bethell, who accompanied Carter on the trip, died in 1929 under mysterious circumstances—possibly murder. These and other strange incidents fueled rumors of a “mummy’s curse” unleashed by Tutankhamun when his tomb was disturbed. The myth became the basis for The Mummy starring Boris Karloff, in which a team of archaeologists accidentally brings a mummified Egyptian priest back to life. Screenwriter John L. Balderston had previously covered the discovery of King Tut as a journalist, and his real-life expertise enriched his fictionalization. In addition to its sequels, The Mummy has inspired numerous spinoffs and reboots, all of which rely on the concept of the “mummy’s curse” popularized by Tut’s tomb.

 

3. Flappers embraced Egyptian style.
Tutankhamun became an influencer of sorts more than 3000 years after his death. When his tomb was discovered, the signature look of Ancient Egypt infiltrated Western fashion. American women in the 1920s channeled the timeless aesthetic with kohl eyeliner, bobbed hairstyles, and ornamental jewelry featuring Egyptian motifs. The fad reached its height in the Flapper era, but influence from Tut’s time can still be found in the fashion industry today.

 

4. Movie theaters took a cue from Egyptian architecture.


Egyptian-inspired architecture has seen many revivals throughout history. Howard Carter’s discovery in the Valley of the Kings triggered one of the later waves in the 1920s, and this time it blended with the art deco movement. One of the main venues for Egyptian revival architecture during this decade was the cinema, which was exploding in popularity. Dozens of so-called “Egyptian theaters” featuring columns, sphinxes, and other Ancient Egyptian-inspired designs were constructed in the 1920s, and just a fraction continue to operate today. 

 

5. Pulp magazines went to ancient Egypt.
Howard Carter’s expedition was perfect fodder for the adventure pulp magazines of the 1920s and ‘30s. Heroes traveling to “exotic” locales often found themselves in Ancient Egypt, where they would have to contend with vengeful mummies. In accordance with the myth spurred by King Tut’s discovery, these mummies were usually capable of inflicting terrible curses. 

 

6. Ancient Egypt became a marketing tool.
Many companies took advantage of Tut fever in their marketing, even if their products had nothing to do with Ancient Egypt. Cards depicting the young pharaoh came in cigarette cartons, and lemons were sold under the label “King Tut Brand” (because nothing screams “fresh produce” like a millennia-old mummy). Other entrepreneurs were more creative in how they embraced Egyptian themes; the stage magician Carter the Great incorporated images and story elements inspired by Tut’s discovery into his act.

 

7. Steve Martin sang “King Tut” on SNL.

 

In the late 1970s, America was gripped by Tutmania 2.0. An exhibition titled “Treasures of Tutankhamun”—featuring artifacts from his tomb like his iconic gold mask—toured the U.S., sparking a renewed cultural obsession with the historical figure. Millions of people saw it, including celebrities like Andy Warhol and Elizabeth Taylor. The craze was still going strong when Steve Martin donned Ancient Egyptian garb and performed “King Tut” on Saturday Night Live in 1978. With lyrics like “He gave his life for tourism,” the novelty song was meant to parody the commercialization of the exhibit—but the single ended up doing the very thing it mocked when it went platinum. Forty years later, “King Tut” remains one of SNL’s most enduring segments—even if the original context is lost on teens who have only seen clips on TikTok.

 

8. King Tut fought Batman.
Tutankhamun is technically part of the DC universe—or at least a Batman villain who believes he’s a reincarnation of the boy king is. King Tut debuted in the Adam West-led television series in 1966. The character never reached the same level of notoriety as Catwoman or the Joker, but he’s arguably the most successful villain who originated with the ’60s show rather than the comics. Just don’t expect him to make an appearance in Matt Reeves’s next Batman film.

 

9. The candy industry jumped on the Tut bandwagon.

 

The second wave of Tutmania eventually reached the candy aisle. In the 1980s, Terry’s—the makers of those foil-wrapped chocolate oranges—sold a treat called the Pyramint. It consisted of a mint fondant-filled dark chocolate shell shaped like an Egyptian pyramid. Kids around this time also enjoyed Tut-inspired Yummy Mummies. The item was similar to Fun Dip, the main difference being that the candy sticks were meant to evoke bandage-wrapped mummies (yummy!).

 

10. King Tut’s mask became a political symbol.


In addition to being an iconic piece of Ancient Egyptian art, Tutankhamun’s funerary mask is one of the most recognizable artifacts of all time. In the past century, the golden face has been used for much more than selling souvenirs; in fact, several groups adopted it as a political symbol. In postcolonial Egypt, the mask came to represent cultural pride and independence. It’s been used as a message of resistance, appearing as graffiti in Cairo during the 2011 revolution. Beyond Egypt, members of the African diaspora have reclaimed the symbol from the colonialist powers that have profited from it historically. The king’s face has appeared on the cover of the NAACP’s monthly magazine and in works by Harlem Renaissance artists. Tutankhamun has had an eventful afterlife as a dynamic part of our culture—even if that afterlife came a few thousand years later than expected.

 

 

Source: Ways King Tut Influenced Pop Culture


Fact of the Day - WOODY WAGONS


Did you know... 

Not even Clark Griswold takes as much abuse en route to Walley World as the “Wagon Queen,” the station wagon carrying his family from Chicago to California in 1983’s National Lampoon’s Vacation. With its pea-green chassis and wood paneling, the Queen was so viscerally revolting that the movie is credited with ushering in—or at least symbolizing—the demise of the all-family vehicle. But for over a decade, these clunky automobiles were a common sight on highways. And many of them sported a now-inexplicable feature: faux wood grain exterior paneling. Cars that looked like stereo speakers on wheels were the aesthetic choice for many, and the mere mention of them evokes visions of a dashboard crammed with 8-track tapes and cigarette lighters. They even earned an appropriately kitschy nickname: “woody wagons.”

 

Side Swiped
It’s hard to pinpoint the exact origins of wood-panel couture. According to Apartment Therapy, wooden wall panels in architecture date back to Elizabethan and Tudor-style design. Sometimes they were utilitarian—wood was better for insulating a house—and other times ornamental. After World War II, the explosion of home construction meant finding cheap ways to make interiors feel warmer. Wood and wood paneling were key.

 


 

People looked for that same vibe in their automobiles, too. When car production began ramping up in the 1920s, it wasn’t unusual for manufacturers to use wood for the entire body of the vehicle. Horse-drawn carriages, boats, and planes were, after all, made of wood, and steelmaking was expensive. Some enterprising types added aftermarket wooden panels to give a car a more distinguished appearance. It was a status symbol, as the constant upkeep required for wood—weather-sealing, varnishing, polishing—was an expensive endeavor.

Automaker Henry Ford was a proponent of the approach. His Ford company bought 400,000 acres of forest in Michigan so wood for auto bodies could be harvested. In 1929, Ford introduced the first mass-produced “woody,” a $695 bargain (about $12,000 in today’s dollars) made of maple, birch, and mahogany, with a single piece of glass for the front windshield. Side windows were just open spaces with curtains. Ford marketed it as a commercial truck, because vehicles with more cabin space were growing in popularity. The term station wagon grew out of vehicles that were used to pick people up from train stations, hauling their belongings to vacation destinations.

By the late 1940s, producing “woodies” was no longer cost-effective. The cars had to be made by hand, and an artisanal approach to mass production was financially impossible. To get the look, carmakers opted for a more durable chassis material like steel and then applied wood paneling to the exterior. (The Chrysler Town and Country was among the popular models of the era.)

 

Expert Panel
Just when it seemed like the trend was due to expire, along came surfers. Beach bums in California found that used wood and wood-paneled cars held up better on the West Coast, thanks to its lack of snow; the ample storage of station wagons in particular gave them plenty of room for their surfboards.

 


 

Wood was still desirable, but its lack of durability was a problem. (Try getting into a highway accident in a car made of birch.) In the 1970s, technological developments made the fabrication of faux wood grain paneling feasible and cheap. These pieces could be synthetic but have the appearance of oak or cedar. That meant people could easily accessorize their homes in faux wood, with everything from stereo equipment to Atari video game consoles sporting the look. It also meant cars could be decked out in water- and wear-resistant wood finishes. In some cases, the “wood” was just a vinyl decal applied to the body.

While station wagons remained the premier canvas for wood paneling—more than 1 million of the vehicles were sold each year from 1971 to 1973—wood grain eventually moved to other car types. Some models, like the Pontiac Acadian, offered a faux wood finish as an option, meaning someone had to willingly pay extra for the privilege. Others, like the AMC Pacer, virtually forced drivers to accept it.

It wasn’t necessarily that wood paneling died out; it was that the station wagon collapsed and took out everything associated with it. The wagon’s demise correlated with the surge of interest in the minivan, a more nimble family vehicle first introduced in 1984 by Chrysler. Minivans were also more fuel efficient, a key benefit after the oil crisis that consumed part of the ’70s. While the woody may have gone off-road, it hasn’t been forgotten. Classic car collectors value the craftsmanship, and finding one in good condition—without the expected rot—can fetch a six-figure sum at auction. Maybe Clark Griswold should have kept his.

 


1976 Pontiac Acadian

 


1970s AMC Pacer

 


1984 Chrysler minivan

 

 

Source: When 'Woody Wagons' Ruled the Road
 

