
Fact of the Day


DarkRavie


Fact of the Day - WHAT IS A GLABELLA?


Did you know...  You know your head, shoulders, knees, and toes (knees and toes), but has anyone ever introduced you to the glabella? This isn’t some hidden-away body part like the back of the elbow or something spleen-adjacent — it’s smack dab in the middle of your face. Latin for “smooth, hairless, bald,” the glabella is the small patch of skull nestled between your two superciliary arches (also known as your eyebrow ridges). Many people know of the glabella because of the wrinkles, or “glabellar lines,” that can appear in the area.

 

Although smooth and hairless today, the glabella wasn’t always so. Our human ancestors, including Neanderthals, instead sported formidable brow ridges that likely evolved to display social dominance. As the brain of Homo sapiens grew, this brow receded until only the smallest of ridges survived — along with the smooth bit of bone in between. But the fortunes of this little piece of anatomical real estate weren’t just tied to evolution. Women in ancient Greece saw the unibrow as a beautiful feature, so much so that they’d paint soot on their glabellas to form a faux unibrow. Throughout the following centuries, fashion’s notion of the ideal eyebrow changed, but the glabella remained more or less true to its smooth, hairless name. Unless Frida Kahlo’s famous unibrow becomes a modern fashion trend, it’ll likely stay that way.

 

Eyebrows played a crucial role in the rise of Homo sapiens.
Eyebrows are an often-overlooked asset of human beauty. Folks write poetry about gorgeous eyes and ballads on beautiful smiles, but eyebrows, while certainly an obsessed-over feature in modern beauty trends, rarely receive as much adoration. Yet according to anthropologists, the fuzzy caterpillars on our foreheads are vital to the survival of our species. A study in 2018 found that eyebrows figure prominently in human social interaction and aided early humans in forming large, complex social groups. One of these interactions occurs when people see each other at a distance — in that situation, people unconsciously raise their eyebrows in a way that apparently shows they’re not a threat. Eyebrows similarly raise toward the middle to signal sympathy, and their micro-movements can also play a key role in expressing trustworthiness or deception. With this ability to convey subtle emotions in only an “eyebrow flash,” humans formed larger and more diverse social groups on our journey toward becoming the dominant animal on the planet.

 

 

Source: The space between your eyebrows is called the “glabella.”


Fact of the Day - THE MICROWAVE


Did you know... The history of technology is filled with happy accidents. Penicillin, Popsicles, and Velcro? All accidents. But perhaps the scientific stroke of luck that most influences our day-to-day domestic life is the invention of the microwave oven. Today, 90% of American homes have a microwave, according to the U.S. Bureau of Labor Statistics, but before World War II, no such device — or even an inkling of one — existed. 

 

During the war, Allied forces gained a significant tactical advantage by deploying the world’s first true radar system. The success of this system increased research into microwaves and the magnetrons (a type of electron tube) that generate them. One day circa 1946, Percy Spencer, an engineer and all-around magnetron expert, was working at the aerospace and defense company Raytheon when he stepped in front of an active radar set. To his surprise, microwaves produced from the radar melted a chocolate bar (or by some accounts, a peanut cluster bar) in his pocket. After getting over his shock — and presumably cleaning up — and then conducting a few more experiments using eggs and popcorn kernels, Spencer realized that microwaves could be used to cook a variety of foods. Raytheon patented the invention a short time later, and by 1947, the company had released its first microwave. It took decades for the technology to improve, and prices to drop, before microwaves were affordable for the average consumer, but soon enough they grew into one of the most ubiquitous appliances in today’s kitchens.

 

The discovery of evidence for the Big Bang was also an accident.
In 1964 at Bell Labs outside Holmdel, New Jersey, radio astronomers Arno Penzias and Robert Wilson were frustrated with their antenna. The sensitive equipment was picking up a persistent buzzing noise that the pair first thought might be coming from the machine itself, nearby New York City, or even pigeons nesting in the antenna. However, once every explanation appeared to be accounted for, the two astronomers still detected the hum no matter where they pointed the antenna in the sky. After speaking with astronomers at Princeton University, the duo realized that they had actually detected the cosmic microwave background, which is leftover radiation from the Big Bang and evidence for the very beginning of the universe some 13.8 billion years ago. Fourteen years later, Penzias and Wilson were awarded the Nobel Prize in physics for their groundbreaking — and serendipitous — discovery.

 

 

Source: The microwave was invented by accident, thanks to a melted candy bar.


Fact of the Day - BAGELS


Did you know.... New York and Montreal are famous for their bagels today, but who made the first bagel—and how did they rise to popularity as a beloved breakfast staple?


In 1976, Associated Press reporter Jules Loh shared his advice for Southerners traveling to New York for that year’s Democratic National Convention. After explaining that New Yorkers say youse instead of y’all and can’t pronounce pecan correctly, he described their exotic cuisine. “They call breakfast breakfast, but ordering it will be a problem for you,” he wrote. “Forget grits, which are unheard of. They eat something called a bagel, which is as hard to describe as it is to chew. Don’t send it back—it’s supposed to be that hard.”

 

Around this time, bagels were transforming from a regional specialty item to a mainstream breakfast staple in the U.S. In 2020, more than three in five Americans reported eating bagels, and according to a survey from 2022, the average person consumes 38.7 bagels per year. The baked good can be found in supermarkets, fast food chains, and office break rooms across the country—though whether the frozen, pre-sliced version truly qualifies as a bagel is a matter of debate. 

 

 

 

The bagel’s success is undeniable, but its path to breakfast dominance wasn’t straightforward. The journey was long and winding, much like the line at your local bagel shop on a Sunday morning.

 

What Makes a Bagel a Bagel
People have been rolling dough into rings for centuries. The shape serves a clever purpose: Foods with holes can be hung up on a rod or string, making them easy to transport and display in large quantities. Italian taralli and Middle Eastern ka’ak are both examples of this design—but technically they’re not bagels. The doughy rings most Americans are familiar with are distinguished by their cooking method as well as their form.

 

Making bagels takes some complicated science. After the dough rings are shaped, they have to rest for up to 48 hours in a refrigerator. This process is called “retarding,” and it helps flavors develop in the dough through fermentation. It’s also essential for those tiny blisters that form on the crust during the baking stage.

 


 

Before they go in the oven, bagels are traditionally boiled. A brief dip in a hot water bath gelatinizes the starch on the surface of the dough. The starch granules swell with water until they dissolve, which unlocks the starch molecules and allows them to absorb additional water. This increases the moisture content in the bagel and contributes to its chewy texture. Parboiling the bagels also deactivates the yeast on the surface of the dough, which can’t survive at high temperatures. 

 

This step gives the bagel its crust, which is also responsible for its unique consistency. The water molecules on the bagel’s outer layer become bound, meaning they’re less prone to evaporate during the baking process. In a regular loaf of bread, evaporation is what makes the crust, well, crusty. Because a bagel’s crust sets early in the cooking process, the dough doesn’t rise much in the oven. This keeps the crumb dense and chewy.

 

It’s sometimes suggested that poaching the dough was more than a matter of taste. There’s a popular story that because bread was associated with Christian communion, Jews were banned from baking it, and bakers skirted the bans by tweaking their recipes to include boiling. Bagel researcher Maria Balinska describes this as a folk tale, though. 

 

The Oxford Encyclopedia of Food and Drink in America gives another theory. Jewish dietary law requires that before bread is eaten, hands need to be washed and a blessing said. But clean water wasn’t always available, so observant Jews might not have been able to eat bread while away from home. So, this theory goes, boiling the dough first somehow exempted it from those requirements, allowing it to be eaten without the blessing and hand-washing.

 

The Unclear Origins of the Bagel
One of the most common parboiled treats to come out of Jewish bakeries was obwarzanek—a ring-shaped Polish snack that may have derived from pretzels brought over by German immigrants in the 14th century. According to one story, the rings rose to prominence when Poland’s first female ruler, Jadwiga, gave up fine breads and pastries for Lent. Instead of abstaining from carbs altogether, she made obwarzanek her slightly less indulgent bread of choice for the holy season.

 

Another legend traces the bagel’s origin to 1683. That year, the Polish king Jan Sobieski allied with Austria to achieve victory against invading Turkish forces. A Viennese baker reportedly celebrated the feat by baking dough in the shape of a stirrup to honor the king’s love of horses. The circular baked good was named beugel, or “stirrup,” in German. Plenty of experts are dubious about the veracity of that story, too. There’s evidence that bagels may predate that time period, and the tale is suspiciously similar to a popular origin story for croissants. 

 

We might never learn the exact origin of the boiled and baked good, but considering a similar Germanic word can mean “ring,” the word bagel likely has roots in German. From there it morphed into the Yiddish beygl, which turned into the anglicized term used today.

 

New York vs. Montreal
The bagel underwent a transformation in the 19th century. Jewish immigrants from Eastern Europe were arriving in the U.S., and they brought with them culinary traditions from the old country. Thousands settled in New York City, which quickly became the bagel capital of not just America, but the world. Early Polish bagels were tough with wide holes in the center that made them unsuitable replacements for sandwich bread. Jewish bakers adapted their recipes to suit American tastes by shrinking the holes and softening the texture without sacrificing the chew.

 


 

At the same time, Jewish immigrants from across Eastern Europe were mixing elements of their cuisines to create new Jewish American dishes. A number of cured and smoked fish, which were essential to surviving long winters in Europe, were paired with bagels. These types of fish, like lox, continued to prove practical in the new world, but for different reasons. Families packed into tenement buildings without stoves or running water often struggled to cook at home. Even if Jewish families did have access to a functioning kitchen, they would have abstained from using it during Shabbat, instead picking up prepared foods from local businesses during the day of rest. Smoked salmon and bagels purchased from the local bakery and appetizing store therefore became a quick and accessible meal. Newly-invented cream cheese wasn’t a traditional Jewish ingredient, but its rich fattiness made it the perfect pairing for salty cured fish. Soon, other toppings like tomatoes, capers, and red onions entered the picture to make bagel and lox a fully-contained meal. 

 


 

Across the Canadian border, bakers in Montreal were experimenting with different bagel styles. Though the exact origin of the Montreal-style bagel is debated, historians agree they first appeared around the turn of the 20th century as Jewish immigrants from Eastern Europe were settling in the city. Unlike New York bagels, the Canadian version is smaller, denser, and sweeter thanks to the honey-flavored water it's boiled in. The two styles are often pitted against each other, but it’s more of an apples and oranges—or New York slice and Chicago deep-dish—situation. Eat one of each and then regret and/or celebrate your decision as you see fit.

 

How the Bagel Gets Made
Bagels were a convenient choice for customers in the early 1900s, but making them was laborious. The process of making one batch—which included kneading the dough, fermenting it overnight, boiling it, and baking it—could easily take 24 hours or more. That work was often done in dirty, underground rooms in front of scorching hot ovens. It wasn’t unusual for the cellars to reach ambient temperatures of 120 degrees Fahrenheit.

 

These rough conditions gave rise to one of the strongest labor unions in New York City history. Founded in the 1930s, Bagel Bakers Local 338 consisted of around 300 Jewish bakers. You had to have a family connection to be considered, and even then a three-to-six-month apprenticeship and a minimum rolling speed of 832 bagels an hour were required to become a member. The exclusive membership came with an enticing upside. In 1960, the starting pay for oven workers was $150 for 37 hours of work a week, which is the equivalent of more than $80,000 annually today. Additional benefits included healthcare, dental, vision, overtime, 11 holidays, three weeks vacation, and 24 free bagels for every full work day. 
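As a rough sanity check on that salary conversion, here is a minimal sketch of the arithmetic; the ~10.5x inflation multiplier is an illustrative assumption, not a figure from the article.

```python
# Back-of-the-envelope check of the union wage figure.
weekly_pay_1960 = 150                      # starting pay for oven workers, per the article
annual_pay_1960 = weekly_pay_1960 * 52     # $7,800 per year in 1960 dollars
assumed_cpi_multiplier = 10.5              # assumed 1960 -> today inflation factor (illustrative)
annual_pay_today = annual_pay_1960 * assumed_cpi_multiplier
print(f"${annual_pay_1960:,} in 1960 is roughly ${annual_pay_today:,.0f} today")
# -> $7,800 in 1960 is roughly $81,900 today, consistent with "more than $80,000"
```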

 

The union drew up new contracts each year, and as the controlling force behind New York’s favorite breakfast item, they had a lot of negotiating power. When Local 338 went on strike, they forced the city into “bagel famines,” shuttering the majority of bagel shops for weeks at a time. Faced with throngs of hungry customers and no way to feed them, employers were eventually forced to grant the workers’ requests. 

 

The union dissolved in the 1970s in the face of a rapidly changing industry. For decades, bagels were an artisan product that could only be made by hand. Innovations in food production—such as preservatives, revolving ovens, and rolling and shaping machines—made them easier and cheaper for businesses to produce, thus taking away the workers’ bargaining power. The machine-made bagels were softer and closer to regular bread than traditional recipes, but companies had little problem selling them.


 

One of the biggest changes to the bagel world came from the father-and-son team of Harry and Murray Lender. Harry owned a wholesale bagel shop in New Haven, Connecticut—one of the few outside New York. The Lenders realized that freezing bagels preserved their texture and flavor, making mass distribution possible for the first time. Throughout the 1960s and ‘70s, packaged bagels began appearing on supermarket shelves across the country, and by the ‘90s they were as mainstream as fast food. In some cases, they were fast food. An LA Times article from 1993 describing the bagel as “America’s Newest Food Craze” reported on Burger King serving limited-time bagel breakfast sandwiches—a fairly new phenomenon at that point.

 

These were discouraging times for old-school bagel makers. As the process became easy to automate, and the product became cheaply and readily available, it was easy to see how making bagels by hand could become a lost art. Thankfully, that doesn’t seem to be happening any time soon. Many bagel shops in New York City and beyond continue to roll and shape their dough the old-fashioned way, and their hard work is often rewarded. The most popular bagel places can attract lines of people willing to wait for a fresh, high-quality version of something they can otherwise just pick up at the grocery store. In many cases, these artisan products haven’t changed from what was served a century ago—though if you prefer a bagel that’s soft, rainbow, or cinnamon raisin, that’s available, too.

 

Source: A Brief History of Bagels


Fact of the Day - AUGHTS


Did you know... Suggestions for what to call the period of time from 2000–2009 ranged from ‘the nillies’ and ‘the oh-ohs’ to ‘the double zeroes’ and ‘the noughties.’ So how’d we land on ‘the aughts’?

 

Most decades are easy to name. It takes no thought at all to realize we’re in the ‘20s, and just about every other decade is equally easy: the ‘30s, ‘40s, ‘50s, ‘60s, ‘70s, ‘80s, ‘90s, and teens.

Then there’s the first decade of the 21st century. Why do we refer to that period of time, from 2000–2009, using the aughts?

 

The Meaning of Aught
The aughts was suggested because of those ‘00s: Aught (or ought) means “zero,” and it’s a corruption of the older word naught, which dates all the way back to Old English. (If you recognize it, it’s likely from the expression all for naught.)

 

According to the Oxford English Dictionary (OED), aught and ought emerged in the 1800s; the oldest known example appeared in Maria Edgeworth’s 1822 book Frank: A Sequel to Frank in Early Lessons: “It was said … that all Cambridge scholars call the cipher aught and all Oxford scholars call it nought.”

 

While we tend to refer to the beginning of the 20th century as the 1900s today, according to Slate, the aughts was one of “the most common” terms for that period of time in that era, which was apparently “a logical extension of the fact that Americans living at the turn of the century referred to individual years as ‘aughts,’ meaning zero, as in ‘nineteen aught one,’ ‘nineteen aught two,’ etc.”

 

Aughts vs. Noughties

Another term for the first decade of this century, used primarily in the UK and Australia, has been successful enough to make the OED: the Noughties (or Naughties).

 

The term has been in use since at least a 1989 William Safire column in The New York Times on the topic of what to call the first decade of the new millennium: “That postcard touches on several possibilities suggested by scores of … third-millennium freaks ... The Naughties was suggested by 40 readers.”

 

A 1991 New Scientist article also used the term, with a finger-wagging tone: “With regard to Richard Caie’s question about suitable names for the next two decades … : considering the moral decline of society as a whole, the next decade must surely be the noughties.”

 

By 1999, however, what to call the upcoming decade was far from settled, with the BBC noting that “No one seems to have been able to provide an answer to the puzzle of what the next decade will be called.  … The ‘noughties’ could be the one to head the—admittedly sorry—list of contenders.”

 

But few say “Naughties” or “Noughties” today, probably because it feels like a schoolmarm dressing down a child, or, as the BBC put it, “a polite, middle-class code for the reproductive organs”—funny ways to talk about a decade, no matter how you look at them.

 

What’s in a Name?
Interestingly enough, even during the aughts, there wasn’t universal agreement on what to call them. As  linguist and language columnist Ben Zimmer wrote at the Oxford University Press blog:

 

It’s a curious situation: here we are at the end of 2007, and we still lack a commonly accepted term for the current decade. Very often English speakers deal with this quandary by employing the strategy of ‘no-naming’ (a term that sociolinguists use to describe the avoidance of address terms when one is unsure what to call one’s interlocutor). You can hear this kind of no-naming when a radio station announces that it plays ‘hits from the ’80s, ’90s … and today!’ But that’s hardly a satisfying solution. Surely we can do better in the next two years before the decade runs out?”

 

Zimmer ran down several suggested monikers for the decade, which included everything from the 2000s to names like the nillies, the deccies, the double zeroes, the oh-ohs, and the pre-teens.

 

But none of those terms made the jump into common usage. For whatever reason, it seems like the aughts—archaic or not—gets the job done. Though, as staff writer Rebecca Mead wrote at The New Yorker in 2009, it wasn’t a perfect solution:

 

[T]he adoption of ‘the aughts’ as the decade’s name only accelerates the almost complete obsolescence of the actual English word ‘aught,’ a concise and poetic near-synonym for ‘anything’ that has for centuries well served writers, including Shakespeare ... To call the decade ‘the aughts’ is a compromise that pleases no one, and that has more than a whiff of resigned settling about it.”

 

 

Source: Why Do We Call the 2000s “the Aughts”?


Fact of the Day - DIRECTIONAL SYSTEMS


Did you know.... Not everyone gives directions the way you do—in fact, the way people tell others how to get where they want to go can vary by city, town, and culture. Some of these directional systems might just change how you navigate the world.

 

Imagine telling someone there’s an ant on their southeast leg or giving directions by dancing a figure-eight. While these scenarios might sound absurd, they’re actually examples of real navigational systems used around the world. In an age where we’re increasingly reliant on GPS and smartphone maps, it’s easy to forget that humans (and other species) have been finding their way for millennia using incredibly diverse and often ingenious methods. Let’s take a journey through the world of guiding systems—no satellite signal required.

 

1. The Bees’ Waggle Dance

 

Honeybees have perfected a navigation system that puts us and our smartphones to shame. When a forager bee discovers a prime food source, it returns to the pitch-dark interior of the hive and performs a “waggle dance,” wiggling its abdomen from side to side while vibrating its wings. The hive becomes a living map, and the bee a dancing cartographer. The dance—which was first decoded by Karl von Frisch in the 1940s—provides directions relative to the position of the hive.

 

Here’s the kicker: The angle of the dance relative to vertical indicates the direction of the food source in relation to the sun, while the duration of the central run signals the distance. It’s as if the bee is drawing a map with its body. Even more impressive, bees adjust their dances to account for the sun’s movement over time, even on overcast days—demonstrating an understanding of celestial mechanics that would make Galileo proud. Move the hive, though, and they will need some time to reorient themselves.
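To make the encoding concrete, here is a minimal toy decoder for the scheme described above; the one-second-per-kilometer distance calibration and the function name are illustrative assumptions (real calibrations vary by colony and species), not details from the study.

```python
def decode_waggle(dance_angle_deg, waggle_duration_s, sun_azimuth_deg, seconds_per_km=1.0):
    """Toy decoder for the waggle dance as described above.

    dance_angle_deg:   angle of the waggle run, measured clockwise from vertical
                       (straight up on the comb meaning "toward the sun")
    waggle_duration_s: length of the central waggle run, in seconds
    sun_azimuth_deg:   compass bearing of the sun when the dance is performed
    seconds_per_km:    assumed distance calibration (varies in real bees)
    """
    food_bearing = (sun_azimuth_deg + dance_angle_deg) % 360
    distance_km = waggle_duration_s / seconds_per_km
    return food_bearing, distance_km

# A 2-second run angled 40 degrees clockwise from vertical while the sun sits at 120 degrees:
print(decode_waggle(40, 2.0, 120))   # -> (160.0, 2.0): food at bearing 160 degrees, ~2 km out
```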

 

2. Bali’s Mountain–Sea Axis
In Bali, you won't hear locals talking about cardinal directions like north and south. Instead, their world revolves around a geocentric directional system—one based on topography and landmarks. The primary directions are kaja (towards the mountain, typically the central volcano Gunung Agung) and kelod (towards the sea). This system is rounded out with kauh (clockwise around the shore) and kangin (counterclockwise).

 

The historical roots of this directional system lie in the flow of water through Bali's terraced rice paddies. Water, essential for rice cultivation, naturally flows from the mountainous regions towards the sea. This geographical reality has shaped not only agricultural practices but also the Balinese perception of space and direction.

 

Balinese homes and temples are also oriented with this system. The most sacred area, utama, of the home is located in the kaja-kangin (mountain-east) direction, where the family temple is typically placed. The least sacred area, nista, is in the kelod-kauh (sea-west) direction, often housing the toilet and animal pens.

 

3. The Guugu Yimithirr’s Absolute Direction
The Guugu Yimithirr of northern Queensland, Australia, are so reliant on cardinal directions that their language lacks words for left and right. Instead, they use north, south, east, and west for everything, from describing the location of a nearby object to giving complex directions across long distances.

 

For instance, a Guugu Yimithirr speaker might say, “There’s an ant on your southeast leg” instead of “There’s an ant on your left leg.” Or when asking someone to move in a certain direction, they might say, “Can you move a bit to the north-northeast?” rather than “Can you move a bit to your left?” This extends to larger-scale directions too. To describe a journey, they might say, “We traveled north for two days, then northwest for another day, crossing two rivers that flowed southwest.”

 

Speakers maintain a constant awareness of cardinal directions, developing an internal compass that’s always on. Research suggests this unique feature enhances their spatial memory and navigation skills. In experiments, Guugu Yimithirr speakers have shown an uncanny ability to point accurately to distant locations, even in unfamiliar environments or inside buildings, showcasing how deeply this directional thinking is ingrained in their cognitive processes.

 

4. Southern Ontario’s Lake-Oriented “North–South”


To the north of Lake Ontario, locals have thrown conventional compass directions out the window in favor of a system that would make cartographers scratch their heads:  South often means “towards the lake,” even when you’re actually heading more east than south. This quirky orientation is most noticeable on major roads like Highway 410, which runs “north–south” in local parlance but actually travels northwest-southeast on a map.

 

Residents have internalized this lake-based orientation so deeply that it affects their perception of nearby cities. Many who live in Mississauga describe Brampton as being northeast of their city, based on the drive “east” to Highway 410 “northbound,” which deposits them in Brampton. In reality, Brampton lies northwest of Mississauga—a fact that often surprises locals (including the author of this piece). This quirky sense of direction isn’t unique to Mississauga; similar systems exist around Lake Ontario. In New York City and Montreal, for instance, residents orient themselves based on the layout of their local rivers rather than true compass directions.

 

5. The Polynesian Star Compass

 

For Polynesian navigators, the stars became the sea, and the sea became their road. At the heart of traditional Polynesian navigation is the star compass, a mental construct that divides the horizon into 32 houses, each associated with the rising or setting point of a particular celestial body. This map of the heavens allowed navigators to maintain their course across thousands of miles of open ocean in an age long before the invention of modern instruments.
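As a rough illustration of how a horizon divided into 32 equal houses carves up bearings (a sketch of the idea only, not a reconstruction of any particular navigator's compass; the centering convention is an assumption):

```python
HOUSES = 32
HOUSE_WIDTH = 360 / HOUSES          # 11.25 degrees of horizon per house

def house_of(bearing_deg: float) -> int:
    """Return the index (0-31) of the horizon house containing a bearing,
    with house 0 centered on due north (an assumed convention)."""
    return int(((bearing_deg + HOUSE_WIDTH / 2) % 360) // HOUSE_WIDTH)

print(house_of(0))     # 0  -> due north
print(house_of(90))    # 8  -> due east
print(house_of(186))   # 17 -> a little past due south, toward the west
```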

 

But stars were just the beginning. As anthropologist Wade Davis has documented, Polynesian wayfinders read a complex set of oceanic signs: the behavior of sea life, the color of the sea and sky, even the taste of the water. They could detect the presence of distant atolls by observing how cloud formations and wave patterns changed due to unseen islands and feel the way the ocean swells were reflected or refracted by distant land masses, sensing islands beyond the horizon. This holistic system of navigation, passed down through generations, enabled Polynesians to undertake deliberate two-way voyages across the vast Pacific, populating islands scattered across an area of 10 million square miles.

 

Source: 5 of the World’s Most Interesting Directional Systems


Fact of the Day - CROCODILES


Did you know.... The jaws of a crocodile are an amazing specimen of evolution. With a second jaw joint unlike anything found in mammals, a crocodile can spread the force of its tremendous bite throughout its mouth. In fact, crocodiles have the most powerful chomp in the animal kingdom, at 3,700 pounds per square inch for a saltwater crocodile — 30 times the force of a human bite. But that’s not the only interesting thing about a crocodile’s mouth: Their tongues are incapable of getting between those devastating jaws thanks to being permanently rooted to the floor of their mouths. A crocodile’s tongue is also held in place by a membrane attached to the roof in the back of the mouth, which keeps the throat closed when the animal is submerged.

 

A crocodile’s immobile mouth muscle isn’t a new trait — its most famous ancient relative, the Tyrannosaurus rex, also couldn’t move its tongue (a fact Jurassic Park got very wrong). Researchers in 2018 compared the T. rex’s hyoid bones, the bones responsible for supporting the tongue, to those of modern birds and alligators, and found they exhibited tongue inhibition like the kind seen in modern crocodilians. The king of dinosaurs likely had an immovable tongue for similar reasons. With a bite that delivered 12,800 pounds of force per square inch — four times that of even the crocodile — T. rex biology made sure to keep crucial body parts (i.e., the tongue) out of the way of the most powerful bite to ever walk the Earth.

 

Crocodiles actually do cry “crocodile tears.”
When someone is feigning sadness, they’re sometimes said to be “crying crocodile tears.” The phrase, which links crocodiles to their often teary-eyed displays, has appeared in literature for centuries. One of its earliest mentions appears in The Voyage and Travels of Sir John Mandeville, published in the 14th century, which says, “these serpents slay men, and they eat them weeping.” Even William Shakespeare makes note of crocodile tears in Othello. Crocodiles do “cry,” but it’s mainly to keep their eyes lubricated if they’ve been out of water for long periods. In 2007, a zoologist from the University of Florida also proved that crocodiles weep when snacking, but theorized that the tears come from forced airflow (from a croc’s copious hissing and huffing), which in turn affects the reptile’s tear glands.

 

 

Source: Crocodiles can’t stick out their tongues.


Fact of the Day - CONVERTIBLES


Did you know... Fewer Americans are enjoying the sunshine and fresh air of open-top cars. But for some, the shift away from convertibles signifies more than just changing tastes.

 

According to new car sales and registration data, it’s clear that the golden age of the convertible is over—for now, at least. The car’s decline has been fairly constant since the convertible boom in the 2000s, when it seemed like everyone wanted to feel the fresh air tousling their crimped hair. Now, fewer than 100,000 convertibles are sold in the U.S. annually, with the style accounting for only 0.6 percent of new car registrations between March 2023 and February 2024 compared to 2 percent in the mid-2000s.

 

We’ve also lost some of the most iconic models of the period. Of the eight most popular convertibles in 2001, four have been discontinued as of 2023. The survivors are the Ford Mustang, the Mazda Miata, the Chevrolet Corvette, and the Mercedes-Benz SL. Only the Corvette (marginally) increased its sales; the other three sold less than half of what they did 20 years ago.

 

Even the 2000s, however, don’t compare to the true heyday of the convertible back in the 1960s. That’s when the convertible became more than just a car; it became a symbol. They were all over movies in the '60s and '70s, when all directors needed to do to convey a sense of freedom and adventure was have characters cruise around in one with their sunglasses on. The connection between open-top vehicles and a free-spirited sensibility dates back even further. A car dealer was quoted in a 1931 issue of the Chattanooga Daily Times saying: “The utility of the conventional closed car, combined with the smartness and freedom of the open car, provides a dual purpose unit with every essential of fine motor car transportation.” For decades to follow, the exhilarating feeling of driving a car with the top down cast a spell over American motorists.

 

So why has the convertible fallen out of fashion? The simple answer is that people stopped buying them. Some prevailing theories for the decreased demand include economic hardships since the 2008 financial crisis and changes in car trends among the wealthy. Nowadays, affluent car owners are more likely to have an ultra-efficient, potentially electric car rather than a flashy one.

 

The car's reputation has also changed with average Americans. While convertibles were once seen as “sporty,” trucks and SUVs have largely taken their place in that niche. Their ruggedness and roominess have propelled them to the top of contemporary car sales, especially within the last 10 years. In 2014, 38.6 percent of new cars registered were SUVs. In 2023, that same metric was 59.7 percent.

 

To some, however, the fall of the convertible isn’t just about finances or fashion. As Mark Dent theorizes for The Hustle, it seems to signal a larger shift, one where the personal vehicle has become more of a place to hide away than a place to experience the world. It may reflect our increasingly isolated, digital lives.

 

But like clothing, car trends tend to be cyclical. Just as the 2000s convertible craze was a resurgence of a ‘60s trend, there’s a chance convertibles will make their way back onto highways soon enough.

 

Source: Why Are Convertible Cars Disappearing From American Roads?


Fact of the Day - BORN A YEAR OLD?


Did you know... In many countries, a baby’s first birthday marks a joyous milestone for parents, honoring the many months of sleepless nights and hard work involved in welcoming a new family member. But in some places — like South Korea — babies are already considered 1 year old at birth. Korean culture calculates age in three different ways, and the oldest and most traditional way (often called “Korean age”) may have gotten its start by accounting for the time spent in utero, rounding up a nine-month gestation to a full year.

 

Under this measurement, everyone gains another year of age on January 1, regardless of their actual birth date — meaning it’s possible for a baby born on December 31 to turn 2 years old the following day. Yet individual birthdays are still recorded and celebrated; in fact, South Korea has used the “international age” system that counts age by date of birth for medical and administrative purposes since 1962. A third age-counting method acts as a compromise between accuracy and culture: Babies are born at age 0, but gain a year on New Year’s Day. 
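To make the three counting systems concrete, here is a minimal sketch in code; the function names are illustrative labels, not official terminology.

```python
from datetime import date

def international_age(birth: date, today: date) -> int:
    """Starts at 0 and increases by one on each birthday."""
    had_birthday = (today.month, today.day) >= (birth.month, birth.day)
    return today.year - birth.year - (0 if had_birthday else 1)

def traditional_korean_age(birth: date, today: date) -> int:
    """Starts at 1 at birth; everyone gains a year each January 1."""
    return today.year - birth.year + 1

def year_counting_age(birth: date, today: date) -> int:
    """Compromise system: starts at 0, gains a year each January 1."""
    return today.year - birth.year

baby, next_day = date(2023, 12, 31), date(2024, 1, 1)
print(international_age(baby, next_day))       # 0
print(traditional_korean_age(baby, next_day))  # 2 -- the "2 years old the next day" case
print(year_counting_age(baby, next_day))       # 1
```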

 

Knowing someone’s age is culturally important in Korea; it’s tied to language, impacting how people address their elders and interact on social occasions. However, the traditional method of determining age does cause some confusion when it comes to administering medications, vaccinations, and health care procedures that are determined by one’s years, and has also caused issues with legal disputes. In December 2022, the South Korean government passed laws that standardized the use of international age, meaning many Koreans will technically become one to two years younger.

 

In Korean culture, blood types are used to determine compatibility.
Knowing your blood type is just as important as knowing your age in South Korea, where many people believe it can make or break a relationship. For nearly 100 years, Koreans have associated personality traits with blood types in the same way believers of astrology use birth dates to understand someone’s identity. People with Type A blood supposedly have a hard time trusting others but are highly creative, while Type Bs are known for being passionate and independent. People with AB blood types are categorized as rational introverts, while Type Os are often considered natural leaders. Many scientists say there’s no known link between a person’s blood type and their personality, though the idea has taken hold in Korean pop culture, featured as a plot point in books and movies.

 

 

Source: In Korea, babies are considered 1 year old at birth.


Fact of the Day - 2024 ELECTRIC VEHICLES ARE NOTHING NEW


Did you know.... For centuries, getting around by horse and cart was the standard mode of transportation. By the 1800s, however, these hay-powered haulers were causing problems on busy city streets. As more people moved into cities, the number of horses dramatically increased, and with so many equines on the roads — New York City had around 150,000 horses in 1890 — public health concerns emerged over disease and mountains of manure. Horse travel, frankly put, was dirty compared to getting around by horseless carriage, aka the first electric vehicles. Marketed as clean, quiet, and easy to drive, early electric cars, which resembled traditional carriages, became so popular that by 1900 they accounted for around one-third of all automotive vehicles on roadways.

 

The earliest known full-sized electric car was designed by Robert Anderson, a Scottish inventor who built his version in the 1830s, though that car (and many of its successors) didn’t go very far; at the time, batteries were rudimentary and couldn’t be recharged. It took about three decades for electric car batteries to improve, and starting in 1881, battery-operated buses began ferrying passengers in Paris, Berlin, London, and New York. A few years later, Iowa chemist William Morrison applied for a patent for his electric carriage, which could travel around 50 miles on one charge at a top speed of 20 miles per hour. By 1897, the top-selling car in the U.S. was powered by battery, though electric vehicles would hold the market for a relatively short time. By 1913, manufacturer Henry Ford had fine-tuned the mass production of gas-powered cars, dropping their price and helping to usher in a new era of private transportation.

 

Henry Ford’s wife, Clara, preferred driving electric cars.
While anyone of means could purchase an electric car at the turn of the 20th century, many models were particularly advertised to women as “ladies’ cars,” tied to the belief (however offensive) that they were easier to drive than steam- and gas-powered alternatives. Early advertisements appealed to social norms of the time, suggesting that women could attend to their errands and social events without dirtying their attire. Ads had an element of truth — electric cars didn’t produce fumes and were quieter than gas-powered vehicles. That’s part of the reason even Henry Ford’s wife, Clara, preferred to drive one. (Clara set about her business in a Detroit Electric car, and purchased a new model every two years.) Despite the gendered advertising, electric vehicles did offer women the freedom to travel without anyone’s help, and many high-profile women carried keys to their own battery-powered vehicles, including five first ladies: Helen Taft, Ellen Wilson, Edith Wilson, Florence Harding, and Grace Coolidge.

 

Source: In 1900, about a third of vehicles were electric.


Fact of the Day - OUROBOROS


Did you know.... The serpentine symbol has represented the eternal cycle of life for thousands of years.

 

The ouroboros is an ancient symbol of a serpent consuming its own tail, seen across multiple cultures and time periods. Its circularity represents eternity and the cycle of birth, life, and death, while the word ouroboros (pronounced aw-ro-BAW-roz) comes from Greek and means “devouring [its] tail.” It was often prefixed by the word drákōn, which can be interpreted as either “serpent” or “dragon,” and so visual representations of the ouroboros have also changed between the two.

 

Depictions of the symbol have been found in ancient Egypt dating to around 1300 BCE, with the earliest known inscription of an ouroboros discovered on a gold shrine in Tutankhamun’s tomb.

 


 

In King Tut’s world, the ouroboros was linked to the annual flooding of the Nile that brought essential water to the crops alongside the river before receding. It was also associated with the daily passage of the sun across the sky. The Egyptians believed the sun god Ra (or Re) carried the sun on his barge across the sky each day before being consumed by his mother, Nut, every evening, and then being reborn the next day. This circularity was represented by the symbol of the serpent consuming its own tail in a never-ending cycle and it was used to adorn many tombs and monuments across ancient Egypt.

 

Egypt was not the only place where the ouroboros had mythological power. Representations of the ouroboros reflect the Norse concept of the cosmos. An enormous serpent named Jörmungandr was said to encircle the entire world, symbolizing the infinite loop of creation and destruction. The serpent is both terrible and protective at once, representing the duality of the human condition and the idea that from every ending comes a new beginning.

 

Similarly, in Hindu mythology, the great serpent Shesha is coiled around the cosmos. Shesha existed before the universe was created and will exist beyond its destruction, the myths contend, signifying the endless loop of existence. In these two mythologies the ouroboros encourages a circular, rather than linear, conception of time.

 

Allusions in Alchemy
Alchemists in the Hellenistic world adopted the symbol from ancient Egypt. A diagram of a black and white ouroboros appears in an ancient scroll called Cleopatra’s Chrysopoeia (chrysopoeia means “gold-making”). This Cleopatra was not the same as the queen of Egypt who died in 30 BCE, but a leading alchemist in Alexandria during the 3rd century CE. Cleopatra’s Chrysopoeia has been described by historians as one of the earliest science books authored by a woman, and it contains philosophical musings alongside alchemical experiments for turning common metals into gold. In this and other alchemical books, the ouroboros was used to represent eternity, shifting its meaning away from the original Egyptian link to the cycles of the Nile and the sun, and toward its more modern connotations.

 


 

Renaissance alchemists adapted the ouroboros to their quest for the magnum opus, or “great work,” such as securing immortality or transforming lead into gold, achieved through practical experiments and philosophical debates. German engraver Lucas Jennis included an iconic image of an ouroboros in his 1625 work De Lapide Philosophico, depicting it as a wyvern (a mythical winged reptile) or dragon consuming its own tail.

 

The book contains 15 emblems that communicate the philosophical underpinnings of alchemy. The first five engravings show different versions of two competing impulses thought to be at work inside all people, for example a wild wolf and a tamed dog fighting. This concept is followed by the depiction of the ouroboros representing the sublimation of these impulses.

 

Balancing the Human and Divine
From the 2nd century CE, Greco-Roman Gnostics used the ouroboros to symbolize the tension between the divine and earthly aspects of humankind. And for them, the image of the serpent eating its own tail represents how these two sides can be balanced and unified. Gnostics believed that humans each held a tiny part of God, often represented as a divine spark, inside themselves. This reading equates the snake with humanity and represents the contrary forces of the divine and the human that find harmony in the ouroboros.

 


 

The ouroboros maintained its hold on science and psychology well into the modern era. German chemist August Kekulé identified the ring-shaped structure of the compound benzene after dreaming of a serpent consuming its own tail in 1865. Psychiatrist Carl Jung conceptualized it as an archetype of human character in which we constantly seek to consume ourselves and be reborn.

 

The enduring symbolism of the ouroboros has ensured its longevity. It is found in a wide variety of visual arts, from the 19th-century funerary monument of Archduchess Maria Christina of Austria to artist Salvador Dali’s 1976 artist’s book Alchimie des Philosophes, which features an ouroboros cut into many pieces but maintaining its circularity. 

 

Today the ouroboros is a popular choice as a tattoo, perhaps alluding to the multiple meanings that people find in this ancient symbol. It reminds us of the endless cycle of life and death—and the possibility of rebirth.

 

Source: Ouroboros: The Origins and Meaning of the Snake Eating its Tail


Fact of the Day - POPCORN, HOW HIGH?


Did you know.... Popping an afternoon snack of popcorn in the microwave generally isn’t a messy affair, considering most popcorn cooking is contained in a bag. But if it weren’t, you might have to watch out for flying kernels, since popcorn can pop as high as 3 feet while it transforms from kernel to puff. However, the tiny grains don’t just fly straight skyward as they expand; high-speed recordings of popcorn as it cooks show that the kernels actually flip like a high-flying gymnast, thanks to starches that push off a cooking surface and propel the corn into the air.
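For a feel of the speeds involved, treating a popping kernel as a simple projectile gives the minimum launch speed needed to coast up 3 feet; this back-of-the-envelope sketch ignores air drag and the kernel's tumbling, and is an illustration rather than a claim from the article.

```python
import math

g = 9.81                     # gravitational acceleration, m/s^2
h = 3 * 0.3048               # 3 feet expressed in meters (~0.91 m)
v = math.sqrt(2 * g * h)     # v = sqrt(2gh), the no-drag launch speed to reach height h
print(f"{v:.1f} m/s (about {v * 2.237:.0f} mph)")   # ~4.2 m/s, roughly 9 mph
```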

 

The way popcorn transforms from a hard nugget to a soft and springy morsel can seem like magic, except scientists say it’s really just a trick caused by heat and pressure. Each kernel has three parts: the germ (seed) found deep within the shell, the endosperm (a starch section used to nourish the germ if planted), and the pericarp (aka the hard exterior). Moisture and starch are also packed into each tiny kernel; when heated, that microscopic amount of water creates pressurized steam. By the time a popcorn kernel reaches 350 degrees Fahrenheit, the pressure is too much to contain and the pericarp explodes, causing the starchy endosperm to expand outward. When the process is finished, the resulting popcorn has puffed up to 40 times its original size.

 

While the popcorn industry strives to get 98% popability from each bag of kernels, there are likely still going to be duds at the bottom of the microwave bag. In those cases, it’s likely the pericarp was cracked or the kernel didn’t have enough internal moisture, both of which prevent any pressure buildup — which means that no amount of extra microwaving will give you a few more bites.

 

Popcorn pops into two distinct shapes.
When popcorn is all lumped together in a bowl, it just looks like… popcorn. But an up-close inspection shows that kernels actually pop into one of two shapes, transforming into “butterflies” and “snowflakes” (winged, multifaceted shapes) or “mushrooms” (rounded puffs). Butterflies occur when the popped kernel turns inside out, while mushrooms are created when the kernel’s endosperm expands instead of flipping. Generally, mushrooms are sturdier and can withstand the additional cooking process to become caramel or kettle corn. Whether your bowl of popcorn gets more mushrooms or butterflies mostly depends on factors uncontrollable from your kitchen, like the popcorn plant’s genetics or how much water the plant received while it was growing in the field.

 

 

Source: Popcorn can pop up to 3 feet into the air.



Fact of the Day - TORNADO VS. FUNNEL CLOUD


Did you know.... The difference between tornadoes and funnel clouds lies in whether the clouds are touching the ground.

 

If you ever come face-to-face with a whirlwind, the proper name for it will likely be the least of your concerns. The confusion between tornadoes and funnel clouds is widespread, causing people to use the words interchangeably. The mix-up is understandable since they act similarly—but there are key differences between the weather phenomena. In short, the difference between funnel clouds and tornadoes lies in their positioning.

 

Funnel clouds are spinning columns of air that don’t touch the ground. All it takes is one intense thunderstorm to cause a funnel cloud. When wind starts blowing at varying speeds and directions, also known as wind shear, the air becomes unstable and a storm begins. Warm air rises while cool air, rain, and hail fall, resulting in rolling currents in the clouds. As the storm brews, it can become vertical while suspended in the air, forming a funnel cloud. These whirlwinds usually precede tornadoes and can dissipate within minutes of appearing; they become tornadoes once they reach the Earth’s surface and wreak havoc.

 

Of the many forms of natural disasters, tornadoes are among the most destructive. Although most don’t last long and only span a few yards wide, the worst of them can reach wind speeds of more than 250 miles per hour, destroying a pathway that is up to 50 miles long and a mile wide.

 

Tornadoes occur primarily in North America, with Texas getting the brunt of the natural disaster at an average of 120 yearly. In fact, parts of Texas, Oklahoma, Kansas, South Dakota, Louisiana, Iowa, Nebraska, and Colorado make up what's known as Tornado Alley. The potential for twisters in this area is exceptionally high compared to other parts of the U.S. There, masses of warm, moist air from the Gulf of Mexico, cold, dry air from the Rockies and Canada, and warm, dry air from the Southwest are likely to collide, creating the perfect breeding ground for tornadoes.  

 

Antarctica is the only continent that hasn’t been impacted by twisters. According to the National Centers for Environmental Information, seeing a tornado in the area is unlikely due to the continent’s cold climate and dry air. 

 

 

Source: Tornado vs. Funnel Cloud: What's the Difference?


Fact of the Day - THE EARTH SHAKES?


Did you know... Like a lot of strange happenings, it was first noticed in the 1960s: a small seismic pulse, large enough to register on seismological instruments but small enough to go otherwise unnoticed, occurring every 26 seconds. Jack Oliver, a researcher at the Lamont-Doherty Geological Observatory, documented the “microseism” and sussed out that it was emanating from somewhere “in the southern or equatorial Atlantic Ocean.” Not until 2005 was it determined that the pulse’s true origin was in the Gulf of Guinea, just off Africa’s western coast, but to this day, scientists still don’t know something just as important — why it’s happening in the first place.

 

There are theories, of course, ranging from volcanic activity to waves, but still no consensus. There does happen to be a volcano on the island of São Tomé in the Gulf of Guinea near the pulse’s origin point, not to mention another microseism linked to the volcano Mount Aso in Japan, which has made that particular explanation more popular in recent years. Though there’s no way of knowing when (or even if) we’ll learn the why of this phenomenon, one thing’s for sure: better a microseism than a macroseism.

 

California isn’t the most earthquake-prone state.
That would be Alaska, which isn’t just the most earthquake-prone state in the country — it’s one of the most seismically active areas in the world, with 11% of all earthquakes occurring there. That’s because Alaska is part of the Ring of Fire, a nearly 25,000-mile-long area along the Pacific Ocean, characterized by volcanic and seismic activity. The second-largest earthquake ever recorded (a staggering 9.2 on the Richter scale) took place in the Prince William Sound region there on March 27, 1964, lasting about 4.5 minutes and causing a tsunami that was felt as far away as California. Beyond that, three of the eight largest recorded earthquakes in the world have also been in Alaska, as were seven of the 10 largest in America. It has experienced an average of one magnitude 7 to 8 earthquake every year since 1900 and one “great” earthquake (magnitude 8 or higher) every 13 years.

 

 

Source: The Earth shakes every 26 seconds, and scientists aren’t sure why.


Fact of the Day - YARD SALE


Did you know... Yard sales are an American tradition — especially along U.S. Route 127. It’s there that you can find the famous 127 Yard Sale, an annual event on the first Thursday through Sunday in August featuring thousands of vendors on front lawns and in church parking lots in Alabama, Georgia, Tennessee, Kentucky, Ohio, and Michigan. All in all, the “world’s longest yard sale” covers 690 miles, starting near Addison, Michigan, and ending in Gadsden, Alabama. The inaugural event took place in 1987, when a Tennessee county executive named Mike Walker conceived of the idea to encourage travelers to bypass the big interstate highways in favor of experiencing life in more rural communities.

 

Yard sales aren’t just a great way for vendors to declutter, though — they can also be a literal treasure trove. In 2013, a seemingly nondescript ceramic bowl that had been purchased at a garage sale for $3 in 2007 sold at Sotheby’s for $2.2 million; it turned out to be a 1,000-year-old piece of pottery from the Northern Song dynasty. Even the Declaration of Independence has found its way to the bargain bin — a first printing was purchased at a flea market in 1991 because the buyer wanted the picture frame. It later went on to sell at auction for $2,420,000.

 

Oprah Winfrey hosted a “yard sale” that raised over $600,000 for charity.
In 2013, Oprah Winfrey decided to declutter her various homes and hold a massive auction-style yard sale that she called “the biggest yard sale ever” to support one of her charities, the Leadership Academy for Girls in South Africa. The sale included items from her Montecito mansion and three additional properties in Santa Barbara. The value of each item was, of course, boosted through its association with Oprah, including a nondescript teapot worth less than $100 that ultimately went for over $1,000. That’s not to say all the items were so mundane — a set of six 18th-century Louis XVI armchairs fetched $60,000. With that major sale, plus several velvet-clad sofas that sold for $8,750, a print of one of Oprah’s “TV Guide” covers that raked in $3,000, and many more household items, the event — held at the Santa Barbara Polo and Racquet Club — raised more than $600,000 in all.

 

 

Source: There’s an annual yard sale so large it runs through six states.


Fact of the Day - GOLD MEDAL


Did you know.... The amount of gold in Olympic medals is regulated, and there’s a lot less than there used to be.

 

The prizes awarded at the Olympics have varied over their long history. Ancient Greek competitors were given an olive branch from a wild olive tree that grew at Olympia (and some money upon returning home a champion, too). When the first modern Olympic Games organized by the International Olympic Committee were held in 1896 in Athens, winners got a silver medal and an olive branch, and runners-up received a bronze medal and a laurel branch.

 

At the 1900 Paris Games, some athletes got silver or bronze medals, but the majority received cups or other trophies. Gold medals made from solid gold were introduced at the 1904 St. Louis Games, and the first time medals were awarded to the top three placing athletes in the gold-silver-bronze order was four years later in London.

 

 

 

The 1912 Stockholm Games were the last time solid gold medals were awarded. These days, Olympic rules allow gold medals to be made mostly of .925-grade (a.k.a. sterling) silver coated in 6 grams of 24-karat gold. The second place silver medals must contain silver of a similar grade. Beyond that, the specific composition of the medals, and their design, is largely left to the host city’s organizing committee.

 

Going for (1 Percent) Gold
For this year’s Paris Games, the gold medals are made of 523 grams of silver gilded with 6 grams of pure gold. The silver medals contain 525 grams of silver, and the bronze medals are an alloy of copper, tin, and zinc.
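
For the curious, here’s a quick back-of-the-envelope check of the heading’s “1 percent” figure, sketched in Python and assuming (as a simplification) that the gold medal’s mass is just the 523 grams of silver plus the 6 grams of gold quoted above:

# Rough check of the "1 percent gold" claim for the Paris 2024 gold medal.
# Assumption: the medal's total mass is just the quoted silver plus the gold gilding.
gold_g = 6                      # grams of pure gold gilding
silver_g = 523                  # grams of silver in the medal body
total_g = gold_g + silver_g     # 529 grams in total
gold_share = gold_g / total_g   # about 0.011
print(f"Gold makes up about {gold_share:.1%} of the medal's mass")

Run as-is, the sketch reports that gold accounts for roughly 1.1% of the medal’s mass, which is where the heading’s figure comes from.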

 

The medals have a value beyond the worth of their precious metal content, though. They’re pieces of history, and can command high asking prices on the market. First-place silver medals from the first modern Games in 1896 in Athens have sold at auction for $180,000 and $112,000 in the past few years. Medals won by famous athletes go for much more. In 2013, one of the four gold medals won by track and field star Jesse Owens at the 1936 Berlin Games was auctioned for nearly $1.5 million.

 

Ultimately, the gold, silver, and bronze are worth much more than their metallic contents. A story from ancient Greece, back when athletes received only a humble olive branch, says a lot about what these prizes mean. In The Histories, Herodotus writes about a group of Arcadian deserters who went to Persia looking for work. The Persians asked them what the Greeks were up to, and the Arcadians explained that their countrymen were “holding the Olympic festival and viewing sports and horse races.” The Persians asked what prizes were offered to the competitors and the Arcadians explained that the victors received a “crown of olive.”

 

“Then Tigranes son of Artabanus [a Persian regent] uttered a most noble saying,” writes Herodotus. “When he heard that the prize was not money but a crown, he could not hold his peace, but cried, ‘Good heavens, Mardonius [a Persian military commander], what kind of men are these that you have pitted us against? It is not for money they contend but for glory of achievement!’”

 

 

Source: How Much Gold is in a Gold Medal?



Fact of the Day - DOGS CAN SMELL STRESS?


Did you know... A new study shows that dogs sense and react to our scents when we're stressed.

 

Dogs have a fantastic sense of smell: A canine’s nose is 10,000 to 100,000 times more sensitive than a human’s. Now, new research shows that when they sense people’s stress levels with their snouts, they react accordingly.

 

A recent study published in the journal Scientific Reports found that dogs are more likely to act pessimistically after sniffing an anxious person’s sweat. For their research, a team of scientists in the UK asked 11 volunteers to speak publicly and solve math problems in front of others to generate anxiety. In another session, the same participants watched 20-minute videos of relaxing scenery in a calm place.

 

The experimenters tracked each participant’s heart rate. They also collected breath, saliva, and sweat samples to measure cortisol—a hormone associated with stress that shows up in bodily fluids. All volunteers took anxiety questionnaires before and after the study.

 

Meanwhile, 18 dogs were trained to learn that food was always in a bowl in a specific spot. The pets also learned that a bowl in another section of the room never contained food. The researchers then observed how the canines approached new bowls placed between the previous two. The goal was to see if the animals seemed optimistic or pessimistic about the possibility of food in an ambiguous bowl.

 

After smelling sweaty clothing from the anxious volunteers on several separate occasions, the dogs were more likely to approach a new bowl slowly when it was placed closer to the never-rewarded bowl, implying they expected to find no treat inside and were approaching it pessimistically. The dogs’ attitudes didn’t seem to change after they smelled the sweat of relaxed people.

 

The odor of stressed-out sweat may have a bigger influence on a dog’s appetite than on its decision-making abilities. Stress reduces hunger, so the scent of an anxious, sweaty human may make dogs feel so on edge that their appetites shrink. Nonetheless, the research shows that a stressed owner can negatively affect their dog.

 

The fact that dogs can be hypersensitive to their owners’ stress levels has been demonstrated before. A 2019 study investigating the phenomenon asked participants questions about their personality traits, including neuroticism and openness. They were also instructed to complete personality questionnaires for their pets covering factors like excitability, fearfulness, and aggression.

 

The researchers then analyzed the hair cortisol concentrations of 58 dogs and their owners and found that the canines’ cortisol levels were in sync with those of their owners. This implies that dogs are highly empathetic and share their owners’ emotional responses.

 

Source: Dogs Can Smell Our Stress—And It's Contagious


Fact of the Day - PAUSING PREGNANCY?

[Image: tammar wallaby (Macropus eugenii)]

Did you know.... Evolution has devised a mind-boggling number of amazing methods for perpetuating life on Earth. But one of nature’s most impressive tricks is pumping the brakes on pregnancy with a process known as embryonic diapause. This isn’t a rare prenatal feat, either: An estimated 130 mammal species, such as mice and seals, can pause a pregnancy for anywhere from a few days to as many as 11 months, as is the case with the tammar wallaby (Notamacropus eugenii). The pause usually occurs during the blastocyst stage, when an embryo forms in the uterus but doesn’t embed into the uterine wall until conditions are right. 

 

Scientists have identified two reasons why some mammals pause pregnancies. When animals are nursing, a rise in hormones prevents embryos from implanting, which gives the nursing young time to wean off their mother. The second reason is a bit more complicated, but certain animals can pause pregnancies when external conditions — such as a lack of food or harsh temperatures — are not ideal for raising a newborn. Scientists have known about this kind of diapause since at least the 1850s, but are only now beginning to understand its inner workings. In 2020, a study found that inhibiting a catalytic enzyme known as mTOR — which regulates cell proliferation, growth, and protein synthesis, and also senses a cell’s nutrient and energy levels — instigated a metabolic response related to diapause. Scientists are still piecing together exactly why humans, who also have mTOR enzymes, can’t pause pregnancies; understanding how this process works could lead to advancements in stem cell research and cancer treatment.

 

Humans might be born 12 months too early.
Ever wonder why humans are born relatively defenseless compared to other mammals? Some scientists believe a human’s gestation period should be around 21 months — not nine. So what gives? Turns out, a variety of factors might explain why humans are born less developed compared to other mammalian species. The traditional belief is that natural selection favors our big brains and bipedalism to the detriment of a longer gestation. These factors, combined with the small pelvises of people who give birth, create a situation where humans are essentially born prematurely. However, some scientists instead suggest that a person’s metabolism, and the energy demands of pregnancy, might be the reason. Simply put, a human can only spend so much energy daily until they max out. A person will almost always give birth right before reaching that “metabolic danger zone.”

 

 

Source: Some animals can pause their pregnancies.


Fact of the Day - CUTE AS A BUTTON MEANING


Did you know.... For such a small thing, the button has been prolific in terms of vocabulary. The archaic phrase it is in a person’s buttons means “it’s in someone’s capacity to do or achieve something.” To take by the buttons is to accost someone, as if you were grabbing them by their shirt’s buttons. Dash my buttons! is an exclamation of surprise or exasperation. And to have a soul above buttons is to have aspirations beyond what’s expected from lower social classes.

 

Then there’s cute as a button—a perfect idiom for beings and things that are tiny and totally adorable.

 

The Meaning and First Uses of Cute as a Button
According to the Oxford English Dictionary (OED), cute as a button means “extremely attractive; adorable, charming.” The phrase was first recorded in 1913, in a review from the Albuquerque (New Mexico) Morning Journal: “Sam Pickard as ‘Little Lord Fauntleroy’ was ‘cute as a button.’” A use in the Arizona Independent Republic from 1938 shows the expression can work equally well as an adjective: “Dress your darling in cute-as-a-button coats with matching hat or cap.”

 

The expression is probably related to the similar bright as a button, which has been spotted in English since at least the late 1700s. The OED defines this out-of-use idiom as “animated, lively; cheerful; mentally alert, quick-witted.” So a particularly intelligent puppy could be described as bright and cute as a button.

 

Why a Button?
The word button in the sense of “A small disc or knob attached to a garment (or other fabric item) and used either as a fastener by passing it through a buttonhole or as a decoration” came into English from French around 1350, according to the OED. After that, it acquired many other meanings, from the bud on a plant to a nodule on the skin to slang for the penis (or, if plural, the testicles) and police officers.

 

The cutesy sense of button—that is, something small and adorable—has been around since at least the late 1600s. As Archibald Lovell wrote in 1696, “This is such a little Button of a World.” Several examples from the OED refer adoringly to a button of a mouth, a button of a face, or, especially, a little button of a nose. (One less-than-cutesy meaning of button involves the chin: If a fighter hits another on the button, that’s probably a knockout blow.) By the 1770s, the word had evolved to refer to “A bright, cheeky, or cute person, typically a child” (or, in some areas of the U.S., “a person who lacks experience or skill; a novice”). Cute as a button evolved from there.

 

If buttons aren’t your thing, you can also say “cute as a bug in a rug” or “cute as a bug’s ear.” And if cuteness isn’t your thing, remember the original meaning of cute, which came from acute, meaning “sharp-witted.” This sense can still be seen when someone responds to a verbal jab with “Don’t get cute.”

 

 

Source: Why Do We Say Things Are “Cute as a Button”?


Fact of the Day - STUMP SPEECH

[Image: George Caleb Bingham, “Stump Speaking”]

Did you know.... Every election season, U.S. presidential candidates hit the campaign trail to deliver what’s known as a stump speech. So what exactly is it, and why do we call it that?

 

The Origin of Stump Speeches

Back in Revolutionary War–era America, orators in rural communities sometimes stood on actual tree stumps to elevate themselves above listeners. By the early 19th century, the terms stump orator and stump oratory had started appearing in newspapers, and stump speech was in print by 1820. In June of that year, for example, the Knoxville Register mentioned the stump speech of a West Tennessee man running for a seat in the state legislature.

 

“It was proposed, we are informed, in a stump speech delivered by the candidate, with loud exclamations of applause to a number of the electors of the county,” the paper wrote (emphasis theirs), “That if they would elect him he would use his talents and influence to have a law passed laying a tax on the state which should be applied exclusively to paying the debts of all those who are involved.”

 

The passage illustrates what a typical stump speech involved (and still involves): a political candidate telling local people why they should vote for said candidate. Eventually—though it’s hard to say exactly when—stump speeches stopped featuring literal stumps.

 

“[W]e often mount the stump only figuratively: and very good stump-speeches are delivered from a table, a chair, a whiskey barrel, and the like. Sometimes we make our best stump-speeches on horse-back,” Baynard Rush Hall wrote in his account of pioneer life in Indiana for the 1855 edition of The New Purchase. During the climax of one memorable stump speech given from an ox cart, pranksters removed the pins keeping the cart level, causing the speaker to tumble into the dirt.

 

[Image: “The Stump Orator,” 1881]

 

Hall’s book may also shed light on why stump speeches are associated with the United States. Throughout the 19th century (and beyond), as the nation expanded its borders and communities coalesced into new towns and cities, there were more opportunities to run for office. He described the “social state” as “always in ferment; for ever was some election, doing, being done, done or going to be done; and each was as bitterly contested as that of president or governor. … And everybody expected at some time to be candidate for something; or that his uncle would be; or his cousin, or his cousin’s wife’s cousin’s friend would be: so that everybody, and everybody’s relations, and everybody’s relations’ friends, were for ever electioneering.”

 

Not everyone viewed the importance of public speaking in elections as positive (or at least neutral). In an 1850 pamphlet, Scottish philosopher Thomas Carlyle eviscerated the stump orator as a “mouthpiece of Chaos to poor benighted mortals that lend ear to him as to a voice from Cosmos.” Carlyle disputed the correlation between being able to talk about accomplishing things and being able to actually achieve them—and he felt voters were too dazzled by the former to see the difference. Moreover, Carlyle believed that the focus on public speaking prevented the best leaders—in his estimation, doers, not talkers—from even running for office, leaving voters to stack the government with charismatic windbags.

 

“Your poor tenpound franchisers and electoral world generally, in love with eloquent talk, are they the likeliest to discern what man it is that has worlds of silent work in him? No,” Carlyle wrote. “Or is such a man, even if born in the due rank for it, the likeliest to present himself, and court their most sweet voices? Again, no.”

 

[Image: Puck magazine cover, August 5, 1896]

 

But the reality, then and now, is that candidates have to convince people to vote for them, which is hard to do without talking.

 

The State of the Stump Speech
The modern conception of a stump speech isn’t just any speech given to a group of voters. It’s one speech that a candidate travels around repeating to various groups of voters. Naturally, we hear about them most frequently during presidential campaigns, which involve lots of travel and the largest constituency (and which usually get the most attention). While today’s presidential candidates don’t orate atop whiskey barrels or ox carts, that homespun spirit is preserved in some of the locations they choose as campaign stops: churches, union halls, and even barns.

Media coverage often frames a stump speech in terms of its recurring themes. In November 2020, for example, the Pittsburgh Post-Gazette mentioned that Joe Biden “gave some of his standard economy-focused stump speech.” Earlier that year, The Buffalo News said that Amy Klobuchar’s “entire stump speech [was] littered with appeals to the heartland.” During the 2016 campaign season, the same paper noted how John Kasich’s stump speech almost never failed to cover “his work to produce a budget surplus” during his time on the House Budget Committee. “He brings a national-debt clock to town halls,” the article said.

 

However, candidates modify and refine their stump speeches on the campaign trail—not unlike how stand-up comedians workshop bits while touring. In 2008, The Washington Post published an anatomy of Barack Obama’s 45-minute stump speech (transcribed from one appearance in Boise, Idaho), detailing what points were added when and even which parts garnered applause or laughter. 

 

“Many of the additions are riffs that he’s created in response to criticisms made against him, lines of attack that he absorbs and tries to turn against the opposition,” The Post wrote. After fellow candidate John Edwards accused him of being “too nice a guy” and “too conciliatory” to effect change, for example, Obama made it a selling point in his stump speech, claiming that his willingness to “reach out across the aisle” was a product of his strong principles and clear view of what he was fighting for.

 

 

 

Another tentpole of the stump speech is tailoring it to the audience with a little local color. When Hillary Clinton addressed a crowd at Tampa’s University of South Florida in September 2016, she started with, “I know I’m only the second most exciting thing that’s happened here in the last few days. Your big win to open your football season got some attention.” When Mitt Romney spoke in Bedford, New Hampshire, in December 2011, he thanked people for “coming out on a cold winter night” and mentioned that the state’s ski resorts would probably “start making snow … and get people from Massachusetts across the border to come up and ski.”

 

Generally, stump speakers are always searching for the perfect balance between specificity and universality. You want your audience to feel understood and confident that you’re committed to fixing their issues, but you also want to be broad enough not to alienate voters. So stump speeches can be heavy on the hedging. In 2016, when FiveThirtyEight tasked former Republican speechwriter Barton Swaim and former Democratic speechwriter Jeffrey Nussbaum with writing a completely bipartisan stump speech, they filled it with wording like “We need to start thinking seriously” and “The U.S. will not ignore.” As Swaim pointed out, “to ‘start thinking seriously’ about something isn’t actually to do anything,” and “not to ignore something isn’t necessarily to act.”

 

Source: The Reason Why Some Political Addresses Are Called “Stump Speeches”


Fact of the Day - POP! GOES THE WEASEL


Did you know.... “Pop! Goes the Weasel” is one of the most pervasive kids’ songs ever written: The earworm pops up everywhere from jack-in-the-box toys to Data and Riker’s first encounter in Star Trek: The Next Generation. But not only did the tune likely not start out as a nursery rhyme, the lyrics might not even be about a weasel popping out from a hole in the ground.

 

Make a Song and Dance About It
People have been getting “Pop! Goes the Weasel” stuck in their heads since the early 1850s. Although the exact origin of the song isn’t known, it became a craze in England at the end of 1852, with an ad in The Birmingham Journal promoting dance lessons for the “highly fashionable” song that was “recently introduced at her Majesty’s and the Nobility’s private soirees.” Essentially, “Pop! Goes the Weasel” was the Victorian-era version of Los del Río’s “Macarena,” with the song and dance partly taking off thanks to its association with Queen Victoria.

 

It wasn’t long before the ditty reached American ears, with the song’s sheet music being published in the United States in 1853. That same year, dance teacher Eugene Coulon described it as “an old and a very animated English Dance that has lately been revived among the higher classes of society.” He said it took the form of a “Country dance,” with “Ladies and Gentlemen being placed in lines opposite to each other.” At this point in the song’s history, the only lyrics were “pop goes the weasel,” which was sung when dancers passed under the arms of others.

 

 

 

By October 1854, a song about the song had even been published, with the lyrics speaking of its overwhelming popularity: “Go where you will, you’ll hear it still, all dance Pop goes the Weasel.” Although the original tune only featured one line of lyrics, people soon started writing their own words. In November 1855, it was reported that “almost every species of ribaldry and low wit has been rendered into rhyme to suit it,” but what exactly those rhymes were is unknown. Some of the earliest surviving lyrics come from Charley Twiggs in America, who in 1856 wrote verses such as:

 

“Queen Victoria’s very sick,
Napoleon’s got the measels,
Sebastopol is won at last,
‘Pop goes the Weasel.’

All around the Cobblers house,
The Monkey chased the people,
The Minister kiss’d the Deacons wife,
Pop goes the Weasel.”

 

The second verse has echoes of the version most commonly sung in the U.S. today. It wasn’t until 1917 that mulberry bush started to replace cobblers house and that the monkey chased a weasel, rather than people. The earliest version sung in 1850s England had almost entirely different lyrics (aside from the final line):

 

 

“Up and down the City-road,
In and out the Eagle,
That’s the way the money goes,
Pop goes the weasel.”

 

This is the second verse commonly sung in the UK today, with the modern first verse popping up in print by 1905:

 

“Half a pound of tuppenny rice,
Half a pound of treacle.
Mix it up and make it nice,
Pop goes the weasel.”

 

Weasels and Spinners and Slang, Oh My! 
There has been a lot of speculation about what the lyrics of “Pop! Goes the Weasel” might really mean—here are the most popular theories, including mustelid movements, a yarn-measuring tool, and Cockney rhyming slang.

 

Back in 1856—when the tune still had listeners in its grip—an unnamed writer in Harper’s New Monthly Magazine stated the line was the result of mishearing. They believed it originated with Methodist preacher James Craven, who during a sermon in Virginia said, “Take a kernel of that wheat between your thumb and finger, hold it up, squeeze it, and—pop goes the weevil.”

 

 

 

Perhaps the easiest explanation is that the lyrics are literally about a weasel popping out of a hole. This theory is linked to the dance that accompanied the song, with J. Holden MacMichael writing in a 1905 edition of Notes and Queries that “The weasel is doubtless the dancer, as he or she ‘pops’ through or under the arms of the others in the same sinuous manner as a weasel enters a hole, for it was at this part of the dance that all present used to sing ‘Pop goes the weasel.’”

 

Another theory is that weasel is not referring to the animal, but to a spinner’s weasel. Yarn spun on a spinning wheel could be measured on a weasel, which featured a mechanism that would make a popping sound when the desired length had been reached. This interpretation may have been the inspiration for the textile version of the verse:

 

A penny for a ball of thread,
A farthing for a needle,
That’s the way the money goes,
‘Pop goes the weasel.’

 

The words of the UK version of the rhyme are partly related to London. The Eagle is an old pub—which is still serving pints to this day—just off City Road. The final line is usually attributed to 19th-century slang: pop shop referred to pawnbrokers, so to pop something was to pawn it. There’s less certainty about what weasel means. One suggestion is that it’s a coat, which comes from the Cockney rhyming slang weasel and stoat, but Gary Martin at Phrase Finder disputes this because that phrase wasn’t used until the 1930s. Other suggestions include a purse, silverware, and a tailor’s iron.

 

The lyrics’ meaning may forever remain a mystery. And it doesn’t matter which version of the song you sing, or if you even truly understand what the words are referencing—the tune is likely to get stuck in your head no matter what.

 

Source: ‘Pop! Goes the Weasel’: The Real Meaning Behind the Nursery Rhyme

