
Fact of the Day


DarkRavie


Fact of the Day - PENGUINCUBATOR


Did you know... Sir Allen Lane wanted to bring paperback books to the masses—and he thought a vending machine was the perfect way to do it.

 

Sir Allen Lane was the creator of Penguin Books, which is credited with popularizing high-quality mass-market paperbacks. Paperbacks existed prior to Penguin, but they were often poorly made or had trashy subject matter. Lane changed all that: He published classic literature in paperback form and legitimized the paperback. He also offered them at an affordable price (sixpence per book at launch, or about the same as a pack of cigarettes). According to an archived version of Penguin’s website, it all came about after Lane paid a visit to Agatha Christie:

 

“[H]e found himself on a platform at Exeter station searching its bookstall for something to read on his journey back to London, but discovered only popular magazines and reprints of Victorian novels. Appalled by the selection on offer, Lane decided that good quality contemporary fiction should be made available at an attractive price and sold not just in traditional bookshops, but also in railway stations, tobacconists and chain stores.”

 

One of the ways Lane brought books to non-bookstore locations was the “Penguincubator,” a vending machine for his paperbacks that he invented in 1937. (He may have gotten the idea from the German publisher Reclam, which first made book vending machines in the 1910s.) You can see a photo of the machine here.

 

James Bridle writes at Publishing Perspectives that the first Penguincubator was located outside Henderson’s—a bookshop nicknamed “The Bomb Shop” because it sold radical literature—at 66 Charing Cross Road. This “signaled his intention to take the book beyond the library and the traditional bookstore, into railway stations, chain stores and onto the streets.”

 

Unfortunately, the idea wasn’t exactly a successful one: As one bookseller recounted in The British Book Trade: An Oral History, “it had to be wheeled out and locked at the front of the shop every night, then brought in every morning.  And every morning, apparently, there were letters of complaint shoved under the door: ‘We put a shilling in this machine and no book came out of it.’  It was a complete failure.”

 

While the Penguincubator is no longer around, you can find a Penguin Books Vending Machine in England’s Exeter St Davids Train Station that was installed in 2023 in honor of Lane’s search for a book there all those years ago. According to the city of Exeter, “The machine has proven to be a hit with locals and commuters alike, garnering millions of views thanks to a string of viral social media posts and national press attention that lauded its uniqueness.” And in 2025, the machine “will play host to a curated selection of books from Penguin’s 90 years of publishing success with Exeter City of Literature managing the unique book dispenser’s inventory. Customers can expect to encounter a series of themed books in the machine to celebrate Exeter’s place in the bookish world as one of only 53 UNESCO Cities of Literature.”

 

 

Source: The Penguincubator: The 1937 Vending Machine for Books


Fact of the Day - CELSIUS SCALE


Did you know... On Christmas Day 1741, Anders Celsius, a professor at Uppsala University in Sweden, took the world’s first temperature measurement using degrees Celsius — well, kind of. His scale had one big difference compared to the system we use today: It was backward. Instead of 0 degrees marking the freezing point of water, it instead marked the boiling point, while 100 degrees marked the freezing point. The reason for this arrangement may have been in part to avoid using negative numbers when taking temperature readings. After all, it’s pretty cold in Sweden a majority of the year, and air temperature never gets hot enough to boil water (thank goodness). 
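To picture the flipped scale in action, here is a quick illustrative sketch (in Python; the function is ours, not a historical formula) showing how a reading on Celsius’ original scale maps onto the modern one:

```python
def original_to_modern_celsius(original_reading: float) -> float:
    """Convert a reading on Celsius's original scale (0 = boiling,
    100 = freezing) to the modern orientation."""
    return 100.0 - original_reading

print(original_to_modern_celsius(100))  # freezing point -> 0 degrees C
print(original_to_modern_celsius(0))    # boiling point  -> 100 degrees C
print(original_to_modern_celsius(110))  # a cold Swedish day -> -10 degrees C
```

Note that a frigid day only produces a negative number once the scale is flipped to the modern orientation, which is exactly the kind of reading the reversed arrangement may have been designed to avoid.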

 

Celsius’ scale, then known as the centigrade (or “100-step”) scale, remained this way for the rest of his life, but in 1745 — one year after his death — scientist Carl Linnaeus (of taxonomy fame) ordered a thermometer with the scale adjusted to our modern orientation. Several other scientists also independently reversed the scale. Yet it wasn’t until some two centuries later, in 1948, that the International Bureau of Weights and Measures decided to rename “centigrade” to Celsius, in part to fall in line with the other major temperature scales named after their creators, Daniel Fahrenheit and William Thomson, 1st Baron Kelvin.

 

Although the Swedish scientist didn’t invent, or even use, the precise scale that now bears his name, his groundbreaking work is still worthy of the accolade. Before Celsius, a couple dozen temperature scales were in use throughout the world, and many of them were frustratingly inaccurate and inconsistent (some were based on the melting point of butter, or the internal temperature of certain animals). Celsius’ greatest contribution was devising a system that could accurately capture temperature under a variety of conditions, and his name now graces weather maps around the world (excluding the U.S., of course).

 

There was once such a thing as decimal time.
Today’s second is derived from a sexagesimal system created by the ancient Babylonians, who defined the time unit as one-sixtieth of a minute. Fast-forward to the tail end of the 18th century, and the French Revolution was in a metric frenzy. In 1795, France adopted the gram for weight, the meter for distance, and centigrade (later renamed Celsius) for temperature. However, some of France’s decimal ideas didn’t quite stand the test of, er, time. By national decree in 1793, the French First Republic attempted to create a decimal system for time. This split the day into 10 hours, with each hour lasting 100 minutes, and each minute lasting 100 seconds (and so on). Because there are 86,400 normal seconds in a day, the decimal second was around 13% shorter. Although it was easy to convert among seconds, minutes, and hours, France’s decimal time proved unpopular — after all, many people had perfectly good clocks with 24 hours on them — and the idea was abolished two years later. Since then, a couple of other temporal decimal proposals have been put forward, including watchmaker Swatch’s attempt to redefine the day as 1,000 “.beats” (yes, the period was included) in 1998 in response to the internet’s growing popularity. However, ancient Babylon’s perception of time is likely too ingrained in human culture to change any time soon.
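For anyone curious how the 1793 system maps onto an ordinary clock, here is a small illustrative Python sketch (the function name is ours); it treats the decimal day as 10 × 100 × 100 = 100,000 decimal seconds, which is why each decimal second works out to 86,400 ÷ 100,000 = 0.864 standard seconds, roughly 13.6% shorter.

```python
def to_decimal_time(hours: int, minutes: int, seconds: int) -> tuple[int, int, int]:
    """Convert an ordinary time of day to French Revolutionary decimal time:
    10 decimal hours x 100 decimal minutes x 100 decimal seconds per day."""
    standard_seconds = hours * 3600 + minutes * 60 + seconds
    fraction_of_day = standard_seconds / 86_400
    decimal_seconds = round(fraction_of_day * 100_000)
    decimal_hours, remainder = divmod(decimal_seconds, 10_000)
    decimal_minutes, decimal_secs = divmod(remainder, 100)
    return decimal_hours, decimal_minutes, decimal_secs

print(to_decimal_time(12, 0, 0))   # noon -> (5, 0, 0), halfway through the day
print(to_decimal_time(18, 30, 0))  # 6:30 p.m. -> (7, 70, 83)
```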

 

 

Source: The Celsius scale was originally backward.


Fact of the Day - SHIPWRECKS


Did you know... These are the five deepest shipwrecks ever discovered, including the USS ‘Samuel B. Roberts,’ which went to the depths of the Philippine Trench during the Second World War.

 

In October 1944, during the Battle off Samar—one of four major actions during the Battle of Leyte Gulf in World War II’s Pacific theatre—the USS Samuel B. Roberts found itself in dire straits. 

 

The destroyer escort had only a fraction of the guns and torpedoes carried by the naval warships it accompanied. It stood no chance against the Imperial Japanese naval force, which was desperate to fight off a U.S. invasion of the Philippines at Leyte Gulf. After firing every round of ammunition, smoke shell, and illumination round on board to provide a protective smoke screen for the destroyers, the Sammy B was sunk by a Japanese battleship and disappeared into the depths of the Philippine Trench, dragging around 90 of its 224 crew members with it.

 

Nearly 80 years later, American adventurer Victor Vescovo piloted his deep-sea submersible Limiting Factor in the Philippine Trench and managed to locate the wreck of the Sammy B. The ship, which had broken in two during its long descent to the seafloor, confirmed details about the Battle off Samar that had previously been known only from eyewitness accounts, such as punctures in the stern showing exactly where Japanese shells had fatally struck. 

 

The vessel’s final stand “was just an extraordinary act of heroism,” Vescovo told the BBC following the 2022 discovery. “Those men—on both sides—were fighting to the death.”

 

Under Pressure
Equally impressive is the depth at which the Sammy B settled. It lies at a staggering 22,621 feet—or 4.28 miles—below sea level, where the temperature remains around 32°F and the pressure rises to 5 tons per square inch.
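Those depth and pressure figures check out with some back-of-the-envelope hydrostatics. The sketch below (Python; the seawater density and other constants are approximations we’ve chosen, not values from the article) reproduces both numbers:

```python
# Rough sanity check on the Sammy B's resting depth and the pressure there.
FEET_PER_MILE = 5280
METERS_PER_FOOT = 0.3048
SEAWATER_DENSITY = 1025        # kg/m^3, approximate average
GRAVITY = 9.81                 # m/s^2
PASCALS_PER_PSI = 6894.76
POUNDS_PER_SHORT_TON = 2000

depth_ft = 22_621
print(round(depth_ft / FEET_PER_MILE, 2))        # ~4.28 miles

depth_m = depth_ft * METERS_PER_FOOT             # ~6,895 meters
pressure_pa = SEAWATER_DENSITY * GRAVITY * depth_m
pressure_psi = pressure_pa / PASCALS_PER_PSI
print(round(pressure_psi / POUNDS_PER_SHORT_TON, 1))  # ~5.0 tons per square inch
```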

 


 

Given its resting place in one of the deepest sections of one of the deepest trenches in the world, it should come as little surprise that the Sammy B currently holds the title of the deepest shipwreck ever discovered. It broke the record held by the USS Johnston, a U.S. naval destroyer that sank during the same battle as the Sammy B and in the same deep-sea trench. An expedition aboard the research vessel Petrel, funded by the late Microsoft co-founder and explorer Paul Allen, located the Johnston wreck in 2019, and another expedition led by Vescovo confirmed its identity in 2021.

 

While the Sammy B remains No. 1 for the time being, it’s possible that other shipwrecks from the Second World War reached even greater depths after going under, including the still-unlocated escort carrier USS Gambier Bay and destroyer USS Hoel.

 

5 of the Deepest Shipwrecks Ever Found
Warships aren’t the only things that have come to rest at incredible ocean depths. A list of the five deepest wrecks ever found also includes passenger and merchant ships, and all were sunk during World War II.

 

Ship: USS Samuel B. Roberts

Depth: 22,621 feet

Location: Philippine Trench

Date of Sinking: October 25, 1944

 

Ship: USS Johnston

Depth: 21,180 feet

Location: Philippine Trench

Date of Sinking: October 25, 1944, during the Battle off Samar

 

Ship: SS Rio Grande

Depth: 18,904 feet

Location: Southern Atlantic Ocean near Brazil

Date of Sinking: January 4, 1944

 

Ship: USS Indianapolis

Depth: 18,044 feet

Location: Philippine Sea

Date of Sinking: July 30, 1945

 

Ship: SS City of Cairo

Depth: Roughly 17,000 feet

Location: Southern Atlantic Ocean near St. Helena

Date of Sinking: November 6, 1942

 

Source: What Is the Deepest Shipwreck Ever Found?


Fact of the Day - RADIUM


Did you know... Radium is, quite famously, not good for you. Its effects on the body are deleterious, not that anyone realized this when Marie Curie discovered the alkaline earth metal in 1898 — a scientific breakthrough that led to her winning the 1911 Nobel Prize in chemistry. Before long, the dangerously false belief that radium had health benefits began to spread: It was added to everything from toothpaste and hair gel to food and drinks, with glow-in-the-dark paints made from radium still sold into the 1970s. It was marketed as being good for any “common ailment,” with radioactive water sold in small jars that shops claimed would “aid nature” and act as a natural “vitalizer.”

 

Of course, none of this was true — exposure to even a small amount of radium can eventually prove fatal. Curie had no way of knowing this at the time, just as she didn’t have the slightest inkling that her notebooks would remain radioactive for more than 1,500 years after her death. She was known to store such elements out in the open and even walk around her lab with them in her pockets, as she enjoyed how they “looked like faint, fairy lights.”

 

Marie Curie also won a second Nobel Prize.
Marie Curie wasn’t just the first woman to win a Nobel Prize — she was also the first person to win two and remains the only person to be awarded the Nobel Prize in two different scientific fields. Her first award came eight years before her Nobel Prize in chemistry, when she and her husband Pierre Curie won the 1903 Nobel Prize in physics for their work in radioactivity. More than two decades later, their daughter Irène Joliot-Curie won the 1935 Nobel Prize in chemistry along with her husband Frédéric Joliot for synthesizing new radioactive elements.

 

 

Source: Radium was added to food and drinks because it was thought to have health benefits.


Fact of the Day - FINGERNAILS


Did you know... The human body contains a panoply of biological wonders. The human eye can detect around 1 million colors, and the nose can discern a trillion distinct scents. The human brain gives rise to the most complex consciousness in the animal kingdom, and it takes the coordination of 200 muscles just to move our bipedal bodies around. Amid all these incredible capabilities, our nails get little scientific attention. Yet they are a rarity in nature — in fact, only primates have them, thanks to the evolution of their dexterous fingers.

 

Embedded in your nails are other tiny mysteries, including the light-colored half-moon shape at the bottom of the nail plate. Though few of us stop to think about the purpose of this mark, its existence is a vital part of our nails and also serves as an indicator of our overall health. Here’s a closer look at this curious feature of our fingernails.

 

The Scientific Name Is Latin for “Little Moon”

The crescent-shaped mark at the base of the nail is known scientifically as the lunula, which is Latin for “little moon.” Although it has its own specific name, the lunula is only the visible part of a larger structure known as the nail matrix. That structure is one of the four major parts of the fingernail, along with the nail plate, nail bed, and the skin surrounding the nail (including the cuticle). Arguably, the matrix, which contains nerves, lymph, and blood vessels, is the most important of the four as it produces the cells that eventually harden into nail plates. 

 

Although the lunula can be many colors (more on that later), it typically appears white because it’s made of layers of newly formed cells that haven’t fully hardened and become transparent yet. (The rest of the nail is a pinkish color because the transparent plate allows the blood underneath to show through.) Sometimes lunulae will be easily visible and other times they can be obscured — usually because they’re hidden under the cuticle, though in some cases an obscured lunula could be a sign of a medical condition such as diabetes or heart disease.

 

The Color Can Be an Indicator of Health
The lunula, and the fingernail more generally, is a remarkable glimpse into our overall health. Typically, a healthy person will have white lunulae, but if the area is a different color it could be indicative of a potentially serious health condition. According to Healthline, the lunula can appear in various colors including blue, brown, black, red, and yellow, and can be an indicator of diabetes (pale blue), heart failure (red), renal failure (brown), or other serious conditions. This is why doctors will often examine your nails when you go in for an annual physical. 

 

It’s a Visible Part of Nail Growth

The nail matrix serves another important function: regenerating the nail. Although it may not seem like it, our nails are always growing. They grow out from the base of the nail at a rate of roughly 1 nanometer (one-billionth of a meter) every second, which averages to about 3.47 millimeters per month. (If you’ve ever noticed that you tend to trim your fingernails more frequently than your toenails, that’s because a toe’s nail matrix produces only 1.62 millimeters of nail per month on average.)
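As a quick arithmetic sketch (Python, purely illustrative; the average month length is our assumption), you can convert those monthly figures back into per-second rates and see where the “roughly 1 nanometer per second” comparison comes from — the fingernail figure works out to a bit over 1 nanometer per second:

```python
SECONDS_PER_MONTH = 30.44 * 24 * 3600   # average month, in seconds
NANOMETERS_PER_MILLIMETER = 1_000_000

def mm_per_month_to_nm_per_second(mm_per_month: float) -> float:
    """Convert a nail growth rate from millimeters per month to nanometers per second."""
    return mm_per_month * NANOMETERS_PER_MILLIMETER / SECONDS_PER_MONTH

print(round(mm_per_month_to_nm_per_second(3.47), 2))  # fingernails: ~1.32 nm/s
print(round(mm_per_month_to_nm_per_second(1.62), 2))  # toenails:    ~0.62 nm/s
```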

 

Nails grow from the nail matrix (which includes the lunula), where special cells create multiple layers of keratin, the same protein that makes up hair. The typical nail has roughly 196 layers of these cells. So the lunula is essentially the visible portion of the growth zone, where new cells are actively produced right before your eyes.

 

Source: Why Do We Have Half-Moons on Our Fingernails?


Fact of the Day - TRADEMARKED SMELLS?


Did you know... When we think about trademarks, it’s usually with regard to the logos and catchy slogans of our favorite brands, or maybe even iconic sounds such as the NBC chimes or the deep rumble of the THX logo. But trademarks extend beyond what we see and hear — they can even include what we smell. In fact, a specific scent can become so closely tied to a brand that it earns legal protection.

 

This may sound strange, since smells are invisible, hard to describe, and intensely subjective. But in the right circumstances, a scent can trigger memories, emotions, and brand loyalty just as powerfully, or even more so, than a logo or jingle. Companies that understand the emotional punch of scent are increasingly looking to protect their olfactory signatures, known as smell marks.

 

Getting a scent trademarked, however, is a rare and complicated feat. As of 2023, there were only 15 officially registered scent trademarks in the United States, for products ranging from dental wax to shoe polish. Compare that to the millions of visual logos and sound marks and you can start to see just how unusual and special this form of brand protection really is. Here’s a look at how it works, which companies have pulled it off, and why scent might be branding’s next big frontier.

 

Trademarking a Scent Is an Uphill Battle
In the United States, trademarks are managed by the U.S. Patent and Trademark Office (USPTO), and the rules for scent trademarks are notoriously strict. First, the scent must be nonfunctional, which means the smell cannot be part of what makes the product useful. If a fragrance is an essential feature of the product — think the scent of a perfume or air freshener — it can’t be trademarked because it plays a fundamental role in the product’s usefulness.

 

Second, the scent has to be distinctive. It must trigger immediate brand recognition, much like seeing the Nike Swoosh or hearing Intel’s startup jingle. The average pleasant smell is not enough; it must be unique and unmistakably associated with one particular company.

 

Finally, the scent has to be describable in words. Applicants are required to submit a clear, detailed description of the smell they want to trademark. As anyone who’s ever tried to capture a scent in words can attest, this can prove to be incredibly difficult —  and describing this invisible experience in a way that satisfies legal standards is even more challenging. These hurdles mean most companies don’t even attempt to trademark their signature scents.

 

Smell Marks Can Help Cement a Brand’s Identity

The tiny handful of successful scent trademarks showcases the power of this type of brand affiliation. Hasbro’s Play-Doh is one of the most famous examples: The company trademarked the smell of its modeling compound, describing the scent as “a sweet, slightly musky, vanilla-like fragrance, with slight overtones of cherry, and the natural smell of a salted, wheat-based dough.” It’s an incredibly specific description, but anyone who’s played with Play-Doh can likely recall the scent instantly, demonstrating the strength of our olfactory memory.

 

Another case comes from Verizon Wireless, which trademarked a custom “flowery musk scent” used inside its stores. This smell is part of an intentional strategy to shape customer experience, adding an invisible but memorable layer to retail visits. The smell isn’t just pleasant — it quietly signals to your brain that you’re in a Verizon store before you even glance at a logo or phone display.

 

Perhaps one of the quirkiest smell marks comes from the world of bowling. Storm Products trademarked the scent of its bowling balls, producing models that smell like grape, cinnamon, and other unexpected aromas. It’s an unusual marketing tactic, but it works — bowlers can quickly associate the fruity smell with Storm’s high-performance gear.

 

Scent Is Tied to Memory — And Emotion

Our sense of smell is one of the most primal and emotional senses we have. It’s directly wired into the limbic system, the part of the brain that handles memory and emotional responses. A scent can instantly transport you back to your grandmother’s kitchen, a particular summer vacation, or your first car.

 

Brands understand how powerful our sense of smell can be. Scent can forge a deep, subconscious connection with consumers that can prove even more enduring than a visual symbol or catchy tune. Hotels pump signature scents into lobbies, elevators, and hallways to create a sense of relaxation and familiarity. Luxury car makers scent their showrooms subtly with leather and wood notes to enhance perceptions of craftsmanship and elegance. And amusement parks infuse the air with playful, nostalgic aromas — such as cotton candy or popcorn — to draw visitors deeper into the childlike magic of their world.

 

We All Smell Things Differently

Despite the benefits of smell in marketing a product or experience, trademarking a scent remains an elusive achievement. We all have a different “smellscape,” meaning each of us perceives smells differently. These differences have a range of causes, from our genetics and cultural backgrounds to even what we’ve eaten recently. And since it can be difficult to describe the specifics of an aroma, our descriptions are often vague and subjective.

 

Even beyond the obstacles of our individual perceptions, brands must demonstrate that consumers have come to associate a particular smell with their goods and nothing else. This often requires expensive consumer studies and focused marketing efforts that drag on for years, sometimes decades. Play-Doh has been on the market since 1956 — imprinting its unmistakable smell on the memories of multiple generations — but the iconic scent wasn’t officially trademarked until 2018.

 

While scent trademarks clearly remain difficult to attain, for brands willing to invest the time and money, the payoff — legal protection for one of the most enduring aspects of their identity — can be well worth all the hassle.

 

 

Source: Is It Possible To Trademark a Smell?


Fact of the Day - SWISS ARMY KNIFE


Did you know... The tool favored by MacGyver has a multi-pronged history.

 

Anyone with a deep love for gadgets is familiar with the Swiss Army knife. The multipurpose pocket tool appears able to tackle any task, from sawing through rope to uncorking a bottle of champagne to trimming your eyebrows. It has even become a metaphor for a person or device that can seemingly do it all. But is it really from Switzerland? And was it ever really deployed in the Swiss Army?

 

The Origins of the Swiss Army Knife
In the 1800s, the Swiss Army had a problem. The military observed a need for a small, portable tool that could serve a number of different purposes in the field, from maintaining a rifle to opening rations. Carrying a cumbersome tool set was impractical. Ideally, they needed an all-in-one tool that would be unobtrusive. But no one in Switzerland had the resources to craft one.

 

The idea itself wasn’t new. Multipurpose tools had been in existence for decades and even received a mention in Herman Melville’s 1851 novel Moby-Dick, which described a knife that doubled as a corkscrew and tweezers. Later, in 1880, a man named John Holler marketed an outlandish knife design with over 100 uses, with arms that extended out to deploy cigar cutters or mini-shovels.

 

Holler’s knife, which was made in Germany, was never intended to be useful, exactly. It was meant to grab attention and solidify his company’s reputation for fine cutlery. According to Smithsonian, elaborate knives like these were more about demonstrating cutlers’ skill. They would go on “tour,” appearing at festivals, fairs, and other public gatherings—but deploying them in the field was impossible.

 

Aside from practicality, outsourcing the knife to another country rubbed some Swiss the wrong way. Swiss knifemaker Karl Elsener believed they should keep their knife business domestic. Elsener manufactured surgical knives at his factory in Ibach-Schwyz; crafting a multi-pronged tool was well within his capability. His multipurpose knife was delivered to the Swiss Army in 1891.

 

There was room for improvement. “It had a large blade, a can opener, a screwdriver and a reamer all on one side,” Elsener’s great-grandson, Carl, told The New York Times in 1991. “On the other side was nothing. It was very strong but a little heavy so my great-grandfather decided to make a more elegant knife for officers which had a corkscrew and a second blade.” This second, improved knife was given to the Swiss Army in 1897.

 

But there was still the problem of meeting production demands. Elsener got around those limitations by forming a group, the Association of Swiss Master Cutlers, that permitted other knifemakers to share in filling military orders. Elsener and another company, Wenger, would later split production duties for many years.

 

The Swiss Army Moniker
Elsener’s company was dubbed Victorinox—a blend of his mother Victoria’s name and inox, another name for the stainless steel used to make the tool. But Elsener didn’t call it a “Swiss Army Knife”—he dubbed it the Original Swiss Officer’s and Sports Knife.

 

The knife came by its more familiar name leading up to World War II, when American soldiers who couldn’t pronounce the German name took to calling it a “Swiss Army Knife.” The Oxford English Dictionary dates the first printed use of the term in English to 1935.

 

As with a lot of wartime tools, foods, and accessories, returning veterans brought plenty of Swiss Army knives back with them. They subsequently wound up in utility drawers and in the pockets of Boy Scouts. The knife was also easily identifiable by the symbol on its body—a white cross on a red shield. Civilian models sported a red handle so they would be more visible in the snow.

 

While the knives may seem like a gimmick, they’ve proven surprisingly useful. In 1990, a physician named Charles Plotkin was on a plane when a passenger began choking. Plotkin used another passenger’s Swiss Army knife to cut a hole in the man’s neck, permitting air passage. (Plotkin should have been carrying a specialized Swiss Army Knife that came with a tracheotomy blade.)

 

Victorinox estimates roughly 500 million Swiss Army knives have been manufactured since 1891. That includes non-terrestrial sales: NASA has issued Swiss Army knives to astronauts since the 1970s. You never know when you might need a fish scaler, even in outer space.

 

Source: Why Is It Called a “Swiss Army Knife”?


Fact of the Day - MICKEYS


Did you know.... Animal-based names are surprisingly common when it comes to units of measurement. In addition to horsepower (which usually measures the output of engines or motors) and hogsheads (today mostly used for alcohol), there’s also the mickey — a semi-official means of measuring the speed of a computer mouse. Named after a certain Disney character who’s probably the world’s most famous rodent, it’s specifically used to describe the smallest measurable movement the device can take. In real terms, that equals 1/200 of an inch, or roughly 0.13 millimeters. Both the sensitivity (mickeys per inch) and speed (mickeys per second) of a computer mouse are measured this way by computer scientists.
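For a sense of how the unit is used in practice, here is a minimal, purely illustrative Python sketch (the function names and sample numbers are ours, not any OS or driver API):

```python
MICKEYS_PER_INCH = 200   # 1 mickey = 1/200 inch, roughly 0.13 millimeters

def mickeys_to_millimeters(mickeys: int) -> float:
    """Convert a mouse movement measured in mickeys to millimeters."""
    return mickeys / MICKEYS_PER_INCH * 25.4

def mickeys_per_second(mickeys_moved: int, elapsed_seconds: float) -> float:
    """Express mouse speed in mickeys per second."""
    return mickeys_moved / elapsed_seconds

# A 400-mickey move covers 2 inches (50.8 mm); done in half a second,
# that works out to a speed of 800 mickeys per second.
print(mickeys_to_millimeters(400))   # 50.8
print(mickeys_per_second(400, 0.5))  # 800.0
```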

 

Had the original name for the device stuck, it’s unlikely this measurement system would have come about. The mouse was briefly known as a “bug” when it was invented at the Stanford Research Institute to make computers more user-friendly, though that seems to have been a working title that no one was especially fond of. (That version of the device was also extremely primitive compared to the mice of today — it even had a wooden shell.) As for how the mouse got its current name, no one can quite remember; all anyone recalls is that the device looked like one.

 

A lot of people didn’t think the mouse would take off.
In perhaps one of the most infamous articles ever published about computers, the San Francisco Examiner’s John C. Dvorak wrote in 1984, “The Macintosh uses an experimental pointing device called a ‘mouse.’ There is no evidence that people want to use these things.” Written as a review of Apple’s landmark personal computer, which had launched earlier that year, Dvorak’s not-so-prescient article wasn’t exactly a hot take at the time. The relatively small number of people who used computers regularly back then were just fine using the keyboard for everything, and Dvorak was hardly alone in asserting that he didn’t want to use a mouse. His predictive abilities didn’t seem to improve with time, alas, as he also wrote that Apple should “pull the plug” on the iPhone prior to its 2007 release.

 

Source: The speed of a computer mouse is measured in “mickeys,” named after Mickey Mouse.


Fact of the Day - DRAGONFLIES


Did you know... On a statistical level, some of the world’s most fearsome predators aren’t actually that fearsome. Wolves succeed in only about 20% of their attempts to catch prey, whereas lions enjoy a success rate of around 30% when working as a pack. Those numbers, though respectable, pale in comparison to the success rate of the mighty dragonfly, which catches about 95% of the prey it pursues — making it the world’s most successful hunter.

 

These insects do all their hunting in midair, of course, making the feat even more impressive; they mainly prey on small insects such as mosquitoes, flies, or butterflies. Scientists attribute this prowess to dragonflies’ nearly 360-degree field of vision, their individually controlled wings, and their brains’ unique ability to coordinate these instantaneous actions.

 

Other surprisingly adept hunters include the harbor porpoise, whose success rate hovers at around 90% (allowing them to chow down on more than 500 small fish per hour), and African wild dogs, which capture their prey more than 60% of the time — though they often lose them to larger predators such as lions and hyenas.

 

One dragonfly species’ migration has been called “the most extraordinary journey in nature.”
The more you learn about dragonflies, the more astonished you’ll be by these tiny creatures. Consider the globe skimmer, for instance, which more than lives up to its name: The “winged wanderer,” as it’s often referred to, completes the longest migration of any insect, an 11,000-mile journey between India and Africa that Discover Magazine called “the most extraordinary journey in nature” — in part because it takes several generations to complete, meaning no single dragonfly makes the entire trip.

 

At just a few centimeters long, globe skimmers can fly for 90 hours straight — albeit with a fair bit of assistance from wind, which is why the journey can only be undertaken at certain times of year. To keep their energy up, they eat small insects and aerial plankton. Their exact route has yet to be plotted, however, because globe skimmers are literally too small for any existing tracking devices.

 

Source: Dragonflies are the world’s most successful hunters.


Fact of the Day - COSMIC LATTE


Did you know..... We tend to think of space as cold and dark, but that’s only because most stars are light-years away from the pale blue dot we call home. The universe is actually quite bright on the whole, and its color has been given an appropriately celestial name: “cosmic latte.” In 2002, astronomers at Johns Hopkins University determined the shade after studying the light emitted by 200,000 different galaxies. They held a contest to give the result — a kind of creamy beige — its evocative moniker. (Other entries in the contest included “univeige” and “skyvory.”)

 

As with just about everything in the universe, however, the color isn’t fixed: It’s become less blue and more red over the last 10 billion years, likely as a result of redder stars becoming more prevalent. In another 10 billion years, we may even need to rename the color entirely.

 

NASA didn’t really spend millions of dollars developing a pen that could write in space.
The second half of this oft-cited myth contrasts NASA’s supposed approach with that of the Soviet Union, which is said to have simply given its cosmonauts pencils. American astronauts did likewise, though NASA wasn’t always thrilled about it — pencils are flammable, and their tips breaking off could damage sensitive equipment. The so-called space pens actually came from the Fisher Pen Company, which offered its AG-7 “Anti-Gravity” pen to NASA in 1965. None of the investment money came from the government, however, and astronauts and cosmonauts alike ended up using the writing tools at a cost of $2.39 per pen.

 

Source: According to astronomers at Johns Hopkins, the color of the universe is “cosmic latte.”


Fact of the Day - DAKOTAS AND CAROLINAS


Did you know.... We have an even 50 states thanks to these geographic decisions.

 

If the colony of Carolina and the Dakota Territory hadn’t decided to split themselves up a few hundred years ago, we’d have only 48 states right now. But why did these particular places become geographic variants of each other? Here are the answers.

 

Why Is There a North and South Carolina?


John White’s painting of an Indigenous village at the time of the English settlers’ arrival at Roanoke Island, in present-day North Carolina, 1585. | Print Collector/GettyImages
 

European involvement in the Carolinas began with Juan Ponce de León claiming most of the present-day southeastern U.S. for Spain in 1513 and calling it La Florida; French officials later attempted to establish forts along the coast. Indigenous peoples resisted these incursions for decades, and Spain eventually abandoned its efforts to settle the region. In 1585, Sir Walter Raleigh convinced a group of English settlers to establish a colony on Roanoke Island, but by 1590, their fort had been abandoned and the people had mysteriously disappeared.

 

Next, England’s attorney general, Sir Robert Heath, managed the Carolina territory for King Charles I. Heath made no attempt at colonizing the area and, following the king’s execution in 1649, fled to France. Heath’s heirs would eventually try to reassert their claim to the territory, but King Charles II ruled the claim invalid and gave ownership to a group of eight noblemen known as the Lords Proprietors. The Lords—helmed primarily by Anthony Ashley Cooper, 3rd Earl of Shaftesbury, who was influenced and assisted by the philosopher John Locke—retained control of the area from 1663 to 1729, with members of the eight-man group being replaced as necessary by other lords.

 


A 1676 map of Carolina prior to the split. | Geographicus Rare Antique Maps, Wikimedia Commons // Public Domain

 

The Lords Proprietors set up a framework for governance and settlement of Carolina and dispatched an expedition of colonists. Mostly, though, they fought constantly and were unable to make decisions that made sense for the economic development of the enormous territory. None of the original eight lords ever set foot in North America. They hired and fired a laundry list of governors, noted in their papers: “John Jenkins was deposed,” “Thomas Miller was overthrown and jailed by ... ‘armed rebels,’ ” “Thomas Eastchurch was forbidden to enter the colony,” and “Seth Sothel was accused ... of numerous crimes for which he was tried, convicted, and banished.” On top of all that, wars broke out with the Tuscarora and Yamasee tribes.

 

The lords, realizing that this strategy wasn't working, appointed a governor to oversee the entire territory and a deputy governor to handle the northern half in 1710. Two years later Carolina was permanently divided into north and south territories. The English Crown eventually took back South Carolina from the Lords Proprietors and made it a royal colony; the Crown also convinced the reluctant Lords to sell back their shares of North Carolina, and it was made a royal colony in 1729. Both retained this status until they ratified the U.S. Constitution in 1788 (South Carolina) and 1789 (North Carolina).

 

The Origins of North and South Dakota


A flotilla of covered wagons and military equipment accompanies George Armstrong Custer’s 1874 expedition to the Black Hills. | Historical/GettyImages

 

Most of the land that would become North and South Dakota was acquired by the United States in the Louisiana Purchase in 1803. After Minnesota was admitted to the Union in 1858 and the federal government and Sioux officials signed the Yankton Treaty the same year, the remaining land and ceded territory was organized into the Dakota Territory. But it wasn’t until the 1874 discovery of gold in the Black Hills, the sacred land of the Sioux, that prospectors and the military really began invading the area. (Ironically, Dakota means “friend” or “ally” in the Dakota language.)

 

Railroads followed the gold rush; settlers poured into the upper Great Plains. Until 1883, Yankton in the far southeastern corner served as the capital of the whole territory, but northern settlers refused to recognize the remote town as the center of governance. They declared their own capital, Bismarck, in 1872. This caused enough tension to require a split down the 46th parallel into two territories—but there were other factors in play.

 

President Grover Cleveland, a Democrat, and the Democratic majority in the U.S. House of Representatives resisted giving the Dakotas statehood, since the overwhelmingly Republican states would likely elect Republicans to Congress. The situation changed when Republican Benjamin Harrison was elected president and Republicans gained majorities in both houses of Congress, paving the way for a statehood bill to pass. On November 2, 1889, North and South Dakota were admitted to the Union, becoming the 39th and 40th states, respectively.

 

Source: Why Are There Two Dakotas and Two Carolinas?


Fact of the Day - THIRD EYELID


Did you know... If you look closely in the mirror at the inside corner of either of your eyes, you’ll notice a pinkish protuberance. This thin, curved membrane sits directly adjacent to the eyeball and is called the plica semilunaris, which is an evolutionary remnant of the nictitating membrane, known colloquially as the “third eyelid.” (This is not to be confused with the lacrimal caruncle, a tiny bump at the very edge of the eye that helps keep the eye moist.) Though the third eyelid is useless for us modern humans, it once served a purpose for our prehistoric ancestors.

 

Many animals, including dogs, cats, and some birds, reptiles, and fish, still have a functioning nictitating membrane. This translucent membrane protects the eye while still allowing the animal to see, and also essentially acts like windshield wipers by removing debris and maintaining moisture. Birds rely on their nictitating membrane while in flight and fish while swimming. Its purpose in prehistoric humans remains unclear due to the lack of definitive fossil records.

 

Charles Darwin waited more than two decades to publish his theory of evolution.
From 1831 to 1836, naturalist Charles Darwin traveled the world researching evolution — but even after his return to England, he didn’t reveal his findings to the public for another two decades. Some claim Darwin feared a negative reaction from scientific and religious communities, while others suggest he used the gap to ensure his theory was irrefutable, hoping to compose an extensive, unassailable treatise before informing the world.

 

In 1858, Darwin received an essay from naturalist Alfred Russel Wallace that proposed similar evolutionary theories to his own. This unexpected development prompted Darwin to divulge his findings to the scientific community alongside Wallace. In 1859, he introduced his theory of natural selection in his work On the Origin of Species. Later, in 1871, Darwin published The Descent of Man, and Selection in Relation to Sex, in which he first publicly posited that humans descended from apes.

 

 

Source: You can still see part of your third eyelid.


Fact of the Day - HOT DOG


Did you know... You’ve probably wondered what’s really inside a hot dog before. We have the answer—though we don’t recommend reading it before your next cookout.

 

At baseball stadiums, holiday cookouts, and in the dorm rooms of broke college students everywhere, hot dogs have become a staple meal. Each time we wield a wiener, however, rumors and innuendo over the food’s manufacturing integrity come flooding to the surface. Is this tubed meat made from monkey brains? Is there an underground network of hot dog companies that slip in cows’ feet as a filler? Why are hot dogs so nutritionally suspect?

 

Fortunately, most of your worst fears may be unfounded. Except for the feet. More on that in a moment ...

 

What Goes into a Hot Dog
Ever since Upton Sinclair uncovered the misdeeds of the meat industry in the early 1900s, the government has kept a close eye on animal product manufacturing methods. Gone were the sawdust and dog and horse parts that previously made up hot dogs and other highly processed meats. Companies had to obey strict preparation guidelines that significantly reduced the chances of foodborne illness and forced them into using transparent food labels.

 

Hot dogs are no exception, though you might have to decipher some of the language on the label to understand what you’re really biting into. Beef, pork, turkey, or chicken dogs originate with trimmings, a fanciful word for the discards of meat cuts that are left on the slaughterhouse table. That usually means fatty tissue, sinewy muscle, meat from an animal’s head—not typically a choice cut at Morton’s—and the occasional liver.

 

This heap of unappetizing gristle is pre-cooked to kill bacteria and transformed into an even more unappetizing meat paste via emulsion, then ground up and pushed through a sieve so that it takes on a hamburger-like texture. A number of things could be added at this point, including ascorbic acid (vitamin C) to aid in curing, water, corn syrup, and various spices for taste. Less appetizing ingredients can also include sodium erythorbate, which the National Hot Dog and Sausage Council swears is not actually ground-up earthworms:

 

“In contrast to a popular urban legend, erythorbate is NOT made from earthworms, though the U.S. Department of Agriculture reports receiving many inquiries about erythorbate’s source. It is speculated that the similarity in the spelling of the words 'erythorbate' and 'earthworms' has led to this confusion.”

 

Got that? No worms. After another puree, the meat paste is pumped into casings to get that familiar tubular shape and is then fully cooked. After a water rinse, the hot dog has the cellulose casing removed and is packaged for consumption. While not exactly fine dining, it’s all USDA-approved.

 

Hot Dog Labeling
More skittish consumers should pay attention to packaging labels. If you see “variety meats” or “meat by-products,” that means the hot dog probably has heart or other organ material in the meat batter. Additives like MSG and nitrates are also common, though all-natural dogs usually skip any objectionable ingredients. If it’s labeled “all beef” or “all pork,” you can be assured it’s coming from muscle tissue of that animal, not organs.

 

But those trimmings? By definition, they can contain a lot of things that come off an animal, including blood, skin, and even feet. It’s all edible, though some might object to the very idea of eating random cow or pig parts. At least none of it is actual human meat, as some people feared when a 2015 test by the food analytics company Clear Labs showed 2 percent of hot dog samples contained human DNA. That was more likely due to human error and trace amounts of hair or fingernails making their way into the batch, not a worker falling into the vat. Enjoy!

Source: What’s Really Inside a Hot Dog?


Fact of the Day - WORST ANIMAL DADDIES


Did you know... These dads won’t be getting any Father’s Day cards this June.

 

In addition to getting your dad a card or new tie this Father’s Day, be sure to thank him for not trying to eat you when you were young. Devouring babies may sound savage and strange, but when it comes to certain species, kids becoming a meal for their fathers is just par for the course.

 

Lions

You may already know that a male lion that recently became head of his pride will usually kill all the cubs sired by the previous leader. But while that makes lions terrible step-dads, it doesn’t make them terrible fathers. What makes lions bad dads is a combination of greed and laziness. Papa lions spend most of their day lying in the shade, waiting for one of their mates to bring home dinner. The female does the majority of the hunting and pretty much all of the parenting; the male’s job is to protect his territory from other prides and scavengers like hyenas.

 

Once the mama brings home her kill, the male lion is always the first one to eat and he often leaves only scraps for the rest of the pride—including any of his recently weaned children.

 

Grizzly Bears


It’s rare for any animal-kingdom father to eat his own young when he isn’t desperate for food, but the male grizzly bear will do just that. These creatures are extremely protective of their territories—which can range all the way up to 1500 square miles—and are opportunistic hunters, willing to kill and eat anything that happens to enter their home turf, even cubs, whether they’ve sired them or not. Males may also kill cubs to force the mother into estrus so they can breed with her.

 

Bass

There are a lot of bad aquatic fathers. In fact, even those that are highly protective of their spawn, like male bass, are still prone to eating their own children. In the case of the bass, this occurs after most of the newborns have swum away and a few stragglers remain. Suddenly daddy stops protecting his kids from predators and becomes a predator himself, swallowing up all of the stragglers as a reward to himself for helping the strong ones stay alive.

 

Sand Goby


Similarly, the male sand goby is relentless about guarding his eggs from predators, but even if he has plenty of extra food available, he will still eat about a third of his brood. Research into how he decides which eggs to keep and which to eat reveals that size matters: male gobies tend to eat the largest eggs. In many species, large babies mean a higher chance for survival—and thus, they are the most protected members of the family—but the sand goby knows that the largest eggs take the longest to hatch. Pops snacks on the eggs that would take the longest to develop so he can get out of there and back to mating as soon as possible.

 

Assassin Bug

With a name like “assassin bug,” you’d hardly expect this insect to be sweet, but filial cannibalism is still pretty gruesome. The male assassin bug is tasked with protecting his eggs until they hatch. His tactic mostly involves eating the eggs on the outside edges of the brood, which are otherwise most likely to fall victim to parasitic wasps. This defensive strategy is so hardwired that the bugs do it even in laboratory settings completely devoid of any potential parasites. Scientists believe this is because eating the eggs doesn’t only protect the insects against possible parasites, but also provides the male assassin bug with ample nutrients when his guard duty leaves him unable to forage.

 

Interestingly, assassin bugs do have a bit of a soft spot—the males are some of the only insects that are willing to adopt broods from other fathers. (They don’t eat any extra eggs when their kids are adopted.)

 

Source: 5 of the Worst Fathers In the Animal Kingdom


Fact of the Day - MOVIE "TRAILERS"


Did you know.... In the early days of moviegoing, you didn’t just buy a ticket for one feature-length film and leave once the credits started rolling. You were instead treated to a mix of shorts, newsreels, cartoons, and, eventually, trailers — which, per their name, played after the movie rather than before — with people coming and going throughout the day. The idea for trailers came from Nils Granlund, who, in addition to being a business manager for movie theaters, worked as a producer on Broadway, which explains why the first trailer was actually for a play: 1913’s The Pleasure Seekers.

 

Chicago producer William Selig took the idea further that same year by ending each installment of his serialized action-adventure short films with a tantalizing preview of the next chapter — a precursor to ending movies and TV shows on a cliffhanger. Today there are production houses that exclusively make trailers and are handsomely rewarded for their efforts, sometimes to the tune of millions of dollars. 

 

One company made almost every trailer for 40 years.
Between 1919 and 1960, almost every movie trailer was produced by the National Screen Service (NSS) — a near-monopoly that also included posters and other marketing materials. As is the case for a lot of cinematic innovations from the era, we have Alfred Hitchcock to thank for changing that: The “master of suspense” began making his own trailers, including a six-and-a-half-minute preview of Psycho, and other filmmakers followed suit. Trailers have long been recognized as an art form unto themselves, with many moviegoers arriving to theaters early just to see them.

 

 

Source: Movie previews are called “trailers” because they were originally shown after the movie.


Fact of the Day - MONKEY BREAD


Did you know... The beloved pastry has a whimsical—yet undoubtedly odd—name.

 

Monkey bread—a sticky pull-apart pastry that’s typically made from canned biscuit dough—is a sugary, cinnamony treat. Perhaps it’s a traditional part of your family’s Christmas feasts. Or maybe grandma was known for whipping it up for special brunches.

 

Despite its seemingly silly name, monkey bread has nothing to do with actual monkeys. So why is this sweet pastry named after primates? Let’s dig into the history of monkey bread, starting from the very beginning.

 

Monkey Bread’s Hungarian Roots
Before it became a centerpiece on Americans’ tables, this dish was known as something else entirely. Food historians trace its roots to aranygaluska, a Hungarian dessert that translates to “golden dumpling.” This pull-apart sweet bread was brought to the U.S. by Hungarian Jewish immigrants in the late 19th century.

 

Aranygaluska was a bakery staple in immigrant communities, especially in California, for several decades. In the 1970s, Betty Crocker even featured it in a cookbook; the book labeled the sweet dish as “Hungarian Coffee Cake.”

 

Eventually, the dish became known as “monkey bread”—a name that has stuck around to this day. And thanks to actress and first lady Nancy Reagan (a big fan of the treat), monkey bread made it to the White House Christmas table, cementing its status as a classic. The pastry has continued to evolve; now, there are seemingly countless recipes floating around the internet for traditional monkey bread, other sweet versions, and even savory spins on the dish.

 

The Many Theories Behind Monkey Bread’s Name
There’s no clear answer as to how monkey bread got its name.  But, like most good mysteries, there are several theories at play.

 

The most common explanation is that it’s named after the way it’s eaten: with your fingers, pulling apart the sticky pieces of dough one by one, much like a monkey might eat something. 

Some also trace it back to 20th-century slang. In the 1940s, monkey food was Southern slang for casual snacks you could pick at. That, combined with jumble bread—another old-timey term for breads made from small pieces of dough—could have led to monkey bread.

 

Another theory credits silent film star ZaSu Pitts, who reportedly used the term in a 1945 cooking column after bringing the recipe home from Nashville. Pitts was known for her lavish Hollywood parties, and apparently her monkey bread was a hit.

 

In the end, monkey bread might just be one of those names that stuck, literally and figuratively. Like the dessert itself, it’s a little weird and oddly delightful.

 

 

Source: Why Is It Called “Monkey Bread”?


Fact of the Day - BIRDS


Did you know.... South America is known for its stunning avian diversity, with colorful toucans, ubiquitous parrots, and an untold number of other feathered friends. (Seriously, there are new species being discovered every year.) But no country in South America — or the world, for that matter — compares to Colombia. With around 1,900 bird species within its borders, the country hosts nearly 20% of all avian species in the world, which is more than any other nation. Although some of the most common varieties — like sparrows, tanagers, and finches — may be recognizable to birders in more northern climates, the critically endangered blue-billed curassow (Crax alberti) and the rare Cauca guan (Penelope perspicax) are just a few of the dozens of species endemic to Colombia.

 

And the country takes its natural wonders seriously. As one of the most biodiverse nations in the world, with the Amazon taking up 35% of the country’s landmass, Colombia committed to declaring 30% of its land a protected area by 2030 — and got it done eight years early. A 2023 study also found that Colombia takes an unusual approach to conserving its natural areas by adding biodiversity protection as a secondary goal of many other policy initiatives, such as ones addressing poverty and civil strife. That doesn’t mean Colombia is immune to threats of deforestation and climate change, but the country is working hard to protect its bounty — which includes 10% of the world’s total species. 

 

Colombia is home to a world-famous river known as the “liquid rainbow.”
Some of the world’s rivers are known for historical reasons (Italy’s Rubicon) or their proximity to major centers of power (London’s Thames), but one of the most amazing rivers in the world lies in the backwoods of Colombia. In fact, it was so well hidden that the river was only discovered by non-Indigenous people a little more than 50 years ago. Called Caño Cristales, or the “Crystal Channel,” the river is located in central Colombia’s Sierra de La Macarena National Natural Park and is known for its vibrant display of colors, earning it the nickname “liquid rainbow.” The river gets its mixture of yellows, greens, blues, blacks, and especially reds from the reproductive process of aquatic plants (Macarenia clavigera) that live in the riverbed. Because water levels are affected by the country’s wet and dry seasons, the best time to glimpse this river is from May until November.

 

 

Source: Colombia has more bird species than any other country.


Fact of the Day - THE TOOTH FAIRY


Did you know... While the Tooth Fairy herself may be surprisingly modern, the bits of folklore that went into her creation are hundreds of years old.

 

The Tooth Fairy is a familiar figure to millions of children around the world. The mythical character is most popular in English-speaking countries, with kids knowing that if they put a lost baby tooth under their pillow at night, the elusive Tooth Fairy will sneak in and replace it with money. Although not as big of a deal as a visit from the other two major fictional gift-givers—Santa Claus and the Easter Bunny—a visit from the Tooth Fairy is still highly anticipated. 

 

But compared to jolly St. Nick and the egg-bearing bunny, both of whom have roots that date back hundreds of years, the Tooth Fairy is a relatively modern invention. Here’s the strange—and surprisingly rodent-filled—origin story of the winged figure. 

 

The Tooth Fairy Takes Flight
The Tooth Fairy has been swapping milk teeth for money for generations. But it wasn’t until the 1970s that the history behind the folklore started being uncovered. In 1972, Rosemary Wells, a professor at Chicago’s Northwestern University Dental School, was asked by a student about the history of the Tooth Fairy. “I thought I’d simply go to the library, get the information and bring it back,” she explained in a 1992 interview.

 

But Wells couldn’t find anything about the mythological fairy and so decided to conduct her own investigation. After years of research, she became the foremost Tooth Fairy expert—her business card even identified her as the “Tooth Fairy consultant.”

 

While the myth of the Tooth Fairy may seem like a tale as old as time, the story’s first mention in print is surprisingly recent. In a September 1908 issue of the Chicago Daily Tribune, the Household Hints column featured a tip from reader Lillian Brown: “Many a refractory child will allow a loose tooth to be removed if he knows about the tooth fairy. If he takes his little tooth and puts it under the pillow when he goes to bed the tooth fairy will come in the night and take it away, and in its place will leave some little gift.”

 

Tales of the Tooth Fairy were likely being shared orally around the time Brown wrote in with her tip, but the figure doesn’t pop up again in print until 1927, in Esther Watkins Arnold’s short children’s play The Tooth Fairy. The myth then continued to spread its wings throughout the 20th century—particularly after World War II.

 

 1189598_image.220x220_q85_crop.jpg

 

Folklorist Tad Tuleja suggests three reasons for the Tooth Fairy’s rise in popularity during the mid-20th century. First, people experienced greater prosperity after the war, which meant many parents could now afford to give their kids a little bit of money. It was also around this time that the traditional family setup became more child-oriented, which made parents more likely to soothe their children’s small anxieties (for instance, over losing a tooth). Finally, there was the popularity and influence of fairy-filled Disney films—from the Fairy Godmother in Cinderella (1950) to Tinker Bell in Peter Pan (1953). 

 

There’s usually a general consensus about what mythical characters look like—for instance, Santa is typically bearded, rotund, and red-suited—but the lines are a little more blurred with the Tooth Fairy. In 1984, Wells conducted a survey and found that 74 percent of participants believed the Tooth Fairy was female, while 12 percent thought the figure was male (another 8 percent thought the Tooth Fairy could be either). Some children don’t even picture the Tooth Fairy as a humanoid being at all: Wells documented one kid who imagined a Tooth Fairy Dragon. In today’s culture, the Tooth Fairy is most often depicted as a small female fairy, but there are also some more creative modern interpretations, such as the half-hummingbird Toothiana from Rise of the Guardians (2012).

 

It All Started with a Mouse
Although the Tooth Fairy is typically anthropomorphic, the myth may have originated from older Continental European stories of a Tooth Mouse. To this day, in many countries, the tooth-for-money swap is said to be performed by a small rodent rather than a winged fairy. It’s thought this mouse-based myth may have been blended with the numerous children’s tales about fairies to produce the Tooth Fairy we know today.

 

In France, baby teeth are collected by La Petite Souris (The Little Mouse), who can be traced back to Madame d’Aulnoy’s 1697 fairy tale La bonne petite souris (The Good Little Mouse). The story features a fairy who can turn into a mouse and who knocks out an evil king’s teeth (but doesn’t exchange them for money). This tale was translated into English in 1890—less than two decades before the Tooth Fairy first appeared in print.

 

d6bee4d19bc9a7a9e3e5070732a0c1100834d214

 

In Spain, the tooth-collecting mouse is El Ratoncito Pérez (Pérez the Little Mouse), who first appeared in Fernán Caballero’s Cuentos, oraciones, adivinanzas y refranes populares (1877). But Pérez didn’t become the Tooth Mouse until 1894, when Luis Coloma was asked to write a story for Alfonso XIII, the child King of Spain who had just lost his first milk tooth. Rather than cash, Pérez left a present fit for a king—the Order of the Golden Fleece—under the fictional monarch’s pillow. The story was first published in English in 1914, when tales of the Tooth Fairy were starting to take root.

 

Stories of a Tooth Mouse weren’t the first time that myths and rituals had been created around childhood tooth loss, though. In the Old Norse poem Grímnismál, it’s said that Álfheimr—the Land of the Elves—was a “tooth gift” for the god Freyr. In New Guinea and Senegal, it was tradition to bury baby teeth, while in South Korea kids would throw their pearly whites onto the roof.

 

But the ritual of offering teeth to a mouse is the most prevalent and enduring practice (although it’s now rivaled by the Tooth Fairy), having been documented in cultures around the world. Beyond various countries in Europe, folklore about a Tooth Mouse ranges from Ukraine and South Africa to numerous Latin American countries. Children don’t always receive money; in some countries the tooth is offered in an exchange of sympathetic magic, the belief being that it’ll make their adult gnashers grow in as strong as a rodent’s teeth.

 

 

Source: The Strange Origins of the Tooth Fairy

  • Like 1
Link to comment
Share on other sites

Fact of the Day - STONE TOOLS

capuchin-monkey-orana-wildlife-park-260n

Did you know... Humans are often thought of as the smartest animals, and one of the perks of our top-notch brains (with a little help from our opposable thumbs) is supposedly that we’re the only species that can use tools. That’s what we used to think, anyway. More recently, research has shown that our tool-use ability is not as unique as we once believed. Take, for instance, the capuchin monkey. Research published in 2019 showed that these pint-sized creatures, native to Central and South America — and sometimes known as “organ grinder” monkeys — have been using stone tools to process food for more than 3,000 years. 

 

Archaeologists analyzing a site in Brazil’s Serra da Capivara National Park discovered that the monkeys had used rounded quartzite stones to smash open cashew husks against tree roots or stone “anvils.” After digging through layers of sediment in four phases of excavation, the scientists found stone tools that had been used by the capuchins dating back around 3,000 years. The researchers also found signs that the monkeys’ tool use had changed over time — the creatures first used smaller stone tools, and then around 560 years ago, switched to larger ones, which may have meant they were eating harder foods, according to National Geographic. This evolution could have occurred due to different groups of capuchins moving into the area, or a change in the local plants. Either way, the study marked the first time such an evolution in tool use had been seen in a nonhuman species. Scientists suspect that further exploration of this site, and others like it, could give an unprecedented look at humanity’s own tool-use evolution, which began millions of years ago. Furthermore, primates — the taxonomic order to which humans also belong — aren’t the only ones gifted with brains capable of using tools. Elephants, dolphins, and a variety of birds are only a few of the other species that use tools — whether sticks, rocks, or tree limbs — to survive and thrive on planet Earth.

 

Orangutans know how to make instruments.
When it comes to primitive tools, instruments don’t usually count — that is, unless you’re an orangutan. In 2009, scientists revealed that orangutans use folded leaves to make sounds that may trick predators into thinking they’re bigger than they actually are. These musical noises, called “kiss squeaks,” were even used by wild orangutans who perceived the human researchers as a threat. The discovery marked the first known nonhuman instrument, as well as the first nonhuman tool used for communication. And that’s not even the extent of orangutans’ impressive tool-making abilities: A 2018 study revealed that orangutans were better at making tools than human children up to age 8. This growing body of scholarship shows that complex intelligence is not a trait exclusively enjoyed by Homo sapiens.

 

 

Source: Monkeys have been using stone tools for thousands of years.

  • Like 1
Link to comment
Share on other sites

Fact of the Day - DEFECTIVE CARS

lemon-fleuressence.jpg

Did you know... Having your engine burst into flames after driving off the car lot is a sour experience. But where did the association with lemons come from?

 

Shopping for a car, new or used, can be a nerve-wracking experience. Buy new and you risk sticker shock and dealer add-ons. Buy used and you’re never quite sure if your vehicle will turn into a headache 10 miles down the road.

 

In the latter case, we sometimes refer to defective cars as “lemons.” States even refer to their consumer protection bills against crummy cars as “lemon laws.” But why do we associate a death trap with a tangy fruit?

 

The Connection Between Cars and Lemons
According to Green’s Dictionary of Slang, using lemon to denote a fraudulent or worthless purchase dates back to 1909; its use in reference to cars specifically goes back to 1923, when one used car dealer profiled in The Oakland Tribune is said to have “congratulated himself upon having rid himself of a lemon finally.” Lemon as a noun or adjective has often been associated with something unpleasant or unpalatable—as some people find the tartness of the lemon to be—or something that’s turned sour.

 

The car-lemon connection may have been cemented by an ad Volkswagen ran in the 1960s. Like most of the company’s minimalist advertising from the period, it consisted of a photo of a car and a stark caption: “lemon.” The copy went on to explain that Volkswagen’s quality inspectors had caught several flaws with this particular car, ensuring it didn’t arrive at a dealership with those blemishes intact.

 

“We pluck the lemons,” the ad concluded. “You get the plums.”

 

The Origin of Lemon Laws
It wasn’t until 1975, though, that consumers had federal lemon protection. The Magnuson-Moss Warranty Federal Trade Commission Improvement Act guaranteed consumers wouldn’t be stuck with a faulty consumer product, including cars, or suffer unreasonable warranty terms.

 

The law applies to consumer items of all types, though cars were of particular concern as they’re often the most expensive item prone to mechanical failure a person can buy. It quickly became known as “the lemon law,” though it really refers more to the warranty of the vehicle than the vehicle itself.

 

In New York, for example, state law says that a new car must conform to the manufacturer’s warranty and that, if a repair cannot be made within a reasonable number of attempts, the purchaser is due a refund.

 

Lemon laws can vary by state and by vehicle condition, so it’s important to know which rules apply. It’s also crucial to get an inspection and pull a motor vehicle history report when buying used, and to pay attention to whether a dealer’s window sticker says the vehicle is being sold with a guarantee or as-is.

 

There is one situation where having a lemon can pay off—sort of. According to analysts at iSeeCars, who compared used car prices against the MSRP of new cars, vehicles of one color had the lowest depreciation at 4.5 percent, far lower than the average of 15 percent. That color? Yellow.

 

Source: Why Do We Call Defective Cars “Lemons”?

  • Like 1
Link to comment
Share on other sites
