Fact of the Day - CURVED METAL BARS


Did you know..... Some old-fashioned window guards look like they have a potbelly. Turns out, this isn’t actually a design flaw.

 

London, New York, Paris. Some of the most romanticized cities in the world share a common feature: an abundance of metal bars covering building windows.

 

The bars don’t detract from the beauty of the buildings, but rather, add a certain elegance to them. Strolling along historic avenues, the gently curved ironwork—often featuring ornate details and motifs—can appear more like a piece of art than a security device.

 

However, it also raises the question: what function does it actually serve? Below, we break down the history behind these old-fashioned window guards and why they’re shaped so oddly in the first place.

 

A Brief History of Window Guards

Metal bars covering windows are an age-old method for preventing break-ins in urban areas. They’re also useful for keeping people in—or rather, for keeping them from falling out.

 

In New York, the law states that every apartment building of three or more units that may house children under 10 years of age must have some sort of guard on the windows for safety purposes.

 

Montgomery County, Maryland, has a similar regulation: all apartment units above the ground floor housing children aged 11 or under must have window bars or some sort of stop to prevent windows from opening more than four inches. Though they’re effective safety mechanisms, not everyone is a fan; some people feel like they make their homes look “like a prison.”

 

In fact, beyond their practical usefulness, well-made cast iron and wrought iron window bars often serve to enhance a building’s aesthetics. They became especially popular in 19th-century construction, with ironworkers drawing on various artistic movements (such as Rococo, Gothic, and Renaissance styles) to create unique patterns.

 

Certain cities stood out more for their distinctive window guards, too. Charleston, South Carolina, for instance, was known for signature palmetto designs. Meanwhile, Chicago was best known for more minimalistic, geometric designs.

 

The Reason Why Some Old Iron Window Guards Have a Bulge
Across the world, window guards can add a touch of historic elegance to any structure. But there’s one feature in particular that often makes more ornate sets of window bars stand out: the “potbelly.” This refers to the curve found on the lower half of some of these bars (the shape mimics that of a human stomach).

 

The extra room at the bottom is there for two major reasons: For one, it accommodates planter boxes that people might like to hang in their windows. Having a curved window guard helps residents grow flowers, herbs, and other plants right from their windowsills, but with a certain degree of security.

 

The bulge also creates a wider gap between people inside and outside the window, especially on a building’s ground level (which is where you’re most apt to see these potbellied window bars). Those inside have ample room to lean out when peering through the window, while anyone trying to reach inside is pushed farther away.

 

Do “Belly Bars” Actually Work?
Despite their antique charm, however, window bars aren’t always the preferred method for protecting a house’s entry points.

 

The famous Property Brothers, for example, seem to prefer a more open aesthetic for windows, getting rid of bars to create a light and airy look.

 

For homeowners who share the same preferences, there are other methods for home security that are less obvious, such as reinforced glass, proper locks, and even strategically placed shrubbery. 

 

 

Source: Why Are the Metal Bars on Some Windows Curved?


Fact of the Day - GIRAFFE HEARTS


Did you know.... While exact numbers vary depending on factors such as body size and sex, the average giraffe heart weighs approximately 25 pounds — roughly 40 times as much as the 10-ounce heart of an adult human. In addition to this stark weight difference, a giraffe heart measures 2 feet long, nearly five times the length of a human’s 5-inch heart.

 

Giraffe hearts can also pump 16 gallons of blood per minute — more than 10 times the 1.5 gallons that flow through a human heart in the same time frame. Furthermore, studies indicate that giraffe hearts make up 0.5% to 0.6% of the animal’s total body mass — slightly higher than the average measurement of 0.47% to 0.48% in our species.
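
For the number-minded, these ratios are easy to check. Below is a minimal Python sketch using only the approximate, rounded figures quoted in this post (not precise measurements):

```python
# Approximate figures quoted above; real values vary by body size and sex.
OUNCES_PER_POUND = 16

giraffe_heart_oz = 25 * OUNCES_PER_POUND   # ~25 lb giraffe heart -> 400 oz
human_heart_oz = 10                        # ~10 oz adult human heart
print(giraffe_heart_oz / human_heart_oz)   # 40.0 -> "roughly 40 times"

giraffe_flow_gpm = 16.0                    # gallons of blood pumped per minute
human_flow_gpm = 1.5
print(giraffe_flow_gpm / human_flow_gpm)   # ~10.7 -> "more than 10 times"
```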

 

This notable size gap extends to other organs as well. For instance, a giraffe’s lungs can hold 12 gallons of air, whereas the average set of human lungs has a maximum capacity of around 1.6 gallons. And while Gene Simmons of Kiss is famous for his abnormally long tongue, it still pales in comparison to that of a giraffe, which clocks in at around 21 inches long. The average human tongue ranges from 3.1 to 3.3 inches long.

 

The first successful human heart transplant was in 1967.
The first known attempted heart transplant took place in 1905, when one canine’s heart was implanted — albeit unsuccessfully — into the neck of another dog. A little more than half a century later, in 1964, a human heart transplant was attempted for the first time, when doctors tried to implant the heart of a large chimpanzee into a dying patient. This effort also ultimately proved futile. But on December 3, 1967, a major advancement was made in Cape Town, South Africa, when Dr. Christiaan Barnard performed the first successful human-to-human heart transplant. Barnard implanted the heart of 25-year-old car accident victim Denise Darvall into the body of 53-year-old Louis Washkansky. The heart functioned as intended, though Washkansky died 18 days later from pneumonia. Barnard’s second transplant proved more enduring, as the recipient lived for nearly 19 months after the operation.

 

 

Source: Giraffe hearts weigh about 40 times as much as human hearts.


Fact of the Day - FIREFIGHTERS IN ANTARCTICA


Did you know.... While you might imagine Antarctica as primarily a land of ice and seabirds, the human presence on the continent has meant that it's occasionally home to something a little, well, warmer. The Antarctic Fire Department — the only full-time, professional fire department on the southernmost continent — serves the U.S.-run McMurdo Station, the largest research station in Antarctica. (Other stations have part-time fire brigades.) The population at McMurdo can grow to more than 1,000 people in the summer season, between October and March, as scientists arrive to study phenomena such as melting glaciers and migrating penguins. McMurdo’s infrastructure includes around 85 buildings with amenities such as dormitory housing, three bars, yoga classes, and hiking trails. The Antarctic Fire Department also serves the U.S.-run Amundsen-Scott South Pole Station and the U.S. Air Force airfields, and the firefighters pride themselves on responding to any incident at these sites within two minutes.

 

The Antarctic Fire Department staffs about 55 specially trained firefighters, who are based at either McMurdo Station or the Amundsen-Scott South Pole Station. Dispatchers field about 350 emergency calls yearly, mostly involving small fires, odor complaints, and hazardous materials. One routine duty is spraying every incoming flight with a deicing foam that also stops engine fires and dripping oil. Firefighters must be ready to battle the continent’s fierce winds, which fan the flames, and use fire engines whose pumps continually cycle water to keep it from freezing. Individual deployments last between three and 13 months — a long time to be away from family, friends, and fresh produce — and each shift lasts 24 hours. It can be grueling, but for those seeking adventure and camaraderie, few things beat putting your survival skills to the test in one of the harshest, and most exquisite, settings on Earth.

 

Women were nominated to lead Chicago’s and Los Angeles’ fire departments for the first time in 2021 and 2022.
On May 14, 2021, then-Chicago Mayor Lori Lightfoot selected Annette Nance-Holt to be the city’s first fire commissioner, the top post in the roughly 2,550-person Chicago Fire Department. The City Council approved Nance-Holt’s nomination the following month, making her the first woman — and woman of color — to oversee the city’s fire department in its 162-year history. “As fire commissioner, I intend to show the next generation of young Black women that they too can achieve any and everything they set their minds and hearts to,” said Nance-Holt, who has served the department for more than three decades. Then, on January 18, 2022, then-Los Angeles Mayor Eric Garcetti chose Kristin Crowley as his pick to helm the local fire department. Crowley had previously been part of the 3,435-member Los Angeles Fire Department for 22 years. She took the oath of office on March 25, 2022, becoming the first female fire chief since the department was founded in 1886. Overall, however, women still make up less than 10% of the U.S. fire service.

 

 

Source: There are firefighters in Antarctica.


Fact of the Day - OVERCOOKED MUSHROOMS?


Did you know.... Overcooking ingredients is one of the most common mishaps in the kitchen and can result in mushy vegetables, tough meats, and other gastronomic woes. Mushrooms, however, are incredibly forgiving, being almost impossible to overcook. Their ability to maintain an agreeable texture over a wide range of cooking times is all due to the unique cellular structure of fungi. The secret lies in chitin, the material that forms the cell walls in mushrooms. 

 

Chitin, which is also found in insect exoskeletons and crustacean shells, is very durable and heat stable — unlike the cellulose found in plant cells or the proteins in animal tissue. In most foods, cooking often produces dramatic structural changes. The proteins in meat go through a process of denaturation and coagulation, causing the meat to firm up and, when overcooked, become tough. Vegetables, meanwhile, are held together by pectin, which starts to break down during cooking, releasing the bond between cells and making the vegetables turn soft — potentially too soft if overcooked.

 

But thanks to the magic of chitin, mushrooms maintain their structural integrity, and therefore their firmness, when cooked for even long durations. Any textural change that occurs in mushrooms while cooking is more likely due to water loss than cellular breakdown. Mushrooms have a high water content, and this liquid is released while cooking, which concentrates the flavor and changes the texture slightly without compromising structure. So while it is possible to burn mushrooms through overly high heat and negligence, it’s difficult to overcook them, whether you’re sautéing a chanterelle or roasting a portobello.

 

The Armillaria ostoyae honey mushroom is the heaviest living organism on Earth.
Deep within the Malheur National Forest of Oregon lives the heaviest living thing on Earth: a giant mushroom playfully dubbed the “humongous fungus.” This gigantic specimen of Armillaria ostoyae honey mushroom is estimated to weigh somewhere between 7,500 and 35,000 tons and occupies a total area of 2,385 acres — equivalent to 1,350 soccer fields.

 

DNA testing has revealed this to be a single organism, consisting of a massive mycelial network located mostly underground. While the humongous fungus can claim to be the heaviest living organism in the world, it’s not necessarily the largest in terms of area. A specimen of Posidonia australis seagrass, located in Shark Bay in Western Australia, covers an area of approximately 77 square miles — equivalent to around 28,000 soccer fields.

 

Both the Oregonian Armillaria ostoyae and the Shark Bay seagrass rank among the oldest living organisms on Earth. Based on current growth rates, the seagrass is estimated to be around 4,500 years old, while the honey mushroom is estimated to be at least 2,400 years old and possibly even as ancient as 8,650 years.

 

 

Source: It’s nearly impossible to overcook mushrooms.


Fact of the Day - THE BIRDS AND THE BEES


Did you know.... Birds and bees get all the action—and Samuel Taylor Coleridge was jealous.

 

The phrase the birds and the bees is hazy by design. It’s used to tell children about the mechanics of human sex without actually mentioning sex or humans. It's prudish poetry that has somehow endured throughout the years, but its origins, like its definition, aren't entirely clear.

 

Origins in English Literature
The term is thought to have two possible origins, according to the Los Angeles Times. The Romantic poet Samuel Taylor Coleridge is credited with referring to the two animals in the context of love in his 1825 poem “Work Without Hope”:

 

All Nature seems at work. Slugs leave their lair—
The bees are stirring—birds are on the wing—
And Winter, slumbering in the open air,
Wears on his smiling face a dream of Spring!
And I, the while, the sole unbusy thing,
Nor honey make, nor pair, nor build, nor sing. ...

 

Unfortunately for Coleridge, this fleeting passage had a lasting legacy, and his jealousy of local birds and bees has been etched into eternity.

 

However, University of Southern California linguistics professor Ed Finegan found an earlier use of the phrase in the Diary of John Evelyn, a chief source of historical information about life in 17th-century London. In describing the interior of St. Peter’s Basilica in Rome in the 1640s, Evelyn wrote:

 

That stupendous canopy of Corinthian brasse; it consists of 4 wreath'd columns—incircl'd with vines, on which hang little putti, birds and bees.

 

Finegan suggested that birds and bees appearing so close to putti (a.k.a. cherubs) implied that Evelyn was alluding to human sexuality. And because Evelyn’s Diary was published around the time that Romantic poets were active, the writers were likely inspired by this phrasing and made it into a euphemism.

 

The Birds and the Bees Crosses the Pond
An early use of the birds and the bees in an American publication occurred just after the start of the Civil War. A New York Times correspondent at the U.S. Capitol remarked on the lusty quality of spring in Washington, D.C.:

 

It is a warm, sunny day, this 20th day of April. The air is redolent of bursting buds, and the Capital Park is jubilant with the gushing songs of the birds and the humming of the honey-bees. The Northern air that has ‘aggressed’ upon us for a week past has been driven back by the rebellious South wind, that comes, fresh from the fair faces it has caressed, and the waving tresses through which it has wantoned, to enchant the soul with its balmy breath, and entrance the mind with its dreamy sweetness.

 

The convoluted origins of the birds and the bees may inspire you to skip the phrase altogether the next time a child asks you where babies come from.

 

Source: Where Does the Term ‘The Birds and The Bees’ Come From?
 


Fact of the Day - COOL CATS


Did you know.... Here’s a hint: It’s not by sweating through their paws.

 

When temperatures rise, humans sweat, dogs pant, and cats ... don’t move enough to overheat? Well, partially. Cats, who need to maintain an internal body temperature of 101°F to 102°F, have several methods for keeping cool in sweltering weather—only one of which involves knowing better than to over-exert themselves on hot days.

 

Conduction allows cats to cool themselves off or warm themselves up via contact with objects of a different temperature. This is why you can often find your cat seeking out cool kitchen or bathroom tiles on a hot day. But this works for a dog or a person, too. What about when that’s just not enough?

 

Do Cats Sweat Through Their Paws?
It’s a misconception that cats sweat through their paws to cool themselves off. As summer wears on, you might see moist paw prints, but as veterinarian Kimberly May told The Washington Post, “any secretions there or from their nose, mouth, or tongue are not for sweating; they’re for protection and moisture and are insufficient to cool the blood.”

 

Instead, cats recreate the sweating process—which works to cool humans via evaporation—by grooming themselves regularly. The saliva from their tongues acts like sweat that cools their body when it evaporates—which is why you can also help cool your cat down by using a damp washcloth to lightly wet their fur. In extreme weather, cats will also pant, but unlike dogs, who pant regularly to keep themselves cool, a panting cat is a sign of more dangerous overheating or another serious condition.

 

Should I Shave My Cat in the Summer?
And if you’re tempted to shave your feline friend to help keep him cool—don’t!

“Fur acts as a thermal regulator to slow down the process of heat absorption,” James H. Jones, an expert in comparative animal exercise physiology and thermoregulation at the University of California at Davis, told The Washington Post.

 

 

 

Fur coats are highly evolved—in the winter they keep animals warm, but in the summer, they work both to protect delicate skin from the sun and slow dehydration (Jones notes that, according to research, shaved camels fared worse in the deserts than those with their fur intact).

 

But even with these methods for keeping cool, cats also rely on the perks of domesticity to stay comfortable. So even though they evolved from wild ancestors and are able to tough it out, leave the A/C (or a fan) on for your cats when you go out, and make sure to leave them plenty of water.

 

Source: How Do Cats Cool Themselves Off?


Fact of the Day - THE WHISTLE REGISTER


Did you know.... Many vocal coaches divide the human voice into three main registers, or ranges of tones — chest, middle, and head. The most familiar of these is the chest register, also known as the speaking voice (at least for men; women tend to speak a bit higher). The head register comes to life when singers are trying to hit high notes, and the middle register falls somewhere in between. But there are two other registers at the extreme ends of the singing spectrum. First, there’s vocal fry, the sound vocal cords make when they’re struggling to hit low notes, which creates an almost growling, popping sound. And then there’s the whistle register — the highest vocal register a singer can produce. 

 

This register is a bit of a mystery, primarily because the epiglottis (a flap of cartilage in the throat) closes over the larynx when it happens, blocking the view of the vocal cords and making it impossible to record the anatomical structures that create the register. The undisputed current master of the whistle register is Mariah Carey, who’s been wowing audiences with it since 1990. (Exhibit A: Her trills at the conclusion of 1991’s “Emotions.”) In 2020, Carey and fellow pop vocal acrobat Ariana Grande harmonized their whistle registers during an awe-inducing performance of “Oh Santa.” But the whistle register comes with a “don’t try this at home” warning. Because people rarely access it, using the whistle register extensively can cause damage. So to sing like Mimi, find a coach. 

 

Around 70 cultures speak in whistles.
Whistled languages are perhaps as old as civilization. In the fifth century BCE, the ancient Greek historian Herodotus described an Ethiopian language similar to the squeaking of bats. Fast-forward 2,500 years, and there are about 70 cultures around the world that still use whistled languages. The advantages are pretty clear: Whistles can be heard several miles away and are an extremely useful tool in cultures that must be heard across deep ravines and towering mountains. Some form of whistled language has been found on nearly every continent, from the Arctic-dwelling Inuit to the forest hunters of the Amazon. And like spoken languages, they can have major differences. Asian whistled languages tend to replicate the melodies of sentences, while Turkish and Spanish whistled languages replicate vowel sounds as whistles and then create consonants through abrupt note shifts. (Whistled languages are always based on the local spoken language, at least these days.)

 

 

Source: The whistle register is the highest vocal register a human can reach.


Fact of the Day - SUN SHADES


Did you know.... Do they really reduce that hot-box effect in cars, or are they just part of some big marketing scheme?

 

Whether you’re going to the beach or the park for a picnic this summer, chances are that you’re going to need to park your car ahead of all that fun in the sun.

 

But the sun that you’re enjoying is the same sun that’s raising the temperature inside your vehicle. So when you return, it’s hot, sticky, and humid inside, and you can even burn yourself if you’re not careful. In fact, depending on the day’s weather, the interior can reach temperatures of up to 200°F. That’s as hot as an oven.

 

Why does it get so hot inside your car, even when the temperature outside is cooler? If you’ve ever wondered how that happens, and whether sun shades actually keep cars cool, read on for more insights below.

 

The “Greenhouse Effect” in Cars: Is It a Myth in the Summer?
Sunlight has a broad spectrum of wavelengths, including ultraviolet, visible, and infrared. When sunlight enters a parked car, much of it passes through the glass of the windshield, windows, and rear window and is absorbed by the interior. The warmed interior then re-radiates that energy as longer-wavelength infrared, which cannot easily pass back out through the glass, so the heat stays trapped inside.

 

It’s the same reason a greenhouse is warmer inside than the air outside. The trapped sunlight is absorbed by the seats, dashboard, steering wheel, and other surfaces. Over time, that exposure also increases the wear and tear on your car’s interior, which can lower its resale value.

 

Can Sun Shades Help?
Yes. Sun shades (also known as sun visors) have been shown to significantly reduce temperatures in vehicles, by up to 25 percent compared with not using them at all. They’re also an inexpensive solution, with sun shades available for around $10 to $30, depending on size, from online retailers like Amazon and Walmart. Overall, sun shades can provide some peace of mind, especially if you’re worried about leaving a window cracked open while you’re away. Still, there are a few more specific tips and tricks that may be helpful, which we break down below.

 

How To Make Cars Cooler in the Summer
The easiest and most cost-effective way to keep your motor vehicle from heating up come summer is to crack a window open about one to two inches after you park. With this simple move alone, you could reduce temperatures by up to 30 degrees, as it gives the hot air building up inside a means of escape.

 

Before you exit the car, consider turning the steering wheel 180 degrees, so the top half of the wheel won’t be in direct sunlight. When you return, the top half will be cooler to the touch. Additionally, you can use beach blankets to cover your seats, keeping them out of the sun.

 

However, one of the best ways to keep a parked car cool is to use a trusty sun shade over the windshield, which can block most sunlight from entering. Because the shades are made from reflective materials, such as Mylar or aluminum foil, with layers of nylon or polyester for insulation and durability, they reflect sunlight away before it can be absorbed into your vehicle’s interior.
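
To make the reflection idea concrete, here is a deliberately simplified, back-of-the-envelope model (a sketch with assumed, illustrative parameters, not figures from the article): the cabin warms until the solar power absorbed through the glass balances the heat leaking back outside, and a reflective shade cuts the absorbed power roughly in proportion to its reflectivity.

```python
# Toy steady-state model of a parked car's cabin temperature.
# Every constant below is an illustrative assumption, not a measurement.
SOLAR_IRRADIANCE_W_M2 = 700.0  # bright-day sunlight hitting the glass (assumed)
GLASS_AREA_M2 = 2.0            # sun-facing glass area (assumed)
LOSS_W_PER_K = 35.0            # heat leaking out per degree above ambient (assumed)
OUTSIDE_TEMP_C = 30.0

def cabin_temp_c(shade_reflectivity: float) -> float:
    """Equilibrium cabin temperature for a shade reflecting the given
    fraction of incoming sunlight (0.0 = no shade, 1.0 = perfect mirror)."""
    absorbed_w = SOLAR_IRRADIANCE_W_M2 * GLASS_AREA_M2 * (1.0 - shade_reflectivity)
    # Steady state: absorbed solar power equals power lost to the outside air.
    return OUTSIDE_TEMP_C + absorbed_w / LOSS_W_PER_K

print(round(cabin_temp_c(0.0), 1))   # no shade         -> 70.0 (Celsius!)
print(round(cabin_temp_c(0.7), 1))   # reflective shade -> 42.0
```

The absolute numbers are only as good as the assumed constants; the takeaway is the mechanism: the less sunlight absorbed, the lower the equilibrium temperature.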

 

 

Source: Do Sun Shades Actually Keep Cars Cool During Summer?


Fact of the Day - LONGEST KNOWN CAVE SYSTEM


Did you know... On September 9, 1972, spelunkers exploring Kentucky’s Mammoth Cave system made an incredible discovery. While plumbing the darkest recesses of the nearby Flint Ridge Cave system, then the longest known cave in the world, the group suddenly spotted a well-groomed tourist trail that belonged to Mammoth Cave. The spelunkers quickly realized that the two cave systems were actually one, making Mammoth Cave the longest cave in the world. (The world’s second-longest cave, Sistema Ox Bel Ha in Quintana Roo, Mexico, is around 150 miles shorter.) With the discovery, what was once simply one of America’s oldest tourist attractions became one of the world’s grandest caves. 

 

This historic revelation was the culmination of thousands of years of human exploration of the cave system. The first peoples to enter the cave were Native Americans, who explored roughly 19 miles of its interior some 5,000 years ago. Tribes used the cave for shelter, and the discovery of mummified remains suggests they also found the caves sacred in some way. In the years leading up to the U.S. Civil War, an enslaved cave guide named Stephen Bishop extended the cave’s known length while also creating its first map. Entire families of explorers traversed the caves through the 1930s, until Mammoth Cave became a national park in 1941. With around half a million visitors every year, Mammoth Cave continues to inspire awe just as it did many thousands of years ago.

 

The world’s largest cave has its own weather system.
In Vietnam’s Phong Nha-Ke Bang National Park lies Hang Son Doong — the largest cave in the world. While the cave was discovered by a local in 1990, it took almost two decades for an official expedition to rediscover it and explore its vast interior, which encompasses a dumbfounding 1.35 billion cubic feet. Because of the cave’s immense size and consistent year-round temperature (a very comfortable 73 degrees Fahrenheit), when warm air enters the cave it forms clouds, which then cool and produce rain inside the cave, creating a unique rainforest ecosystem. This occurs at the cave’s mouth and at two dolines, spots where the cave ceiling collapsed around a millennium ago. These cave clouds only form in Vietnam’s warm and muggy spring, summer, and fall months. During the winter, when outside temperatures are cooler than the cave’s ever-consistent 73 degrees, the weather inside Hang Son Doong is crystal clear, much like in many other, far smaller caves on the planet.

 

 

Source: The world’s longest known cave system is in Kentucky.


Fact of the Day - RAW POTATOES


Did you know... You can, but you may not like what happens.

 

According to the USDA, potatoes are the most heavily consumed vegetable in America: The average person downs roughly 50 pounds of them annually. If you weigh 200 pounds, you’re consuming a quarter of your body weight in spuds each year.

 

That statistic includes French fries and potato chips. What it doesn’t measure is whether anyone actually eats potatoes raw and unprepared. While that option is common for other vegetables like tomatoes and broccoli, chomping down on a potato like it’s an apple seems a bit perverse. But is it actually dangerous?

 

The Danger of Eating Raw Potatoes
Unlike raw meat, which can harbor dangerous bacteria like E. coli or salmonella, a raw potato is unlikely to be a source of catastrophic illness. But it’s still not a great idea to eat one uncooked.

 

A raw potato contains solanine and chaconine, two glycoalkaloids, as well as a protein known as lectin. When ingested, all of them can cause digestive upset ranging from gas and bloating to stomach cramps. In larger amounts, solanine and lectins can lead to headaches and vomiting. In really large amounts, you might get into some serious neurological symptoms.

 

Surprisingly, exposure to sunlight can actually cause a potato to produce more solanine as it turns green. In theory, a raw potato sitting on a windowsill could ruin your day (and your guts).

 


 

Outside of that, you would need to eat a lot of potatoes—pounds and pounds—before risking solanine poisoning. Once a potato is cooked, some of these compounds are virtually eradicated, as are any lingering bacteria or contaminants from the soil. (Solanine, however, tends to remain on or near the skin.) Cooking a potato also breaks down the resistant starches that are difficult for the body to absorb and that act as prebiotics to aid digestion. While beneficial in small amounts, these starches are likely to prompt stomach issues unless they’re broken down into simple sugars through heat.

 

There’s another reason to avoid consuming potatoes raw: Uncooked, they tend to be bitter and simply don’t taste very good.

 

Why People Eat Raw Potatoes
Advocates of consuming potatoes raw—and there are some—point to a more favorable nutritional profile when a potato is left uncooked. A raw potato contains up to twice as much vitamin C as a baked potato, for example. Others point to the resistant starches being beneficial for gut and overall digestive health.

 

But there are plenty of ways to up your vitamin C intake and consume prebiotics other than eating uncooked potatoes. Worse, the lectins found in raw potatoes are considered an antinutrient, meaning they can interfere with the absorption of other nutrients in the body.

 

So what if you take a bite of a raw potato by accident? Most likely, there’s nothing to worry about. But as a rule, if you’re going to meet the average quota, make sure your 50 pounds of annual potatoes are cooked.

 

 

Source: Is It Safe to Eat Raw Potatoes?


Fact of the Day - JELLYFISH


Did you know... While we humans can quite happily sit out in the sun for hours (with adequate sun protection, of course), jellyfish can disappear completely if washed ashore on a sunny day. This is because a jellyfish’s delicate body is composed of at least 95% water, unlike the human adult body, which is about 60% water.

 

As you may suppose, jellyfish are jellylike in consistency — at least when well hydrated — but they aren’t actually fish. They are in fact plankton, ranging in size from less than an inch to nearly 8 feet long, with tentacles that can measure an impressive 100 feet or more. (The largest jellyfish is longer than a blue whale.) Jellyfish have no bones, no brain, and no heart, and they use only rudimentary sensory nerves at the base of their tentacles to detect light and odors.

 

Due to their structure and exceptionally high water content, jellyfish can evaporate within hours in a process known as deliquescing if they’re stranded on a beach in the sun. The jellyfish shrinks as its water evaporates away, leaving behind nothing but a faint imprint on the sand.

 

Box jellyfish rank among the ocean’s deadliest creatures.
When we think of dangerous sea creatures, our minds tend to jump straight to sharks. In reality, there’s a far greater threat floating in the open water: the box jellyfish. These nearly transparent creatures possess up to 15 tentacles, each growing to roughly 10 feet in length.

 

Each tentacle has about 5,000 stinging cells, whose venom is considered to be among the most potent and deadly in the world. While typically used to instantly stun or kill prey such as fish and shrimp, a box jellyfish’s venom can also be fatal to any humans who come too close. The sting is so unbearably painful that human victims have been known to go into shock and drown — or die from heart failure — before even reaching the shore.

 

While there are an average of six fatalities from shark attacks per year, box jellyfish stings result in between 40 and 100 human fatalities annually, although experts believe the true figure is likely far higher. As such, box jellyfish can claim to be the deadliest creatures in the ocean.

 

Source: Jellyfish can evaporate if left out in the sun.


(Thursday's)

Fact of the Day - FIREFLIES


Did you know.... A new study shows that vulnerable fireflies might still have a chance.

 

Over the past few decades, firefly populations have been declining due to factors like light pollution, pesticides, and habitat loss. Now there’s a ray of hope: A recent study suggests there’s reason to be optimistic for these insects, which are more prevalent this summer than they have been in years.

 

What’s Up With This Wave of Fireflies?


According to Popular Science, residents across the U.S. have been seeing a spike in firefly numbers in recent weeks. There have even been upticks in urban areas, such as New York City and Washington, D.C. 

 

While firefly numbers still aren’t what they used to be, the change signals a positive outlook for the insects. The increased numbers of the glowing bugs in many states could be attributed to the factors below:

  • Weather: Climate plays a significant part in firefly reproduction, as they tend to seek out wet soil to lay their eggs. Many states saw decent rainfall this year, which could have led to the insects’ population growth.
  • Lifecycle: Firefly larvae live for about two years before pupating and becoming the bioluminescent creatures we’re familiar with. When the insects emerge from their pupal stage to find mates, they live for a few weeks, depending on the environment and species. Firefly prevalence can vary by year because of this factor, and some places may simply be experiencing good timing this summer.

 

Why Are Fireflies Important?


Fireflies aren’t just aesthetically pleasing; they also help the environment. A 2019 report from the Xerces Society for Invertebrate Conservation highlighted the ecological benefits of fireflies. The larvae of the species primarily feed on snails and slugs, both of which damage plants. These insects also contribute to the diets of many creatures in the animal kingdom, especially various spider species. Environment America also notes that some species feed on pollen and nectar, benefiting many flowering plants.

 

We may be lucky enough to enjoy their twinkling light shows for years to come if we make an effort to take care of them. You can help fireflies out by doing simple things, such as turning off lights at night so as not to confuse the insects, and mowing lawns less frequently. It also won’t hurt to avoid using pesticides outside and share your awareness about the lovely creatures.

 

Not everyone gets the pleasure of seeing fireflies light up their environment in person, but you can watch a video of synchronizing fireflies doing their thing in Thailand.

 

 

Source: Fireflies Are Surging This Summer, in a Rare Win for the Insects


Fact of the Day - ORIGINS OF PLAYING CARDS


Did you know.... Playing cards aren’t just one of the most ubiquitous objects in human culture (who doesn’t have a deck lurking in a drawer somewhere?) — they’re also one of the most iconic. Whether new and neatly packaged or old and well-thumbed, cards have a certain mystique about them. From the casino table to the magician’s hand, these simple pieces of plastic-coated paper have achieved a status that transcends their simple yet elegant design. 

 

Yet despite this familiarity, few people know the fascinating journey that cards have taken throughout history. Here, we take a look back through time to trace the origin of playing cards.

 

Ancient Origins 
The earliest known written reference to playing cards is found in Chinese literature from the 10th century, though there are no details about card markings or the particular games played. In The Invention of Printing in China and Its Spread Westward, author Thomas Francis Carter notes that playing cards likely originated in China around the same time as paged books, writing, “As the advent of printing made it more convenient to produce and use books in the form of pages, so was it easier to produce cards.” 

 

Carter goes on to explain how these cards, known as “sheet-dice,” began to appear before the end of the Tang dynasty, which ruled China from 618 to 907 CE. He also suggests the possibility that “sheet-dice” evolved in two different directions during the Song dynasty (960-1279 CE). Some were eventually made using bone or ivory and developed into games such as mahjong, while others retained their paper form, were embellished with new and more intricate images and designs, and became the true ancestors of modern playing cards. 

 


Playing Cards Take Shape
As trade routes expanded during the Song dynasty, early playing cards began to spread westward along the Silk Road, carrying with them the fundamental concepts that evolved into the decks we recognize today. The most important stage on this journey happened in the Islamic world. By the 14th century, playing cards had reached the Mamluk Sultanate, which controlled Egypt and parts of the Middle East, at which point the cards underwent a significant transformation.

 

Thanks in part to the discovery of one particular set of cards from the 1400s, we can see how card design progressed toward something similar to modern decks. The Mamluk pack, as it is sometimes referred to, was discovered in 1931 in Istanbul’s Topkapi Palace Museum. The deck is divided into four suits, with 13 cards per suit. It has just 47 cards, but if it were complete, it would have contained 52 cards, just like today.

 

The design of this centuries-old deck is also surprisingly similar to the packs of cards we use today. The cards feature a symbol for each of the four suits: cups, coins, swords, and polo sticks, which reflect the culture and interests of the Islamic aristocracy. And each suit contains 10 numbered cards as well as three court cards: the king (malik), the viceroy or deputy king (naib), and the second deputy (naib thani).

 

Origins of the Four Modern Suits
Playing cards made their way to Europe in the late 14th century. Some theories suggest they were brought back by returning Crusaders, which is possible, although scant supporting evidence exists. It’s more likely they came through trade with the Islamic world, including with the Mamluks. 

 

Thanks to written accounts from Spain, France, and Switzerland, we do know that playing cards grew in popularity in Europe from 1370 to 1400, although standardization was still a long way off. During the 15th century, European decks sometimes contained five rather than four suits, and specific regional tastes meant that different suit motifs also emerged. Germans, for example, used hearts, acorns, bells, and leaves, while the Italians favored cups, swords, batons, and coins. 

 

It was the French, however, who made perhaps the most significant contribution to modern playing card design. In the late 1400s, they adapted the German suits to create pique, coeur, carreau, and trèfle — known in English as spades, hearts, diamonds, and clubs.

 

French card makers also simplified the production process by using stencils and developing more efficient printing techniques, making cards more affordable and widely available. This helped popularize the design in Europe, and the colonial exploits of the French, Spanish, and British introduced the newly standardized playing cards to the rest of the world.

 

Source: Where Did Playing Cards Come From?


Fact of the Day - TSUNDOKU


Did you know.... It’s often said that “there’s probably a German word” for unusual situations that are difficult to express in English, but sometimes there’s actually a Japanese word instead. Tsundoku, for example, describes the act of buying books and never reading them. Many bibliophiles can surely relate. Doku can be used in Japanese as a verb that means “reading,” and tsun comes from tsumu, which means “to pile up.” According to University of London Japanese studies professor Andrew Gerstle, the word appears to have been coined in 1879 in a satirical reference to a teacher who didn’t read the many books he owned. Despite that, the term — which can also refer to the piles of books themselves — doesn’t carry a particularly negative connotation in Japan.

 

For some, tsundoku might be anxiety- or even guilt-inducing — who hasn’t bought an imposing tome such as James Joyce’s Ulysses with every intention of reading it, only to pick up something lighter instead time after time? But it doesn’t have to be that way. There can be a joy to “practicing tsundoku,” since every unread book on your shelf can be thought of as a literary adventure in waiting. There’s no time like the present, but neither is there any harm in leaving Don Quixote for just the right moment.

 

There’s a Japanese phrase for when you think you’re going to fall in love.
In addition to hitomebore, a word for love at first sight, the Japanese language also has a more nuanced phrase for “the feeling upon first meeting someone that you will inevitably fall in love with them” — koi no yokan. It’s closer to predicting love than actually feeling it just yet. The term is common in shoujo manga, or comic books aimed at teenage girls, although it also has a particular resonance for older generations, who married at a young age and didn’t fully know their spouse until after tying the knot. Despite — or perhaps because of — the fact that there’s no precise English equivalent, the phrase has inspired both a short film and a rock album of the same name.

 

 

Source: The Japanese word “tsundoku” describes the act of buying books and never reading them.


Fact of the Day - BABY VIEWING WINDOW


Did you know.... They used to be a staple of hospital maternity wards around the country—so, what happened?

 

For much of the 20th century, hospital maternity wards featured a curious design choice: large glass windows that allowed families to gaze at rows of newborns, all bundled up and sleeping in neat, orderly rows.

 

These so-called “baby viewing windows” gave proud families their first chance to spot their newest member—but nowadays, they’ve seemingly vanished. Upon further investigation, it becomes clear that their disappearance actually reveals a great deal about shifting attitudes toward birth, bonding, and even hospital marketing.

 

The Origins of Baby Viewing Windows
After childbirth began shifting from home to hospitals in the early 1900s, many hospitals established separate nurseries where nurses cared for newborns away from their mothers, often behind large viewing windows. These windows weren’t just practical: They were meant to be a spectacle. As Smithsonian Magazine explains, hospitals used them to show off rows of healthy babies as proof of their modern, high-quality care.

 

It’s also worth noting that the concept of putting babies in the public eye for all to see wasn’t entirely new. In the early 20th century, premature babies were often displayed in incubators at fairs and amusement parks to help raise money for their care.

 

For decades, fathers weren’t typically allowed in delivery rooms either, so the nursery window was often their first real introduction to their new child. As Smithsonian notes, these glass-front nurseries helped project an image of hospitals as safe, nurturing places where science kept these tiny patients healthy and strong. The window moment became a rite of passage, not to mention a favorite photo op.

 

The Evolution of Modern-Day Hospital Maternity Wards
In the 1970s, hospitals began rethinking this approach. Instead of separating newborns from their mothers, they began promoting “rooming-in,” where babies stayed in the same room with their parents 24 hours a day. This new approach came with a long list of benefits: it encouraged breastfeeding, helped parents bond more quickly, and made mothers feel more confident caring for their newborns.

 

Around this time, as Time reports, those once-beloved nursery windows started to feel outdated. Families preferred privacy and hands-on time with their new babies over the idea of putting them on public display. By the 1990s and early 2000s, growing security concerns also contributed significantly to their decline. As a result, hospitals became more cautious about disclosing the exact location of newborns to protect family privacy. It soon became clear that nursery windows no longer aligned with the public’s expectations for safety and confidentiality.

 

Despite all this, our desire to show off newborns hasn’t waned; instead, it has simply evolved with the times. Many hospitals now offer online galleries (sometimes called web nurseries) where parents can share professional photos with friends and family. A private login is typically required to access the images, creating a modern, digital, and more secure version of the traditional nursery window.

 

All in all, the move away from glass showcases reflects a broader cultural shift. Instead of treating childbirth like a distant medical event, today’s hospitals focus on intimacy and immediate family connection, keeping babies close from day one—literally.

 

 

Source: Why Did Baby Viewing Windows Disappear From Hospitals?


Fact of the Day - CACAO BEANS


Did you know.... You may love chocolate, but probably not as much as the Aztecs did. This Mesoamerican culture, which flourished in the 15th and early 16th centuries, believed cacao beans were a gift from the gods and used them as a currency that was more precious than gold. The biggest chocoholic of them all was the ninth Aztec emperor, Montezuma II (1466–1520 CE), who called cacao “the divine drink, which builds up resistance and fights fatigue. A cup of this precious drink permits a man to walk for a whole day without food.” To say he practiced what he preached would be an understatement: Montezuma II was known to drink 50 cups of hot chocolate a day (from a golden goblet, no less). His preferred concoction is said to have been bitter and infused with chilis. 

 

Needless to say, that was an expensive habit. Aztec commoners could only afford to enjoy chocolate during special occasions, whereas their upper-class counterparts indulged their sweet tooth more often. That’s in contrast to the similarly chocolate-obsessed Maya, many of whom had it with every meal and often threw chili peppers or honey into the mix for good measure.

 

Candy bars skyrocketed in popularity after World War I.
Morale boosts were hard to come by during World War I, but one thing was sure to get the job done: chocolate. In America, the military chocolate tradition dates all the way back to the Revolutionary War, when the cocoa-loving George Washington included the treat in his soldiers’ rations. For our frenemies across the pond, every soldier received a King George Chocolate Tin in 1915; U.S. WWI rations were solicited from chocolate companies in 20-pound blocks, then cut down and hand-wrapped. Doughboys and Tommies (slang for U.S. and U.K. WWI soldiers, respectively) brought their sweet tooth home with them, and confectioners were happy to oblige. Candy bars became massively popular in the decade following World War I — more than 40,000 different kinds were produced in the U.S. alone by the end of the 1920s. These regional specialties began to die out following the one-two punch of the Great Depression and the outbreak of World War II, when Hershey’s was commissioned to create more than 3 billion ration bars for the U.S. Army. They’ve remained an industry titan ever since, and still claim the highest market share of any American confectionery by a sizable margin.

 

 

Source: Aztecs considered cacao beans more valuable than gold.


Fact of the Day - MALT vs MILKSHAKE


Did you know.... Whether you love dipping fries in shakes or malts, you should know the subtle yet tasty difference between the two.

 

Malts and milkshakes are both quintessential American treats and the perfect end to any diner meal. Although the two ice cream-based drinks look incredibly similar, one crucial ingredient sets them apart. Here’s how to tell the difference between the frosty beverages.

 

How to Define Malts and Shakes
There’s only a slight difference in ingredients when it comes to malts and milkshakes. Both creamy drinks have a base made of ice cream and milk. While vanilla and chocolate milkshakes are classics, people also get creative with the drinkable dessert, sometimes adding candies, cookies, and even whole slices of cake to it.

 

A malt, meanwhile, is a milkshake with malted milk powder. The ingredient gives the drink a nuttier depth of flavor compared to your average milkshake. In short, all malts are milkshakes, but not all milkshakes are malts.

 

MALT

Primary Ingredients: Milk, ice cream

Is it a milkshake?: Yes

Does it contain malted milk powder?: Yes

Taste: Sweet, creamy, toasty, nutty

 

MILKSHAKE

Primary Ingredients: Milk, ice cream

Is it a milkshake?: Yes

Does it contain malted milk powder?: No

Taste: Sweet, creamy

 

So What Is Malt, Exactly?

 

Now that you know what a malted milkshake is, you may be wondering what malt is on its own. The ingredient is made by processing cereal grains—mainly barley, though other grains are sometimes used. The procedure involves soaking the grains until they sprout, heating them to halt growth, and then grinding them into a fine powder.

 

The product has many uses. After the mashing stage, brewers can add yeast to the mixture to ferment the malt and make beer. According to Britannica, the majority of malt produced is used for beer production. Malt is an important ingredient in malted whiskey as well. Bakers may also add malt to flour to make baked goods, such as bagels and Belgian waffles. 

 

But people usually don’t use plain malt for milkshakes; that’s where malted milk powder comes in. It’s made by combining malt with evaporated milk solids. The result is a toasty, earthy flavor that adds a layer of complexity to an otherwise straightforward milkshake.

 

Popular Foods and Drinks Containing Malt

  • Beer
  • Malted milkshakes
  • Malted milk balls
  • Ovaltine
  • Bagels
  • Malt whiskey
  • Malt vinegar

If you’re someone who likes to dip fries in their milkshake, you’re not alone. Science has proven that the sweet and salty combination is appealing to many people.

 

Source: Malt vs. Milkshake: What’s the Difference?


Fact of the Day - CAN A HOT BEVERAGE MAKE YOU COOL DOWN?


Did you know... It’s counterintuitive, but downing a hot drink on a hot day may actually cool you off. Here’s why.

 

When it’s hot outside, few things are more refreshing than an ice-cold beverage—unless you’re one of those folks who swears the best way to cool down on a sultry day is to drink a steaming cup of tea.

 

Common sense suggests that ice water would be the better option. Getting a near-freezing cold beverage into your body should lower your core temperature and offer temporary respite from the blazing heat around you, right?

 

How Hot Drinks Can Cool You Down
That’s not exactly how the human body reacts to heat. A 2012 study from the University of Ottawa had cyclists drink water at different temperatures while they cruised at moderate speed, and then measured their core body temps. Researchers found that drinking the hot beverage triggered a disproportionately high sweat response without significantly raising the athletes’ core temperature. And since sweating is the body’s primary way of cooling itself, the results showed that a hot drink is actually better at cooling you down than a cold one.

 

“If you drink a hot drink, it does result in a lower amount of heat stored inside your body, provided the additional sweat produced when you drink the hot drink can evaporate,” Dr. Ollie Jay, senior author of the study, told the Skeptical Inquirer.

 

Sweating Is the Key to Cooling Off
Of course, there are some catches. One is that you won’t feel the effects until your sweat has evaporated fully, contrasting with the instant effect of an ice water hit. The other, much bigger one is that it only works under certain conditions. If it’s humid, if you’re sweating a lot already, or if you’re wearing clothes that trap moisture on your skin, then drinking a hot drink is only going to make you hotter.

 

So while it seems counterintuitive, having a hot drink on a hot day actually can cool you down. Turns out the people downing boiling coffee in July knew better than all of us.

 

Source: Does Drinking a Hot Drink Really Cool You Down?


Fact of the Day - TOY HALL OF FAME


Did you know... From teddy bears to train sets, classic playthings of youth often conjure memories of a gleaming toy store, holidays, or birthdays. So curators at the Strong National Museum of Play branched out when they added the stick to their collection of all-time beloved toys. Among the most versatile amusements, sticks have inspired central equipment in several sports, including baseball, hockey, lacrosse, fencing, cricket, fishing, and pool. Humble twigs are also ready-made for fetch, slingshots, toasting marshmallows, and boundless make-believe. 

 

Located in Rochester, New York — about 70 miles northeast of Fisher-Price’s headquarters — the Strong acquired the fledgling National Toy Hall of Fame in 2002. (It was previously located in the Gilbert House Children's Museum in Salem, Oregon.) To date, more than 70 toys have been inducted, including Crayola Crayons, Duncan Yo-Yos, and bicycles. The stick was added in 2008, three years after another quintessential source of cheap childhood delight: the cardboard box. 

 

Sticks were the first timekeeping device used by humans.
Circa 3500 BCE in the modern-day Middle East, Mesopotamians rooted sticks in the ground to craft the earliest versions of sundials. The approximate time could be determined by measuring the length and position of the stick’s shadow. Over the next 1,500 years, Egyptians substituted stone obelisks that functioned in a similar way. Since the late 19th century, America has been home to the world’s tallest obelisk, the 555-foot Washington Monument.
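
The geometry behind a stick sundial is simple enough to sketch: a vertical stick of height h casts a shadow of length h / tan(θ), where θ is the sun's elevation angle, and the shadow's direction swings around the stick as the sun crosses the sky. Here's a minimal Python illustration (the one-meter stick height is an arbitrary choice for the example):

```python
import math

def shadow_length(stick_height_m: float, sun_elevation_deg: float) -> float:
    """Length of the shadow cast by a vertical stick (a simple gnomon)."""
    return stick_height_m / math.tan(math.radians(sun_elevation_deg))

# The higher the sun climbs toward midday, the shorter the shadow:
for elevation in (15, 30, 45, 60, 75):
    print(f"{elevation:>2} deg -> {shadow_length(1.0, elevation):.2f} m")
# 15 deg -> 3.73 m ... 45 deg -> 1.00 m ... 75 deg -> 0.27 m
```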

 

 

Source: The stick has been inducted into the National Toy Hall of Fame.


Fact of the Day - DON'T MESS WITH TEXAS


Did you know... Three decades ago, Texas was facing an enormous problem: trash, as far as the eye could see, piled up along its scenic and city roadways. The cleanup was arduous and costly — by the mid-1980s, the Texas Department of Transportation (aka TxDOT) was spending nearly $20 million each year in rubbish removal along highways alone. To save money (and the environment), leaders of the Lone Star State knew they had to get trash under control, which they decided to do with a series of public service announcements. But little did TxDOT know that its cleanliness campaign would become larger than life. 

 

The iconic line, dreamed up by an Austin-based ad agency, initially launched on bumper stickers deposited at truck stops and fast-food restaurants. The first “Don’t Mess With Texas” commercial, which aired at the 1986 Cotton Bowl, homed in on Texans’ love for their land, telling viewers that littering was not only a crime but “an insult” to the state’s landscape. The phrase — spoken in that first commercial by Dallas-born guitarist Stevie Ray Vaughan amid a bluesy version of “The Eyes of Texas” — soon became a rallying cry for Texans. The spot was so popular that TV stations around the state received calls asking for it to be aired again. Within a year, TxDOT estimated that roadside litter had dropped by 29%. The ad campaign continued — featuring celebrities such as Willie Nelson, George Foreman, and LeAnn Rimes — and is credited with reducing highway trash by 72% in its first four years. The slogan has become only more popular over time, used at protests, declared by presidential candidates, and chanted at football games — all proof that state pride is held deep in the hearts of Texans.

 

A spot that was once the world’s largest landfill is now a park.
Humans have always generated trash, but how we’ve dealt with it has changed over time. Communities of the past often tossed their refuse out into the streets or in designated dumping sites. In fact, the sanitary landfills used today — where trash is compacted, then covered with dirt — didn’t emerge until 1937. About a decade later, New York supersized its sanitation system by creating the Fresh Kills Landfill, which covered 2,200 acres on Staten Island (about three times the size of the city’s famed Central Park). By 1955, the site was considered the world’s largest landfill, with barges delivering 28,000 tons of trash per day by the 1970s. The former dump site has since been redeveloped into Freshkills Park, partially opening to visitors in 2012 amid ongoing work that will continue through approximately 2036. It has also become home to wildlife, including more than 100 bird species.

 

 

Source: The phrase “Don’t mess with Texas” was created to discourage road littering.
