
Fact of the Day


DarkRavie


Fact of the Day - MICROSEASONS


Did you know.... Winter, spring, summer, fall — outside of the tropics and the planet’s poles, most temperate areas of the globe experience the four seasons to some extent, although how we choose to view those weather changes can differ from country to country. Take, for example, ancient Japan’s calendar, which broke the year into 72 microseasons, each lasting less than a week and poetically in tune with nature’s slow shifts throughout the year.

 

Japan’s microseasons stem from ancient China’s lunisolar calendar, which noted the sun’s position in the sky along with the moon’s phases for agricultural purposes. Adopted in Japan in the sixth century, the lunisolar calendar broke each season into six major divisions of 15 days, called sekki, which synced with astronomical events. Each sekki was further divided into three ko, the five-day microseasons named for natural changes experienced at that time. Descriptive and short, the 72 microseasons can be interpreted as profoundly poetic, with names like “last frost, rice seedlings grow” (April 25 to 29), “rotten grass becomes fireflies” (June 11 to 15), or “crickets chirp around the door” (October 18 to 22).

 

In 1685, Japanese court astronomer Shibukawa Shunkai revised an earlier version of the calendar with these names to more accurately and descriptively reflect Japan’s weather. And while climate change may affect the accuracy of each miniature season moving forward, many observers of the nature-oriented calendar find it remains one small way to slow down and notice shifts in the natural world, little by little.

 

Japan recently added more than 7,200 new islands to its territory.
Japan’s archipelago has four major islands and thousands of smaller ones, though a recount in 2023 found that there were far more islets than previously known. The island nation has recognized 6,852 islands in its territory since 1987, though advancements in survey technology led to the realization that there are many more — a staggering 14,125 isles in total. In the 1980s, Japan’s Coast Guard relied on paper maps to count islands at least 100 meters (328 feet) in circumference, though surveyors now realize many small landmasses were mistakenly grouped together, lowering the total number. Today, the country’s Geospatial Information Authority uses digital surveys to get a glimpse of the chain’s smaller islands for a more accurate count, while also looking for new islands created from underwater volcanoes. However, only about 400 of Japan’s islands are inhabited, while the rest remain undeveloped due to their size, rugged terrain, and intense weather conditions.

 

 

Source: The ancient Japanese calendar had 72 microseasons.


Fact of the Day - WHITE COAT


Did you know... Uniforms convey a sense of competency across professions ranging from delivery person and airline staff to chef and firefighter. The psychological implications may be even stronger when it comes to matters of health: According to one study published in the medical journal BMJ Open, doctors who don the traditional white coat are perceived as more trustworthy, knowledgeable, and approachable than those who minister to patients in scrubs or casual business wear.

 

The 2015-16 study drew from a questionnaire presented to more than 4,000 patients across 10 U.S. academic medical centers. Asked to rate their impressions of doctors pictured in various modes of dress, participants delivered answers that varied depending on their age and the context of proposed medical care. For example, patients preferred their doctors to wear a white coat atop formal attire in a physician's office, but favored scrubs in an emergency or surgical setting. Additionally, younger respondents were generally more accepting of scrubs in a hospital environment. Regardless, the presence of the white coat rated highly across the board — seemingly a clear signal to medical professionals on how to inspire maximum comfort and confidence from their patients.

 

Yet the issue of appropriate dress for doctors isn't as cut and dried as it seems, as decades of research have shown that those empowering white coats are more likely to harbor microbes that could be problematic in a health care setting. In part that’s because the garments are long-sleeved, which offers more surface area for microbes to gather — a problem that’s compounded because the coats are generally washed less often than other types of clothing. Although no definitive link between the long-sleeved coats and actual higher rates of pathogen transmission has been established, some programs, including the VCU School of Medicine in Virginia, have embraced a bare-below-the-elbows (BBE) dress code to minimize such problems. Clothes may make the man (or woman), but when it comes to patient safety, the general public may want to reassess their idea of how our health care saviors should appear.

 

Western doctors dressed in black until the late 1800s.
If the idea of a physician or surgeon wearing black seems a little morbid, well, that may have been part of the point in the 19th century. After all, the medical field had more than its share of undertrained practitioners who relied on sketchy procedures such as bloodletting, and even the work of a competent doctor could lead to lethal complications. However, Joseph Lister’s introduction of antisepsis in the 1860s dramatically cut the mortality rate for surgical patients, and with it, the perception of the possibilities of medicine underwent a major shift. While black had once been worn to denote seriousness, doctors began wearing white lab coats like scientists to demonstrate their devotion to science-based methodology, a sartorial presentation that also reflected an association with cleanliness and purity. By the turn of the century, the image of the black-clad physician was largely consigned to the remnants of an unenlightened age.

 

 

Source: People consider doctors more trustworthy when they wear a white coat.


Fact of the Day - SUGAR RUSH


Did you know... Sugar rushes might be a myth, according to scientists—but the science behind why sugar crashes happen is all too real.

 

We’ve all heard of the so-called “sugar rush.” It’s a vision that prompts parents and even teachers to snatch candy away from kids, fearing they’ll soon be bouncing off the walls, wired and hyperactive. It’s a myth American culture has clung to for decades—and these days, it’s not just a kid thing. Adults are wary of sugar, too.

 

Some of this fear is warranted—diabetes, the obesity epidemic—but the truth is, sugar doesn’t cause hyperactivity. Its impact on the body isn’t an up-and-down thing. The science is clear: There is no “sugar rush.” To find out how and why the myth started, we need to go back to well before the First World War—then pay a visit to the 1970s.

 

America’s Complicated Relationship With Sugar
According to cultural historian Samira Kawash, America has had a long, complex, love-hate relationship with sugar. In Candy: A Century of Panic and Pleasure, Kawash traces the turn from candy-as-treat to candy-as-food in the early 20th century. At that time, the dietary recommendations from scientists included a mix of carbohydrates, proteins, and fats, with sugar considered essential for energy.

 

Not everyone was on board: The temperance movement, for example, pushed the idea that sugar caused an intoxication similar to alcohol, making candy-eaters sluggish, loopy, and overstimulated. In 1907, the chief of the Philadelphia Bureau of Health estimated that the “appetite” for candy and alcohol were “one and the same,” Kawash writes. On the flip side, other scientists suggested that sugar from candy could stave off cravings for alcohol—a suggestion that candymakers then used in their advertisements.

 


 

While the debate about sugar as an energy source raged in America, militaries around the world were also exploring sugar as energy for soldiers. In 1898, the Prussian war office became the first to commission a study on the sweet stuff—with promising results: “Sugar in small doses is well-adapted to help men to perform extraordinary muscular labor,” early researchers wrote. German military experiments introduced candy and chocolate cakes as fortification for the troops, and the U.S. military added sugary foods to soldiers’ diets soon after. When American soldiers returned from World War I, they craved sweets, which “propelled an enormous boom” in candy sales that has lasted to this day, Kawash wrote on her blog, The Candy Professor. American advertisers framed candy as a quick, easy source of energy for busy adults during their workday.

 

As artificial sweeteners moved into kitchens in the 1950s, candymakers struggled to make their products appeal to women who were watching their waistlines. One industry group, Sugar Information Inc., produced a tiny “Memo to Dieters” pamphlet in 1954 designed to fit inside chocolate boxes. “Sugar before meals raises your blood sugar level and reduces your appetite,” it claimed. But by the 1970s, the sugar-positivity heyday had started to wane.

 

The Origins of the Sugar Rush Myth
The idea that sugar causes hyperactivity gained traction in the early 1970s, when more attention was being paid to how diet might affect behavior. One of the major figures studying the possible connection between diet and behavior was an allergist named Benjamin Feingold, who hypothesized that certain food additives, including dyes and artificial flavorings, might lead to hyperactivity.

 

He formalized this into a popular—yet controversial—elimination diet program. Though certain sugary foods were banned from the program for containing dyes and flavorings, sugar itself was never formally prohibited. Still, thanks in part to the Feingold diet, sugar started to become the poster child for the link between diet and hyperactivity.

 

It wasn’t until the late 1980s that serious doubts about sugar’s connection to hyperactivity began to be raised by scientists. As FDA historian Suzanne White Junod wrote in 2003 [PDF], the 1988 Surgeon General’s Report on Nutrition and Health concluded that “alleged links between sugar consumption and hyperactivity/attention deficit disorders in children had not been scientifically supported.” Despite “mothers’ mantra of no sweets before dinner,” she noted, “more serious allegations of adverse pediatric consequences … have not withstood scientific scrutiny.”

 

A 1994 paper found that aspartame—an artificial sweetener that had also been accused of inducing hyperactivity in children—had no effect on 15 children with ADHD, even though they had consumed 10 times more than the typical amount.

 

A year later, the Journal of the American Medical Association published a meta-analysis of the effect of sugar on children’s behavior and cognition. It examined data from 23 studies that were conducted under controlled conditions: In every study, some children were given sugar, and others were given an artificial sweetener placebo like aspartame. Neither researchers nor children knew who received the real thing. The studies recruited neurotypical children, kids with ADHD, and a group who were “sensitive” to sugar, according to their parents.

 


 

The analysis found that “sugar does not affect the behavior or cognitive performance of children.” (The authors did note that “a small effect of sugar or effects on subsets of children cannot be ruled out.”)

 

“So far, all the well-controlled scientific studies examining the relationship between sugar and behavior in children have not been able to demonstrate it,” Mark Wolraich, an emeritus professor of pediatrics at the University of Oklahoma Health Sciences Center who has worked with children with ADHD for more than 30 years and the co-author of that 1995 paper, told Mental Floss in 2018.

 

Yet the myth that consuming sugar causes hyperactivity hasn’t really gone away. One major reason is the placebo effect, which can have powerful results. The idea that you or your children might feel a sugar rush from too much candy isn’t unlike the boost you hope to feel from an energy drink or a meal replacement shake or bar (which can contain several teaspoons of sugar). The same is true for parents who claim that their kids seem hyperactive at a party. Peer pressure and excitement seem to be to blame—not sugar.

 

“The strong belief of parents [in sugar’s effects on children’s behavior] may be due to expectancy and common association,” Wolraich wrote in the JAMA paper.

 

It works the other way, too: Some parents say they’ve noticed a difference in their kids’ behavior once they cut out most sugar from their diets. This strategy, like the Feingold diet, continues to attract interest and followers because believing it works has an impact on whether it actually works or not.

 

Which isn’t to say there are absolutely no links between sugar consumption and poor health outcomes. A 2006 paper found that drinking a lot of sugary soft drinks was associated with mental health issues, including hyperactivity, but the study’s design relied on self-reported questionnaires that were filled out by more than 5,000 10th-graders in Oslo, Norway. The authors also noted that caffeine is common in colas, which might have a confounding effect.

 

In another study, conducted by University of Vermont professor of economics Sara Solnick and Harvard health policy professor David Hemenway, the researchers investigated the so-called “Twinkie defense,” in which sugar is said to contribute to an “altered state of mind.” (The phrase Twinkie defense comes from the 1979 trial of Dan White for killing San Francisco City Supervisor Harvey Milk and Mayor George Moscone. His lawyers argued that White had “diminished capacity and was unable to premeditate his crime,” as evidenced in part by his sudden adoption of a junk-food diet in the months before the murders. White was convicted of voluntary manslaughter.)

 

In their survey of nearly 1,900 Boston public high schoolers, Solnick and Hemenway found “a significant and strong association between soft drinks and violence.” Adolescents who drank more than five cans of soft drinks per week—nearly 30 percent of the group—were significantly more likely to have carried a weapon.

 


 

But Solnick told Mental Floss the study isn’t evidence of a “sugar rush.”

 

“Even if sugar did cause aggression—which we did not prove—we have no way of knowing whether the effect is immediate (and perhaps short-lived) as the phrase ‘sugar rush’ implies, or whether it’s a longer-term process,” she said. Sugar could, for example, increase irritability, which might sometimes flare up into aggression—but not as an immediate reaction to consuming sugar.

 

Harvard researchers are looking into the long-term effects of sugar using data from Project Viva, a large observational study of pregnant women, mothers, and their children. A 2018 paper in the American Journal of Preventive Medicine studied more than 1,200 mother-child pairs from Project Viva, assessing mothers’ self-reported diets during pregnancy as well as their children’s health during early childhood.

 

“Sugar consumption, especially from [sugar-sweetened beverages], during pregnancy and childhood, and maternal diet soda consumption may adversely impact child cognition,” the authors concluded, though they noted that other factors could explain the association.

 

“This study design can look at relationships, but it cannot determine cause and effect,” said Wolraich, who was not involved in the study. “It is equally possible that parents of children with lower cognition are likely to cause a greater consumption of sugar or diet drinks, or that there is a third factor that influences cognition and consumption.”

 

The Science of the Sugar Crash
Though the evidence against the sugar rush is strong, a “sugar crash” is real—but typically it only affects people with diabetes.

 

According to the National Institute of Diabetes and Digestive and Kidney Diseases, low blood sugar—or hypoglycemia—is a serious medical condition. When a lot of sugar enters the bloodstream, it can spike the blood sugar level, causing fluctuation, instability, and eventually a crash (a.k.a. reactive hypoglycemia). If a diabetic’s blood sugar levels are too low, a number of symptoms—including shakiness, fatigue, weakness, and more—can follow. Severe hypoglycemia can lead to seizures and even coma.

 

For most of us, though, it’s rare. Endocrinologist Dr. Natasa Janicic-Kahric told The Washington Post in 2013 that “about 5 percent of Americans experience sugar crash.”

 

You’re more likely to experience it if you do a tough workout on an empty stomach. “If one exercises vigorously and doesn’t have sufficient intake to supplement their use of calories, they can get lightheaded,” Wolraich said. “But in most cases, the body is good at regulating a person’s needs."

 

So what you’re attributing to sugar—the highs and the lows—is probably all in your head.

 

 

Source: That Sugar Rush Is All in Your Head—But Here’s Why It Happens


Fact of the Day - BODY TEMPERATURE


Did you know.... In 1851, German physician Carl Wunderlich conducted a thorough experiment to determine the average human body temperature. In the city of Leipzig, Wunderlich stuck a foot-long thermometer inside 25,000 different human armpits, and discovered temperatures ranging from 97.2 to 99.5 degrees Fahrenheit. The average of those temperatures was the well-known 98.6 degrees — aka the number you hoped to convincingly exceed when you were too “sick” to go to school as a kid. For more than a century, physicians as well as parents have stuck with that number, but in the past few decades, experts have started questioning if 98.6 degrees is really the benchmark for a healthy internal human temperature. 

 

For one thing, many factors can impact a person’s temperature. The time of day, where the temperature was taken (skin, mouth, etc.), if the person ate recently, their age, their height, and their weight can all impact the mercury. Furthermore, Wunderlich’s equipment and calibrations might not pass scientific scrutiny today. Plus, some experts think humans are getting a little colder, possibly because of our overall healthier lives. Access to anti-inflammatory medication, better care for infections, and even better dental care may help keep our body temperatures lower than those of our 19th-century ancestors. 

 

In 1992, the first study to question Wunderlich’s findings found a baseline body temperature closer to 98.2 degrees. A 2023 study refined that further and arrived at around 97.9 degrees (though oral measurements were as low as 97.5). However, the truth is that body temperature is not a one-size-fits-all situation. For the best results, try to determine your own baseline body temperature and work with that. We’re sure Wunderlich won’t mind.

 

Technically, humans can hibernate.
Many mammals — from the humble ground squirrel to the majestic grizzly — practice some form of hibernation, slowing down certain bodily functions to survive winters. Naturally, that raises a question: “Humans are mammals. Can we hibernate?” While the answer is slightly more complicated than it is for a pint-sized rodent, the answer is yes … with caveats. The main component of hibernation is lowering body temperature. When this occurs, the body kicks into a low metabolic rate that resembles a state of torpor, a kind of extreme sluggishness in which animals require little to no food. Because most of our calories are burned up trying to keep our bodies warm, de-prioritizing that requirement would essentially send humans into hibernation — but this is where it gets tricky for Homo sapiens. First, humans don’t store food in our bodies like bears do, so we’d still need to be fed intravenously, and second, sedatives would be needed to keep us from shivering (and burning energy). In other words, it would be a medically induced hibernation, but hibernation nonetheless. A NASA project from 2014 looked into the possibility of achieving this kind of hibernation for long-duration space travel, and while the findings weren’t put into practice, there were no red flags suggesting a biological impossibility. Today, NASA continues its deep sleep work by gathering data on the hibernating prowess of Arctic ground squirrels.

 

 

Source: The average body temperature is not 98.6 degrees.


Fact of the Day - THEY CALL THEM SOULS, WHY?


Did you know.... The slang has no official role in travel lingo, but there are plenty of good reasons aviation professionals keep using it.

 

When news of the Titanic disaster first broke in 1912, the International Herald Tribune reported that “of the...souls on board the great ship, only 675, mostly women and children, have been saved.” (The actual number of survivors was 705.) When United Airlines Flight 232 suffered a loss of hydraulic fluid in 1989, the pilot and an air traffic controller exchanged information about the number of “souls on board”—296 in all, 111 of whom perished.

 

When commercial travel casualties are either imminent or being reported, spokespeople and media members will often call the deceased “souls,” or “souls on board.” So will air traffic controllers and pilots when inquiring about those present on a plane encountering a high-risk situation. It can even be heard in relation to space travel. When the Columbia broke apart during reentry in 2003, New York Mayor Michael Bloomberg said that “The loss of the space shuttle Columbia and the seven souls on board is a startling reminder of the perils of space travel and the bravery and courage of our astronauts.”

 

So why do professionals refer to travelers as souls instead of passengers?

 

The Origin of Referring to Passengers as Souls
According to the National Air Traffic Controllers Association (NATCA), there is no official mandate by the Federal Aviation Administration (FAA) or in air traffic control guidance that requires aviation professionals to refer to passengers as souls. It is, however, something that air traffic controllers appear to use as part of their informal shorthand. A 1948 manual for air traffic controllers noted that “souls on board” was more traditional in nature than anything formal; the term can also be found periodically in official FAA documents.

 

“That term ‘souls on board’ doesn’t ring a bell insofar as the [Air Traffic Control Manual] is concerned,” retired controller Rod Peterson was quoted as saying by the NATCA in 2016. “However, it was something I learned ‘by legend’ in my air traffic control development.”

 

The phrase has been around a long time. Naval authorities used it at least as far back as the 1800s. In 1848, for example, the Ocean Monarch of Boston was destroyed by fire. Its captain noted in a statement to media that there were “380 souls on board,” with 229 surviving the catastrophe.

 

Using souls to describe ship occupants was probably reinforced by popular culture: A 1937 film starring Gary Cooper was titled Souls at Sea. The phrase eventually migrated to aviation in the 20th century, a result of shared lexicon between the industries. While soul is a synonym for individual, that’s not the only reason why the terminology persisted.

 


 

Why We Refer to Passengers as Souls
Soliciting the number of souls on board is usually an alarming sign: It means air traffic control is looking to confirm the number of people that might need to be rescued or recovered in the event of an aviation mishap.

 

There are a few pragmatic reasons aviation professionals use souls in communication about passengers. The number of passengers is distinct from the total number of occupants on a plane, which includes the pilots, flight attendants, other crew, and even airline professionals who might be hopping aboard in an unofficial capacity. Asking a pilot for the number of souls on board is brief; asking for the total number of people on board, including travelers and crew, is not.

 

It also avoids confusion when referencing the number of seats aboard an aircraft. While a seat is normally occupied by a person, it might also be taken by a companion animal or some kind of inanimate object like a musical instrument. (Some people also buy a second seat for additional room.)

 

As a colloquial term, souls may also have become tradition owing to the gravity implied by the phrase. Air traffic controllers, pilots, and ship captains have a sobering responsibility in keeping people safe. Invoking souls may be a reminder of that obligation, and a likely reason that even though it’s not an official edict, it continues to be part of an oral tradition.

 

 

Source: Why Do We Refer to Airplane and Ship Passengers as ‘Souls’?


Fact of the Day - DRINKS TASTE DIFFERENT


Did you know... Food and drink often taste different on an airplane, usually blander. But ginger ale maintains a crisp, dry flavor, and it has a reputation for tasting even better when enjoyed in the air. It all has to do with the way cabin conditions affect our taste buds. Humidity levels inside an airplane cabin generally hover around just 20%, though this can dip even lower. This dryness — combined with low cabin pressures — reduces oxygen saturation in the blood, which in turn lessens the effectiveness of some taste receptors.

 

A 2010 study commissioned by German airline Lufthansa found that typical cabin conditions inhibit our taste buds’ ability to process salty flavors by as much as 30% and sweet flavors by as much as 20%. And a 2015 study suggests that loud noises in your standard cabin affect the chorda tympani, a branch of the facial nerve, which also lessens the intensity of any sweet-tasting fare.

 

In the case of ginger ale specifically, passengers typically report that it tastes less sweet than normal in the air. However, while our taste buds may not be able to sense the sugar, the beverage still possesses a sharp, extra-dry flavor, which is often thought to feel more refreshing than ginger ale on the ground. The crispness comes from the slightly spicy nature of ginger flavoring. It makes ginger ale an especially popular beverage aboard planes, and many travel guides recommend ordering the drink in flight for its unique flavor.

 

The first in-flight meals were sold on a 1919 flight from London to Paris.
When the first scheduled commercial flights began in 1914, they lacked many modern amenities, including in-flight meals, which weren’t served until 1919 aboard a Handley Page Transport plane connecting London and Paris. On October 11, the company offered passengers boxed lunches containing sandwiches and fruit, which cost 3 shillings (equal to around $11 today).

 

In-flight dining made its way to United States airlines by the late 1920s, with Western Air Express helping pioneer the concept. It offered passengers meals containing fried chicken, fruit, and cake on flights between Los Angeles and San Francisco, though they were unheated and prepped prior to departure. In 1936, United Airlines became the first major airline to install galleys and ovens on planes, allowing crews to heat meals in flight for the first time.

 

 

Source: Ginger ale actually does taste different on an airplane.


Fact of the Day - WHY A BUCK?


Did you know... An old slang term for money has bucked the trend and remains relevant today.

 

There’s a lot of slang for American currency, from moolah to dough to greenbacks to dead presidents. (Though not all bills carry presidential faces: The $10 bill features Alexander Hamilton, technically making it a dead Secretary of the Treasury.) But the most pervasive example might be referring to cash as a buck. Gas is three bucks a gallon; it costs 12 bucks to see a movie; you give the pizza delivery driver eight bucks for a tip. So when and why did we start to refer to money in denominations of bucks?

 

Buck Through the Centuries
Buck is one of the more versatile words in the English language. Perhaps the oldest use is its role in Old English, where it was used then (as it is now) to describe a male goat or deer, among other male animals. It was co-opted circa the 14th century to describe a libidinal young man, and later was used to describe an ambitious person (whom we might now call a “young buck”). In the 1800s, it was also used as a slur against Black or Indigenous men.

 

One could also use the phrase buck up, meaning “dress well,” or say “buck up” in reference to having the motivation to get things accomplished.

 

From Buckskin to Buck
The precise etymology of buck as currency—specifically a dollar—is unclear, but there are theories. Because deer were known as bucks, their hides were called “buckskin,” which was a form of currency in the 18th century. According to Huffington Post, a mention of buck in this context can be found in a 1748 journal entry in which Pennsylvania Dutch pioneer Conrad Weiser values whiskey at “5 bucks.”

 

The Oxford English Dictionary dates buck in the context of a dollar to 1856, the earliest known printed mention. The Democratic State Journal in California made note of a crime in which “Bernard, assault and battery upon Wm. Croft, [deprived] the sum of twenty bucks.”

 

The slang term persisted into the 20th century. The OED notes this sample from McClure’s magazine in 1903: “A man ... passed around some gold watches…twenty bucks they cost you over the counter.”

 

The Value of a Buck
There was no direct conversion rate between one single buckskin and a denomination. That value depended on how thick the pelt was, and this varied from animal to animal. At the time of Weiser’s entry, the U.S. dollar didn’t even exist yet. But because buckskin did represent some form of monetary value, it became synonymous with printed money.

 

It makes sense, but so does another explanation: that buck was derived from sawbuck, slang for a $10 bill, so named because the Roman numeral X on early bills reminded people of a wood-chopping frame known as a buck.

 

However it came to be, the usage of buck meaning “money” has risen steadily from the mid-19th century. If you invoke the term, it’s likely most everyone will know what you mean—but if you want to be a little more creative, you can also opt for bread, bank, or clams.

 

Source: Why Do We Call a Dollar a “Buck”?


Fact of the Day - SPAGHETTO


Did you know... If you go into an Italian restaurant and order spaghetto, chances are you’ll leave hungry. That’s because “spaghetto” refers to just a lone pasta strand; it’s the singular form of the plural “spaghetti.” Other beloved Italian foods share this same grammatical distinction — one cannoli is actually a “cannolo,” a single cheese-filled ravioli is a “raviolo,” and one sandwich is a “panino.” Though this may seem strange given that these plural terms are so ingrained in the English lexicon, Italian language rules state that a word ending in -i means it’s plural, whereas an -o or -a suffix (depending on whether it’s a masculine or feminine term) denotes singularity. (Similarly, “paparazzo” is the singular form of the plural “paparazzi.”) As for the term for the beloved pasta dish itself, “spaghetti” was inspired by the Italian word “spago,” which means “twine” or “string.”

 

Despite pasta’s deep association with Italy, it’s far from an Italian invention. Though its precise origins are somewhat obscure, Arab traders are thought to have introduced pasta to Sicily sometime in the eighth or ninth centuries. Even pasta sauce isn’t originally Italian: Tomatoes were brought to Europe in the 16th century by explorers from the New World, with the first tomato sauce recipe appearing in a 1692 Italian cookbook written by chef Antonio Latini. More than 300 years later, spaghetti is a perennially popular dish, even if most of us haven't always known what to call it.

 

Thomas Jefferson helped popularize pasta in the United States.
Around the time he served as U.S. minister to France (1784–1789), future President Thomas Jefferson wrote, “The best macaroni in Italy is made with a particular sort of flour called Semola, in Naples.” Jefferson even tasked his secretary and diplomat William Short with tracking down a machine for making “maccaroni,” a term he used to describe pasta in general. Jefferson was known for offering pasta to his dinner guests during his presidency, and even had his own written recipe for an early form of mac and cheese that survives to this day. He was also known for serving White House visitors other European delicacies of the time, such as macaroons and ice cream. Though Jefferson was the famous face often connected to pasta’s growing popularity, his Black, enslaved cooks were the ones truly responsible for crafting the delicious dishes – among them James Hemings, Peter Hemings, Edith Hern Fossett, and Frances Gillette Hern.

 

 

Source: The name for a single spaghetti noodle is “spaghetto.”


Fact of the Day - EASTER BUNNY


Did you know... Hares were once linked to a Germanic Pagan goddess who never even existed.

 

We’re all familiar with the legend of the Easter Bunny—the magical lagomorph who delivers colorful chocolate eggs to children as a holiday treat. These days, we tend to think of the Easter Bunny as a rabbit, but the first written reference to the legend actually features a hare. In his 1682 essay “De Ovis Paschalibus” (“Concerning Easter Eggs”), Georg Franck von Franckenau describes German children searching for eggs supposedly laid by a hare—a ritual we would now readily identify as an Easter egg hunt.

 

There are other European Easter traditions involving hares, including eating their meat and hunting them. One report from England in 1620 describes a reward of “a calf’s head and a hundred of eggs for their breakfast, and a groat in money” for any young men of the parish who could catch a hare and present it to the parson by 1 p.m. on Easter Monday. Clearly, then, the involvement of hares in the celebration of Easter dates back many centuries—but where does the association come from?

 

The Folk Origins of Ostara
Perhaps the most popular origin myth for the Easter Bunny concerns the Germanic Pagan goddess of spring, Ostara, to whom hares were supposedly sacred. One story states that Ostara rescued an injured bird by transforming it into a magical hare, and, in gratitude, the hare now marks the goddess’s springtime festival by laying beautiful colored eggs—an ability carried over from its previous form. However, the provenance of Ostara’s legend has been widely questioned: “a goddess called Ostara isn’t known from ancient sources at all,” writes Stephen Winnick, a folklorist at the Library of Congress’s American Folklife Center.

 


As it turns out, she is far from an ancient deity. Winnick traces the Ostara myth only as far back as 1835, when it was invented by renowned folklorist and linguist Jacob Grimm (one half of the famous fairy-tale-collecting duo the Brothers Grimm). Grimm was inspired by the 8th-century writings of the Venerable Bede, which refer to an Anglo-Saxon goddess named Eostre whose festival was celebrated in the spring. He postulated that there must also be an equivalent German goddess and named her Ostara, arguing that the Christian Church had usurped her festival and retained its name.

 

The association between Ostara and hares wasn’t made until 1874, when the mythologist Adolf Holtzmann suggested it as a way to explain the popularity of hare-related Easter traditions. “The existence of Ostara’s cult in Germany was a conjecture made by a folklorist,” summarizes Winnick, “but her connection to rabbits or hares was even more of an academic stretch.” Despite this, discourse connecting her to the Christian celebration of Easter, and to the tradition of the Easter Hare, became commonplace in 19th-century Europe and still persists today.

 

A Springtime Symbol
While there is no suggestion of Ostara’s existence prior to Grimm’s claim, scholars do now believe Bede’s assertion that a goddess named Eostre was worshiped as a local deity in the south and east of Anglo-Saxon Britain. There is no strong evidence to indicate she was particularly associated with hares, but the animals were sacred to a number of other pre-Christian deities, including the Roman goddess Diana and the Celtic goddess Andraste.


 

Hares were generally revered in the period: Numerous amulets featuring hares have been discovered, zooarchaeological evidence implies it was verboten to eat them, and there is also some evidence to suggest their use in fertility rituals. Indeed, it is in part the high fecundity of hares that associates them with spring. As Winnick points out, hares and rabbits are not merely symbolic of the season, but are actually present during it: “the connection is not merely one of cultural convention,” he writes, “but rather exists in nature independent of culture.” The same is true of rabbits, eggs, and flowers. As such, it is reasonable to suggest that multiple cultures might independently elect to adopt these as seasonal emblems.

 

The precise origins of the Easter Bunny are in all likelihood lost to the mists of time. What we can all agree on, though, is that this candy-wielding hare makes for one fun—and delicious—holiday tradition.

 

 

Source: The Surprisingly Controversial Origins of the Easter Bunny


Fact of the Day - PACIFIC OCEAN


Did you know.... The largest and oldest ocean basin on Earth, the Pacific has roughly twice as much water as the Atlantic. Yet it didn’t receive the name we know today until the 16th century. On November 28, 1520, Portuguese navigator Ferdinand Magellan — after 38 days of weathering the treacherous waters of the strait that’s now named after him at the tip of southern Chile — became the first European to reach the ocean by way of the Atlantic. Happy to have the harrowing journey behind him, Magellan referred to this new ocean as “Mar Pacifico,” meaning “Peaceful Sea.” While the moniker made sense at the time, today we know that both the Pacific and Atlantic can be tumultuous at times.

 

Yet “Pacific” isn’t the only name this big blue expanse has been known by. In 1513 — seven years before Magellan glimpsed the Pacific — Spanish conquistador Vasco Nunez de Balboa led an expedition across the Isthmus of Panama and named the sea he found on the other side the far less poetic “Mar del Sur,” or the “South Sea.” However, the most authentic moniker for the Pacific Ocean may be the Hawaiian term “Moananuiākea.” Interestingly, this name — perhaps over a thousand years old — is closely related to the Maori “Te Moana Nui a Kiwa,” meaning the “Great Ocean of Kiwa” (Kiwa being a Maori guardian of the sea). So while “Pacific” is the name most of us now know, it’s certainly not the one used by the people who mapped and sailed the Pacific’s 63 million square miles for centuries before the Europeans arrived.

 

Ferdinand Magellan wasn’t the first person to circumnavigate the globe.
Most people learn in history class that Ferdinand Magellan was the first person to circumnavigate the globe during his famous voyage from 1519 to 1522, but the truth is a lot more complicated. For one, the famous (or infamous) explorer never actually finished the voyage from Spain to the Moluccas (Spice Islands), because he was killed in the Philippines in 1521. Another mariner on his expedition, Juan Sebastián del Cano, brought the Victoria, the last surviving vessel of Magellan’s fleet, back to Spain in September 1522. But even if Magellan had survived that skirmish, the first person to actually circumnavigate the globe may have been an enslaved individual named Enrique, whom Magellan had seized during the Portuguese conquest of Malacca in 1511. Eight years later, Enrique served as an interpreter on Magellan’s globe-trotting quest. After Magellan’s death, Enrique abandoned the mission only a few hundred miles short of Malacca. If he returned home in 1521 (we’ll likely never know), then he’d officially be the first person to ever travel the entire globe.

 

 

Source: The Pacific Ocean was named because Ferdinand Magellan thought it was “pacific,” or peaceful.


Fact of the Day - DE-EXTINCTION


Did you know... “Just making something look like an extinct animal isn’t the same as resurrecting it.”

 

Do you think you could live alongside dodos and woolly mammoths? Colossal Biosciences thinks you can. The biotech company, which claims it recently resurrected the dire wolf, focuses on “de-extinction,” or attempting to bring extinct animals back from oblivion.

 

De-extinction allegedly works by extracting DNA samples from an extinct animal’s bones, sequencing as much of that animal’s genome as possible, and then using a DNA editor like CRISPR to edit the genes of a related or descendant animal to be more like the extinct one. Scientists then use that edited DNA to create fertilized embryos, which are grown into what is said to be a resurrected species. It’s important to note that the de-extinction process is incredibly expensive, and can’t perfectly recreate the DNA of a species that has gone extinct. Scientists simply try to get as close as possible.

 

So, regardless of what Colossal and other biotech companies say about the process, it’s not possible at this point in history to bring back an extinct species—not even for the “new” dire wolf, which Ken Angielczyk, curator of fossil mammals at Chicago’s Field Museum, says is like “the as-seen-on-TV version of a dire wolf.” In that widely-reported case, Colossal scientists used a gray wolf genome to stand in for the extinct dire wolf’s and made changes to only 14 out of thousands of genes to make the “resurrected” creature look more like an ancient dire wolf.

 

“It’s like if you took a chimpanzee and edited its genome to be a little taller and hairless, and then said you brought back Neanderthals,” Angielczyk says. “Just making something look like an extinct animal isn’t the same as resurrecting it.”

 

Unless you have the extinct creature’s entire genome and some close living relatives to that creature—as in, not something that was a close relative to them thousands of years ago—you can’t really duplicate an animal from the past. Theoretically, it’s possible, Angielczyk says, but more than likely it won’t ever happen. Compared to a species’ original genome, he says, anything re-introduced would have an enormous number of differences.

 

Invasive Species in Today’s World
We also need to consider how those species being brought back would live in today’s world.

 

“We have to be careful about how to approach this in a conservation context,” Angielczyk says. “It’s morally better to conserve animals from the beginning instead of bringing them back.” 

If a species has gone extinct, our current world and environment are more than likely inhospitable to that species. Earth has more people now than when that species was around, which means the animal once had more space to live, more food to eat, and less interference from humans. Scientists would need to do a ton of research and preliminary experiments to ensure a newly re-introduced species could actually survive in the modern world. And once that species arrives again, who’s to say if we’d be able to successfully live in harmony with them, or they with us? Reintroducing an extinct creature could create an invasive species in the modern environment, since other animals would have evolved to fill the original species’ ecological niche after it died out.

 

Critics say that it’s more ethical (and easier, and cheaper) to preserve a living species’ environment and protect those creatures from harm. Ecologists have argued that our focus should be on saving endangered species rather than genetically engineering dire wolves or mammoths back from the dead. We already know extant species can survive and we can survive alongside them. 

 

Overall, Angielczyk and other scientists are skeptical about the de-extinction process, though he maintains some optimism. He notes that communicating these scientific advancements to the public should be done with caution.

 

“It’s interesting work and potentially valuable,” he says, “but [biotech companies] need to be more tempered in their communications to the public.”

 

 

Source: Does De-Extinction Actually Work?


Fact of the Day - EASTER


Did you know.... Easter is named after an ancient pagan goddess. Or is it?

 

The Spanish word for Easter is Pascua. In Italian, it’s Pasqua. French speakers have Pâques. All of these words derive from pascha—both Latin and Greek for Passover, the Jewish festival during which Jesus was supposedly resurrected.

 

Since English (unlike Spanish, Italian, and French) isn’t a Romance language, it’s not surprising that its word for the holiday bears no resemblance to those other three. Easter is often said to have been inspired by a different festival: that of the pagan goddess Eostre.

 

Eostre is a flowy-haired, flower-adorned woman with lots of rich lore. She represents rebirth, dawn, spring, and fertility; tales about her often feature rabbits and eggs. The general implication is that Christians took one long look at a pagan legend and shifted it wholesale onto their own holiday, right down to the name.

 

But the truth behind the word Easter is much murkier than that origin story suggests.

 

Bede Between the Lines


 

The oldest account of Eostre is found in the influential 8th-century work De Temporum Ratione, or The Reckoning of Time, by the Northumbrian monk and scholar Bede. According to him, the Old English term for the fourth month was “Eosturmonath … which is now translated ‘Paschal month,’ and which was once called after a goddess of theirs named Eostre, in whose honor feasts were celebrated in that month. Now they designate that Paschal season by her name, calling the joys of the new rite by the time-honored name of the old observance.”

 

With so little to go on, some scholars have questioned whether there ever was a deity named Eostre. Did Bede jump to conclusions about the meaning of Eostur, envisioning a goddess when it simply meant, say, “east” or “eastern”? It’s not impossible: Eostur likely shares a Germanic base with east, and that base is closely related to a batch of ancient words for dawn—owing to the sun’s rising in the east. We can’t say for sure that Eostur (or Eostre, as Bede rendered the term as a Latin moniker) was a goddess who personified the dawn of spring, or just a nod to the thing itself.

 

But there is circumstantial evidence to support Bede’s claim. For one thing, as historian Henry Mayr-Harting pointed out in The Coming of Christianity to Anglo-Saxon England, “it is possible that Bede’s father and almost certain that his grandfather could remember the heyday of Northumbrian heathenism.” In other words, Bede may have gotten his intel on Eostre’s festival from people who actually witnessed it.

 


 

We also know of Eosterwine, a.k.a. Easterwine, a 7th-century Northumbrian abbot whose name appears to have meant “Eostre’s follower.” Although “eastern friend” is another valid possibility, there’s a conspicuous lack of Anglo-Saxon names referring to northern, southern, or western friends. Meanwhile, several names do refer to followers or friends of a god or some other being—including Ingwine (Ing was a deity), Oswine (os meaning “god”), Freawine (“lord”), and Aelfwine (“elf”).

 

Not to mention that ancient history boasts a deep bench of dawn deities, from the Hindu goddess Usha (or Ushas) to Greek mythology’s Eos. In northwest Germany in 1958, remnants of ancient Roman altars were uncovered that bear an inscription to Matronae Austriahenae—apparently a group of mother goddesses whose name could be connected to Eostre. We don’t know if the goddesses themselves were related; all we know is that Eostre and the Austr- of Austriahenae are linguistically similar. All this to say that Eostre, if she existed, wasn’t the only goddess named for something eastern.

That’s a good way to sum up the etymology of the word Easter, too: It came from something eastern, be it a glorious deity or just an old root word. But believing Bede still leaves one question unanswered. If the only thing he told us about Eostre was her feast month, where did we get the rest of her lore?

 

Grimm Reaper


 

The short answer—as Richard Sermon explored in his book Easter: A Pagan Goddess, a Christian Holiday, & Their Contested History—is Germany. In the 17th century and beyond, German writers theorized pagan origins for their Easter-related terms and traditions. Easter in German is Ostern, which resembles a number of German place names—from Osterberg (berg meaning “mountain”) and Osterholz (holz is “wood”) to Osterndorf (dorf is “village”). Perhaps these were worship sites for Eostre, or “Ostera,” as one writer called her. Perhaps German Christians celebrated Easter with bonfires because their pagan progenitors had done the same to fete Ostera.

 

Jacob Grimm expanded this line of thinking in his 1835 book Deutsche Mythologie, describing “Ostara” as a goddess who “seems … to have been the divinity of the radiant dawn, of upspringing light, a spectacle that brings joy and blessing, whose meaning could be easily adapted to the resurrection-day of the christian’s God.” Grimm formed his portrait of Ostara from a mix of linguistics and knowledge of other dawn deities—not from any proof that people actually worshipped her.

 

Subsequent scholars added to Ostara’s legacy by using her to make sense of other German Easter customs. Adolf Holtzmann, for example, wrote in 1874 that Germany’s Easter hare (Osterhase) was “inexplicable” to him, “but probably the hare was the sacred animal of Ostara; just as there is a hare on the statue of [the Celtic goddess] Abnoba.”

 


 

By the early 20th century, Ostara had hopped from the realm of measured (albeit pretty speculative) academic theory into popular culture’s colorful imagination. Depictions proliferated of an ethereal and often vaguely Greco-Roman fairy goddess surrounded by Easter iconography. In 1903, one Baptist reverend wrote that Ostara’s festival “was originally observed in a most hilarious manner, by nude dancers who indulged in the foulest immoralities.”

 

These sources are now old enough that we tend to take them at face value; careful phrasing like Grimm’s “seems” and Holtzmann’s “inexplicable” and “probably” gets buried beneath the Ostara legends that their work helped inspire. In short, an ancient goddess may have given us the word Easter—but modern folks gave that goddess an identity.

 

Source: Why Is Easter Called “Easter”?


Fact of the Day - A FOOT WIDE


Did you know... If you suffer from claustrophobia, you might want to avoid the world’s narrowest street. Spreuerhofstrasse — located in Reutlingen, Germany — measures 1 foot, 0.2 inches at its tightest, and a meager 1 foot, 7.68 inches at its widest, at least when last evaluated for Guinness World Records in 2006. The 65-foot-long street is also limited vertically; those over 5 feet, 10 inches have to duck at the exit, and many who pass through are pelted with drips from overhead gutters. Despite those inconveniences, tourists flock to the record-holding passageway. 

 

Sandwiched between two buildings in Reutlingen’s oldest area, Spreuerhofstrasse was initially created not as a tourist attraction, but by a 300-year-old construction faux pas. In 1726, much of the city was destroyed by a fire, and residents rebuilding the area disregarded regulations for wider spaces between buildings that were meant to prevent future devastating blazes. For its first 100 years, Spreuerhofstrasse’s status as a street was debatable, but local lore suggests that in 1820 it received its official designation as a municipal street thanks to a slender town official who could easily squeeze down the alleyway. 

 

However, no one is sure how long Spreuerhofstrasse will be able to hold on to its record. Within the last decade, area officials have become concerned about the adjacent buildings, as their walls slowly close in on the street’s space. If Spreuerhofstrasse becomes too narrow to pass — or widens, in the case of demolitions — the street would lose its world record, possibly to another competing lane, like England’s 14th-century Parliament Street, which measures just 25 inches wide.

 

Salt Lake City has the widest streets of any major U.S. city.
Not all cities follow the same guidelines when it comes to designing their roadways. Take, for example, Salt Lake City, where the streets in the city’s heart are a hefty 132 feet wide. That’s at least double the width of streets in cities such as San Francisco and New York. Salt Lake City’s massive streets were inspired by Mormon religious leader Brigham Young; when Mormon pioneers arrived in Utah and began constructing the city in 1847, Young declared the streets should be wide enough for drivers to turn their wagons around without “resorting to profanity.” However, wide streets aren’t the easiest (or safest) for pedestrians when it comes to crossing, which is why city officials are looking to use some of that extra space for bike lanes and additional sidewalks.

 

 

Source: The world’s narrowest street is only a foot wide.


Fact of the Day - BURIED BONES


Did you know.... Dogs can tear up a lawn with an overwhelming desire to dig and bury things.

 

If you’ve ever found your dog’s favorite toy nestled between pillows or under a pile of loose dirt in the backyard, then you’ve probably come to understand that dogs like to bury things. Like many of their behaviors, digging is an instinct. But where does that impulse come from?

 

Why Dogs Like to Bury Food, Toys, and Other Objects
Cesar’s Way explains that before dogs were domesticated and enjoyed bags of processed dog food set out in a bowl by their helpful human friends, they were responsible for feeding themselves. If they caught a meal, it was important to keep other dogs from running off with it. To help protect their food supply, it was necessary to bury it. Obscuring it under dirt helped keep other dogs off the scent.

 

This behavior persists even when a dog knows some kibble is on the menu. It may also manifest itself when a dog has more on its plate than it can enjoy at any one time. The ground is a good place to keep something for later.

 

But food isn’t the only reason a dog will start digging. If they’ve nabbed something of yours, like a television remote, they may be expressing a desire to play. A dog may bury its own toys, too. They could be doing this because they feel possessive over the object or fear it’ll be taken away; they may be trying to hide it so other dogs (or even people) can’t steal it. Their desire to bury a toy or other household item could be a response to boredom or anxiety as well.

 

Do All Dog Breeds Like to Dig Holes?
Some dog breeds are more prone to digging than others. Terriers, dachshunds, beagles, basset hounds, and miniature schnauzers go burrowing more often than others, though pretty much any dog will exhibit the behavior at times. While there’s nothing inherently harmful about it, you should always be sure a dog in your backyard isn’t being exposed to any lawn care products or other chemicals that could prove harmful. And if you are worried about your pup digging too many holes in your backyard, there are several steps you can take to curb the behavior. You should also probably keep your remote in a safe place, before the dog decides to relocate it for you.

 

 

Source: Why Do Dogs Like to Bury Things?

  • Like 1
Link to comment
Share on other sites

Fact of the Day - BURNING CALORIES

excellent-movie-young-beautiful-girl-260

Did you know... We're all familiar with the feelings that come with watching a fright flick — the sense of dread that engulfs us as a character enters a foreboding place, ominous music building, etc. According to a 2012 study commissioned by the video subscription service Lovefilm, these heart-pounding moments can do more than cause a good old-fashioned scare, however. Of the 10 movies tested, half caused participants to burn at least 133 calories, more than the amount used up by a 140-pound adult on a brisk 30-minute walk.

 

Granted, this limited study was hardly robust enough to earn a write-up in a peer-reviewed journal. Yet the science behind the results is essentially valid, thanks to human hard-wiring that traces to when our primitive ancestors had good reason to fear the monsters lurking in the night. When exposed to a harrowing situation, our sympathetic nervous system triggers the "fight or flight" response, which sends adrenaline into the bloodstream, diverts blood and oxygen to muscles, and kicks heart activity into a higher gear. Add in the outward physical reactions often prompted by the scariest scenes, such as jumping back in your seat or instinctively reaching for a companion, and it's easy to see how sitting through The Shining (184 calories) or Jaws (161 calories) delivers results akin to sweating through a workout.

 

There are other benefits to putting ourselves through this sort of simulated danger, including the release of endorphins and dopamine, which allows us to feel relaxed and fulfilled after "surviving" the events witnessed on screen. Of course, not everyone is a fan of the frightening imagery in The Exorcist (158 calories) or Alien (152 calories), and researchers caution that stress can outweigh the gains for people who are genuinely repulsed by these movies. If health is your goal and the sight of blood makes you queasy, you're better off rising from the couch and getting your legs moving instead of watching someone else flee the clutches of a zombie.

 

Competitive chess players can burn up to 6,000 calories per day during a tournament.
If scary movies aren’t your cup of tea and you want another creative way to burn calories, then competitive chess may be your ticket. According to Stanford University researcher Robert Sapolsky, a chess player can go through 6,000 calories a day over the course of a tournament, about three times the daily amount expended by the average person. The reasons are largely the same as those previously mentioned — the heightened tension of a high-stakes game forces bodies into a state of energy-consuming overdrive. However, the effects are magnified by the behavior of participants, who often skip meals and endure sleepless nights as they obsess over strategy. As a result, top players have taken to training like professional athletes to prepare for the grueling toll of tournaments. Norway’s Magnus Carlsen, for example, partakes in an array of activities that include running, soccer, skiing, and yoga, a regimen that helped him reign supreme as the undisputed world chess champion from 2013 to 2023.

 

Source: Watching a scary movie can burn as many calories as exercise.

  • Like 1
Link to comment
Share on other sites

Fact of the Day - WHY THEY TURN RED

nova-scotia-lobster.jpg

Did you know.... It isn’t a result of burning them alive, in case the guilt was putting a damper on your delicious seafood feast.

 

If the fire-engine red hue of the lobster on your plate makes you painfully aware that it was boiled alive, think of it this way: The bright color is simply the result of a chemical reaction.

 

The usual greenish-blue shade that lobsters have in the sea serves them well in life, camouflaging them from the predatory eyes of cod, haddock, and other large fish that prowl the ocean floor. Anita Kim, a scientist at the New England Aquarium, explained to Live Science that this color results from the combination of two molecules.

 

One is astaxanthin, a bright red carotenoid that lobsters absorb by eating things that contain it. The other is crustacyanin, a protein that already exists in lobsters. When crustacyanin binds with astaxanthin, it twists the molecule into a different shape, which changes how it reflects light. So instead of red, live lobsters are blue.

 

Then, when you boil one of the tasty crustaceans, the heat causes the crustacyanin molecules to contort into new shapes. In doing so, they release the astaxanthin molecules, which rebound to their original shape and red color. Michele Cianci, a biochemist at Italy’s Marche Polytechnic University where the phenomenon was investigated, likened it to manipulating a rubber band with your hands. “You can impose any kind of configuration you want,” he told Live Science. “When you release the rubber band, it goes back to its own shape.”

 

The same thing happens with shrimp, which go from ghostly gray to pink when you cook them. How, then, do flamingos turn pink from eating raw, almost colorless shrimp? Crustacyanin releases its hold on astaxanthin during the flamingos’ digestion process just like it does when heated.

 

By the way, don’t feel guilty about having torn your lobster away from its one true love—they don’t really mate for life.

 

 

Source: The Reason Why Lobsters Turn Bright Red When You Boil Them

  • Like 1
Link to comment
Share on other sites

Fact of the Day - CALENDAR SYNESTHESIA

calendar-on-desk-colorful-sticky-260nw-2

Did you know... If someone were to ask what you did last August, you might open your calendar to jog your memory. But for others, thinking back to the past (or ahead to the future) conjures up vivid mental shapes that help them clearly picture the passage of time. Roughly 1% of the population can visualize time as complex spatial arrangements. It’s a phenomenon called “calendar synesthesia,” in which people “see” vivid manifestations of days, weeks, months, years, or even decades in the form of shapes and patterns. 

 

For example, they may see the months of the year as a circle that surrounds the body, with the current month right in front of them. Or they may visualize years as a straight line, with past years to the left and future ones to the right. Scientists are unsure about what causes calendar synesthesia — or any form of synesthesia, for that matter (such as “seeing” colors or music in the mind). What we do know is that the condition occurs when the stimulation of one sensory pathway (e.g., sight or sound) triggers the stimulation of another (e.g., the visualization of spatial imagery).

 

A 2016 study conducted by neuroscientist V.S. Ramachandran analyzed one particular subject who perceived calendars in a “V” shape written in Helvetica font. The subject reported that the calendar expanded or contracted based on where she stood, and she was also able to repeatedly trace consistent angles and lengths within this imaginary calendar using a laser pointer.

 

Another test subject from the study viewed months of the year as a Hula-Hoop, where December always passed through her chest. She was able to recount clear memories when looking left “toward” the calendar, though she had more difficulty remembering those details while looking “away” to the right. These tests led researchers to conclude there was “clear unambiguous proof for the veracity and true perceptual nature of the phenomenon,” and that calendar synesthesia is connected to parts of the brain responsible for processing visual information and recalling the past.

 

Between 2% and 4% of people can’t picture things in their mind.
Aphantasia is a harmless condition in which the brain is unable to conjure mental images. While many of us can imagine pictures in lucid detail, people with mild aphantasia can see only dim or vague representations of those objects, and some are unable to visualize anything at all.

 

According to a 2021 study, aphantasia affects 3.9% of the population. Other estimates claim 15% of those affected only experience the condition with their eyes closed. Many people are born with congenital aphantasia and may go their whole life without realizing anything is different. Others develop the condition later, usually due to an illness or injury, so the change is more apparent. Experts may diagnose aphantasia using a Vividness of Visual Imagery Questionnaire, which was created in 1973 to determine how the imagination differs from person to person.

 

Source: Around 1% of the population can “see” time.

  • Like 1
Link to comment
Share on other sites

Fact of the Day - ANIMAL VOCALS

pair-wild-laughing-kookaburras-dacelo-26

Did you know... Animal vocalizations can change depending on their environment.

 

Back in 1986, researchers Bob Seyfarth and Dorothy Cheney took infant rhesus and Japanese macaque monkeys and switched them shortly after birth. Each was placed in a socially similar but acoustically different environment. The question: Would the monkeys develop regional vocalizations that were contrary to how their species normally communicated? Can animals actually develop regional accents or language in the same way a Boston human will inevitably pronounce park as pahk?

 

How Environment Affects Animal Vocalizations
For decades, scientists have leaned on the idea that animal communication is more dependent on genetics than locale. A dog is not going to learn to meow just because it’s raised around cats. But it’s possible for animals to adopt more subtle inflections that are not necessarily native to their ancestry.

 

As BBC Science Focus points out, numerous studies have observed animals modifying their communication depending on their environment. Chaffinch birds, for example, don’t put the same flourishes on their songs when raised in isolation as they do when raised in a social setting, giving them a regional voice. Other songbirds change up their tunes based on what locals are doing: White-crowned sparrows will combine elements of several different songs in different ways depending on where they are.

 

The same has been demonstrated in goats. One 2012 study found that younger goats (kids) alter their bleating to match the bleats of goats they’ve just met in a social setting.

 

These are not accents as humans think of them: A bird won’t adopt a British lilt just because its owner has relocated from New York. But if an accent is defined as a distinctive manner of expression shaped by a region, then animals do indeed have accents.

 

Why Do Some Animals Have ‘Accents’?
Some animal experts theorize that certain species use accents to distinguish familiar associates from untrusted strangers. It’s not that a goat, or bird, should fear its peers; it’s that an unfamiliar sound may signal predators, and identifying threats is key to the survival of a species.

 

As for those switched-at-birth primates: Typically, rhesus monkeys like to make a noise known as a gruff when playing, while Japanese macaques usually make a cooing sound. Both species can use the same noises, just in different contexts. But their foreign environment made virtually no difference in their vocalizations: As rhesus monkeys gruffed, Japanese macaques cooed. Put another way, dropping a monkey in Boston isn’t going to suddenly get them grahffing.

 

 

Source: Do Animals Have Regional Accents?

  • Like 1
Link to comment
Share on other sites

Fact of the Day - EEL BLOOD

Monster-Fish.jpg

Did you know.... Eating eel is common around the world, especially in Japan, where it’s often found in sushi. But whether it’s freshwater or marine eel, the animal is always served cooked, because toxins found in its blood can cause extreme muscle cramping if consumed by humans. This cramping can affect your body’s most important muscle — the heart — which is why eating raw eel can be fatal. Luckily, when eels are cooked, those deadly toxins break down and the animal becomes safe to consume. This is good news for chefs, since eel provides a rich taste similar to squid but with a softer texture. 

 

Although eel blood is a particularly dangerous fluid, that didn’t stop French physiologist Charles Richet from experimenting with the stuff in the early 1900s. Inspired by fellow countryman Louis Pasteur and his discoveries in immunology, Richet experimented with a toxin found in eel blood serum and discovered the hypersensitivity reaction known as anaphylaxis. “Phylaxis, a word seldom used, stands in the Greek for protection,” Richet said during a lecture after receiving the Nobel Prize for his work in 1913. “Anaphylaxis will thus stand for the opposite.” So while the everyday eel may be a slippery, slimy, and all-around unappealing animal to some, it holds a distinguished position in the annals of both scientific history and culinary delight.

 

Electric eels inspired the world’s first battery.
From smartphones to electric cars, today’s world is powered by batteries, and it’s all thanks to electric fish and one stubbornly curious Italian chemist. Near the end of the 18th century, Alessandro Volta wanted to see if he could artificially recreate the electric organs found in electric eels (which are technically not eels) and rays. These organs look like stacked cells that closely resemble a roll of coins, and are used to stun potential prey with up to 1,000 volts. Volta tried to mimic this structure by stacking sheets of various materials to see if he could similarly produce electricity. All of his experiments failed, until he stumbled across a winning combination: alternating copper and zinc disks separated by paper soaked in salt water. While Volta originally named the world’s first battery an “artificial electric organ,” he actually discovered a wholly separate mechanism for creating electricity. Instead, fishes like eels use a process similar to how human nerves transmit electricity, but on a much larger scale. Yet because of Volta’s happy electrochemical accident, you can read these words on your favorite battery-powered, eel-inspired device.

 

 

Source: Eel blood is poisonous to humans.

  • Like 1
Link to comment
Share on other sites

Fact of the Day - A GOOD TIME

OIP.ao8n7xPCv7scJOAlTfaxIQAAAA?rs=1&pid=

Did you know... Henry de la Poer Beresford, the 3rd Marquis of Waterford, had a little something to do with it. Or did he?

 

There’s an old etymological folk tale that claims the phrase painting the town red—meaning “to have a boisterously (or even violently) good time”—alludes to an actual event from the early 1800s, in which an unruly English nobleman went quite literally to town, armed with a can of red paint.

 

The 3rd Marquis of Waterford’s Wild Night
The story goes that on April 6, 1837, Henry de la Poer Beresford, the 3rd Marquis of Waterford, spent a drunken day hunting and gambling with a band of his aristocratic companions at the Croxton Park races in Leicestershire before heading off to the nearby town of Melton Mowbray for food (and yet more drinks). At around 2 a.m., the group arrived at a tollgate on the outskirts of the town, but were refused entry on account of their drunkenness. With nothing to keep them entertained, the gang decided instead to find their own fun, and ultimately embarked on a riotous spree in Melton Mowbray in the early hours of the morning.

 

The marquis and his crew rode around the outskirts of town to gain entry by another route, returned to the tollhouse that had turned them away, and began boarding up its windows and doors. The hapless tollkeeper inside was awoken by the noise and tried to fire his gun at them, but he forgot to add powder to the barrel, so the gang got away.

 

Back in the town square, the marquis and his friends began destroying flowerpots and absconding with door knockers, overturned a caravan (with some poor victim asleep inside), and tore down the local Red Lion pub sign before tossing it into a canal. Then, they somehow got their hands on a can or two of bright red paint, which they began daubing all over the local buildings—even going so far as to climb on one another’s shoulders to reach the upper floors of another local pub, The White Swan. When the town watchman attempted to intervene to stop the vandalism, he too supposedly received a coating of paint.

 

By morning, the town was in disarray—and the marquis and his companions found themselves on the wrong side of the law. Although it took several months to bring them and the case against them to court, the revelers were eventually fined £100 each for their night of debauchery and vandalism.

 

But is this tale of someone literally “painting a town red” the origin of this expression? The events of the night of April 6, 1837, are well documented, with contemporary accounts and local court records detailing everything that happened. Such a night of unruliness was by no means out of character for the Marquis of Waterford, either; even his entry in the Oxford Dictionary of National Biography records him as a “reprobate” who stole from Eton College, was asked to leave Oxford University (after which “he was to be found most frequently at the racetrack, on the hunting-field, or in the police courts”), and had several questionable habits—including challenging strangers to fights, tipping over carts, and smashing windows. So extraordinary was Waterford’s behavior, in fact, that when an eccentric fire-breathing acrobat known as Spring-Heeled Jack began terrorizing Victorian London, the marquis eventually landed on the list of suspects.

 

But as strong as the evidence may be that the marquis is the origin of painting the town red, the link is far from certain—not least because it seemingly took another five decades for the phrase to find its way into print.

 

Painting the Town Red in Print
Considering that the Marquis of Waterford’s night of rioting took place in 1837, it seems odd that, per the Oxford English Dictionary, one of the earliest known references to the expression comes from a newspaper printed not in rural England, but in Stanford, Kentucky—and not until 1882, when the Semi-weekly Interior Journal wrote that “He gets on a high old drunk with a doubtful old man, and they paint the town red together.” (The phrase appeared in other newspapers in the state as early as 1880.) Were paint the town red really a regional British English invention, we would expect to find some evidence of it from the UK sometime between 1837 and 1882.

 

So despite the obvious overlaps, it’s possible that the Marquis of Waterford’s night of “painting the town red” is nothing more than a coincidence. Which raises the question: If that’s the case, where else might this expression have come from?

 

There are plenty of theories. One suggests that the phrase painting the town red somehow refers to the red-light districts found in American frontier towns. Another is that it somehow alludes to the city of Jaipur in India, whose buildings were literally painted pink in 1876 for a visit by the Prince of Wales (the future King Edward VII).

 

Then again, perhaps paint the town red is just a quirk of English slang: paint, or more specifically nose paint or nose rouge, was mid-19th-century slang for drink, based on the image of a drunkard’s face flushing red. Did painting the town red emerge as some kind of play on that? In the absence of any more evidence than is currently available, it’s certainly a possibility.

Source: Why Is Having a Good Time Called “Painting the Town Red”?

  • Like 1
Link to comment
Share on other sites
