
Fact of the Day


DarkRavie


Fact of the Day - THE GREAT LAKES


Did you know.... The Great Lakes — Erie, Huron, Michigan, Ontario, and Superior — hold one-fifth of all the fresh water on the Earth’s surface. Their combined coastline extends for over 10,000 miles. Each year, they attract several million tourists from the U.S. and around the world. But those facts are pretty basic. Whether it’s pirates, shipwrecks, or Babe Ruth’s first official home run, these fascinating tidbits are a bit less straightforward.

 

1. Lake Michigan Once Had a Pirate Problem


Forget the Pirates of the Caribbean: The Great Lakes had their own share of buccaneers who patrolled the dangerous waters and terrorized lake traffic, mainly from the mid-1800s into the early 1900s. But instead of gold, lumber and alcohol were the main prizes to be won. One of the more famous characters was “King” James Jesse Strang, a self-proclaimed religious leader (and looter). “Roaring” Dan Seavey was another — the only one to actually face charges of piracy on the Great Lakes. The former Navy sailor set false lights along the coastline, lured ships to their doom, and plundered the wreckage. Fortunately for locals, as the population along the Great Lakes grew in the 20th century, the pirate problem eventually faded away.

 

2. The Great Lakes Basin Is Home to More Than 30 Million People

The Great Lakes hold about 84% of North America’s surface fresh water and thus play a vital role in the agricultural, power, and transportation industries. Some 34 million people live within the Great Lakes Basin, representing almost a third of the Canadian population and nearly 10% of the U.S. population. Together, the five lakes form a key part not just of North America’s cultural and geographic heritage, but also its technological and economic future.

 

3. Only Lake Michigan Is Completely Within the U.S.

The Great Lakes border eight states — Illinois, Indiana, Michigan, Minnesota, New York, Ohio, Pennsylvania, and Wisconsin — and the Canadian province of Ontario. Lake Michigan is the only one of the Great Lakes that does not touch any Canadian territory. The lake’s name derives from the Ojibwa word mishigami, which means “large lake.” The state of Michigan was named after the lake, and it’s often nicknamed the “Great Lakes State” since it is the only state to touch four of the five lakes.

 

4. The World’s Largest Underground Salt Mine Is Under Lake Huron

What is now the Goderich Salt Mine was discovered by accident in 1866. A local flour mill owner was drilling for oil in the hope of striking it rich, and instead found rock salt, the first bed ever found in North America. Much of the mine is located some 1,800 feet below Lake Huron. That’s as deep as Toronto’s CN Tower is tall. The rock salt is shipped along the St. Lawrence Seaway before being distributed across the country for use in icy weather. It is now the world’s largest underground salt mine, covering an area of 2.7 million square meters.

 

5. It’s a 6,500-Mile Journey to Drive Around All the Lakes


To see as much as you can of the Great Lakes without actually setting foot on a boat, consider the Circle Tour. In 1988, the Great Lakes Commission created a scenic driving route that encircles all five lakes, passing through eight states and Ontario. To complete the entire Circle Tour would mean driving a total of 6,500 miles, but you can break it into smaller chunks by choosing to circle just one of the lakes. If the Lake Superior tour (1,287 miles) seems a touch too long, try the routes around Lake Michigan or Erie, which are each closer to 600 miles. Just remember to allow plenty of time to get out and enjoy the view.

 

6. Legend Has It That Babe Ruth’s Ball Is in Lake Ontario

According to baseball legend, Babe Ruth’s first official home run sent a ball straight into Lake Ontario, where it remains to this day. In 1914, Ruth joined the Providence Grays, and in September of that year, the team traveled to Hanlan’s Point, Toronto, to play the Toronto Maple Leafs. In the sixth inning, he hit a home run, sending the ball far out of the stadium. What really happened to it is unknown. Most likely, a fan found it, or it was lost over time. However, a persistent local legend is that Ruth hit the ball clean into the lake — and it might one day be washed to shore.

 

Click the link below ⏬ for more facts about the Great Lakes

 

 

Source: Facts You Might Not Know About the Great Lakes


Fact of the Day - TEAM NAMES


Did you know.... You don’t have to actually watch sports to be aware of the country’s most famous teams — or wonder how they got their names. While some are obvious (the Boston Red Sox wear red socks), others are anything but. If you’ve ever wondered what a knickerbocker is or what the 2020 World Series champions have been “dodging” all these years, read on for the story behind six teams’ unique names.

 

1. New York Knickerbockers


Though almost always called the Knicks these days, New York’s oldest basketball team is still officially known as the Knickerbockers. If you don’t know what a knickerbocker is, you’re hardly alone — the team even has an explanation on its NBA page. History buffs will remember that New York was settled by the Dutch and was even known as New Amsterdam for a time; the “knickerbocker” name is in honor of that history. The term refers not only to the distinct style of pants worn by those settlers but also to the pseudonym Washington Irving used for his 1809 book A History of New York From the Beginning of the World to the End of the Dutch Dynasty: Diedrich Knickerbocker. At the time, the word was used as an affectionate term for both New Yorkers in general and the settlers’ descendants in particular.

 

2. Green Bay Packers


Corporate sponsorship is nothing new. Just ask the NFL's third-oldest franchise, which celebrated its centennial in 2019 and has been winning championships since long before the Super Bowl became football’s top prize. The team was co-founded in Wisconsin by George Whitney Calhoun and Earl “Curly” Lambeau, the latter of whom struck a deal with the company he worked for at the time: The Indian Packing Company would provide $500 for uniforms, equipment, and the right to use its athletic field, and in return, Lambeau would name his team the Packers. It was quite the bargain. (For context, SoFi recently paid $400 million for the naming rights to the new stadium where the Los Angeles Rams and Chargers play.) Alas, the meat-packing company ceased to exist just two years later, when it was absorbed by the Acme Packing Company — whose name briefly appeared on team uniforms in 1921 — but its legacy lives on through the Packers to this day.

 

3. St. Louis Blues


Lots of teams are named after fierce animals and local landmarks. Not many are named after songs. The rare — and possibly only — exception would be the St. Louis Blues, a hockey team whose moniker is derived from W.C. Handy’s song of the same name. First recorded in 1914, the classic tune has been covered by everyone from Louis Armstrong and Bing Crosby to Dizzy Gillespie and Bessie Smith. Blues owner Sid Salomon Jr. chose it as the team's namesake because "no matter where you go in town there's singing. That's the spirit of St. Louis." Unlike most expansion teams, the Blues were instantly successful — they made it to the Stanley Cup Finals in 1968, 1969, and 1970, but were swept in all three series. Don’t feel too bad, though — they finally won the big one in 2019.

 

4. Los Angeles Dodgers


Not unlike the Utah Jazz, the Dodgers trace their name to their original city — Brooklyn, where the team was founded as the Grays (and later the Bridegrooms) in 1883. Writers began referring to them as the "Trolley Dodgers" in 1895, when trolley cars became ubiquitous in the borough. At the time, the subtle art of evading those vehicles was as much a pastime in Brooklyn as playing baseball. The team officially adopted the nickname and became the Dodgers in 1932, ultimately keeping the title even after their 1958 move to Los Angeles, despite now being in a city that isn’t exactly known for its public transportation. Although the name sounds quaint, historical context reminds us that it had a far different connotation at the time. “In the 1890s, the electric trolley terrified many New Yorkers,” Joseph P. Sullivan wrote in his essay “The Terror of the Trolley.” “The electric streetcar was much faster than a horse streetcar and caused many accidents. In Brooklyn especially, the trolley frequently killed or maimed young children. As a result, the electric trolley became a symbol of the chaotic nature of modern, urban life.”

 

5. Indiana Pacers


When basketball came to Indianapolis in 1967, it was probably inevitable that the new franchise would draw inspiration from the city’s most famous event: the Indy 500. Indiana’s capital and most populous city has long been synonymous with the annual race, which was established in 1911 and is billed as the Greatest Spectacle in Racing to this day. Among the Indy 500’s many traditions is the pace car, which has been used in the race since its very first edition. The pace car’s purpose is both ceremonial and practical: Its appearance on the track signals a caution period during which racers aren't allowed to pass either it or the competitor in front of them, often so that safety crews can clear obstructions from the track, or simply until it’s safe to drive at full speed again. It’s considered an honor, as well as an advertising opportunity, for a manufacturer to provide the Indy 500’s pace car — the vehicle will be seen by millions, after all.

 

6. San Francisco 49ers


If you aren’t up to date on your California history, the number 49 might not carry much significance. But there's a reason it's called the Golden State, and that reason is the gold rush that began in 1848 and reached its peak in 1849. The California Gold Rush brought some 300,000 people to the state over the course of seven years, with hopeful prospectors becoming known as forty-niners. Formed nearly a century later in 1946, San Francisco's first major sports team took its name from those prospectors. Seventeen years later, the Philadelphia 76ers followed suit by naming themselves after the year America declared its independence from Great Britain.

 

 

Source: How 6 Sports Teams Got Their Distinctive Names


Fact of the Day - AUGUST


Did you know.... In the Northern Hemisphere, the month of August means fun in the sun — the last hurrah of summer before “back to school” rolls around and the rush toward Halloween and the winter holidays picks up steam. There are a lot of interesting tidbits about August, so grab some sunscreen and a Popsicle while we share six of them in honor of our (now) eighth month.

 

1. August Wasn’t Always This Way


August is a month that knows its way around a calendar. Not only was it not initially the eighth month, but it also didn’t always have 31 days. The Roman calendar borrowed heavily from complicated Greek lunar calendars when it first began; the Roman year originally had 10 months containing 304 days total, with the new year commencing on the first of Martius, the month we now call “March.” Sextilis (which eventually became August), originally the sixth month, had 29 days. Subsequent reforms added two additional months, bumping some month names to spots that no longer agreed with their new position in the calendar. (For example, “September” means “the seventh month,” but it is now the name of the ninth.) Some of these inconsistencies remain. Julius Caesar (namesake of the month July) instituted further calendar reforms, eliminating leap months and declaring that most years contain 365 days (except for leap years). When the Julian calendar was introduced in 45 BCE, Sextilis got 31 days. Rome’s first emperor, Julius’ great-nephew Augustus Caesar, renamed Sextilis “August” — by then the eighth month — in honor of himself in 8 BCE.

 

2. August Begins With Lammas


“After Lammas Day, corn ripens as much by night as by day,” or so goes the saying. In the British Isles and northern Europe, August is a month for bringing in the harvest of summer. The first of August is the holiday of Lammas, a Cross-Quarter Day that marks the halfway point between the summer solstice and the autumnal equinox. The Celts celebrated Lammas as Lughnasadh, while the early Christians transitioned the pagan rites into a “loaf mass,” where villagers took loaves baked with grains from the first harvest to be blessed.

 

3. August Is Filled With Stars


The so-called “dog days of summer” aren’t named that because of hot dogs, but because between July 3 and August 11, the sun rises and sets with Sirius. The brightest star visible from Earth, Sirius is part of the constellation Canis Major (“Greater Dog”) and is often referred to as “the Dog Star.” (August 26, however, is National Dog Day — so every dog does have its day in August.) Sirius was worshipped as the goddess Sopdet in ancient Egypt, as its position in the night sky predicted the flooding of the Nile River. Sirius isn’t the only star show happening in the August sky — the Perseid meteor shower is at its peak, and the Kappa Cygnids also make an appearance.

 

4. August Is Filled With Famous Days


August is an eventful month, packed with anniversaries both celebratory and sad. The Genoese explorer Christopher Columbus and a crew of 90 men set sail from Spain on August 3, 1492, arriving in the Americas in October of the same year. The world gasped on August 22, 1911, the morning after Leonardo da Vinci's masterpiece the “Mona Lisa” was stolen from the Louvre. (It was recovered two years later.) On August 6 and August 9, 1945, U.S. forces detonated two atomic bombs over the Japanese cities of Hiroshima and Nagasaki — the only use of nuclear weapons in war. And in 1963, more than a quarter of a million people gathered in Washington, D.C., to hear Reverend Martin Luther King Jr. give his “I Have a Dream” speech on August 28.

 

5. August Is a Big Month for Volcanoes


Although modern research may change the date, history has long recounted that the apocalyptic eruption of southern Italy’s Mount Vesuvius (near present-day Naples) occurred on August 24, 79 CE. The volcano’s fury killed between 13,000 and 16,000 people, completely destroying the towns of Pompeii and Herculaneum. Much better documented is the August 26, 1883 eruption of Krakatoa, in Indonesia’s Sunda Strait. Although the island of Krakatoa was uninhabited, the resulting fallout and tsunamis caused the deaths of around 36,000 people, making it one of the deadliest volcanic events in recorded history. In August 2022, Iceland’s Fagradalsfjall volcano erupted near Reykjavik. And olive-green peridot, one of August’s birthstones, is forged in the fires of volcanoes.

 

6. European Cities Put Out the “Gone Fishin’” Signs in August


Europeans enjoy a generous amount of vacation time, and August is a favored time for residents of major cities to escape overheated streets — and the tourists who crowd them. August also coincides with school holidays, so locals go on vacation (preferably to the shore or cooler mountains) right along with foreign visitors. While some services will remain open to cater to tourists, many of the best restaurants and shops will simply shut their doors so staff can go off on their own happy holidays.

 

 

Source: Absorbing Facts About August


Fact of the Day - GEMSTONES


Did you know.... Gemstones are fascinating in appearance alone — these jewels are, after all, designed to be eye-catching — but behind them is a story to suit every interest, whether you’re an armchair geologist or just love pretty things. Astronomy buffs can marvel at the diamonds sparkling throughout the cosmos. For mythology buffs, there’s a teetotaling origin story that will change the way you look at amethysts. And if you have opinions on birthstones, wait until you hear how they evolved. These seven facts might just change the way you see gemstones forever.

 

1. Rubies and Sapphires Have the Same Base Mineral

Corundum is a colorless mineral that’s the second-hardest natural substance on Earth, just behind diamonds. While the average person probably doesn’t recognize this aluminum oxide in its pure form, with just a few impurities it becomes a household name. With a touch of chromium, it becomes a ruby, and just a few hints of iron and titanium turn it into a sapphire. This isn’t a unique phenomenon. Variations of the gemstone beryl, a beryllium aluminum silicate, include emerald, morganite, and aquamarine. Some garnets are called hessonite, rhodolite, and andradite. Amethyst is a kind of quartz. Sought-after color variations of gems like diamonds and topaz also come from impurities. Contrary to what you might think, impurities aren’t always a bad thing!

 

2. The Sun Could Someday Turn Into a Giant Diamond


Right now, the core of our sun is a hotbed of nuclear fusion. While some stars explode in a giant supernova and become neutron stars or black holes, our sun is a medium-mass star. After several billion years, it will swell into a red giant, then leave behind its core as a white dwarf. Here’s where it gets interesting: White dwarfs are one of the highest-gravity environments in the galaxy, with a gravitational field that can be 350,000 times that of Earth’s. This compresses the oxygen and carbon of its core, causing it to crystallize. Diamonds are pure carbon that has crystallized under high pressure. (The ones on Earth formed deep in the planet’s mantle and were brought to the surface in ancient volcanic eruptions.) So while there’s some oxygen mixed in, the core of a white dwarf is essentially a diamond. After decades of theory, in 2013 scientists actually observed this phenomenon in the cosmos. Astronomers at the Harvard-Smithsonian Center for Astrophysics identified a 10 billion-trillion-trillion-carat core just 50 light-years from Earth, in the constellation Centaurus. And in 2014, astronomers announced that they’d found an 11 billion-year-old crystallized dwarf the size of Earth.

 

3. Modern Birthstones Evolve Based on Marketing

As a concept, birthstones date back pretty far, from the Christian Bible to the mystical gemstones of Hindu tradition. The tradition of wearing a stone for the month you were born began to gel in 16th-century Poland or Germany, likely due to increased trade between Europe and Asia. While these traditional gemstones certainly overlap with modern ones, there are some notable changes: March, for example, was once bloodstone, not aquamarine. In 1912, however, the birthstone list became a wildly successful marketing tactic. The National Association of Jewelers standardized the 12 birthstones by month, choosing stones that most jewelers could produce and sell easily. That last part is key, and specific birthstones have continued to evolve over the last century. Many classic, perennial favorites have stayed in place — diamonds for April and sapphire for September, for example. Some months shifted based on color: December has been assigned a wealth of blue stones, from the traditional turquoise and lapis lazuli to the more modern blue zircon, blue topaz, and tanzanite. Others, like October, have shifted significantly. October’s traditional birthstone is the opal, which is still widely recognized. But in 1952, the Jewelers of America swapped in pink tourmaline to match the rest of the transparent list. As recently as 2016, Jewelers of America added spinel to the August list as part of a marketing campaign.

 

4. Amethysts Were Used as Ancient Drinking Protection


Amethysts were so widely used as wards against intoxication or hangovers in ancient times that the practice gave them their name, which comes from the ancient Greek for “not drunk.” The actual mythology around the amethyst varies, but many of the stories involve Dionysus, the Greek god of wine, grapes, and drunkenness. In one version, Dionysus becomes enamored with a mortal woman named Amethystos, who was, to put it mildly, not into it. She prayed to her preferred goddess, Artemis, to help keep her chaste, and in response Artemis turned her into a statue of clear quartz. Dionysus either poured, spilled, or cried wine onto it, staining it purple. So in 2021, when archaeologists unearthed an amethyst ring from the former site of — what else? — the largest known winery of the Byzantine era, they speculated that its former owner could have been trying to ward off the worst effects of drinking. The team, which had been excavating a site in modern-day Yavne, Israel, said that it’s impossible to know for sure.

 

5. Garnets Were Named for Pomegranates

While it’s not quite as interesting as “not drunk,” the name “garnet” also has a somewhat decadent origin. In the 13th century, a German theologian named the gem from the Latin word granatus, which means “grain” or “seed,” in this case referring to pomegranate seeds. He wasn’t wrong: A small, oval garnet could absolutely be mistaken for a snack in the right context.

 

6. Not All Gemstones Are Stones


While most things we consider “gemstones” are minerals, in practice the distinction has less to do with chemistry and more to do with aesthetics. Calcareous concretions (pearl-like growths from certain mollusks) and pearls are the only gems to grow within living creatures. Precious coral comes from the hardened skeleton of dead coral polyps. Jet is fossilized wood. Amber is fossilized tree resin, and is one of the earliest gemstones to be carved for jewelry. All of these make fine, eye-catching stones, even if they’re missing the crystalline glint of an emerald.

 

7. The First Lab-Grown Diamonds Appeared in the 1950s

Lab-grown diamonds have grown in popularity as a more ethical and less expensive alternative to mined diamonds. These diamonds are often called “synthetic diamonds,” even though their chemical makeup is exactly the same. After more than a century of people trying to figure out how to DIY diamonds, scientists at the General Electric Research Laboratory were the first to announce their success in 1954 — although it took them a moment to realize they had done it. After they left their high-pressure equipment on overnight, a blob popped out, but it didn’t look like a diamond. They began to suspect otherwise when the material broke high-end polishing equipment, something only a diamond could do. X-ray tests confirmed their suspicions. It later turned out that Union Carbide and the Swedish company ASEA had gotten there just slightly earlier, in 1952 and 1953, but kept their findings secret. These small, rough diamonds were great for industrial applications, but they weren’t ready to shine just yet. Higher-quality diamonds appeared in the 1970s, although they were easy to tell apart from natural diamonds under a microscope, and hard to scale. The technology slowly improved, and in the 1990s, diamond industry titan De Beers (which played a pivotal role in our idea of the diamond engagement ring in the mid-20th century) got concerned enough to develop detection machines. Today, most “synthetic” diamonds are made with a lower-pressure process called chemical vapor deposition, which uses heated gas in a vacuum chamber at extremely low pressures — very different from the high-pressure environment in which diamonds grow inside the Earth.

 

 

Source: Mind-Blowing Facts About Gemstones


Fact of the Day - CHICKENS


Did you know... It’s true: Chickens really are descendants of dinosaurs, walking the Earth as one of the closest living relatives to the Tyrannosaurus rex. But that’s not the only impressive thing about these fowl. Chickens are incredibly adaptive creatures found in nearly every part of the world — barring Antarctica and Vatican City — and are able to fly short distances, swim, and even communicate with the outside world before hatching from their shells. Read on for six more facts about these curious, clucking egg-layers.

 

1. Some Early Chickens Were Considered Sacred Animals

Scientists aren’t exactly sure when humans first domesticated chickens. Some research has estimated that humans first became flock-keepers around 8,000 years ago or more, perhaps somewhere in China, India, or Southeast Asia. But more recent research shows the first clear evidence for domestic chickens in the archaeological record dates back only about 3,500 years, to a site in Thailand. And some archaeological evidence supports the idea that the earliest human-raised chickens may not have been eaten, but instead revered. Archaeologists have unearthed the bones of whole chickens at dig sites in Britain and Europe, which researchers have carbon-dated to the Iron Age. None of the birds had been butchered, most were older when they died, and one had a healed leg fracture, possibly from the help of a human caretaker. On occasion, the birds were buried alongside humans, possibly used as psychopomps, aka animals tasked with leading the deceased to the afterlife. Writings from Julius Caesar indicate the earliest Britons didn’t eat chickens, and instead raised the birds for “their own amusement or pleasure,” a practice that remained until the Romans introduced eating the birds around 43 CE.

 

2. Ancient Chickens May Have Had Teeth


Like most birds, chickens are toothless, equipped instead with gizzards (muscles in the digestive tract) that help break down their food for digestion. Their omnivorous diet first enters their crop, a pouch-like organ that stores and softens food, before it moves to their digestive system. From there, food moves to the gizzard. While this system allows chickens to forage and feast without chompers, scientists believe poultry of the past may have eaten differently — with teeth. That’s because the earliest known birds had teeth, though the feature began to disappear more than 100 million years ago as beaks developed in its place. However, some researchers believe it’s still possible for chickens to grow teeth, since their DNA still contains the genetic code for them (which stuck around to help modern chickens grow feathers). In 2006, scientists were able to make small genetic modifications that enabled chicken embryos to develop teeth, which looked similar to reptile teeth — though the chickens were ultimately prevented from hatching.

 

3. Chickens Can Recognize One Another… and Humans

Chickens aren’t often considered to be especially bright animals, though there’s evidence they’re smarter than we once believed. Scientists have long studied chickens, with the first research into chicken intelligence emerging around the 1920s thanks to observation of their pecking order (aka how the birds establish social hierarchies in their flock). In the 100 years since, researchers have determined that chickens have a wide range of communication skills, able to produce 24 different vocalizations that alert their fellow fowl about predators, food, and an interest in mating. Chickens are also capable of differentiating between numbers and can identify patterns and shapes. Those memory skills help chickens recognize up to 30 other birds, a process that starts within 36 hours after hatching, when chicks imprint on their mother hen. Chickens can also recognize human faces, and even have preferences for who they find attractive — a 2002 study found that chickens preferred looking at humans with more symmetrical faces (just like humans do).

 

4. The World’s Oldest Living Chicken Is Named Peanut


Backyard chickens are often considered food-producing pets, providing companionship and entertainment while also laying eggs. Most hens live for between six and eight years, and typically lay eggs for the first three to four years of their lives. Sometimes, they even become record holders, like Peanut, the world’s oldest living chicken. Born in southeastern Michigan in 2002, Peanut reached the verified age of 20 years and 304 days on March 1, 2023. Initially believed to be a dud egg, Peanut was nearly abandoned as a chick before her owner heard the bird pipping from inside her shell; with some assistance, Peanut successfully hatched and became an inside-dwelling pet for the first few years of her life. Peanut laid eggs until age 8 — some of which produced her living grandchildren and great-grandchildren, who reside in the same backyard she roams. Today, the geriatric hen spends her days sleeping, eating, and even watching TV as she inches towards the record for the world’s oldest known chicken ever — an achievement held by a bird named Muffy, who was born in 1989 and reached 23 years and 152 days old (she died in 2012).

 

5. There Are More Chickens on Earth Than People

Our planet is home to a lot of humans. There are more of us now than at any other point in known history, yet we’re still outnumbered by chickens. In November 2022, the global human population hit 8 billion, with projections showing there may be 9.7 billion of us by 2050. But even then, there will probably be more chickens, considering that at last count, in 2021, their population clocked in at 25.8 billion, largely thanks to commercial poultry farming. In some regions, the ratio is particularly evident; Delaware residents are reportedly outnumbered by chickens 200 to one.

 

6. A Rooster Once Crashed a President’s Inaugural Ball


Chicken is common fare at even the fanciest of dinners, though in 1973, a rooster who wasn’t on the menu still found its way into one of the country’s most upscale parties: the presidential inaugural ball. Following his successful reelection campaign, President Richard Nixon held an extravagant inaugural celebration at the Smithsonian’s Museum of History and Technology (now the National Museum of American History). One of the gallery’s exhibits on farm life included real, living chickens — including a rooster that escaped from its pen and into the party. The bird caused a minor commotion as it mingled among guests, but was promptly captured and returned to its display by S. Dillon Ripley, an ornithologist (aka bird expert) who served as the Smithsonian’s eighth secretary.

 

 

Source: Chicken Facts to Cluck About


Fact of the Day - POPULAR BABY NAMES


Did you know.... If you’ve ever wondered how popular your name is, it’s easy to find out. In 1998, the Social Security Administration began ranking the top 1,000 most common first names submitted on Social Security card applications for each year dating back to 1880. The administration then whittled down the list to the 200 most popular names of each decade, tallying up how many people share the same identifier. That list has become a tool for parents-to-be looking for the perfect name, and a warning for those trying to avoid name trends. Here are the most popular boy and girl names from the past century.

 

1. 1920s

Top Boy Names: Robert, John, James, William, Charles

Top Girl Names: Mary, Dorothy, Helen, Betty, Margaret

Boy names were relatively traditional and Eurocentric 100 years ago. William and Charles gave off strong, regal impressions, which is no surprise considering their origins — both have Germanic roots and were used abundantly among British, French, and Spanish monarchs. While girl names were similar to the prior decade’s, newcomer Betty was less formal than its source name, Elizabeth, during a decade when women sought financial and social independence (but still not as zany as flapper-inspired names such as Fern and Iola).

 

2. 1930s


Top Boy Names: Robert, James, John, William, Richard

Top Girl Names: Mary, Betty, Barbara, Shirley, Patricia

Seemingly out of nowhere, the name Patricia catapulted into the country’s top five for girl names, when just 10 years prior it had ranked 104th. But why? It’s possible an influx of Irish immigrants in the early 20th century helped popularize the name. As a feminine form of Patrick — Ireland’s patron saint — Patricia seems traditionally Irish, though a survey of Irish Americans suggests it’s more commonly used in the U.S. than in the Emerald Isle itself. It’s likely a name that bridged the gap between heritage and new homeland, helping young Irish Americans hold onto their family history while blending into American culture with an easy-to-pronounce name. Patricia remained a top-five name throughout the 1950s, spawning shortened forms such as Trish, Patti, and Tricia as its popularity waned.

 

3. 1940s

Top Boy Names: James, Robert, John, William, Richard

Top Girl Names: Mary, Linda, Barbara, Patricia, Carol

Traditional names like Richard and James continued to reign supreme for boys born in the 1940s; with an ongoing war, it’s likely parents reused family names to honor loved ones stationed overseas. New names for girls, however, emerged, with Carol becoming a trendy alternative to the longer Caroline. Often given to wintertime babies, Carol was considered an uplifting holiday name that honored the season’s musical hymns. It peaked during the 1940s and fell from the top-10 list by 1951. Equally prominent Barbara, which became common in the 1800s, also fell out of style by the early ‘50s, but ranks overall as the sixth-most popular name for a girl over the last century, with 1.3 million women sharing the name.

 

4. 1950s


Top Boy Names: James, Michael, Robert, John, David

Top Girl Names: Mary, Linda, Patricia, Susan, Deborah

The 1950s marked a shift in Mary’s role as the top girl name of all time, ending a run that had dominated the name leaderboards since the 1880s — the Social Security Administration’s oldest data. It’s no surprise considering the name means “beloved” and is an ode to the Virgin Mary. History has no shortage of famed Marys, ranging from queens and actresses to fictional characters like Mary Poppins. While less common now (holding spot 124 in 2020), similar names have carried on, such as Maria and Mariah. From 1921 to 2020, more than 3.1 million babies in the U.S. shared the simple, four-letter name.

 

5. 1960s

Top Boy Names: Michael, David, John, James, Robert

Top Girl Names: Lisa, Mary, Susan, Karen, Kimberly

Leonardo da Vinci’s most famous painting may have spurred a name trend during the 1960s. The “Mona Lisa” made its first trip to the U.S. in 1963, displayed at the National Gallery of Art in Washington, D.C., and created such excitement that 2 million spectators came to view the portrait. While the name Lisa had reached the number one spot a year before the painting’s tour, it held firm for seven more years until being dethroned in 1970. The ’60s also traded some traditional boy names for more modern styles, with Michael starting its run as the top boy name for decades to come.

 

6. 1970s


Top Boy Names: Michael, Christopher, Jason, David, James

Top Girl Names: Jennifer, Amy, Melissa, Michelle, Kimberly

The 1970s brought about a major shift in common boy names. With Richard and William becoming “old-fashioned,” parents opted for the ever-popular Michael and David. But one name ascended in a way few others have: Jason. The name shot up the charts from spot 87 in the 1960s to third place in the 1970s. While it sounds modern, Jason actually has Greek origins; in mythology, the heroic Jason embarks on an epic quest to restore his family to his homeland’s throne. The name fad quickly dissipated, dropping to the 11th-most popular spot in the 1980s and further in the ’90s, but it has echoes in the 2010s’ Jaxon and Jaxson.

 

7. 1980s

Top Boy Names: Michael, Christopher, Matthew, Joshua, David

Top Girl Names: Jessica, Jennifer, Amanda, Ashley, Sarah

Christopher wasn’t a new name in the 1980s — it has Latin and Greek origins, becoming common among Christian followers during the Middle Ages in honor of a third-century saint who protected travelers. It’s unclear why Christopher reached such heights in the ‘80s, though it could have been influenced by the number of Christophers on stage and screen; actors Christopher Reeve, Christopher Walken, and Christopher Lloyd got their big breaks in the late ‘70s. For girls, names like Jessica and Sarah maintained peak popularity until the early 2000s, around the same time parents began seeking out more unique names.

 

8. 1990s


Top Boy Names: Michael, Christopher, Matthew, Joshua, Jacob

Top Girl Names: Jessica, Ashley, Emily, Sarah, Samantha

The name Michael was the highest-ranking boy name for five short years — 1954 to 1959 — only to come roaring back in 1961 and hold the No. 1 spot through the 1990s. Its Hebrew origins refer to the sword-wielding archangel Michael, at one time making it a common name among soldiers and military families. In its last decade of acclaim, the name was boosted by a number of celebrities: singers Michael Jackson and Michael Bolton, basketball great Michael Jordan, and actors Michael Keaton and Michael J. Fox. In 2020, Michael remained the 12th most popular name, and it has been the moniker given to 4.3 million boys since 1921.

 

9. 2000s

Top Boy Names: Jacob, Michael, Joshua, Matthew, Daniel

Top Girl Names: Emily, Madison, Emma, Olivia, Hannah

New millennium, new names ... right? Not so much. The top names of the 2000s — while seemingly fresh compared to years of Jennifers, Lisas, and Williams — mostly have old roots. The popular boy names have biblical ties, as does Hannah, while Olivia refers to the symbolic olive tree. Madison, traditionally a boy name, was commonplace throughout the 1800s. And Emma, which was the 13th most popular name back in 1900, ranked low on the baby name charts until the early 2000s.

 

10. 2010s


Top Boy Names: Noah, Liam, Jacob, William, Mason

Top Girl Names: Emma, Olivia, Sophia, Isabella, Ava

Just as in decades past, naming trends don’t disappear easily — and it’s evident with names like Emma, Olivia, and Sophia hanging on for a second decade. Compared to popular names 100 years before, modern names feel like a departure from Eurocentric ones, and that’s because naming websites and social media provide access to more diverse names than ever before. Where some parents look to trend-free, steadfast names (such as William), others consider unique monikers that help their kids stand out in a world of Isabellas (consider Athena, ranked at 173). While new baby name trends are emerging — specifically nature-based names, like August and Sage, and gender-neutral names, like Charlie and Blake — there’s no clear science as to why some names become standouts while others languish for decades. Some linguists and naming experts theorize that times of social change and upheaval spawn new, creative names. If that’s the case, 2020’s top picks may be the most unique we’ve seen in a while.

 

 

Source: The Most Popular Baby Names in Each Decade


Fact of the Day - ROGUE WAVES


Rogue waves have the power to sink freighters.

Did you know.... Rogue waves, also known as freak waves or monster waves, have long captured the attention of sailors and sea enthusiasts — and in recent decades, marine scientists. These massive, towering waves seemingly appear out of nowhere, posing a significant threat to ships, offshore structures, and people in their path. Read on to learn more about these fascinating natural occurrences.

 

1. Rogue waves are often described as “walls of water.”


A large wave towering astern of the NOAA ship ‘Delaware II’ in the Atlantic Ocean, 2005. (Photo: personnel of NOAA ship ‘Delaware II’)

Rogue waves are extremely large and powerful waves that appear suddenly in open water, reaching at least twice the significant wave height — the average height of the tallest third of surrounding waves. While there is no universally accepted height threshold to qualify as a rogue wave, they have been observed ranging from 26 feet to as tall as 100 feet. These waves are characterized by their steepness, sharp crests, and immense destructive power, making them objects of both fear and fascination.

 

2. Rogue waves don’t have a single distinct cause.
Several factors contribute to the formation of rogue waves. The convergence of multiple wave systems is one key element: When waves with different wavelengths (the distance between their crests) and amplitudes (their heights) meet, they can combine and reinforce each other, resulting in a sudden increase in wave height and power. Other factors include strong ocean currents, changing wind patterns, and the presence of underwater topographical features such as reefs or deep channels, which can concentrate wave energy in a specific area. Even considering these known causes, rogue waves remain rare and unpredictable.

 

3. Rogue waves are different from tidal waves and tsunamis.
Although rogue waves can cause devastating effects similar to tidal waves and tsunamis, each type of wave has distinct characteristics and causes. Tidal waves result from the gravitational interactions between Earth, the moon, and the sun, which cause sudden rushes of water up rivers or narrow bays during certain tidal conditions. Tsunamis, on the other hand, are caused by undersea earthquakes, volcanic eruptions, or landslides, generating incredibly powerful and far-reaching waves.

 

4. Rogue waves can disable and even sink container ships and oil rigs.

 

Modern ships and offshore structures like oil rigs are constructed to withstand expected conditions at sea, which include maximum wave heights of 15 meters (about 50 feet). Yet rogue waves typically surpass these heights and cause major damage. One colossal wave crashed over a cruise ship called the Viking Polaris on a trip to Antarctica in December 2022, killing one passenger and injuring four, in addition to breaking windows and other parts of the vessel. Rogue waves have also caused other freak accidents worldwide.

 

5. There are more rogue waves than you might think.
Once considered rare and mythical in maritime lore, rogue waves have emerged as more frequent phenomena than previously believed, with estimates suggesting that one in every 10,000 waves is rogue. A 2019 study analyzing 22 years of measurements gathered by wave buoys also found an increase in the waves’ height between 1994 and 2016. But efforts to understand and forecast these waves are hindered by the limited data and the waves’ unpredictable nature.

 

6. Rogue waves are more likely to occur in some parts of the world.
Rogue waves have the potential to appear in oceans and large bodies of water worldwide, but certain locations have a higher likelihood of encountering them. The waters off the southeast coast of South Africa, where the Agulhas Current flows, are one hot spot; the North Atlantic Ocean, where the powerful Gulf Stream and other major ocean currents converge, is also notorious for birthing these colossal waves. Parts of the South Atlantic, Indian, and Pacific oceans have witnessed their fair share of rogue wave incidents as well.

 

7. Rogue waves can form in freshwater lakes.


The S.S. ‘Edmund Fitzgerald’ may have been destroyed by a rogue wave on Lake Superior.

Rogue waves are mainly connected with oceans and seas, but they can surprise us by appearing inland. In one famous example, a rogue wave may have caused the tragic shipwreck of the S.S. Edmund Fitzgerald on Lake Superior in 1975. The massive cargo ship disappeared from radar during a gale and sank off Whitefish Point, Michigan. All 29 crew members were lost.

 

8. Several shipwrecks in recent history are attributed to rogue waves.
Though scientists lack hard data about rogue waves prior to the mid-1990s, some researchers have attributed historical shipwrecks to them in hindsight. One theory for the mysterious disappearance of the U.S.S. Cyclops in 1918 is that a rogue wave or an unexpected, severe storm fractured the vessel and its heavy cargo of manganese ore dragged the ship to the ocean’s depths. In 1974, the Norwegian tanker Wilstar sustained structural damage (but didn’t sink) likely caused by a rogue wave, while in 1978, the German freighter M.S. München transmitted a distress signal reporting that a colossal wave had struck the ship. No one survived. A rogue wave is also thought to have smashed the swordfish boat Andrea Gail in the North Atlantic in 1991, a sinking chronicled in Sebastian Junger’s bestseller The Perfect Storm. Modern cruise ships have not been immune to damage, either. The Bremen and Caledonian Star had their bridge windows shattered by waves in the South Atlantic estimated to be 98 feet (30 meters) tall; the incidents occurred just days apart in 2001. The Holland America cruise ship M.S. Prinsendam also faced two 39-foot (12-meter) rogue waves near Cape Horn in 2007, resulting in numerous injuries and medical evacuations.

 

9. The Draupner wave marked the first recorded measurement of a rogue wave.

 

On January 1, 1995, an extraordinary event took place on the Draupner oil-drilling platform, situated about 100 miles off the Norwegian coast. The platform had a device called a sea surface elevation probe that recorded a massive wave of 85 feet (26 meters) crashing into the structure. Thanks to this instrument, the Draupner wave holds the distinction of being the first rogue wave ever recorded, confirming the existence of these long-rumored freak occurrences.

 

10. Rogue waves have become pop culture icons.

 

Under the Wave off Kanagawa, the famous woodblock print by the Japanese artist Katsushika Hokusai, portrays a towering wave with boats in the foreground and a tiny Mount Fuji in the distance. Though viewers often mistake the wave for a tsunami, historians and scientists have suggested that the print depicts a rogue wave because it appears to be driven by wind rather than an earthquake. A rogue wave also starred in the 1972 disaster flick The Poseidon Adventure, though the captain and crew on the S.S. Poseidon’s bridge incorrectly cite the wave’s cause as an underwater earthquake. In the 2005 remake Poseidon, the oversight is corrected, and a crew member just feels like “something’s off” before the wave slams the ship.

 

 

Source: Surprising Facts About Rogue Waves


Fact of the Day - DNA


Did you know.... The world runs on deoxyribonucleic acid, or DNA. Apart from being an excellent spelling bee word, DNA also provides the genetic instructions for the growth, function, and reproduction of all living organisms and viruses. Two polynucleotide chains coil around one another and form the double helix DNA structure that makes you, you. Despite it being the microscopic engine that makes life on Earth possible, humans have only known about the existence of DNA for about 150 years. In that time, scientists have discovered a lot about these genetic building blocks — so much so that doctors can now use gene therapy to treat cancer, while biologists ponder whether to bring back entire extinct species, such as the woolly mammoth. These six facts explore the incredible science of DNA: its discovery, its function, and its impact on human history.

 

1. Human DNA Contains 3 Billion Base Pairs


Base pairs form the rungs of the twisted DNA ladder, in which each “rung” is composed of nucleotides containing the nitrogen bases adenine (A), thymine (T), guanine (G), and cytosine (C). Because adenine always pairs with thymine, and guanine with cytosine, DNA chains are often expressed as just a series of letters (e.g., “AGGTCCAATG” is an expression of 10 base pairs). Human DNA contains 3 billion of these base pairs stretched across 23 pairs of chromosomes, each with different instructions. Of the total 46 chromosomes, we receive half from our mother and half from our father. The nucleus of every somatic cell (i.e., not sperm or eggs) in the human body contains these chromosomes, but certain cells only access the relevant chromosome for their particular function (eye color, for example, is restricted to a certain section of chromosome 15). DNA usually codes for a protein or group of proteins, which form cells that then become living tissue, coalescing into organs that, when put together, wind up as you.

 

2. Humans Share 98.8% of Their Genome With Chimpanzees

Sit a human next to another great ape, such as a chimpanzee or bonobo, and the differences are pretty stark — but our DNA suggests otherwise. The closest living genetic cousins to Homo sapiens, chimpanzees and bonobos each share about 98.8% of our DNA sequence. The similarity comes from the fact that humans shared a common ancestor with these primate species around 9.3 million to 6.5 million years ago, which is basically last week in the context of Earth history. But as humans, chimps, and bonobos evolved separately, the differences slowly grew, with each species adding its own divergent DNA. Although a 1.2% difference doesn’t seem like a lot, small changes in DNA can have major consequences. After all, with 3 billion base pairs, that still leaves about 35 million differences between humans and chimps. It’s also worth noting that even if we share genes with chimps, those genes can be expressed differently, with some turned up high in humans while the same gene is a low hum in a chimp or bonobo. All of these differences combined are what separate humans from their primate cousins — and all other living things, for that matter. In fact, humans share around 60% of their genes with bananas, and that’s something any self-respecting primate can get behind.

 

3. In the 19th Century, DNA Was Called “Nuclein”


Although our current understanding of DNA really started to take off with the description of the double helix structure in 1953, scientists had already known about the existence of DNA for nearly a century by that time. Swiss chemist Johann Friedrich Miescher discovered DNA in 1869, although to arrive at that discovery he had to do some less-than-savory science. At the time, Miescher was studying white blood cells, which fight infections and diseases. Although notoriously tricky to extract from a human’s lymph nodes, white blood cells could be found in abundance on used bandages. So Miescher traveled to local health clinics, took their used bandages, and wiped off the pus and grime. He then bathed the cells in warm alcohol to remove the lipids and used enzymes to eat through the proteins. What was left behind was some kind of gray matter that Miescher (successfully) identified as a previously unknown biological substance, which he called “nuclein.” In the early 1880s, German physician Albrecht Kossel discovered the substance’s acidic properties as well as the aforementioned nitrogen bases, and by the end of the decade, nuclein had been renamed the more accurate “nucleic acid.”

 

4. The Discovery of DNA’s Double Helix Is Controversial

On May 6, 1952, British chemist Rosalind Franklin oversaw the taking of the first photograph depicting DNA’s double helix structure, at King’s College London. Technically her 51st X-ray diffraction pattern, the image became known as simply “Photo 51.” Yet the 1962 Nobel Prize for the discovery of the molecular structure of DNA only honored her colleague Maurice Wilkins, along with English physicist Francis Crick and American biologist James Watson. So what gives? In 1953, Crick and Watson had written a paper revealing DNA’s twisting shape to the entire world, and only in the paper’s final paragraph mentioned that the discovery was “stimulated by a knowledge of the general nature of the unpublished experimental results and ideas” of two scientists at King’s College. In Watson’s own autobiography, he mentions that Franklin had no idea that her results had been shared with Crick and Watson via her colleague Wilkins, and when she published her own paper later, the reception wasn’t nearly as earth-shattering. Recent studies have suggested that Franklin was a true collaborator with Watson and Crick, despite receiving much less credit than her male colleagues. Dying at age 37 in 1958 from ovarian cancer (likely due to her work with X-rays), Franklin was ineligible for the 1962 Nobel Prize. (By custom, the award was not handed out posthumously at the time, a rule that became codified in 1974.) Thankfully, history has slowly brought Franklin’s contributions to light and, in 2019, the European Space Agency even announced that their newest Mars rover would be officially renamed the “Rosalind Franklin” — a pretty stellar constellation prize.

 

5. All Humans Have Some Trace of Neanderthal DNA


DNA contains all the information that makes up all living things, but it also reveals interesting facts about our past. For one thing, all humans share 99.9% of the same genes, with the 0.1% difference caused by substitutions, deletions, and insertions in the genome (an important tool for understanding diseases). We also know, thanks to DNA, that humans are much less genetically diverse than other animal species. This suggests that all 8 billion humans today grew from a population of only about 10,000 breeding pairs of Homo sapiens, and that our ancestors likely experienced genetic bottlenecks that caused serious population declines. Amazingly, glimpses into our human lineage are also locked away in our DNA, because every human on the planet has some genetic material inherited from a completely different species of human — Neanderthals. Although Homo sapiens are the only human species on the planet today, the Earth has played host to upwards of 20 different human species over millions of years. For a time, Homo sapiens shared the planet with Neanderthals (Homo neanderthalensis) and even interbred with them. Remnants of those dalliances still live within our chromosomes, passed on from generation to generation. Although Europeans and Asians carry the largest percentage of Neanderthal DNA (around 2%), Africans also carry a small percentage (which wasn’t discovered until 2020).

 

6. Scientists Didn’t Finish Sequencing the Complete Human Genome Until 2022

In October 1990, the Human Genome Project formed to accomplish one goal: to sequence the entire human genome. Regarded as “one of the most ambitious and important scientific endeavors in human history,” the project essentially mapped a blueprint of human biology that greatly improved medicine and sequencing technology. It took 13 years to map the human genome (there are 3 billion base pairs, after all), but the project finally declared success in 2003: “We have before us the instruction set that carries each of us from the one-cell egg through adulthood to the grave,” said leading genome sequencer Robert Waterston at the time. However, the announcement technically jumped the gun, because the Human Genome Project had only sequenced what was technologically possible, which came out to about 92% of the genome. The last 8% proved to be much trickier, because these regions contained highly repetitive DNA. Over the next two decades, advancements in DNA sequencing methods and computational tools allowed scientists to close the gap, and on April 1, 2022, the Telomere-to-Telomere consortium announced that all 3 billion base pairs had finally been sequenced.

 

 

Source: Amazing Facts About DNA


Fact of the Day - CAMPING


Did you know.... Getting outside to see, hike, and sleep in the great outdoors is a classic summer activity, one that’s been popular among wilderness enthusiasts and nature novices for nearly 200 years. While camping has waxed and waned in popularity over the decades, the call of the wild beckoned more than 50 million Americans outdoors in 2020 and 2021, a pandemic-inspired trend that hasn’t let up. And with more than 130 national park sites offering campgrounds — plus thousands of state and local parks with their own overnight accommodations — there’s ample space to park an RV or set up a tent just about anywhere. Read on for five more facts about camping.

 

1. The Civil War Helped Popularize Camping in the U.S.

1e2393170a23c56ffd18c9bb487fdcd4--va-hos

For Union and Confederate soldiers, camping wasn’t the fun activity we consider it today — it was a necessity of the conflict. The war required soldiers on both sides to march long distances, carrying everything they needed to eat and sleep until they reached their next encampment (one possible origin for the word “camping”). While many Civil War soldiers did settle for longer periods of time in cabins and forts (especially during the freezing winter months), camping was a common occurrence. At the time, sleeping under the stars wasn’t seen as glamorous, but that changed after the war’s end. In the years following the Civil War, camping slowly transformed from a primitive military necessity into a romanticized activity. According to historian Phoebe S. K. Young, the idea of sitting around a campfire with friends, just like soldiers had, was one way the country tried to reframe the war’s impact during the tumultuous time of Reconstruction. (In other words, maybe parts of the war hadn’t been that bad, or so the idea went.) Campers of the later Victorian era set off into nature to test their survival skills, looking to get away from the creature comforts of (then) modern society, and promoting camping as a vacation from the rigidity of daily life — an idea that’s stuck around ever since.

 

2. Early Sleeping Bags Had a Different Name

162346917.tVEzYgMb.jpg

Bed rolls and other camp bedding have been around as long as humans have been trying to get comfortable z’s while dozing on the ground; some of the oldest surviving sleep sacks were made from warm animal hides. But in 1876, Welsh inventor Pryce Jones rolled out his version of the sleeping bag, which most closely resembles the ones we pack on our camping trips today. It had a different name, though: the Euklisia Rug. Made from wool, the Euklisia Rug was essentially a blanket that could be folded over its occupant and fastened closed to keep them warm; the original design even included a pocket for an inflatable pillow. Jones’ invention was initially picked up by the Russian army, which bought his design in bulk; 60,000 of his so-called rugs were purchased for troops during the Russo-Turkish War, though not all would be delivered. The inventor was stuck with 17,000 after Russia canceled its order during the conflict. He sold them through his mail-order business, which helped the product catch on.

 

3. The First RVs Appeared in 1910

320px-R.R._Conklin's_auto_bus_LCCN201469

Just two years after Henry Ford unveiled his Model T car, eager outdoor enthusiasts were looking for ways their automobiles could get them out into nature, and let them sleep there, too. In 1910, Pierce-Arrow’s Touring Landau debuted at Madison Square Garden, complete with many of the amenities modern recreational vehicles have today. The Touring Landau featured a foldable back seat that transformed into a bed, a sink that folded out from the chauffeur’s seat, a telephone to communicate with the driver, and a toilet. The car wouldn’t be the last of its kind; by 1915, New York inventor Roland R. Conklin rolled out his upgraded version, a bus that could hold 11 people and had a shower, a kitchen, and a hidden bookcase (although the vehicle was for his personal use only). RV manufacturers continued to expand on these portable campers, adding more of the comforts of home through the late 1920s, until the Great Depression caused RV sales to drop. (However, some savvy Americans turned the campers into inexpensive mobile homes.) During World War II, RVs became the framework for mobile hospitals and other forms of war effort transportation, though they eventually returned to their original purpose — camping and vacationing — in the 1950s and beyond.

 

4. You Can Thank Girl Scouts for S’mores

mqdefault.jpg

S’mores are the stuff of culinary legend — almost everyone enjoys them, but hardly anyone knows how they became so popular. Turns out the gooey, chocolatey treat dates back to around the 1920s, when they were called “Some-mores.” One of the first s’mores recipes appeared in 1927’s Tramping and Trailing with the Girl Scouts, a scouting guide that instructed brigades of campers on how to set up camp, hike safely, and build fires (a necessity for melting marshmallows). By the 1970s, Girl Scout manuals updated the name to the shortened “s’mores,” arguably a bit easier to say with a mouth full of sticky dessert. In the decades since, s’mores have become traditional campground fare, even honored with their own holiday on August 10.

 

5. Seven Principles Can Help You Be a Superb Camper

c7f7d448f375830f0becf8c9142423db--campin

Most hikers and outdoor explorers head outside for a chance to reconnect with nature, an experience that can be restoring and enjoyable. Unfortunately, the impact of humans on our natural world can sometimes dampen the adventure. That’s the motivation behind Leave No Trace, an outreach program that educates the public about minimizing our recreational footprint. Emerging in the 1960s and ’70s, when backpacking and camping boomed in popularity, Leave No Trace introduced seven principles, supported by national parks and other conservation groups, that help keep landscapes pristine and enjoyable for all. Most of the guidelines now seem like no-brainers: Properly dispose of trash where it belongs, respect wildlife by giving animals space, and plan ahead for your outdoor adventure to stay safe, for example. But the list of outdoor ethics also provides tips for keeping campfires forest-friendly and picking the perfect campsite without disturbing local flora and waterways. Familiarizing yourself with the long-standing outdoor code can help make your time at camp more enjoyable — now, and for years to come.

 

 

Source: Adventurous Facts About Camping


Fact of the Day - U.S. FLAG

018197002dd7ae702dbb20e890900f3a_Generic

Did you know.... The history of the U.S. flag is almost as multifaceted as the people it represents. With dozens of different iterations, the Stars and Stripes has frequently changed as the country’s borders have expanded and new states have been added to the union. Today’s 50-star flag is hoisted at sporting events, schools, government buildings, and outside the homes of millions of Americans throughout the world. These six facts pull together the threads of the flag’s nearly 250-year history, including its creation, its symbolism, and where we’ll eventually have to squeeze that 51st star.

 

1. Betsy Ross Didn’t Design the Original U.S. Flag

770d33cf601d4f8dbefc6d5f813ac6e6--first-

The most enduring myth about the origin of the U.S. flag is that Betsy Ross, an American upholsterer living in Philadelphia during the Revolutionary War, created the first flag at the behest of George Washington. Historians aren’t sure that ever happened, however. The story of Ross’ involvement came from her own family, nearly a century after she reportedly created the flag. Apart from her descendant’s account, no evidence suggests that Ross sewed the first flag. Instead, some historians think Francis Hopkinson, a signer of the Declaration of Independence and designer of other seals for U.S. government departments, was likely the first flag’s designer. Evidence exists that Hopkinson sought payment for the design of the “flag of the United States of America” (he thought a “Quarter Cask of the Public Wine” ought to do it). Although Hopkinson was denied payment, Congress approved his flag on June 14, 1777 (celebrated today as Flag Day). Thankfully, historians now generally give Hopkinson the vexillological accolades he deserves.

 

2. The First U.S. National Flag Featured the Union Jack

5c8922e08c9590d8e1ebaf55ce3978b8.jpg

State militias fought the Revolutionary War’s opening skirmishes using colonial banners, but by the winter of 1775, the Second Continental Congress had become the de facto war government of the fledgling U.S. — and it needed a flag to unite the cause. Congress went with something already flown on civilian and merchant ships, the British red ensign, and sewed on six white horizontal stripes to create the 13 red-and-white stripes seen on today’s flag. This creation became known as the Grand Union flag, and it even featured the Union Jack (sans the St. Patrick’s cross) as a canton (the innermost square on the top left), in place of the now-familiar constellation of white five-pointed stars. The resulting flag was first hoisted on December 3, 1775, on the man-of-war Alfred, by none other than John Paul Jones, one of the greatest naval commanders in U.S. history. Years later, in 1779, the famous naval officer recalled the day: “I hoisted with my own hands the Flag of Freedom…”

 

3. There Are 27 Official Versions of the American Flag

Although the Grand Union Flag was the first banner to unite the colonies’ cause under one emblem, the flag isn’t regarded as an “official” U.S. flag. That lineage begins with the passage of the Flag Act of 1777, which states “[t]hat the flag of the thirteen United States be thirteen stripes, alternate red and white; that the union be thirteen stars, white in a blue field, representing a new constellation.” The colors themselves represent valor (red), purity (white), and vigilance (blue). Several early flags interpreted this description differently, such as the so-called Betsy Ross flag, Hopkinson’s 3-2-3-2-3 star arrangement flag, and the Cowpens flag (basically the Betsy Ross, but with a star in the middle of the circle). Hopkinson’s creation is widely regarded as the first conception of what would be recognizable today as a U.S. flag. Throughout the years, the flag has undergone 26 small changes to add new stars for new states joining the union. The first change came in 1795, with the addition of Vermont and Kentucky (which added two extra stripes as well), and this version is what’s known to history as the Star-Spangled Banner. The last canton edit came on August 21, 1959, when President Eisenhower issued Executive Order 10834, establishing today’s 50-star flag following Hawaii’s statehood.

 

4. The Original “Star-Spangled Banner” Still Exists

29906170001_3781987924001_thumb-3acd8ac4

After a night of heavy bombardment during the Battle of Baltimore in the War of 1812, American forces stationed at Fort McHenry raised the Star-Spangled Banner (the 15-star flag) on the morning of September 14, 1814. Seeing this flag while standing aboard a British ship and negotiating the release of a prisoner, author Francis Scott Key composed the poem “Defence of Fort M’Henry,” which later became the lyrics for the U.S. national anthem (adopted by Congress in 1931). The original flag was sewn by Mary Pickersgill and Grace Wisher, a young African American indentured servant in her household, and stretched some 30 feet by 42 feet — an extremely large flag at the time. The gargantuan size of the Star-Spangled Banner was a specific request of Fort McHenry’s commander, George Armistead, who told the head of Baltimore’s defenses that “it is my desire to have a flag so large that the British will have no difficulty in seeing it from a distance.” Amazingly, the very flag that inspired the 35-year-old poet more than 200 years ago still exists, and is now in the care of the Smithsonian Institution — though sadly not quite in its original condition. For nearly a century, the flag remained in the care of Armistead’s descendants, who made a habit of cutting off pieces of the flag to give as souvenirs. Today, the Star-Spangled Banner measures only 30 feet by 34 feet. Although the Smithsonian has recovered many of the lost pieces over the years, some prominent ones — including one of its 15 stars — have never been found.

 

5. The U.S. Flags on the Moon Are Probably White Now

p1m.jpg?ct=d839b4fe7e0f

Today, the U.S. flag is one of the few banners ever hoisted somewhere other than planet Earth. Six of the NASA Apollo missions (1969 to 1972) planted a U.S. flag on the moon (the Apollo 11 flag reportedly fell down when the astronauts blasted off from the lunar surface), but decades of UV radiation from unfiltered sunlight have likely bleached the remaining flags white. For example, the Apollo 11 Stars and Stripes wasn’t some meticulously designed space flag capable of surviving the harsh lunar climate, but a $6 nylon flag that may have been purchased at a Sears Roebuck in the Houston area. Some have theorized that the nylon could’ve disintegrated completely, but NASA has examined the flag sites using the Lunar Reconnaissance Orbiter and found evidence of the flags still “flying.” In November 1969, only a few months after Neil Armstrong took his famous “one small step” on the moon, the U.S. Congress passed a law stipulating that a U.S. flag would be planted on any moon, planet, or asteroid visited during missions funded solely by the United States. In other words, an international effort to land on Mars, for example, means no U.S. flag will fly on the red planet (not that it’d last very long anyway).

 

6. The U.S. Flag Might Need a 51st Star Pretty Soon

images?q=tbn:ANd9GcTXtVZBcEyA5e8qNj19qYi

The 50-star U.S. flag is the longest-serving banner in U.S. history, having been the country’s official flag for more than 60 years (the 48-star flag comes in second, at 47 years). However, three primary candidates — Puerto Rico, Washington, D.C., and Guam — could one day necessitate a new U.S. flag with 51 (or more) stars. To solve this constellation conundrum for all future generations, the online magazine Slate asked a mathematician to develop a model for the U.S.’s 51-star flag, as well as other flags containing as many as 100 stars. Potential designs for a 51-star flag include six alternating rows of nine and eight stars, or a variation on the 44-star Wyoming pattern (created to accommodate Wyoming’s admission to the union in 1890), which would use five rows of seven stars sandwiched between two rows of eight stars. (As a refresher, the current flag has five rows of six stars and four rows of five stars.) These aren’t the only proposed designs; the pro-statehood New Progressive Party of Puerto Rico created a flag similar to the original Betsy Ross flag, but with the circle jam-packed with 51 stars. The most likely 51st state, Puerto Rico, continues its push for statehood, and it’s possible that the long reign of the 50-star flag could be nearing its end.
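As a quick check on the star arithmetic mentioned above, here is a minimal Python sketch that sums the rows for the current 50-star layout and the two proposed 51-star layouts described in this paragraph. The row counts come straight from the text; the helper function and layout names are illustrative assumptions.

def total_stars(rows):
    """Add up the stars in each row of a proposed canton layout."""
    return sum(rows)

layouts = {
    # five rows of six stars and four rows of five stars (current flag)
    "current 50-star": [6, 5, 6, 5, 6, 5, 6, 5, 6],
    # six alternating rows of nine and eight stars
    "51 stars, alternating 9/8": [9, 8, 9, 8, 9, 8],
    # five rows of seven sandwiched between two rows of eight (Wyoming-style)
    "51 stars, Wyoming-style": [8, 7, 7, 7, 7, 7, 8],
}

for name, rows in layouts.items():
    print(f"{name}: {total_stars(rows)} stars")   # prints 50, 51, and 51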

 

 

Source: Fascinating Facts About the U.S. Flag


Fact of the Day - DOGS

happy-smiling-danish-swedish-farm-260nw-

Did you know.... Dogs and humans all over the world have been enjoying a mutually beneficial best friendship for perhaps tens of thousands of years. They’re the first animals we domesticated, and have been constant companions ever since. Sometimes dogs have a job they help us with, like sheep herding or duck hunting. But others are literally just here for the cuddles, and dog people are happy to oblige. Even after all those years, we’re still learning about dogs, including more about how our unlikely animal friendship began. But plenty of dog questions have delightful answers — like whether they dream, how they learn their names, and why they slobber all over us. These seven dog facts will send you running to cuddle your closest very good boy (or girl).

 

1. How Did Dogs Evolve From Wolves?

Today’s domesticated dogs evolved from majestic, wild wolves, but looking at a tiny, trembling chihuahua, it can be hard to imagine how that even worked. It took a really long time, especially for breeds that seem very distant from their ancient grandparents. Scientists still don’t know exactly how those first wolves befriended humans, but it appears to have happened at least 15,000 years ago. A study of ancient wolf genomes published in 2022 found that dogs may have been domesticated twice, once in Asia and once in the Middle East or nearby, with the populations subsequently intermingling. But the evidence is far from conclusive, and dogs may have been domesticated just once, in Asia, and then later bred with wolves that lived in or around the Middle East. Regardless, most scientists now agree that dogs evolved from gray wolves. The exact mechanism is still unclear. Wolves, after all, are pretty dangerous, and scientists are still scratching their heads about what prompted humans to feel safe around them in the first place. Regardless, your people-pleasing golden retriever is a pretty far cry from its lupine ancestors. (Your shih tzu, on the other hand, might be closer than you think.)

 

2. Do Dogs Dream?

1606382353_6088

If you’ve spent a lot of time around dogs, you’ve probably seen them twitching or kicking in their sleep. It’s hard to know exactly what’s going on in a dog’s mind, but they do exhibit brain wave patterns much like we do when we’re in our most dream-heavy phase of sleep. So what do dogs dream about? In one study, scientists removed or deactivated the part of the brain that keeps dogs from moving around in their sleep (yikes). These dogs started to move when they entered the dreaming stage of sleep, and began acting out their dreams, doing breed-specific behaviors. According to dog psychology researcher Stanley Coren, “What we've basically found is that dogs dream doggy things. So, pointers will point at dream birds, and Dobermans will chase dream burglars.” This indicates that dogs probably just dream about their everyday actions.

 

3. Why Are Some People Allergic to Dogs?

Around 10% to 20% of humans are allergic to cats or dogs. There’s a common misconception that people allergic to furry friends are allergic to the fur itself, but they’re actually allergic to proteins found in skin cells, saliva, and urine — so if you’re allergic to dogs, you might still be allergic to a hairless dog. When someone allergic to dogs is exposed to those proteins, as with other allergies, their immune system reacts as if the substances are harmful. Some dogs are marketed as “hypoallergenic,” but there’s really no breed that’s guaranteed to not trigger allergies. It is possible, however, that someone can be more allergic to one dog than another. The best way to figure out whether you’re allergic to a specific dog is just to spend time around it, so starting out by fostering a pup before committing to a long-term companion might be the way to go.

 

4. How Do You Convert Human Years to Dog Years?

20210629004542.jpg

For decades, people have used the phrase “dog years” to compare stages in dogs’ lives to similar stages in human lives — such as whether they’re children, teens, adults, or seniors. There’s a common misconception that one dog year is equivalent to about seven human years, but it’s not all that simple. According to the American Kennel Club (AKC), a 1-year-old medium-sized dog is roughly equivalent to a 15-year-old human. The second year of a dog’s life adds around nine human years, and each year after that adds about five. This varies from dog to dog, though, especially since large dogs tend to age faster than smaller dogs. The AKC estimates that a smaller dog, like a Pomeranian, is around age 56 in human years after 10 years, while a very large dog, like a Great Dane, would be more like 79.
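To make the AKC rule of thumb above concrete, here is a minimal sketch in Python using the figures from this paragraph (15 human years for a dog’s first year, about 9 more for the second, and roughly 5 for each year after that). The function name and the exact arithmetic are illustrative assumptions rather than an official AKC calculator, and the paragraph’s own caveat applies: the curve shifts with breed size.

def dog_to_human_years(dog_years: float) -> float:
    """Rough human-age equivalent for a medium-sized dog (illustrative only)."""
    if dog_years <= 0:
        return 0.0
    if dog_years <= 1:
        return 15 * dog_years               # first year: about 15 human years
    if dog_years <= 2:
        return 15 + 9 * (dog_years - 1)     # second year adds about 9 more
    return 24 + 5 * (dog_years - 2)         # each later year adds about 5

# A 10-year-old medium dog works out to 24 + 5 * 8 = 64 human years,
# which sits between the Pomeranian (56) and Great Dane (79) figures above.
print(dog_to_human_years(10))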

 

5. Why Do Dogs Lick People?

Dogs licking people is often interpreted as a sign of affection, and it very well might be. Some wild dog species lick their pack members to welcome them home, and it can absolutely mean that your dog is happy to see you. That’s not the only reason your dog might lick you, though. You could just taste really good, especially if you just finished a meal. It could also be a combination of the two: Licking may have started as a food-seeking behavior and evolved into a sign of affection. It could also be a sign of submission. Obsessive licking, however, can be indicative of a larger problem like allergies, boredom, or pain — so if you’re worried about what it might mean, it’s worth a trip to the vet to check it out.

 

6. Can You Change a Rescue Dog’s Name?

Rescue-Dog-in-Abu-Dhabi-UAE.jpg?fit=300,

So you’ve fallen in love with a rescue dog, but its name is Supercalifragilisticexpialidocious. You can’t exactly be expected to shout that across the dog park. Fortunately, it’s perfectly fine to change a dog’s name after adoption. In some cases, the dog got that name at the shelter and hasn’t even had it for very long — but you can change it even if the dog’s had the name for years. If you do decide to change your new friend’s name, it just requires a little consistency and patience. You may have to use their old name a couple of times along the way, but with plenty of positive reinforcement, your dog should fully accept their new moniker. Don’t worry — they won’t be offended!

 

7. Can Dogs See Color?

Some dog senses are far sharper than those of humans. Most dogs can hear high-pitched frequencies that are completely silent to us, and with a sense of smell that may be up to 10,000 times more powerful than ours, they take in much more of the world via scent than sight. But how does their vision measure up? While sight varies among both individual humans and dogs, a typical dog can see fewer colors than a typical human — but contrary to popular belief, dogs don’t see in black and white. In addition to shades of gray, they can see yellows, blues, and combinations of the two, much like a human with red-green color blindness. Dogs may still have one vision advantage over humans, though: Their eyes are better adapted to see in the dark.

 

 

Source: Very Good Questions About Dogs, Answered


Fact of the Day - PIZZA

320px-Pizza-3007395.jpg

Did you know.... It’s hard to define pizza. Is it flatbread with toppings, or something more specific? While flatbread has existed in many cultures for centuries, pizza as we know it today is more commonly associated with Italy, particularly Naples. What was once a niche regional dish has become one of the most popular foods in the world; one survey suggested that “pizza,” meaning “pie,” is the best-known Italian word outside Italy, beating out even “spaghetti.” Despite the dish’s European roots, America has welcomed pizza as its own, spurring specific regional styles from New York to California — and one lesser-known variant that even has built-in dessert. When exactly did pizza take off in the States? What is pizza like back in Italy? What’s the deal with pizza rolls? These six facts about pizza may have you heading out to grab your favorite slice.

 

1. The First Recorded Pizza Delivery Was in 1889

697546515-612x612.jpg

In 1880s Naples, pizza was a staple food for the working class, although nobility turned up their noses at it. It didn’t really catch on in the rest of the country until the newly crowned Queen Margherita paid a visit to the seaside city in 1889. One night, the legend goes, she grew tired of fancy meals and asked for some local cuisine. Pizza chef Raffaele Esposito made the queen three pizzas, including what we now know as the Margherita pizza — tomato, basil, and mozzarella for the three colors of the Italian flag — and hand-delivered them. Legend has it the queen took one bite of that pizza and said it was one of the best things she had ever eaten, which is how the pie got its name. It’s entirely possible that more informal pizza deliveries happened around town before this, but it’s certainly the first one that went down in history. There is some controversy over whether this story is actually true, but regardless, the pizzeria where Esposito worked, Pizzeria Brandi, still displays a royal thank-you note on its walls.

 

2. Pizza Took Off in America After World War II

While pizzerias had existed for decades beforehand, especially in working-class Italian communities, pizza didn’t penetrate everyday life in America until after World War II. Soldiers came home after sampling the dish abroad, and pizza quickly became a booming business. Pizzerias started popping up in every state in the country, especially after the Bakers Pride commercial pizza oven launched around 1945. National chains began to emerge in the late 1950s: Pizza Hut in 1958, Little Caesars in 1959, and Domino’s in 1960, to name a few. Today, demand for chain pizza is dropping a little, but the pizza market is still strong.

 

3. Pizza Was One of the First Things Sold on the Internet

e9a1777cd6c60466ce9a3ca627d723f8.jpg?nii

Ordering pizza online may seem pretty newfangled, but the first pizza was sold on the internet nearly 30 years ago. Way back in the mid-’90s, online shopping was in its infancy. The first online vendor as we know them today was NetMarket, which launched in the summer of 1994. (Its first sale was a Sting CD.) Less than a month later, Pizza Hut launched PizzaNet, its first online ordering service. Back then, with only dial-up internet and no cloud computing services, setting up online ordering was an onerous task, requiring the company to install a server at its Wichita, Kansas, headquarters. The pilot program was limited to Santa Cruz, California, so after customers placed their orders, the requests traveled over the internet to Wichita, then back out to a local Santa Cruz Pizza Hut. Somewhat defeating the purpose, that local Pizza Hut would then call to confirm the order over the phone. The first order, according to Pizza Hut, was a mushroom, pepperoni, and extra-cheese pizza. It was a big year for online shopping — Amazon launched in 1994, too, and eBay followed soon after in 1995. PizzaNet, sadly, wasn’t as successful, but Pizza Hut eventually relaunched online ordering in 2001.

 

4. Colorado-Style Pizza Has Built-In Dessert

Many regions of the United States have their own styles of pizza, like the big thin slices of New York and the bready Philly tomato pie. Colorado-style pizza, also known as mountain pie, is a little less famous, but it’s definitely unique. The thickest part is the braided crust, which surrounds a tall stack of toppings. The meats are precooked so they don’t make a mess. It’s deep, but a far cry from a big melty Chicago deep-dish. The built-in dessert is that distinct braided crust, which comes with dipping honey to top off an all-in-one meal.

 

5. Italian Pizza Is Strictly Regulated

pizza-margherita-e-sempre.jpg?w=300&h=30

You know how Champagne only comes from the Champagne region of France, and anything else is just sparkling white wine? Italy has a bunch of similar rules. Many of these rules govern wine, but pizza is a protected consumable, too. Specifically, Neapolitan pizza, or pizza napoletana.

In order to be sold as pizza napoletana, the pie has to be 35 centimeters (around 14 inches) or less in diameter, have a raised rim of 1 to 2 centimeters, and follow a host of other requirements, including flour type, kneading technique, and equipment. No rolling pins are allowed. A specific type of oregano must be used. Only certain certified varieties of Italian tomatoes are acceptable. And there are only two types of Neapolitan pizza: Margherita (topped with tomato, basil, mozzarella, and additional cheese) and marinara (tomato, oil, oregano, and garlic). One guide published by the Associazione Verace Pizza Napoletana, the organization devoted to protecting and verifying the traditional pie, is 21 pages long — and the association periodically checks on restaurants that claim to serve the stuff. The dish has been standardized in Italy since the late 1990s, and got special recognition from the European Union in 2009.
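For illustration only, here is a minimal Python sketch of a hypothetical checker that encodes just the few pizza napoletana constraints quoted above (diameter, rim height, and the two permitted topping lists). The real AVPN guide contains many more requirements (flour type, kneading technique, equipment), so this is a toy sketch under those stated assumptions, not a faithful encoding of the official rules.

# Hypothetical, simplified check based only on the constraints quoted above.
ALLOWED_TOPPINGS = {
    "margherita": {"tomato", "basil", "mozzarella", "grated cheese"},
    "marinara": {"tomato", "oil", "oregano", "garlic"},
}

def looks_like_pizza_napoletana(style, diameter_cm, rim_cm, toppings):
    """Return True if the pie passes this toy subset of the rules in the text."""
    if style not in ALLOWED_TOPPINGS:
        return False                  # only margherita and marinara qualify
    if diameter_cm > 35:              # 35 centimeters (around 14 inches) or less
        return False
    if not 1 <= rim_cm <= 2:          # raised rim of 1 to 2 centimeters
        return False
    return set(toppings) <= ALLOWED_TOPPINGS[style]

# Example: a 32 cm margherita with a 1.5 cm rim passes this simplified check.
print(looks_like_pizza_napoletana("margherita", 32, 1.5, {"tomato", "basil", "mozzarella"}))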

 

6. Totino’s Uses 25 Different Recipes for Pizza Rolls

Supply chain issues during the COVID-19 pandemic hit the food industry hard, major distributors included. At the same time, customers were preparing to spend long periods at home by stocking up on their grocery-store favorites, including the popular frozen-aisle snack Totino’s Pizza Rolls. To keep up with demand and keep shelves stocked, scientists at parent company General Mills made up 25 different recipes for pizza rolls, with small substitutions like cornstarch for tapioca starch, so they could just use whichever one was most convenient at any given time. They’re not the only company to adjust to supply chain issues with recipe changes, but 25 variations is certainly a strong commitment to keeping this favorite slumber-party snack in stock. It’s a good thing they’re not beholden to an incredibly strict list of national standards — but to be fair, if America were to enshrine one pizza product, it would probably be pizza rolls.

 

 

Source: Delicious Facts About Pizza


Fact of the Day - SLOTHS

sloth-sm.jpg

Did you know... Between their serene movement and their permanent smiles, it’s hard to not love sloths. Hanging high up in the trees of Central and South America, both two- and three-toed sloths have captured so many human hearts that some cry at the mere sight of them. (Have you ever seen a baby sloth wearing pajamas? Now you have.) Because sloths often prefer to keep their distance from us, there’s still a lot to learn about them — but what we do know is fascinating. After all, how many animals have miniature ecosystems in their fur and helped avocados survive? These six facts about sloths will have you running toward your nearest sloth rescue.

 

1. Sloths Navigate Mostly By Touch

981b4cec8f09f009272cc3b573e987f7--sloth-

Sloths don’t have great hearing or eyesight, so they navigate the world primarily by touch using their incredible spatial memory — and they have a keen sense of smell, which helps them find food. Their vision is especially bad; sloths have a condition called monochromacy, meaning they have no cone cells in their eyes at all. This makes them not only colorblind, but mostly blind in dim light and completely blind in bright light. Three-toed sloths can’t even see 5 feet in front of them.

 

2. For Tree-Dwelling Animals, They Have a Terrible Sense of Balance

GettyImages-888112696.jpg?itok=xandG6eA

Despite living high up in trees, sloths have little use for balance; they hook themselves onto trees firmly (so firmly, they can sleep suspended), and move very slowly. Since sloths don’t need the same level of motion control as many other mammals, the mechanisms that help a human or a squirrel, for example, find their footing in a tree eroded over generations. When sloths do lower themselves to the ground, usually for their once-per-week trip to the bathroom, they have a lot of trouble moving around gracefully.

 

3. Sloths Are Weirdly Good Swimmers

cd77309f-8117-4404-a3cf-cee68d918718_155

Sloth senses, their musculature, and even their ears have evolved almost perfectly for a narrow set of circumstances: hanging from trees, and moving slowly around trees. On the ground, they’re clumsy and vulnerable. So it might surprise you that they’re actually kind of speedy swimmers. Amazingly, they move three times as quickly in the water as they do in trees. The gas in their stomachs makes them surprisingly buoyant, so all they have to do is paddle those big long arms to cross even wide rivers in the Amazon.

 

4. The Three-Toed Pygmy Sloth Is Critically Endangered

xLoBFO8B1WA.jpg?size=320x213&quality=96&

Three-toed pygmy sloths are the smallest in both size and population. They’re about 40% lighter than brown-throated sloths, and only became recognized as a distinct species in 2001. Sadly, they are critically endangered, meaning they have an extremely high risk of becoming extinct. Three-toed pygmy sloths started evolving separately from their larger counterparts on Escudo de Veraguas, an island that became isolated from mainland Panama around 9,000 years ago. Researchers still don’t know a lot about their diet, habitat, or even their population — it could be anywhere between 500 and 1,500. Most sloths are not endangered. However, the maned three-toed sloth, which lives along a small stretch of rainforest coastline in southeastern Brazil, is considered vulnerable.

 

5. Sloth Fur Contains an Entire Ecosystem

its-that-time-of-year-again-4-e163709980

Each strand of sloth fur contains microcracks, which, along with the creatures’ extremely slow speed, allows algae and fungi (some that aren’t found anywhere else) to grow freely. That algae turns green and creates an extra layer of camouflage for the animals during rainy seasons. Sloth fur is also home to multiple unique species of moths that rely on sloths for survival. When sloths descend to the forest floor for their weekly bathroom break, the moths lay eggs in the dung, which then hatch and fly up to the trees to return to the sloth’s fur. When the insects die and decompose, they fertilize the algae, creating more camouflage and, perhaps, a nutritious snack for grooming sloths. Gross as it may sound, that dirty fur could hide some medical miracles, thanks to some sloth-exclusive fungi. One researcher found at least 28 distinct strains growing on a three-toed sloth, some with the potential for treating diseases — including breast cancer.

 

6. Elephant-Sized Sloths Used to Roam All of North America

2010_1108_Sloth_lead.jpg

Today, sloths seem elusive. They live exclusively in lowland forests in Central and South America, and spend most of their time camouflaged high up in the treetops. However, they were far more commonplace for our human ancestors of the late ice age, who could have encountered sloths the size of elephants as far north as Alaska and the Yukon Territory. One fossil was even found more than 8,000 feet above sea level in the Rocky Mountains. Large clawed ground sloths (Megalonyx) grew to about 10 feet long and weighed around 2,200 pounds. Shasta ground sloths were a little smaller, and had a much narrower habitat, but were still quite large at 9 feet long and up to 550 pounds. You may have extinct giant sloths to thank for avocados existing, since they were one of a handful of large mammals able to swallow an entire avocado pit and pass it in a new location, allowing more trees to grow. Once those giants disappeared, however, humans had to take over cultivating avocados themselves.

 

 

Source: Fun Facts About Sloths


Fact of the Day - SCIENTIST

julia-koblitz-RlOAwXt2fEA-unsplash-300x2

Did you know.... For most of human history, scientists haven’t been called “scientists.” From the ancient Greeks to 18th-century Enlightenment thinkers, terms such as “natural philosopher” or the (unfortunately gendered) “man of science” described those who devoted themselves to understanding the laws of the natural world. But by 1834, that pursuit had become so wide and varied that English academic William Whewell feared that science itself would become like “a great empire falling to pieces.” He decided that the field needed a simple word that could unify its disparate branches toward one goal — and the inspiration for this word came from someone who wasn’t a “man” of science at all.

 

Scottish mathematician and science writer Mary Somerville’s book On the Connexion of the Physical Sciences is a masterwork of science communication. Published in 1834, it’s often considered the very first piece of popular science, a work that successfully described the complex scientific world for a general audience. Crucially, it also framed the pursuit of science as a connected, global effort and not as fractured professions siloed in separate “societies.” While writing a review of Somerville’s book, Whewell used his new word to describe the men and women striving for this previously unknown knowledge. Much like an “artist” can create using a variety of media, so too can a “scientist” seek to understand the world in a variety of ways.

 

Some argue that the scientific method was first used by a Muslim natural philosopher in the 11th century CE.
During the Islamic Golden Age (mid-seventh to mid-13th centuries, often concentrated in Baghdad), Muslim thinkers expanded human knowledge with advancements in astronomy, engineering, music, optics, manufacturing, and (some argue) by creating the very bedrock of modern science itself, the scientific method. At its most basic, the scientific method is a framework that guides scientists toward facts by using hypotheses tested with controlled experiments. Working mostly in Cairo in the early 11th century, polymath Ibn al-Haytham used this method to produce some of his greatest breakthroughs in optics, including his work on the camera obscura (an optical device that was a forerunner of the modern camera). By the 13th century, al-Haytham’s work had been anonymously translated and found its way into the hands of Roger Bacon, an English philosopher who embraced al-Haytham’s empirical approach and formed the foundations of modern European science.

 

The history of “scientist”
Today is a red-letter day for readers of The Renaissance Mathematicus; I have succeeded in cajoling, seducing, bullying, bribing, inducing, tempting, luring, sweet-talking, coaxing, coercing, enticing, beguiling Harvard University’s very own Dr Melinda Baldwin into writing a guest post on the history of the term scientist, in particular its very rocky path to acceptance by the scientific community. First coined by William Whewell at the third annual meeting of the British Association for the Advancement of Science in 1833 in response to Samuel Taylor Coleridge’s strongly expressed objection to men of science using the term philosopher to describe themselves, the term experienced a very turbulent existence before its final grudging acceptance almost one hundred years later. In her excellent post Melinda outlines that turbulent path to acceptance, read and enjoy.

 

J.T. Carrington, editor of the popular science magazine Science-Gossip, achieved a remarkable feat in December of 1894: he found a subject on which the Duke of Argyll (a combative anti-Darwinian) and Thomas Huxley (a.k.a. “Darwin’s bulldog”) held the same opinion. Carrington had noticed the spread of a particular term related to scientific research. He himself felt the word was “not satisfactory,” and he wrote to eight prominent writers and men of science to ask if they considered it legitimate. Seven responded. Huxley and Argyll joined a five-to-two majority when they denounced the term. “I regard it with great dislike,” proclaimed Argyll. Huxley, exhibiting his usual gift for witty dismissals, said that the word in question “must be about as pleasing a word as ‘Electrocution.’”

 

The word? “Scientist.”

177px-John_Pettie_(1839-1893)_-_George_D

Duke of Argyll

 

huxley.jpg

Thomas Huxley

 

Today “scientist” is not only an accepted title—it is a coveted one. To be a “scientist” is to be someone with an acknowledged right to make knowledge claims about the natural world. However, as the 1894 debate suggests, the term has a fraught history among English-speaking scientific practitioners. In retrospect, Huxley and Argyll’s rejection of “scientist” might seem merely quaint, even petty. But the history of the word “scientist” is not just a linguistic curiosity. Debates over its acceptance or rejection were, in the end, not about the word itself: they were about what science was, and what place its practitioners held in their society.

 

william-whewell.jpg

William Whewell

 

The English academic William Whewell first put the word “scientist” into print in 1834 in a review of Mary Somerville’s On the Connexion of the Physical Sciences. Whewell’s review argued that science was becoming fragmented, that chemists and mathematicians and physicists had less and less to do with one another. “A curious illustration of this result,” he wrote, “may be observed in the want of any name by which we can designate the students of the knowledge of the material world collectively.” He then proposed “scientist,” an analogue to “artist,” as the term that could provide linguistic unity to those studying the various branches of the sciences.

 

Most nineteenth-century scientific researchers in Great Britain, however, preferred another term: “man of science.” The analogue for this term was not “artist,” but “man of letters”—a figure who attracted great intellectual respect in nineteenth-century Britain. “Man of science,” of course, also had the benefit of being gendered, clearly conveying that science was a respectable intellectual endeavor pursued only by the more serious and intelligent sex.

 

“Scientist” met with a friendlier reception across the Atlantic. By the 1870s, “scientist” had replaced “man of science” in the United States. Interestingly, the term was embraced partly in order to distinguish the American “scientist,” a figure devoted to “pure” research, from the “professional,” who used scientific knowledge to pursue commercial gains. “Scientist” became so popular in America, in fact, that many British observers began to assume that it had originated there. When Alfred Russel Wallace responded to Carrington’s 1894 survey he described “scientist” as a “very useful American term.” For most British readers, however, the popularity of the word in America was, if anything, evidence that the term was illegitimate and barbarous.

 

nature-masthead.jpg?w=500&h=250

 

Feelings against “scientist” in Britain endured well into the twentieth century. In 1924, “scientist” once again became the topic of discussion in a periodical, this time in the influential specialist weekly Nature. In November, the physicist Norman Campbell sent a Letter to the Editor of Nature asking him to reconsider the journal’s policy of avoiding “scientist.” He admitted that the word had once been problematic; it had been coined at a time “when scientists were in some trouble about their style” and “were accused, with some truth, of being slovenly.” Campbell argued, however, that such questions of “style” were no longer a concern—the scientist had now secured social respect. Furthermore, said Campbell, the alternatives were old-fashioned; indeed, “man of science” was outright offensive to the increasing number of women in science.

 

In response, Nature’s editor, Sir Richard Gregory, decided to follow in Carrington’s footsteps. He solicited opinions from linguists and scientific researchers about whether Nature should use “scientist.” The word received more support in 1924 than it had thirty years earlier. Many researchers wrote in to say that “scientist” was a normal and useful word that was now ensconced in the English lexicon, and that Nature should use it.

 

However, many researchers still rejected “scientist.” Sir D’Arcy Wentworth Thompson, a zoologist, argued that “scientist” was a tainted term used “by people who have no great respect either for science or the ‘scientist.’” The eminent naturalist E. Ray Lankester protested that any “Barney Bunkum” might be able to lay claim to such a vague title. “I think we must be content to be anatomists, zoologists, geologists, electricians, engineers, mathematicians, naturalists,” he argued. “‘Scientist’ has acquired—perhaps unjustly—the significance of a charlatan’s device.” In the end, Gregory decided that Nature would not forbid authors from using “scientist,” but that the journal’s staff would continue to avoid the word. Gregory argued that “scientist” was “too comprehensive in its meaning … The fact is that, in these days of specialized scientific investigation, no one presumes to be ‘a cultivator of science in general.’” And Nature was far from alone in its stance: as Gregory observed, the Royal Society of London, the British Association for the Advancement of Science, the Royal Institution, and the Cambridge University Press all rejected “scientist” as of 1924. It was not until after the Second World War that Campbell would truly get his wish for “scientist” to become the accepted British term for a person who pursued scientific research.

 

Tracing the acceptance or rejection of “scientist” among researchers not only gives us a history of a word—it also provides insight into the self-image of scientific researchers in the English-speaking world in a time when the social and cultural status of “science” was undergoing tremendous changes. Interestingly, the history of “scientist” shows that the word’s adoption cannot be straightforwardly associated with the professionalization of the sciences. “Scientist” was used in America to separate scientific researchers from “professionals.” In Britain, many researchers viewed “scientist” as a term that threatened their social and intellectual identity, a term that would open science up to any “Barney Bunkum” rather than confirm it as a selective, expert endeavor. Perhaps those who denounced the word might have been reassured by a glimpse into the future of the “scientist”—or perhaps they would still think that “scientists” might be better off as zoologists, chemists, and physicists.

 

 

Source: The word “scientist” dates back only to 1834  |  The History of Scientist


Fact of the Day - ANIMALS

antarctic-400x284.jpg

Did you know... Did you know that rats giggle when tickled, or that Norway once knighted a penguin? Where can you find the world’s only egg-laying mammals? Read all about it with this compilation of the most intriguing animal facts from around our website, and learn more about the furry, feathered, and scaly friends we share our world with.

 

1. Koala Fingerprints Are Almost Indistinguishable From Those of Humans

1682591593_9947.jpeg

Every fingerprint is unique, but that doesn’t mean they’re easy to tell apart — especially since humans aren’t the only species that’s developed them. Chimpanzees and gorillas have fingerprints too, but it’s actually koalas — far more distant on the evolutionary tree from humans — whose prints are most similar to our own. This was first discovered by researchers at the University of Adelaide in Australia in 1996, one of whom went so far as to joke that “although it’s extremely unlikely that koala prints would be found at the scene of a crime, police should at least be aware of the possibility.” That discovery lent support to one of the primary theories in the centuries-long debate over the purpose of fingerprints and their swirly microscopic grooves: They help grasp. Koalas’ survival depends on their ability to climb small branches of eucalyptus trees and grab their leaves to eat, so the fact that they developed fingerprints — which assist in that action — independently of primates millions of years ago is likely no coincidence.

 

2. Some Bats Sing Love Songs

567f164f0c898a6361d8899b9a56068e--outdoo

Anyone who’s ever serenaded their sweetheart has more in common with bats than they might think. In 2009, researchers at the University of Texas at Austin and Texas A&M studied the vocalizations of Tadarida brasiliensis — the Brazilian free-tailed bat, more commonly known as the Mexican free-tailed bat — and found the tunes to be surprisingly nuanced love songs. Though difficult for humans to hear, the songs consist of unique syllables that combine to form three types of “phrases”: chirps, buzzes, and trills. The males combine these phrases in different ways to attract females — and to warn other males to stay away. What makes this especially remarkable is that, until recently, bats weren’t thought to communicate with one another in such a structured way. But when the researchers listened to recordings of two free-tailed colonies in Austin and College Station, Texas, they discovered that they “use the same ‘words’ in their love phrases,” according to lead researcher Kirsten Bohn.

 

3. Cats Can Be Allergic to People

e8f7dfde75d619ea145de97389ed2a37.jpg?nii

If you love cats but can’t have one of your own because you’re allergic, the feeling may be mutual. It isn’t common, but cats can be allergic to people. The condition is rare in part because we humans usually bathe regularly and thus don’t shed as much dead skin or hair as other animals (and it’s somewhat unclear how much of a problem human dander may be for felines). That said, cats are fairly sensitive to chemicals and sometimes have a negative reaction to certain perfumes, laundry detergents, and soaps. Cat allergic reactions look much the same as the ones humans get — they may manifest as sneezing, runny noses, rashes, hives, or other uncomfortable symptoms. In rare cases, cats can even be allergic to dogs. (Maybe that’s why some of them don’t get along.)

 

4. The Pattern of Every Tiger’s Stripes Is Unique

4UdIJIOI6s8.jpg?size=320x240&quality=96&

Not unlike human fingerprints, the pattern of every tiger’s stripes is one of a kind. And though those markings are invariably beautiful, they aren’t just for decoration. Biologists refer to tiger stripes as an example of disruptive coloration, as their vertical slashes help them hide in plain sight by breaking up their shape and size so they blend in with tall grass, trees, and other camouflage-friendly environments. Tigers are solitary hunters who ambush their prey, so the ability to remain undetected while on the hunt is key to their survival. Markings also differ among subspecies, with Sumatran tigers having the narrowest stripes and Siberian tigers having fewer than the rest of their big cat brethren.

 

5. Norway Once Knighted a Penguin

chinstrap-penguin-antarctica-january-201

Before he was a knight, Sir Nils Olav was a king — king penguin, that is. The flightless seabird was made both mascot and an honorary member of the Norwegian King’s Guard after the battalion visited the Edinburgh Zoo in 1972 and Major Nils Egelien had the idea to adopt a penguin. Sir Nils (he’s named for both Egelien and former King of Norway Olav V) quickly ascended through his country’s military ranks, receiving a promotion each time the King’s Guard returned to the zoo around performances for the Edinburgh Military Tattoo. The 2008 knighthood took place before 130 guardsmen and a crowd of several hundred people, during which King Harald V of Norway read out a citation describing Sir Nils as a penguin “in every way qualified to receive the honor and dignity of knighthood.” The penguin knighted in 2008 wasn’t the original Nils Olav, however. He was preceded by two others, inheriting their name and title when they went to the great penguin colony in the sky. (Penguins often live about 15 to 20 years, though some king penguins can live over 40 years in captivity.)

 

6. Ravens Can Remember Human Faces

artworks-000016411945-ajxp5l-crop.jpg?43

Ravens are smart — really smart. Studies have shown that they can use tools, remember human faces, and even plan for the future. This behavior cuts both ways for humans: Edgar Allan Poe’s favorite birds have demonstrated a tendency to both favor people who show them kindness and hold grudges against those who treat them poorly. These preferences aren’t fleeting, either — they may last for years. Raven intelligence is comparable in some cases to that of chimpanzees, which are among the smartest members of the animal kingdom. What’s more, they aren’t the only ones upending the “bird brain” stereotype: Other members of the corvid family — namely crows, jays, and magpies — have displayed exceptional intelligence as well. So the next time you encounter a raven, be sure you get on its good side. You may make a new friend who won’t forget you anytime soon.

 

7. Reindeer Eyes Change Color — They’re Golden in Summer and Blue in Winter

small_1679380274-bd4325c2dc.jpeg

Rudolph’s nose may have been red, but his eyes were blue — except in the summer, when they would have been golden. That’s because reindeer eyes change color depending on the time of year, which helps them see better in different light levels. Their blue eyes are approximately 1,000 times more sensitive to light than their golden counterparts, a crucial adaptation in the dark days of winter. Only one part changes color, however: the tapetum lucidum, a mirrored layer situated behind the retina. Cats have it, too — it’s why their eyes appear to glow in the dark. This part of the reindeer retina shines a different hue depending on the season.

 

Click the link below ⏬ to learn more about Animals.

 

 

Source: Our Most Interesting Facts About Animals


Fact of the Day - WOODSTOCK

1508165-woodstock-unused-ticket_600.jpg

Did you know.... The Woodstock Music & Art Festival, billed as “3 Days of Peace & Music,” began on August 15, 1969, on a dairy farm in the town of Bethel in New York’s Catskill Mountains. The festival defied expectations, drawing somewhere between 400,000 and 500,000 people (far more than the 50,000 anticipated to attend). It turned into a history-making celebration of counterculture, defining the freewheeling, peace-loving spirit of 1960s hippie culture. Here are 10 facts you might not know about arguably the most influential music festival in history.

 

1. No Town Wanted to Host the Festival

2812.jpg?width=300&quality=85&auto=forma

Woodstock’s organizers — Miami Music Festival alum Michael Lang, Capitol Records executive Artie Kornfeld, and entrepreneurs John Roberts and Joel Rosenman — searched various upstate New York locations, originally trying to put the event in the town of Woodstock and then the town of Saugerties before settling on the town of Wallkill. But residents objected, and the Wallkill town board passed an ordinance prohibiting gatherings of more than 5,000 people, officially banning the festival on July 15 and leaving the organizers scrambling to find a new venue just weeks before the event.

 

2. A Dairy Farm Was Rented Out for the Event

Sullivan County dairy farmer Max Yasgur stepped in and rented out his 600-acre farm for the event. “I never expected this festival to be this big,” he said at the time. “But if the generation gap is to be closed, we older people have to do more than we have done.” Yasgur, who died in 1973 at the age of 53, put the property — located in Bethel, about 60 miles from Woodstock — up for sale in 1971. He’s also remembered for stepping in and providing milk, cheese, and butter when the festival ran out of food, as well as free water, while others had been charging for it. “How can anyone ask for money for water?” Yasgur said at the time.

 

3. The Festival Was Free — But Wasn’t Meant To Be

AF1QipP8T4YuowqWNGatr6oh1XSV4RSyPYJYvLn2

About 100,000 tickets ranging from $6 for a day to $18 for the weekend had been sold ahead of time, but so many people arrived early that organizers didn’t have time to build the ticket booths, among other things. “You do everything you can to get the gates and the fences finished, but you have your priorities,” Lang told The Telegraph. “People are coming, and you need to be able to feed them, and take care of them, and give them a show. So you have to prioritize.”

 

4. Creedence Clearwater Revival Was the First Band to Sign On

The legendary lineup of 32 acts included Jimi Hendrix, Joan Baez, Santana, The Grateful Dead, The Who, Janis Joplin, and Crosby, Stills, Nash & Young. But the first one to officially sign on was Creedence Clearwater Revival. “Once Creedence signed, everyone else jumped in line and all the other big acts came on,” the band’s drummer Doug Clifford said. “The next acts to sign on the dotted line were Jefferson Airplane, Joe Cocker, and Ten Years After.”

 

5. Some of Music’s Biggest Names Passed on Woodstock

711.jpg

Not all musicians were eager to join the festival, including The Beatles, The Doors, Bob Dylan, Led Zeppelin, The Rolling Stones, and Joni Mitchell. Scheduling conflicts were the culprit for Zeppelin (they played in New Jersey’s Asbury Park that weekend) and The Stones (frontman Mick Jagger was shooting a movie in Australia), while, unbeknownst at the time, John Lennon was about to quit The Beatles the following month. As for The Doors? “We were stupid and turned it down,” guitarist Robby Krieger later admitted.

 

6. A Young Martin Scorsese Helped Direct a Woodstock Documentary

Just a few days before the concert, Lang and Kornfeld made a deal with director Michael Wadleigh to film a documentary. Among the team that Wadleigh — who had followed President Richard Nixon on his campaign trail in 1968 — put together was recent NYU film school graduate Martin Scorsese as the assistant director. “At one point, Marty tried to take a nap in a pup tent under the stage,” cameraman Hart Perry told Rolling Stone. “He knocked over the pole, and the whole thing collapsed. He had claustrophobia and was screaming for somebody to help him. But he wasn’t Martin Scorsese yet, he was just some schmuck from Little Italy.” The Warner Bros. documentary — simply titled Woodstock — went on to win the Oscar for Best Documentary Feature.

 

Click the link below ⏬ to read more about Woodstock.

 

Source: Things You Might Not Know About Woodstock


Fact of the Day - ODDLY NAMED FOODS


Did you know.... Whether you’re venturing out to a new restaurant or sharing a home-cooked meal with friends, chances are most of the foods you encounter are pretty self-explanatory. Mashed potatoes, scrambled eggs, or chocolate cake — even without much of a description, it’s usually easy to discern what will be gracing your plate. But even some of the culinary delights that have become standard American fare carry unusual monikers that may have you wondering about their mysterious origins. Let the backstories of these seven oddly named foods give your mental palate a refresher.

 

1. Hot Dogs


Despite originating in Germany, hot dogs are an essential American food — an estimated 7 billion hot dogs are served up each summer in the U.S. alone. And with that many sausages on the grill, the name for a food that doesn’t involve any actual dogs has become completely mainstream. But where did it come from? Some food historians believe the wieners got their name from early songs and jokes insinuating that sausage meat came from dogs. But a more likely story is that German butchers named early American frankfurters “dachshund sausages” after the long and skinny dogs they resembled, a name eventually shortened to “hot dogs.”

 

2. Sweetbreads


Beware the common confusion about sweetbreads: They’re neither sugary nor baked. That’s because sweetbreads aren’t a pastry at all, but instead a type of offal (organ meat). These small cutlets are actually the thymus and pancreas glands of calves or lambs. While sweetbreads may seem off-putting to some diners, they’re prized by many chefs as exceptionally tender with a mild flavor — which could explain their misleading name. The first recorded mention of the British dish dates to the 1500s, a time when “bread” (also written “brede”) was the word for roasted or grilled meats. Because the meat is more delicate and flavorful than tougher cuts, the name “sweetbread” likely took hold.

 

3. Head Cheese


There’s no dairy involved in making head cheese. In fact, the dish more closely resembles a meatloaf than a slice or wedge of spreadable cheese. That’s because head cheese is actually an aspic — a savory gelatin packed with scraps of meat and molded into a sliceable block. As for the name, head cheese gets its label in part from the remnants of meat collected from butchered hog heads. And while not a cheese, the dish is likely named as such because early recipes called for pressing the boiled meats together in a cheese mold. Head cheese is popular throughout the world, especially in Europe, where it’s known by less-confusing names: In the U.K., butchers call the dish “brawn,” and meat-eaters in Germany refer to it as “souse.”

 

4. Pumpernickel Bread


Most bread names are self-explanatory: cinnamon-raisin, sandwich wheat, potato bread. So what exactly is a “pumpernickel”? Originating in Germany, this dark and hefty bread combines rye flour, molasses, and sourdough starter for a dough that bakes at low heat for a whole day. Many American pumpernickel bakers speed up the process by using yeast and wheat flour, which makes for a lighter loaf that reduces (or altogether removes) pumpernickel’s namesake side effect: flatulence. German bakers of old acknowledged the bread’s gas-inducing ability with an unsavory nickname: pumpern meaning “to break wind,” and nickel for “goblin or devil.” Put together, the translation reads as “devil’s fart” — a reference to how difficult pumpernickel could be on the digestive tract.

 

5. Jerusalem Artichokes


If there’s any vegetable that suffers from bad branding, it may just be the Jerusalem artichoke — a bumpy root crop that’s not actually an artichoke and has no link to Israel. Unlike true artichokes, which produce purple, thistle-like flowers that turn into above-ground edible bulbs, Jerusalem artichokes are the edible tubers of a sunflower species, similar in appearance to ginger root. Jerusalem artichokes were first called “sunroots” by Indigenous Americans, who shared the tubers with French explorers in the early 1600s. When the explorers brought them back to France, the vegetables were dubbed topinambours. Italian cooks renamed them girasole, meaning “sunflower,” in reference to their above-ground buds. As sunroots spread throughout Europe, girasole morphed into “Jerusalem” through mispronunciation, and “artichoke” was added in reference to the vegetable’s flavor.

 

6. Dutch Baby Pancakes


Few foods are universal, but pancakes may be the exception. While they may be made with culture- or region-specific ingredients, nearly every country has some variation of the pancake. Cue the Dutch baby, a baked treat with a name that misidentifies both its origin and size. Also known as a German pancake or pfannkuchen, Dutch babies are a cross between popovers and crepes, baked in a large skillet or cast-iron pan and topped with fruit, syrup, or powdered sugar. So how did these dinner-plate-sized pancakes get their most popular moniker? Culinary legend attributes the misnomer to the daughter of a Seattle restaurant owner, who mistakenly subbed “Dutch” for “Deutsch” (meaning German). The eatery downsized its versions into miniature servings and deemed the pancakes “Dutch babies.”

 

7. Grasshopper Pie

Insects are protein-packed main courses in many countries, but the idea of chomping down on bugs isn’t appealing to all stomachs. Luckily, this bug-branded dessert is entirely free of its namesake insect. Grasshopper pie features a cookie crust and a fluffy filling made from whipped cream, mint and chocolate liqueurs, and green food coloring. Fittingly, grasshopper pie often makes its appearance at springtime celebrations just as the leaping bugs are emerging from their winter slumber, but that’s not where the name comes from. Though it hit peak popularity during the 1950s and ’60s, grasshopper pie is actually a dessert version of the grasshopper cocktail, which debuted some four decades earlier. Philibert Guichet, a New Orleans restaurateur, invented the drink as part of a cocktail competition in 1919, naming his creation for its bright green hue.

 

 

Source: The History of Some Oddly Named Foods


Fact of the Day - CAMELS


Did you know.... At first glance, camels may seem like a biological anomaly, a large mammalian creature somehow capable of surviving in the world’s hottest and most desolate climates. That’s because camels have been forged by the desert itself, with every piece of their biology seemingly purpose-built to survive anything Earth’s arid landscapes can throw at them. These six facts about camels will give you a deeper understanding of their astonishing biology, their importance in world history, and their surprising evolutionary roots.

 

1. There Are Only Three Camel Species


Only three species of the Camelus genus are still living today. The first two — the dromedary or Arabian camel (Camelus dromedarius) and the Bactrian or Mongolian camel (Camelus bactrianus) — are both domesticated. The most obvious difference between the two is that a dromedary has only one hump, while a Bactrian camel has two. The dromedary hasn’t existed in the wild for roughly 2,000 years, and was, as its alternative name suggests, first domesticated in the Arabian Peninsula. The Bactrian camel, named for the ancient Persian province of Bactria around modern Afghanistan, Uzbekistan, and Tajikistan, was a popular pack animal in Asia and filled caravans along the ancient Silk Road. The third camel, known simply as the wild Bactrian camel (Camelus ferus), looks much like its domesticated cousin. In fact, it was once believed to be simply a feral version of its similarly named relative, but genetic analysis confirmed that the two camels diverged some 1.1 million years ago. With a population of fewer than 1,000, the wild Bactrian camel is the eighth-most critically endangered mammal on the planet. Today, it’s mostly found in remote parts of the Gobi Desert in Mongolia and China.

 

2. A Camel’s Hump Stores Fat — Not Water


A persistent camelid myth is that these “ships of the desert” store water in their hump(s). Instead of H2O, camels actually store fatty tissue there that can be drawn upon when food is scarce — a common occurrence when traipsing across the desert. Because they store fat vertically in their humps (and not throughout their bodies), camels can also dissipate excess heat more quickly. The humps may not be the most elegant thermoregulating solution Mother Nature has ever devised, but the arrangement certainly works for camels.

 

3. North America Used to Have Its Own Native Camels


Some 11,700 years ago, the last native North American camelid, in the genus Camelops, went extinct — a quiet end for a lineage that first evolved on the continent some 44 million years ago, during the Eocene epoch. The creature stood approximately 7 feet tall, weighed around 1,800 pounds, and looked remarkably similar to today’s dromedary, though experts are not 100% certain whether Camelops had a hump like its Arabian cousin. Camelops died out around the same time as other large North American mammals, such as mastodons and giant beavers, likely due in part to increased human hunting. Camels (imported from the Mediterranean and the Middle East) did make a small comeback in the U.S. during the mid-19th century, when the U.S. government thought they would be the perfect beasts of burden for delivering supplies to military outposts in the Southwest. The Army’s short-lived Camel Corps was soon disbanded due to the Civil War and other factors, however, and the herds were sold off or let loose, with many roaming wild for years. Feral camels were spotted in the deserts of the Southwest up until the early 20th century.

 

4. A Camel Can Drink 30 Gallons of Water in 13 Minutes


If a dromedary or Bactrian camel comes upon a chance oasis or watering hole, it’s game time. Camels don’t lap water like most mammals; instead, they suck it down almost like a vacuum, drinking as much as 30 gallons in 13 minutes. Such a deluge would be fatal for humans (and most other mammals) because the sudden flood of fluid would dilute our blood and cause our cells to swell and burst, but camels don’t have this problem: They can essentially store water in their first stomach compartment, or rumen, which releases it into the bloodstream gradually over several hours. Camels also have superpowered blood cells capable of expanding to twice their normal size. As a camel uses up water and fatty tissue, its hump(s) will actually begin to deflate, but give a camel some food and a lot to drink, and it’ll spring back as good as new.

 

5. Camels Are Nearly as Fast as Racehorses


They may not look it, but camelids are extremely agile and can hit 40 miles per hour in a dead sprint. Over longer distances, camels can maintain roughly 25 mph for an hour, or 12 mph for eight hours. That doesn’t quite match a horse in terms of sheer sprinting speed, but things change when the race moves to a camel’s home turf: A camel’s feet are much larger than a horse’s hooves, and that extra surface area helps camels stay on top of the sand rather than sinking in. Because camels are so impressively fast — the Greek root within “dromedary” means “running,” after all — camel racing is a popular sport in many parts of the world, and has been for millennia, especially in the Arabian Peninsula. Camel racing became more formalized in the late 20th century, and is now a major sport drawing participants from around the globe.

 

6. Camels Are Perfectly Built for the Desert


Camels, both Bactrian and dromedary, are purpose-built for the desert. Yes, their fat-storing humps, impressive thermoregulation, oval-shaped blood cells, and wide feet all aid these incredible creatures as they traverse some of the world’s most arid landscapes — but that’s really only the beginning of a camel’s fine-tuned, desert-ready biology. For example, camels have three sets of eyelids and two rows of eyelashes for batting away sand and dirt. Although it may seem counterintuitive, a camel’s furry coat keeps it from sweating, insulates it from the heat, and also keeps it warm when the temperature drops (it can get deathly cold when the sun sets in the desert). And because the sand can sometimes be scorching, camels have leathery, heat-resistant pads on their knees, elbows, feet, and sternum so they can lie down without getting burned. They even lack a certain skin fold found in other animals, allowing air to keep circulating under their bodies when they’re lying down. Even their lips and tongues are extra hardened so they can eat prickly desert plants that other animals have to pass up. Thanks to millions of years of evolution, the camel is truly one of the desert’s greatest masterpieces.

 

 

Source: Sturdy Facts About Camels


Fact of the Day - WINE


Did you know.... Wine has conquered the world. In 2021, global wine consumption topped 23.6 billion liters, or roughly 9,440 Olympic-size swimming pools’ worth of vino. Here are some more surprising facts about reds, whites, and rosés, from their long and illustrious history to the reasons you might want to avoid drinking wine left over from shipwrecks.

 

1. People Have Been Making Wine for Thousands of Years


Between 2007 and 2010, archaeologists excavated a cave near Areni, Armenia, which contained the remnants of an ancient winemaking operation. They unearthed a press for crushing grapes, jars for fermentation and storage, ceramic cups, and the remains of grape vines, skins, and seeds. (The organic material had been preserved by a hardened layer of sheep dung, which protected it from decay.) By analyzing a compound called malvidin, which gives grapes their reddish-purple color, the researchers estimated that the site was active around 4000 BCE, during the Copper Age, making it the oldest known winery. Even earlier biomolecular evidence of viniculture dates from about 6000 BCE. The oldest type of wine still made today is Commandaria, a sweet dessert wine from Cyprus blended from red and white grapes, which dates back to 2000 BCE.

 

2. Almost All Wines Are Grown From a Single Species of Grape


The mother vine of almost all wines today is Vitis vinifera, a grape likely native to Western Asia. Over millennia, winemakers have domesticated and cross-bred the vines to create subspecies with distinct colors, flavors, and suitability to different climates. About 8,000 cultivars exist today, including well-known varieties like pinot noir, chardonnay, sauvignon blanc, and merlot. V. vinifera vines have long been cultivated in regions with hot, dry summers and mild winters, such as Italy, Spain, and France, but the U.S., Chile, Australia, and South Africa are also major producers, among other countries.

 

3. In the 19th Century, an Insect Nearly Wiped Out France’s Wine Industry


One downside of basing a global wine industry on a single grape species is that it can be decimated by a particular disease or pest. A grape-attacking aphid called phylloxera, native to North America, was accidentally imported to France in the 1860s. Whereas indigenous American grape species had built up resistance to the pest, French winemakers had guarded the purity of their vines to ensure their wines’ high quality, which left the plants susceptible to the foreign bug. As a result, phylloxera tore through French vineyards in the late 19th century and ultimately forced French winemakers to graft their vines onto phylloxera-resistant American rootstocks to save them.

 

4. A Wine’s Terroir Can Be Legally Protected


Nineteenth-century French vintners initially resisted the plan to graft their precious vines onto American rootstocks over fears that their wines’ special flavor profile, or terroir, would suffer. “Terroir” refers to the whole environment in which the grapes are grown — soil and water characteristics, temperature, altitude, and so on — as well as the flavor and aroma that these factors impart. A wine’s terroir can be a legally protected entity in France, where the AOC system (short for Appellation d’Origine Contrôlée) classifies wines according to their region of production and quality. It’s this system that dictates, for example, that Champagne can come only from the Champagne region, protecting its unique terroir.

 

5. California Wines Beat French Rivals in a Blind Taste Test

In a legendary event dubbed “The Judgment of Paris,” held on May 24, 1976, French wine experts preferred upstart California wines to the finest French ones in a taste test. An English wine shop owner staged the event to drum up business, and everyone assumed a French victory was a foregone conclusion. The nine experts swirled, sniffed, and sipped a variety of reds and whites, then tallied the number of points they awarded to each sample; shockingly, a cabernet sauvignon and a chardonnay from Napa Valley won out, proving that countries besides France could produce the world’s finest wines. A bottle of each winning wine is now in the Smithsonian collection.

 

6. Wine Is Often Found in Shipwrecks


Wine has been traded around the world for centuries, and the vessels transporting it have occasionally run into trouble. Today, intact bottles of wine can sometimes be found among the wreckage of sunken ships. Experts advise against drinking their contents, but some curious gastronauts can’t be dissuaded. In 2009, a hurricane disturbed the seafloor around Bermuda and revealed still-corked bottles in the wreck of a Civil War-era ship; a panel of tasters pronounced the wine “awful.” Champagne recovered from a 170-year-old shipwreck in the frigid Baltic Sea gave tasters hints of cheese and “wet hair.” Among the recent finds yet to be sampled are unopened bottles of wine from the wreck of the HMS Gloucester, which sank while carrying the future King James II of England, and bottles that went down with a British steamship after a German torpedo attack during World War I.

 

 

Source: Amazing Facts About Wine


Fact of the Day - VACUUMS


Did you know... In early 1901, English inventor Hubert Cecil Booth traveled to Empire Music Hall in London to witness a strange invention — a mechanical aspirator designed to blow pressurized air to clean rail cars. Booth later asked the demonstrator why the machine (invented by an American in St. Louis) didn’t simply suck up the dust rather than blow it around. “He became heated,” Booth later wrote, “remarking that sucking out dust was impossible.” Unconvinced, Booth set about creating such a contraption, and later that same year he filed a patent for a vacuum machine he named the “Puffing Billy.”

 


This machine wasn’t quite as fancy as modern Dust Busters, Dirt Devils, Hoovers, or Dysons. Instead, the Puffing Billy was red, gasoline-powered, extremely loud, and big — really big. So big, in fact, that the machine needed to be pulled by horses when Booth’s British Vacuum Cleaner Company made house calls. Once outside a residence, 82-foot-long hoses snaked from the machine through open windows. Because turn-of-the-century carpet cleaning wasn’t cheap, Booth’s customers were often members of British high society; one of his first jobs was to clean Westminster Abbey’s carpet ahead of Edward VII’s coronation in 1902. By 1906, Booth had created a more portable version of the Puffing Billy, and two years later, the Electric Suction Sweeper Company (later renamed Hoover) released the “Model O,” the first commercially successful vacuum in the United States. House cleaning has sucked ever since. 

 

Engineers in the 19th century used horses to power boats.


Although animal-powered boats trace their origins back to Roman times, team boats (also known as “horse boats” or “horse ferries”) became especially popular in the 19th-century United States. Horses walked either in a circle or in place to turn wheels that moved the boat forward. The first commercially operated horse boat in the U.S. (indeed, the first commercially operated animal-powered boat of any kind in the country) plied the waters of the Delaware River around 1791. Well suited to journeys of only a few miles, horse boats were soon sailing the waters of Lake Champlain and the Hudson River before eventually spreading to the Ohio and Mississippi rivers and the Great Lakes. By the 1850s, these horse-powered creations had largely been replaced by paddle steamers — the beginning of the horse’s decades-long slide from supremacy to irrelevancy, at least when it comes to transportation.

 

 

Source: The earliest vacuum cleaners were horse-drawn.

