DarkRavie Posted April 25

Fact of the Day - BICYCLES

Did you know.... The Wright brothers are best known for their historic flight at Kitty Hawk, North Carolina, in 1903, but years before the siblings made aviation history, they were busy running a bicycle shop in western Ohio. Wilbur Wright and his younger brother Orville had long dreamed of gliding through the wild blue yonder, but it would take years of work to finance their costly first attempts. In the 1880s, the brothers undertook their first joint business, a small printing shop in Dayton that churned out local newspapers, church pamphlets, and bicycle parts catalogs.

By 1892 the brothers had moved from printing for bicycle companies to starting their own, inspired by their shared passion for cycling; Wilbur reportedly loved leisurely rides through the countryside, while Orville was known for participating in bike races. The Wright Cycle Company initially offered repairs and rentals, but as cycling became more popular, the brothers turned to manufacturing their own designs in an effort to compete with the dozens of nearby bike shops. Their first model, the “Wright Special,” was released in May 1896, followed by the “Van Cleve.” Together, Wilbur and Orville hand-built around 300 bikes per year during their peak production years before 1900, using the profits to fund their flight experiments. By 1908, they had abandoned their shop to focus solely on aeronautics. Today, only five antique Van Cleve bikes exist, two of which remain in the brothers’ hometown at the Wright Brothers National Museum in Dayton.

Wilbur and Orville Wright flew together only one time.

Before takeoff at Kitty Hawk in 1903, the Wright brothers had to decide who would man their one-passenger plane for the first time, making the decision with a coin toss. But even when the duo expanded their planes to two-seaters, they were rarely airborne together, sharing only one flight during their lives. Orville and Wilbur reportedly promised their father they would never fly together because of the risk of a plane crash; the brothers gave their word, which also ensured that one of them could continue their aeronautical work in case of a fatal accident. In September 1908, Orville did survive the world’s first deadly plane crash, during a demonstration for the U.S. Army (his passenger, U.S. Army Lt. Thomas E. Selfridge, did not). The accident, however, didn’t deter Orville or his brother, and two years later the siblings shared their only joint flight, soaring for six minutes while their father watched from the ground. Afterward, Orville took the excited 82-year-old on the sole flight of his life.

Source: Before they built airplanes, the Wright brothers owned a bicycle shop.
Arisien Posted April 25

9 hours ago, DarkRavie said:

The Wright brothers are best known for their historic flight at Kitty Hawk, North Carolina, in 1903, but years before the siblings made aviation history, they were busy running a bicycle shop in western Ohio.

It's a little crazy that bicycles and planes were created within a century of each other, bikes having been invented a couple of decades prior to the brothers' birth. Though not as impressive as the rate of advancement later in the 20th century.
DarkRavie Posted April 26

Fact of the Day - CANARY IN THE COAL MINE

Did you know... Canaries were used in coal mines much more recently than you might have thought.

In 2013, as certain butterfly populations plummeted across North American prairies, Canadian conservationist Cary Hamel explained why people should care. “Butterflies are a bit of a canary in a coal mine,” he told The Canadian Press. “They’re really sensitive to changes in weather. They’re sensitive to changes in habitat loss. They’re sensitive to invasive species and land management. The fact that the Poweshiek skipperling and other prairie butterflies are all declining should really have us stand up and take notice that something is going wrong with our native prairies.”

It’s a textbook example of a canary in a coal mine: something that serves as an early warning sign of a larger issue. These days, anything can fit the bill, from the aforementioned insects indicating a troubled ecosystem to struggling small businesses indicating a troubled economy. But the original canary was a literal one—and it didn’t indicate trouble in a coal mine by chirping.

From Pet to Pit

Mines are a hotbed for dangerous gases collectively known as damps—from Dampf, the German word for “vapor.” Hydrogen sulfide is stinkdamp, so named for its rotten-egg-like odor; the ever-flammable methane makes up firedamp; a carbon dioxide mixture is black damp because flame lamps won’t stay lit in those conditions; and carbon monoxide, the invisible killer, is white damp. The cocktail of gases produced by a mining explosion (i.e., the afterdamp) often features carbon monoxide, and rescue missions can’t succeed or even proceed without knowing whether the atmosphere is unsafe. Before modern detection technologies existed, people relied on canaries to tell them.

The idea is generally credited to John Scott Haldane, a pioneering British physiologist whose experiments—some done on himself and his teenage son—gave us oxygen therapy and a greater understanding of how various gases affect the human body. (The family motto was “Suffer.”) In the late 19th century, Haldane observed that small, warm-blooded animals are affected by carbon monoxide poisoning much more quickly than humans. He recommended that men use them as a natural alarm system in mines.

But celebrating Haldane as the sole genius behind so-called “pit canaries” poses a couple of problems. For one thing, his initial reports in the 1890s centered on the use of mice. Only in the early 20th century, when the existence of pit canaries was already hitting print, did he start mentioning birds in his published work. Moreover, researchers have pointed out that canaries had accompanied miners for decades, if not centuries, before the practice went mainstream. According to writer Jerry Dennis, Austrian Alpine miners adopted canaries as early as the 1690s, and some brought the birds with them when they later migrated to Germany’s Harz Mountains in search of work. Harz miners began breeding canaries to sell as domesticated songbirds—a side hustle so successful that by the 19th century, the region was a premier canary exporter.

All that said, Haldane’s science-backed endorsement of little creatures in general as carbon monoxide monitors no doubt helped legitimize and institutionalize the custom. A 1911 British law mandated that every mine with at least 100 underground workers keep “two or more small birds” on hand.
It’s unclear exactly how canaries so quickly eclipsed other birds as miners’ choice companions, but what made them such a popular pet probably also made them such a popular mining tool. They were singsongy, sociable, bright-yellow birds—the kind you’d notice going sullen and dropping from their perches, even in darkness. That’s what they did when exposed to carbon monoxide, signaling rescuers to beat a hasty retreat.

Miner’s Best Friend

Pit canaries didn’t perish on every journey underground; plenty were revived once they had clean air in their lungs, sometimes thanks to a special cage outfitted with an oxygen pump. But it was hardly a risk-free gig, especially for canaries reused on mission after mission. “In the office of Francis Keegan, state mine inspector, sings a little yellow canary—weakly, very weakly,” one Kansas newspaper reported in 1914. “It hops feebly about but soon tires and returns to its cage. It is waiting to die so that some men may live. … Perhaps the next trip into the mine will be its last.”

Officials did test out other animals. Sparrows briefly seemed promising as “comparatively little sentiment attaches to” them, as Texas’s El Paso Herald put it in 1912, but one mining report claimed that they “do not lend themselves easily to captivity.” The Herald also explained that mice didn’t suffice “because when confined in cages, they are liable to sulk, and it is not always possible to know whether their behavior is attributable to this cause or to gas distress.” Some coal mines, according to a 1913 story out of Washington, D.C., employed “nimble and sinewy” rats to run on tiny treadmills, “presumably after a bit of cheese forever out of reach.” The treadmills powered lights—convenient in dark mines—and if the light went off, it was immediately clear that your rat may have inhaled noxious fumes.

But none of these alternatives could quite compete with the tried-and-true canary. World War I tunnelers, many of whom were miners by trade, even used the birds to detect gas as they hacked toward enemy territory. Canaries continued to participate in mining operations for the better part of the 20th century; Britain didn’t stop requiring mines to keep them on site until the mid-1980s. Some miners were loath to see their feathered friends replaced with electronic monitors. “There is no more reassuring sight down a mine during an emergency than seeing a canary sitting happily on its perch. Anyway, I would much rather trust a bird than batteries,” one former miner told the Leicester Mercury in 1986.

The Canary After the Coal Mine

Canaries’ mining legacy lives on in the phrase canary in the coal mine (and other iterations, e.g., canary in a coal mine), which gained more widespread popularity toward the end of their tenure underground. The Oxford English Dictionary’s earliest citation is from a 1970 journal article in Audubon: “The epidemic rise of emphysema plus a plague of respiratory diseases—these are but the canary in the mine. They alert us to the ultimate catastrophe.”

But the expression existed at least a little before then. Kurt Vonnegut used it in an address to the American Physical Society in February 1969: “I often wondered what I thought I was doing, teaching creative writing, since the demand for creative writers is very small in this vale of tears. I was perplexed as to what the usefulness of any of the arts might be, with the possible exception of interior decoration. The most positive notion I could come up with was what I call the canary-in-the-coal-mine theory of the arts.
This theory argues that artists are useful to society because they are so sensitive. They are supersensitive. They keel over like canaries in coal mines filled with poisonous gas, long before more robust types realize that any danger is there.”

Vonnegut mentioned his “canary-bird-in-the-coal-mine theory” again in a 1973 interview with Playboy. Artists, he explained, had “chirped and keeled over” in reaction to the Vietnam War before society at large had cottoned on to its horrors.

While figurative canaries can be found in any sphere, many live in the natural world. Sentinel species, as they’re known, are organisms that scientists track in order to learn about the broader well-being of an ecosystem—including threats that can, like carbon monoxide in a mine, endanger humans. Mussels clue us in to water contaminants, and lichens know whether we’re breathing dirty air. Sentinel species can also help industries operate more effectively and less disruptively within ecosystems; the weight fluctuations of northern elephant seals, for example, can tell fisheries more about where and when to fish than you might have thought.

The canary’s own watch may have ended, but it gave us an easy way to grasp what sentinel species do. In fact, they’re even sometimes called “ecosystem canaries.”

Source: The Dark History Behind the Phrase ‘Canary in the Coal Mine’
DarkRavie Posted April 27

Fact of the Day - MAY DAY

Did you know... How did a pagan springtime festival become a day for workers’ rights?

Ask the average American to describe May Day, and they might mention a pole wrapped in ribbons and springtime pagan rituals. It’s true that May 1 does have associations with those things in the Northern Hemisphere, but the holiday has held a greater meaning ever since the Second Industrial Revolution at the end of the 19th century. That’s when the International Socialist Conference declared May 1 International Workers’ Day.

The Pagan Origins of May Day

Spring is packed with various holidays and celebratory rituals—there’s the spring equinox, Beltane, and Easter, to name some of the more well-known ones. In some places, May 1 is also a day of springtime festivities. Some believe May Day traces back to the Roman festival of Floralia; others link it to Beltane. Various sources of May Day inspiration, whether Roman, Celtic, or from elsewhere in Europe, link the festivities to fertility, birth, and the start of the spring farming season. People would celebrate by gathering flowers and dancing around the Maypole, a tree or pole with ribbons tied around it.

How May Day Became a Rally for Workers’ Rights

May Day transformed from a day reserved for spring festivals to a day of workers’ remembrance when U.S. workers took to the streets in Chicago on May 1, 1886. Factory workers were fed up with working up to 16-hour days under dangerous conditions. They went on strike to demand more reasonable workday hours, and the protests eventually erupted in violence. On May 3, several workers were injured or killed in a clash with police, and the following night a bomb detonated in a crowd of police officers monitoring an assembly in Haymarket Square. Police responded by opening fire on protestors, killing several and injuring 200. Today, socialists and supporters of workers’ rights commemorate the incident, known as the Haymarket Affair, each year on May 1.

The date’s modern connotations haven’t erased its original significance as an ancient spring festival. Since emerging from pagan traditions, May Day has grown into a secular holiday in Europe, with celebrations including cake, music, and, of course, a dance around the Maypole. Signs of it are easy to miss in the U.S., but in other countries it’s a public holiday that takes the place of Labor Day. Activists around the world often plan marches and protests for the first of May.

Source: What Is May Day?
DarkRavie Posted April 28

Fact of the Day - SHARKS ARE OLDER THAN WHAT?

Did you know... For most of human history, sharks were considered fairly harmless, a perception that changed forever with the 1975 release of Steven Spielberg’s Jaws. Relative to the total amount of time sharks have been around, however, “most of human history” is just the blink of an eye. Having existed for somewhere between 400 million and 450 million years, these fish are older than just about anything you can think of — including Saturn’s rings.

While the planet Saturn itself formed some 4.5 billion years ago alongside the rest of our solar system, its rings formed between 10 million and 100 million years ago, making them relatively recent in the grand scheme of things. And just for fun, here are some other things sharks are older than: trees (which are roughly 390 million years old), the North Star (70 million years), and the Atlantic Ocean (150 million years). That’s right — sharks have existed longer than one of the oceans they now swim in, as the Atlantic didn’t form until the supercontinent Pangea broke apart.

Sharks weren’t recorded making noise until 2025.

They don’t call them silent killers for nothing, and indeed part of what’s made sharks so frightful in the collective imagination is the idea that their attacks, while vicious, are noiseless. But sharks aren’t entirely silent. University of Auckland scientists have recorded a rig shark making a clicking sound, most likely by snapping its teeth. Their research was published in March 2025, marking a breakthrough in our understanding of these ancient creatures. The sound, which the sharks made an average of nine times in a 20-second span, wasn’t produced while swimming or feeding. The researchers believe it isn’t used as a means of communication, but rather is something sharks do when startled or stressed.

Source: Sharks are older than Saturn’s rings.
DarkRavie Posted April 29

Fact of the Day - BUBBLES WITH SAP

Did you know... The allure of bubbles spans the ages: Take, for example, their use in 16th-century art as a reminder of life’s fleetingness, or their 2014 induction into the National Toy Hall of Fame. And if you’re looking for a new take on the age-old toy, check out the Jatropha curcas shrub, aka the bubble bush. The tropical plant — native to Central America, Mexico, and parts of South America and the Caribbean — is known for a sticky sap that could be called Mother Nature’s own bubble solution. When plucked from the bush, branches leak a foamy liquid and can be used as an all-in-one bubble wand; just snap the twig in half and blow.

Bubble bushes get their standout sap from naturally occurring chemicals called saponins, foaming compounds used in soaps and food. Related to poinsettias and castor oil plants, Jatropha curcas is similarly toxic to humans and animals if eaten (and can also cause skin rashes and irritate eyes). Despite its toxicity — along with the fact that bubble bushes are considered an invasive species throughout much of Asia, where they’re commonly found — the plant does have benefits beyond bubbles. Jatropha bushes are vigorous growers perfect for creating natural fences and boundaries, and they’re known for effectively combating soil erosion around waterways and in regions with heavy rainfall. Some parts of the plant are used in pharmaceuticals to treat infections and diseases such as cancer. And while research is pending, it’s suspected that these bubbling wonders could be an environmentally friendly source of biofuel.

There’s a plant that produces shampoo-like suds.

Bitter ginger goes by a few names: the Latin Zingiber zerumbet, the Hawaiian “Awapuhi Kuahiwi,” or the common term “shampoo ginger.” Regardless of the alias, this versatile plant is sought out for its multipurpose tropical bloom. Found in moist environments near rivers and waterfalls, its pine cone-like flowers mature each spring and produce an oozy liquid that can be used as a fragrant replacement for shampoo. Native to Asia, the plant is also found in Hawaii, where botanists consider it a “canoe plant,” the term for greenery originally brought to the islands by traveling Polynesian settlers. Bitter ginger is deeply rooted in Hawaiian culture — it’s believed to be an earthly form of the life-creating deity Kane — and all parts of the plant are used. Roots add flavor to food and are used in herbal medicines, leaves serve as eco-friendly food wraps, and its oils are the star of perfumes and cosmetics.

Source: There’s a plant with sap that can be used to blow bubbles.
DarkRavie Posted April 30

Fact of the Day - DUFFEL BAG

Did you know.... The practical tote packs in a lot of history. Whether you’re going away for a weekend or smuggling illicit goods in and out of the country, at some point, you’ve probably grabbed a duffel bag. These cylindrical totes are typically made of heavy-duty canvas or some other durable material and make for a more portable packing solution than conventional luggage. But why is it known as a duffel bag?

The History of Duffel Bags

The history of duffel bags begins with duffel, a thick woolen cloth named after the town in Belgium’s Antwerp province that produced it circa the 15th century. The material was used for a variety of goods that needed to stand up to wear, making it an easy choice for storage, clothing, and work applications. There were duffel coats, duffel blankets, duffel parkas, and, eventually, duffel bags. According to the Oxford English Dictionary, duffel came into English from Dutch in the 1600s, and the first printed use of duffel bag dates to 1768, when a newspaper called the Public Advertiser printed an ad featuring “an old green duffil [sic] bag.”

Early on, duffel bags were more like duffel pouches tied at the top, and it’s likely they were often used by sailors. It was easy to see the appeal of a bag that could be filled to the brim when needed or easily collapsed and tucked away when it wasn’t, so it’s no surprise that in the late 1800s, the duffel was adopted by militaries in the U.S. and abroad as part of a serviceman’s assigned gear. By World War II, duffels had evolved from their early laundry-bag aesthetic into a bag that was oblong, zippered, and able to stand freely.

The term duffel bag began to gain popularity in the early 20th century, a trend that continued when soldiers returned home and brought their duffel bags with them. Usage of the term began to tick up in the 1950s and increased each decade as more people considered it synonymous with a rugged and practical solution for transporting clothes and goods. (The word also sometimes referred to the contents of a bag rather than the bag itself. If, for example, you left your bag open and the clothes inside spilled on the floor, someone might say that you “dropped your duffel.”)

Come the 1980s, it was common to see mentions of duffel bags in the press—but not for back-to-school sales. Cocaine dealers often utilized the bags to heft their valuable kilos. In 1985, the bags were confirmed not to be animal-proof: Some cocaine that was tossed out of a drug-smuggling plane over Georgia was discovered by a bear and ingested, a tale that loosely inspired the 2023 movie Cocaine Bear.

What’s the Difference Between a Duffel Bag and a Gym Bag?

Some people use duffel bag and gym bag interchangeably, but they’re not quite the same. While duffels are usually made from heavy materials, gym bags are often made with lighter synthetics for easier air movement. A gym bag may also have pockets for shoes or supplements. Duffels are typically larger than gym bags, which carry lighter workout clothes.

Duffel Bag vs. Ditty Bag

A duffel bag may also be confused for a ditty bag, but as with a gym bag, there are significant differences. A ditty bag—which dates to the 1860s—is a smaller tote that was originally meant for a sailor’s smaller possessions, like grooming or sewing supplies.
And in the 20th century, volunteers called the care packages sent to troops in Vietnam “ditty bags.” These days, you might use one in your purse or while traveling to keep smaller items in one spot rather than floating around loose in your bag.

If you’re unsure about what category your bag falls into, consider whether it’s cylindrical, sturdy, and able to be slung across the shoulders. If it checks those boxes, you can probably declare it a duffel. And if it’s well made, it could last for quite a long time: In 1944, an American soldier named William Kadar lost his duffel while serving overseas during World War II. In 2013, it was returned to him after being found in France, hardly the worse for wear.

Source: Why Is It Called a “Duffel Bag”?
DarkRavie Posted May 1

Fact of the Day - ZIPPERS

Did you know... Though they’re now commonly found on everything from jackets to couch cushions, zippers were actually originally intended for shoes. The history of this versatile mechanism can be traced to 1893, when inventor Whitcomb Judson was granted a patent for a rudimentary zipper that he called the “clasp-locker,” an alternative to lengthy shoelaces. The patent described it as “a series of clasps securable to the flaps of the shoes” that engaged or disengaged automatically with a movable hand device. Judson displayed his creation at the 1893 Chicago World’s Fair, though it was met with minimal interest. Despite several refinements to the product, this zipper ultimately never caught on due to its sharp hooks and the resulting frequency of torn fabric.

Swedish inventor Gideon Sundback later improved upon Judson’s design, creating a more reliable version with two rows of metal teeth pulled together by a slider. These hookless fasteners were designed to be used on “shoes, corsets, and other articles of wear,” according to the 1917 patent. The invention caught the eye of New York City tailor Robert Ewig, who sewed the fasteners onto money belts. The belts were rather popular among U.S. sailors, whose uniforms lacked pockets, and in 1918, the Navy bought 10,000 fasteners to incorporate into flight suits. In 1923, the BFGoodrich Company added Sundback’s fasteners to rubber boots and coined the word “zipper,” an onomatopoeia based on the sound they made.

A design student was paid $35 to create Nike’s “Swoosh” logo.

In 1971, Carolyn Davidson was a graphic design student at Portland State University in Oregon. One day, a man named Phil Knight overheard her lamenting to a classmate about her inability to afford art supplies. Knight approached the student with an offer to design a logo for his new shoe company, Blue Ribbon Sports — later renamed Nike. Davidson created the now-iconic “Swoosh” and charged Knight $35 (roughly $275 today) for her work. The following year, Nike debuted its first shoe, which featured Davidson’s logo. In 1983, Knight invited Davidson to the Nike offices, where she was awarded a gold ring and 500 shares of stock as an additional thanks. Because of six subsequent stock splits, those 500 shares equal 32,000 shares today — upward of $2.3 million in value.

Source: The first zippers were for shoes.
DarkRavie Posted May 2

Fact of the Day - MENOPAUSE

Did you know.... The wide-ranging symptoms and timing of menopause have caused some confusion. Like Mercury retrograde, menopause is blamed for a constellation of unpleasant or inexplicable events. Women in their forties or even younger hold menopause responsible when they experience a wide array of physical symptoms, like irregular periods, aches and pains, mood swings, weight gain, insomnia, fatigue, and bursts of hunger—but they may be misattributing these annoyances. It turns out that many women don’t know when menopause actually starts.

According to a February 2025 poll of 1,068 women by The Ohio State University, 61 percent of respondents believed they’ll hit menopause in their forties. And while some will, the National Institute on Aging reports the average age of menopause onset is actually 52, though it can be hard to tell where the dividing line is.

Perimenopause vs. Menopause

Some women may mistake perimenopause symptoms for menopause itself. The medical definition of perimenopause is when your period is irregular by at least seven days for a minimum of 10 months—so when that happens (and if you’re keeping track), you know you’ve started perimenopause. Menopause itself, defined as the point in a woman’s life when her period permanently stops, hasn’t actually started until 12 full months have passed without menstruation occurring.

Perimenopause (peri- means “near” or “around”) is the phase when your body begins to prepare for menopause. Your hormone levels start to fluctuate, causing mood swings, hot flashes, problems sleeping, and irregular periods. It usually begins when you’re in your mid-forties and lasts for eight to 10 years before menopause starts in your early fifties, according to the Cleveland Clinic.

That being said, the age when menopause starts can really fluctuate from person to person. A study from the Turkish Journal of Obstetrics and Gynecology found that health, socioeconomic, and hereditary factors can affect its timing. Some of those factors include your mother’s age when she hit menopause, the age you got your first period, how long you were in your mother’s womb, whether you used birth control pills, the stability of your cycle before perimenopause, how many children you have, your weight, use of cigarettes and alcohol, the amount of physical activity you get, education level, socioeconomic status, whether you’ve had an ovary removed, the amount of lead in your blood, how much fat you eat, and more.

Menopause: Still a Medical Mystery

Ultimately, we may never be able to pinpoint an exact age that perimenopause and menopause begin, thanks to all of those factors. Plus, women’s healthcare—especially menopause—remains under-researched, even though menopause affects 100 percent of the female half of the population. According to findings from Harvard Medical School, 99 percent of preclinical models of aging don’t take the effects of menopause into account, meaning that these guidelines for treating age-related conditions don’t reflect reality. That’s partially because scientists lack the proper lab animals for menopause studies, such as female animals and those that have given birth. Most studies use male animals, and historical research overwhelmingly favors men. Additionally, menopause is quite rare in the animal world—only a few mammals, including chimpanzees and killer whales, are known to go through it.

Source: When Does Menopause Actually Start?
DarkRavie Posted May 3

Fact of the Day - OKAPIS

Did you know.... The 6-foot-tall, roughly 500-pound, famously shy okapi (Okapia johnstoni) can only be found in the wild in the Ituri tropical rainforest of the Democratic Republic of Congo. Its biology features some amazing adaptations: A unique stripe pattern on its rump helps the mammal blend in with shade cast by the rainforest canopy, and its fur is coated in a natural oil that repels moisture, something that rainforests obviously provide in abundance. What’s more, the okapi’s large ears can detect even the slightest disturbance, and okapi mothers communicate with their young in frequencies beyond human hearing.

However, perhaps the okapi’s most useful evolutionary trait is its tongue. Stretching some 12 to 14 inches, it’s long enough to swat flies, clean the okapi’s ears, and even clean its eyelids. The tongue is also prehensile, meaning it can grasp and strip leaves from branches. This is immensely useful, as okapis can eat up to 60 pounds of food every day.

Although okapis live an isolated existence and look like a cross between a zebra and a deer, their tongues give away their genetic lineage. Okapis are the only living relatives of the giraffe, which explains the animal’s nicknames, including forest giraffe, Congolese giraffe, and zebra giraffe. Like okapis, giraffes also sport blue-hued prehensile tongues, and scientists estimate that the two species shared a common ancestor some 11 million to 12 million years ago. Today, unfortunately, okapis live under threat from deforestation, mining, armed militant groups, and hunting. Thankfully, groups like the Okapi Conservation Project are hard at work preserving the habitat of this “Congolese unicorn” for generations to come.

Until 1901, Western scientists thought the okapi was a mythical creature.

For a mammal that can weigh hundreds of pounds, the scientific discovery of the okapi came startlingly late. Throughout the 18th and 19th centuries, Western scientists heard about an “African unicorn” that some Congolese Indigenous peoples called o’api. However, because okapis live in hard-to-reach rainforests and are famously shy, experts dismissed the animal as simply a myth, a cryptid similar to the yeti of the Himalayas or the Sasquatch of the Pacific Northwest. Then, in 1900, British explorer Sir Harry Johnston sent the first hide samples to the Zoological Society of London, and the okapi “myth” transformed into reality. Although finally “found” (at least by Western scientists; local peoples had likely known of the animal for millennia), traces of the okapi’s once-mythical status can still be seen — the creature serves as a mascot of sorts for the International Society of Cryptozoology. Because if the “African unicorn” is real, what else might science turn up next?

Source: Okapis have tongues long enough to wash their eyelids.
DarkRavie Posted May 4

Fact of the Day - FATIGUE VS. TIRED

Did you know.... Confused about whether you‘re tired or truly fatigued? Here’s how to spot the difference and what it might mean for your health.

“I’m tired” is something we’ve all said before, whether after a long workday or when we first wake up. But what if the feeling is more than that? While fatigue and tiredness often get tossed around like they mean the same thing, they’re pretty different. They may feel similar, but understanding the difference can greatly impact your well-being.

What Is Tiredness?

According to WebMD, tiredness is usually a short-term lack of energy that often improves with sleep or rest. UnityPoint Health notes that feeling tired is far more common than experiencing fatigue. Research shows just how widespread it is: The CDC found that 13.5 percent of adults aged 18 and older reported feeling very tired or exhausted most days or every day over a three-month period in 2022. On top of that, women reported higher rates of feeling tired than men.

What Is Fatigue?

Fatigue is more extreme. A fatigued individual experiences consistent mental or physical exhaustion to the point where the feeling impacts their life. For example, they may be constantly unable to concentrate during work or school hours. They may even feel apathetic toward things they typically enjoy or lose motivation to do activities.

Three common types of fatigue—physiologic, secondary, and chronic—are categorized by how long they typically last and their causes. Physiologic fatigue results from the patient’s lifestyle, such as poor sleep or mental strain, and usually improves when their habits are adjusted. Secondary fatigue is caused by another medical condition, including cancer, heart disease, and depression, and typically lasts one to six months; someone with secondary fatigue will typically regain their energy after medical treatment. Chronic fatigue lasts longer than six months, and treatment usually focuses on symptom relief. (A related condition, myalgic encephalomyelitis/chronic fatigue syndrome, or ME/CFS, is a serious illness with digestive and cognitive symptoms in addition to extreme fatigue.)

Common causes of feeling fatigued and tired often overlap, which explains why people confuse the definitions. The UK’s National Health Service shares that people can experience fatigue or tiredness because of several factors, including poor dietary choices, insomnia, exercising too much or too little, and depression. If you’re feeling fatigued or tired and can’t figure out why, it’s a good idea to check in with a doctor.

Source: Fatigued vs. Tired: What‘s the Difference?
DarkRavie Posted May 5

Fact of the Day - EXPRESSION

Did you know... Pipe organs have a little something to do with it. If we say that someone is “pulling out all the stops,” then we mean that they’re holding nothing back and making every conceivable effort to do or accomplish something. The expression is familiar to most speakers of the English language—but where did the phrase come from? What exactly are these stops, and for that matter, why are we pulling them all out?

According to the Oxford English Dictionary, stop in this sense was initially “sometimes vaguely used for ‘note’, ‘key’, ‘tune’ ” as far back as the 16th century. Eventually, though, it came to refer to the rounded handles, switches, or button-like stoppers typically found around the keyboard of a pipe organ, which are called “organ stops” or “stop knobs.” To understand why someone might pull them all out, though, we first need to know a little bit more about how an organ actually works.

In simple terms, the pipes of an organ are essentially gigantic whistles, and make a sound only when air is forced or blown through them. Each individual pipe makes a different note (corresponding to the keys or pedals of the organ), while the pipes themselves are arranged in multiple musical sets, called “ranks,” each of which produces a different kind of tone or musical effect. The notes produced by some ranks will have a softer, mellower tone, for instance, while others might be brasher, shriller, or far more resonant. Depending on its size, the number of ranks or sets of pipes an organ has might range from just a few to in the hundreds, allowing an organist to produce a host of different sounds and tonal textures on the same instrument. Using the organ stops, they can switch between different ranks as required throughout a performance.

The stops on an organ control the airflow into the pipes, thereby allowing them to produce sound. “Pulling out” a stop removes a slider at the base of each rank of pipes, opening them up to the air passing through the instrument (supplied either by bellows or an electric blower), and ultimately changing the tone of the music being played. Each stop has a name corresponding to the kind of tone color or musical effect that the pipes to which it is connected produce. Stops labeled things like trumpet, tuba, and trombone, for instance, produce harsher, brassier sounds, while the unda maris stop produces a softer, undulating sound, meant quite literally to evoke a “wave of the sea.” A skilled player will often open several stops at once to combine sounds from multiple ranks of pipes to create a richer tone overall.

With lots of pipes sounding at the same time, the volume of the organ increases. Pulling out all of the stops—so that every rank of pipes sounds simultaneously—would therefore theoretically produce the loudest, grandest, and most impressive sound of all (if also a rather cacophonous one). That’s the idea behind the expression pulling out all the stops. The figurative use meaning “make a considerable effort” emerged in the mid-1800s, with the English poet Matthew Arnold credited with its earliest use in an essay published in 1865.
“Proud as I am of my connection with the University of Oxford, I can truly say, that knowing how unpopular a task one is undertaking when one tries to pull out a few more stops in that powerful but at present somewhat narrow-toned organ, the modern Englishman, I have always sought to stand by myself, and to compromise others as little as possible,” he wrote.

Since then, different versions of the phrase have cropped up, but the idea—and the musical theory behind it—remains the same.

Source: Where Does the Expression ‘Pull Out All the Stops’ Come From?
DarkRavie Posted May 6

Fact of the Day - GOLD RUSH

Did you know..... Although the 1848 California gold rush was the largest in American history, it wasn’t the first. That distinction belongs to North Carolina, where in 1799, Conrad Reed, the 12-year-old son of a Hessian Revolutionary War deserter named John Reed, found a 17-pound gold nugget in Little Meadow Creek outside Charlotte. At first — not knowing what his son had stumbled across — the elder Reed used the rock as a doorstop for his home’s front door. It wasn’t until 1802, when he took the rock to a local jeweler, that he began to grasp the enormity of his son’s discovery (although he sold the nugget for far less than it was actually worth).

By 1803, Reed had established the first gold mining operation in the U.S. As local papers reported on his business, nearby farmers began hunting for gold on their own properties by searching shallow riverbeds, a practice known as “placer mining.” When these shallow-lying deposits dried up in the 1820s, companies ditched the gold pans and began excavating lode mines, which required many more workers. Until 1828, North Carolina was the only gold-producing state in the Union, and its gold rush reached its peak in the 1830s and 1840s, when the industry employed nearly 30,000 people. The state’s gold-hued fortunes changed once the first reports of wealth out West arrived in the Carolinas, but Reed never saw the end of his state’s gold-rush boom time, dying a rich man in 1845 with his mine raking in millions.

The California gold rush began only one week before the U.S. gained control of the territory.

When James Marshall, a worker on John Sutter’s sawmill, discovered gold there on January 24, 1848, the California territory was technically still a possession of Mexico. But at the conclusion of the Mexican-American War, Mexico officially ceded the land to the U.S. — one week after Marshall’s discovery, on February 2, 1848. Mexican officials had no knowledge of the momentous discovery made in California when they signed the Treaty of Guadalupe Hidalgo, which brought the war to an end. California papers didn’t even report on the discovery until mid-March, and the East Coast of the U.S. remained unaware until months later. The discovery brought a tidal wave of migration to the territory — so much so that it went from Mexican control to U.S. statehood in just two years. While good news for the U.S. government and a handful of rags-to-riches prospectors, the discovery of gold in the West was devastating for Native Americans, as well as for the majority of miners hoping to strike it big, only to be subjected to back-breaking work with little to show for it.

Source: The first U.S. gold rush was started by a 12-year-old boy.
DarkRavie Posted May 7

Fact of the Day - ODD GRAMMAR RULE

Did you know... You know this rule, even if you don’t know you know it. The English language is full of all sorts of quirks that can be infuriating to non-native speakers. (Imagine learning as an adult that cough, enough, and though all make different sounds.) To those of us who speak English as our first tongue, these nonsensical grammar conventions come as second nature—and some rules are so innate that they rarely get taught in school.

Take this example: a passage tweeted by editor Matthew Anderson from the book The Elements of Eloquence: How to Turn the Perfect English Phrase. It outlines the rules of adjective order when preceding a noun. According to the text, the order goes “opinion-size-age-shape-colour-origin-material-purpose Noun,” and any change made to that organization will make you “sound like a maniac.” For instance, big black dog is a perfectly acceptable phrase, but saying “black big dog” just sounds awkward.

At least that’s the case for native English speakers—people learning English as a second language are tasked with committing that seemingly arbitrary sequence to memory. If they don’t, they risk getting confused stares when asking for “the green lovely rectangular French old silver whittling little knife.”

That’s not the only English rule we know without knowing we know it. Here are a few more, from why the phrasing is my brother’s car and not the car of my brother to why we say “abso-freakin’-lutely” instead of “absolute-freakin’-ly.”

Source: The Odd Grammar Rule Most English Speakers Know But Are Rarely Taught
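An aside for the technically inclined: the ordering rule quoted above is mechanical enough to sketch in a few lines of code. Below is a minimal Python illustration; the small hand-labeled lexicon and the order_adjectives helper are invented for this example (tagging arbitrary adjectives would need real NLP tooling), so treat it as a sketch of the rule rather than anything definitive.

# The book's stated adjective order: opinion-size-age-shape-colour-origin-material-purpose.
ORDER = ["opinion", "size", "age", "shape", "colour", "origin", "material", "purpose"]
RANK = {category: i for i, category in enumerate(ORDER)}

# Hypothetical hand-labeled lexicon covering only the post's example phrase.
LEXICON = {
    "lovely": "opinion", "little": "size", "old": "age", "rectangular": "shape",
    "green": "colour", "French": "origin", "silver": "material", "whittling": "purpose",
}

def order_adjectives(adjectives, noun):
    """Sort the adjectives into the canonical order, then attach the noun."""
    ordered = sorted(adjectives, key=lambda adj: RANK[LEXICON[adj]])
    return " ".join(ordered + [noun])

# The scrambled phrase from the post comes out the way a native speaker would say it:
print(order_adjectives(
    ["green", "lovely", "rectangular", "French", "old", "silver", "whittling", "little"],
    "knife",
))
# -> lovely little old rectangular green French silver whittling knife

One small design note: Python's sorted is stable, so two adjectives in the same category would keep their given order; the rule itself doesn't say how to break such ties.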
DarkRavie Posted May 8

Fact of the Day - SLOGAN

Did you know... Much like the durable gems it refers to, the advertising slogan “A Diamond Is Forever” has endured the test of time. The line was first penned in 1947 and cemented a connection between diamond rings and romance, though it was, ironically, conceived by a woman who never married, opting instead to prioritize her career and spend time with her dogs.

Mary Frances Gerety was a copywriter at the N.W. Ayer & Son advertising agency, where she was assigned to De Beers, a company that controlled the global supply of rough diamonds. At the time, diamonds weren’t as widely associated with love as they are today — before World War II, only an estimated 10% of proposals featured a diamond engagement ring. Many women tended to prefer more practical engagement gifts, such as a car or washing machine. It was up to Gerety to change that perception by convincing couples that diamond rings weren’t just a luxury, but an essential part of a marriage proposal.

While working late on an ad campaign for the company, Gerety realized she’d forgotten to come up with a memorable slogan. According to The New York Times, Gerety later recalled thinking, “Dear God, send me a line,” and jotted down the now-iconic phrase before heading to bed. When she awoke the next morning, she thought the slogan was passable but nothing special. But those four simple words, “A Diamond Is Forever,” proved to be hugely successful. U.S. diamond sales skyrocketed from $23 million in 1939 to an astounding $2.1 billion by 1979. Gerety’s creation was later named the top slogan of the 20th century by Ad Age.

“Diamonds Are a Girl’s Best Friend” was first performed by Carol Channing.

The song “Diamonds Are a Girl’s Best Friend” was popularized by Marilyn Monroe in the 1953 film Gentlemen Prefer Blondes — a musical performance later ranked as the 12th best in film history by the American Film Institute. But the song was originally sung by actress and comedian Carol Channing, who debuted it onstage four years earlier. Channing starred as Lorelei Lee in the original 1949 Broadway production of Gentlemen Prefer Blondes. The show featured music by Jule Styne — who also scored Gypsy and Funny Girl — and lyrics by Leo Robin, who won an Oscar for the 1938 song “Thanks for the Memory” from the Bob Hope film The Big Broadcast of 1938. Together, the pair composed “Diamonds Are a Girl’s Best Friend,” which Channing made famous during a nearly two-year Broadway run. Channing performed her signature song once again in the 1974 Broadway show Lorelei — a spinoff of the original 1949 musical.

Source: The woman who coined the slogan “A Diamond Is Forever” never married.
DarkRavie Posted May 9

Fact of the Day - STRIKEOUT

Did you know... The use of the letter ‘K’ as shorthand for ‘strikeout’ dates back to the earliest days of America’s favorite pastime. When it comes to baseball, the use of the letter K to represent a strikeout is one of the most elegant and concise practices in the sport—especially for headline writers. It should come as no surprise, then, that the originator of the abbreviation was the forefather of the modern sportswriter, Henry Chadwick. The British-born, Brooklyn-based writer for the Long Island Star and The New York Clipper in the mid-19th century is credited with the invention of the box score, one of his many contributions to the game that earned him posthumous election to the Hall of Fame.

Baseball in America developed before television, radio, or even widespread photography, which made newspaper reports of the game crucial to the spread of the sport’s popularity. Chadwick wasn’t the first person to record the runs scored per inning, but Baseball Magazine declared one of his 1859 game summaries “The First Baseball Box Score Ever Published,” and he became known as the founder of the modern scoring system. Many of the shorthands he developed over the following decades are still part of the modern baseball lexicon, including the K. Although these days scorecards use lines to indicate base hits, Chadwick used an HR for home run, a D for double, and so on.

As for making an out at the plate, Chadwick needed an abbreviation for what was known at the time as having “struck three times,” and he is usually credited with coining the term strike-out. In the box score, though, he went with K because the letter S was already taken, and also because he often used the phrase “the batter was struck”; the last letter in struck is K.

Beyond that, a backwards letter K has taken on another connotation in the world of baseball: Fans recognize it as a way of indicating that a batter struck out without swinging at the third strike. MLB officially credits Chadwick with popularizing the K’s usage, which may account for why he was inducted into the National Baseball Hall of Fame in 1938, and why he was the only journalist in it for decades.

Source: Why Does ‘K’ Stand for ‘Strikeout’ in Baseball?
DarkRavie Posted May 10

Fact of the Day - NOT IN THE U.S.

Did you know.... The world’s largest fast-food chain has an estimated 45,000 locations, none of which are located in the United States. It’s called Mixue Ice Cream & Tea, and the popular chain more than doubled its total number of stores in just three years (between 2022 and 2025). Around 90% of Mixue locations are in China, with the rest scattered across 11 other countries in the Eastern Hemisphere, including Thailand, Singapore, Japan, and Australia.

Mixue was founded in 1997 by a student named Zhang Hongchao. It started off as a tiny, lone stall selling frozen treats in China’s Henan province before its formal establishment as a company in 1999. The number of Mixue franchises snowballed after that — a fitting trajectory, given that its mascot is a snowman named Snow King. Today, Mixue sells ice cream, bubble tea, and iced beverages at an affordable cost. The company’s 45,000 locations (as of March 2025) outnumber those of every other global fast-food brand, including even giants such as McDonald’s, which has 43,477 locations worldwide. Mixue’s rapid expansion is partially due to a strategy that prioritizes smaller stores in well-trafficked areas, which ensures low overhead costs and plenty of foot traffic. While analysts believe Mixue may one day expand into the U.S. and Europe, the company is focused on Asian and Oceanic markets for the time being.

There are no Taco Bells in Mexico.

Although the chain was inspired by Mexican cuisine, you won’t find any Taco Bells in Mexico itself. This isn’t for lack of effort, as Taco Bell has tried to break into the market on two separate occasions. The first attempt was in 1992, when the company opened a food cart in Mexico City. But locals were confused by the inauthentic names of menu items and taken aback by the comparatively high prices. Taco Bell tried again in 2007 — a choice Mexican writer Carlos Monsiváis decried to the Associated Press as “like bringing ice to the Arctic.” That time, Taco Bell marketed itself as an American fast-food chain rather than pretending to sell Mexican fare. It opened a location in Monterrey, Mexico, that sold items such as french fries and ice cream, but that, too, failed to take off.

Source: The biggest fast-food chain in the world doesn’t have any U.S. locations.
DarkRavie Posted May 11

Fact of the Day - EXPRESSION

Did you know.... For most of us, literally putting our feet in our mouths isn’t a physical possibility once we reach adulthood. So, where did this curious expression originate?

We’ve all put our foot in our mouths at least once—metaphorically, at least. This vivid little expression commonly refers to making an awkward blunder in a conversation, like mixing up someone’s name or making an ill-timed joke. It first appeared in print in the late 18th century, but its exact origins are murky.

Putting your foot in your mouth came about only after the phrase put your foot in it had established itself in the English language as a way to refer to making a mistake. The first print appearance of that expression was in 1796, in the play Bannian Day by George Brewer, in which one character expresses their uncertainty: “To be sure I an’t now a little at a loss to know whether I’ve made a good hand of this, or whether I’ve put my foot in it.” The fact that the phrase was tapped for a medium that’s meant to be acted out suggests that people may have been saying something similar, if not identical, aloud regularly before it was ever written down. It’s thought that this original expression may first have emerged from the idea of accidentally stepping in something undesirable, like mud or feces.

It’s not clear if putting your foot in your mouth, in particular, was a derivative of this first phrase or if it emerged independently, but it wasn’t until 1879 that the version we know initially appeared. The Iowa-based Waterloo Courier reported that someone was “bound to put his foot in his mouth whenever he [opened] it,” and in 1902, the Atlanta Constitution wrote, “General Bragg has gone and done it again! His happy faculty of putting his foot in his mouth whenever he opens it hangs to him like a toper’s appetite.”

Putting Your Foot in It vs. In Your Mouth: What’s the Difference?

Still, as similar as they may appear to be, there are slight differences between these two phrases. For one, put your foot in it is more often used in the United Kingdom and Commonwealth countries, whereas put your foot in your mouth emerged in and more commonly appears in the American lexicon. But put your foot in it is also a much more general saying. Putting your foot in your mouth is reserved for conversational mishaps—your foot is in your mouth because of a verbal mistake, specifically—while put your foot in it could refer to any erroneous action. So, for example, while accidentally spoiling a surprise gift for someone would be considered both putting your foot in your mouth and putting your foot in it, buying someone a terrible gift would only ever be called putting your foot in it.

Moving away from its original construction, putting your foot in your mouth further evolved in the mid-20th century into the term foot-in-mouth disease for those who can’t help but make constant social blunders. It’s a tongue-in-cheek term referencing foot-and-mouth disease, a real illness that affects livestock with hooves, like cows, pigs, and sheep. Usually, animals that contract the illness are put down to avoid it spreading to others—though luckily for humans with foot-in-mouth disease, the consequences aren’t usually so severe.

Interestingly, however, putting your foot in something may not always be used negatively. A notable exception to the usual association is the phrase’s definition within African American Vernacular English (AAVE).
Instead of referring to an embarrassing mistake, the phrase is used to compliment particularly delicious cooking. If someone made an especially phenomenal meal, they’re said to have put their foot in it. So, next time someone tells us we’ve put our foot in something, we can only hope they’re talking about a dish we’re serving them and not something we’ve just said.

Source: Where Does the Expression ‘Put Your Foot in Your Mouth’ Come From?
DarkRavie Posted May 12

Fact of the Day - UNCLE

Did you know.... One theory says it comes from an Irish word; another says we have the ancient Romans to thank. But the joke is on those theories, because the real story is more complicated than that.

Perhaps you’ve been forced to say it while getting noogies from a bully on the playground. Or maybe you’ve heard it used in a movie where one character roughing up another insists that they “say ‘uncle’,” or admit defeat, before they’re set free. But why uncle—why not aunt or mom or some other authority figure? Where did this bizarre saying come from?

Uncle Meaning and Origin Theories

According to the Oxford English Dictionary, say uncle is a uniquely North American phrase that first popped up in the written record in 1891 in an article from the Iowa Daily Citizen, and it had taken on the meaning “admit defeat” by 1912, when the Modesto News declared “This Time it is ‘Martie’ Graves and Don Johns who made them say ‘Uncle’.”

There are a number of theories about where the phrase came from; one mentioned in the OED posits that we might get this sense of uncle from the Irish word anacol, which means “protection” or “quarter.” But, as David Wilton at Word Origins notes, “This idea was first put forward in the journal American Speech in 1976, but it is speculation with essentially no evidence to support it … there [are no] recorded instances of say anacol or anything similar that would lend credence to the idea of a folk etymology.”

Yet another theory says that we get it from the days of the Roman Empire. Supposedly, young children of that era who were attacked by bullies wouldn’t be set free until they said “Patrue, mi Patruissimo,” or “Uncle, my best Uncle,” because at that time, the brother of one’s father was accorded almost the same level of status and power as one’s dad—therefore, declaring the bully to be your “Best Uncle” was tantamount to granting him a title of respect.

Joking Around

It seems more likely that we have a joke to thank for why we say “uncle” to give up. The joke from the OED’s first citation reads in full:

“A gentleman was boasting that his parrot would repeat anything he told him. For example, he told him several times, before some friends, to say ‘Uncle,’ but the parrot would not repeat it. In anger he seized the bird, and half-twisting his neck, said: ‘Say “uncle,” you beggar!’ and threw him into the fowl pen, in which he had ten prize fowls. Shortly afterward, thinking he had killed the parrot, he went to the pen. To his surprise he found nine of the fowls dead on the floor with their necks wrung, and the parrot standing on the tenth twisting his neck and screaming: ‘Say “uncle,” you beggar! say “uncle.” ’ ”

As Michael Quinion at World Wide Words writes, later versions of the joke have the man’s niece persuading him to buy her a parrot—and that’s why the bird is saying “uncle.” But in a way, we do have Ireland to thank, because according to Wilton, the joke seems to have first appeared in a Dublin newspaper in June 1891. From there, it made its way into a London newspaper and then to the Iowa Daily Citizen, at which point say uncle spread across the country and became part of North American vernacular: “The original joke may have gotten its start in Ireland,” Wilton says, “but it had nothing to do with anacol and did not develop into a stock phrase until it had crossed the ocean.”

Source: Why Do We Say “Uncle” When Admitting Defeat?
DarkRavie Posted May 13

Fact of the Day - ACTUALLY PINK

Did you know.... The Addams Family was filmed in black and white, and it’s difficult to imagine it any other way — not only because it premiered in 1964, when color television was still something of a novelty, but because the aesthetic perfectly suits the show’s gothic vibes. It was hardly dour on set, however, as the iconic living room where most of the action takes place was actually pink. A resurfaced photo of the set shows just how garish many of the colors were — including bright pink walls and rugs — which in hindsight makes perfect sense: As long as nothing looked out of place in the final black-and-white rendering, an object’s real-life hue didn’t make much of a difference. Several of the set’s props were repurposed from another MGM production, The Unsinkable Molly Brown, which was released a few short months before The Addams Family premiered.

The show’s characters made their first appearances in a series of single-panel New Yorker comics by series creator Charles Addams, the first of which debuted in 1938. None of the characters had names in the original comics, however. Most of them, including Morticia and Wednesday, received their monikers when Addams licensed a doll collection based on the cartoon in 1962. And speaking of names, Wednesday’s middle name is — naturally — Friday.

Lurch and Thing were played by the same actor.

In addition to his roles in Star Trek and I Dream of Jeannie, Ted Cassidy is best known for his performance as Lurch in The Addams Family. He reprised his role as the hulking butler in several iterations of the franchise, including the 1973 animated series and the 1977 television movie Halloween With the New Addams Family, as well as in episodes of the 1960s Batman TV series and The New Scooby-Doo Movies. But Lurch wasn’t his only contribution to the show: The disembodied hand known as Thing belonged to Cassidy as well — something many fans didn’t realize at the time, as the on-screen credit listed the character simply as “Itself.” Cassidy had a separate contract for playing Thing and portrayed the character with his right hand, though he occasionally switched to his left to see if anyone would notice. Audiences probably didn’t, just as they likely couldn’t tell when assistant director Jack Voglin portrayed Thing in scenes featuring both of Cassidy’s characters.

Source: The living room set of the “Addams Family” TV show was actually pink.