\"Writing.Com
*Magnify*
    May     ►
SMTWTFS
    
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
Archive RSS
Printed from https://www.writing.com/main/profile/blog/cathartes02
Rated: 18+ · Book · Opinion · #2336646

Items to fit into your overhead compartment


Carrion Luggage

Blog header image

Native to the Americas, the turkey vulture (Cathartes aura) travels widely in search of sustenance. While usually foraging alone, it relies on other individuals of its species for companionship and mutual protection. Sometimes misunderstood, sometimes feared, sometimes shunned, it nevertheless performs an important role in the ecosystem.

This scavenger bird is a marvel of efficiency. Rather than expend energy flapping its wings, it instead locates uplifting columns of air, and spirals within them in order to glide to greater heights. This behavior has been mistaken for opportunism, interpreted as if it is circling doomed terrestrial animals destined to be its next meal. In truth, the vulture takes advantage of these thermals to gain the altitude needed to glide longer distances, flying not out of necessity, but for the joy of it.

It also avoids the exertion necessary to capture live prey, preferring instead to feast upon that which is already dead. In this behavior, it resembles many humans.

It is not what most of us would consider to be a pretty bird. While its habits are often off-putting, or even disgusting, to members of more fastidious species, the turkey vulture helps to keep the environment from being clogged with detritus. Hence its Latin binomial, which translates to English as "golden purifier."

I rarely know where the winds will take me next, or what I might find there. The journey is the destination.
May 9, 2025 at 10:23am
#1088989
As a (mostly) solo traveler myself, this article from Business Insider caught my attention.



Okay. Mostly I wonder why they bothered to publish this. Is it some sneaky pronatalist propaganda? Shill for the travel industry? Just a way to get eyeballs on the site?

For most of my 20s, travel was my whole personality.

Huh. Most of us had "struggle to find an entry-level position and not get laid off" personalities in our 20s.

So, when I started feeling a little stuck in the summer of last year at almost 29 years old, I did what had always worked before: I packed a bag, booked a one-way ticket, and left.

Oh, no. The horror of turning 29.

But then, one afternoon, hiking through the jungle, watching scarlet macaws flash across the sky, I felt it: nothing.

No awe, no wonder, just a dull, creeping awareness that I'd seen this all before, that I could be anywhere, that none of it was touching me the way it used to.


This Just In: people change as they get older. It's not always about "growing up." It's not about "putting away childish things." It's just change. I'm certainly not the person I was in my 20s, and while I can't point to a certain event and say "This was the watershed moment, the point at which my tastes changed," it happened. Perhaps gradually.

Now, travel just felt like I was running away. I wasn't discovering new things about myself. I wasn't growing. I wasn't even particularly interested in where I was.

Okay, well, I'll give her points for recognizing this and not holding on to old habits just because they're old habits.

When I came back to the US, I expected to feel relief. Instead, I felt restless in a way that travel couldn't fix.

Yeah, that's what happens when you've changed and you haven't yet figured out what you want to do next.

A deeply meaningful life isn't found in constant movement; it's built over time. It's in the friendships that deepen over years, not days. The sense of belonging that grows from showing up again and again. The purpose that comes from committing to something, even when it's not thrilling every moment.

Perhaps the problem is looking for meaning when there isn't any. But really... this is not some grand revelation. This is, again, an age thing. It hits some people earlier or later than others, but eventually, I think, most people get there.

Travel will always be a part of my life, but I no longer see it as the answer to everything.

That's because there is no one answer to everything. No, not even religion. I, too, enjoy travel, but I don't see it as some grand solution to all of life's problems. It's just nice to get out and do something different every now and then. If travel is the only thing you do, the "something different" may be settling down, as it was with this author. When I was a kid, there was a house on my road with a shingle outside proclaiming its name: "Journey's End." I didn't understand that as a kid. I think I do now.

Please don't think I'm ragging on this chick. I only question why BI decided to publish this particular piece, which seems more like a blog entry than an opinion piece for a magazine (not that there's anything wrong with blog entries, either). I can't help but think it's some sort of propaganda, but I might be paranoid about that.
May 8, 2025 at 9:16am
#1088924
From Popular Mechanics, some science reporting that I'm not going to get too skeptical about this time, promise.

    Scientists Found Evidence of a Megaflood that Shaped Earth’s Geologic History
The flood may have refilled the entire Mediterranean Basin in just two years.


Not that it shouldn't be approached with a level of skepticism; it's just that I don't know enough about the subject to know what questions to ask. However, I do question the headline: certain people see that headline and immediately think of one particular story involving rain, animals, and an ark. Hopefully the subhead is enough to disabuse one of any such notions, not to mention the article itself.

Ages, epochs, periods, and even eras are often defined by some sort of geologic trauma. The Chicxulub asteroid, for example, pushed the Earth into the Cenozoic Era, and 65 million years later, experts are pondering if we’ve entered a new geologic age induced by modern humans (and their predilection for greenhouse gasses).

If you ever look at a geologic time scale (here's one from Wiki), you might note that geologic ages, eras, epochs, and so on proliferate more in relatively recent times than they did way back when. I've often wondered how much of this is observational and/or recency bias.

As for the "new geologic age induced by modern humans," I don't know for sure, but I thought they discarded the concept of the Anthropocene. Of course, "they" aren't a monolith and there might still be debate.

Around 6 million years ago, between the Miocene and Pliocene epochs—or more specifically, the Messinian and Zanclean ages—the Mediterranean Sea was cut off from the Atlantic Ocean and formed a vast, desiccated salt plain between the European and African continents.

If there's no ocean or sea between the continents, are they separate continents? By ancient convention, Europe and Asia are considered different continents, so I suppose so.

Until, that is, this roughly 600,000-year-long period known as the Messinian Salinity Crisis suddenly came to an end.

Messinian Salinity Crisis would make an excellent name for a 70s prog-rock band.

At first, scientists believed that the water’s return to the Mediterranean took roughly 10,000 years.

I have a bit of an objection to this wording. It's not like scientists took it on faith; there was evidence. It's entirely possible that the evidence was misinterpreted, but, as this article shows, scientists change their views when new or reinterpreted evidence shows up.

But the discovery of erosion channels stretching from the Gulf of Cadiz to the Alboran Sea in 2009 challenged this idea, suggesting instead that a powerful megaflood may have refilled the Mediterranean Basin in as little as two to 16 years.

Other than wondering why the author didn't just say "Strait of Gibraltar," which is probably better known globally than "Alboran Sea" and "Gulf of Cadiz," there's a really, really big difference between 10,000 years and something on the order of a decade. Specifically, about three orders of magnitude: a factor of a thousand.

Quite a few discoveries move whatever needle by a tiny amount, like if there's evidence that the Sun is 5 billion years old but new evidence comes in that suggests 5.1 billion (I'm not saying this happened, just an example my head came up with). But this difference is a major shift. So I'd be looking for lots of evidence to back it up. Extraordinary claims require extraordinary evidence, and I call a thousandfold change extraordinary.
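For the record, here's the back-of-the-envelope arithmetic (a minimal sketch in Python; I'm using 10 years as a round stand-in for the "two to 16 years" figure):

import math
old_estimate_years = 10_000   # the earlier refill estimate
new_estimate_years = 10       # round stand-in for "two to 16 years"
ratio = old_estimate_years / new_estimate_years
print(ratio)              # 1000.0 -> a factor of a thousand
print(math.log10(ratio))  # 3.0 -> three orders of magnitude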

But, again, I'm not saying it's not true; I just don't know much about this subject.

That likely means this flooding event—now known as the Zanclean megaflood—featured discharge rates of roughly 68 to 100 Sverdrups (one Sverdrup equals one million cubic meters per second).

Case in point: I'd never heard of the Sverdrup. So of course I looked it Sverdr-up. Turns out it's used in oceanography. From that Wiki link: "One sverdrup is about five times what is carried at the estuary by the world's largest river, the Amazon."

It shouldn't be surprising that they came up with a larger unit. This is analogous to how star masses are reported in terms of solar masses, or interstellar distances in light-years or parsecs. It keeps us from dealing mathematically with huge numbers, like billions or trillions, or having to use exponents.

At any rate (that's a pun there), even if the numbers (68 to 100 in this case) are comprehensible, the amount of water flow is almost certainly not.
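If you do want to put rough numbers on it anyway, here's a quick sketch (assuming the Wiki comparison above, which puts the Amazon at roughly one-fifth of a Sverdrup):

SVERDRUP_M3_PER_S = 1_000_000   # 1 Sv = one million cubic meters per second
AMAZON_SV = 0.2                 # the Amazon is about one-fifth of a Sverdrup, per that Wiki comparison
for flow_sv in (68, 100):
    cubic_m_per_s = flow_sv * SVERDRUP_M3_PER_S
    print(f"{flow_sv} Sv = {cubic_m_per_s:,} m^3/s, roughly {flow_sv / AMAZON_SV:.0f} Amazons at once")
# 68 Sv = 68,000,000 m^3/s, roughly 340 Amazons at once
# 100 Sv = 100,000,000 m^3/s, roughly 500 Amazons at once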

The article goes into a discussion of the evidence that led to this extraordinary conclusion. I don't know enough to say whether it's compelling or not, but I did find it an interesting read. But then:

This model shows that flooding could have reached speeds of 72 miles per hour, carving deep channels as observed in the seismic data.

Look, I get using nonstandard units to make enormous quantities somewhat manageable in calculations, but switching from metric/SI to "miles per hour?" That, I cannot abide. Pick one. (It's about 115 km/h.)
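And since I did that conversion in my head, here's the one-line check (a mile being defined as exactly 1.609344 kilometers):

MPH_TO_KMH = 1.609344
print(round(72 * MPH_TO_KMH, 1))  # 115.9 km/h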

Now, let's see if I can find a lead singer for Messinian Salinity Crisis. And some musicians. Because I have no talent, either.
May 7, 2025 at 10:01am
#1088865
Once again, Mental Floss tackles the important topics.

    A Brief History of Pizza
Take a bite out of pizza’s past.


Like many kids, I found history classes boring. Later, history became a favorite topic. I often wondered why that's the case. Part of it is because kids lack context, I'm sure. But another part is that they never taught the history of pizza.

The history of pizza is a large pie—half Margherita and half lies.

If you have to order a half-and-half pizza, you have failed at diplomacy and compromise.

The most famous story about its origins, in which the classic tri-color pie was created to honor Queen Margherita of Savoy, is a work of fiction.

And yet, it's the first thing people hear, so they'll stick with the fictional version.

U.S. soldiers did not fall in love with pizza en masse during their time fighting World War II and bring it back to the States.

Pretty sure I've never heard that tale.

And the pizza in New York is not good because of the magical tap water.

That bit, I knew. The pizza there is good because it's New York pizza. While New York City tap water is remarkably good for drinking, it doesn't contribute much to the taste of New York's most perfect food. Nor does it do anything to improve the taste of beer from their local breweries.

Let’s take a look at some iconic pizza styles...

Some of which aren't pizza, but okay.

In 2014, newly-elected New York mayor Bill DeBlasio set off a small international incident when he was photographed eating his pizza with a knife and fork... So, was the then-mayor wrong? Right?

Obviously, he was wrong, as he's a politician.

The answer is both, and that’s because pizza is at once internationally recognizable and completely regional. That’s why some people look at a Hawaiian pie and see the greatest crime ever committed to leavened bread and others see a beautiful story about immigration, intercultural influence, and innovation (or, at least, lunch).

The only thing I love more than watching Chicago vs. New York pizza arguments is watching the pineapple-on-pizza arguments. Well, actually, I love pizza more than any argument, but they still amuse me.

The article goes into the Margherita thing, then:

According to food historian Tommaso Esposito, up until the mid-20th century, pizzas were usually ordered by simply listing the ingredients you wanted on top. Esposito wrote a book all about pizza songs (yes, that’s a thing) from the 16th century up until 1966 and found that none of the songs mentioned specific pizza types by name.

Hey, I still order by listing the ingredients I want on top. Also, how come I don't know any pizza songs?

Neither of those two famous Neapolitan pie varieties would have been possible without tomatoes.

And I'm glad the article acknowledges this. While something resembling pizza undoubtedly existed long before tomatoes were brought over from the Americas (I've seen histories tracing it back to classical Rome), it took the nightshade fruit to really make pizza what it's recognizable as today.

When we think of pizza today, tomatoes—a crop the Aztecs had introduced to the Spanish—often seem like an essential ingredient.

That's a kind way of putting "the Spanish stole tomatoes from the Aztecs."

The Oxford English Dictionary, in fact, defines pizza as a dough “baked with a topping of tomatoes, cheese, and any of various other ingredients.”

I don't accept dictionary arguments, but this one reflects common usage.

Anyone who’s ever had a white pie might blanche at that definition.

Ha ha. I see what you did there.

There’s a written record from Gaeta, about 60 miles up the coast from Naples, dating back to the end of the 1st millennium CE. It lays out an agreement in which someone owes a local bishop 12 pizzas every Christmas and Easter Sunday.

As the article notes, this was in the B.T.E. epoch (Before Tomatoes in Europe).

We don’t have any way to know exactly what that proto-pizza looked or tasted like, but consider what the simplest version of a pre-Columbian-Exchange pizza might entail: a simple Mediterranean flatbread. Kind of like … a pita.

Now here's where the article gets into that linguistic parallelism, something I've wondered about often myself, but never cared enough to look up.

Plenty of sources think this is no accident, and draw a linguistic line straight from pita to pizza. That’s not the only possible etymology for the word, though.

There's one important difference between pita and pizza, though: the former is generally baked on its own, while pizza dough is topped and then baked. Now, I've had things called "pizza" which feature pita or naan or other flatbread, pre-baked, topped with traditional pizza toppings (tomatoes, mozzarella, pepperoni) and then baked again, but I've always thought it's not true pizza. It can be good, though.

If we define pizza as a flatbread with toppings, we can imagine it being “invented” more or less independently by the Ancient Greeks, Egyptians, and Natufians (from modern-day Jordan, who were apparently making bread more than 14,000 years ago).

Yes, putting stuff on bread is as old as civilization, I can accept that. I can also easily see someone putting another hunk of flatbread on top, so I've never truly accepted the "Earl of Sandwich" story for eating something between two pieces of bread. The article backs me up on this, too:

The idea of putting something delicious inside a pizza-like bread likely dates back thousands of years.

They talk about figs as the "something delicious" before going on with:

Eventually, pizza with figs became popular beyond those who ate it out of economic necessity. Wealthier eaters embellished the simple dish with prosciutto, creating a new variation that harkens back to pizza’s historical roots and remains popular today.

This parallels the history of a lot of cheap eats. You take what's available in an area, and it feeds the masses. Then, later, it becomes a gourmet delicacy. Hell, France made basically a national cuisine out of that idea. Snails and frog legs, anyone?

The Hawaiian pie was invented in 1962, according to most accounts, by Sam Panopoulos, a restaurateur living in Ontario. Sam was originally from Greece, and the boat he left on stopped, fortuitously, in Naples, where he first became acquainted with pizza.

Unlike the murky origins of pizza itself, that story checks out. I like it because it's international: Greek, Italian, Canadian, Polynesian, American.

The article also discusses other styles of pizza, like Detroit and Chicago, which I don't consider pizza. Again, though, it can be good.

A different approach to that same long cook time may have given us Ohio Valley-style pizza. One of its defining features is the last-minute additions of cold toppings, including cheese.

Unlike some other regional pizzas, Ohio Valley style tends to stay in the Ohio Valley.

There's a lot more at the link. I won't belabor it further, except to say that regardless of categorization arguments, I only have one piece of advice about pizza, or pizza-adjacent concoctions: if you like it, eat it, and don't listen to those of us who need to be purists or pedants.
May 6, 2025 at 8:39am
#1088807
A few days ago, I shared an article about how to tell if someone is rich. This one's like that, only it's about smart. From Upworthy:

     How do you know someone is very smart? Here are 15 'subtle signs' others notice.
"You can understand both sides of an issue and still think one is wrong."


It's probably a lot easier to tell if someone's stupid. That's easy: they are. Everyone is stupid; even, sometimes, very smart people.

A Redditor named Occyz wanted to know how people tell the difference by asking them to share the “subtle” signs that someone is very intelligent.

Oh, great, an article that summarizes a Reddit thread. In other words, don't believe a word of it. (See? I is smart.)

A big takeaway is people think highly intelligent people are mentally flexible. They are always interested in learning more about a topic, open to changing their minds when they learn new information, and they're acutely aware of what they don’t know.

So, people of questionable intelligence, plus a bunch of AI bots, offering their opinions (or regurgitated AI training) about something that even scientists have a hard time quantifying.

In fact, according to the psychological principle known as the Dunning-Kruger effect, there is a big confidence chasm between highly intelligent people and those who are not. Low-IQ people often overestimate what they know about topics they need to familiarize themselves with. Conversely, people with high IQs underestimate their knowledge of subjects in which they are well-versed.

In fact, starting a paragraph with the words "in fact" does not, in fact, mean that what follows is fact.

Here are 15 “subtle” signs that someone is highly intelligent.

"They don't tell everyone how smart they are" seems to be missing from the list.

Incidentally, the article opens with a big picture of Steve Jobs. Now, there's no denying that Jobs was intelligent. He started a company with a couple of friends in a garage, and by the time he died, it was the most valuable company in the world (based on market capitalization). But he also eschewed evidence-based medicine, quite possibly leading to an early death. I'd argue that's not very smart. On the other hand, had he held out a little longer, Apple wouldn't have been the most valuable company in the world anymore, so maybe he was playing n-dimensional chess and winning? I don't know.

Point is, smart isn't everything, just like money isn't everything. You can be smart and still a raging asshole, like Jobs reportedly was.

I won't bore everyone with comments on every single item in the article. Hopefully, the ones I mention here will be enough to get my point across.

1. They admit their mistakes

"When someone can admit a mistake and they know they don’t know everything."


This sounds more like learned behavior. It is a good trait to have in most situations, I think, but I can't say it correlates with general intelligence. There are a few on the list like this.

2. Great problem-solvers

On the other hand, this one strikes me as the actual definition of intelligence.

3. They appreciate nuance

"'I can hold two opposing ideas in my head at the same time.' Anyone who is willing to do that is intriguing to me.


I'd agree with that. I've said many times that life isn't binary; it's not all good/bad, black/white, whatever. I'm just not sure one has to be a genius to do it.

5. They have self-doubt

The great American poet and novelist Charles Bukowski once wrote, “The problem with the world is that the intelligent people are full of doubts and the stupid ones are full of confidence,” and according to science, he’s correct.


Yeah, well, Yeats wrote it first (I think): "The best lack all conviction, while the worst / Are full of passionate intensity."

9. They can simplify big ideas

Okay, but to me, that's less a marker of intelligence and more a sign of... I don't know. Empathy? What do you call wanting other people to understand something? And also of being so well-versed in the "big idea" that they can explain it to the uninitiated.

Richard Feynman, who gets my vote for smartest dude of the 20th century (edging out the perennial icon Einstein), reportedly once said, "If I could explain it to the average person, it wouldn't have been worth the Nobel Prize." And yet, he spent a lot of time explaining stuff.

I wish I could find out who said something like "If you really want to learn something, figure out how to explain it to a fourth-grader." I thought it was Feynman, but I'm having trouble finding the quote. If indeed it exists.

11. They're humble

"They don't continually need to tell people how intelligent they are."


Okay, so up there, where I said, '"They don't tell everyone how smart they are" seems to be missing from the list.'? I was wrong.

See what I did there?

There are more in the article, as you might have inferred based on the number-skipping (and the fact that I told you I was going to skip some), because you're smart.

Now, just to be clear, I'm not saying these are bad things. Everything on that list is what I'd consider a desirable character trait, to one degree or another. I just question their correlation with what we call intelligence, which, as I noted above, is notoriously hard to quantify in general. Sure, there are IQ tests, but I don't think such tests measure all possible forms of intelligence.

And, just to reiterate something I've said before, it's best not to conflate intelligence with knowledge. Someone who does well on trivia questions has a lot of stuff memorized, but that doesn't necessarily mean they can figure something out that's unfamiliar to them. It's like, I don't know, if you have the dictionary memorized, you'll be able to make more Scrabble words, but will you be able to place them on the optimal score-enhancing spaces? The former is knowledge; the latter may be intelligence.

In conclusion, there's a whole lot of other dimensions to a person than just "smart." Or how much money they have. Which also aren't necessarily correlated. I mean, everyone knows, or should know, that the only thing that matters is how attractive you are.
May 5, 2025 at 9:52am
#1088758
I couldn't let this hate-review from SFGate go by without comment.

    A stay at the decrepit tomb of what was once the Vegas Strip's coolest hotel
When it opened, Vegas had never seen anything like Luxor. Now, it's one of the most hated hotels on the Strip.


Who the hell wrote this? Someone working for the competition? There's a lot of competition there, but I'd suspect Harrah's (owner of Caesar's Palace).

And within 10 minutes of my arriving at Luxor, it was clear why it’s one of the most reviled hotels in Las Vegas.

Really? Because within 10 minutes of me arriving there, I'm already relaxed and ready to gamble.

I pulled into the porte cochere shortly before noon and headed inside with my luggage in tow. Hoping to stow my bag while I explored the resort, I walked over to the bell services desk. The employee gestured for me to come closer, then angrily pointed behind me.

“That’s the line,” she said.

I turned to see a queue about 10 feet away. It extended all the way through the lobby to the casino floor.


Okay, a few things to unpack here.

Let's start with the last bit. That makes it sound like the lines at Disney. This is bullshit. There's not much space between the front desk and the casino floor.

Now, and here's the major, epic fail of this takedown piece: the author is channeling Yogi Berra here. "No one goes there anymore. It's too crowded."

So now I begin to suspect that this writer, "mortified" (her own word) at her faux pas, simply got a bad first impression and then found everything she could to rag on.

Then, by some miracle, I got a text: My room was ready. I passed two broken moving walkways, a closed cafe and a long, blank wall lined with employee-only doors before finding the ancient-looking bank of elevators.

Yeah, I know that route. I also know the quicker, alternative route, which takes you through the casino floor. Had she gone that way, she might have written about how the hotel forces you through the noisy, flashy, money-sucking part of the first floor. As for broken walkways, yeah, that happens in an aging building. I've never seen the place not having some sort of construction going on.

Finally, it's not like elevators were a thing in ancient Egypt. The least they can do is style them like older elevators.

When the first one opened, the electrical panel was exposed, wires spilling out. The doors shuddered shut, and the ascent began. Because Luxor is a pyramid, the elevators are more like funiculars, climbing sideways at a 39-degree angle.

Okay, okay, I'll grant that the exposed wires, which seem to be confirmed by a pic in the article, are a major fail on the part of maintenance and/or management. While there are laws about under-21s in casino areas in Vegas, plenty of families stay in the hotels. I'm not a big "think of the children" person, but kids do have a tendency to get curious about stuff like that.

The elevators rattle uncontrollably, shaking the occupants like a martini all the way up. They’re also incredibly slow. I was on the 21st floor, and it took over a minute to get there.

Waaah, they're slow. I think of them as Wonkavators. They are a bit rumbly and shaky, but that's part of their charm. As I put it to anyone in there with me (captive audience), "Hey, we came here to gamble, right?"

Things did not improve when I reached the room. As I closed the door behind me, I saw that there was no deadbolt, no bar lock, no privacy latch.

Okay, first of all, I've been in lots of hotels, from fleabags in rural Montana to the Ritz-Carlton in DC, so I don't recall specifically if the Luxor rooms lack those features. Seems to me they do have them, but it's possible that some rooms don't.

Second, the pyramid is not the only place to stay there. They have two "tower" facilities with more traditional elevators and rooms without sloping walls. As I recall, you only pick the pyramid rooms by your own choice.

Does Luxor mistrust its guests so much that it doesn’t provide interior locks? I wondered how many times a day its staff had to force their way into rooms, and why.

Look, I'm no expert on the hotel industry, but management has ways to bypass those "security" features. People die in hotel rooms on a regular basis (not because they're in hotel rooms, but just because a lot of people stay in hotels and everyone dies at some point). Also, let's not forget that Luxor is immediately adjacent to, and for a long time shared an owner with, Mandalay Bay, and Mandalay Bay was where the infamous concert shooter stayed.

The dark exterior of Luxor made for a perpetual tint in the room, worsened by the fact that one of the windowpanes was crusted in desert dust. This is probably a great setup for someone with a blistering hangover, but it gave a depressing pallor to the space.

Counterpoint: "I couldn't sleep in because the room was too bright!"

There were two positives. One was the Wi-Fi, which was strong enough to seamlessly maintain a video call.

You're on the 21st floor of the pyramid, and you're trusting the hotel Wee-Fee over your phone's hotspot? Your priorities are backwards.

The other was the toilet, which flushed with the force of a cruise ship lavatory.

I'm glad she counts that as a positive, but the engineer in me wants to know how they get pressures like that at the top. Is there a hidden water tank at the tip of the pyramid? The tip which famously has a giant sun-bright spotlight pointing at the stars?

If you’re eating at the Luxor buffet, this is no doubt a hygienic necessity.

I don't get the love for buffets. I've never eaten at that one. I only eat at buffets when my friends pressure me into it. If there's one thing that Vegas doesn't lack, it's casinos. If there's another, it's restaurants, including ones where you don't have to do half the work.

The article goes into some of the property's history, which is interesting but somewhat irrelevant. Then:

Stripped of its novelty, though, the gloomy interior is now bare and brutalist.

You say that like it's a bad thing. It is not.

With limited food options at the hotel, I ate elsewhere for dinner.

I will grant that, compared to some other Vegas properties, the Luxor has fewer dining options. There's a food court for fast food, a breakfast/lunch diner style area, a deli, a couple of Starsuckses, a tequila bar with food, a sushi place (which is incidentally very good), and the aforementioned buffet. This is "limited" in Vegas, true, but when you consider that all you have to do is ride up an escalator to the passageway between Luxor and Mandalay Bay, which is a mall with various shopping options and, yes, many restaurants, this complaint falls short for me.

That night, afraid of falling asleep without a security lock, I dragged an armchair in front of my door. At $299.32 for two nights, it felt particularly absurd to be redesigning the room for safety.

I don't mean to be rude or anything (okay, I kinda do), but that exhibits a level of paranoia I just can't get behind. Like I said, hotel staff can burst into a room at any time if they have to. And anyone who's not staff shouldn't have a key. Hell, those rickety gambling Wonkavators won't even take you to your floor if you don't use your room key (unless, I suppose, the panel's broken and the wiring's exposed, which, as I said, is one legitimate complaint).

And, I might add: $300 for two nights? What the Egyptian Underworld? I've never paid more than $50 for a night, and it's usually even less because it's comped (yes, this means I spent more at the blackjack tables, but ignore that).

After a fitful night’s sleep, I stumbled down to the lobby Starbucks.

Which one? Seriously, the overabundance of Starsucks is my second-biggest problem with Luxor, after the really quite tiny and understaffed high-stakes table games room. Okay, no, third, after the high-stakes room and their deal with Pepsi (I'm a die-hard Coke guy).

Now, look, I know tastes are different. You want high-end? Plenty of other options in Vegas. You want real cheap? Those options exist, too, usually without the shows and casinos. Luxor may not be "cool," but it's cheap (this author got price-gouged, sorry) and the beds are comfortable, especially if you stay in one of the towers instead of the pointy thing.

Las Vegas properties have a relatively short half-life. Luxor has already passed that point. I fully expect it to go the way of Golden Nugget and other casinos that were the Vegas version of historical-register buildings.

Meanwhile, though, I wasn't about to let this absolute hit-piece stand without comment.
May 4, 2025 at 9:24am
#1088686
As I've noted before, I try to be skeptical of articles that confirm what I believe. Like this one from The Guardian.

    Night owls’ cognitive function ‘superior’ to early risers, study suggests
Research on 26,000 people found those who stay up late scored better on intelligence, reasoning and memory tests


One wonders if the study was conducted by night owls.

The idea that night owls who don’t go to bed until the early hours struggle to get anything done during the day may have to be revised.

Eh, getting anything done is overrated.

It turns out that staying up late could be good for our brain power as research suggests that people who identify as night owls could be sharper than those who go to bed early.

We're also funnier, better looking, and richer.

Seriously, though, the first thing I had to ask myself was this: Are we smarter because we stay up later, or do we stay up later because we're smarter? Or is there some factor that contributes to both, like, maybe, a willingness to go against the grain of society and do one's own thing, regardless of the schedule imposed upon us by cultural pressure?

Or, and I'm still being serious for once, do larks as a group score lower on these traits because some of them are actually owls who were pressured into their schedule by relentless society?

Researchers led by academics at Imperial College London studied data from the UK Biobank study on more than 26,000 people who had completed intelligence, reasoning, reaction time and memory tests.

They then examined how participants’ sleep duration, quality, and chronotype (which determines what time of day we feel most alert and productive) affected brain performance.


Well, now, they could have said up front that sleep duration and quality were also being considered as factors. I think it's pretty well-established that people who get a good and full night's sleep (whether it takes place technically at "night" or not) tend to do better with things like memory and reaction time.

From a purely speculative viewpoint, this brings me back to wondering if some larks aren't getting decent sleep because they should be owls. I can't think of a mechanism by which merely shifting one's sleep hours could help with cognition, unless one's sleep hours already should be other than what they are. In other words, I'd expect to see the reverse result in such a study if it were generally larks being forced into night owl mode, rather than the reality of the other way around.

I imagine we could get some data on that if they just studied people like late-shift workers or bartenders, people who need to follow an owl schedule even if their chronotype is more lark.

Going to bed late is strongly associated with creative types. Artists, authors and musicians known to be night owls include Henri de Toulouse-Lautrec, James Joyce, Kanye West and Lady Gaga.

I also imagine way more musicians are owls just because they, too, can be forced into a stay-up-late schedule for work, whatever their natural chronotype. For writers, it's a different story (pun intended), because creative writers, at least, often set their own schedules. At any rate, I'm glad the article uses "strongly associated with" instead of implying causation in either direction.

...the study found that sleep duration is important for brain function, with those getting between seven and nine hours of shut-eye each night performing best in cognitive tests.

Which I was speculating about just a few minutes ago.

But some experts urged caution in interpreting the findings. Jacqui Hanley, head of research funding at Alzheimer’s Research UK, said: “Without a detailed picture of what is going on in the brain, we don’t know if being a ‘morning’ or ‘evening’ person affects memory and thinking, or if a decline in cognition is causing changes to sleeping patterns.”

Fair point, so my skepticism here is warranted for reasons I didn't even think of.

Jessica Chelekis, a senior lecturer in sustainability global value chains and sleep expert at Brunel University London, said there were “important limitations” to the study as the research did not account for education attainment, or include the time of day the cognitive tests were conducted in the results.

Hang on while I try to interpret "sustainability global value chains," which sounds to me more like a bunch of corporate buzzwords strung together haphazardly. Regardless of the value, or lack thereof, of that word salad, her note about limitations is important to account for.

The main value of the study was challenging stereotypes around sleep, she added.

And I think that's valid (maybe not "the main" but at least "a" value), because us owls are generally seen as lazy and unproductive.

Well, okay, I am lazy and unproductive, but that doesn't mean I'm not an outlier.
May 3, 2025 at 12:48am
#1088599
This article's a few years old, and it's from PC Gamer, a source I don't think I've ever quoted before. No, I don't follow them, even though I am a... wait for it... PC gamer. But this one's not about gaming.

    I just found out what Wi-Fi means and it's sending me
It's almost certainly not what you think.


Wi-Fi is something most of us use every day. It's a miraculous technology that allows us to communicate and share large amounts of digital information to multiple devices without the use of cables.

The great big machine that went BING and fixed my heart problem, that was miraculous technology. Wi-Fi? Just technology.

But what does it mean?

I know I do philosophy in here from time to time, but "what does it mean" is just too big a ques- Oh, you mean, what does "Wi-Fi" mean.

Wireless Fidelity? Wrong. Wireless Finder? Nope. Withering Fireballs? Not even close, my friend.

From now on, in my house, it's Withering Fireballs.

According to MIC, quoting this interview from 2005 by Boing Boing, Wi-Fi doesn't mean any of these things, and in fact actually means basically nothing at all.

So here I am, quoting an article that quotes an article that quotes another (20 year old) article. Sure, I could have just gone to the original source, but where's the fun in that? Then I wouldn't have been able to make jokes about Withering Fireballs.

Here's my take: it means what it means. Every word has a meaning, except maybe for "meaningless."

Rather, Wi-Fi was a name settled on between a group now known as the Wi-Fi alliance and some brand consultants from Interbrand agency.

"Now known as?" One wonders what they were known as before they invented the term Wi-Fi. Let's look it up, shall we? "In 1999, pioneers of a new, higher-speed variant endorsed the IEEE 802.11b specification to form the Wireless Ethernet Compatibility Alliance (WECA)  Open in new Window."

WECA, now, that's a meaningless acronym because they're not called that anymore. I know a few people in Wicca, but that's a different thing.

Ten names were proposed by the brand agency, and in the end the group settled on Wi-Fi, despite the emptiness the name holds.

"Despite?" I'd have guessed "because of." You may not want your brand to connote other meanings. It can lead to confusion. Different story, but that's kind of what happened with .gif. The creator of the Graphics Interchange Format went to his grave insisting that it's pronounced with a soft g, and he was wrong. We're still arguing about it to this day, and .gifs are older than Wi-Fi.

"So we compromised and agreed to include the tag line 'The Standard for Wireless Fidelity' along with the name.

"This was a mistake and only served to confuse people and dilute the brand."


Like I said.

A word that many of us say potentially several times a day is actually straight up marketing nonsense.

Fun fact: in French, it's pronounced "wee-fee," which I find highly amusing. No relation to "oui."

At any rate, every word is made up. Some were made up more recently than others, is all. Some get passed around for a while and then fall out of favor, while others become Official Scrabble Words or whatever (I wonder if I'd get dinged for using "yeet" on a Scrabble board.)

Perhaps sometime in the future, a newer technology will replace what we know today as Wi-Fi. They'll try to give it a different name. We'll just keep calling it Wi-Fi. Maybe we'll even drop the hyphen, which seems to be the pattern for lots of made-up words. And the French will go on pronouncing it differently.
May 2, 2025 at 9:37am
#1088507
Today's article, from Nautilus, is even older than most that grab my attention: first published in, apparently, 2013. That's ancient by internet standards.

     The Mystery of Human Uniqueness
What, exactly, makes our biology special?


Well, technically, each species is unique in its own way. But it's unsurprising that humans would be most interested in the uniquity of humans. (I just made that word up, and I like it.)

If you dropped a dozen human toddlers on a beautiful Polynesian island with shelter and enough to eat, but no computers, no cell phones, and no metal tools, would they grow up to be like humans we recognize or like other primates?

That's a lot of restrictions for one experiment. How about we just drop them off on the island?

(Ethics bars the toddler test.)

Annoying.

Neuroscientists, geneticists, and anthropologists have all given the question of human uniqueness a go, seeking special brain regions, unique genes, and human-specific behaviors, and, instead, finding more evidence for common threads across species.

And yet, evidently, there is something that makes humans different from nonhumans. Not necessarily better, mind you. But if there weren't a unique combination of traits that separates a human from a chimpanzee, or a mushroom from a slime mold, we wouldn't put them in different conceptual boxes.

Meanwhile, the organization of the human brain turns out to be far more complex than many anticipated; almost anything you might have read about brain organization a couple decades ago turns out to be radically oversimplified.

And this is why the date of the article matters: in the twelve years since it came out, I'm pretty confident that even more stuff got learned about the human brain.

To add to the challenge, brain regions don’t wear name tags (“Hello, I am Broca”), and instead their nature and boundaries must be deduced based on a host of factors such as physical landmarks (such as the hills and valleys of folded cortical tissue), the shapes of their neurons, and the ways in which they respond to different chemical stains. Even with the most advanced technologies, it’s a tough business, sort of like trying to tell whether you are in Baltimore or Philadelphia by looking out the window of a moving train.

Yeah, you need to smell the city to know the difference.

Even under a microscope human brain tissue looks an awful lot like primate brain tissue.

That's because we are primates.

When we look at our genomes, the situation is no different. Back in the early 1970s, Mary-Claire King discovered that if you compared human and chimpanzee DNA, they were so similar that they must have been nearly identical to begin with. Now that our genomes have actually been sequenced, we know that King, who worked without the benefit of modern genomic equipment, was essentially right.

"Must have been nearly identical to begin with." Congratulations, you just figured out how evolution proceeds.

Why, if our lives are so different, is our biology so similar? The first part of the answer is obvious: human beings and chimpanzees diverged from a common ancestor only 4 to 7 million years ago. Every bit of long evolutionary history before then—150 million previous years or so as mammals, a few billion as single-celled organisms—is shared.

Which is one reason I rag on evolutionary psychology all the time. Not the only reason, but one of them. Lots of our traits were developed long before we were "us," and even before we diverged from chimps.

If it seems like scientists trying to find the basis of human uniqueness in the brain are looking for a neural needle in a haystack, it’s because they are. Whatever makes us different is built on the bedrock of a billion years of common ancestry.

And yet, we are different.

I look at it like this:

Scotch is primarily water and ethanol. So is rum, gin, vodka, tequila, other whisk(e)ys, etc. But scotch is unique because of the tiny little molecules left after distillation, plus the other tiny little molecules imbued into it by casking and aging. This doesn't make scotch better or superior to other distilled liquors, but it does make it recognizable as such. (I mean, I think it's superior, but I accept that others have different opinions.)

I was unable to find, with a quick internet search, the chemical breakdown of any particular scotch, but, just as I'm different from you, a Bunnahabhain is different from a Glenfiddich, and people like me can tell the difference—even though the percentages of these more complicated chemicals are very, very small.

Point is, it doesn't take much. But trying to find this "needle in a haystack" (how come no one ever thinks to bring a powerful electromagnet?) might be missing the point. And yes, that pun was absolutely, positively, incontrovertibly intended.

Humans will never abandon the quest to prove that they are special.

We've fucking sent robots to explore Mars. I say that's proof enough. But again, "special" doesn't mean "superior." Hell, sometimes it means "slow."
May 1, 2025 at 5:53am
#1088454
Here's a relatively short one (for once) from aeon. It's a few years old, but given the subject, that hardly matters.



And right off the bat, we're getting off to a bad start. Proclaiming that something is "always" (or "never") something just begs someone to find the one counterexample that destroys the argument.

In this case, that someone is me.

You have probably never heard of William Kingdon Clifford. He is not in the pantheon of great philosophers – perhaps because his life was cut short at the age of 33 – but I cannot think of anyone whose ideas are more relevant for our interconnected, AI-driven, digital age.

33? That's barely old enough to have grown a beard, which is a prerequisite for male philosophers. Or at least a mustache.

However, reality has caught up with Clifford. His once seemingly exaggerated claim that ‘it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence’ is no longer hyperbole but a technical reality.

I'll note that this quote is not the same thing as what the headline stated. I guess it's pretty close, but there's a world of difference between "without evidence" and "upon insufficient evidence."

There is, for example, no evidence for a flat Earth beyond the direct evidence of one's senses (assuming one is in Kansas or some other famously non-hilly location), and overwhelming evidence that the Earth is basically round. Okay, not a great example, because flat-Earth believers can be shown to be wrong. But morally wrong? I'm not so sure.

I hold the belief, for a better example, that murder is wrong. There's no objective evidence for this, and moreover, we can argue about what constitutes "murder" as opposed to other kinds of killing, such as assisted suicide or self-defense. And yet, it seems to me that believing that murder is wrong is, on balance, a good thing for people's continued survival, and thus morally right.

His first argument starts with the simple observation that our beliefs influence our actions.

Okay, that seems self-evident enough. The article provides examples, both practical and ethical.

The second argument Clifford provides to back his claim that it is always wrong to believe on insufficient evidence is that poor practices of belief-formation turn us into careless, credulous believers. Clifford puts it nicely: ‘No real belief, however trifling and fragmentary it may seem, is ever truly insignificant; it prepares us to receive more of its like, confirms those which resembled it before, and weakens others; and so gradually it lays a stealthy train in our inmost thoughts, which may someday explode into overt action, and leave its stamp upon our character.’

I've heard variations on this argument before, and it does seem to me to have merit. Once you believe one conspiracy theory, you're primed to believe more. If you accept the concept of alien visitations, you can maybe more easily accept mind-control or vampires. That sort of thing.

Clifford’s third and final argument as to why believing without evidence is morally wrong is that, in our capacity as communicators of belief, we have the moral responsibility not to pollute the well of collective knowledge.

And that's fair enough, too.

So why do I object to the absolutist stance that it's always wrong to believe on insufficient evidence?

Well, like I said up there, I can come up with things that have to be believed on scant-to-no evidence and yet are widely considered "moral." The wrongness of murder is one of those things. That we shouldn't be doing human trials for the pursuit of science without informed consent and other guardrails. That slavery is a bad thing. And more.

I'm not even sure we can justify most morality on the basis of evidence (religious texts are not evidence for some objective morality; they're just evidence that someone wrote them at some point), so to say that belief on the basis of insufficient evidence is morally wrong (whether always or sometimes) itself has little evidence to support it. You have to start by defining what's morally right and wrong, or you just talk yourself in circles.

While Clifford’s final argument rings true, it again seems exaggerated to claim that every little false belief we harbour is a moral affront to common knowledge. Yet reality, once more, is aligning with Clifford, and his words seem prophetic. Today, we truly have a global reservoir of belief into which all of our commitments are being painstakingly added: it’s called Big Data.

Again, though, that's a matter of scale. People have held others to certain standards since prehistory; in the past, this was a small-community thing instead of a global surveillance network.

None of this is meant to imply that we should accept the spread of falsehoods. The problem is that one person's falsehood can be another's basic truth. That makes it even more difficult to separate the truth from the lies, or even to accept the reality of certain facts.

Yes, having evidence to support one's beliefs is a good thing overall. But we're going to end up arguing over what constitutes evidence.
April 30, 2025 at 10:09am
#1088392
I suppose this article from Mental Floss could be described as calling out the fallacy of fallacy. Meta-fallacy, as it were.

    5 Common Terms That Double as Logical Fallacies
Not all wishful thinking involves the future.


In simple terms, a logical fallacy is a flaw in reasoning that weakens your argument; you’ve drawn a conclusion based on illogical, irrelevant, deceptive, or otherwise faulty evidence.

There's a link to a page with several examples of common logical fallacies. I'll reproduce it here.

But the article is more specific, focusing on terms that mean something different in ordinary discussion than they do in formal settings. This is, I think, akin to how the ordinary definition of "theory" is very different from the scientific definition, which leads to quite a bit of confusion sometimes.

There's only five examples. I feel like there could probably be more, but I'm no expert. Here's a couple of highlights:

Begging the question

Begging the question is a fallacy whose premise assumes the conclusion is true without actually proving it.


This is one I see a lot. People use "beg the question" when what they really mean is "There's another obvious question to ask now."

Thing is, it's not wrong. The meaning should be able to be deduced from context. But I think it helps to know that there are at least two meanings, as with "theory," to help avoid confusion.

This second sense is so at odds with its Aristotelian source material that some people think it’s just plain wrong—but it’s by far the most common way we use the phrase today.

On the other talon, the article just pointed out the bandwagon fallacy, which is that if enough people think something, then it must be right. I think this is an example of irony; that's still a little fuzzy to me.

But here's the one that gets misused a lot, in my view:

Slippery slope

A slippery slope fallacy involves arguing against an initial action on the basis that it will lead to a succession of undesirable consequences—but without any significant evidence to support that the series of events will actually occur...

If you describe something as a slippery slope in any casual context, though, you probably aren’t implying that it’s a fallacious argument. More likely, you mean an action truly will lead you down a bad road.


I'm not sure these are incompatible definitions. I am pretty sure that people who like to invoke the "slippery slope" metaphor in politics are doing it to avoid compromise.

Wishful thinking

As a logical fallacy, wishful thinking doesn’t necessarily involve the future, either. In fact, it frequently involves the present: Instead of “I want it to come true, so it will come true,” it’s often a case of “It ought to be true, so it is true.” It’s even sometimes called the “ought-is fallacy” (not to be confused with the is-ought fallacy, wherein you argue that something ought to keep being a certain way because it already is that way).


Yeah, well, I want to understand this stuff better, so I will understand this stuff better.
April 29, 2025 at 8:32am
#1088329
I don't believe in guilty pleasures. If you like something, revel in it. If you don't, don't do it. The closest I come is a website I know I shouldn't look at, but sometimes, I just can't help myself: Lifehacker.

    How to Tell If Someone Is Actually Rich
Plenty of people do "fake rich" to impress, intimidate, or sell something to you.


Yes, that's because it works often enough for that to be a viable strategy.

There was a time when rich people were pretty easy to spot. Everything about them, from their clothing to their leisure activities, was more or less designed to exclude the rubes.

Yeah, I don't know about that. People have been conning other people for as long as there have been people. Probably.

But things have changed in recent decades, and the signs of wealth have become harder to detect. This is due in part to the rise of the Influencer, the person on social media who curates their life in photos, reels, and emojis to convey an image of wealth and luxury—even though most of them are broke and just faking it, usually so they can sell you something.

Ah, yes, that's why I saved this article so long ago: I've ragged on the concept of influenzas before.

In a world where everyone is wearing jeans and T-shirts, and where a generous credit limit will allow you to live the high life (for a while, at least), how can you tell fake rich from real rich?

Well, for starters, "real rich" has nothing to prove, doesn't need your approval, and often fakes a lower financial status in order to avoid beggars and golddiggers.

Rich people don’t flash their wealth for one simple reason: They don’t think about it.

Yeah, no, I seriously doubt it's only that "one simple reason." I spitballed three of them in the previous paragraph.

If they do think about money, it’s often in terms of a reluctance to let people know how much money they have, and the steps they can take to hide it.

Fair point.

Truly wealthy people have more time, and this translates into knowledge in terms of skill sets and life experience. They know how to ski or ride a horse (or play polo); they know the protocols of private jets; they’re familiar with foreign cities in a way that goes beyond tourism.

Or maybe they're just very, very good at video games and avoiding anything resembling Real Work.

Fashion and luxury brands exist to advertise to the world that you have the money to pay for them, and that’s the last thing really rich folks need or want to do.

It doesn't make a lot of sense to me why people think luxury brands signal that one is rich in the first place. If I have $1000 and spend $900 of it on a Prada briefcase (or whatever; hell if I know how much one costs or if it even exists), then I have $100: enough for a good meal at a decent steakhouse. But if I buy a cheap-ass leather briefcase on Amazon for $100, then I have $900: enough for a good meal with a kick-ass bottle of scotch after.

Frankly, I'd rather have the steak and scotch.

There is one exception: quality. There could be a good reason to spend more on long-lasting, comfortable clothing than on flash-in-the-pan fashion or cheap knockoffs.

But the truly wealthy are disconnected from money to the point where everything seems free, because they often don’t directly pay for anything—and if they do, they don’t worry about the price of things. Paying for specific things or experiences just doesn’t really impact their day-to-day existence, which is why they can be stymied by questions like “How much does a gallon of milk cost?”

Yeah, well, that may not be the best example. I'm not what you'd call wealthy: no private jets, no servants, modest house (it's fully paid for, which is wealth enough for me). But I couldn't tell you off the cuff what a gallon of milk costs, because I rarely buy the stuff, and when I do, it's like a quart. And you can't just multiply that cost by 4 to get the price of a gallon. Besides, truly wealthy people import it from Europe and it's labeled in liters. Right? They do that, right?

I do know the approximate current price of a banana, though; those, I buy fairly regularly. It's less than a quarter. Someone's using low-wage labor there.

It’s not a hard-and-fast rule, but often a surefire sign that someone is real rich is a lack of luggage.

And this last bit doesn't make a lot of sense to me for a couple of reasons. First, in order to notice this, you, too, have to be traveling. Second, "real rich" doesn't travel commercial; even first class is beneath them. They board their private plane (even a rented one) from a private terminal, so you won't see them doing it.

But all of this simply raises the question of why one would need to spot "real rich," except for maybe just curiosity. You think they'll give you money or opportunity? They won't. And if they do, they're probably not "real rich," but trying to scam you.

I say don't worry about it, and go about your life. Unless that life includes faking "rich" to get something out of people.
April 28, 2025 at 9:22am
#1088272
I suppose one way to get clicks is to proclaim right off the bat that some famous person was wrong, like this eight-year-old article from aeon does.



Well, if that's the claim, why not also assert that Thoreau was wrong? Because he definitely was.

According to Ubuntu philosophy, which has its origins in ancient Africa, a newborn baby is not a person.

Yeah, it's only a person before it's born, after which no one cares. Oh, wait, that's modern Alabama, not ancient Africa.

People are born without ‘ena’, or selfhood, and instead must acquire it through interactions and experiences over time. So the ‘self’/‘other’ distinction that’s axiomatic in Western philosophy is much blurrier in Ubuntu thought.

I'm not arguing against this philosophy, but "the ‘self’/‘other’ distinction" goes completely away once you realize you're the only actual consciousness. But solipsism is difficult to defend, because everyone you argue with about it is really you.

Who I am depends on many ‘others’: my family, my friends, my culture, my work colleagues.

Why are you limiting that to people? Your experiences are shaped by nonhuman animals, plants, and your inanimate environment too. Also shrooms. Mostly shrooms, if you're doing philosophy like this.

Yet the notion of a fluctuating and ambiguous self can be disconcerting. We can chalk up this discomfort, in large part, to René Descartes. The 17th-century French philosopher believed that a human being was essentially self-contained and self-sufficient; an inherently rational, mind-bound subject, who ought to encounter the world outside her head with skepticism.

And I do not yet see a contradiction between these philosophies. We can only perceive the world around us through our senses, which are processed by our brains.

There's a bit about the background of Descartes' philosophy, but I ended up viewing the narrative with skepticism, mainly because of this later example provided by the author:

In the 1960s, the American psychologists John Darley and Bibb Latané became interested in the murder of Kitty Genovese, a young white woman who had been stabbed and assaulted on her way home one night in New York. Multiple people had witnessed the crime but none stepped in to prevent it. Darley and Latané designed a series of experiments in which they simulated a crisis, such as an epileptic fit, or smoke billowing in from the next room, to observe what people did. They were the first to identify the so-called ‘bystander effect’, in which people seem to respond more slowly to someone in distress if others are around.

See, the Kitty Genovese thing was a prime example of sensationalist reporting. It turns out that "multiple people" did not witness the crime; that was overblown to sell papers to a shocked and disgusted public (an example of the pre-internet version of clickbait). Also, it's somewhat contradictory to simultaneously claim, as this article does, that the "self" is dependent upon one's environment while touting the results of a scientific study in a controlled setting; it turns out that the Bystander Effect may not be as robust in real-world situations. There are also cultural variations involved.

Point being, if you're going to write an article that hinges, in part, on the truth of the Kitty Genovese incident, and you're unaware that it was debunked years ago, one has to wonder what else might be wrong or misleading in the article.

Is there a way of reconciling these two accounts of the self – the relational, world-embracing version, and the autonomous, inward one? The 20th-century Russian philosopher Mikhail Bakhtin believed that the answer lay in dialogue. We need others in order to evaluate our own existence and construct a coherent self-image.

Or, you know, "others" are an illusion projected by your own subconscious.

Okay, no, I'm not actually a solipsist. And I do accept that we're only who we are in relation to our environment. I mean, that's the "nurture" part in the old nature / nurture question. There will likely continue to be arguments about how much of each makes us "us," but I think people who assume it's either one or the other are pretty rare nowadays.

But for the most part, scientific psychology is only too willing to adopt individualistic Cartesian assumptions that cut away the webbing that ties the self to others. There is a Zulu phrase, ‘Umuntu ngumuntu ngabantu’, which means ‘A person is a person through other persons.’ This is a richer and better account, I think, than ‘I think, therefore I am.’

And yet, as I said, I don't see an inherent contradiction here. You know you exist because you think. But that doesn't rule out the existence of the world around us or the beings that inhabit it, or their effect on us. "Cogito ergo sum" is the beginning of philosophy, not its end point, in my view anyway. Perhaps I'm just not enough of a philosopher to understand what the author is trying to say; or, maybe, I'm simply a bystander.
April 27, 2025 at 9:22am
#1088205
I've been wanting to get to this one for some time now, but it's just evaded the random numbers like someone who owes me money avoids me.



"No."

That's how.

I'm going to let "financial therapist" slide for now.

Lending money to a friend or family member can put a strain on the relationship if you're not careful.

Wow, Einstein, you really have a firm grasp of the obvious.

Nearly a quarter of people who lent money or covered a group expense with the expectation of being paid back say doing so negatively impacted their relationship with the other party, Bankrate's 2024 financial taboos survey found.

Frankly, I'm surprised it's only about a quarter.

"Decide if you can afford to give them the money and if you can't, you may not really be in a position to help," Aja Evans, a board-certified therapist who specializes in financial therapy, tells CNBC Make It.

Okay, no, the implication there is that if you can afford to give them the money, you should do it. This is also bullshit. Sometimes, giving people money enables them or helps them avoid the consequences of their own actions, and you shouldn't do it.

That's not to say having that conversation is easy, Evans says. Often, close friends or family members may be aware of the things you're spending money on, like clothes or vacations, and make judgements about what you can or can't afford.

Which demonstrates a lousy understanding of how money works. "How can you be poor? You just bought a big house and a Porsche." "Yes, and after spending all that money, I'm broke." Anyone who doesn't understand that "big purchase means you have less money" needs financial therapy, not a bail-out.

"Just because you have it in your account doesn't mean you can give it," she says. "Especially if you know other bills are coming."

Or even if you know you're going to need it when you're old and unable to get any more from work or from skinflint family members.

Directly saying no when a friend or family member asks for money can be hard, especially if you've loaned them money in the past.

I don't care how emotionally hard it is. "No" is a very short, simple, one-syllable word in most languages, including English. It's not hard to pronounce.

Now, I don't mean that we shouldn't take care of our friends and family. And you might fear losing them if you don't acquiesce to their pleas for pelf. But, honestly, family or not, friends or not, some people are no great loss, and that includes perpetual mooches.

The rest of your friends and family might just respect you more for standing up to them instead of giving in.

And to reiterate, it's not hard: you just open your mouth and say: "No."
April 26, 2025 at 10:54am
#1088062
The article that popped up today is from aeon, and fairly long. But the headline irked me, and I might have a couple of comments on the text, too.

    The commitment to collaborate
Though natural selection favours self-interest, humans are extraordinarily good at cooperating with one another. Why?


"Why?" Well, because "natural selection" doesn't favor "self-interest." That's a pernicious falsehood perpetrated by social Darwinists and Libertarians, in support of an individualist agenda.

No, the driving force in humans and many other species isn't competition, but cooperation. I know I've said this before. Even some things that look like competition still involve cooperation, like a chess match or a sportsball game: at the very least, you agree to follow the same set of rules, and if you don't, you get called out for cheating.

Competition is also a factor, of course, but cooperation builds societies, which offer mutual protection.

Anyway, the article, or at least some selected excerpts from it.

The evolution of cooperation has been of interest to biologists, philosophers and anthropologists for centuries. If natural selection favours self-interest, why would we cooperate at an apparent cost to ourselves?

Like I said, questionable premise, but still a reasonable question worthy of study.

If I can reduce the cost of cooperating by deception – pretending to pull my weight in the group project or in the rescue mission – and still reap the benefits, why would I not do so?

I don't think that's such a profound conundrum. Lots of people do employ deception to reap benefits. Hell, some nonhuman animals do, too (my cats, for example). If they're caught, though, those benefits tend to disappear.

The article proceeds to get into an evolutionary muddle, which, well, I don't even know where to start picking it apart. Maybe I'll just note that at least part of the discussion rests on the old "men hunt / women forage" trope, which has been at least partially debunked.

There's a lot more to it, and I fear a large part of it is pure speculation.

I have spelled out a coevolutionary link between human cooperation and commitment.

No, you haven't. You have made a hypothesis, and supported it to some extent.

But the author does acknowledge a thing I've been saying about evolutionary hypothesizing:

But how can we tell if my account is true or not? One might think that this kind of explanation is rather speculative and unconstrained – it is storytelling. An evolutionary explanation of this sort generally begins with a description of the ancestral state and a purported end state that we want to explain. Here, the end state is modern human cooperation. The explanation given takes a narrative form – the aim is to provide a synthesised description of an evolutionary process by appealing to incremental changes we could have made in response to social or ecological pressures in our environment.

All these "we do x today because our ancestors needed to learn to do it to survive" narratives strike me as just-so stories. Unless there's evidence, we can narrate all we want, and it'll just be a story. To back it up, we need more than just guesswork. Just asserting things like "men hunt / women forage" is an attempt to justify current social roles with an evolutionary narrative, but many of these guesses fall apart on examination.

My hypothesis is that this relationship between expanding cooperation and new forms of commitments is a uniquely human phenomenon and helps to explain the evolution of distinctively human prosociality.

And I'll give credit to the author here: the "hypothesis" aspect is acknowledged. I just didn't want people walking away thinking this was the One Truth about human cooperation. What we know is that cooperation and collaboration are what got us into space, for example, though there was certainly a bit of competition involved, too.
April 25, 2025 at 9:13am
#1087977
I don't know when Popular Mechanics started doing science articles, but this isn't the first one I've shared from them.



And did it feature scantily-clad women each month?

Carvings on a 12,000-year-old monument in Turkey appear to mark solar days and years, making it possibly the oldest solar calendar in ancient civilization.

Hm, maybe it didn't have months at all, let alone chicks in fur bikinis.

Also, I'll note: oldest surviving calendar. I'm fairly certain no ancient civilization just said, "You know what? Let's move these giant rocks here so they make pretty shadows depending on the time of year." No, they probably started with wood poles or something easier, then decided to make them "permanent" by creating a stone one.

An ancient monument discovered in Turkey may just be an ancient monument. But, if its markings are what experts think they are, it might be the world’s oldest solar calendar.

How hard is it to determine if the markings are pretty women or not?

By analyzing the symbols carved onto pillars, the team believes that every “V” could represent a single day, given that one pillar featured 365 days. And among those, the summer solstice in particular was highlighted with a V worn around the neck of a bird-like beast meant to represent the summer solstice constellation during that time.

This paragraph is agonizingly ambiguous. I've heard before that due to things like precession, the background stars associated with any given season change over that order of time. I also know that different cultures have, and still have, different ideas of what stars formed what pretty pictures in the sky. What's maddening here is: does the "summer solstice constellation" refer to the constellation the sun is in at the solstice, or the one at zenith at midnight (which would be the opposite side of the sky)? And while we're at it, how do they know that this particular culture interpreted one of the constellations as a "bird-like beast?"

The calendar’s preoccupation with day, night, and seasonal changes may have sparked anew with a world-changing comet strike, one that experts believed occurred in roughly 10,850 B.C. and helped contribute to a mini-ice age that eliminated numerous species.

Even early, pre-agricultural humans could predict things like solstices and equinoxes. A comet strike is notoriously unpredictable without modern technology. I have to wonder if sites like (probably) this one were an attempt to make sense of the random nature of certain events, by controlling the things that they thought they could. Pure speculation, really.

“It appears the inhabitants of Gobekli Tepe were keen observers of the sky,” Martin Sweatman, lead study researcher from the University of Edinburgh’s School of Engineering, said in a statement.

This should not be surprising. Lots of different cultures liked to create calendars. Stonehenge is probably the most famous, but there's also the Mayan calendar, and who knows how many that weren't preserved in stone?

The carvings also track cycles for both the Moon and the Sun, which pre-date other calendar finds of this type by “many millennia,” the group wrote.

Oh, so the lunar cycles were also part of this. The stuff up toward the beginning made it sound purely solar.

The researchers believe that the temple carvings show the ancient civilization was recording dates precisely, noting how the movement of constellations across the sky differed based on the time of the year. This would be 10,000 years before Hipparchus of ancient Greece documented the wobble in the Earth’s axis in 150 BC, making this newfound calendar well ahead of its time.

This makes little sense to me, but I don't know if it's me, or poor interpretation on the author's part. What this purported calendar purportedly tracks is yearly changes, not the precession that takes thousands of years to complete one cycle.

But don't get me wrong: a calendar site that old would be a major development in our understanding of historical astronomy. If their findings are confirmed, anyway. Archaeologists have a strong tendency to see what they want to see, and to call any ancient artifact whose use they don't understand a "ritual tool."

Even if the calendar hypothesis is wrong, which I'm not saying it is, it's still a stone thing from 12,000 years ago, and that's pretty damn cool by itself.
April 24, 2025 at 2:08am
#1087899
After doing a bit on color earlier this week (see "Blue My Mind"), I found this related 2022 article/interview from Knowable Magazine.

    Color is in the eye, and brain, of the beholder
The way we see and describe hues varies widely for many reasons: from our individual eye structure, to how our brain processes images, to what language we speak, or even if we live near a body of water


Now, I don't have a whole lot to say about it; I consider this to be more of a follow-up. I didn't expect it to follow up so soon, but such are the perils of random number generators.

Some people are color-blind. Others may have color superpowers.

The former is linked to genes on the X chromosome, which is why it mostly shows up in people with only one X; the latter seems to require an XX pair. Make of that what you will; I call it semirandom genetic variation. Like how the gene complex for calico cats is linked to the XX. For whatever it's worth, I seem to have perfectly standard color vision, but I have two friends, both male, both with Irish ancestry, who are colorblind to different degrees.

That's not science, by the way. That's an observation of a couple of data points.

To learn more about individual differences in color vision, Knowable Magazine spoke with visual neuroscientist Jenny Bosten of the University of Sussex...

And the rest of the article is an edited transcript of that interview.

As I said, I don't have much to say, for once. So I'm not going to reproduce parts of the interview. I will note that they do make mention of The Dress, which I also covered in an entry fairly recently, but that was in the old blog.

There's also some reinforcement of what I said before: that the spectrum is, well, a spectrum, with way more than seven colors. Some say there are millions. I'm pretty sure the actual number is finite, at any rate. To reiterate, the "seven colors" thing can be traced back to Newton, who associated them with other mystical sevens like the Sun and Moon plus five visible planets.

And at the end (spoiler alert), it reiterates the philosophical question (I say philosophical, because we don't have a scientific means of testing this yet) that Kid Me posed: do I see the same colors that you do? I don't know. I also don't know that it matters except in terms of satisfying one's curiosity.

So, maybe tomorrow I'll have more to say. We'll see. And we'll see in different colors.
April 23, 2025 at 8:42am
#1087848
Today, we're taking a look at an age-old conundrum about eggs. No, not whether or not they preceded chickens (from an evolutionary perspective, they did), or why they're still so expensive, but, well, I'll let The Conversation explain it:



I'm tempted to answer "no." Most question headlines are answered "no."

You might have heard that eating too many eggs will cause high cholesterol levels, leading to poor health.

Researchers have examined the science behind this myth again, and again, and again – largely debunking the claim.


And yet, it persists, because people tend to remember only the first word on a subject, not its later retractions.

A new study suggests that, among older adults, eating eggs supports heart health and even reduces the risk of premature death.

This. This is why people don't trust nutrition science.

The article describes the study's methods. Then:

The research was published in a peer-reviewed journal, meaning this work has been examined by other researchers and is considered reputable and defensible.

At least, that's how it's supposed to work. Sometimes, though, things slip through.

Researchers received funding from a variety of national funding grants in the United States and Australia, with no links to commercial sources.

I'm glad they included this line, because funding can induce bias, even unconscious bias: the desire to keep the funding coming nudges researchers toward producing the results the funder wants. All those studies about how great dark chocolate is for you? Well, they might not be wrong, but they're suspicious because they were paid for by Willy Wonka.

Due to the type of study, it only explored egg consumption patterns, which participants self-reported. The researchers didn’t collect data about the type of egg (for example, chicken or quail), how it was prepared, or how many eggs are consumed when eaten.

There may be other confounding factors, too. It may be a correlation-not-causation thing: what if the high-level egg-eaters also had other dietary habits that are known to be heart-protective, like eating rabbit food?

The article notes other limitations of the study, then goes off on some other scientific research, apparently unrelated to eggs. This may be an editing issue.

Here's the important part, though:

The fuss over eggs comes down to their cholesterol content and how it relates to heart disease risk. A large egg yolk contains approximately 275 mg of cholesterol — near the recommended daily limit of cholesterol intake.

In the past, medical professionals warned that eating cholesterol-rich foods such as eggs could raise blood cholesterol and increase heart disease risk.

But newer research shows the body doesn’t absorb dietary cholesterol well, so dietary cholesterol doesn’t have a major effect on blood cholesterol levels.


Unlike the egg thing, the relationship between blood cholesterol and heart disease risk is well-founded. What wasn't well-founded, apparently, was the idea that eating foods that naturally contain cholesterol, like eggs, meaningfully raises blood cholesterol levels.

While the science is still out, there’s no reason to limit egg intake unless specifically advised by a recognised health professional such as an accredited practising dietitian.

And I say we're already neurotic enough about food. Worrying about it so much can't possibly be good for you.
April 22, 2025 at 10:21am
#1087788
In keeping with the spirit of yesterday's entry ("Avoid snakebites by not going outside,") here are some tips from Lifehacker to mess with.

    10 of the Most Ridiculous Fees (and How to Avoid Paying Them)
I didn't realize I needed generational wealth to check my bag at the airport.


Checked bag fees? The only reason they exist is because people insist on the cheapest possible flight, which they determine before they discover that there are about fifty add-on fees in addition to the base cost, and checked bags are but one of them. Eliminate checked bag fees, and airlines will all just raise their prices the same amount. (They also have the added bonus of causing people to fight for overhead bin space for... you know... carrion.)

Processing fees. Service fees. Hidden fees. It feels like most companies and services these days have found countless ways to sneakily squeeze money out of me.

And then, on top of that, they expect you'll pay their employees for them by tipping them. Not to mention begging for money for questionable "charities."

Even when it’s only a few dollars here and there, it’s the principle of the thing: Why am I being charged in the first place?

Because they want your money. And because they can.

Concert “service” fees

If you’ve tried to buy a concert ticket in recent history, you’ve been slapped with a shocking string of processing, commission, or transaction fees.


How to avoid: stop going to concerts, like I did. Mostly because, in some cases, the add-on fees more than doubled the price of the ticket. But also because I refuse to go to any venue named after a company, which most of them are, these days.

This is a somewhat different situation than airline add-on fees. It's not like there are two or more ticket merchants trying to sell passes to the same concert at the same venue (usually). There's no "competition" reason like with airlines looking to appear to have the lowest price. No, they do it because once you've decided $200 is a perfectly reasonable price for nosebleed seats at a rock concert, the sunk cost fallacy takes over and you end up paying another $300 for processing, convenience, and Ferengi fees.

Airbnb fees

Next to the cost of concert tickets, Airbnb has gained notoriety for its bullshit fees. I've found the growing consensus is that Airbnb simply isn’t worth its exorbitant service fees.


Solution: avoid AirBnB, like I do. Maybe at first it made sense, but now they're having a measurable negative impact on the housing market, among other negative social consequences. Hotels can have sneaky fees too, but they tend to be lower. And from what I've heard, with AirBnB, you generally have to do your own cleaning, which is anathema to the whole point of going anywhere. I don't clean my own house; why should I clean someone else's? (I don't live in filth; I hire a service.)

Seat selection and airline fees

Of all the bullshit airline fees these days, “seat selection” might be the shadiest.

I happen to disagree. Those are disclosed up front, during seat selection, and if you don't like it, feel free to cheap out in a middle seat.

ATM fees

When you need cash fast, ATM fees are tough to avoid.


Are they? I haven't paid an ATM fee in decades, unless you count the foreign currency exchange fee I paid exactly once, in Paris. Mostly, you just have to have the right bank.

Car dealership fees

If there’s someone you can trust to be honest and no-bullshit, it’s a car salesman, right?


Oh, a funny person. Hey look, everybody, it's a comedian!

Seriously, though, I've bought two cars in the past 20 years, so I'm no expert here. I can't say "Don't buy a car," though, because most of us either need one or would be seriously inconvenienced without one. I do wish I could just order the car I want online, like I do with computers, rather than deal with high-pressure sales tactics and end up with something other than ideal for me.

Gym initiation fees

When you join a new gym, your first bill might come with an “initiation fee.”


There are other ways to get exercise without going to the gym. Turns out that once you're giving them money monthly, it stops being an incentive for you to go.

Credit reports

Make a habit of checking your free credit score from sites like Credit Karma or Experian.


I will admit to having a Credit Karma account. It's free, and they're up front about the reason for it being free: they advertise credit and banking services. Even without that, though, there's probably no reason to check your credit report except maybe once a year, or if you suspect identity theft. Maybe if you're about to apply for a loan, but that can roll into the "once a year" thing.

Overdraft fees

“Overdraft protection” sounds like a positive thing to stop you from taking out more money than you have in your account. However, when the bank offers overdraft protection, they charge quite the fee for it.

Sigh. There's a really easy way to avoid these, too: don't fucking overdraw your account. I know, I know, it takes work, and maybe math. If you're not willing to do that work, then don't complain about overdraft fees. Granted, emergencies do happen. And, again: shop around for a better bank.

Bank statement fees

A paper bank statement can come with a wild $2 or $3 monthly fee.


Wow, whoever put this article together sucks ass at picking banks.

Online shipping fees

As much as I'd like to support in-person brick and mortar stores, sometimes Amazon one-day shipping is the only option I have. And with shipping costs these days, I know I'm guilty of buying more products just to qualify for free shipping—the classic "spend to save" trap.


Well, there's your problem right there. Consider buying less shit. Fewer shit? Whatever.

Bottom line (complete with service fees) is, companies get away with hidden and extra fees because we let them. There are some things it's worth paying extra for, like, for me, streaming without commercials, or an internet connection that doesn't require me to pay Comcast a dime. Also, shoes. Don't skimp on shoes. But this obsession with always getting the cheapest everything can get more expensive and time-consuming in the long run.
April 21, 2025 at 9:52am
#1087704
People make things harder than they have to, sometimes. Case in point from Outdoor Life:

    7 Ways Not To Die From A Rattlesnake Bite
Longtime Outdoor Life contributor Michael Hanback is back with tips for avoiding snakebites


1. Don't go where rattlesnakes are.
2. Stay indoors and maybe stay on sidewalks if you must leave the house.
3. Don't go outside.
4. Play in traffic.
5. Avoid the outdoors.
6. Stay home (and don't let snakes in no matter how much they beg).
7. Definitely don't visit Australia.

Technically, if you do all these things, your chance of dying from a snake bite is low, but never zero. You will definitely die from something else, though.

Now, here's where I tell you that the article does include photos of some very cute (but potentially deadly) nope ropes, though other people may not find them as pretty as I do. Yes, I like snakes. From a distance, unless I know they're nonvenomous.

I’ve seen a few snakes here and there, but I’ve never even had a whiff of a close call with a venomous one.

Yeah, you have. You really have. You just didn't know it.

That is, until one day last June on a remote stretch of the Appalachian Trail in Western Virginia.

Which is pretty much the only place you'll find a rattlesnake (or it will find you) in Virginia. Growing up, we had copperheads and cottonmouths to deal with, but also the nonvenomous and very useful blacksnake. Well, we called them blacksnakes; their more official name is northern black racer, which is a damn cool name. The really, truly official name is Coluber constrictor constrictor, which is also cool and would make a great band name for anything but a Whitesnake cover band. My dad kept one around (much to the chagrin of my mom) and named him Goldberg "because if I named him Nixon, nobody would trust him."

Someone had run over his tail at some point (I'm still unclear as to where a snake's body ends and their tail begins), but Goldberg got around just fine.

Anyway, back to rattlesnakes.

I saw a flash in the rocks beside the footpath and peered down at a timber rattler as thick as a forearm, coiled six inches from my right boot! With glinting hints of yellow and green in the midday sun, it was both beautiful and terrifying.

Well, I'm glad I'm not the only one who can appreciate a danger noodle. Difference is, I don't go out looking for them.

Wherever you roam, your chances of a potentially dangerous rattlesnake encounter are small.

They're even smaller if you stay in the car.

1. Know Where Rattlesnakes Live

...and don't go there.

2. Know When Snakes Are Active

When it's warm. They're reptiles.

3. Gear Up Smart

I recommend full plate armor.

A guy at REI told me that in a tight situation, a thick wool sock could turn fangs, though Heaven forbid you or I ever have to find out!

Yeah, I'm not going to bet my life on what one guy tells me.

4. Watch Your Step

You know that Gadsden flag the crazies have co-opted? With the "Don't tread on me" slogan? Yeah, don't step on snakes.

5. Watch Your Reach

After ankles and legs, most snake bites occur on hands and arms.

That fear you have if you're a guy and you need to relieve yourself off-trail? Yeah, that's not going to happen. One might get your legs in that situation, though, and then you're yelling and everyone sees you with your dick hanging out, writhing around and screaming.

6. Stay Back!

Better yet, don't go where snakes are.

7. Don’t Panic

Ah, yes, useful advice in any situation, especially interstellar hitchhiking.

So, yeah, the article provides way more practical advice if you simply must go hiking in the woods for some reason, like you're hiding from the cops or something. But there are plenty of other reasons to stay indoors; snake bites are scary but fairly uncommon compared to, say, ticks, spiders, scorpions, and all manner of other arachnids. Or even having a branch fall on you. More common than quicksand, though, which TV shows when I was a kid convinced me was all over the place outside.

Meanwhile, I'll just stay on my deck and avoid the most dangerous thing that climbs up onto it with me: opossums. Which aren't even as cute as snakes.
April 20, 2025 at 8:48am
#1087645
I'm not overly familiar with the source of the article I'm featuring today. It's from Open Culture, which bills itself as having "the best free cultural & educational media," and already I distrust it because if you're really the best, you don't need to self-promote as such.

But we're going to look at this article anyway.



I'd heard this assertion before, but I don't think I've ever blogged about it.

The article, incidentally, contains a video with a similar title. I didn't watch it. I don't know if it covers the same material as the writing. I prefer writing over videos.

In an old Zen story, two monks argue over whether a flag is waving or whether it’s the wind that waves. Their teacher strikes them both dumb, saying, “It is your mind that moves.”

That reminds me of how an optimist and a pessimist argue over whether a glass is half-full or half-empty, when, clearly, the glass is twice as big as it needs to be.

Such observations bring us to another koan-like question: if a language lacks a word for something like the color blue, can the thing be said to exist in the speaker’s mind?

It's a fair question, I'll admit, but it seems to me that lots of things exist that we don't have words for.

We can dispense with the idea that there’s a color blue “out there” in the world. Color is a collaboration between light, the eye, the optic nerve, and the visual cortex. And yet, claims Maria Michela Sassi, professor of ancient philosophy at Pisa University, “every culture has its own way of naming and categorizing colours.”

Can we really dispense with that idea, though? Just as sound is a pressure wave in a medium such as air, water, or something solid, regardless of whether there is an ear around to hear it (so much for the "tree falls in the forest" Zen koan), color is a particular wavelength on the electromagnetic spectrum. I'd argue that insofar as color exists at all, being not a "thing" but a property of a thing, that wavelength that we agree on as "blue" exists, too. How we perceive that color is, to me, a separate issue.

The most famous example comes from the ancient Greeks. Since the 18th century, scholars have pointed out that in the thousands of words in the Iliad and Odyssey, Homer never once describes anything — sea, sky, you name it — as blue.

I'd heard that, of course, but I still have questions, like: How do we know they didn't use the word for blue if they didn't have a word for blue? And, more importantly: Why are we trusting the color descriptions of a blind poet?

It was once thought cultural color differences had to do with stages of evolutionary development — that more “primitive” peoples had a less developed biological visual sense.

Yeah, evolution doesn't really work like that. No matter how primitive the culture seems to our technological senses, people are generally the same, genetically speaking, all over.

“If you think about it,” writes Business Insider’s Kevin Loria, “blue doesn’t appear much in nature — there aren’t blue animals, blue eyes are rare, and blue flowers are mostly human creations.”

Well, yeah, but unless you live in London or Seattle, there's this big thing-that-isn't-a-thing called the daytime sky, which we describe as blue. It's pretty hard to miss unless you live in a cave, which even cavepeople didn't do all the time.

The color blue took hold in modern times with the development of substances that could act as blue pigment, like Prussian Blue, invented in Berlin, manufactured in China and exported to Japan in the 19th century.

I did a blog entry a while back on that particular pigment; as I recall, it featured in Japanese ("The Wave") and European ("Starry Night") art. But I have my doubts about that being the origin of our shared perception of the color blue. Newton did a lot of study of the color spectrum, breaking up sunlight using a prism like the proto-Dark Side of the Moon cover art, and he included "blue" as a color. I should note, however, that Newton seems to have chosen a seven-color scheme (the actual spectrum covers a lot more than seven shades) because of the mystical association with the number seven: days of the week, visible heavenly bodies that move (sun, moon, and five planets).

One modern researcher, Jules Davidoff, found this to be true in experiments with a Namibian people whose language makes no distinction between blue and green (but names many finer shades of green than English does). “Davidoff says that without a word for a colour,” Loria writes, “without a way of identifying it as different, it’s much harder for us to notice what’s unique about it.”

I kind of agree with that, though. There's the old story about how the Inuit have many different words for snow; it's probably false (not least because there are several languages and dialects involved), but it does speak to the larger truth that we name the things we find to be important in our lives.

It's an interesting line of inquiry, though. When I was a kid, I remember making an offhand comment to a friend like, "How do I know that the colors I see are the same as the colors you see? Like, we can both agree that this grass is green, but if I could see through your eyes, would I see the same color as I do now?" Those weren't my exact words, which I don't remember, but whatever I said, he understood what I was getting at. Much later, after the internet became a thing, someone echoed my childhood Zen koan and got a bunch of mind-blown reactions.

As far as I know, we can't know the answer to that, not yet. Perhaps someday.


© Copyright 2025 Robert Waltz (UN: cathartes02 at Writing.Com). All rights reserved.
Robert Waltz has granted Writing.Com, its affiliates and its syndicates non-exclusive rights to display this work.
