Items to fit into your overhead compartment |
Earlier this month, I did an entry on smarts: "We Are Very Smart" ![]() Well, I didn't find the exact quote, nor was I expecting to because I wasn't sure about its wording. But after I did that entry, I found the following article, an older one from Medium, and it describes a similar idea. The famous Nobel winning physicist Richard Feynman understood the difference between “knowing something” and “knowing the name of something” and it’s one of the most important reasons for his success. Well, that, and being really very insanely smart. Feynman stumbled upon a formula for learning that ensured he understood something better than everyone else. "Be very smart?" There are four steps to the Feynman Technique. Actually, there are five. The zeroth one is: you have to want to learn. Without that, you'll just be staring out the window, chin in palm, sighing, after asking "when are we ever going to use this in real life?" Step 1: Teach it to a child Write out what you know about the subject as if you were teaching it to a child. This, I think, is from where I got the mangled quote above. A lot of people tend to use complicated vocabulary and jargon to mask when they don’t understand something. Yes, a lot of people do that, especially in the business world, which is all about optimizing synergies for sustainable innovative solutions to resource allocation issues (for example). But other people use it because it expresses nuance in a way that simpler near-synonyms cannot. While I agree that there's almost never a reason to use "utilize" instead of utilizing "use," "complicated vocabulary and jargon" are compression algorithms. Instead of using lots of words like "it's made up of different little parts that connect to each other" we can say "it's complicated." Still, I agree with the basic idea: run a decompression routine on the jargon. Not only does that show you've got a handle on the subject matter, but it makes it easier to communicate to outsiders (and children). There are, of course, three other steps, as the article notes. I don't need to reiterate them here. Mostly, I just wanted to clarify the Feynman connection from the earlier entry. One thing I've found from experience is that teaching something (it doesn't necessarily have to be to a child) is a way to firm up and increase one's own knowledge of the subject matter. It may be one motivation for my blogging. |
Nothing bigger than infinity, right? Well, what about second infinity? From Quanta: Mathematicians Measure Infinities and Find They’re Equal

Two mathematicians have proved that two different infinities are equal in size, settling a long-standing question. Their proof rests on a surprising link between the sizes of infinities and the complexity of mathematical theories.

When someone brightly proclaims, "Nothing is impossible!" I have two possible responses, depending on my mood: 1) Even in a vacuum, there exist electromagnetic fields, quantum virtual particles, etc., so yes, technically, it's impossible to achieve "nothing." 2) Okay, if nothing is impossible, please, go ahead and count to infinity. And if I'm in a really bad mood, there's a third: 3) "You are." But despite my #2 response, which demonstrates that there are things that are, in point of fact, not possible, mathematicians understand quite a bit about the very useful (but probably entirely abstract) concept of infinity, one bit being that the infinity of integers is of a lower order, aka smaller, than the infinity of real numbers.

The problem was first identified over a century ago. At the time, mathematicians knew that “the real numbers are bigger than the natural numbers, but not how much bigger. Is it the next biggest size, or is there a size in between?” said Maryanthe Malliaris of the University of Chicago, co-author of the new work along with Saharon Shelah of the Hebrew University of Jerusalem and Rutgers University.

Just so we're clear, in math, "natural numbers" means the positive integers, and "real numbers" means integers, fractions, and irrational numbers—both everyday ones like the square root of 2 and transcendental ones like pi. From what little I understand about this stuff, there is a greater infinity of real numbers just between the integers 1 and 2 than the entire infinity of integers. Oh, and to make your mind spin even more, the infinity of natural numbers is the same size as the infinity of integers, which is the same size as the infinity of even numbers, which is the same size as the infinity of multiples of 100, and the infinity of prime numbers, and so on. You don't have to take my word for it, but mathematicians have proven this shit rigorously, as the article goes on to explain using pretty basic concepts, all because of the work of a dude named Georg Cantor. Then:

What Cantor couldn’t figure out was whether there exists an intermediate size of infinity — something between the size of the countable natural numbers and the uncountable real numbers. He guessed not, a conjecture now known as the continuum hypothesis.

Now, I'm obviously not a mathematician, but this stuff fascinates me to the point where I've read entire books about it. One of them, as I recall, pointed out later work that showed that the continuum hypothesis cannot be proven or disproven within the standard axioms of set theory, which this article also nods to:

In the 1960s, the mathematician Paul Cohen explained why. Cohen developed a method called “forcing” that demonstrated that the continuum hypothesis is independent of the axioms of mathematics — that is, it couldn’t be proved within the framework of set theory. (Cohen’s work complemented work by Kurt Gödel in 1940 that showed that the continuum hypothesis couldn’t be disproved within the usual axioms of mathematics.)

The article continues in even more detail and, no, there's not a lot of actual math in it; it's mostly written in plain language, and the only problem I had reading it was keeping all the names straight.
Because while I don't understand much of mathematics, I understand it better than I understand people. My real point in posting this, however, is to show that some things are, in fact, impossible. But what is possible is to know what's impossible. |
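A footnote on the entry above, because "same size" sounds like hand-waving until you see it: for infinite sets, "same size" just means you can pair the elements off one-to-one with nothing left over. Here's a toy Python sketch of mine (not anything from the Quanta article) that pairs the naturals with the evens and with all the integers:

```python
# Toy illustration (mine, not Quanta's): two infinite sets are the "same size"
# if their elements can be paired off one-to-one with nothing left over.

def pair_with_even(n: int) -> int:
    """Pair the natural number n with the even number 2n."""
    return 2 * n

def pair_with_integer(n: int) -> int:
    """Zig-zag pairing of naturals with ALL integers: 0, 1, -1, 2, -2, ..."""
    return (n + 1) // 2 if n % 2 else -(n // 2)

for n in range(7):
    print(f"{n} <-> even {pair_with_even(n)} <-> integer {pair_with_integer(n)}")

# Every even number and every integer shows up exactly once, so the naturals,
# the evens, and the integers all have the same (countable) size. The real
# numbers famously resist any such pairing -- that's Cantor's diagonal result.
```

No pairing scheme like that can ever work for the reals, which is the whole reason there are different sizes of infinity to argue about in the first place.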
This bit has cycled out of the news by now, but I don't really care. I have something to say about it anyway. From The Guardian: Celebrities criticize all-female rocket launch: ‘This is beyond parody’ ![]() Amy Schumer, Olivia Wilde and Olivia Munn are among the famous names calling out the much-publicised space trip The all-female Blue Origin rocket launch may have received plenty of glowing media coverage – but not everyone is impressed. Oh, you mean a publicity stunt sometimes generates bad publicity? But I've been assured that there's no such thing as bad publicity. The stunt has drawn criticism from a number of female celebrities who were not keen on the Jeff Bezos-owned Blue Origin NS-31 mission, which included Katy Perry... Cutting off the quote there because, in reality, no one gives a shit about any of the other passengers. Also, criticism from "celebrities," female or not, is about as meaningful as astrology. Model and actor Emily Ratajkowski... ...said she was “disgusted” by the 11-minute space flight, which featured Perry serenading her fellow passengers with a cover of What a Wonderful World and advertising her upcoming tour setlist in brief zero gravity. “That’s end time shit,” Ratajkowski said. “Like, this is beyond parody.” Okay, that's about as much of the article as I can stomach quoting. Here's the thing, though: I don't blame Perry or whoever those other chicks were. Well, maybe Sanchez, but who can really blame her for wanting to marry billionaire Lex Luthor? Sorry, I mean Jeff Bezos; I always get those two confused. Point is, it was a cunning stunt, and it worked. People talked about it for weeks. I'm talking about it now, but only because the article has been languishing in the pile for a month. I'm tempted to say "I don't care," but obviously, on some level, I do. And I wanted to try to articulate why. Let's start with the definition of "space." There's no well-defined upper limit to Earth's tenuous outer atmosphere. It's not like the ocean, which has a shifting and rolling but definable boundary; air molecules just kind of get more and more rare the higher you go. Plus, if you did pick some density and say "anything above this density is atmosphere and anything below it is space," you'd find that the altitude varies depending on what spot you're above on Earth, partly because it'll be higher where the air is warmer. So "space" is defined to begin at the Kármán Line, 100 km above mean sea level. Leaving aside that it's not really a line but a (nearly) spherical shell, this launch barely exceeded that altitude. So, yes, from a technical and internationally-recognized legal perspective, they were in space. Briefly. Second, I'm going to say stuff about women in space, at great personal risk. The first woman in space was Valentina Tereshkova, who's still alive. Kate Mulgrew, the actor who portrayed Kathryn Janeway on Star Trek: Voyager looks a bit like her (I doubt that this is a coincidence). Now, if they'd sent Mulgrew up there, I might have been more impressed. But someone sent Shatner up in a different launch, and he was all jaded and shit so they probably soured on Trek actors. Anyway, my point is, Tereshkova was a trained cosmonaut. The first American woman in space, Sally Ride (which I always thought was an excellent name for an astronaut) was a physicist with a PhD. Katy Perry is a singer. I'm not ragging on singers. I'm not ragging on Perry. She's talented, though I can't say I personally like her stuff. 
I'm just saying that her skill set doesn't say "astronaut." Hence: publicity stunt. I remember thinking a similar thing about Shatner when they lofted him up there, except Shatner doesn't exactly have the singing chops, as anyone who's ever been subjected to his cover of "Rocket Man" can attest. We're not at the point yet where we need singers in space.

And let's not forget that it's pretty routine to send both men and women up to the ISS now, people who spend months there doing... whatever they're doing. Science, research, maintenance; you know, productive stuff. There was that one Canadian dude who brought a guitar with him and made some cool videos, but he's known as an astronaut, not as a musician. These spacers do actual work. Basically, this was a passenger flight, albeit a very expensive one. So, no, the passengers don't impress me, regardless of gender.

And finally, shame on the media for breathlessly covering this like it's some sort of grand accomplishment. It's not. No new science, no barrier-breaking, no frontier-pushing. We won't be getting cool space-age tech from a suborbital passenger flight. Sure, first all-female crew, but they could have lifted pretty much anyone healthy enough to handle some extra Gs. I mean, yeah, on some level it's pretty cool that we have companies doing their own rocket launches, but it's not so cool that they can afford to use their powers for advertising. Go, I don't know, mine an asteroid for rare earths or figure out a way to stop the next one from hitting our planet. More likely, they'll figure out how to make the next one hit just the right spot on our planet. Their competition's headquarters, e.g.

As anyone who reads my blog should know by now, I'm not against space exploration, and I believe that a variety of genders, races, and nationalities should be included—because it's about humanity in general, not about one government or company or culture. But this wasn't exploration. It was exploitation. |
I don't expect much from Stylist. Still, this doesn't clear even the low bar I set for them. Astrology fans, you’ve been reading the wrong star sign all this time: this is what your zodiac sign means now ![]() First off, to reiterate stuff I've said before: Yes, of course astrology is bunk. I consider it like folklore or fairy tales: obvious fiction, but still culturally significant, at least from an historical perspective. And it did give way to astronomy, similar to how alchemy evolved into chemistry. Any interpretation about what some particular stellar/planetary configuration "means," however, is purely made up. And second off, we've known about precession for decades. I remember great horror in the astrology community in the 1990s when it became publicized that zodiac signs are nearly one full sign off from their traditional calendar locations (which has been a thing for way longer than decades, but apparently, astrologers hadn't heard), and more freakouts in the noughties when astronomers announced that Ophiucus was part of the zodiac, too. January is over, February is upon us, and we all know what that means: it’s Aquarius season! Yeah, this article has been languishing in the pile for a few months. So what? The original article is nearly 10 years old, anyway. That’s right; astrologers have promised that all planets are going direct and we have zero retrogrades this month, which means we can lean hard into the do-gooder spirit of this astrological season without any annoying complications. This is usually where I close the tab and give up, but I still want to hear her take on the changes. Well, astrology, on the surface, may be based on the position of the sun relative to certain constellations – and it may be influenced by the movements of the sun, moon, planets and stars, too. "May be?" That's what astrology is. Not that it has any bearing on objective reality, but that's the lore that we've inherited. However, it is absolutely not considered to be a science. Finally, the truth. Indeed, it’s been wholeheartedly rejected by the scientific community – with many pointing out that astrological predictions are too general, too unspecific to be subjected to scientific testing. That's hardly the only objection. Even those of us who dismiss astrology as a load of absolute nonsense know which star sign we are. Yes, and as an Aquarius, I dismiss astrology as a load of absolute nonsense. (I am, however, sometimes fond of nonsense.) Because, as you’ve no doubt read already, it was recently revealed that everything we thought we knew about the zodiac was a lie. "Recently," my ass. Suddenly, astrologers started paying attention to astronomers. Selectively. If they'd actually paid attention to everything science said, there wouldn't be astrologers. Nasa – as in, yes, actual Nasa – have confirmed that the sky today is completely different to how it was almost 3,000 years ago, when the Babylonians first invented the 12 signs of the zodiac. Sigh. I... I can't even begin. Pauline Gerosa, the consultant astrologer behind Astrology Oracle, tells me: “Ophiuchus has always been one of the constellations that fall along the ecliptic. It just wasn’t selected by the ancient astrologers to be one of the 12 zodiac signs.” To muddy the waters (that's an Aquarius pun) even further, the zodiac constellations (I suppose I should explain here that these are the made-up interpretations of stellar configurations that get crossed by the Sun, Moon, and planets) aren't all nicely uniform in size. 
The article delves into that bit later. “It’s important to remember that astrology is NOT astronomy,” she adds. “Astronomy is a scientific concept based on 3D material reality. Astrology is a symbolic language, a philosophy, a multidimensional concept. They used to be seen as two sides of the same coin and hopefully they will be again.” You know, I can't really fault this quotation too much. It gives too much credit to astrology, perhaps, but considering the source, that's understandable. There are some more details at the article, but I just have one more takeaway from this: this represents an intrusion of actual science—well, not so much actual science as astronomical categorizations—into astrology. Maybe there'll be more of this, and astrology will be sent back to folklore, where it belongs, instead of being taken seriously as life direction and fate. |
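Since the whole precession bit above is just arithmetic, here's the back-of-the-envelope version, with rounded numbers of my own (not from Stylist or NASA):

```python
# Rough arithmetic (mine, rounded): Earth's axis precesses through a full
# 360-degree circle about every 25,800 years, and each zodiac sign spans
# 30 degrees of the ecliptic.
PRECESSION_PERIOD_YEARS = 25_800
DEGREES_PER_SIGN = 30

years_per_degree = PRECESSION_PERIOD_YEARS / 360       # ~72 years per degree
years_per_sign = years_per_degree * DEGREES_PER_SIGN   # ~2,150 years per sign

print(f"~{years_per_degree:.0f} years for the sky to drift one degree")
print(f"~{years_per_sign:.0f} years for it to drift one full sign")

# The Babylonian sign boundaries are roughly 2,500-3,000 years old, so the
# "nearly one full sign off" figure checks out.
```

Which is to say: the drift isn't news; it's a clock.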
Taking a chance here with a National Geographic link. I couldn't trust them once Fox bought them out (I fully expected headlines like "Global Warming: Myth Or Hoax?"), and I'm not sure Disney's much better. But I couldn't give this one a miss. Everything you think you know about spiders is wrong ![]() They're not attracted to your body lotion. They don't crawl in your mouth at night. In fact, they want nothing to do with you. Uh huh. That's what they want you to believe, to lull you into a false sense of security until, one night, you wake up, and something eight-legged and fuzzy is staring at you with eight hungry eyes. I mean, come on, did a spider write this? With hundreds of years of baseless myth to supply us, it’s no wonder as many as six percent of people are phobic of arachnids. Well, that's one spin on it. Another is that some people are phobic of spiders regardless of truth or fiction. Also, I find it difficult to believe that it's only six percent. One other thing: the author uses "myth" to mean falsehood, which is a perfectly acceptable definition, but many cultures have actual myths ("foundational stories") surrounding spiders, many of which paint the arachnids in the positive colors they deserve. It's just important to know what definition the spider who wrote this is referring to. These animals are stunningly diverse, ingenious creatures with so many characteristics worth admiring. Yes, and it's far easier to admire said characteristics on the exceptionally large members of order araneae. Much of what is commonly touted about the spindly eight-legged invertebrates is a misconception, according to Rod Crawford, a spider expert and curator of arachnology at The Burke Museum. “Everything you thought you knew about spiders is wrong,” says Crawford. Hey, another spider getting quoted! First, they aren’t insects. Spiders belong to a completely different class called “Arachnida.” Yeah, you know what else are in Arachnida? Such cute and cuddly exoskeleton-owning bugs as scorpions and ticks. Studies show that, in some ecosystems, more than 40 percent of all insect biomass passes through spiders, making them the number one controllers of insect populations. Yeah, yeah, I know: they eat bugs. This is not the flex you think it is. Myth: Spiders are out to bite us Most people will never be bitten by a spider in their lifetime. Yeah, that's what they want you to think. I'm healing from a spider bite right now, and it's not my first. Although it’s common to wake up with small skin bumps and sores and blame a spider, there’s almost always no reason to believe a spider is responsible for the prick, says Dimitar Stefanov Dimitrov, a spider evolution expert at the University Museum of Bergen in Norway. I strongly suspect that Dimitrov would be singing a different tune if he lived in Australia instead of Norway. Myth: We swallow some spiders in our sleep every year Throughout the years, several online forums and publications have claimed we swallow as many as eight spiders in our sleep every single year. Okay, even if that were true, though: so what? Apart from the "gross" factor. Myth: Spiders lay eggs in the tips of bananas and other fruits No, of course not. They lay them in your pillow. I remember when I was a kid and Bubble Yum first came out (being a kid, this was exciting: a soft bubble gum? Count me in) there was a pervasive urban legend that it contained spider eggs, a myth probably started by the trolls over at Dubble Bubble. 
Myth: Spiders can lay eggs under your skin and other crevices of your body The story goes like this: a woman returns from a holiday in a warm, exotic location and finds a bump on her cheek that’s pulsating and growing. Concerned, she visits a doctor, and when the specialist pries the welt open, hundreds of small spiders crawl out. We heard that story as kids, too, only it wasn't always a woman. I did once read an account of a biologist who got infested with a botfly larva, and who was so intrigued by the process that he just let it develop under his skin. There's some more at the link. Now, just to be clear, I'm mostly joking here. I admire spiders, preferably from a distance. That doesn't stop me from making "nuke it from orbit" jokes, or posting jump-scare gifs for your viewing pleasure. |
Not exactly cutting-edge physics, but this PopSci article showcases scientists using their noodles. Curious and hungry physicists whip up perfect pasta pan salt rings ![]() ‘Our simple observation of daily life conceals a rich variety of physical mechanisms.’ When you’re boiling water for pasta, throwing a bit of salt into the water can help it boil a little bit faster–if only by a few seconds. Not a good start, PopSci. Not a good start at all. That's not why you salt pasta water. It's for flavor and texture. ![]() With that, a white ring of salt deposits will often show up within the pan. A group of curious and hungry physicists harnessed the power of fluid dynamics to see what ingredients are necessary to create nicer looking salt rings–releasing larger salt particles from a greater height can help make more uniform salt deposits at the bottom of a pan. The findings are detailed in a study published January 21 in the journal Physics of Fluids. And with that, we come one step closer to a Unified Theory of Everything. Okay, no, now I'm the one lying, but at least I'm doing it for the sake of comedy. A team from the University of Twente in the Netherlands and the French National Institute for Agriculture, Food, and Environment (INRAE) were spending an evening playing board games and eating pasta, when they began to question what it would take for them to create uniform and “beautiful” salt rings. I can't help but note the absence of Italy from these experiments. The team set up a tank of boiling water in a lab and tested dropping in salt of various sizes at different speeds. By "various sizes," I assume they mean fine-grained to coarse-grained. “These are the main physical ingredients, and despite its apparent simplicity, this phenomenon encompasses a wide range of physical concepts such as sedimentation, non-creeping flow, long-range interactions between multiple bodies, and wake entrainment,” said Souzy. Jargon is a compression algorithm. I'll just trust that they can indeed relate this to other physical phenomena. Souzy also reports that he can use this data in the kitchen to “create very nice salt rings almost every time.” Which is cool and all, but I have to ask: how does the pasta taste? |
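Just to put a number on my gripe up there: salt raises the boiling point of water slightly (so if anything it takes marginally longer to get there), and the effect is tiny either way. A quick back-of-the-envelope sketch, using textbook boiling-point-elevation values and amounts I made up (nothing from PopSci or the study):

```python
# Back-of-the-envelope check (my numbers, not PopSci's): how much does a
# typical dose of salt change the boiling point of pasta water?
# Boiling-point elevation: dT = i * Kb * m

M_NACL = 58.44    # molar mass of NaCl, g/mol
KB_WATER = 0.512  # ebullioscopic constant of water, K*kg/mol
I_NACL = 2        # van 't Hoff factor for NaCl (roughly; it dissociates in two)

def boiling_point_rise(salt_grams: float, water_kg: float) -> float:
    molality = (salt_grams / M_NACL) / water_kg  # mol of solute per kg of water
    return I_NACL * KB_WATER * molality

# Say 10 g of salt in 4 liters (about 4 kg) of water:
print(f"{boiling_point_rise(10, 4):.3f} K")  # ~0.04 K -- you'd never notice
```

So: flavor and texture, like I said. The physics of the pretty rings is the interesting part; the "boils faster" bit is a kitchen legend.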
From Vox, a report that's sure to freak out a lot of people. The life-or-death case for self-driving cars

Sorry, a robot is probably a safer driver than most humans.

As usual, I'm not just accepting this at face value. Nor am I rejecting it outright. When the concept of autonomous vehicles (AVs) was first floated in the real world, I knew immediately what was going to happen: the public would freak out, and anyone who stood to lose revenue would take advantage of the freakouts to come down hard against AVs on "safety" grounds, playing into the general fear of robots. Municipalities, for instance, who stand to lose a significant source of revenue if they can't ticket people for speeding or rolling stop signs. And cops, who might have to switch to investigating real crimes like theft. And boy, have they been harping on safety.

Humans drive distracted. They drive drowsy. They drive angry. Even when we’re firing on all cylinders, our Stone Age-adapted brains are often no match for the speed and complexity of high-speed driving.

Ugh. They had to throw in a spurious reference to evolutionary psychology, didn't they?

The result of this very human fallibility is blood on the streets. Nearly 1.2 million people die in road crashes globally each year, enough to fill nine jumbo jets each day.

I admit I didn't check that statistic, but it tracks. I'm also not sure of the numerical comparison to ill-defined "jumbo jets," but I think the point is that every time we lose an airplane, "jumbo" or not, we hear about it for days or weeks afterward, while the road crashes are generally just background noise.

Here in the US, the government estimates there were 39,345 traffic fatalities in 2024, which adds up to a bus’s worth of people perishing every 12 hours.

I generally work with the nice round number 40,000 for average yearly traffic fatalities in the US. It's close enough to make the point I like to make, which is: that's about 110 fatalities each day, which translates to 4 to 5 per hour. Call it 4. Every time an AV so much as skins someone's knee, we hear about it, and it frightens people. And yet, if your phone pinged every time there was a fatality involving human drivers, you'd hear an alarm every 15 minutes or so. Injuries? Far more often than that. We're used to it. As I said, background noise. Obviously, there are a lot more human drivers than computer ones, so a direct comparison is more difficult. But my point remains: driving kills. In the US, it kills on the same order of magnitude as firearms.

But the true benefit of a self-driving revolution will be in lives saved. And new data from the autonomous vehicle company Waymo suggests that those savings could be very great indeed.

Obviously, we need to be very careful using data from a company whose existence depends on continued development of AVs. Fortunately, the article uses "suggests," implying that it would be good to look into this further, preferably without an Agenda (for or against).

In a peer-reviewed study that is set to be published in the journal Traffic Injury Prevention, Waymo analyzed the safety performance of its autonomous vehicles... They then compared that data to human driving safety over the same number of miles driven on the same kind of roads.

And that alleviates some of my concerns about impartiality. Not all of them, but some.
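Since I keep slinging these rates around, here's the arithmetic in one place—round numbers, my own, not Vox's or Waymo's:

```python
# Round-number arithmetic behind the rates above (mine, not Vox's or Waymo's).
FATALITIES_PER_YEAR = 40_000   # round figure for annual US traffic deaths

per_day = FATALITIES_PER_YEAR / 365            # ~110 per day
per_hour = per_day / 24                        # ~4.6 per hour
minutes_between = 60 / per_hour                # one roughly every 13 minutes

# And the hypothetical discussed below: an 85% reduction in fatal crashes.
remaining = FATALITIES_PER_YEAR * (1 - 0.85)   # ~6,000 per year

print(f"~{per_day:.0f}/day, ~{per_hour:.1f}/hour, one every ~{minutes_between:.0f} minutes")
print(f"After a hypothetical 85% cut: ~{remaining:,.0f} per year")
```

Keep those numbers in mind for the back-of-the-envelope bit that's coming next.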
Back-of-the-envelope calculations suggest that if the same 85 percent reduction seen in serious crashes held true for fatal ones — a big if, to be clear, since the study had too few fatal events to measure — we’d save approximately 34,000 lives a year. Okay, first of all, I was morbidly amused at "the study had too few fatal events to measure." Look, I get that the number of motor vehicle fatalities can never be zero, unless there are no vehicles on the road whatsoever (and even then, I suspect it would still be nonzero). It would be ideal, sure, but it's unrealistic to expect that. What I've been saying is that if AVs could be shown to reduce fatalities and other serious accidents by 10-20% (perhaps to an annual fatality level of 32,000 to 36,000, down from 40K), it would be worth it from a safety perspective. This study flips that script, implying a fatality rate of about 6,000 per year—a truly significant reduction, way more (pun intended) than my personal threshold. Of course, there are plenty of caveats to the Waymo study and even more obstacles before we could ever achieve anything like what’s outlined above. Yes, and I'm glad the article includes said caveats. You can read them there; I've already mentioned one of them (it being a company study). Still, the data looks so good, and the death toll on our roads is so high that I’d argue slowing down autonomous vehicles is actually costing lives. And there’s a risk that’s precisely what will happen. As with most things, there are other factors to consider. Accidental death is, obviously, a bad thing, so it's a fine metric for studies like this. But some other considerations include: personal loss of freedom (I can almost guarantee that AVs will have kill switches usable by law enforcement, which could be hacked or abused); economic impact (taxi drivers, rideshare gig workers, truckers, etc. to lose jobs); and ensuring that their routing is accurate (no driving into lakes or onto closed roads, e.g.); to name just a few. Not to mention the aforementioned loss of revenue to municipalities, which, frankly, I don't care about. There's also the issue of liability. Right now if I hit a pedestrian on the street, I'd be personally liable. Who or what is responsible if an AV hits a pedestrian? Well, that's for lawyers to figure out and, hopefully, if this study holds water, there would be a lot fewer such cases. Which is another consideration: personal injury lawyers would make less money. Waah. The article continues with a reiteration of what I've already said up there: Too often the public focuses on unusual, outlier events with self-driving cars, while the carnage that occurs thanks to human drivers on a daily basis is simply treated as background noise. (That’s an example of two common psychological biases: availability bias, which causes us to judge risk by outlier events that jump easily to mind, and base-rate neglect, where we ignore the underlying frequency of events.) As I said, if the news had to cover every traffic fatality in the US alone, we'd be getting four or five alerts an hour. The result is that public opinion has been turning against self-driving cars in recent years, to the point where vandals have attacked autonomous vehicles on the street. You know what that reminds me of? Luddites. Change is scary. Machines are scary. Also, we might lose our jobs, and that's really scary. In other words, I doubt that's all about safety or traffic fatalities. To sum up, I'm not strongly for or against AVs. 
I do think that reducing fatalities is generally a good thing, but, as I said, there are other considerations, though maybe not life-or-death ones. We need more studies, preferably independent ones, but ideally, any switchover (which would realistically happen after I'm gone) should be based on statistics and science, not on fear. |
Big Think asks the tough questions again: The 6 strongest materials on Earth are harder than diamonds

For millennia, diamonds were the hardest known material, but they only rank at #7 on the current list. Can you guess which material is #1?

Thing is, while we can use words like "strong," "tough," and "hard" almost interchangeably in casual conversation, those words have specific meanings in materials science. And there they are in the headline, conflating "strong" and "hard." (I'm the one who muddied the waters with "tough.") I'll just pause here for a moment while you get juvenile strong/hard jokes out of your system. Ready? Ok, good. Worse, later in the article (spoiler alert), there's even more confusion. So I thought I'd put this up front. While I do have some background in this from engineering studies, I'm by no means an expert. Wiki has a section on the distinctions, if you want more rigor.

As examples, glass is strong in many ways, but it's not very tough (except for certain specialty glass). Concrete is durable but brittle, not very hard (you can scratch it with many metals, as my sidewalk can attest after a snow-shoveling session), and has high compressive strength but low tensile strength. As for diamonds? Hard, but not tough; they'd make a lousy structural material. Their strength is fairly high, but there are many higher-strength materials, which is one reason I felt this article to be somewhat misleading.

One final bit of pedantry before jumping into the article: As I'm sure everyone is aware, diamond is carbon. Graphite is also carbon, and it's one of the least hard materials known (which is why we can write with it; it needs to be mixed with some other material in pencils so as not to wear down too fast). What we'll see here is that carbon is even more versatile than those two materials would suggest, and that's not even counting its ubiquity in biology.

Although diamonds are commonly known as “the hardest material in the world,” there are actually six materials that are harder.

And here, I'm not sure if the author means simply scratch resistance or not.

For example, hardness, scratch resistance, durability (as many very hard materials are also brittle), and the ability to withstand extreme environmental stresses (such as pressures or temperatures) all factor into how impressive a material can be.

And here we have some muddle again. Durability is a measure of wear resistance; there are brittle materials that are durable but not very hard, such as glass. If glass weren't durable, we wouldn't use it for windows. (To make matters even more muddy, there are types of glass that aren't very brittle.)

On the biological side, spider silk is notorious as the toughest material produced by a plant, animal, or fungus.

Spider silk is certainly strong for its weight—and, to be fair, it really is tough in the technical sense too (it soaks up a lot of energy before breaking), so that's one of the article's more defensible word choices.

While other materials may rank higher on the Mohs hardness scale than diamonds, they’re all easier to scratch than diamonds are. (And, consequently, can be scratched or otherwise damaged through contact with a diamond.)

And this bit makes no sense to me whatsoever. Either I'm missing something, or it's just plain wrong. The Mohs hardness scale, I've known from a very early age, refers to scratch resistance. Like I said, I'm not an expert, so don't just take my word for it. Now, I've already banged on long enough, but hopefully you get the idea: don't confuse hardness, toughness, strength, elasticity, etc. The article goes on to list the materials that surpass diamond in some way, but I'm still unclear on which material property they really mean.
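For whatever it's worth, here's how I was taught to keep the terms apart—standard textbook definitions, not anything from the Big Think piece:

```latex
% Standard materials-science definitions (not from the Big Think article):
% strength is the peak stress a sample can carry before it fails,
\[ \sigma_{\mathrm{strength}} = \max_{\varepsilon} \sigma(\varepsilon) \]
% toughness is the energy absorbed per unit volume before fracture,
% i.e., the area under the stress-strain curve up to the failure strain,
\[ U_{\mathrm{toughness}} = \int_{0}^{\varepsilon_f} \sigma(\varepsilon)\, \mathrm{d}\varepsilon \]
% and hardness is resistance to localized surface deformation (scratching
% or indentation) -- no tidy formula; it's measured empirically on scales
% like Mohs or Vickers.
```

By those definitions, diamond tops the hardness charts while being mediocre on toughness, which is exactly the distinction the article keeps smearing together.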
But I'm sure you'd at least like to know #1 on the list, because superlatives are always interesting: The #1 hardest material: Graphene At last, it’s the hardest material of all: a hexagonal carbon lattice that is only a single atom thick. And so we come full circle: hard diamond, soft graphite, ultrahard graphene. Carbon: is there anything it can't do? But the fun fact, left out of the article as far as I can tell, is that naturally occurring graphite consists of multiple layers of tiny bits of graphene. Sure, the interesting graphene is human-made, but it's not like we didn't know about its existence. Still, it's a little unclear whether graphene is hard, tough, strong, or some combination of the three. Whatever it is, though, it's damn interesting. |
Here's a source I don't think I've referenced before: Greater Good. Ever since I saw the movie Hot Fuzz (three or four times), that phrase has filled me with a sense of unease. And when I hear it, I reflexively repeat it, intoned in a British Northern accent, or as close to it as a US Southerner can get. Five Ways Nostalgia Can Improve Your Well-Being ![]() Some recent studies suggest that experiencing nostalgia about our past can make us happier and more resilient during times of stress. You know, used to be, nostalgia was considered a mental illness. Sigh... I miss those days. I often find myself nostalgic for days gone by—especially my young adulthood. Thinking about days when I could go backpacking with a friend on a moment’s notice or dance the night away at my wedding, without the constraints of child care or a limited energy supply, gives me a bittersweet feeling—a mixture of joy, sadness, and longing. Me? I've always had a limited energy supply, especially when it comes to group activities. And I've never had to deal with child care. Well, there was that one time my friend roped me into baby-sitting. I taught the kid how to make (and use) water balloons, and somehow never found myself in that situation again. Staying “stuck in the past” was often associated with being unable to adjust to new realities, like when soldiers were nostalgic for their faraway homes and experienced loneliness and dread. That may be an extreme example. While I've never been a soldier, I can understand how those emotions could happen, even without nostalgia. Not that long ago, some considered nostalgia to be a mental illness, akin to melancholy, which could lead to anxiety, depression, and sleep disorders. Yes, and I already made the joke about it up there. I just included this quote to support that the "mental illness" assertion was factual, unlike many of my jokes. Waltz's Second Rule: Never let the facts get in the way of a good joke. Or a bad one. Especially a bad one. But more recent findings on nostalgia suggest it can be good for us, increasing our well-being, making us feel connected to other people, and giving us a sense of continuity in our lives. Good. Let's drive another nail into the "living in the present moment" coffin. Rather than being a problem, nostalgia can help bring happiness and meaning to our lives. On second thought, happiness is overrated, and meaning is whatever we want it to be. Nostalgia makes us feel socially connected Nostalgia about our past often includes recalling important people in our lives—people who cared about us and made us feel like we belonged. Yeah, okay, but it can also highlight how you'll never see some of those people again. Seriously, though, the article links to some studies, which, full disclosure, I didn't read. Nostalgia helps us find meaning in life A sense of meaning in life involves knowing that your existence matters and that your life has coherence or purpose. It’s something we all strive for in one way or another. No, it's not. As one study found, nostalgia can increase your motivation to pursue important life goals, because it increases meaning—not just because it puts you in a better mood. Again, links to studies. Again, no clicky here. Nostalgia can make us happier Though it does seem to do just that—to boost our mood. Even though nostalgia is by definition a blend of positive and negative emotion, the positive tends to outweigh the negative, meaning we feel happier overall. I feel like the key words there are "can" and "tends to." 
Nostalgia puts us in touch with our authentic selves When thinking nostalgically about our past, we are the prime protagonists in our own life stories. I'm already the prime protagonist in my own life story. Perhaps for this reason, engaging in nostalgia can lead to personal growth. At least one study found that feeling nostalgia made people feel more positively about themselves, which, in turn, made them more open to experiencing new things, expanding their horizons, and being curious—all signs of psychological health. I'm still open to those things, nostalgia or not. But lately, what's been on my mind is: how long will that last, at this point? Nostalgia may help people who feel disillusioned or depressed Perhaps because of these potential benefits, people tend to engage in nostalgia when they are feeling down, lonely, or disillusioned. You know what helps me in those situations? Listening to really depressing music or watching really depressing movies. The article does go into some of the negatives: Of course, that doesn’t mean that nostalgia is always good or can’t have a downside. If nostalgia makes us spend too much time thinking about our past, it may prevent us from recognizing the joy in our lives right here and now. And, since we tend to engage in nostalgia when negative things occur, it could become an avoidance strategy that keeps us from dealing with present problems in more effective ways. Me, I'm not coming down hard on one side or the other. My feeling is that trying to squelch nostalgia, or any other emotion, simply on the basis of "I've heard it's bad to feel this way, so I won't feel this way" can't be good for you, but that's hardly scientific. I'll just point out that the "-algia" part of the word comes from a Greek root meaning "pain," and that the word apparently had the original sense of "homesickness," not a general longing for the past. The past, though, pain and all, is what made us who we are, so I can see why sometimes reflecting upon it can have positive outcomes. Let's just not lose ourselves in it. |
Here's an older bit (2018) from Slate that's still relevant, as I know I've talked about this topic on numerous occasions. Here’s Why It’s So Impossible to Get Reliable Diet Advice From the News ![]() What’s good for you seems to change every week. Maybe we should stop blaming the media and look at the studies underneath the stories, too. Oh, no. I'm not going to stop blaming the media. Or at least particular outlets and/or individual reports. Science reporting is all over the place, and it's important to hold them accountable. For example, anyone who reported breathlessly and credulously about the recent false claim of de-extincting the dire wolf? Yeah, don't trust them. That has nothing to do with nutrition science, of course, but it's the first example of bad science reporting I could come up with off the top of my noggin. So, no, large media site, I will not "stop blaming the media." Fat might be good for you this week, and coffee is bad. Or maybe, fat is bad, and coffee is good. If you are a connoisseur of such articles—say, someone like me who would like to make “evidence-based choices” about health—the ping-ponging of studies and coverage will not have escaped your notice. My favorite example of that is eggs, the beginning of the discussion of which predated the internet. Good. Bad. Kinda good. Kinda bad. Good. Bad. Good, but only the whites. Bad, but okay in moderation. Good. Bad. Good, but too expensive. I think we're back to good now; I don't eat eggs that much, so I don't pay as much attention. It occurred to me at some point that one way to control people is through their diet (and also their sexual practices, but that's irrelevant to this). Religions have been doing this for probably as long as there's been religions. Hell, the Pythagoras cult in ancient Greece practiced vegetarianism, and reportedly wouldn't even eat beans (how their brains functioned enough to figure out basic math is an open question). Even secularists have gotten into the act with plant-forward diets, the fat-free craze of the 90s, the carb-free craze of the noughties, and gluten scares and whatever. It is, intentionally or not, another way to divide people, since eating is, in humans, a communal activity. While I trust science in most things, nutrition science is notoriously all over the place, so I pretty much just eat what I want, when I want. It is easy, especially as someone who is on the research side of things most of the time, to fault the media for sensational coverage of individual studies that fails to consider the broader context. And certainly there is a healthy dose of that all around us (for example, why write a headline like “Do Tomatoes Cause Heart Attacks?” when the answer is “no”?). Oh, the answer to that question is easy: it's clickbait. But I don’t think this is the main problem, and at the very least, it’s not the only one. Instead, I would argue the main problem is that the studies that underlie this reporting are themselves subject to significant bias. I tend to agree with that assessment, but mostly just in the realm of nutrition science. Everyone has bias, and while scientists are trained to minimize it, it exists, even unconsciously. Science is largely self-correcting; that's why replication and peer review are necessary. 
The unfortunate truth is that people latch on to the first thing they hear, which is why we're still having arguments about the completely and thoroughly debunked idea, promulgated by someone with an Agenda, that vaccines cause autism (there's a lot more to that false claim, but I have another article queued up to talk about it). When you look at one particular food in the data and try to understand its impact, it’s impossible to zero in on the impact of just that food—you’re also seeing the impact of all of the other features that go into determining what food you eat. I'm pretty sure I've noted something like this before. At any rate, the article goes on to explain this sort of thing better than I ever have, so I'd suggest reading it. They also mention some variables that I don't think I've spent much time discussing, including education and wealth. There are even helpful graphs. But I think the bulk of the change must fall to how we do research, and how seriously we take these problems. Okay, I could argue on proportionality (studies / reporting on studies) here, but I won't. What I will do is point out that "we" don't "do research." (Watching videos on the internet doesn't count as "doing research"). The author there is apparently using "we" to mean "us scientists," which may not be apparent from just my cherry-picked quotes. This is one of the limitations of the English language, but a simple "we researchers" could have clarified. "We," as in average ordinary non-scientists, if we're going to be subjected to these potentially biased studies, have the responsibility to ourselves and others to spot questionable studies and reports thereof. Yes, like I've done in here in the past and will probably do again in the future. I know this may sound hypocritical, because I've railed before against putting certain responsibilities on us, but I think in this case, it's warranted. At the very least, though, let's not forget the ultimate confounding variable: personal taste. If you're forced to eat something you don't like, you'll eat less of it, and less means lower calories overall, which could lead to better health by some measures. I've suspected this for many years; one of the earliest stories in my portfolio was based on this idea. If I had to eat kale all the damn time, for instance, I'd get thin—but at what cost? At some point, you realize you're maybe going to live longer, but only by giving up some of the few things that make life worth living, so how is that better? Especially since you could get hit by a bus or a falling Russian satellite at any moment. |
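To make the confounding point concrete, here's a completely made-up toy simulation (my own invented numbers, no real data anywhere in it): in this little world, kale does nothing at all, but the people who eat kale also tend to exercise, so a naive comparison makes kale look protective.

```python
# Toy confounding demo (entirely invented numbers): kale has ZERO effect here,
# but health-conscious people both eat kale and exercise, so kale eaters still
# come out looking healthier in a naive comparison.
import random

random.seed(42)

def simulate_person():
    health_conscious = random.random() < 0.5
    eats_kale = random.random() < (0.7 if health_conscious else 0.2)
    exercises = random.random() < (0.8 if health_conscious else 0.3)
    # The "health score" depends on exercise (plus noise), never on kale.
    score = 50 + (20 if exercises else 0) + random.gauss(0, 10)
    return eats_kale, score

people = [simulate_person() for _ in range(10_000)]
kale = [s for k, s in people if k]
no_kale = [s for k, s in people if not k]

print(f"kale eaters:     {sum(kale) / len(kale):.1f}")
print(f"non-kale eaters: {sum(no_kale) / len(no_kale):.1f}")
# Kale eaters score noticeably "healthier" even though kale did nothing.
```

Swap in coffee, eggs, or red wine for kale and you've got half the nutrition headlines of the last thirty years.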
As a (mostly) solo traveler myself, this article from Business Insider caught my attention. Okay. Mostly I wonder why they bothered to publish this. Is it some sneaky pronatalist propaganda? Shill for the travel industry? Just a way to get eyeballs on the site? For most of my 20s, travel was my whole personality. Huh. Most of us had "struggle to find an entry-level position and not get laid off" personalities in our 20s. So, when I started feeling a little stuck in the summer of last year at almost 29 years old, I did what had always worked before: I packed a bag, booked a one-way ticket, and left. Oh, no. The horror of turning 29. But then, one afternoon, hiking through the jungle, watching scarlet macaws flash across the sky, I felt it: nothing. No awe, no wonder, just a dull, creeping awareness that I'd seen this all before, that I could be anywhere, that none of it was touching me the way it used to. This Just In: people change as they get older. It's not always about "growing up." It's not about "putting away childish things." It's just change. I'm certainly not the person I was in my 20s, and while I can't point to a certain event and say "This was the watershed moment, the point at which my tastes changed," it happened. Perhaps gradually. Now, travel just felt like I was running away. I wasn't discovering new things about myself. I wasn't growing. I wasn't even particularly interested in where I was. Okay, well, I'll give her points for recognizing this and not holding on to old habits just because they're old habits. When I came back to the US, I expected to feel relief. Instead, I felt restless in a way that travel couldn't fix. Yeah, that's what happens when you've changed and you haven't yet figured out what you want to do next. A deeply meaningful life isn't found in constant movement; it's built over time. It's in the friendships that deepen over years, not days. The sense of belonging that grows from showing up again and again. The purpose that comes from committing to something, even when it's not thrilling every moment. Perhaps the problem is looking for meaning when there isn't any. But really... this is not some grand revelation. This is, again, an age thing. It hits some people earlier or later than others, but eventually, I think, most people get there. Travel will always be a part of my life, but I no longer see it as the answer to everything. That's because there is no one answer to everything. No, not even religion. I, too, enjoy travel, but I don't see it as some grand solution to all of life's problems. It's just nice to get out and do something different every now and then. If travel is the only thing you do, the "something different" may be settling down, as it was with this author. When I was a kid, there was a house on my road with a shingle outside proclaiming its name: "Journey's End." I didn't understand that as a kid. I think I do now. Please don't think I'm ragging on this chick. I only question why BI decided to publish this particular piece, which seems more like a blog entry than an opinion piece for a magazine (not that there's anything wrong with blog entries, either). I can't help but think it's some sort of propaganda, but I might be paranoid about that. |
From Popular Mechanics, some science reporting that I'm not going to get too skeptical about this time, promise. Scientists Found Evidence of a Megaflood that Shaped Earth’s Geologic History

The flood may have refilled the entire Mediterranean Basin in just two years.

Not that it shouldn't be approached with a level of skepticism; it's just that I don't know enough about the subject to know what questions to ask. However, I do question the headline: certain people see that headline and immediately think of one particular story involving rain, animals, and an ark. Hopefully the subhead is enough to disabuse one of any such notions, not to mention the article itself.

Ages, epochs, periods, and even eras are often defined by some sort of geologic trauma. The Chicxulub asteroid, for example, pushed the Earth into the Cenozoic Era, and 65 million years later, experts are pondering if we’ve entered a new geologic age induced by modern humans (and their predilection for greenhouse gasses).

If you ever look at a geologic time scale (here's one from Wiki), you'll see how ages nest inside epochs, epochs inside periods, and periods inside eras. As for the "new geologic age induced by modern humans," I don't know for sure, but I thought they discarded the concept of the Anthropocene. Of course, "they" aren't a monolith and there might still be debate.

Around 6 million years ago, between the Miocene and Pliocene epochs—or more specifically, the Messinian and Zanclean ages—the Mediterranean Sea was cut off from the Atlantic Ocean and formed a vast, desiccated salt plain between the European and African continents.

If there's no ocean or sea between the continents, are they separate continents? By ancient convention, Europe and Asia are considered different continents, so I suppose so.

Until, that is, this roughly 600,000-year-long period known as the Messinian Salinity Crisis suddenly came to an end.

Messinian Salinity Crisis would make an excellent name for a 70s prog-rock band.

At first, scientists believed that the water’s return to the Mediterranean took roughly 10,000 years.

I have a bit of an objection to this wording. It's not like scientists took it on faith; there was evidence. It's entirely possible that the evidence was misinterpreted, but, as this article shows, scientists change their views when new or reinterpreted evidence shows up.

But the discovery of erosion channels stretching from the Gulf of Cadiz to the Alboran Sea in 2009 challenged this idea, suggesting instead that a powerful megaflood may have refilled the Mediterranean Basin in as little as two to 16 years.

Other than wondering why the author didn't just say "Strait of Gibraltar," which is probably better-known globally than "Alboran Sea" and "Gulf of Cadiz," there's a really, really big difference between 10,000 years and something on the order of a decade. Specifically, about three orders of magnitude. Quite a few discoveries move whatever needle by a tiny amount, like if there's evidence that the Sun is 5 billion years old but new evidence comes in that suggests 5.1 billion (I'm not saying this happened, just an example my head came up with). But this difference is a major shift. So I'd be looking for lots of evidence to back it up. Extraordinary claims require extraordinary evidence, and I call a three-orders-of-magnitude change extraordinary. But, again, I'm not saying it's not true; I just don't know much about this subject.
That likely means this flooding event—now known as the Zanclean megaflood—featured discharge rates of roughly 68 to 100 Sverdrups (one Sverdrup equals one million cubic meters per second).

Case in point: I'd never heard of the Sverdrup. So of course I looked it Sverdr-up. It shouldn't be surprising that they came up with a larger unit. This is analogous to how star masses are reported in terms of solar masses, or interstellar distances in light-years or parsecs. It keeps us from dealing mathematically with huge numbers, like billions or trillions, or having to use exponents. At any rate (that's a pun there), even if the numbers (68 to 100 in this case) are comprehensible, the amount of water flow is almost certainly not. The article goes into a discussion of the evidence that led to this extraordinary conclusion. I don't know enough to say whether it's compelling or not, but I did find it an interesting read. But then:

This model shows that flooding could have reached speeds of 72 miles per hour, carving deep channels as observed in the seismic data.

Look, I get using nonstandard units to make enormous quantities somewhat manageable in calculations, but switching from metric/SI to "miles per hour?" That, I cannot abide. Pick one. (It's about 115 km/h.) Now, let's see if I can find a lead singer for Messinian Salinity Crisis. And some musicians. Because I have no talent, either. |
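Before I go hold auditions, one more back-of-the-envelope check, using a commonly cited figure of roughly 3.7 million cubic kilometers for the Mediterranean's volume (my number, not the article's), to see whether 68 to 100 Sverdrups really could refill the basin on a timescale of a couple of years:

```python
# Sanity check (my assumed volume, not PopMech's): how long would 68-100 Sv
# take to fill the Mediterranean? Assumes a commonly cited basin volume of
# about 3.7 million cubic kilometers.
MED_VOLUME_M3 = 3.7e6 * 1e9      # 3.7 million km^3, converted to cubic meters
SVERDRUP_M3_PER_S = 1e6          # one Sverdrup = 1,000,000 m^3/s
SECONDS_PER_YEAR = 365 * 24 * 3600

for sv in (68, 100):
    seconds = MED_VOLUME_M3 / (sv * SVERDRUP_M3_PER_S)
    print(f"{sv} Sv: ~{seconds / SECONDS_PER_YEAR:.1f} years to fill")

# Comes out to roughly 1.2 to 1.7 years at peak flow, which squares with the
# "as little as two to 16 years" range (the flow wouldn't stay at peak).
```

So the numbers hang together, at least to the extent that I can check them from my couch.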
Once again, Mental Floss tackles the important topics. Like many kids, I found history classes boring. Later, history became a favorite topic. I often wondered why that's the case. Part of it is because kids lack context, I'm sure. But another part is that they never taught the history of pizza. The history of pizza is a large pie—half Margherita and half lies. If you have to order a half-and-half pizza, you have failed at diplomacy and compromise. The most famous story about its origins, in which the classic tri-color pie was created to honor Queen Margherita of Savoy, is a work of fiction. And yet, it's the first thing people hear, so they'll stick with the fictional version. U.S. soldiers did not fall in love with pizza en masse during their time fighting World War II and bring it back to the States. Pretty sure I've never heard that tale. And the pizza in New York is not good because of the magical tap water. That bit, I knew. The pizza there is good because it's New York pizza. While New York City tap water is remarkably good for drinking, it doesn't contribute much to the taste of New York's most perfect food. Nor does it do anything to improve the taste of beer from their local breweries. Let’s take a look at some iconic pizza styles... Some of which aren't pizza, but okay. In 2014, newly-elected New York mayor Bill DeBlasio set off a small international incident when he was photographed eating his pizza with a knife and fork... So, was the then-mayor wrong? Right? Obviously, he was wrong, as he's a politician. The answer is both, and that’s because pizza is at once internationally recognizable and completely regional. That’s why some people look at a Hawaiian pie and see the greatest crime ever committed to leavened bread and others see a beautiful story about immigration, intercultural influence, and innovation (or, at least, lunch). The only thing I love more than watching Chicago vs. New York pizza arguments is watching the pineapple-on-pizza arguments. Well, actually, I love pizza more than any argument, but they still amuse me. The article goes into the Margherita thing, then: According to food historian Tommaso Esposito, up until the mid-20th century, pizzas were usually ordered by simply listing the ingredients you wanted on top. Esposito wrote a book all about pizza songs (yes, that’s a thing) from the 16th century up until 1966 and found that none of the songs mentioned specific pizza types by name. Hey, I still order by listing the ingredients I want on top. Also, how come I don't know any pizza songs? Neither of those two famous Neapolitan pie varieties would have been possible without tomatoes. And I'm glad the article acknowledges this. While something resembling pizza undoubtedly existed long before tomatoes were brought over from the Americas (I've seen histories tracing it back to classical Rome), it took the nightshade fruit to really make pizza what it's recognizable as today. When we think of pizza today, tomatoes—a crop the Aztecs had introduced to the Spanish—often seem like an essential ingredient. That's a kind way of putting "the Spanish stole tomatoes from the Aztecs." The Oxford English Dictionary, in fact, defines pizza as a dough “baked with a topping of tomatoes, cheese, and any of various other ingredients.” I don't accept dictionary arguments, but this one reflects common usage. Anyone who’s ever had a white pie might blanche at that definition. Ha ha. I see what you did there. 
There’s a written record from Gaeta, about 60 miles up the coast from Naples, dating back to the end of the 1st millennium CE. It lays out an agreement in which someone owes a local bishop 12 pizzas every Christmas and Easter Sunday. As the article notes, this was in the B.T.E. epoch (Before Tomatoes in Europe). We don’t have any way to know exactly what that proto-pizza looked or tasted like, but consider what the simplest version of a pre-Columbian-Exchange pizza might entail: a simple Mediterranean flatbread. Kind of like … a pita. Now here's where the article gets into that linguistic parallelism, something I've wondered about often myself, but never cared enough to look up. Plenty of sources think this is no accident, and draw a linguistic line straight from pita to pizza. That’s not the only possible etymology for the word, though. There's one important difference between pita and pizza, though: the former is generally baked on its own, while pizza dough is topped and then baked. Now, I've had things called "pizza" which feature pita or naan or other flatbread, pre-baked, topped with traditional pizza toppings (tomatoes, mozzarella, pepperoni) and then baked again, but I've always thought it's not true pizza. It can be good, though. If we define pizza as a flatbread with toppings, we can imagine it being “invented” more or less independently by the Ancient Greeks, Egyptians, and Natufians (from modern-day Jordan, who were apparently making bread more than 14,000 years ago). Yes, putting stuff on bread is as old as civilization, I can accept that. I can also easily see someone putting another hunk of flatbread on top, so I've never truly accepted the "Earl of Sandwich" story for eating something between two pieces of bread. The article backs me up on this, too: The idea of putting something delicious inside a pizza-like bread likely dates back thousands of years. They talk about figs as the "something delicious" before going on with: Eventually, pizza with figs became popular beyond those who ate it out of economic necessity. Wealthier eaters embellished the simple dish with prosciutto, creating a new variation that harkens back to pizza’s historical roots and remains popular today. This parallels the history of a lot of cheap eats. You take what's available in an area, and it feeds the masses. Then, later, it becomes a gourmet delicacy. Hell, France made basically a national cuisine out of that idea. Snails and frog legs, anyone? The Hawaiian pie was invented in 1962, according to most accounts, by Sam Panopoulos, a restaurateur living in Ontario. Sam was originally from Greece, and the boat he left on stopped, fortuitously, in Naples, where he first became acquainted with pizza. Unlike the murky origins of pizza itself, that story checks out. I like it because it's international: Greek, Italian, Canadian, Polynesian, American. The article also discusses other styles of pizza, like Detroit and Chicago, which I don't consider pizza. Again, though, it can be good. A different approach to that same long cook time may have given us Ohio Valley-style pizza. One of its defining features is the last-minute additions of cold toppings, including cheese. Unlike some other regional pizzas, Ohio Valley style tends to stay in the Ohio Valley. There's a lot more at the link. 
I won't belabor it further, except to say that regardless of categorization arguments, I only have one piece of advice about pizza, or pizza-adjacent concoctions: if you like it, eat it, and don't listen to those of us who need to be purists or pedants. |
A few days ago, I shared an article about how to tell if someone is rich. This one's like that, only it's about smart. From Upworthy: How do you know someone is very smart? Here are 15 'subtle signs' others notice. ![]() "You can understand both sides of an issue and still think one is wrong." It's probably a lot easier to tell if someone's stupid. That's easy: they are. Everyone is stupid; even, sometimes, very smart people. A Redditor named Occyz wanted to know how people tell the difference by asking them to share the “subtle” signs that someone is very intelligent. Oh, great, an article that summarizes a Reddit thread. In other words, don't believe a word of it. (See? I is smart.) A big takeaway is people think highly intelligent people are mentally flexible. They are always interested in learning more about a topic, open to changing their minds when they learn new information, and they're acutely aware of what they don’t know. So, people of questionable intelligence, plus a bunch of AI bots, ![]() In fact, according to the psychological principle known as the Dunning-Kruger effect, there is a big confidence chasm between highly intelligent people and those who are not. Low-IQ people often overestimate what they know about topics they need to familiarize themselves with. Conversely, people with high IQs underestimate their knowledge of subjects in which they are well-versed. In fact, starting a paragraph with the words "in fact" does not, in fact, mean that what follows is fact. Here are 15 “subtle” signs that someone is highly intelligent. "They don't tell everyone how smart they are" seems to be missing from the list. Incidentally, the article opens with a big picture of Steve Jobs. Now, there's no denying that Jobs was intelligent. He started a company with a couple of friends in a garage, and by the time he died, it was the most valuable company in the world (based on market capitalization). But he also eschewed evidence-based medicine, quite possibly leading to an early death. I'd argue that's not very smart. On the other hand, had he held out a little longer, Apple wouldn't have been the most valuable company in the world anymore, so maybe he was playing n-dimensional chess and winning? I don't know. Point is, smart isn't everything, just like money isn't everything. You can be smart and still a raging asshole, like Jobs reportedly was. I won't bore everyone with comments on every single item in the article. Hopefully, the ones I mention here will be enough to get my point across. 1. They admit their mistakes "When someone can admit a mistake and they know they don’t know everything." This sounds more like learned behavior. It is a good trait to have in most situations, I think, but I can't say it correlates with general intelligence. There are a few on the list like this. 2. Great problem-solvers On the other hand, this one strikes me as the actual definition of intelligence. 3. They appreciate nuance "'I can hold two opposing ideas in my head at the same time.' Anyone who is willing to do that is intriguing to me." I'd agree with that. I've said many times that life isn't binary; it's not all good/bad, black/white, whatever. I'm just not sure one has to be a genius to do it. 5. They have self-doubt The great American poet and novelist Charles Bukowski once wrote, “The problem with the world is that the intelligent people are full of doubts and the stupid ones are full of confidence,” and according to science, he’s correct.
Yeah, well, Yeats wrote it first (I think): "The best lack all conviction, while the worst / Are full of passionate intensity." 9. They can simplify big ideas Okay, but to me, that's less a marker of intelligence and more a sign of... I don't know. Empathy? What do you call wanting other people to understand something? And also of being so well-versed in the "big idea" that they can explain it to the uninitiated. Richard Feynman, who gets my vote for smartest dude of the 20th century (edging out the perennial icon Einstein), reportedly once said, "If I could explain it to the average person, it wouldn't have been worth the Nobel Prize." And yet, he spent a lot of time explaining stuff. I wish I could find out who said something like "If you really want to learn something, figure out how to explain it to a fourth-grader." I thought it was Feynman, but I'm having trouble finding the quote. If indeed it exists. 11. They're humble "They don't continually need to tell people how intelligent they are." Okay, so up there, where I said '"They don't tell everyone how smart they are" seems to be missing from the list'? I was wrong. See what I did there? There are more in the article, as you might have inferred based on the number-skipping (and the fact that I told you I was going to skip some), because you're smart. Now, just to be clear, I'm not saying these are bad things. Everything on that list is what I'd consider a desirable character trait, to one degree or another. I just question their correlation with what we call intelligence, which, as I noted above, is notoriously hard to quantify in general. Sure, there are IQ tests, but I don't think such tests measure all possible forms of intelligence. And, just to reiterate something I've said before, it's best not to conflate intelligence with knowledge. Someone who does well on trivia questions has a lot of stuff memorized, but that doesn't necessarily mean they can figure something out that's unfamiliar to them. It's like, I don't know, if you have the dictionary memorized, you'll be able to make more Scrabble words, but will you be able to place them on the optimal score-enhancing spaces? The former is knowledge; the latter may be intelligence. In conclusion, there are a whole lot of other dimensions to a person than just "smart." Or how much money they have. Which also aren't necessarily correlated. I mean, everyone knows, or should know, that the only thing that matters is how attractive you are. |
I couldn't let this hate-review from SFGate go by without comment. A stay at the decrepit tomb of what was once the Vegas Strip's coolest hotel ![]() When it opened, Vegas had never seen anything like Luxor. Now, it's one of the most hated hotels on the Strip. Who the hell wrote this? Someone working for the competition? There's a lot of competition there, but I'd suspect Harrah's (owner of Caesar's Palace). And within 10 minutes of my arriving at Luxor, it was clear why it’s one of the most reviled hotels in Las Vegas. Really? Because within 10 minutes of me arriving there, I'm already relaxed and ready to gamble. I pulled into the porte cochere shortly before noon and headed inside with my luggage in tow. Hoping to stow my bag while I explored the resort, I walked over to the bell services desk. The employee gestured for me to come closer, then angrily pointed behind me. “That’s the line,” she said. I turned to see a queue about 10 feet away. It extended all the way through the lobby to the casino floor. Okay, a few things to unpack here. Let's start with the last bit. That makes it sound like the lines at Disney. This is bullshit. There's not much space between the front desk and the casino floor. Now, and here's the major, epic fail of this takedown piece: the author is channeling Yogi Berra here. "No one goes there anymore. It's too crowded." So now I begin to suspect that this writer, "mortified" (her own word) at her faux pas, simply got a bad first impression and then found everything she could to rag on. Then, by some miracle, I got a text: My room was ready. I passed two broken moving walkways, a closed cafe and a long, blank wall lined with employee-only doors before finding the ancient-looking bank of elevators. Yeah, I know that route. I also know the quicker, alternative route, which takes you through the casino floor. Had she gone that way, she might have written about how the hotel forces you through the noisy, flashy, money-sucking part of the first floor. As for broken walkways, yeah, that happens in an aging building. I've never seen the place not having some sort of construction going on. Finally, it's not like elevators were a thing in ancient Egypt. The least they can do is style them like older elevators. When the first one opened, the electrical panel was exposed, wires spilling out. The doors shuddered shut, and the ascent began. Because Luxor is a pyramid, the elevators are more like funiculars, climbing sideways at a 39-degree angle. Okay, okay, I'll grant that the exposed wires, which seem to be confirmed by a pic in the article, are a major fail on the part of maintenance and/or management. While there are laws about under-21s in casino areas in Vegas, plenty of families stay in the hotels. I'm not a big "think of the children" person, but kids do have a tendency to get curious about stuff like that. The elevators rattle uncontrollably, shaking the occupants like a martini all the way up. They’re also incredibly slow. I was on the 21st floor, and it took over a minute to get there. Waaah, they're slow. I think of them as Wonkavators. They are a bit rumbly and shaky, but that's part of their charm. As I put it to anyone in there with me (captive audience), "Hey, we came here to gamble, right?" Things did not improve when I reached the room. As I closed the door behind me, I saw that there was no deadbolt, no bar lock, no privacy latch. 
Okay, first of all, I've been in lots of hotels, from fleabags in rural Montana to the Ritz-Carlton in DC, so I don't recall specifically if the Luxor rooms lack those features. Seems to me they do have them, but it's possible that some rooms don't. Second, the pyramid is not the only place to stay there. They have two "tower" facilities with more traditional elevators and rooms without sloping walls. As I recall, you only pick the pyramid rooms by your own choice. Does Luxor mistrust its guests so much that it doesn’t provide interior locks? I wondered how many times a day its staff had to force their way into rooms, and why. Look, I'm no expert on the hotel industry, but management has ways to bypass those "security" features. People die in hotel rooms on a regular basis (not because they're in hotel rooms, but just because a lot of people stay in hotels and everyone dies at some point). Also, let's not forget that Luxor is immediately adjacent to, and for a long time shared an owner with, Mandalay Bay, and Mandalay Bay was where the infamous concert shooter stayed. The dark exterior of Luxor made for a perpetual tint in the room, worsened by the fact that one of the windowpanes was crusted in desert dust. This is probably a great setup for someone with a blistering hangover, but it gave a depressing pallor to the space. Counterpoint: "I couldn't sleep in because the room was too bright!" There were two positives. One was the Wi-Fi, which was strong enough to seamlessly maintain a video call. You're on the 21st floor of the pyramid, and you're trusting the hotel Wee-Fee over your phone's hotspot? Your priorities are backwards. The other was the toilet, which flushed with the force of a cruise ship lavatory. I'm glad she counts that as a positive, but the engineer in me wants to know how they get pressures like that at the top. Is there a hidden water tank at the tip of the pyramid? The tip which famously has a giant sun-bright spotlight pointing at the stars? If you’re eating at the Luxor buffet, this is no doubt a hygienic necessity. I don't get the love for buffets. I've never eaten at that one. I only eat at buffets when my friends pressure me into it. If there's one thing that Vegas doesn't lack, it's casinos. If there's another, it's restaurants, including ones where you don't have to do half the work. The article goes into some of the property's history, which is interesting but somewhat irrelevant. Then: Stripped of its novelty, though, the gloomy interior is now bare and brutalist. You say that like it's a bad thing. It is not. With limited food options at the hotel, I ate elsewhere for dinner. I will grant that, compared to some other Vegas properties, the Luxor has fewer dining options. There's a food court for fast food, a breakfast/lunch diner style area, a deli, a couple of Starsuckses, a tequila bar with food, a sushi place (which is incidentally very good), and the aforementioned buffet. This is "limited" in Vegas, true, but when you consider that all you have to do is ride up an escalator to the passageway between Luxor and Mandalay Bay, which is a mall with various shopping options and, yes, many restaurants, this complaint falls short for me. That night, afraid of falling asleep without a security lock, I dragged an armchair in front of my door. At $299.32 for two nights, it felt particularly absurd to be redesigning the room for safety. I don't mean to be rude or anything (okay, I kinda do), but that exhibits a level of paranoia I just can't get behind. 
Like I said, hotel staff can burst into a room at any time if they have to. And anyone who's not staff shouldn't have a key. Hell, those rickety gambling Wonkavators won't even take you to your floor if you don't use your room key (unless, I suppose, the panel's broken and the wiring's exposed, which, as I said, is one legitimate complaint). And, I might add: $300 for two nights? What the Egyptian Underworld? I've never paid more than $50 for a night, and it's usually even less because it's comped (yes, this means I spent more at the blackjack tables, but ignore that). After a fitful night’s sleep, I stumbled down to the lobby Starbucks. Which one? Seriously, the overabundance of Starsucks is my second-biggest problem with Luxor, after the really quite tiny and understaffed high-stakes table games room. Okay, no, third, after the high-stakes room and their deal with Pepsi (I'm a die-hard Coke guy). Now, look, I know tastes are different. You want high-end? Plenty of other options in Vegas. You want real cheap? Those options exist, too, usually without the shows and casinos. Luxor may not be "cool," but it's cheap (this author got price-gouged, sorry) and the beds are comfortable, especially if you stay in one of the towers instead of the pointy thing. Las Vegas properties have a relatively short half-life. Luxor has already passed that point. I fully expect it to go the way of Golden Nugget and other casinos that were the Vegas version of historical-register buildings. Meanwhile, though, I wasn't about to let this absolute hit-piece stand without comment. |
As I've noted before, I try to be skeptical of articles that confirm what I believe. Like this one from The Guardian. Night owls’ cognitive function ‘superior’ to early risers, study suggests ![]() Research on 26,000 people found those who stay up late scored better on intelligence, reasoning and memory tests One wonders if the study was conducted by night owls. The idea that night owls who don’t go to bed until the early hours struggle to get anything done during the day may have to be revised. Eh, getting anything done is overrated. It turns out that staying up late could be good for our brain power as research suggests that people who identify as night owls could be sharper than those who go to bed early. We're also funnier, better looking, and richer. Seriously, though, the first thing I had to ask myself was this: Are we smarter because we stay up later, or do we stay up later because we're smarter? Or is there some factor that contributes to both, like, maybe, a willingness to go against the grain of society and do one's own thing, regardless of the schedule imposed upon us by cultural pressure? Or, and I'm still being serious for once, do larks as a group score lower on these traits because some of them are actually owls who were pressured into their schedule by relentless society? Researchers led by academics at Imperial College London studied data from the UK Biobank study on more than 26,000 people who had completed intelligence, reasoning, reaction time and memory tests. They then examined how participants’ sleep duration, quality, and chronotype (which determines what time of day we feel most alert and productive) affected brain performance. Well, now, they could have said up front that sleep duration and quality were also being considered as factors. I think it's pretty well-established that people who get a good and full night's sleep (whether it takes place technically at "night" or not) tend to do better with things like memory and reaction time. From a purely speculative viewpoint, this brings me back to wondering if some larks aren't getting decent sleep because they should be owls. I can't think of a mechanism by which merely shifting one's sleep hours could help with cognition, unless one's sleep hours already should be other than what they are. In other words, I'd expect to see the reverse result in such a study if it were generally larks being forced into night owl mode, rather than the reality of the other way around. I imagine we could get some data on that if they just studied people like late-shift workers or bartenders, people who need to follow an owl schedule even if their chronotype is more lark. Going to bed late is strongly associated with creative types. Artists, authors and musicians known to be night owls include Henri de Toulouse-Lautrec, James Joyce, Kanye West and Lady Gaga. I also imagine way more musicians are owls just because they, too, can be forced into a stay-up-late schedule for work, whatever their natural chronotype. For writers, it's a different story (pun intended), because creative writers, at least, often set their own schedules. At any rate, I'm glad the article uses "strongly associated with" instead of implying causation in either direction. ...the study found that sleep duration is important for brain function, with those getting between seven and nine hours of shut-eye each night performing best in cognitive tests. Which I was speculating about just a few minutes ago. But some experts urged caution in interpreting the findings. 
Jacqui Hanley, head of research funding at Alzheimer’s Research UK, said: “Without a detailed picture of what is going on in the brain, we don’t know if being a ‘morning’ or ‘evening’ person affects memory and thinking, or if a decline in cognition is causing changes to sleeping patterns.” Fair point, so my skepticism here is warranted for reasons I didn't even think of. Jessica Chelekis, a senior lecturer in sustainability global value chains and sleep expert at Brunel University London, said there were “important limitations” to the study as the research did not account for education attainment, or include the time of day the cognitive tests were conducted in the results. Hang on while I try to interpret "sustainability global value chains," which sounds to me more like a bunch of corporate buzzwords strung together haphazardly. Regardless of the value, or lack thereof, of that word salad, her note about limitations is important to account for. The main value of the study was challenging stereotypes around sleep, she added. And I think that's valid (maybe not "the main" but at least "a" value), because us owls are generally seen as lazy and unproductive. Well, okay, I am lazy and unproductive, but that doesn't mean I'm not an outlier. |
This article's a few years old, and it's from PC Gamer, a source I don't think I've ever quoted before. No, I don't follow them, even though I am a... wait for it... PC gamer. But this one's not about gaming. Wi-Fi is something most of us use every day. It's a miraculous technology that allows us to communicate and share large amounts of digital information to multiple devices without the use of cables. The great big machine that went BING and fixed my heart problem, that was miraculous technology. Wi-Fi? Just technology. But what does it mean? I know I do philosophy in here from time to time, but "what does it mean" is just too big a ques- Oh, you mean, what does "Wi-Fi" mean. Wireless Fidelity? Wrong. Wireless Finder? Nope. Withering Fireballs? Not even close, my friend. From now on, in my house, it's Withering Fireballs. According to MIC ![]() ![]() So here I am, quoting an article that quotes an article that quotes another (20-year-old) article. Sure, I could have just gone to the original source, but where's the fun in that? Then I wouldn't have been able to make jokes about Withering Fireballs. Here's my take: it means what it means. Every word has a meaning, except maybe for "meaningless." Rather, Wi-Fi was a name settled on between a group now known as the Wi-Fi alliance and some brand consultants from Interbrand agency. "Now known as?" One wonders what they were known as before they invented the term Wi-Fi. Let's look it up, shall we? "In 1999, pioneers of a new, higher-speed variant endorsed the IEEE 802.11b specification to form the Wireless Ethernet Compatibility Alliance (WECA)" ![]() WECA, now, that's a meaningless acronym because they're not called that anymore. I know a few people in Wicca, but that's a different thing. Ten names were proposed by the brand agency, and in the end the group settled on Wi-Fi, despite the emptiness the name holds. "Despite?" I'd have guessed "because of." You may not want your brand to connote other meanings. It can lead to confusion. Different story, but that's kind of what happened with .gif. The creator of the Graphics Interchange Format went to his grave insisting that it's pronounced with a soft g, and he was wrong. We're still arguing about it to this day, and .gifs are older than Wi-Fi. "So we compromised and agreed to include the tag line 'The Standard for Wireless Fidelity' along with the name." "This was a mistake and only served to confuse people and dilute the brand." Like I said. A word that many of us say potentially several times a day is actually straight up marketing nonsense. Fun fact: in French, it's pronounced "wee-fee," which I find highly amusing. No relation to "oui." At any rate, every word is made up. Some were made up more recently than others, is all. Some get passed around for a while and then fall out of favor, while others become Official Scrabble Words or whatever. (I wonder if I'd get dinged for using "yeet" on a Scrabble board.) Perhaps sometime in the future, a newer technology will replace what we know today as Wi-Fi. They'll try to give it a different name. We'll just keep calling it Wi-Fi. Maybe we'll even drop the hyphen, which seems to be the pattern for lots of made-up words. And the French will go on pronouncing it differently. |
Today's article, from Nautilus, is even older than most that grab my attention: first published in, apparently, 2013. That's ancient by internet standards. Well, technically, each species is unique in its own way. But it's unsurprising that humans would be most interested in the uniquity of humans. (I just made that word up, and I like it.) If you dropped a dozen human toddlers on a beautiful Polynesian island with shelter and enough to eat, but no computers, no cell phones, and no metal tools, would they grow up to be like humans we recognize or like other primates? That's a lot of restrictions for one experiment. How about we just drop them off on the island? (Ethics bars the toddler test.) Annoying. Neuroscientists, geneticists, and anthropologists have all given the question of human uniqueness a go, seeking special brain regions, unique genes, and human-specific behaviors, and, instead, finding more evidence for common threads across species. And yet, evidently, there is something that makes humans different from nonhumans. Not necessarily better, mind you. But if there weren't a unique combination of traits that separates a human from a chimpanzee, or a mushroom from a slime mold, we wouldn't put them in different conceptual boxes. Meanwhile, the organization of the human brain turns out to be far more complex than many anticipated; almost anything you might have read about brain organization a couple decades ago turns out to be radically oversimplified. And this is why the date of the article matters: in the twelve years since it came out, I'm pretty confident that even more stuff got learned about the human brain. To add to the challenge, brain regions don’t wear name tags (“Hello, I am Broca”), and instead their nature and boundaries must be deduced based on a host of factors such as physical landmarks (such as the hills and valleys of folded cortical tissue), the shapes of their neurons, and the ways in which they respond to different chemical stains. Even with the most advanced technologies, it’s a tough business, sort of like trying to tell whether you are in Baltimore or Philadelphia by looking out the window of a moving train. Yeah, you need to smell the city to know the difference. Even under a microscope human brain tissue looks an awful lot like primate brain tissue. That's because we are primates. When we look at our genomes, the situation is no different. Back in the early 1970s, Mary-Claire King discovered that if you compared human and chimpanzee DNA, they were so similar that they must have been nearly identical to begin with. Now that our genomes have actually been sequenced, we know that King, who worked without the benefit of modern genomic equipment, was essentially right. "Must have been nearly identical to begin with." Congratulations, you just figured out how evolution proceeds. Why, if our lives are so different, is our biology so similar? The first part of the answer is obvious: human beings and chimpanzees diverged from a common ancestor only 4 to 7 million years ago. Every bit of long evolutionary history before then—150 million previous years or so as mammals, a few billion as single-celled organisms—is shared. Which is one reason I rag on evolutionary psychology all the time. Not the only reason, but one of them. Lots of our traits were developed long before we were "us," and even before we diverged from chimps. 
If it seems like scientists trying to find the basis of human uniqueness in the brain are looking for a neural needle in a haystack, it’s because they are. Whatever makes us different is built on the bedrock of a billion years of common ancestry. And yet, we are different. I look at it like this: Scotch is primarily water and ethanol. So is rum, gin, vodka, tequila, other whisk(e)ys, etc. But scotch is unique because of the tiny little molecules left after distillation, plus the other tiny little molecules imbued into it by casking and aging. This doesn't make scotch better or superior to other distilled liquors, but it does make it recognizable as such. (I mean, I think it's superior, but I accept that others have different opinions.) I was unable to find, with a quick internet search, the chemical breakdown of any particular scotch, but, just as I'm different from you, a Bunnahabhain is different from a Glenfiddich, and people like me can tell the difference—even though the percentages of these more complicated chemicals are very, very small. Point is, it doesn't take much. But trying to find this "needle in a haystack" (how come no one ever thinks to bring a powerful electromagnet?) might be missing the point. And yes, that pun was absolutely, positively, incontrovertibly intended. Humans will never abandon the quest to prove that they are special. We've fucking sent robots to explore Mars. I say that's proof enough. But again, "special" doesn't mean "superior." Hell, sometimes it means "slow." |
Here's a relatively short one (for once) from aeon. It's a few years old, but given the subject, that hardly matters. "Believing without evidence is always morally wrong." And right off the bat, we have a problem. Proclaiming that something is "always" (or "never") something just begs someone to find the one counterexample that destroys the argument. In this case, that someone is me. You have probably never heard of William Kingdon Clifford. He is not in the pantheon of great philosophers – perhaps because his life was cut short at the age of 33 – but I cannot think of anyone whose ideas are more relevant for our interconnected, AI-driven, digital age. 33? That's barely old enough to have grown a beard, which is a prerequisite for male philosophers. Or at least a mustache. However, reality has caught up with Clifford. His once seemingly exaggerated claim that ‘it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence’ is no longer hyperbole but a technical reality. I'll note that this quote is not the same thing as what the headline stated. I guess it's pretty close, but there's a world of difference between "without evidence" and "upon insufficient evidence." There is, for example, no evidence for a flat Earth beyond the direct evidence of one's senses (assuming one is in Kansas or some other famously non-hilly location), and overwhelming evidence that the Earth is basically round. Okay, not a great example, because flat-Earth believers can be shown to be wrong. But morally wrong? I'm not so sure. I hold the belief, for a better example, that murder is wrong. There's no objective evidence for this, and moreover, we can argue about what constitutes "murder" as opposed to other kinds of killing, such as assisted suicide or self-defense. And yet, it seems to me that believing that murder is wrong is, on balance, a good thing for people's continued survival, and thus morally right. His first argument starts with the simple observation that our beliefs influence our actions. Okay, that seems self-evident enough. The article provides examples, both practical and ethical. The second argument Clifford provides to back his claim that it is always wrong to believe on insufficient evidence is that poor practices of belief-formation turn us into careless, credulous believers. Clifford puts it nicely: ‘No real belief, however trifling and fragmentary it may seem, is ever truly insignificant; it prepares us to receive more of its like, confirms those which resembled it before, and weakens others; and so gradually it lays a stealthy train in our inmost thoughts, which may someday explode into overt action, and leave its stamp upon our character.’ I've heard variations on this argument before, and it does seem to me to have merit. Once you believe one conspiracy theory, you're primed to believe more. If you accept the concept of alien visitations, you can maybe more easily accept mind-control or vampires. That sort of thing. Clifford’s third and final argument as to why believing without evidence is morally wrong is that, in our capacity as communicators of belief, we have the moral responsibility not to pollute the well of collective knowledge. And that's fair enough, too. So why do I object to the absolutist stance that it's always wrong to believe on insufficient evidence? Well, like I said up there, I can come up with things that have to be believed on scant-to-no evidence and yet are widely considered "moral." The wrongness of murder is one of those things.
That we shouldn't be doing human trials for the pursuit of science without informed consent and other guardrails. That slavery is a bad thing. And more. I'm not even sure we can justify most morality on the basis of evidence (religious texts are not evidence for some objective morality; they're just evidence that someone wrote them at some point), so the claim that belief on the basis of insufficient evidence is morally wrong (whether always or sometimes) itself has little evidence to support it. You have to start by defining what's morally right and wrong, or you just talk yourself in circles. While Clifford’s final argument rings true, it again seems exaggerated to claim that every little false belief we harbour is a moral affront to common knowledge. Yet reality, once more, is aligning with Clifford, and his words seem prophetic. Today, we truly have a global reservoir of belief into which all of our commitments are being painstakingly added: it’s called Big Data. Again, though, that's a matter of scale. People have held others to certain standards since prehistory; in the past, this was a small-community thing instead of a global surveillance network. None of this is meant to imply that we should accept the spread of falsehoods. The problem is that one person's falsehood can be another's basic truth. That makes it even more difficult to separate the truth from the lies, or even to accept the reality of certain facts. Yes, having evidence to support one's beliefs is a good thing overall. But we're going to end up arguing over what constitutes evidence. |