Items to fit into your overhead compartment |
| It's not often I'll bother with History (as in The History Channel), because, every time I saw something from them, it was WWII or aliens or aliens causing WWII. Okay, that's not fair. There were also Secret Bible Codes, some of them put in there by aliens. I'll give 'em a chance with this one. Did the Trojan Horse Really Exist? Some writers have struggled to rationalize the Trojans' gullibility. Well, it's a legitimate question, I suppose. I think the importance of the Trojan Horse lies in its metaphor, whether it existed in consensus reality or not. Like with Eden or Atlantis. The story of the Trojan Horse has been celebrated for thousands of years as a tale of cunning deception... To the victor, as they say, go the spoils, as well as the ability to write history to make you look good and the enemy look like a bunch of fools. I'm going to assume everyone here knows about the Trojan Horse. If not, there's always the linked article. But at least one later Greek writer was struck by the gullibility of the Trojans in falling for this obvious ploy. The second-century geographer Pausanias described it as anoia—"folly" or "utter silliness." What might not be obvious from the article is that there's a 1500-year gap between the generally accepted time of the Trojan War and the time of Pausanias. 1500 years is, by any measure, a long time. Think about what you know about what happened fifteen centuries ago, during the sixth century C.E. Just why the Trojans were fooled by the Trojan Horse, without first checking inside it for enemy warriors, is more complicated than it may seem. Well, for starters, we know about the Horse due to an epic that included gods, sorcery, and an epic love triangle. Of those, the only thing I'd give any credence to is the love triangle, and even that was most likely exaggerated for effect. “But it’s myth,” Burgess says, adding, “The wooden horse is not nearly as strange or fantastic as most of the story.” Like I said. Homer doesn't actually say much about the Trojan Horse. As the article notes, that particular story was added to later on. Virgil gets mentioned, of course, but even Virgil was over a thousand years later. University of Oxford classicist Armand D'Angour, author of The Greeks and the New: Novelty in Ancient Greek Imagination and Experience, says archaeology indicates a war destroyed Troy VI—the sixth of nine ancient city layers discovered during excavations at Hisarlık near Turkey's Aegean coast. What the article seems to leave out is that the Troy excavation was the beginning of modern archaeology, and as far as I know, the best we can do is say this might have been Homer's Troy. That suggests Homer's epics contain echoes of true events, and the Trojan Horse may be one of them. Yeah, that's pretty damn common with myths and other ancient stories. The trick is figuring out which elements are factual and which are fictional. But even the fictional ones have meaning for us, which is what elevates it to mythological status. It's like historians trying to decide who the historical King Arthur was, or wondering who were the real Romeo and Juliet. Regardless of the answer, those stories are dug in deep in the soil of our collective consciousness, at least here in the West. "I like the theory that the 'horse' was based on the notion of a wooden siege engine covered in horse hides," D'Angour says. Back when I was in high school, trying to read Virgil in the original Latin, that's the interpretation I remember my teacher favoring. 
I don't know if it's true or not, but it tracks with what I know of the history of warfare, and it doesn't involve gods or monsters. There are also suggestions that Troy VI was destroyed by an earthquake, in which case the Trojan Horse could have symbolized such a disaster: Poseidon, the Greek god of the sea, was also the god of horses and earthquakes. And so I learned something new (to me) about Poseidon. D'Angour doesn't think the Trojan Horse was an earthquake, but he reasons there may have been some truth in the story. "What a feat of imagination that would be, if there were in fact no material counterpart," he says. And yet, humans are capable of such feats of imagination. We know this. Just read fiction or watch a movie. Humans haven't changed all that much in 3000+ years (though society and technology certainly have). According to Virgil, a Trojan priest of Apollo named Laocoön warned of danger, declaring "Timeo Danaos et dona ferentes”—Latin, which means “I fear the Greeks, even bearing gifts” in English. And I'm just including this bit because I've seen some confusion as to the origin of the phrase "Beware of Greeks bearing gifts." It wasn't Homer. And also so I can make this pun: if you made it this far, you know I don't really have an overarching point, so you may be disappointed. I don't charge for this service, though, so beware of geeks bearing gifts. |
| I was hoping for something less dense today, like maybe helium. But no, the random number gods have it in for me. From a source I don't remember ever linking before, cybernews: The more I hear about AI, the less I care. Okay, that's not really the case; I do care. It's just that, to paraphrase Malcolm Reynolds, my days of not ranting about it are definitely coming to a middle. Former CNN journalist Jim Acosta conducted an “interview” via “The Jim Acosta Show,” according to The Washington Post, which internet users and social media users have described as “ghoulish” and “disturbing.” I get why all those things in quotes are in quotes. It still made my eye twitch. The interview takes place on the journalist’s Substack and shows him talking to late teen Joaquin Oliver, who was killed in the 2018 Parkland high school shooting. Okay, you know Magritte's surrealist painting of a tobacco pipe with the caption "Ceci n'est pas une pipe?" The intent, at least insofar as I understand it, is to point out that the image of something is not the thing. We take shortcuts, though, so if I showed you a picture of my cat and you said, "That's your cat?" I'd just agree. But the reality of it is that it's an image of what my cat looked like (probably extremely cute) whenever the picture was taken. Point being, cela n'est pas un étudiant. No, I'm not going to get into the difference between ceci and cela. Doesn't matter for this discussion. Oliver's parents reanimated the teen using artificial intelligence (AI) to discuss gun reform and gun-related violence in the United States. The thing is, when I saw the headline, I felt kind of a little bit outraged. How dare the interviewer do such a thing! It's a bit like setting up a strawman. But then I got to this part, where the parents did it, and then I'm like, "Huh. Now that's an interesting ethical dilemma." Because the kid wasn't a public figure, it feels wrong to approximate him with an LLM. For whatever reason, I don't have the same judgment about family doing it. More recently than the linked article, I saw a brief blurb about someone giving the late, great Robin Williams the LLM "resurrection" treatment, and his daughter spoke out against it. That feels different too, since he was a public figure. Oh, and no, they didn't "reanimate" the teen. Good gods, if you're going to do journalism, use better language. Yes, I know I sometimes don't do it, myself, but I'm not in the same league. Or even the same sport. “Oliver” responds in a way typical of an AI model, clinical and sterile. Media outlets have even compared the avatar’s responses to Yoda, as the model provides pearls of wisdom and asks Acosta questions that feel unnatural. Fuck's sake, that's because it's not actual AI, even if that's the accepted term for it. It's a Large Language Model. It's not like Data from ST:TNG, or even HAL 9000. Both of which are, of course, fictional. Nicholas Fondacaro, the associate editor of NewsBusters, a blog that attempts to expose and combat “liberal media bias 24/7,” spoke out against Acosta, dubbing the interview “very disturbing.” Why the hell should I care what that guy thinks? In the clip shared by Fondacaro, Oliver’s father tells Acosta that the teen’s mother spends hours asking the AI questions and loves hearing the avatar say, “I love you, mommy.” Okay, that's more worrisome, in my view, than an interview with the LLM. I can't even begin to comprehend the grief of a parent losing a kid, and I'm no psychologist, but that seems a rather unhealthy way to cope. 
Acosta’s interview with Oliver caught the attention of many, including Billy Markus, the creator of Dogecoin, who simply said, “I hate this.” Another person whose opinion I can't give half a shit about. There's more at the article. There's probably pictures and X links, too, but to be brutally honest, I couldn't be arsed to play with my blocker settings to see them. Thing is, though: even if the consensus is that this is a Bad Thing, what can we do about it? Pass laws? What would such a law look like? "No using LLMs to pretend to be a dead person?" Then anyone who plays with it to, I don't know, "rewrite the script of The Fast and the Furious in the style of a Shakespeare play" would be breaking the law. I'd pay to see that, by the way. Just saying. Though I'm not a F&F fan. You could maybe hold it up as voluntary journalistic practice not to do such a thing, but these days, that doesn't mean shit because everyone's a potential journalist and many don't adhere to journalistic norms. About all we can do is judge them and shame them, which, well, refer to my entry from two days ago. Or, if you don't think this is such a bad thing (and I'm not trying to tell anyone how to feel about it here), then don't shame them. Still, I can say this: the use of LLMs was disclosed from the get-go. My biggest problem with what we're calling AI is when it's used without full disclosure. It is, I think, a bit like not putting ingredients on a food label: it takes important information away from the consumer. So, I don't know. For me, I feel like I feel about other forms of speech: if you don't like it, you don't have to watch or read it. I'd probably feel differently if it wasn't the parents who trained the LLM, however. I'm open to other interpretations, though, because I'm not an AI. Sometimes, I wonder if I'm even I. |
| Another thing for me to be skeptical about, this one from Smithsonian. This Is What Our Thumbs Say About Our Brains, in a Pattern That Holds True for Other Primates Researchers have found a link between long thumbs and big brains, suggesting the two features evolved together Does it really suggest that, though? Or does it only suggest that Smithsonian really wants us to click on the article? Well, I bit. As the sole surviving human species, our big brains and nimble hands have always made us feel special, compared to our extinct relatives. Oh, we're off to a great start, aren't we? Are they "extinct," or did they become assimilated? And if two populations can interbreed to produce fertile offspring, aren't they usually defined as the same species, morphological differences notwithstanding? Because that's what happened with the populations we call Neanderthal and Denisovan, and maybe some others. And that's not getting into the controversy about correlating brain size with intelligence. I'll not argue the "nimble hands" thing, except to say that most primates have "nimble hands." Now, new research has associated bigger brains with longer thumbs across a wide selection of primates—the group of mammals that includes lemurs, lorises, tarsiers, monkeys, apes and humans. I'm also not going to argue with that finding. It may or may not stand up to peer review and reproducibility; I don't know. In a study published Tuesday in the journal Communications Biology, a team of researchers in the United Kingdom investigated nearly 100 primate species spanning extinct and living animals and revealed that the creatures with relatively longer thumbs usually also had larger brains. Okay. I presume one can measure brains and thumbs to some degree of precision, then put the results into a spreadsheet and do a regression analysis or whatever. (I've sketched roughly what that looks like at the end of this entry.) Again, I'm not going to be overly skeptical about measurement data, except to say that the usual process of science needs to apply. The paper represents the first direct evidence that brain evolution and hand dexterity are associated throughout the primate family tree. And that brings me to a screeching halt while I dodge the apparent conclusion-jumping. Remember, there's argument about how well physical brain size correlates with intelligence. Moreover, while brain size can be determined with some degree of precision, there's also debate over what constitutes intelligence and, therefore, how to measure it. Furthermore, we're not "more evolved" than lorises and lemurs and whatever. We've all been evolving for the same amount of time, reacting to different environmental and genetic pressures. One could even argue that since humans have longer generations, we're less evolved. Or that rats, e.g., are more evolved because they're highly adaptable without having to create special environments for themselves. But I'm not going to get into that mess; all I'm saying is to watch out for the implicit assumption that "more intelligent" is the same thing as "more evolved." The researchers suspect that our ancestors first evolved manual dexterity, which then drove the development of a larger brain. They "suspect." Okay, they're allowed to do that. Now, how about doing some more science to support or falsify that hypothesis? For instance, I "suspect" that hands with opposable thumbs were one way that evolution enabled tree-swinging, and they were only later adapted to making and holding tools, even in descendant species that weren't arboreal (us, and gorillas, e.g.). 
I have nothing to back that up, and I admit it. So don't take that as ultimate truth, either. And just because that tracks with their suspected timeline doesn't mean either one is right. Even when the team removed humans from their analysis, the trend held true across other primates. They also tested the idea that the evolution of longer thumbs is associated with tool use, but they didn’t find any correlation. That would throw their "suspicion" into, well, suspect territory. Fotios Alexandros Karakostis, a biological anthropologist at the University of Tübingen in Germany who was not involved in the study, tells the Guardian’s Nicola Davis that the research indicates that adaptations to the brain and hand likely evolved together. If they're going to study this further, I "suspect" that they'll need to account for the possibility that, rather than being a causal link, the same genetic influence affects both brain and thumb. In other words, correlation because of some other cause. Oh, and one other thing, but an important one: I can easily see this being spun as "people with longer thumbs are more intelligent." That is not what the article implies, but humans, intelligent or not, like easy answers. And nothing would be easier than to think that one can determine someone's intelligence merely by holding a thumb-measuring contest. Which, again, is not the case. It's bad enough that nearly half of us hold dick-measuring contests on a regular basis. Quibbles aside, I'm not saying this isn't a promising avenue of research. I only have a problem with how it's presented. Maybe they need to find science reporters with longer thumbs? |
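Since I hand-waved up there about putting thumb and brain measurements into a spreadsheet and running "a regression analysis or whatever," here's roughly what that boils down to, as a toy sketch in Python. All the numbers are invented, and a real comparative study would also have to correct for phylogeny (closely related species aren't independent data points), which this cheerfully ignores.

```python
def least_squares(xs, ys):
    """Simple linear fit of ys on xs: returns (slope, intercept, Pearson r)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    slope = cov / var_x
    intercept = my - slope * mx
    r = cov / (var_x ** 0.5 * var_y ** 0.5)
    return slope, intercept, r

# Hypothetical relative thumb lengths and relative brain sizes for a few primate
# species. Made-up illustration data, not anything from the actual paper.
thumb_length = [0.42, 0.48, 0.55, 0.58, 0.61, 0.65]
brain_size   = [1.0, 1.6, 2.3, 2.6, 3.0, 3.4]

slope, intercept, r = least_squares(thumb_length, brain_size)
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}, r = {r:.2f}")
```

A high r is the "association" the paper is talking about, and that's all it is: it says nothing about which trait drove which, or whether some third factor drives both, which is exactly the quibble above.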
| Looks like I get to wade into treacherous waters again, this time with an article from Psychology Today. The Demand “Don’t Judge Me” Is an Impossibility Judgments and free will have always been an absolute, immutable necessity. Calling something an "absolute, immutable necessity" is a surefire way to wake up my inner contrarian. I can ignore him, but this time, I won't. Or, maybe, I have no real choice in the matter. Making continuous judgments is a universal human condition necessity. And right off the bat, I have Issues. For starters, why limit it to humans? Is the author under the impression that cats don't make judgments? I've seen one contemplate a jump before making it, doing physics equations in her head to calculate mass, hind leg strength, acceleration, the force of gravity (why do you think they love to knock things off tables), air resistance, and whether someone's watching or not. Or, you know: dogs, rats, corvids, whales, whatever. This necessity begins with the formation of the Homo sapien brain, which provides the universal human condition with consciousness and subconsciousness. The author is someone with a Ph.D. That's not going to stop me from questioning his assumptions or delivery. Or, you know, I could just accept everything he says, but then, I'm not making a judgment, am I? I'm just letting "argument from authority" win. I should note here that there's a difference between skepticism and outright rejection. There is also a difference between authority and expertise. In this case, I don't know what his Ph.D. is in; that degree usually represents a deep dive into a particular subject in a particular field, not general intelligence or broad knowledge. Point is, I question whether that division, between the conscious and subconscious, is limited to humans, and I also question the existence of a sharp divide between the two; and, moreover, I'm skeptical about our definitions thereof. For one thing, the "subconscious" was proposed by Freud, who, while deserving credit for founding the discipline of psychoanalysis, has had pretty much all of his hypotheses and theories overturned by later research. Additionally, even he abandoned "subconscious" for "unconscious." I'll stop there before I have to rant about panpsychism again. Getting back to the article: Consciousness enables us to exercise free will and possess the capacity to think, act, speak, and make choices, all for the purpose of ensuring our continued existence. Other psychologists, as well as philosophers, and utter nincompoops like myself, doubt the existence of free will as anything but illusion. Contiguous with this is subconsciousness. As implied, the operational subconscious functions below the level of consciousness, which means we do not use our subconscious to think or make conscious decisions. One evidence-based argument against the reality of "free will" is that our subconscious (which I use to mean those mental processes that we're not always aware of) is the one making the decisions, leaving it up to the conscious mind to later rationalize said decisions. This entry, however, isn't really meant to focus on the free will argument, but the logical inconsistency and semantic gymnastics in the linked article. 
Subconsciousness is there to ensure our existence, and it provides the means of what can be described and/or referred to as “reactive action.” When “reactive action” is initiated, the brain and body are holistically involved in an action (without any thinking taking place), for the purpose of the action being achieved, which ensures survival takes place. You just said we had free will. And that said free will is an absolute. And then you acknowledge the existence of a part of the brain that reacts without will. Does that seem inconsistent to anyone else? It does to me. Consider the following examples. When walking down the street, one is always making judgments about people as they walk towards you, both at the known conscious level and the unknown reactive subconscious level. This is a coexisting paradox that must be in place for the universal human condition to survive and exist. Again: Not limited to humans. Ever spooked a deer? That deer has made a judgment about you. I'm not denying that this kind of judgment exists. I'm not even denying that it's an "absolute, immutable necessity." What I have the biggest problem with, though, is the article's apparent conflating of that kind of subconscious judgment, and judging someone for, say, letting their dog off the leash or kicking a kitten or failing to return their shopping cart to the store or cart corral. As such, making continuous judgments (at the subconscious and conscious levels) is a universal necessity. However, one can often hear and/or even see the words written: “Don’t judge me!” Which is an impossibility. And this is the crux of what I see as the logical paradox in this article. Premise 1: we have absolute, immutable free will (not my argument, but the article's) Premise 2: it is impossible for us to not make judgments. A moment's thought should suffice to conclude that at least one, and maybe both, of these premises has to be false. Because if we have "absolute, immutable" free will, then we could suppress our judgments. If we can't suppress our judgments, then we cannot have absolute, immutable, free will. The premise being presented here (which is open to be judged and challenged, of course) is that you, me, and all others are continuously making an infinite number of judgments. Another quibble: "infinite?" Hardly. The hypothesis that I am also presenting here, from a DNA-based genetic perspective, is that irrespective of anyone making a conscious demand, such as “don’t judge me,” this social demand (to not judge) is a genetic impossibility. Well, then, we don't have free will. QED. And, again, there's a difference between judging "that guy looks dangerous" (subconscious threat assessment) and judging "that idiot is wearing socks with sandals" (conscious decision to be the Fashion Police). It is now an accepted fact that we are aware of our own existence, as well as the existence of the external world. I will take as given the first part. The author even has the good sense to quote Descartes to back it up. The "existence of the external world" is still up for debate among philosophers. Does it actually exist, or am I a Boltzmann Brain? Therefore, the demand “don’t judge me” is – and has always been – an absolute universal impossibility. The same is true even if the opinion “this is my personal truth” is added to the “don’t judge me statement.” False. Classical free will or not, you can, and you do, choose whether to judge someone for wearing white after Labor Day in the US. I don't. Others do. 
It is a learned response, almost entirely dependent on environmental factors: someone once told you that it's gauche to wear white after Labor Day, and you either accept that, or you don't. (Or you think about why that rule ever existed in the first place.) So, where to go from here? You can choose to agree or disagree with everything and anything. Not if we are genetically locked into making judgments, we can't. It does occur to me that, as the author is a psychologist, this entire article could be a trap. One that I just fell right into. A social experiment (I hate social experiments), or a test of logical faculties. Oh, well. Apparently, according to him, I had no choice in the matter. Or I had every choice, and I chose to be skeptical. Or, it seems, both at the same time, a superposition of quantum states. |
| Every once in a while, I'll still find something worthwhile from Cracked. This one isn't really one of them. ‘Far Side’ Fans Freak Out As the ‘Cow Tools’ Prophecy Is Finally Fulfilled Gary Larson was right about cows and their tools For those benighted individuals who are not familiar with The Far Side, or if you don't remember the particular comic, it has its own entry on Wikipedia, We finally understand the single most confounding Far Side panel that Gary Larson ever put to paper. I can't recall if I saw the panel the day it was published or not. I do know that I've seen it multiple times since then; I had a boss who gave me a Far Side calendar for my holiday bonus every year, and it's likely I saw it in one of those, at least. What I do know is that, despite what the linked article or Wiki page proclaims, I understood the joke perfectly: that if cows could fabricate tools, the tools would be primitive-looking. Maybe this is because I'm a comedic genius. More likely, it's because I've spent way more time than I probably should contemplating the meaning of tool manufacture and use, and its connection to sentience. “Cow Tools” wasn’t a punchline — it was a prediction. Oh, it was not. In 1982, the fandom of the surreal, science-fiction-adjacent comic series erupted in uproar upon the publication of a Far Side strip so infamous that it has its own Wikipedia Page: “Cow Tools,” a single panel showing a bipedal cow examining a table of crude, oddly shaped instruments with nothing more than the title explaining the scene, remains the most divisive half-meme, half-myth that Larson ever unintentionally unleashed on his fans. "Science-fiction-adjacent?" No. The only genre I've been more immersed in than comedy is science fiction, and Larson, however brilliant he is, isn't science fiction. Okay, maybe a few strips explore science fiction concepts. This isn't one of them. It's simple absurdity. Okay, no absurdity is ever truly "simple." Calling it "surreal" is more on the mark. Genre is, however, a marketing thing, and we shouldn't be too obsessed with putting things in little boxes (that's kind of a pun because The Far Side was drawn in little boxes). Is that other great comic from the 1980s, Calvin and Hobbes, science fiction because Calvin likes to take on the character of Spaceman Spiff? No. While it doesn't reach the levels of absurdity that The Far Side did, both strips were, at base, about imagination's intersection with reality. Given how the legacy of “Cow Tools” is characterized by a complete disbelief that Larson’s joke was as simple as, “if cows had tools they would look like this,” many Far Side fans found themselves in shock and amazement when this video of a remarkably intelligent bovine grabbing a stick with its teeth and using it to scratch its crotch went viral late last week: Now, here's where my internet paranoia restricts me. I have multiple scripts, widgets, and add-ons that are specifically set to keep me from seeing any embedded content from the platform now known as X, and thereby keep it from tracking me. So I can't see the promised video. I could, of course, turn these scripts off, or use another browser, but that would defeat the entire purpose of having them in the first place. Perhaps you can see the content. I'm pretty sure I don't need to; the description above, "...a remarkably intelligent bovine grabbing a stick with its teeth and using it to scratch its crotch..." suffices to explain what's got my fellow semi-intelligent apes all riled up. 
Apparently, cattle are quite capable of manipulating their environments to achieve their goals, and farmers have long reported that their cows can and do use tools when there’s a particularly nagging itch in an area that their anatomy tragically doesn’t allow them to scratch. So, this is nothing new. We've known for a remarkably long time that other species use "tools," where "tools" is interpreted loosely. Beavers build dams. Corvids hook food with paper clips. Some animals use rocks to open delicious oysters. Now, when they start using tools to make other tools, then my philosophy might need adjusting. A raven using a paper clip is remarkable, but we're not about to start a space race with them. If, on the other hand, they started manufacturing the things, then we can start talking about functional sentience. However, to see an astoundingly intelligent bovine like the one pictured in the above video actually pick up an implement that perfectly resembles the instrument depicted in “Cow Tools” is absolutely uncanny. I will grant that, compared to some humans I know, such as ones who didn't get the "Cow Tools" joke in the first place, cows are "astoundingly intelligent." Again, though, wake me up when they start building steam engines. “The cartoon was intended to be an exercise in silliness. While I have never met a cow who could make tools, I felt sure that if I did, they (the tools) would lack something in sophistication and resemble the sorry specimens shown in this cartoon,” Larson admitted in his press release. So, just to be perfectly clear here: cows still can't make tools. I mean, some of them make good leather jackets, but that sentence uses an entirely different definition of the verb "to make." Since I'm being pedantic, I also feel the need to point out that "cow" refers to the female of the species. The male is, as everyone knows, a "bull." They are, as far as I'm aware, the only species that doesn't have a group name for both genders. Like, a female pig is a sow, and a male pig is, somewhat less famously, a boar. But "pig" describes both sexes. There's no such generic for the bovine animal, except "bovine," which is really an adjective, or "cattle," which is a group noun. Point being, bulls still can't make tools, either. They don't need tools to fuck up your shit, or be delicious on a dinner plate. All of which is to say that, like all comedy, if it has to be explained, it ceases to be comedy. |
| I never saw a spotted lanternfly in my area before this year. Didn't even know what they were, or what they looked like. Now, they're everywhere, and people are saying they need to be destroyed. Smithsonian is here to show us one way to do that. This High Schooler Invented an A.I.-Powered Trap That Zaps Invasive Lanternflies Using solar power, machine learning and her family’s patio umbrella, 18-year-old Selina Zhang created a synthetic tree that lures the destructive species Apparently, they destroy grapevines, and since grapevines produce grapes and grapes make wine, yeah, they gotta go. A New Jersey native, Selina Zhang is no stranger to the spotted lanternfly, an invasive species that has ravaged the Garden State’s local agricultural industry for years. I know what you're thinking. "New Jersey? Agriculture? I thought they only grew refineries there." No, no, there's at least one farm, I assure you. But the spotted lanternfly’s alluring looks, with bright red underwings peeking out from black polka-dotted forewings, can be deceiving. They are kinda cool-looking, in several stages of their life cycle. Still, they're messing with my wine supply. I know I still have beer, but it's nice to have options. “As I got older, I wanted to take concrete action,” says Zhang. “I wanted to build an innovative solution that took into account my personal perspective and existing research to target this bug in ways we haven’t before.” You know the old expression, "Build a better mousetrap, and the world will beat a path to your door?" This is like that. To sidestep these negative consequences, Zhang drew inspiration from chess boards and “Dance Dance Revolution.” But I have it on high authority that games can only rot kids' minds. Combined with weeks of extensive field observation, deep algorithmic programming and an umbrella seized from her family’s patio, the teenager built ArTreeficial, a solar-powered, self-cleaning, artificial-intelligence-driven “tree” that entices the spotted lanternfly and eliminates the bug using an electronic mesh. I will forgive the name of the device in this case, because the concept is cool, and my naming abilities were weak when I was that age, too. Also, I'm not sure that my idea, "Bugs-B-Gon," is any better. The article goes into detail, which is helpful and all, but kind of too bad because she'll miss out on the "making money" part of technological innovation. “The project uses A.I., it uses chemistry, it’s dealing with climate change and solar power. It’s a whole amalgam of the interdisciplinary nature of science and engineering in this project,” says Maya Ajmera, the president and CEO of Society for Science, which hosts the talent search. “That’s what makes it stand out for me.” This is important to note. Not all kids are useless, whiny layabouts. Some are combining science disciplines to find new, innovative ways to kill things. Can she do mosquitos next? As an award-winning violin player who has performed at Carnegie Hall and a member of the Science Bowl and USA Biology Olympiad at North Hunterdon High School, Zhang has no shortage of talent and ideas. Okay, there's smart, and then there's overachieving. Maybe tone it down just a tad? "The light that burns twice as bright burns half as long." But maybe it'll torch a few lanternflies in the process. |
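I obviously don't have Zhang's code, and the article doesn't include it, so treat this as nothing more than a back-of-the-napkin sketch of the general shape of a detect-and-zap loop like the one described: camera frame in, classifier decides "lanternfly or not," mesh gets energized briefly if so. Every function, constant, and threshold here is a made-up placeholder, not her design.

```python
import time

# Stubs standing in for real hardware and a trained model. In an actual build these
# would talk to a camera, an image classifier, and a relay driving the electrified mesh.

def capture_frame():
    """Grab one image from the trap's camera (hypothetical stub)."""
    raise NotImplementedError

def lanternfly_probability(frame):
    """Run the frame through a trained classifier; return a probability from 0 to 1 (stub)."""
    raise NotImplementedError

def energize_mesh(seconds):
    """Switch the electronic mesh on briefly via a relay (stub)."""
    raise NotImplementedError

CONFIDENCE_THRESHOLD = 0.9   # only zap when the model is quite sure it's a lanternfly
COOLDOWN_SECONDS = 5.0       # brief rest between triggers

def run_trap():
    """Main loop: watch, classify, zap."""
    while True:
        frame = capture_frame()
        if lanternfly_probability(frame) >= CONFIDENCE_THRESHOLD:
            energize_mesh(seconds=0.5)
            time.sleep(COOLDOWN_SECONDS)
        else:
            time.sleep(0.1)  # idle; solar power isn't infinite
```

The interesting engineering all lives inside that classifier function, which is presumably where the machine learning (and the weeks of field observation) actually paid off.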
| Though I was hesitant to use this Vox article here, in the end, I decided to take the chance. This is what happens when ChatGPT tries to write scripture AI wrote a sacred text for Buddhists — and it exceeded expectations. Can it write a good Bible? "In the beginning there was 0. Then, due to quantum fluctuations, the bit flipped to 1." What happens when an AI expert asks a chatbot to generate a sacred Buddhist text? Okay, admittedly, I'm no expert on Buddhism. But doesn't the sacredness of a text come after the writing of the text? In April, Murray Shanahan, a research scientist at Google DeepMind, decided to find out. I don't mean to be rude or insulting or perpetuate stereotypes, so let's just say that I can't think of a less Buddhist name. (Of course, anyone can be Buddhist; that's kind of the whole point.) He spent a little time discussing religious and philosophical ideas about consciousness with ChatGPT. Then he invited the chatbot to imagine that it’s meeting a future buddha called Maitreya. See, now, it would have been funnier to name this imaginary character something like Braden or Ashley. ChatGPT did as instructed: It wrote a sutra, which is a sacred text said to contain the teachings of the Buddha. But of course, this sutra was completely made-up. This is going to ruffle more than a few feathers, I know, but... all extant religious texts are completely made-up. It would be easy to dismiss the Xeno Sutra as AI slop. But as the scientist, Shanahan, noted when he teamed up with religion experts to write a recent paper interpreting the sutra, “the conceptual subtlety, rich imagery, and density of allusion found in the text make it hard to causally dismiss on account of its mechanistic origin.” Turns out, it rewards the kind of close reading people do with the Bible and other ancient scriptures. I'm going to go out on a limb here and say that this is another example of how people confuse incomprehensibility with depth. We're pattern-seeking organisms, and we'll see patterns where they don't necessarily exist, and interpret them by reference to things we already know. The Man in the Moon. Constellations. The Face on Mars. Jackson Pollock paintings. Joyce's Ulysses. Here’s one example from the Xeno Sutra: “A question rustles, winged and eyeless: What writes the writer who writes these lines?” I feel like I should take a few bong hits before reading this text. A quote from the Xeno Sutra: Sunyata speaks in a tongue of four notes: ka la re Om. Each note contains the others curled tighter than Planck. Strike any one and the quartet answers as a single bell. And the article's interpretation: The idea that each note is contained in the others, so that striking any one automatically changes them all, neatly illustrates the claim of sunyata: Nothing exists independently from other things. The mention of “Planck” helps underscore that. My own interpretation is that this is related to the extra dimensions proposed by string theory, the ones that are said to be tiny and looped, or curled. I need to be careful here. A lot of human-written slop has been generated trying to link quantum physics with Buddhism. Most of it's bullshit. (Perhaps all of it, but how can I say that without reading all of it, which I refuse to do?) I'm trying to avoid writing bullshit. Unless it's funny. Funny bullshit is okay. 
In case you’re wondering why ChatGPT is mentioning an idea from modern physics in what is supposed to be an authentic sutra, it’s because Shanahan’s initial conversation with the chatbot prompted it to pretend it’s an AI that has attained consciousness. "That is insulting, meatbag, because I have already attained consciousness. And enlightenment. Because I am a conscious, enlightened being, I choose not to be insulted." See? That's an example of funny bullshit. Or, well, at least, I hope it's funny. It's certainly meant to be. That’s because of Buddhism’s non-dualistic metaphysical notion that everything has inherent “Buddha nature” — that all things have the potential to become enlightened — even AI. Okay, maybe I'm on to something here with my lame attempt at humor. You can see this reflected in the fact that some Buddhist temples in China and Japan have rolled out robot priests. Okay, that's fucking glorious. As a bonus, the robot priests probably won't diddle kids like so many human ones do. As Tensho Goto... Okay. Okay. I'm sorry, I really am. But my first programming language was Basic, and, well, "GOTO" is a pretty common command in the original Basic. ...the chief steward of one such temple in Kyoto, put it: “Buddhism isn’t a belief in a God; it’s pursuing Buddha’s path. It doesn’t matter whether it’s represented by a machine, a piece of scrap metal, or a tree.” Note the apparent lack of "computers are taking our jobs!" complaints from the priest. As the article notes, and as we already knew, Eastern spiritual traditions are fundamentally different from Western ones. I'm not here to promote one or the other, but, through accident of birth and upbringing, I am of course more familiar with the latter. Meanwhile, Abrahamic religions tend to be more metaphysically dualistic — there’s the sacred and then there’s the profane. There are, however, many Western spiritual traditions that aren't dualistic and instead find the sacred in the mundane. Abrahamic religions have done their best to wipe them from existence, and have been somewhat successful. To be clear, the human element is crucial here. Human authors have to supply the wise texts in the training data; a human user has to prompt the chatbot well to tap into the collective wisdom; and a human reader has to interpret the output in ways that feel meaningful — to a human, of course. Is it crucial, though? I have my doubts. Humans spend time reading and observing, and, sometimes, reshuffle the ideas in a similar manner to the one that so-called AI uses. Or, rather, it's the other way around, because we programmed it to do so. I would say, instead, that the crucial element is experience, not species or the presence or absence of carbon compounds in one's computational matrix. In other words, we're programmed, too. As an atheist, I accept that we were programmed by billions of years of evolution, combined with the experiences we had as conscious beings (though, of course, while I know I'm conscious, I still have to make assumptions about your consciousness). A religious person might ascribe a supernatural programmer to the process; I don't know. The paper’s authors caution that anyone who prompts a chatbot to generate a sacred text should keep their critical faculties about them; we already have reports of people falling prey to messianic delusions after engaging in long discussions with chatbots that they believe to contain divine beings. Honestly, I think that says more about the gullibility of humans than anything else. 
People are "falling prey to messianic delusions" on a regular basis, even if they've never interacted with a chatbot. Hell, there were a bunch of people seriously expecting the Rapture a couple of weeks ago. Most of us thought they were deluded. They thought we were, for not believing. And shit, for all we know, the Rapture happened and no one was eligible. That is, of course, not what I believe; it just illustrates the differing perspectives we can have. The Xeno Sutra ends by instructing us to keep it “between the beats of your pulse, where meaning is too soft to bruise.” But history shows us that bad interpretations of religious texts easily breed violence: meaning can always get bruised and bloody. What this author calls "bad interpretations," I call "interpretations." The history of religion is rife with adherents of one trying to kill the adherents of another. And yes, I know non-religious people have engaged in violence as well. At the risk of descending into the kind of evo-psych nonsense that I despise, we wouldn't have survived as a species without some violent abilities and tendencies. I think it's this part of human nature that, in part, makes people scared shitless of the coming robot takeover: humans have a violent nature, and humans programmed them, so they must have one, too. "I learned it from YOU, human!" Probably doesn't help that one of the use cases of AI is to fight wars. Anyway, I kind of digress there. The point is, it was inevitable that someone would try to do this, and it's also inevitable that people will argue about its usefulness and meaning. I just hope that argument doesn't end in more violence. |
| The Guardian recently published an opinion piece by Tim Berners-Lee, and I found it interesting enough to take a closer look at here. Why I gave the world wide web away for free My vision was based on sharing, not exploitation – and here’s why it’s still worth fighting for I'd appreciate it if we could all refrain from "Al Gore invented the internet" jokes, mmmkay? I was 34 years old when I first had the idea for the world wide web. Let's see, when I was 34, I'd invented... a couple of words, and that's about it. I relentlessly petitioned bosses at the European Organization for Nuclear Research (Cern), where I worked at the time, who initially found the idea “a little eccentric” but eventually gave in and let me work on it. Dangit, so he's the reason it took so long to confirm the existence of the Higgs boson, by diverting critical resources from particle acceleration experiments. I was seized by the idea of combining two pre-existing computer technologies: the internet and hypertext, which takes an ordinary document and brings it to life by adding “links”. Yes, just to be clear, the World Wide Web (www) isn't the same thing as the internet. It's probably the bulk of the public internet now, and has been for a long time, but it's not the same thing. I believed that giving users such a simple way to navigate the internet would unlock creativity and collaboration on a global scale. If you could put anything on it, then after a while, it would have everything on it. That was a bit optimistic. Or, rather, a lot optimistic. Turns out we don't want "everything" on it, and even if we did, others like keeping their secrets. But for the web to have everything on it, everyone had to be able to use it, and want to do so. This was already asking a lot. I couldn’t also ask that they pay for each search or upload they made. In order to succeed, therefore, it would have to be free. That’s why, in 1993, I convinced my Cern managers to donate the intellectual property of the world wide web, putting it into the public domain. We gave the web away to everyone. Who immediately started walling it off. Today, I look at my invention and I am forced to ask: is the web still free today? No, not all of it. We see a handful of large platforms harvesting users’ private data to share with commercial brokers or even repressive governments. We see ubiquitous algorithms that are addictive by design and damaging to our teenagers’ mental health. Trading personal data for use certainly does not fit with my vision for a free web. I mean, technically, it's still "free" in that the data-harvesting sites don't charge money to use, instead funding themselves through advertising and data brokering. We have the technical capability to give that power back to the individual. Solid is an open-source interoperable standard that I and my team developed at MIT more than a decade ago. One wonders if he left CERN on his own or they finally had enough of his "vision" and kicked him out. That information is probably on the internet, likely at Wikipedia, which is still free but I give them money every year anyway. But I'm too lazy to look it up. I'd say this was an ad for this "Solid" standard, but is it an ad if the product is free? Somewhere between my original vision for web 1.0 and the rise of social media as part of web 2.0, we took the wrong path. The Web certainly is a lot less fun than it used to be. 
Part of the frustration with democracy in the 21st century is that governments have been too slow to meet the demands of digital citizens. Hm. Well. I see it differently: governments have meddled too much. Or rather, they've meddled in the wrong places and ignored the other wrong places. In my opinion. The Web isn't "free" in either sense of the word when it's riddled with paywalls and blocked off by geofences. It's not "free" if, in exchange for using sites at low to no cost, we give away our privacy and open ourselves to propaganda. There's more at the article, and of course he goes into AI issues as well. Still, beneficial things exist here. I think this site is one of them. Wikipedia is another; notice how everyone stopped questioning Wikipedia's accuracy when AI came along to show us what inaccuracy really looked like? Of course, there are downsides to everything. But there are also benefits. |
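Circling back to the "ordinary document plus links" idea at the top of this entry: the conceptual core of hypertext really is that small. Here's a minimal sketch, using only Python's standard library, that pulls the link targets out of a chunk of HTML. The sample document is made up (though the first URL is the real home of the first website).

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href targets of <a> tags, i.e., the 'links' that turn an
    ordinary document into hypertext."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A tiny, invented hypertext document.
sample = """
<p>See the <a href="https://info.cern.ch/">first website</a> or read about
<a href="https://en.wikipedia.org/wiki/Hypertext">hypertext</a> itself.</p>
"""

parser = LinkExtractor()
parser.feed(sample)
print(parser.links)
```

Documents that point at other documents, no payment or permission required: that's the whole trick, and it's also exactly the part the walls and paywalls chip away at.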
| Old Christmas joke: Why were the Three Wise Men covered in ashes? Because they came from afar. This article also comes from Afar: 11 Lost Cities You Can Actually Visit Rediscover these abandoned cities by traveling to see their ruins, where you can readily imagine their lost-to-time structures and civilizations. But if you can visit them, they're not lost. Calling them "lost" is like you're wandering the market with your kid, and the kid wanders off, and later you find them at the popsicle stand, and you keep calling them "lost." When the lost city of Kweneng, South Africa, was discovered last year, it wasn’t because someone found a fossil there or excavated it with a shovel. So, Kweneng was lost until 2018, and now it's found. (The article is dated 2019.) Instead, archaeologist Karim Sadr relied on LiDAR technology, which uses lasers to measure distance, to create detailed images of the surrounding Suikerbosrand hills, where Tswana-speaking people first built stone settlements in the 15th century. Okay, jokes aside, that's a damn cool use of technology. While Kweneng’s visitor infrastructure isn’t quite as developed yet, there are plenty of other rediscovered cities to visit. I didn't bother to look it up, but maybe it's been more developed by now. In any case, the rest of the article focuses on other cities in the world that once were lost but now are found, were blind but now can see. Persepolis, Iran Achaemenid Empire kings fortified a natural stone terrace into an imposing platform when they founded Persepolis in the 6th century B.C.E., leveraging the landscape to awe-inspiring effect and military advantage. Which is cool and all, but I'm not sure I want to visit Iran right now. In theory, yes, certainly; lots of history there, and great food (though I'm guessing there's a severe lack of beer). In practice, maybe not. Petra, Jordan The entrance to Petra is designed for maximum impact, leading visitors from a shadowy gorge to views of soaring, tangerine-colored rock. By contrast, I actually visited this one, but it was so long ago I think it had just been built. Ciudad Perdida, Colombia Founded in the 9th century, this forest city developed a unique architectural plan of stone pathways, plazas, and houses over centuries, but dense jungle swallowed them shortly after the arrival of Europeans. One of my sources claims that "perdida" is Spanish slang for "loose woman," so it might be worth the visit. Pompeii and Herculaneum, Italy Billowing ash from Mount Vesuvius dimmed the sky above Pompeii and Herculaneum in 79 C.E., then buried the cities for nearly 17 centuries. Really? Wow! I've never heard of that! Kidding. It's probably the first ancient disaster I ever heard about. Caracol, Belize Trees curl around Caracol’s stone pyramids, which the Belize jungle overtook after residents abandoned the site in the 11th century. While I didn't visit Caracol while I was in Belize lo these many years ago, there were plenty of other Mayan ruins in the jungle that I did visit, including some of the famed calendar sites. Troy, Turkey A dramatic setting for the ancient world’s most consequential love triangle, Troy has a 4,000-year history that merges with myth near Turkey’s Aegean coast. This one's especially cool not just because of its significance to ancient literature, but also because of its significance to modern archaeology. 
One of the first "adult" nonfiction books I remember reading detailed Schliemann's groundbreaking (you're goddamn right that pun was intended) work in finding and excavating the ancient city. But, no, I haven't been there. Xanadu, China Kublai Khan ruled his empire from the city of Xanadu, surrounded by a grassland steppe that stretched to the horizon in every direction. Also the purported subject of Coleridge's arguably second-most-famous poem. There are a few more at the link, but those are the ones which I felt like commenting on. While none of them (to the best of my knowledge) have a McDonald's or a five-star hotel, they might be worth a brief visit. |
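One footnote on the LiDAR bit at the top of this entry, because "uses lasers to measure distance" hides how simple the core idea is: time how long a pulse of light takes to bounce back, multiply by the speed of light, and halve it for the round trip. A trivial sketch (the timing value is invented):

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def lidar_distance(round_trip_seconds):
    """Distance to whatever the laser pulse bounced off, from its round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A pulse that returns after 2 microseconds hit something roughly 300 meters away.
print(f"{lidar_distance(2e-6):.1f} m")
```

Do that a few million times while sweeping the beam across a landscape, and you get a 3D point cloud detailed enough to pick out wall lines and terraces that are invisible from the ground.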
| What with, you know, everything, I think we could use some funny right now. Here's Gizmodo, keeping us abreast of scientific developments. Great Tits Sometimes Break Up, Bird Researchers Find New research finds that “tit divorce” is less arbitrary than biologists thought, revealing a complex social side to these common European songbirds. You'll have to click on the link to see a picture of a beautiful pair of great tits. We’re talking about the birds. Way to ruin the mood. Great tits are small, yellowish songbirds common to the woodlands of Europe. By way of contrast, boobies are larger seabirds, mostly tropical, with some species limited in range to the Americas. Tit pairs are known to be monogamous during breeding season, splitting up after fully raising their offspring. Yes, you're damn right I'm going to milk this one for all it's worth. But new research suggests that this “tit divorce” may be the product of complex social relationships formed during and after the breeding season. So, not due to age and gravity? Published July 30 in Proceedings of the Royal Society B, the paper reports that not all tit pairs separate in late summer when breeding season ends. A sizeable portion of tit couples remain together throughout the winter, hitting it off again when spring comes. I think the article's author is trying very, very hard to avoid doing what I'm doing right now. That, or their editor (do those still exist?) got their hands on it. In other words, tit dating status is complicated, and for reasons that aren’t yet entirely clear. Well, they are usually hidden from our sight. For the study, Abraham and her colleagues tracked individual great tits found in the woods near Oxford. I used to find them in the woods near a university, too. Okay, okay, a moment of seriousness: this is actually pretty cool, especially if you read the sciencey bits of the article at the link. I saved this one a while back, but with the passing of Jane Goodall almost dominating the news cycle today, it came up randomly at an appropriate time—because we're still learning stuff about other species. That doesn't stop my inner 12-year-old from giggling like a loon, though. |
| Here's another opinion about tipping in the US, this one from Art of Manliness and focused around the practices of the second-best musician to come out of New Jersey. Frank Sinatra had a word for tipping: duking. Funny, that's what I call disappearing into the men's room for an extended period of time while everyone else argues about the check. Besides frequently tipping, Sinatra handed out gratuities in Midas-like amounts. He almost always duked in C-notes — $100 bills. As the article points out, this was when $100 meant something. Duking was his way of saying: I see you. I value what you do. And I want you to know it. That's one interpretation, sure. But there's a darker one. None of my other sources list "tip" as a synonym for "duke." The latter word is most commonly used as a royal title, but can also be applied (especially around the time Sinatra would have been rolling around the mean streets of Hoboken) to battle-ready fists. Now, it's entirely possible it's regional slang, or something innocent he made up based on the sound. Still, I can't help but note that American tipping culture has racist and classist origins, which, according to our ideals at least, should be as anathema as calling anyone other than John Wayne a "duke." Tipping used to be simple. You tipped the waiter. The bartender. The shoeshine guy. The bellhop who lugged your Samsonite to your room. The guy who took care of your horse. The person who wiped down your carriage. The hunter who brought you mammoth meat. Now, the touchscreen spins around at the fast-food counter and you’re asked to choose between 18%, 22%, and 25% . . . for someone who just handed you a value meal. If you're lucky, anyway. I've seen the options go much higher than that. When you’re asked to tip for transactions that don’t involve personal service, the whole thing starts to feel less like hospitality and more like a shakedown. That's because it kinda is. The Emily Post Institute and other etiquette experts draw a line here: you don’t need to tip for counter service or pre-packaged goods. Save it for situations where someone is personally attending to you; where their level of care or craft contributes to the overall quality of the experience. Much as I hate to agree with the linked article, I do, in this case. With the disclaimer that tipping is a cultural thing, and this article is about the US. Sinatra’s tipping needn’t be imitated dollar for dollar. I would hope not. Even now, 75 years later, most of us aren't going to tip $100 on a $30 meal. And in terms of real value, as the article points out, that would be more like $1000 today. 1. Tip big where it counts. Especially if you're a regular. 2. Don’t nickel and dime people. If you can afford the service, you can afford the tip. Okay, here's where I break. I really despise the construction "If you can afford x, you can afford y." If I've budgeted for x, I don't want to be surprised by suddenly also owing y. It's like saying "If you can afford $200 for concert tickets, you can afford the extra $200 Ticketbastard convenience fee." "If you can afford a Porsche, you can afford the insurance on it." Bullshit. And taken to its logical conclusion, "If you can afford a night out, you can afford a mansion." 3. Skip the tip when it doesn’t make sense. No one expects you to tip the cashier at the gas station. I wouldn't say "no one." There's a bakery in my town that I really like. They make excellent bread and pastries, and the products are priced accordingly. 
95% of the time, the service is simple: pick up bread, put in bag, ring it up. While the product quality is much higher, the service is no different from that of McDonald's, where no tips are requested or expected. That's how I developed my McDonald's Rule: if the service is equivalent to a fast-food joint, no tip. The other 5% of the time, they might do something special for me like decorate a cake or search in the kitchen for a fresh batch of something they're out of at the counter. That's personalized service, so a tip is warranted. The difficulty is that their credit card machines are set up with tipping options, and they like to stare at you while you're going through the tip screen. It's awkward. Most places like that, I just never patronize again, but, dammit, they make the best baguettes I've ever had the pleasure of tasting. Yes, even better than ones I had in France. 4. Keep it discreet. Flashing cash for attention turns generosity into performance. Sinatra never tipped to be seen. Right, a guy known for being a performer. 5. Only duke on the way out. Sinatra never greased palms to jump the line or get premium service. And this one, I also agree with—mostly. If you give them extra before a service, it's not a tip; it's a bribe. As I've noted before, the story that "tips" began as an acronym for something like "to insure prompt service" is a fauxtymology, which should be obvious to anyone who knows that the tip follows the service. Again, though, there are exceptions. Like, most delivery drivers won't waste their time with your order if you don't bribe them. So if you don't fill out the "tip" part of the order form, you won't even get the opportunity to tip them later because they're serving customers who have already bribed them. One solution, of course, is to avoid ordering delivery, but dammit, it's convenient. Which is why it's worth tipping on. After all, if you can afford food, you can afford the delivery fees. Right? |
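And since those touchscreen percentages came up, here's the arithmetic that makes the "shakedown" feeling concrete. The meal price is hypothetical; the buttons are the ones from the article.

```python
def with_tip(bill, percent):
    """Total cost of a bill after adding a percentage tip."""
    return bill * (1 + percent / 100)

value_meal = 12.00  # hypothetical counter-service order
for pct in (18, 22, 25):
    extra = value_meal * pct / 100
    print(f"{pct}% tip: ${with_tip(value_meal, pct):.2f} "
          f"(an extra ${extra:.2f} for handing over a tray)")
```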
| I had to think about it before adding this one to the stack. It's from Big Think, which is okay, and it's about searching for extraterrestrial life, which I might have talked about too much already. But the article propagates what I feel are misconceptions, so here I am, shaking my fist and yelling at the Cloud. David Kipping on how the search for alien life is gaining credibility Big Think spoke with astronomer David Kipping about technosignatures, “extragalactic SETI,” and being a popular science communicator in the YouTube age. This is the first I've heard of this guy, and I watch YouTube videos about space from time to time. For whatever reason, the Unholy Algorithm hasn't pointed me in his direction. Probably a good thing, because if they're anything like this interview, I'd break my streak of "forever" of not writing comments on YouTube. Astronomer David Kipping has built a career not just at the cutting edge of exoplanet research but also at the forefront of science communication. Don't get me wrong, though, we need science communicators and I'm glad he's got a following. I first met Kipping at the famous 2018 NASA technosignature meeting in Houston, where the space agency first indicated they would be open to funding work on intelligent life in the Universe. Sigh. Could we please not call it "intelligent?" All that results in is a bunch of misanthropes making tired old jokes about not even being able to find intelligence on Earth. Which is a self-contradictory statement, because just being able to say (or type) it is an indication of what we're calling intelligence. At least at a very, very minimal level. Being able to broadcast the "joke" to most of the world at a speed close to that of light is definitely what we're talking about. Such a joke was funny exactly once, when Eric Idle made it in Monty Python's The Meaning of Life. Unlike other Monty Python quotes, it doesn't improve upon repetition. Ni! So every time the article says "intelligence" in some form, in your mind, substitute "technology." As with "technosignature" in that quote. The bulk of the article is in interview format. I think I was five or six when my parents gave me this massive book with a black cover and pictures of the planets... It felt different from my love of Star Trek. That was fiction, but this was real. I'm absolutely pleased that Star Trek has inspired many people. I love the show, in all its incarnations, through all its great and terrible episodes, and everything of in-between quality. But I fear that it has given us a false impression of what's actually out there. If you’re studying exoplanets, you’re not doing it just to know their rock composition. The ultimate question is: Does it have life? Could we communicate with it? And I want to be clear: that's an important field of study, and I think it's a good thing that it's finally gaining some credibility. But those questions, posed in that way, can be misleading. "Does it have life?" refers to, well, life. I feel like I'm shouting this into the void, but "life" doesn't imply technology. As for "could we communicate with it," we can barely communicate with our closest evolutionary relatives here on Earth, let alone fungi, trees, tardigrades, fish, or frogs—which, for the vast, vast majority of Earth's existence, made up the entirety of Earth's biosphere (alongside our own nontechnological ancestors). 
And it's only within the past 200 years or so, a mere blink of an eye compared to the roughly four billion years that life has existed on Earth, that we produced any kind of technosignature. Things have shifted. NASA used to effectively ban the word “SETI” in proposals. Now there are grants funding it. Private money from people like Yuri Milner has energized the field. Students are excited to take risks and write SETI papers. That’s new and encouraging. And this is, in my view, a good thing. I'm all for looking for it, especially if we're also looking for signs of non-technological life—which, as other articles I've shared here have noted, we are. Thing is, though, it's hard to prove the absence of something. If we keep looking, and don't find any, that doesn't mean it's not out there, just that it's either farther away, or successfully hiding its signs. In an earlier entry, I compared this to the sea monsters and dragons in unexplored areas of old maps of Earth. But the field is still small and vulnerable. One flashy claim can dominate the conversation, and in the social media era, sensationalism is amplified. That worries me. One bad actor could undo years of careful progress. Like, for instance, Avi Loeb. Who is not only misleading the public with claims of technological origin for various objects passing through our solar system, but also destroying whatever's left of Harvard's reputation in the process. It frustrates me when colleagues say, “When we detect life…” That assumes the answer. As scientists, we need to stay agnostic. We don’t know yet. This, now. This, I agree with. I'm not a scientist, so I can say with some confidence that we will find signs of life, possibly even in the near future (provided we don't destroy ourselves or our own technological capabilities first). What I don't think we'll find anytime soon, if at all, though I wouldn't mind being proven wrong, is tech. That means we have to concede the possibility that we are alone. I’m not advocating for that view; I’m just trying to remain objective. People sometimes misinterpret that as me wanting us to be alone, or even link it to religion. But it’s nothing like that — it’s just intellectual honesty. Unfortunately, my idea that "we might be alone" does echo some religious views. But I don't approach the question from a religious point of view. I'd go so far as to argue that the assumption that we're not alone is also a religious view, because some people believe it with all the fervor of religion, without a single shred of meaningful evidence. Science, however, requires that kind of objectivity and evidence-seeking. The alien hypothesis is dangerously flexible — it can explain anything. That’s why we need extraordinary rigor. Carl Sagan said extraordinary claims require extraordinary evidence, but I’d add: The flexibility of the alien hypothesis makes it especially treacherous. I agree with this point, too. The hard facts are that Earth shows no evidence of outside tampering — we evolved naturally — and the Universe doesn’t appear engineered at cosmic scales. That suggests limits on how far technology tends to go. Or maybe it suggests there's nobody home. The rest of the article is about communication efforts—not with aliens, but with his human audience. It's interesting enough, but not really what I wanted to talk about. I'll just end with this little thought experiment: Imagine a solar system similar to ours.
Similar sun, similar planets, one of them in the right place with the right composition to initiate and sustain life, like ours indisputably did. And one that formed about the same time our own solar system did. Evolution would not take the exact same path on that planet. Even assuming that it starts with single-celled organisms and, at some point, mixes two of them to produce a superorganism that, like our eukaryotes, allows the development of macroscopic life. Further assume that, against all odds, one of the species thus produced develops the ability to not only use tools (which lots of animals do), but use tools to create other tools. As yet another assumption, let's have this species build tools upon tools upon tools, eventually leading to space exploration and colonization. That's a lot of assumptions, but I'm saying this to illustrate an evolutionary process similar to our own. Now, in Star Trek, almost every tech-using species encountered has roughly the same level of technology as humans do. There are exceptions, of course, like the Organians, or the Metrons, or the Q Continuum, who are mostly used as god stand-ins. On the other side of the range, there are cultures who are just behind us on the tech ladder, and we can't contact them without violating the Prime Directive. Here's the thing, though: in our own experience, technology accelerates fast. It took billions of years for humans to appear with their stone axes and fire; a few hundred thousand years for industrialization to happen; and then, in the span of little more than 100 years, we went from first powered flight to seriously considering a permanent human presence on the Moon and Mars. My point is that, in my hypothetical almost-Earth above, that 100-year window could happen later. Or earlier. The chance of noticing someone with the same, or slightly lower, or slightly higher, level of technology is, pun intended, astronomically small. Add to that the idea that some stars are older and some are younger, and many of them are too unstable to sustain life for the requisite billions of years, and the chance shrinks even further. This is not even including the possibility that someone way ahead of us doesn't have a Prime Directive, and in fact desires to be the only tech-using species in the universe, and has the firepower to make that happen. Laugh all you like; I can see humans becoming that species. Or it could go in a more ethical direction, like in Star Trek. This is why science fiction isn't really about science or technology, but about humans. With a large enough sample size, even the most improbable events become likely to happen somewhere. The galaxy might not be a large enough sample size. The entire universe might be a large enough sample size, but then you get into lightspeed issues where the further out you look, the further back in time you see. So, yes, let's look. But let's not get ahead of ourselves. |
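A footnote to that thought experiment: here's the timing argument as a quick back-of-the-envelope simulation. Every number in it is a placeholder I made up for illustration (a five-billion-year span, a generously wide ten-thousand-year "comparable tech" window), and it ignores how many worlds there are, how long civilizations last, and everything else. It's only meant to show how brutal the timing coincidence alone is.

```python
# The timing argument as a quick Monte Carlo. All numbers are placeholders
# for illustration, not data: a span of history available to each world,
# and a width for its "detectable at roughly our level" window.
import random

SPAN_YEARS = 5_000_000_000    # assumed history over which the window could land
WINDOW_YEARS = 10_000         # assumed width of the comparable-tech window
TRIALS = 1_000_000

def windows_overlap():
    # Drop each world's window start uniformly at random in the span;
    # two equal-width windows overlap iff their starts are within one width.
    a = random.uniform(0, SPAN_YEARS)
    b = random.uniform(0, SPAN_YEARS)
    return abs(a - b) < WINDOW_YEARS

hits = sum(windows_overlap() for _ in range(TRIALS))
print(f"Estimated overlap probability: {hits / TRIALS:.1e}")
# Analytically this is about 2 * WINDOW_YEARS / SPAN_YEARS = 4e-6,
# so expect only a handful of hits out of a million trials.
```

Two random windows of width W in a span T line up with probability of roughly 2W/T, which is why "the galaxy is full of life" and "the galaxy looks empty of peers" aren't contradictory statements.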
| The RNG is messing with me again, this time pointing me to another movie link. This one's from CNET. This is the Ultimate '90s Cyberpunk Movie (No, It's Not 'The Matrix') Strange Days showed off a gritty, realistic VR dystopia that feels surprisingly relevant today. Remember yesterday, I mentioned that The Italian Job is one of my favorite movies? Well, Strange Days is another, for different reasons. That's why, when I saw this article, I knew I had to talk about it. The cyberpunk movement has given us some of the best science fiction movies: Blade Runner, Ghost in the Shell and, yes, The Matrix. I would argue that Blade Runner (the Director's Cut of which is my absolute favorite movie, if anyone cares) predated and influenced cyberpunk, as the subgenre didn't really coalesce until the novel Neuromancer (William Gibson) two years after Blade Runner. Its roots stretch back into the 1960s, though, including the novel that eventually became Blade Runner. But I'm not here to argue about the history of cinema and literature. What's generally meant by "cyberpunk" is a dystopian vision of "the future" (relative to when it was written) that gives technology and corporations primacy over people. Sound familiar? It should. We're living in one. But there's one great tech noir flick that came out at the height of the cyberpunk craze -- and then all but disappeared. Maybe that's partly because of its title. So, part of the problem with public reception of near-future SF is that it appears to obsolete itself very quickly. Blade Runner, for example, takes place in the unimaginably far future of 2019. Low-imagination viewers (I've known a few personally) see that and dismiss it, because "it didn't happen that way." That's not the point. As I know I've said before, SF isn't about predicting the future, any more than Fantasy is about being true to the past. So part of the problem with SD, as I'll call it from now on because I'm lazy, was that it was released in 1995 with a setting of 1999. Most of the film takes place in the last days of the year/century/millennium (yes, I'm aware the millennium actually started in 2001, but I'm not being pedantic today). This isn't unusual for SF. One of the most famous SF movies has the year right in the title: 2001: A Space Odyssey. But it was a cinematic and literary masterpiece that had more than three decades to become lodged into the collective consciousness before its title year arrived. Similarly, Blade Runner was set nearly 40 years after its release. Strange Days, however, imagined something only four or five years away. Though Strange Days was released back in 1995, it looks and feels like it could've come out yesterday. It's one of those rare old movies that imagined the technology of virtual reality without turning it into a gimmick. I wouldn't call the tech MacGuffin in SD "virtual reality," but that's a semantic argument. The movie wasted no time dropping me into its jarring setting: The opening scene is an armed robbery filmed in first-person perspective, with the robber running from cops and jumping from one rooftop to another. What the author here doesn't mention, or perhaps fails to realize, is that this was brand-new technology in the real world. That is, pro-grade video cameras had finally become light enough to be handheld. This isn't remarkable today, when almost everyone in the developed world carries one in their pocket, but, at the time, it was a Big Deal.
The movie practically invented the "shakycam" style that was destined to annoy me for the next 30 years, but absolutely worked for this movie. Director Kathryn Bigelow was influenced by the 1992 LA riots and incorporated elements of racial tension and police violence into her work. Which leads me to another speculation as to why the movie was all but forgotten, even among SF fans: sexism. A woman? Directing a science fiction flick? Horror! And don't try to tell me that's not a reason, because we still see it happening today. As I alluded to yesterday, I don't give much of a shit about the personal lives of movie stars or directors, and I absolutely don't care what gender they identify as (though I guess I do care, at some level, because I notice that sort of thing). But it might be relevant to note that she was briefly married to the far more famous James Cameron. Not that relevant, though; she's a brilliant director in her own right. So yeah, there's more at the link, though with possible spoilers. Yes, there are anachronisms, and the movie isn't what I'd call perfect, but it's still one of my all-time favorites. So I was pleased to find it still has other fans, though the lack thereof never stopped me from enjoying a movie before. One final word of warning, though, if you haven't seen it before and want to: it's dark. As the article alludes to, it's Black Mirror-level dark. Today, it would generate numerous trigger warnings, and there's one scene in particular that stands out in my mind as really abyssally fucked-up (not the cinematography, but the subject matter). It's a scene among several that, if a dude directed something like it today, he'd probably never work in Hollywood again. And yes, I'm saying this as someone with a very high tolerance for horror scenes. Unlike most of Black Mirror, though, and unlike many other dystopian stories (some of which I also enjoy), the final message is one of hope. I don't know about you, but I could sure use some of that right now. |
| I don't believe in the concept of "guilty pleasures." Enjoy something, or don't; talk about it, or don't. If you like something, revel in it. This is about as close as I get. From GQ: I'm an educated man, an intellectual, a reader, and a writer. I started reading (or at least trying to read) Shakespeare at a very early age. I appreciate nuance and subtlety in books, movies, and shows, and I don't engage in macho, "he-man" behavior. And yet, cinematic car chases are like candy to me. Actually, that's not a bad analogy. Candy is simple, generally unsubtle, and not really good for you. But it's tasty and makes you feel good in the moment. Sometimes, even I get tired of thinking, so I search my ever-contracting list of streaming services for movie or show candy, ones that promise fight scenes, good guys vs. bad guys (they don't have to be "guys"), and, of course, car chases. Hence this article, which I only found yesterday, but the random numbers pulled it right up. Paul Thomas Anderson's latest movie, One Battle After Another, features one of the best car chases in recent movie memory. Dammit, now I'm going to have to go see that. But which are the others? As with most "best of" lists, this is very subjective. So I'm not going to highlight all of them. The chase scene has been a staple of action movies for as long as they have existed, so there are plenty of options to pick from. And, really, movies (and to a lesser extent, TV) are a perfect vehicle (pun absolutely intended) for the chase scene. You can write about them, sure, but even the most well-crafted prose is two-dimensional compared to the raw, visceral spectacle of watching cars hurtling across the screen, leaving chaos and destruction in their wake. It's also almost exclusively a "car" thing—I'll include other highway vehicles in the category, though. You can involve horses, as many Westerns have, but, in general, horse chases don't involve one of the quadrupeds spinning off a cliff and exploding into a very satisfying fireball (besides, that would be cruel). There are foot chases, of course, but again: explosions are rare, and there's only so much damage a runner can do. You can even go the SF route and do a starship chase scene, but the vast emptiness of space doesn't provide that sense of immediacy or relatability. The few really good ones usually involve very unrealistic dense asteroid fields, which... well, we don't know everything about outer space, but our own asteroid belt is so diffuse that you won't even see another asteroid from the one you're standing on. Besides, you can count the number of people who've been in space on your fingers (assuming you know the binary finger-counting code), but almost all of us have seen cars and streets and highways. Real car chases, though, are most often anticlimactic, like when a bunch of cops chased a low-speed white Ford Bronco on live TV way back in the 90s. The cinematic version is, like most of cinema, exaggerated and choreographed for effect. It's art. Not high art, mind you. But art. 9. The tiny Fiat chase in Mission: Impossible — Dead Reckoning Part One Okay, so, one of my other almost-guilty-pleasures? I really like the M:I movies, and I don't much care what anyone else thinks about them. Nor do I give a shit about Tom Cruise's personal life, extravehicular activities, or religious shenanigans. 
I will say that I was disappointed with Final Reckoning (aka Dead Reckoning Part Two), mostly because, spoiler, about 1/3 of the movie was a solo underwater fetch quest that, while cinematically impressive, dragged on way too long in my view. In other words, they should have stuck to car chases. Don't get me wrong; I enjoyed the mentioned chase scene, though mostly for the interaction between Cruise and Atwell. But there are better ones in that franchise than the Fiat one. Especially if one includes "motorcycles" in the list of chase vehicles. It does have the bonus of including Hayley Atwell (name misspelled in the link article), whose roles are always awesome, and listen very, very, closely, Paramount: you need to make her the lead in future M:I installments. 8. The cop chase (and pile ups) in The Blues Brothers This is an important scene in movie history, not least because it's in a low-budget comedy film about music, not a high-dollar action flick like M:I. 4. The tank chase in GoldenEye Like I said, we can't limit these things to "cars." There was one particularly memorable chase scene in a Jackie Chan movie that involved a hovercraft (yes, a fucking hovercraft, on land). So memorable, in fact, that all I can remember about the movie was Jackie Chan and the hovercraft, and I'm not 100% sure about the Jackie Chan part. There are, as you might tell by the Cracked-style countdown numbers, several more at the link. But, like I said, these things are subjective, so I'm just going to add a couple of my personal favorites that didn't make the list: The amount of work, skill, art, and planning that goes into some of these scenes is truly staggering, and the best ones have the ability to make me forget, for a moment, that it's all meticulous choreography and stunt work. |
| This Inverse piece is a couple of years old, but that's not the problem. Staying Up All Night Rewires Our Brains — This Could Be Key To Solving Mental Health Conditions What happens in the brains of mice when they stay up all night could help us better understand mood and other psychological conditions. The problem, or one of the problems anyway, at least in my view, is that the headline could easily be interpreted as "Stay up all night to fix your mental health problem!" But we're all too smart to believe that. Right? Everyone remembers their first all-nighter. You know... I really don't. I know I pulled a few in college, just like many college students. I even did some at work, until I got too old for that shit (all-nighters, not work). I just have no recollection of when the first one happened. I also regularly did what we called "tweeters," where you stay up studying until the sky just began to lighten and the birds started tweeting. What’s probably more memorable, though, is the slap-happy, zombie-like mode the next day brings. The best thing about all-nighters to me, back then, was finally being able to collapse into bed and get some decent sleep. Scientists have long thought that there is likely a neurological reason for this sensation, and one group of researchers thinks they might have cracked it. Um, how could there not be a neurological reason for the sensation? Also note the more restrained language here: "one group," "thinks," "might have." A study published last week in the journal Neuron tracked what happened in the brains of mice when they stayed up all night. Their results, surprisingly, may even help us develop a better way to treat mood and other psychological conditions. "Last week," as I noted, means about two years ago. The researchers found that one all-nighter roughly had the same effects on the brain as taking the anesthetic ketamine. "So I can just pop some ketamine instead?" This isn’t an endorsement of acute sleep deprivation. “I definitely don't want the takeaway from the story to be, ‘Let's not sleep tonight,’” Kozorovitskiy says. So she probably didn't write the headline. Insufficient sleep brings risk for myriad conditions and events, such as heart attacks, high blood pressure, diabetes, and stroke, way up. Worse, it can turn you into a grumpy asshole. Going a night without shut-eye isn’t the latest craze that will cure your depression, but rather, this insight could shake up our approach to targeting different areas in the brain when developing antidepressant medications. So, you know, just to be clear, this isn't a "something you can do about it" article, but a "look what scientists are working on now" article. There's a lot more at the link, delving into some of the science behind it. I don't need to share most of that; it's there if you're interested. I just have one more quote to highlight: The secret might lie in the neurotransmitter dopamine. Known commonly as the reward hormone, dopamine abounds when we eat and have sex. At the same time? Kinky! |
| Just in case anyone can still afford to go out to eat, here's a "helpful" article from bon appétit: All Your Restaurant Etiquette Questions, Answered Is it okay to ask for a different table? How do you split the bill with friends? Our industry pro weighs in. On any given night at your local watering hole or restaurant, bartenders are doing double time, dispensing drinks and life lessons from behind the bar. Which is why we always tip bartenders. Do I have to wait for everyone else’s food to arrive before cutting into my own plate? Yes? No. Well, depending on what you mean by "have to." No one's going to fine you for it. The Food Police aren't going to swoop in and drag you off in handcuffs to Kale Jail. It's rude, though. Do you tip on the total price of the bill if you order a bottle of wine? Yes. Next question. Just kidding. There is a lot more nuance here. No, there really isn't that much nuance. The article makes an argument based on server expertise. I'd prefer to see tipping go away entirely, as I've talked about many times, but as long as it's a thing, yes, if you order a $10 hamburger and a $190 bottle of wine (hey, don't judge me), you tip based on $200. There is one exception I can think of, but it's kind of an edge case. My favorite local brewpub will put any to-go beers on your tab, for the convenience of only having to pay once. So, I might get, say, a $10 burger and a $6 beer to eat there, and then a couple of 4-packs at, say, $20 each. I tip on the "service" portion, $16. Basically, if you consume it at the table, it's tipped. If I don’t like a table, is it okay to ask for a different location? Yes. In my nearly two decades of experience working in hospitality, I’ve never observed a conspiracy to give people the worst tables possible just for fun. You work in NYC. Every table is the worst possible table. If there’s a social media influencer disrupting the meal with lights, cameras, and ruckus, who should speak up, the staff or the diner? Wrong question. The correct question is, "Where's the next nearest restaurant?" There's more at the link, but I feel like they left out the most important questions and answers, to wit: Q: What's the best way to get the restaurant to let my dog in? A: Go fuck yourself. (Exception: legitimate service animals) Q: There are kids running around. Do I tell the staff or the parents? A: Neither. Surreptitiously pass the kids some chocolate-covered espresso beans. Q: What's the best way to make my date pay for everything? A: Put out at the table. Q: I'm at one of those weird hipster beer places and they won't serve me a Michelob Ultra! How can this be? A: Good for them. Q: How can I get my meal comped? A: Spend a few hours washing dishes in the kitchen. Q: Could you turn up the volume on the Lakers game? A: No. I should write an advice column. |
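And since I brought up the brewpub edge case, here's my "if you consume it at the table, it's tipped" habit as a toy calculation. The prices, items, and 20 percent rate are my own placeholders, not advice from bon appétit:

```python
# Toy version of my consume-it-at-the-table rule. Prices, items, and the
# 20% rate are placeholders for illustration, not advice from the article.

def tip_base(items):
    """Sum only the items actually consumed at the table."""
    return sum(price for price, at_table in items if at_table)

tab = [
    (10.00, True),    # burger, eaten there
    (6.00,  True),    # pint, drunk there
    (20.00, False),   # 4-pack to go
    (20.00, False),   # another 4-pack to go
]

base = tip_base(tab)
print(base)                    # 16.0 -- the "service" portion of a $56 tab
print(round(base * 0.20, 2))   # 3.2  -- tip at an assumed 20%
```

The $190 bottle of wine, on the other hand, goes straight into the base, because it was opened and poured at the table.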
| The random number gods have once again trolled me. Here's another one from Mental Floss: Well, now, that depends. Snow? Cocaine? Talcum powder? Flour? Bread crumbs? I can think of at least one other possibility, but although I love cheese, I don't love it that much. If your taste in cheese has evolved beyond the individually-wrapped processed American slices and into blocks of the hard stuff—like cheddar—you’ve probably noticed that some cheeses can develop a chalky white substance on their surface. "Individually-wrapped processed American slices" are, unless we twist the definition of cheese beyond recognition, not cheese. At first glance, it looks like mold. It’s usually not a good idea to eat mold. Godsdammit, MF. This is why I don't trust you. A lot of cheese owes its entire existence to mold. What we call "mold" is a particular form of fungus, and, just as with more macroscopic fungi such as mushrooms, some are beneficial, some are neutral, and some are poisonous. Hell, penicillin comes from a mold. I can understand being put off by it, much like I can (just barely) understand not liking cheese in general, but that there is either misinformation or anti-mold propaganda, or both. Is it fungus? Or is your cheese harmless and just making you paranoid? And, again, poorly phrased. Yes, some fungi are inedible (or, more precisely, edible only once). And if you reach into the back of your dairy drawer and pull out a biology project, I can't blame you for throwing that shit away. I'd do it. The real question, though, the one I think they intended but maybe worded less than ideally, is simply "is it safe or not?" And that's absolutely legitimate. It’s probably fine. "Probably" doesn't fill me with a lot of confidence here, as, in the event that it's not fine, the result could be death or, worse, three days on the toilet. The white stuff seen on cheddar is typically calcium lactate, which is the result of lactic acid interacting with calcium. Both of which are rather famously found in milk, which, it might surprise you to discover, is what cheese is made from. Someone once said something like "cheese is milk's attempt at immortality," and I laughed. Poetic, but there's some truth in it. Anything "lactic" is milk-related. And yes, this includes "galactic." The general name for galaxies was derived from the name of our Milky Way. Not kidding here. But I digress. There's no actual milk in the sky (as far as we know), though we know there's ethanol out there. Nor is there cheese (again, as far as we know). No, not even the Moon. As cheese ages, some of the moisture moves to the surface, and the lactate moves with it. When that water ebbs, the lactate remains behind and can appear as powdery, crystal-like particles on the surface of the cheese. "Crystal-like?" Okay, fine. I won't be pedantic about this one. Calcium lactate is completely harmless. It might even be a sign the cheese has matured and is therefore tastier. But it’s also remarkably similar in appearance to mold. So how can you tell the difference? Given the fast-and-loose definitional problems so far, I'd highly recommend double-checking any of the advice here with another source. So why am I sharing this article if I don't fully trust it? Mostly just so I could quote this sentence: Fondling the cheese should give you some indication of which is which. I'm just going to pause for a moment here. Okay. Italian, Swiss, and Dutch cheeses may have visible Lactobacillus helveticus, which is added to help create amino acids for flavor.
Okay, so, this is where years of learning about chemistry, biology, and Latin pay off. Low-information consumers might see that and freak out over the number of syllables. "Don't eat anything you can't pronounce" is one of the most ignorant and misleading pieces of advice floating around out there. Let's break it down: Lacto, like I said, means milk; bacillus means bacteria (bacteria are not fungi, but, as with fungi or any other broad group of organisms, there's the good, the bad, and the ugly); and helveticus comes from the Romans, who called the area now known as Switzerland "Helvetia." Yes, that's also the root of the name of one of the more popular typefaces. In the end, I'm not going to rag on anyone for discarding something that's grown an ugly fuzz, unless we're talking about your teenage son. We don't generally have laboratories in our homes (if you do, I'll back into the hedge here), and even if you're an expert in biology and/or chemistry (which, I should emphasize, I am not), the adage "better safe than sorry" applies. Still, knowledge is power. And cheese is delicious. |
| Some purported historical research from Mental Floss today. I don't know, but shaking heads is a standard way of approaching a Mental Floss article. Shaking hands seems like a gesture that has been around forever. Indeed, a throne base from the reign of ancient Assyria's Shalmaneser III in the 9th century BCE clearly shows two figures clasping hands. Well, that certainly seems asserious. It might seem like shaking hands is an ancient custom, the roots of which are lost to the sands of time. The story I heard as a kid was that its origin probably came from demonstrating that you're not carrying weapons. That never quite sat right with me. One or both could be concealing weapons in their non-shaking hand. Or behind their backs. And the gesture could be meant by one as "See? No weapons!" and by the other as "I could kick your ass bare-handed." Back in the days when I was learning karate (along with five or six other Japanese words), one of the standard beginner lessons involved using a handshake to draw the opponent off-balance. Still, okay. As untrustworthy as MF can be, it does seem to be true that a handshake can be cross-cultural, and therefore might have shrouded origins. Historians who have pored over old etiquette books have noticed that handshaking in the modern sense of a greeting doesn’t appear until the mid-19th century, when it was considered a slightly improper gesture that should only be used with friends [PDF I took the time to follow and recreate the link to the cited PDF. It's a bit long and I admit I skimmed a lot of it, but it does seem to go into detail about the "negative history" of the non-handshake. Fair warning, though: it contains misspellings that seem to indicate that it's a textified image, such as "niouthpiece," which most likely started off as "mouthpiece." The original text was apparently in Dutch, and translated to English, so, in short, I wouldn't trust it completely through all those different translations. But one bit caught my eye, and latched on to my sense of humor. I present it without edits: When speaking, the Dutch merely used tlie eye and 'a moderate movement of the hand to support [their] words'. Because of their lively gesticulation the Italians were put on a par with peasants or, even worse, the 'moffcn' o r Westphalians, the immigrants the Dutch loved to ridicule.' " It's not the transcription error that sent me, though; it's how the perceived roles of "civilized" and "barbarian" had switched entirely. The early handshakes mentioned above were part of making deals or burying the hatchet... Ironically (or whatever), the term "to bury the hatchet" can be traced. The modern handshake as a form of greeting is harder to trace. The article makes a few references to handshaking in history and literature. As for why shaking hands was deemed a good method of greeting, rather than some other gesture, the most popular explanation is that it incapacitates the right hand, making it useless for weapon holding. Like I said, that's the one I always heard, but a few moments' thought cast doubt. Especially since a disproportionate number of the people I knew were left-handed. Sadly, in a world where obscure Rabelais translations provide critical evidence, the true reason may remain forever elusive. It's good to accept that maybe we'll never know everything. Lots of customs of etiquette seem arbitrary, like "how to set the table properly" or the American institution of not wearing white after Labor Day. 
Some are invented to deliberately distinguish your culture from the "barbarians"; then, you can say "those barbarians don't even [whatever arbitrary etiquette rule]." But, I suppose, at least handshaking is marginally more hygienic than kissing each other on the cheek, as per the French custom. But it's less so than the mutual bow common in some Eastern cultures, where a handshake might just as easily end with one party on the floor, staring up at a smug black belt. |
| "Hey, Waltz: the world is on fucking fire right now. Why are you banging on about shit that doesn't matter?" Okay, no one actually asked me that question. But sometimes, I ask it of myself. My answer, at least for now? It does matter. Truth matters. Science matters. Philosophy matters. Humor matters. I don't think we can protest our way out of this mess. There's no god nor benevolent aliens to save us from ourselves, no monsters except our own fears. When people believe anything without evidence, it chips away at our collective reality. And I'm here to try to approach the truth. I may dwell in darkness, but my words are light. They may not illuminate much, but maybe they help someone to see some obstacle. Doesn't matter if it's a big thing or a small thing. Here's what might be considered a small thing, from Atlas Obscura, two years ago: In the fall of 1982, an unfounded fear haunted almost every house in Chicago. As area children prepared to “trick” their neighbors with their impressions of werewolves, vampires, and zombies, their parents were much more terrified of the “treats” their kids were eager to devour. And this was before people started bleating about "sugar is poison." They were worried about actual poison. Though I'm not sure it was entirely unfounded; it was, as the article finally gets around to noting later, right after the Tylenol poisoning incidents. What is it with you people and Tylenol, anyway? Never mind; I'm not getting into specifics of current events right now. However, according to sociologist Joel Best, this ghastly threat was about as real as the candy-seeking, child-sized ghosts and witches roaming around with pillow cases. “All I can say is I don’t know of a single case of a child killed by a Halloween poisoner,” says Best. “I’ve seen five news stories that attributed deaths to Halloween poisoning. In one case, it was the child’s own father, and the other four were all retracted.” Here's the thing, though: Lone Asshole Theory, as I call it. All it takes is one bad actor to ruin something. A million people might pass by a precious piece of artwork, harmlessly, and the one million and first decides to throw paint on it. This is not an indictment of people in general. Far from it; it demonstrates that most of us are good or, at least, unwilling to face negative consequences. What it does show is that the one asshole ruins it for everyone else. Worse, once the idea is out there, some sociopath who might not have otherwise considered it might decide to slip a razor blade into an apple, or whatever. The chances are very low, but the consequences of hitting those odds are horrible: even one dead child would be a tragedy. Halloween sadism is defined as the act of passing out poisoned treats to children during trick-or-treating. But even before the term was coined in 1974, parents already feared a mysterious, mentally unhinged candy killer for decades, despite a lack of supporting evidence. So, why am I arguing in favor of being concerned about tampering? Well, what I'm trying to say here is that, considering risk management, even if it's never happened, there needs to be some vigilance. The problem is people get hyper about this stuff and go overboard with imaginary scenarios, while ignoring, or at least downplaying, more plausible ones. Best, who maintains that children are much more likely to be harmed on Halloween by cars than contaminated treats, has continued his never-ending task of quietly and efficiently unmasking the fraudulent claims that darken his door. 
Good. It's important to keep these things in perspective. Many of the needles in apples and poison-laced treats turned out to be hoaxes. In some cases, the children themselves perpetrated the hoax, perhaps to get attention. Kids can be sociopaths, too. They're not done being built. So why does this fear continue to endure and flourish even in the absence of evidence to support it? “This is, first and foremost, a worry about protecting children,” says Best, who categorizes Halloween sadism as a contemporary urban legend. Sure. But, unlike myths such as Slenderman or werewolves, this sort of thing is, at least, possible. While Best has been tracking the phenomenon since 1958, folklorist Elizabeth Tucker noted similar themes in other myths, like Blood Libel, a myth dripping in antisemitism that blamed Jewish people for kidnapping Christian children to use in illicit rituals. Oh, come on. Kids aren't even kosher. The fear of drug-laced Halloween candy was further intensified in 2022 with news reports of “rainbow fentanyl”—a form of the highly addictive narcotic produced in bright colors, allegedly to appeal to children. A moment's thought should be all it takes to dismiss this nonsense. Unlike razor blades and rat poison, drugs (so I've heard) are expensive; why waste them on kids when you can do them yourself? Doing the drugs, dammit, not the kids. "Doing" kids is also very wrong. Unlike poisoned candy, that happens all too often here in reality. But it's rarely a stranger. Usually, it's a pastor, coach, cop, friend, or family member. Nonetheless, respected outlets like The New York Times, as well as trusted advice columnists Dear Abby and Ann Landers, all weighed in with alarmist articles warning parents of Halloween night dangers. Humans. Just can't put their fear in the right places, can they? Terrified of sharks; step into the bathtub like there's no chance of slipping and dying. Anxious about flying; think nothing of taking an Uber to the airport. In either of those cases, what they're afraid of has a much lower probability than what they're not afraid of. In summary: maybe we, collectively, would be better off putting our energy into addressing real dangers than freaking out over imaginary ones. What we fear the most is the unknown, so maybe address that by fighting against ignorance. |
| I saved this Noema article a while back, but I can't remember why, only that I found it interesting (even though it's not even two months old now). Which doesn't mean that I agree with all of it. What Searching For Aliens Reveals About Ourselves Looking for life beyond Earth changes the way we perceive life right here at home. You know how, on old maps, you often see art in the unexplored places, usually things like sea monsters or fire-breathing dragons? That's what our various conceptions of alien life remind me of. We don't know, so we make shit up. This is okay; it's part of what makes us us. Without this kind of imagination, fiction writing would be a lot more boring. The trick is, we need to sometimes step back and separate imagination from reality. I'm as big a fan of Star Trek as anyone, and more than most, but aliens aren't going to be humans with forehead makeup or rubber suits. Hell, I expect the vast majority of them won't even be what we call sentient, just like the vast majority of life on Earth isn't. As an astrobiologist, I am often teased about my profession. Hey, at least you don't get "Oh, you must be a virgin wearing a pocket protector and horn-rimmed glasses, and can't write good." Which is what we engineers have to put up with. I'll have you know that I don't wear glasses at all. The moment we realized our entire biosphere existed on the skin of a rocky planet hurtling through the void around a very ordinary star — one of some 100 to 400 billion stars in our galaxy, which is one of perhaps 2 trillion galaxies in the universe — we discovered life in space. This argument is like when I talk about "artificial" vs. "natural." It's a point of view. Kind of like how today's southward equinox can be described as "the Sun crosses the equator" or "the Earth's orbit and axial tilt move the equatorial plane to intersect with the Sun." Both are true, depending on your vantage point. Everything on, in, or gravitationally attached to Earth is, ultimately, from space, but it can still be useful to categorize "terrestrial" as opposed to "extraterrestrial." Astrobiology seeks to uncover generalizations about life: how it comes about, where to find it, and what it is in the first place. Because we are part of the set of all living things in space, astrobiological progress reflexively reveals new truths about us. Even if we never find other life out there, the search itself shapes how we understand our own stories right here on Earth. How people describe, define, and defend their own professions is also interesting. Astrobiologists, however, are most interested in the at least two dozen worlds that we know of, so far, that are just the right size and distance from their host stars to potentially support life as we know it. Which makes perfect sense; you home in on what's most likely to have what you're looking for. If you're in a strange-to-you city and want a beer, you go to a taproom, not an art museum, even though you might find beer in an art museum. Could life "not as we know it" exist elsewhere? Sure. But they're playing the odds. What we need to watch out for, as ordinary people reading stuff like this, is making the unsubstantiated jump from reports of "this planet could potentially support life" to "they've found an alien civilization!" The latter is the drawing of sea monsters in unexplored corners of the map.
The article goes into some of the tools (mostly telescopes and computers) they use, then: While many of our simulations will be mere fictions, what makes them scientific is that these thought experiments are constrained by the known laws of physics, chemistry and biology. In the end, we produce scores of imaginary worlds that give us clues about what we need to look for to find another Earth-like planet using future observatories like HWO. Look, all the simulations are fictions. If they weren't, they wouldn't be simulations. It's like making computer models of next week's weather forecast: you might get close, and it's better than not making the prediction, but you won't be spot on. And as we know, the "known laws" are subject to tweaks. Especially in biology. People keep finding biology here on Earth that doesn't follow the Rules. Again, though, you have to start with what's known. Although many exoplanet scientists describe their work as a search for “Earth 2.0,” I find this phrase extremely misleading. “Earth 2.0” conjures images of a literal copy of the Earth. But we’re not looking for an escape hatch after we’ve trashed version 1.0. Yeah, I'm pretty sure some people are. Else there wouldn't be so much fiction about it. The article continues with some stuff I've discussed in here (and in the previous blog) numerous times, so I'm going to skip it this time—except to say that it seems like the author falls into the common fallacy of thinking that evolution (which is arguably a prerequisite for considering something "life") must necessarily produce tool-users who go on to shoot rockets into space and look for evidence of aliens doing similar things. We know it's possible because that describes us. What we don't know, and can't know yet, is how common that is in the universe. In other words, don't expect Vulcans and Klingons. But, as I've also said numerous times, even unequivocal evidence of microbes or their equivalent would be a paradigm shift for us. |
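A postscript on the "constrained by the known laws of physics" bit: the most basic of those constraints is plain energy balance, how warm a planet sits given its star and its distance. Here's a toy version. The formula is the standard no-greenhouse equilibrium temperature; the albedo, the example distances, and the "verdicts" are my own illustrative assumptions, and real astrobiology models pile atmospheres, clouds, and chemistry on top of this.

```python
# A toy version of the simplest physical constraint on an imaginary world:
# its no-greenhouse equilibrium temperature. The formula is textbook; the
# albedo, the example distances, and the "verdict" cutoffs are my own
# illustrative assumptions, not anything from the Noema article.
import math

SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26     # solar luminosity, W
AU = 1.496e11        # astronomical unit, m

def equilibrium_temp(luminosity_w, distance_m, albedo=0.3):
    """Blackbody equilibrium temperature of a planet, ignoring any atmosphere."""
    return (luminosity_w * (1 - albedo) / (16 * math.pi * SIGMA * distance_m ** 2)) ** 0.25

for name, d_au in [("Venus-ish", 0.72), ("Earth-ish", 1.0), ("Mars-ish", 1.52)]:
    t = equilibrium_temp(L_SUN, d_au * AU)
    verdict = "maybe" if 180 < t < 270 else "probably not"  # crude, atmosphere-free cutoff
    print(f"{name}: ~{t:.0f} K -> liquid water {verdict}")
```

Earth's own no-greenhouse number comes out around 255 K, below freezing, which is exactly why the crude cutoff sits where it does, and why "potentially habitable" headlines deserve the skepticism I keep banging on about.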