A thing I don't understand about people
Sep. 27th, 2025 08:38 pm
I mean, no, most members of the fan group loathe it with a fiery passion and are only in the group to warn passers-by how terrible it is.
Today was the day of the conference at which I had been invited, at rather short notice, to give a keynote.
Not only did I have to get up EARLY especially for a Saturday, I had a rotten night because the lower back decided to kick off and even when it had calmed down a bit it took ages to get back to sleep.
And then as I was doing my final preparations I discovered the battery in one of my hearing aids was flat, which was a bit irksome, because I had been expecting it all week to do the warning bonging, like the other one did before I had to replace that one.
So anyway, I got out, and found that the place I was aiming at was not quite so far distant from the Underground station as had been indicated, and also, even though I was late, so was the start.
Rather few actual in-person attendees - I'm not sure how many there were on the Zoom.
Crisis! There was supposed to be a delivery of sandwiches at lunchtime which Did Not Arrive, so we all went out to forage (they turned up some hours later, what is the point).
So, I think my paper went over okay, and there were some questions, even if some of them got rather off-topic onto more general questions about archives.
Some of the papers were moderately interesting, some of them were a bit hard to hear, and I picked up at least one useful reference (possibly) for one of my own projects.
Met one old academic acquaintance from way back, and a couple of interesting Younger Scholars.
Had already decided that I was not up for going on to meal in restaurant, so came home to flop.
Which of these look interesting?
An Ordinary Sort of Evil by Kelley Armstrong - 9 (29.0%)
Sea of Charms by Sarah Beth Durst (July 2026) - 8 (25.8%)
Following My Nose by Alexei Panshin (December 2024) - 8 (25.8%)
The Fake Divination Offense by Sara Raasch (May 2026) - 5 (16.1%)
The Harvey Girl by Dana Stabenow (February 2026) - 5 (16.1%)
Scarlet Morning by ND Stevenson (September 2025) - 13 (41.9%)
Some other option (see comments) - 1 (3.2%)
Cats! - 22 (71.0%)
Let's All Remember When We Saved The World:
Montreal Protocol on Substances That Deplete the Ozone Layer - signed 16th September 1987 and entering into force on January 1st 1989, [became] the first universally ratified treaty in the entire history of the United Nations....
Much smarter people than I have spent the last 2 decades trying to understand exactly why it was such a resounding success, and let’s be clear here, I am just an idiot with a newsletter. But a couple of details stand out:
The agreement didn’t wait for all the science to be completely firmed up before implementing regulation - which is a good job, because early conclusions about ozone depletion levels were significantly underestimated. Instead, it adopted a “Precautionary Principle” that was enshrined in the Rio Declaration in 1992 - acting on likely evidence to avoid consequences that may be catastrophic or even irreversible if any delay is sought. (This is markedly different from how some politicians seem to think science should work - if their words can be believed, of course.)
Negotiations took place in small, informal groups, to give everyone the best chance of being heard and being understood. More than anything else, this reminds me of Dorsa Brevia, and how utterly exhausting that conference was for all the characters involved. Who knows how many such talks led to Montreal being accepted? But every one of them counted.
There was a clear economic benefit for the industries using CFCs to move away from them - not just on principle or to avoid public backlash, but because CFCs were old tech and therefore out of patent, and shifting to new alternatives would allow companies to develop ozone-friendly chemicals they could stick a profitable patent on.
And so the world was saved - just in time for its next challenge.
The plant's disappearance from Cwm Idwal is thought to have been driven by the Victorian fern-collecting craze known as 'Pteridomania', which stripped sites of rare species.
Its rediscovery suggests that the holly fern may be recolonising from spores carried within the national park, or that a hidden population survived undetected.
“This is a remarkable rediscovery," says Alastair Hotchkiss, the Botanical Society of Britain and Ireland’s Wales Officer. "The cliffs around Cwm Idwal are seriously challenging terrain for botanists to explore, but the fact that this species remained undetected for over a century and a half is a powerful reminder of how much we still have to learn about our upland flora – and how much we still have to protect.”
Today’s world requires us to make complex and nuanced decisions about our digital security. Evaluating when to use a secure messaging app like Signal or WhatsApp, which passwords to store on your smartphone, or what to share on social media requires us to assess risks and make judgments accordingly. Arriving at any conclusion is an exercise in threat modeling.
In security, threat modeling is the process of determining what security measures make sense in your particular situation. It’s a way to think about potential risks, possible defenses, and the costs of both. It’s how experts avoid being distracted by irrelevant risks or overburdened by undue costs.
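To make that risk/defense/cost framing concrete, here is a minimal Python sketch of a threat model as a data structure: each entry pairs a risk (with a rough likelihood and impact) against a defense and its cost, and a defense is adopted only when the expected harm outweighs that cost. Every name and number in it is invented for illustration; real threat modeling is messier and more qualitative than this.

```python
# A toy illustration of the trade-off described above: list risks, candidate
# defenses, and the cost of each, then keep only the defenses whose cost is
# justified by the risk they address. All figures are made up.
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    likelihood: float   # rough probability, 0.0 - 1.0
    impact: int         # how bad it would be, on a 1-10 scale
    defense: str
    defense_cost: int   # effort/inconvenience of the defense, 1-10

threats = [
    Threat("phone searched at border", 0.3, 9, "travel with a stripped-down phone", 2),
    Threat("poorly targeted ads", 0.9, 1, "block every tracker everywhere", 7),
    Threat("password reuse exposed in a breach", 0.5, 8, "use a password manager", 2),
]

# Adopt a defense only when expected harm outweighs its cost.
for t in threats:
    worth_it = t.likelihood * t.impact > t.defense_cost
    print(f"{t.name}: {'adopt' if worth_it else 'skip'} -> {t.defense}")
```

Running it, the irrelevant risk (bad ads) gets skipped while the cheap, high-value defenses get adopted, which is the whole point of the exercise: not to eliminate risk, but to spend effort where it matters.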
We threat model all the time. We might decide to walk down one street instead of another, or use an internet VPN when browsing dubious sites. Perhaps we understand the risks in detail, but more likely we are relying on intuition or some trusted authority. But in the U.S. and elsewhere, the average person’s threat model is changing—specifically involving how we protect our personal information. Previously, most concern centered on corporate surveillance; companies like Google and Facebook engaging in digital surveillance to maximize their profit. Increasingly, however, many people are worried about government surveillance and how the government could weaponize personal data.
Since the beginning of this year, the Trump administration’s actions in this area have raised alarm bells: The Department of Government Efficiency (DOGE) took data from federal agencies, Palantir combined disparate streams of government data into a single system, and Immigration and Customs Enforcement (ICE) used social media posts as a reason to deny someone entry into the U.S.
These threats, and others posed by a techno-authoritarian regime, are vastly different from those presented by a corporate monopolistic regime—and different yet again in a society where both are working together. Contending with these new threats requires a different approach to personal digital devices, cloud services, social media, and data in general.
For years, most public attention has centered on the risks of tech companies gathering behavioral data. This is an enormous amount of data, generally used to predict and influence consumers’ future behavior—rather than as a means of uncovering our past. Although commercial data is highly intimate—such as knowledge of your precise location over the course of a year, or the contents of every Facebook post you have ever created—it’s not the same thing as tax returns, police records, unemployment insurance applications, or medical history.
The U.S. government holds extensive data about everyone living inside its borders, some of it very sensitive—and there’s not much that can be done about it. This information consists largely of facts that people are legally obligated to tell the government. The IRS has a lot of very sensitive data about personal finances. The Treasury Department has data about any money received from the government. The Office of Personnel Management has an enormous amount of detailed information about government employees—including the very personal form required to get a security clearance. The Census Bureau possesses vast data about everyone living in the U.S., including, for example, a database of real estate ownership in the country. The Department of Defense and the Bureau of Veterans Affairs have data about present and former members of the military, the Department of Homeland Security has travel information, and various agencies possess health records. And so on.
It is safe to assume that the government has—or will soon have—access to all of this government data. This sounds like a tautology, but in the past, the U.S. government largely followed the many laws limiting how those databases were used, especially regarding how they were shared, combined, and correlated. Under the second Trump administration, this no longer seems to be the case.
The mechanisms of corporate surveillance haven’t gone away. Computer technology is constantly spying on its users—and that data is being used to influence us. Companies like Google and Meta are vast surveillance machines, and they use that data to fuel advertising. A smartphone is a portable surveillance device, constantly recording things like location and communication. Cars, and many other Internet of Things devices, do the same. Credit card companies, health insurers, internet retailers, and social media sites all have detailed data about you—and there is a vast industry that buys and sells this intimate data.
This isn’t news. What’s different in a techno-authoritarian regime is that this data is also shared with the government, either as a paid service or as demanded by local law. Amazon shares Ring doorbell data with the police. Flock, a company that collects license plate data from cars around the country, shares data with the police as well. And just as Chinese corporations share user data with the government and companies like Verizon shared calling records with the National Security Agency (NSA) after the Sept. 11 terrorist attacks, an authoritarian government will use this data as well.
The government has vast capabilities for targeted surveillance, both technically and legally. If a high-level figure is targeted by name, it is almost certain that the government can access their data. The government will use its investigatory powers to the fullest: It will go through government data, remotely hack phones and computers, spy on communications, and raid a home. It will compel third parties, like banks, cell providers, email providers, cloud storage services, and social media companies, to turn over data. To the extent those companies keep backups, the government will even be able to obtain deleted data.
This data can be used for prosecution—possibly selectively. This has been made evident in recent weeks, as the Trump administration personally targeted perceived enemies for “mortgage fraud.” This was a clear example of weaponization of data. Given all the data the government requires people to divulge, there will be something there to prosecute.
Although alarming, this sort of targeted attack doesn’t scale. As vast as the government’s information is and as powerful as its capabilities are, they are not infinite. They can be deployed against only a limited number of people. And most people will never be that high on the priorities list.
Mass surveillance is surveillance without specific targets. For most people, this is where the primary risks lie. Even if we’re not targeted by name, personal data could raise red flags, drawing unwanted scrutiny.
The risks here are twofold. First, mass surveillance could be used to single out people to harass or arrest: when they cross the border, show up at immigration hearings, attend a protest, are stopped by the police for speeding, or just as they’re living their normal lives. Second, mass surveillance could be used to threaten or blackmail. In the first case, the government is using that database to find a plausible excuse for its actions. In the second, it is looking for an actual infraction that it could selectively prosecute—or not.
Mitigating these risks is difficult, because it would require not interacting with either the government or corporations in everyday life—and living in the woods without any electronics isn’t realistic for most of us. Additionally, this strategy protects only future information; it does nothing to protect the information generated in the past. That said, going back and scrubbing social media accounts and cloud storage does have some value. Whether it’s right for you depends on your personal situation.
Beyond data given to third parties—either corporations or the government—there is also data users keep in their possession. This data may be stored on personal devices such as computers and phones or, more likely today, in some cloud service and accessible from those devices. Here, the risks are different: Some authority could confiscate your device and look through it.
This is not just speculative. There are many stories of ICE agents examining people’s phones and computers when they attempt to enter the U.S.: their emails, contact lists, documents, photos, browser history, and social media posts.
There are several different defenses you can deploy, presented from least to most extreme. First, you can scrub devices of potentially incriminating information, either as a matter of course or before entering a higher-risk situation. Second, you could consider deleting—even temporarily—social media and other apps so that someone with access to a device doesn’t get access to those accounts—this includes your contacts list. If a phone is swept up in a government raid, your contacts become their next targets.
Third, you could choose not to carry your device with you at all, opting instead for a burner phone without contacts, email access, and accounts, or go electronics-free entirely. This may sound extreme—and getting it right is hard—but I know many people today who have stripped-down computers and sanitized phones for international travel. At the same time, there are also stories of people being denied entry to the U.S. because they are carrying what is obviously a burner phone—or no phone at all.
Encryption protects your data while it’s not being used, and your devices when they’re turned off. This doesn’t help if a border agent forces you to turn on your phone and computer. And it doesn’t protect metadata, which needs to be unencrypted for the system to function. This metadata can be extremely valuable. For example, Signal, WhatsApp, and iMessage all encrypt the contents of your text messages—the data—but information about who you are texting and when must remain unencrypted.
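To illustrate that data/metadata split, here is a toy Python sketch (using the third-party cryptography package; this is not how Signal, WhatsApp, or iMessage are actually built): the body of a message is encrypted, but the envelope the service needs for routing, the sender, recipient, and timestamp, stays readable. The addresses are hypothetical.

```python
# Toy sketch of an encrypted message: the body is ciphertext, but the
# envelope fields the system needs for delivery remain plaintext.
# Requires the third-party 'cryptography' package.
import time
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # shared secret, for illustration only
cipher = Fernet(key)

message = {
    # Metadata the service must read to deliver the message at all:
    "sender": "alice@example.org",       # hypothetical addresses
    "recipient": "bob@example.org",
    "timestamp": int(time.time()),
    # The data itself, opaque without the key:
    "body": cipher.encrypt(b"meet at the usual place at 6"),
}

print(message["sender"], "->", message["recipient"], "at", message["timestamp"])
print("ciphertext:", message["body"][:16], "...")
print("decrypted by the recipient:", cipher.decrypt(message["body"]).decode())
```

Anyone who can see the envelope learns who talked to whom and when, even though the body is unreadable without the key, which is exactly why metadata is so valuable to a surveillance system.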
Also, if the NSA wants access to someone’s phone, it can get it. Encryption is no help against that sort of sophisticated targeted attack. But, again, most of us aren’t that important and even the NSA can target only so many people. What encryption safeguards against is mass surveillance.
I recommend Signal for text messages above all other apps. But if you are in a country where having Signal on a device is in itself incriminating, then use WhatsApp. Signal is better, but everyone has WhatsApp installed on their phones, so it doesn’t raise the same suspicion. Also, it’s a no-brainer to turn on your computer’s built-in encryption: BitLocker for Windows and FileVault for Macs.
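If you want to confirm that built-in encryption is actually on, the usual status commands are fdesetup status for FileVault on macOS and manage-bde -status for BitLocker on Windows (the latter generally needs an elevated prompt). The small Python sketch below just wraps those two commands; output formats vary by OS version, so treat it as a starting point rather than a definitive check.

```python
# A small sketch for checking whether built-in full-disk encryption is on,
# by calling the platform's own status command. Output formats vary, and
# the Windows command usually requires administrator rights.
import platform
import subprocess

def disk_encryption_status() -> str:
    system = platform.system()
    if system == "Darwin":
        out = subprocess.run(["fdesetup", "status"],
                             capture_output=True, text=True)
        return out.stdout.strip()            # e.g. "FileVault is On."
    if system == "Windows":
        out = subprocess.run(["manage-bde", "-status", "C:"],
                             capture_output=True, text=True)
        return out.stdout.strip()            # look for "Protection On"
    return f"No check implemented here for {system}."

if __name__ == "__main__":
    print(disk_encryption_status())
```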
On the subject of data and metadata, it’s worth noting that data poisoning doesn’t help nearly as much as you might think. That is, it doesn’t do much good to add hundreds of random strangers to an address book or bogus internet searches to a browser history to hide the real ones. Modern analysis tools can see through all of that.
This notion of individual targeting, and the inability of the government to do that at scale, starts to fail as the authoritarian system becomes more decentralized. After all, if repression comes from the top, it affects only senior government officials and people whom those in power personally dislike. If it comes from the bottom, it affects everybody. And decentralization looks much like the events playing out with ICE harassing, detaining, and disappearing people—everyone has to fear it.
This can go much further. Imagine there is a government official assigned to your neighborhood, or your block, or your apartment building. It’s worth that person’s time to scrutinize everybody’s social media posts, email, and chat logs. For anyone in that situation, limiting what you do online is the only defense.
This is vital to understand. Surveillance systems and sorting algorithms make mistakes. This is apparent in the fact that we are routinely served advertisements for products that don’t interest us at all. Those mistakes are relatively harmless—who cares about a poorly targeted ad?—but a similar mistake at an immigration hearing can get someone deported.
An authoritarian government doesn’t care. Mistakes are a feature and not a bug of authoritarian surveillance. If ICE targets only people it can go after legally, then everyone knows whether or not they need to fear ICE. If ICE occasionally makes mistakes by arresting Americans and deporting innocents, then everyone has to fear it. This is by design.
For most people, phones are an essential part of daily life. If you leave yours at home when you attend a protest, you won’t be able to film police violence. Or coordinate with your friends and figure out where to meet. Or use a navigation app to get to the protest in the first place.
Threat modeling is all about trade-offs. Understanding yours depends not only on the technology and its capabilities but also on your personal goals. Are you trying to keep your head down and survive—or get out? Are you wanting to protest legally? Are you doing more, maybe throwing sand into the gears of an authoritarian government, or even engaging in active resistance? The more you are doing, the more technology you need—and the more technology will be used against you. There are no simple answers, only choices.
And I wonder whether small or even large earthquakes have been noticed in the vicinity of Fishkill.
‘Who Am I Without Birth Control?’:
Ms. Hamrick, who was 26 at the time, felt normal. No unusual weight gain, no mood swings. But a couple of questions had wormed their way into her mind and lodged themselves there: Who am I without birth control? Will I feel some sort of difference coming off it?

Ms. Hamrick had started taking birth control pills a decade earlier, when she was 15. Now, as she browsed her social media feeds, she kept stumbling on videos of women saying how much better they felt when they stopped taking the pills, content she wasn’t seeking out. The posts typically went like this: a glowing blonde in a workout top — the picture of health! — saying that she had stopped taking birth control pills and immediately felt more clarity of mind. Like an emotional fog had lifted, like she was a brand-new, much happier person.

Ms. Hamrick’s doctor was clear with her. If she wasn’t experiencing any side effects, there was no reason to stop taking birth control. Ms. Hamrick wasn’t so sure. The more videos about the pill she watched, the more skeptical she became, and the more she felt drawn toward experimenting. She was, after all, in a moment of change. She had moved, on a whim, from Indiana to Texas. Soon after settling near Houston she met a guy and they started dating, then looking at engagement rings.
Just over a year since Ms. Hamrick decided to stop taking the pills, she has figured out who she is without birth control: She is a mother. Her baby is four months old.
Okay, my own history with the Pill was not wonderful, but I do wonder if the doc I saw at the Migraine Clinic was just a bit too invested in biochemical explanations (in particular, I discovered later that she got very into The Awful Effects of the Pill over a range of factors) rather than, um, things going on more generally in my life. Because going off the Pill may have brought about some temporary alleviation (don't honestly remember) but not much, really.
Anyway, it is probably a bit of an exaggeration to say, this is like going off the TB drugs to experience the full Consumptive Experience (and I have no doubt that there are people around in thrall to the Myth, and it is a myth, of Syphilitic Geeenyus: Sid is falling about larfing liek drayne). But honestly. 'Pure' 'Natural' I spit on that.
On 'pure', I like this on the 'pure bloodlines' mythos Alot: Claims of pure bloodlines? Ancestral homelands? DNA science says no.
And on The Miracles of Modern Science: Huntington’s disease treated successfully for first time in UK gene therapy trial:
The disease, caused by a single gene defect, steadily kills brain cells leading to dementia, paralysis and ultimately death. Those who have a parent with Huntington’s have a 50% chance of developing the disease, which until now has been incurable.
The gene therapy slowed the progress of the disease by 75% in patients after three years.
[H]omegrown remedies from locally gathered plants – defined here as ‘local herbalism’ – were still being used to address both simple and complex healthcare needs.