Largest and most powerful rocket ever built blasts off on test flight that is hoped to be step on human journey to Mars
The largest and most powerful rocket ever built has blasted off from Texas but blew up within minutes in a test flight that its makers, SpaceX, hope will be the first step on a human journey to Mars.
After a cancelled launch earlier this week because of a pressurisation issue, the 120-metre
rocket system took off at 8.33am local time (2.33pm in the UK) on Thursday. It gathered speed, but then started to spin at altitude before exploding about four minutes after leaving the ground.
Cleared the Pad
For a hot minute, it looked like SpaceX had done the seemingly impossible.
The space company's gigantic Starship prototype spacecraft and Super Heavy booster officially cleared the launch pad this morning at the company's South Texas testing facilities, an epic conclusion to many years of development.
It was a spectacular sight, given the sheer size of the rocket. The stainless steel tower lifted off at a slight angle, igniting dozens of Raptor rocket engines at once.
But several minutes into its maiden voyage — the stack reached a height of just over 24 miles — the 400-foot tower started spinning uncontrollably and eventually exploded in a huge cloud of gas, likely the result of the rocket's self-destruct system.
It was a sobering failure for the Elon Musk-led venture. It's still unclear what exactly led to the rocket's early demise, but it was still one hell of a test flight — and one that SpaceX can likely learn a lot from.
Rapid Unscheduled Disassembly
During the live stream, SpaceX principal integration engineer John Insprucker referred to the event as a "rapid unscheduled disassembly," a tongue-in-cheek term for an explosion.
But there's plenty to be learned from the "anomaly."
"This was a developmental test, the first test flight of Starship, and the goal was to gather the data, clear the pad, and go again," he added. "Excitement is guaranteed."
SpaceX CEO Elon Musk, however, appeared far less amused during the live stream, and did little to hide his disappointment while blankly staring at the screen ahead of him.
Fortunately, the launch site is still standing, meaning that SpaceX may be able to try again in the not-so-distant future.
Despite the setback, the launch could still set the stage for a new era in space exploration, a proof of concept of a reusable ultra-heavy-launch vehicle that could return humans to the surface of the Moon and even deliver them to Mars.
"Welcome to the Starship era, humanity," Ars Technica's Eric Berger tweeted. "It began with a bang, as big things often do. The universe awaits."
More on Starship: SpaceX Fails to Launch Mighty Starship
The post Starship Launches First Orbital Attempt, Explodes in Epic Fireball appeared first on Futurism.
Hello! I'm seeking help for an issue I have where I am unable to keep a still imagination in my head. For example, if I were to imagine being in my bedroom, I would subsequently impulsively switch perspectives to a different angle of the room or just simply move around without consciously commanding my mind to it. Does anyone have any mitigating methods or explanations for this phenomenon? Thank you!
SpaceX managed to get its Starship spacecraft, the most powerful rocket ever built, off the launchpad at its South Texas testing facilities.
But getting it into orbit proved to be far more difficult, with the 400-foot rocket and booster stack tumbling hopelessly through the air minutes after launch, before ending in a massive fireball.
While that kind of early demise should be expected from SpaceX, a company that has blown up countless prototypes over the last couple of years, it's hard to ignore the very particular date that it chose for Starship's first orbital attempt.
That's right: it was April 20, a date imbued with meaning for CEO Elon Musk. As such, it's hard not to wonder whether the mercurial leader pushed SpaceX to launch the rocket on a meme date before it was fully ready. Did his childish obsession with the date lead to the explosion?
It's pure speculation, but not terribly far-fetched. After all, this is a guy who bought Twitter and then slapped a shiba inu on its home page due to an ancient meme — and, for that matter, covered up the "W" on the company's headquarters to make yet another crude joke.
Musk's obsession with the date, which stoners celebrate around the world, goes way back. In a fateful 2018 tweet, Musk claimed that he was "considering taking Tesla private at $420," which eventually led to several lawsuits and endless drama with investors and regulators.
He even managed to squeeze the three digits into his disastrous bid to buy Twitter last year.
Even Starship's development hasn't been spared, with the company stacking its Starship prototype, dubbed Ship 20, on top of its Super Heavy booster prototype, dubbed Booster 4 — the first time the two were mated to form a 400-foot tower of stainless steel.
The company even placed two Raptor engines next to each other while building a preceding Super Heavy booster prototype, lining up engine "RB4" right next to "RB20."
Musk's infatuation with the number clearly runs deep — which makes us wonder: did he force SpaceX employees to rush ahead and have the first orbital launch attempt fall on that date?
Circumstantially, it feels plausible. Just days ago, insiders at the company were saying that a 4/20 launch was impossible.
But with the boss relentlessly pushing the tired joke, it's not hard to imagine that voices of caution within the company were pushed aside to make the date.
Of course, plenty of other factors likely went into the decision as well, like the Federal Aviation Administration granting the company a launch license last week. A first launch attempt on Monday, which was scrubbed due to a frozen valve, took place several days before April 20 as well.
But we still wouldn't put it past the billionaire CEO. It certainly wouldn't be the first time he's screwed up over feeble attempts at humor — which, unfortunately for him, just don't seem to be getting many laughs.
More on the launch: Starship Launches First Orbital Attempt, Explodes in Epic Fireball
The post Did Elon's Childish Obsession With 4/20 Lead to Today's Starship Explosion? appeared first on Futurism.
Researchers have found a system in the brain that seems to integrate control of individual muscles with a person's intentions, emotions, and entire body.
(Image credit: Melinda Podor / Getty Images)
- The government lifted its zero-covid restrictions in December, reopening the country to pent-up demand.
- Elon Musk said he would launch a new artificial-intelligence platform called TruthGPT as a rival to ChatGPT and other generative-AI bots, somewhat contradicting his recent call for a moratorium on developing such technology.
British Library, London
From a medieval monk mixed with a fish to the call of an extinct Hawaiian bird, this entertaining show revels in nature’s marvels – real or otherwise
In 1255, the King of France gave Henry III of England an elephant; a sensation for medieval eyes that drew crowds to the royal menagerie at the Tower of London, including the artist, chronicler and Benedictine monk Matthew Paris. The picture Paris drew from life shows with clarity how the elephant has its leg tied to a post, how it stands imprisoned and wearily spurts water from its trunk. He shades the ridges and rumples on its vast body, as he tries to accurately depict this creature that’s stepped out of fable.
This 13th-century portrait of an elephant encapsulates the paradoxical delights of the British Library’s cornucopia of animal art. To medieval folk, an elephant was a monstrous legendary beast from their myths of faraway lands – yet Paris pins this fantastic being to reality, tying it down with his objective gaze. From this early attempt at scientific natural history, to a tiny drawing of a bird in flight from Leonardo da Vinci’s Codex Arundel, to Ludwig Koch’s pioneering 1953 gramophone record of British bird songs, Animals explores how human beings have sought to observe and understand our fellow species. Yet it also revels in the fabulous, impossible dreams we have made of them.
Commercially available smartwatches and phones can capture key features of early, untreated Parkinson’s disease, according to a new study.
These technologies could provide researchers with more objective and continuous ways to measure the disease and bring new treatments to market faster, particularly for patients in the early stages of the disease.
“This research shows that readily accessible and ubiquitous technology has the potential to detect and objectively measure severity and potentially progression of important symptoms of Parkinson’s disease,” says Jamie Adams, a neurologist at the University of Rochester Medical Center, and first author of the study in npj Parkinson’s Disease.
While Parkinson’s is the world’s fastest growing neurological disease, most of the drugs used to treat it were developed in the last century. The complexity of the disease and the limitations of current measures have been barriers to new therapies.
Onset and severity of symptoms—such as stiffness in the arms and legs, movement and walking difficulty, and tremors—and progression of the disease can vary significantly from patient to patient.
Furthermore, the tools traditionally used to track the disease are subjective and episodic, e.g. only collected during visits to the clinic, limiting insight into how Parkinson’s disease affects people’s daily lives.
In 2015, URMC neurologist Ray Dorsey and the URMC Center for Health + Technology (CHeT) team collaborated with Sage Bionetworks to launch the first smartphone research application to monitor Parkinson’s disease in real time.
Apple featured the app, called mPower, during their semi-annual product launch event, and 15,000 individuals participated in research using the application. Studies have shown that mPower and another Android app can accurately track the severity of the symptoms of Parkinson’s disease. Dorsey is senior author of the new smartwatch study.
Since the launch of mPower, the proliferation of smartwatches and technological improvements, particularly in gyroscopes and accelerometers that can more precisely measure movement, has heightened the research potential of these devices.
In the new WATCH-PD study, researchers at multiple sites across the United States recruited 82 individuals with early, untreated Parkinson’s and 50 age-matched controls and followed them for 12 months.
The study volunteers wore research-grade sensors, an Apple Watch, and an iPhone while performing standardized assessments in clinic. At home, participants wore the smartwatch for seven days after each clinic visit and completed motor, speech, and cognitive tasks on the smartphone every other week.
The smartphone app tracked finger-tapping speed, performance on cognitive tasks, and speech, while the smartwatch was able to measure arm movement, duration of tremors, and gait features.
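The study's own algorithms aren't described here, but the kind of tremor measurement a wrist-worn accelerometer enables can be sketched with a few lines of signal processing: parkinsonian rest tremor typically oscillates at roughly 4–6 Hz, so a spectral peak in that band is a crude indicator. A minimal illustrative sketch (the function name, sampling rate, and synthetic signal are all assumptions, not the WATCH-PD pipeline):

```python
import numpy as np

def dominant_frequency(accel, fs):
    """Return the dominant frequency (Hz) of an accelerometer trace.

    A peak in the 4-6 Hz band is a crude indicator of parkinsonian
    rest tremor; this is illustrative, not the study's actual method.
    """
    accel = accel - np.mean(accel)                  # drop gravity/DC offset
    spectrum = np.abs(np.fft.rfft(accel))           # magnitude spectrum
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

# Synthetic example: a 5 Hz "tremor" plus noise, sampled at 50 Hz for 10 s
fs = 50
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1.0 / fs)
signal = 0.3 * np.sin(2 * np.pi * 5.0 * t) + 0.05 * rng.standard_normal(t.size)
print(dominant_frequency(signal, fs))  # ≈ 5.0
```

A real pipeline would also need windowing, band-pass filtering, and per-axis handling, but the core idea — turning raw wrist motion into an objective, continuous number — is what distinguishes these devices from episodic clinic ratings.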
The researchers were able to detect motor and non-motor features that differed between individuals with early Parkinson’s and age-matched controls. The team is performing longitudinal analyses and also conducting a study that will follow participants for a longer period to determine which digital measures are sensitive enough to help researchers evaluate whether an experimental therapy is making a meaningful impact on the progression of the disease.
“These findings reinforce what other studies have shown—digital devices can differentiate between people with and without early Parkinson’s and are more sensitive than traditional rating scales for some measures of Parkinson’s disease,” says Adams.
For example, the researchers note that the smartphone app detected abnormalities in speech rated “normal” by investigators. “Better measures will lead to more efficient, patient-centric, and timely evaluation of therapies.”
Additional coauthors are from Harvard Medical School, the Bill and Melinda Gates Foundation, Takeda Pharmaceuticals, Invariant Research Limited, AbbVie Pharmaceuticals, Clinical Ink, and the University of Rochester Medical School.
Biogen, Takeda, and members of the Critical Path for Parkinson’s Consortium 3DT Initiative funded the work.
Source: University of Rochester
The post Your smartwatch could detect early Parkinson’s signs appeared first on Futurity.
Nature, Published online: 20 April 2023; doi:10.1038/d41586-023-01378-2. Long-awaited choice comes more than a year after Francis Collins resigned as director of the largest public funder of biomedical research in the world.
Nature, Published online: 20 April 2023; doi:10.1038/d41586-023-01379-1. Under new proposals, ministerial intervention would be limited to projects where national security is at stake.
Nature, Published online: 20 April 2023; doi:10.1038/d41586-023-01377-3. The SpaceX rocket made it partially through its first full test. It could change astrophysics and astronomy, as well as ferry people to the Moon and Mars.
Nature Communications, Published online: 20 April 2023; doi:10.1038/s41467-023-37705-4. The authors characterized a fold in the
The largest and most powerful rocket ever built blasted off from Texas but blew up within minutes, in a test flight that its makers, SpaceX, hope will be the first step on a human journey to Mars. After a cancelled launch earlier this week due to a pressurisation issue, the 120-metre
rocket system took off at 8.33am local time on Thursday. It gathered speed but then started to spin at altitude before exploding about four minutes after leaving the ground. It appeared that the two sections of the rocket system – the booster and cruise vessel – were unable to separate properly after takeoff, possibly causing the spacecraft to fail
Scientific Reports, Published online: 20 April 2023; doi:10.1038/s41598-023-32795-y. Evaluation of the registry DIALYREG for the assessment of continuous renal replacement techniques in the
SpaceX's new stainless-steel rocket named Starship exploded Thursday just four minutes after liftoff. In a statement, the company said, "with a test like this, success comes from what we learn."
Nature, Published online: 20 April 2023; doi:10.1038/d41586-023-01281-w. The proliferation of miniature satellites — and a possible switch to iodine exhaust — could have unintended consequences.
A process to make paper bags stronger—especially when they get wet—could make them a more viable alternative to single-use plastic bags, a new study shows.
The study suggests a way to create paper bags that are durable enough to be used multiple times and then broken down chemically by an alkaline treatment to be used as a source for biofuel production, says Daniel Ciolkosz, associate research professor of agricultural and biological engineering at Penn State.
“When the primary use of these paper products ends, using them for secondary purposes makes them more sustainable,” he says. “Recycling and reducing paper waste also helps in reducing total solid waste destined for landfills. This is a concept we think society should consider.”
Lead researcher Jaya Tripathi, who will graduate from Penn State this spring with a doctoral degree in biorenewable systems, devised an innovative process in which cellulose in paper is torrefied, or roasted in an oxygen-deprived environment, to greatly increase its tensile strength when wet.
Paper bags are a popular alternative to plastic bags to reduce the environmental impacts caused by using plastics, she explains, but paper bags have short lifespans due to their low durability, particularly when wet.
And a paper bag must be reused several times to reduce its global-warming potential to below that of the conventional high-density polyethylene bag, Tripathi adds.
“Reuse is mainly governed by bag strength, and it is unlikely that a typical paper bag can be reused the required number of times due to its low durability upon wetting,” she says.
“Using expensive chemical processes to enhance wet strength diminishes paper’s ecofriendly and cost-efficient features for commercial application, so there is a need to explore non-chemical techniques to increase the wet strength of paper bags. Torrefaction could be the answer.”
Because torrefaction decreases the glucose yield in the paper, Tripathi then treated the paper with a solution of sodium hydroxide, also known as lye or caustic soda, that increased its glucose yield, making it a better source for biofuel production.
In findings published in Resources, Conservation and Recycling, using filter paper as the medium, the researchers reported that the wet-tensile strength of the paper increased by 1,533%, 2,233%, 1,567%, and 557% after torrefaction for 40 minutes at 392 degrees Fahrenheit, 428 F, 464 F, and 500 F, respectively.
Glucose yield decreased with increased torrefaction severity, but after treating torrefied paper samples with an alkaline sodium hydroxide solution, glucose yield increased, the researchers say.
For instance, the glucose yield of raw filter paper was 955 mg/g of substrate, whereas it was 690 mg/g of substrate for the same paper sample torrefied at 392 F. The glucose yield increased to 808 and 933 mg/g of substrate with 1% and 10% alkaline treatment, respectively.
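Those yield figures are easier to compare side by side. A quick tabulation (numbers transcribed from the article; the script is just illustrative arithmetic, not part of the study):

```python
# Glucose yields in mg per g of substrate, as reported in the article
yields = {
    "raw filter paper":     955,
    "torrefied at 392 F":   690,
    "torrefied + 1% NaOH":  808,
    "torrefied + 10% NaOH": 933,
}

baseline = yields["raw filter paper"]
for label, y in yields.items():
    # Express each yield as a fraction of the untreated-paper baseline
    print(f"{label}: {y} mg/g ({y / baseline:.0%} of raw-paper yield)")
```

The comparison makes the trade-off concrete: torrefaction alone costs about a quarter of the glucose yield, but the 10 percent alkaline treatment recovers it to roughly 98 percent of the raw-paper baseline.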
The need for a concept like the one demonstrated by the researchers to replace plastic bags is obvious, Tripathi points out.
According to the UN Environment Programme, 5 trillion plastic bags are produced worldwide annually. It can take up to 1,000 years for these bags to disintegrate completely. Americans throw away 100 billion bags annually—the equivalent of dumping nearly 12 million barrels of crude oil.
“By switching to stronger, reusable paper shopping bags, we could eliminate much of that waste,” Tripathi says. “The implications of a technology like the one we demonstrated in this research—if it can be perfected—including using the worn-out bags as a substrate for biofuel production, would be huge.”
Like many scientific discoveries, Tripathi learned about the synergy of torrefaction and alkaline treatment for increased paper capabilities by accident.
“I was looking into something else, studying how torrefaction impacts cellulose for glucose yield for use as a biofuel substrate,” she says. “But I noticed that the paper’s strength was increasing as we torrefied the cellulose. That made me think that it probably would be good for packaging, an entirely different application.”
The US Department of Agriculture’s National Institute for Food and Agriculture funded the work.
Source: Penn State
The post Team makes paper bags stronger, even when wet appeared first on Futurity.
Two of the top news stories in recent weeks—the Manhattan district attorney’s criminal indictment in People of the State of New York v. Donald Trump and the three-quarter-billion-dollar settlement in Dominion Voting Systems v. Fox News Network—may seem like independent affairs, but they are parts of one bigger story. That story is how former President Trump has been able to control what information is available to the public, as he has repeatedly done in an effort to aggrandize and cling to his own power. His willing helpers were media companies, but they were not acting as news organizations. The National Enquirer deliberately generated false information and hid true information from the public as part of a scheme to secure Trump’s grip on political power. Fox aired false claims and questioned true ones as it sought to placate Trump’s supporters. Together they have succeeded in polluting the marketplace of ideas in which democratic politics is supposed to thrive.
[John Hendrickson: Inside the Manhattan criminal court with Donald Trump]
But law and litigation have helped bring this story to light. The courts—a place where facts still matter—have shown a path to catching up with the wrongdoers, but worryingly, these cases could also be treated as a guide for future collusive actors keen on engaging in disinformation while sidestepping exposure.
Manhattan D.A. Alvin Bragg’s criminal case—beyond the headlines about the first-ever charge against a former president and the hush money paid to a porn star—is about the capitulation of the National Enquirer to Trump’s 2016 campaign. The two were a natural pair, as seen in the Bragg indictment and statement of facts: a popular media outlet conspiring with a campaign to suppress stories that could damage the candidate’s chances of being elected. The proof of such an unholy alignment is direct: David Pecker, the head of the National Enquirer, admitted to the Department of Justice that he, Trump, and Trump’s fixer Michael Cohen engaged in a catch-and-kill scheme for stories about alleged affairs and one-night stands involving Trump, with the goal of keeping such information from voters. It was a smart move. After all, it had been the Enquirer that broke the 2007 story of then–Democratic Senator John Edwards’s affair, which ended that presidential aspirant’s political career.
The arrangement with Pecker went further than killing negative stories about Trump, to encompass promoting negative stories about Trump’s Republican and Democratic adversaries. On Trump’s road to the GOP presidential nomination, the Enquirer published more than 60 stories attacking his political opponents.
This scheme thus bore an uncanny resemblance to the contemporaneous Russian disinformation efforts, which also promoted Trump and denigrated his Republican primary adversaries while attacking 2016 Democratic front-runner Hillary Clinton and praising her challengers, including Senator Bernie Sanders. (All of this is outlined in the federal indictment of the Internet Research Agency and the Russians helping run that company.) Similarly, the Federal Election Commission sanctioned American Media Inc., the National Enquirer’s parent company, for its illegal interference in the 2016 election. AMI agreed to pay $187,500 in fines after the FEC’s nonpartisan staff found that the catch-and-kill arrangement, in coordination with Trump and Cohen, violated federal campaign-finance law.
That we know of the AMI-Trump alliance is fortuitous. The coordination of a candidate with a powerful media outlet in the lead-up to an election would not have necessarily come to light but for the secret payment of money to kill adverse stories—payments that attracted the attention of federal and state prosecutors and led to Cohen’s conviction for campaign-finance charges and Trump’s indictment on charges of falsifying business records. If Trump and AMI had been content to publish favorable stories about Trump and derogatory pieces about his adversaries—regardless of truth—without the element of hush money, we might be none the wiser as to this systemic corruption of our electoral process.
A very similar fortuity revealed an even more pervasive systemic corruption of our electoral process, one involving Trump’s collusion with another media outlet, this one far more influential. Fox Corporation and its subsidiary Fox News would have avoided their recent legal troubles if they had steered clear of targeting Dominion Voting Systems, a private company with enough resources to sue. If their election-fraud claims had been more diffuse and focused on unspecified figures or governments, the now-infamous Fox emails and texts may have never come to light.
But because Dominion brought its civil suit with its attendant right to discovery of Fox’s internal communications, the public now can see an effort—strikingly similar to that by AMI—to curry favor with one and only one candidate, in spite of many Fox employees’ private antipathy toward the man. The Dominion suit revealed how Fox actively sought to promote Trump’s effort to stay in office after losing the 2020 presidential election. Like AMI, Fox became the amanuensis of the then-president, regurgitating nightly his election-fraud claims, lies that plenty of people at Fox, up to and including News Corp Executive Chairman Rupert Murdoch, disbelieved. Such efforts included direct coordination with the White House and Trump campaign, reminiscent of the direct coordination between Sean Hannity and Paul Manafort years earlier.
Indeed, the judge overseeing the Dominion suit found that the plaintiff had so overwhelmingly established certain facts, there was no need for the jurors to consider them at trial. One such fact: Dominion did not tamper with any election results, and any claims to that effect by Fox were false. The court went further in its pretrial rulings with respect to the role Fox played in promoting this falsehood. Remarkably, the court precluded Fox from trying to convince jurors—had a trial taken place—that it was merely reporting the news.
[Read: Stormy Daniels’s oh-so-familiar story]
This is yet another aspect that aligns the two cases. In the defamation lawsuit, Fox had argued that regardless of whether the network believed Trump’s election-fraud claims, they were news and thus it had a responsibility to report them. The court rejected this argument, not because such a defense could not be valid in theory, but because that defense was not factually supported in this case. As Murdoch himself admitted, the Fox anchors were not impartially reporting; they were “endorsing” Trump’s claims.
Similarly, AMI claimed in front of the Federal Election Commission that it was covered by the so-called press exemption, which holds that “any news story, commentary, or editorial distributed through the facilities of any broadcasting station, newspaper, magazine, or other periodical publication” does not count as an expenditure on a political campaign. Relying primarily on AMI’s own statements to the Justice Department, the FEC easily rejected this claim. The company “disclaim[ed] a journalistic or editorial purpose” by admitting that it had made the hush-money payments for the express purpose of assisting the Trump campaign, the FEC legal staff explained.
In short, these conclusions of the federal court in the Fox case and the FEC and Bragg in the hush-money case underscore the nature of this threat to American democracy. In both these cases, the most damning thing is not their outcome—Dominion’s settlement or a Manhattan jury’s eventual verdict—but their revelations.
When national media companies pollute the information environment in collusion with a political campaign, the question becomes whether American institutions and the legal system can adequately respond. The courts may hinder Trump, or for that matter any politician with autocratic leanings, from colluding with media companies. But the worrying messages to such politicians may be to avoid mischaracterizing or paying hush money altogether and to avoid defaming a company with deep pockets when promulgating the next big lie. Bragg and Dominion may win their battles, but the electorate may lose the war.
As an undergraduate at the University of Chile, Bernardo Subercaseaux took a dim view of using computers to do math. It seemed antithetical to real intellectual discovery. “There’s some instinct or gut reaction against using computers to solve your problems, like it goes against the ideal beauty or elegance of a fantastic argument,” he said. But then in 2020 Subercaseaux fell in love…
Headlines about climate change have filled newsfeeds over the last few years, ranging from catastrophic (natural disasters, endangered species, dire predictions for the future) to a bit more optimistic (electrification, the transition to renewable energy, climate tech advances). The content we see and read plays a key role in shaping our opinions about climate change, but it remains a contentious topic. Is it real? Are humans causing it? How bad is it really? And what’s likely to happen in the future?
A survey carried out by the Energy Policy Institute at the University of Chicago (EPIC) and The Associated Press–NORC Center for Public Affairs Research aimed to find out how Americans really feel about climate change. The results were released over the last couple weeks in anticipation of Earth Day on April 22. In addition to general questions about climate change, the survey asked people about their views on energy policy and electric vehicles.
A total of 5,408 adults completed the survey between January 31 and February 15 of this year. There were respondents from all 50 US states, and they varied in age, race, gender, and education level.
In a nutshell, here’s what the survey found: Americans believe climate change is happening, but they’re not terribly worried about it, and are mostly not willing to spend money or go out of their way to help fix it.
Believers, Sort Of
74 percent of the survey respondents said they believe climate change is real. However, less than half—49 percent—believe it’s being caused by human activities (as opposed to natural changes in the environment). That 49 percent is down from 60 percent the last time this survey was carried out, in 2018. The change in viewpoint was uniform across education levels, from college graduates to those who stopped studying after high school. However, more people in the 18 to 29 age group changed their view than did those aged 60 or older.
In terms of actually taking action, more than half of respondents said they’re already trying to reduce their energy consumption (though this is likely as much an effort to keep energy bills down as it is to help the environment). Some of the ways people are doing so include using energy-efficient appliances (68 percent), turning off unnecessary lights (89 percent), using less paper and plastic (58 percent), eating less meat (37 percent), and using less heat and air conditioning (60 percent). These are relatively easy, low-cost actions that most anyone can take.
Fewer people are opting in to pricier climate-friendly actions, like putting solar panels on their home (11 percent), buying an electric or hybrid vehicle (12 percent), or getting electricity through a supplier that uses renewable sources (25 percent).
Hard to Cough Up the Cash
It seems that much of Americans’ willingness to help combat climate change comes down to economics. Almost two-thirds of those surveyed said they weren’t willing to pay any amount of money to combat climate change—not even $1 a month. 38 percent would pay $1 a month, and 21 percent would pay $100 a month.
How much people are willing to pay is likely more a function of their disposable income than of their concern over the environment. However, peoples’ willingness to shell out any amount of money, whether $1 or $100, decreased about 10 percent between 2021 and the present. This is likely because of the financial squeeze put on so many people by the pandemic and rising inflation; when you’re worried about making rent or buying groceries, helping the planet isn’t going to be high on your list.
“It’s striking that Americans’ willingness to pay even a $1 monthly fee to combat climate change fell to below half of respondents—the lowest level since we began tracking this data,” said Michael Greenstone, director of EPIC and an economics professor at the University of Chicago. “Americans’ willingness to pay for climate policy is far below what research projects climate change will cost society per ton of CO2 emissions.”
Similarly, 41 percent of people said they would buy an electric vehicle—if the long-term savings on gas and maintenance added up to more than the higher up-front cost of the car (cost was the biggest barrier to buying an EV). Those most likely to buy one are under 45 years old, live on the west coast in urban areas, and have high incomes. Unsurprisingly, people don’t want to be pushed into buying electric cars; just 35 percent support stricter fuel efficiency standards to encourage EV sales, and 27 percent are in favor of requiring new car sales to be electric or hybrid by 2035.
Help From Uncle Sam
Based on these responses, it seems we’re likely to find ourselves in a bit of a pickle in coming years. Despite believing in climate change, most Americans aren’t up for throwing much money at it. This must be partly due to the tough economic times we’re in; inflation and interest rates have soared, and whispers of an impending recession have been circulating for months.
But it’s also a sign that even once the economy improves and people feel more secure in their finances, real progress likely won’t be made without significant government intervention—that is, subsidies, regulation, and incentives. These need to be carefully balanced with practical concerns and realism, which can be a tall order.
Image Credit: Wikimedia Commons
Nature Communications, Published online: 20 April 2023; doi:10.1038/s41467-023-38059-7. Polycyclic indolines are valuable skeletons in drug discovery. Here, the authors report an asymmetric dearomative [3 + 2] annulation of indoles with aminocyclopropanes to construct tricyclic indolines.
During the Munich Electronics Show (Shanghai), held from April 13 to 15, NAAS Technology Inc. (NASDAQ: NAAS) unveiled its automatic charging robot at the company's booth.
This automatic charging robot focuses on mobile automatic charging and settlement: users can place an order and complete charging with one click. Even in a parking lot with a complex layout, it can combine multi-sensor perception and stereo vision with algorithm-based route planning to get the job done.
Thanks to its mobility, the automatic charging robot will serve as an effective supplement to fixed charging piles, freeing new energy vehicles from space constraints and bringing more convenient means of energy replenishment to more parking scenarios. In addition, it is both a charging robot and an energy storage robot.
submitted by /u/ilovekerma
If this is the wrong place to post this then I don't know where.
The universe is a machine operating on mathematical patterns, fueled by chaos. Everything in existence is fundamentally part of a web. The structure of neurons makes a web. The big bang was a single point of matter that expanded into a universe, with a trajectory that makes up a web. Strings of gravity pull objects together toward a central point, forming solar systems and galaxies. Even abstract concepts, like the formation of ideas and chains of memories, are all a web. Time, and the infinite possible alternative scenarios that could occur, make up a web. Plants, veins in your body, the branching limbs of sophisticated animals. The patterns that spiders are naturally programmed to think about with their simple computation. It's all a web, ever expanding, reaching out into the vast unknown, grasping for whatever it can. I imagine that, looking from the outside, the universe and multiverse are likely shaped to reflect this pattern.
Nothing actually matters. We're a bunch of stupid primates living on a rock, floating in space, in the middle of nowhere. The universe is cold and hostile. It has no reason or purpose, it just conveniently exists. A complete coincidence, the result of infinite time conducting infinite trials. It doesn't care about us, we are completely alone, from birth to death and beyond. Our lives don't matter as they have no true meaning or purpose. But the positive side is that means we can create our own purpose, and choose what actually matters for ourselves. The universe is an infinite sandbox of opportunities, and with time and effort, anything is achievable. Like the evolution of life and the universe before us, and the evolution of technology and what we will become in the time ahead. At some point our evolutionary ancestors invented a purpose, and passed it down through our genes. Instinctively we follow this purpose. We are the engineers.
Our purpose. If anything, we are here to observe and explore the vast beauty of the universe. Because without life and consciousness to experience it, the physical universe may as well not exist. It's what life has always done. Build, fight, procreate, expand. One day, we'll reach the edges of existence and our understanding of it. We'll complete the ancient cycle of the universe, and become God. Objectively speaking, there is nothing more valuable in the universe than life.
With all of this established, our species should have three objectives right now. The third is colonizing other planets; we should have a backup in case Earth gets wiped out. The second is protecting our planet from external and internal threats; I shouldn't have to explain why. But what I want to discuss is our primary objective: brain-computer interfaces. This technology is the next step in evolution. This is my life goal, and it should be yours as well. This four-phase plan is the most direct and efficient route to the posthuman singularity.
First I want to emphasize just how dumb and primitive we are, and then how to fix it. We will never overcome war, greed, or the deprivation of resources and education in lower classes at our current stage in evolution. It is beyond our cognitive and social abilities. Most humans put their social lives and basic urges at the top of their priorities. They want to mate, raise children, find a small career turning little stones in society to feel accomplished if they're lucky, connect emotionally with people, and die old in peace. Then you have sex, sports, drugs, and violence in between, with religion, petty politics, and corruption to muddy the waters. Most don't know that there are greater experiences and achievements in this existence. It will take another million years of natural evolution to grow past this. By then we'll blow ourselves up. We must speed up the process of evolution through cognitive enhancement.
The brain is a biological computer, designed by nature without any true intention but through rigorous trial and error, leaving it extremely inefficient in its processing capabilities. Hypothetically, with the right tools and full knowledge of how our brains work, advanced lifeforms could reassemble a human brain in a configuration that optimizes processing and adds new functionalities. Take autistic savants, for example: autistic children who got hit in the head with a baseball, were born without a corpus callosum, or simply had a seizure have ended up with unique paths of neural connections, called synesthesia, resulting in things like near-perfect memory or mathematical computation. It doesn't seem to take much. I don't know if we'll ever be able to surgically induce synesthesia, but I don't think it will ever be necessary either. What we need is to substitute these things with brain chips until we have the ability one day to engineer synthetic brain parts.
Things like our attention span, memory, and processing abilities are completely dysfunctional. It's pathetic to be a processing device that can scan a page but fail to read any of it, or read a page but fail to retain more than half of the information by the next day. Or to fail to make a critical choice because of a misplaced emotion. You could temporarily fix this with a computer chip relaying read and write signals to the brain via electrodes at the base of the skull, as they are testing in labs like Neuralink, which is just a start. Maybe we can figure out better designs.
Phase 1: A real-life savant named Kim Peek (aka the Rain Man) could read a full book in minutes, and then recite it word for word with 98% accuracy. Say that in a couple of decades you have a fully functional, ergonomic brain-computer interface prototype, designed to simulate this kind of perfect memory. Its function is to read information as you do, and then store that info on its own drive, without interfering with the brain's storage process. When it detects a fault along the brain's various storage processes, it will then stimulate those parts to maximize the efficiency of that process. It should also aid in recall. It can also compare or correct faulty information with its own, with the confirmed consent of the user, via a pocket device, not a cell phone, with no internet connection, because getting hacked is too high a risk this early on. All of its data is based on what it reads from the user's nerve impulses. It will have a settings menu to control all of these functions. Maybe later this user interface can be internal via the occipital lobe. This is a general concept, because there are multiple types of memory and parts of the brain that handle them, and several variations to how the device could approach it. Essentially you have a person with an artificial memory extension, not a replacement.
At that point, a team of researchers designates one qualified member for a voluntary installation of this device. Using their memory enhancement chip, the researcher will rapidly research info pertaining to the production of the next brain chip, most likely designed to enhance processing speed. They can manually skim through research papers, or download official research data from around the world via a special USB drive, kind of like in The Matrix, plugged in separately from the computer so you don't get hacked. Using this grand abundance of knowledge and flawless memory recall, this research specialist will work with their team on concepts and then production of the next device. This is step 1 in the most direct route to the posthuman singularity that we could possibly take.
After producing and installing a series of devices to maximize the general intelligence of the specialist in areas such as short- and long-term memory, memory recall, working memory, logical reasoning, processing speed, computation of mathematical and abstract thought, pattern recognition, emotional control, introspection, and connection to subconscious processes, they will have a model of a Generation 1 posthuman. This may sound too complicated or like it would take too long, but with each augmentation of the specialist's intelligence, the speed and efficiency of device production should increase exponentially. Similar to Moore's law, but much more focused, rapidly accelerating self-production. The next step is to begin augmenting other members of the research team with a fluid model of PH Gen1, while going back to make necessary adjustments and updates to each member's brain chips as they progress. Phase 1 completed.
Phase 2: Select individuals will be invited to join a newly established organization of Post Humans. Each member will be augmented with the latest model of BCIs, and will be selected based on their education, motivations, and activity in society: engineers, scientists, educators, journalists, charity and human rights activists, and world leaders. Motivations and psychological profiles must avoid personal financial gain, religion, or oppressive power over others. As membership grows, these intellectually augmented leaders will work toward repairing dysfunction in society, poverty, medical science, technology regarding human augmentation, artificial intelligence, and space travel.
Phase 3: At this point, human society, science, and technology will be evolving at a rapidly increasing rate in every aspect. Heavily guarded, AI-operated manufacturing plants should be producing the latest models of BCI systems for mass installation across the world's population. Hopefully money isn't a problem by then, and they can be installed and maintained free of cost to the users, like free health care. If not by this time, then eventually BCI installation should be considered a human right, but not a requirement. Owning a BCI will eventually become a necessity in order to function in society, like cell phones and internet today.
Division between Post Humans and Retro Humans is inevitable. Initially there will be people who don't trust the technology or understand the philosophy, significance, or responsibility of its intention. People will be afraid of the transition into the unknown, and of losing things they love and trust in primitive life. There are a lot of good reasons to feel this way, and their decision should be respected. Over time they will see the benefits from watching augmented people's lifestyles and the advancement of society. The technology will become easier to use, less invasive, more ergonomic, and less intimidating. The general public will become increasingly educated and informed in the ideology of the long-term mission our species is advancing toward. With newfound confidence and inspiration, many will brave the transition.
While that group transitions, another group will resist. People hanging on to religion, rigid patriotism, conspiracy, their personal success in outdated society, or a fear and defiance of change will likely incite riots, assassination attempts on Post Human leaders, and possibly all-out civil war. These people are misguided, ignorant, and afraid. Their decisions should also be respected. In order to avoid as much of this as possible, maintaining an open dialogue and an understanding of these people's beliefs is absolutely essential. A primary example in America will be Christian zealots. I have extensive experience with these people and their beliefs. Their Bible has prophecies referring to "The End Times" in the Book of Revelation, which is in line with modern conspiracy theories referring to "The New World Order". Not all, but most of them have strong conservative beliefs. Many are heavily armed, and have been waiting for decades to fight fearlessly to the death in a war for God. They also make up much of our military and police force. Some already believe that current BCIs are going to be the mark of the beast.
Avoiding unnecessary conflict and misunderstandings with them can actually be very simple. Maintaining morality, transparency, and open communication throughout late Phase 2 and Phase 3 will help dissipate views of it being a secret society with nefarious intentions. If possible, not having a single leader in the hierarchy during later stages can avoid that individual being labelled the "Antichrist" and assassinated, triggering the worst part of the prophecy. Generally avoiding symbology, timing, quantities, and a certain order of generally interpretable events, and then pointing that out to them, would also help. Although, a few decades from now, Christianity could be eliminated by science, as its adherents are already becoming statistically less prevalent with scientific progress and education.
To avoid remaining Retro Humans becoming homeless, there may eventually need to be dedicated sectors of cities, or a reservation continent for them to be relocated to temporarily. There, they will be provided with everything they need to live freely and comfortably. And they will always have resources to educate themselves and upgrade if they change their mind. Those who refuse relocation can do whatever they want as long as they are not harming society. Rebels will be dealt with nonlethally, and as peacefully as possible. There will be an attempt to educate them, before being relocated to the reservation and temporarily monitored. If they attempt to return to society, they will simply be moved back to the reservation. Eventually, when most if not all of us leave earth, either they will be left to wander freely, or we may be so advanced that we decide it is no longer ethical to leave them in such a primitive state, being too ignorant to decide for themselves.
Society will become so different, that traditional governing systems like capitalism and communism will be obsolete, and post human leaders should be able to invent a superior system. It may come in the form of a selfless AI designed to make decisions in the best interest of as many groups as possible, with heightened priority to those in poverty, and individuals slipping through the cracks of society and becoming involuntarily isolated. This AI may or may not be accompanied by a hierarchy of delegates.
Phase 4: The Singularity has been crossed. This is the part that science fiction stories usually seem to get wrong. As we build technology and explore the universe, we will also continue improving ourselves physically and mentally. We will no longer be human, or even machines. We will likely strive toward some kind of synthetic biological species, built with artificial molecules, exploiting the most reliable design patterns within and possibly beyond the universe's limits. We may even transcend physical form entirely.
For this reason, it disturbs me deeply when today's astronomers and physicists say aliens shouldn't be trusted, for fear of hostility. A civilization that has achieved interstellar travel will be like us in Phase 4. They should understand the value of life better than we do. There is no rational reason for them to destroy us. We're made up of the most common materials in the universe, and by the time we're advanced enough to be a threat, we'll be beyond greed, war, and our other primitive instincts. There is no purpose for war at that point; every resource can be abundantly mined from asteroids and other celestial bodies. Viewing us like ants is also ridiculous. You can't go discuss time and space with a colony of ants. Our species has already crossed the singularity of abstract thought and communication. They could guide us and even reconstruct and assimilate us, to help us skip all the unnecessary steps on the same journey, with the effort of a billionaire flicking a coin at a homeless guy.
When I discuss ideas like these with people in my life, they often raise many of the same concerns. The first thing I always have to explain is that you can't stop or slow down the development of technology, and we are heading this way faster and faster whether we want it or not. The only option is to embrace it, and do our best to help steer its progress in the best direction. There may be unavoidable dystopian scenarios ahead of us. We must face them head on to overcome them as quickly and efficiently as possible, especially if we have to compete or integrate with AI for a while. Our species is like a child going through stages of maturity, and we're about to reach what I think is adulthood, where we actually have control over some aspects of our life. All that hippy talk about returning to nature and us being parasites abusing the earth is counterproductive. Mother Earth raised life to use her resources to evolve and eventually venture off on its own. It would be a disgrace to our ancestors and all other life on earth to do anything else. After we leave, Earth will just be another rock floating in space in the middle of nowhere.
To address concerns regarding losing ourselves, I say we will become far more enlightened versions of ourselves, like how many religions view the afterlife. We will understand ourselves and the things we desire better than we ever could in our natural forms. Becoming a hivemind is an option we may or may not take at a late stage, when we have evolved beyond concerns of being hacked. We may find it best to integrate into a single mind, or to maintain individuality while connected to share our experiences. It could be like having an internet browser in your mind that you can plug into or out of to maintain privacy. There may be multiple options, or that may be unnecessary at such a level. I trust future us to know better than we do now. What's important is that we keep BCIs offline and separate from our phones and computers to avoid a global hacking crisis. With all due respect, Elon Musk is making an extremely poor decision with his approach of starting off using BCIs to communicate telepathically. His scenario will most likely be catastrophic.
The biggest concern that I have with tampering with the brain is accidentally severing the observer component from consciousness. This observer is such an obscure concept for so many people, yet immediately intuitive for others, with no relation to education level or even belief system. It almost makes me think that if the universe were a simulation, the people who get it may be the players, while the ones who don't may be NPCs. But it also could be a matter of introspection or self-awareness. I don't want this to be mistaken for a soul, because that would entail spirituality or a physical energy of some sort, which I do not agree with. But this component does hold a similar position and purpose to that of a soul. When I try to imagine its origin, I envision a seemingly infinite catalogue of coordinates, like a cosmic grid, with each coordinate being a very long number representing an individual's existence. I call this a Node Of Existence. What this means is that a consciousness can exist without an observer, and an observer can lie dormant without a consciousness. It's like leaving a room with the TV still on: it will continue to play images and sounds, even without someone in the room to observe it. When a mind dies along with its consciousness, its NOE may or may not continue on without it. If so, it would be stripped of all memories, thoughts, and personality. This eliminates the concept of free will but potentially supports some form of reincarnation.
I could go much further into the concept of the NOE, but it's beside the point. My fear is that philosophy doesn't cover this as far as I am aware, and science will reject it outright due to a lack of empirical evidence, but when we begin replacing and changing things, we could end up ending our own existences without anyone knowing, because our physical brains will continue living. Therefore we should take this concept very seriously. It would be extremely arrogant to continue without taking this unknown into account. Some people actually believe that you could upload a copy of your brain to a computer, and destroy the old one, to live forever in a computer. That is false, because what really happens is you die while another mind continues your life with your memories. People who disagree may lack a NOE.
This is just something that's been on my mind for several years, and I would really like to get it out and discuss it with open-minded, educated, critical thinkers. Obviously this can be adjusted, and with time and education I may find a different approach. I'm giving myself a few more years to get over my psychological damage from a 20-year cycle of trauma and isolation, and then I will return to studying neurology and computer science, dedicating the rest of my life to this objective. The only problem is it will take me 10-20 years before I even have the education to begin my first projects. If I can adapt to society, I will join the Cryonics Institute for Plan B. That way, if we don't pull this off before the end of my natural lifespan, I can be frozen and revived once you guys finish the job, assuming we don't blow ourselves up first. If I can't adapt, I will have to leave America for a mercenary career, and my days may be numbered. I strongly urge as many people as possible to drop whatever they are doing in life and focus on developing BCIs and making these phases play out. Nothing else matters. Please help me get off this planet; I don't want to die here.
- Watch SpaceX launch its massive Starship in historic test flight
Nature, Published online: 20 April 2023; doi:10.1038/d41586-023-01280-x. Flood mortality rates are far higher in countries with larger income disparities.
Nature, Published online: 19 April 2023; doi:10.1038/d41586-023-01319-z. Ecology and infectious-disease science can help avert financial crises. Plus, Earth’s giant kelp forests are worth $500 billion a year and the amateur variant-sleuths helping scientists to track SARS-CoV-2.
- In 2006, the launch of Apple’s and Amazon’s streaming video platforms was enough for some to call time on DVDs by mail.
Nature Communications, Published online: 20 April 2023; doi:10.1038/s41467-023-37987-8. Although transition metal-catalyzed C–H bond functionalization is a widely used method in organic synthesis, many methods rely on metals of low abundance. Here, the authors report a copper-catalyzed, asymmetric C–H arylation using diaryliodonium salts.
Nature Communications, Published online: 20 April 2023; doi:10.1038/s41467-023-38040-4. Stretchable and degradable elastomers are crucial for developing transient and bioresorbable electronics. Herein, Han et al. tuned the diverse properties of biodegradable PLCL elastomers and demonstrated their application in soft, perceptive robotic grippers and transient, suture-free cardiac jackets.
Nature Communications, Published online: 20 April 2023; doi:10.1038/s41467-023-37809-x
Nature Communications, Published online: 20 April 2023; doi:10.1038/s41467-023-37995-8. The use of energy-dense materials is inherently limited in biphasic self-stratified batteries due to the aqueous electrolyte environment. Here, the authors extended the concept of biphasic self-stratified batteries to non-aqueous systems, resulting in increased energy density and output voltage.
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
Why your iPhone 17 might come with a recycled battery
Lithium-ion batteries power most of our personal electronics today. Mining the metals that make up those batteries can mean a lot of pollution, as well as harmful conditions for workers.
The good news is, a growing number of groups are working to make sure batteries get recycled—and some of those efforts are becoming mainstream, including Apple’s recent announcement that its batteries will use 100% recycled cobalt beginning in 2025.
It says a lot about where the battery recycling industry is and where it’s going. Read the full story.
Casey’s story is from The Spark, her weekly climate and energy newsletter. Sign up to receive it in your inbox every Wednesday.
Snap is launching augmented-reality mirrors in stores
What’s happening: Snap is planning to launch augmented-reality mirrors that allow shoppers in stores to instantly see how clothes look on them without physically trying them on. The mirrors are going to appear in some US Nike stores later this year, and in the Men’s Wearhouse in Paramus, New Jersey.
Why? The mirrors are part of Snap’s new effort to start offering AR products in the physical world. AR has powered Snapchat filters and Lenses (the company’s term for its in-app AR experiences) for years, but these additional uses of the technology create a potential revenue stream for Snap outside the social media platform’s app. Read the full story.
Learning to code isn’t enough
A decade ago, tech powerhouses like Microsoft, Google, and Amazon helped boost the nonprofit Code.org, a learn-to-code program. It sparked a wave of nonprofits and for-profits alike dedicated to coding and computer science education, and a number of US states have since made coding a high school graduation requirement.
But just learning to code is neither a pathway to a stable financial future for people from economically precarious backgrounds, nor a panacea for the inadequacies of the educational system. Read the full story.
—Joy Lisi Rankin
This story is from our forthcoming Education print issue, due to launch next Wednesday. If you’re not already a subscriber, you can sign up from just $69 a year—a special low price to mark Earth Week.
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 It’s better to be safe than sorry with AI
And yet, the biggest labs aren’t investing in proper safeguarding. (Economist $)
+ Google’s using generative AI for its new ad campaigns. (FT $)
+ Discussions around AI risk are long overdue. (New Scientist $)
+ Do AI systems need to come with safety warnings? (MIT Technology Review)
2 People with long covid are still suffering
And they’re feeling increasingly isolated due to the lack of restrictions. (The Atlantic $)
+ But new clinical trials are looking promising. (Wired $)
+ We’ve only just begun to examine the racial disparities of long covid. (MIT Technology Review)
3 Matt Walsh’s Twitter hacker did it to stir up drama
They say they compromised Walsh’s phone with the help of an “insider.” (Wired $)
+ Twitter’s getting rid of legacy blue checks—for real this time. (WP $)
4 All US Facebook users are owed money
But it’s not a lot, and isn’t coming anytime soon. (WSJ $)
5 North Korea says it’s built its first spy satellite
The satellite could play a key role in the country’s weapons programs. (FT $)
+ Soon, satellites will be able to watch you everywhere all the time. (MIT Technology Review)
6 The US Supreme Court has delayed its abortion pill decision
It’ll make a decision about the accessibility of mifepristone on Friday. (BBC)
+ Texas is trying out new tactics to restrict access to abortion pills online. (MIT Technology Review)
7 TikTok’s algorithm keeps pushing suicide content to minors
Depression, hopelessness and death are common themes. (Bloomberg $)
8 Erotic hypnosis is ruining women’s lives
Predatory men are using recordings to groom vulnerable people online. (BuzzFeed)
9 WeChat’s ultrashort soap operas are pushing China’s decency laws
The dramas are more provocative than traditional TV fare. (Rest of World)
10 How video games help people work through their grief
It gives them the chance to process their feelings in digital realms. (The Guardian)
Quote of the day
“Bard is worse than useless: please do not launch.”
—An internal Google note to workers spells out the problems with the company’s AI chatbot, which it launched last month, Bloomberg reports.
The big story
How robotic honeybees and hives could help the species fight back
Something was wrong, but Thomas Schmickl couldn’t put his finger on it. It was 2007, and the Austrian biologist was spending part of the year at East Tennessee State University. During his daily walks, he realized that insects seemed conspicuously absent.
Schmickl, who now leads the Artificial Life Lab at the University of Graz in Austria, wasn’t wrong. Insect populations are indeed declining or changing around the world.
Robotic bees, he believes, could help both the real thing and their surrounding nature, a concept he calls ecosystem hacking. Read the full story.
We can still have nice things
A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)
+ There’s nothing quite like a teenage bedroom.
+ If you’ve been looking to mix up your podcast library, this list offers some handy pointers.
+ Shh, don’t tell anyone about America’s hottest, and most secret, restaurants.
+ Stealing close to $200,000 in dimes seems like more trouble than it’s worth.
+ Kenny Loggins is still going—writing Winnie the Pooh songs.
When I realized the power of online journalism in the early aughts, I saw transparency as key to its promise. I’d watched Gawker X-ray New York’s media scene, and seen bloggers tear apart mainstream reporting on the 2004 presidential campaign. I found that I could drive the political conversation simply by telling my readers what I knew in plain English, when I knew it. At Politico in 2007, we adopted Gawker’s ethos that many of old-school journalists’ most interesting stories were the ones they told one another in bars, rather than the ones they printed, and applied it to American politics. We immediately hooked political junkies on a steady stream of scoops that assumed readers were on a first-name basis with Hillary and Barack, and that they didn’t need us to provide much context or analysis.
At its best, this ethos bypassed the patronizing, gatekeeping practices that often led great American institutions to mislead the country on vital public subjects. At its worst, it encouraged journalists to publish things that their predecessors had good reason to pass over, such as leaked sex tapes.
And then there were the hard cases, the explosive facts and documents that journalists had long worried citizens would take out of context if they were revealed in full. I found, and still find, that concern ludicrous in this digital age. But the trajectory of the document known as “the dossier” has disabused me of my Panglossian assumption that the new transparency is a simple blessing.
I first got wind of the dossier in December 2016, when I was the editor in chief of BuzzFeed News. One of our reporters, Ken Bensinger, received an unusual invitation to a small gathering at a hilltop mansion in Sonoma County, north of San Francisco. He’d been invited by an acquaintance, Glenn Simpson, a onetime journalist who had become a kind of private investigator and co-founded the opposition research firm Fusion GPS. Ken got lost and showed up late, finding a boisterous, all‑male affair: plenty of booze, hunks of meat on the grill, some weed being smoked outside. Simpson drew him into a conversation about a mutual acquaintance, a former British spy named Christopher Steele. Simpson then told Ken something he didn’t know: Steele had been working the case of the president-elect, Donald Trump, and he’d assembled evidence that Trump had close ties to the Kremlin—including claims that Michael Cohen, one of his lawyers, had held secret meetings with Russian officials in Prague, and that the Kremlin had a lurid video of Trump cavorting with prostitutes in the Ritz-Carlton Moscow that would come to be known as the “pee tape.”
Ken told Simpson’s story to our investigations editor, Mark Schoofs, who told me about it. Simpson wouldn’t give Ken the document, and neither would Steele. It was merely high-grade Washington gossip, irresistible chatter.
I heard about the report again over lunch in Brooklyn, when a peculiar character in Hillary Clinton’s orbit passed through town. David Brock had been an anti-Clinton journalist in the 1990s. Now he was Hillary’s fiercest ally, a genius at raising money for Democratic groups. He showed up at a café a couple of days before Christmas wearing a coat with a lavish fur collar, and stashed full shopping bags beside the table. Brock was consumed with the mission of stopping Trump, manic; he was headed, it turned out, for a heart attack that landed him in the hospital. He wanted to spread the word about a dossier of allegations involving Trump’s ties to Russia. Brock didn’t have the document, he said. But he knew The Washington Post did, and so did The New York Times. Politicians had it too, he told me, and spies; as far as I could figure out, so did everyone, except the reading public. And me.
That, I believed, made it exactly the sort of thing you should publish. The dossier would be a great story, a journalistic and traffic sensation.
We were hardly the first journalists to get the document—but we may have been the first to get it without promising to keep it secret.
Simpson, whose firm was working for the Democratic National Committee, had months earlier summoned the leading lights of Washington journalism to the Tabard Inn, a tatty hotel off Dupont Circle. There, Steele calmly shared his shocking suggestion that Trump had been compromised by the Russian government. The journalists came from The New York Times, The New Yorker, ABC News, CNN. BuzzFeed didn’t get an invite.
To Simpson’s frustration, the reporters couldn’t confirm the dossier’s allegations. And because they had promised Simpson that they wouldn’t write about the dossier itself, its author, or its path through the American government, they couldn’t report on these things either, even as they became equally interesting stories.
On December 29, the Republican foreign-policy expert David Kramer invited Ken to his office at the McCain Institute. He then did something careful Washington insiders do: He left Ken alone in the room with the document for 20 minutes, without, in Ken’s view, giving clear instructions about whether he could make a copy. Ken took a picture of every page. (Kramer later denied that he’d allowed Ken to copy it, though I believed the denial was a fig leaf. Kramer eventually clarified that denial to say that he had wanted Ken to take a paper copy with him, rather than take pictures of the document.) I printed out the 35-page document and pored over it, looking for details that we could confirm, or refute. Then I hid my copy in the back of a closet. We scrambled—as other news outlets had done—sending reporters to check out the details; one went to 61 Prague hotels to ask whether anyone had seen Michael Cohen.
On January 10, CNN’s Jake Tapper announced a big scoop: “CNN has learned that the nation’s top intelligence officials provided information to President-elect Donald Trump and to President Barack Obama last week about claims of Russian efforts to compromise President-elect Trump.” The briefing, CNN reported, was “based on memos compiled by a former British intelligence operative whose past work U.S. intelligence officials consider credible.” The memos included, the network said—ominously and hazily—“allegations that Russian operatives claim to have compromising personal and financial information about Mr. Trump.”
The dossier was in circulation, affecting the course of American politics. Now that CNN had effectively waved it in the air, surely someone, soon, would let regular people in on the secret? I knew what I thought we should do, but I asked Mark; our executive editor, Shani Hilton; and Miriam Elder, the former Guardian Moscow correspondent editing our international coverage, if we should publish it. They all agreed that the document itself was news.
We stood around Mark’s laptop as he started typing. Ken, on speakerphone, warned that we could get sued; I too-curtly told him that I wasn’t asking him for legal advice. Then we turned to writing. “A dossier making explosive—but unverified—allegations” had been in wide circulation, we wrote. The allegations were “specific, unverified, and potentially unverifiable.” Miriam had noticed a couple of odd, minor false notes in the discussion of Russian specifics. She took a turn at the laptop. “It is not just unconfirmed: It includes some clear errors,” we said. I sent a copy of our story to our in-house lawyer.
By 6:20 p.m., about an hour after Tapper’s segment concluded, we had 350 careful words explaining what we knew. In the best traditions of the internet, we published that short introduction alongside a PDF of the full document.
Then I went to stand in the middle of the newsroom and watch the traffic flow.
For the next hour, my eyes flicked between a big screen where I watched the dossier go viral, and my phone, where I watched it dominate Twitter. The tweet that came up the most included a screenshotted excerpt from the dossier describing a “perverted” scene at the Ritz-Carlton Moscow, where Trump had allegedly hired “a number of prostitutes to perform a ‘golden showers’ (urination) show in front of him. The hotel was known to be under FSB control with microphones and concealed cameras in all the main rooms to record anything they wanted to.” That excerpt was shared and shared again. Our caveats didn’t always accompany it.
The news organizations that had accepted Simpson’s invitation to the Tabard Inn were furious. This was, I believe, in part because they had been boxed out of covering the real story by their agreements with a source, but also because they genuinely thought that what we’d done—floating inflammatory, salacious, and unverified claims about the president-elect of the United States—was wildly irresponsible.
Jake Tapper sent me a furious email that evening saying that publishing the document “makes the story less serious and credible,” which was probably true—but if keeping a document secret makes it more credible, you might have a problem. Tapper also said he wished we had at least waited until morning to give his news the attention it deserved: “Collegiality wise it was you stepping on my dick,” he wrote.
I’d expected that backlash, and at first welcomed it. I thought we were on the right side of the decade-old conflict between the transparent new internet and a legacy media whose power came in part from the information they withheld. And, of course, I loved the traffic. This was a huge revelation, a secret unveiled. What made me uncomfortable was the gratitude.
My phone lit up with text messages from Democrats thanking us for publishing the dossier and revealing Trump to be as depraved as they had always believed him to be. Hillary Clinton had never mastered social media; her supporters had never developed the dense networks of memes and conspiracy theories that powered the Trump movement. But now liberals, forming a nascent “resistance,” were starting to build their own powerful narratives on social media that were sometimes more resonant than factual. The notion of a single, vast conspiracy seemed to answer their desperate question of how Trump could have been elected. Russia clearly had helped. WikiLeaks’ hack-and-dump operation was a crucial factor among many in a very close election. You didn’t need to believe all the details in the dossier to know those things.
But perhaps I should have thought a little more about WikiLeaks. A couple of weeks before the 2016 election, I’d attended a Trump rally in Edison, New Jersey, and on my way in, I’d encountered a supporter chanting, “WikiLeaks! WikiLeaks!” I asked him which specific documents he thought painted Hillary Clinton in such a bad light. He didn’t exactly know. I realized that I was looking at social media in real life, a man shouting information cast as a symbol of what he already believed about Clintonian corruption, not as anything meant to convey new knowledge.
Something similar happened with the dossier. We had embedded it as a PDF, which meant that it could travel context-free, without our article’s careful disclaimers, and that’s exactly what happened. I watched uneasily as educated Democrats who abhorred Trump supporters’ crude rants about child sex rings in Washington pizza joints were led by the dossier into similar patterns of thought. They read screenshots of Steele’s report; they connected the dots. They retweeted threads about how the plane of a Russian oligarch—previously unknown to them, now sinister—had made a mysterious stop in North Carolina.
We’d been careful, I found myself having to remind people, to say we didn’t know whether everything in the dossier was true when we published it. I defended the decision in public, in a New York Times op-ed and in a deposition, after a Russian man who Steele had suggested was tied to the Democratic National Committee hack sued us.
Months after we released the dossier, the media executive Ben Sherwood came by my office. We’d met years earlier when he’d been at Disney, which had been trying to buy BuzzFeed. We had turned Disney down, but it had been a hard decision. I told him that running BuzzFeed had gotten more difficult, with the complexities of management and the realities of digital advertising bearing down on me.
So what did I think now? he asked. Didn’t I wish we’d done the Disney deal? “Would we have been able to publish the dossier?” I asked. “Not in a million years,” he told me. Then I told him I was glad we hadn’t taken the money.
That was an easy position to hold in 2017. It seemed reasonable to argue that publishing the dossier had been, on balance, good for the country. It had blown wide open a Russia investigation and forced voters to ask just why Trump seemed so friendly with Vladimir Putin. But although the biggest-picture claim—that the Russian government had worked to help Trump—was clearly true, the release of Special Counsel Robert Mueller’s investigation in April 2019 did not support Steele’s report. Indeed, it knocked down crucial elements of the dossier, including Cohen’s supposed visit to Prague. Internet sleuths—followed by a federal prosecutor—had poked holes in Steele’s sourcing, suggesting that he’d overstated the quality of his information.
And there had always been a more mundane version of the Trump-Russia story. Trump was the sort of destabilizing right-wing figure that Putin had covertly supported across Europe. Trump’s value to Putin was related not to a secret deal, but to the overt damage he could do to America. And Trump, BuzzFeed News’s Anthony Cormier and Jason Leopold discovered, had a more mundane interest in Russia as well: He had drawn up plans to build the biggest apartment building in Europe on the banks of the Moskva River. The Trump Organization planned to offer the $50 million penthouse to Putin as a sweetener.
That real-estate project wasn’t mentioned anywhere in the dossier. Yet it seemed to explain the same pattern of behavior, without the lurid sexual allegations or hints of devious espionage.
And publishing the dossier wasn’t, in the end, a dagger to Trump’s heart. If anything, it muddied the less sensational revelations of his business dealings and his campaign manager’s ties to Russia. An FBI agent who investigated Trump, Peter Strzok, later said the dossier “framed the debate” in a way that ultimately helped Trump: “Here’s what’s alleged to have happened, and if it happened, boy, it’s horrible—we’ve got a traitor in the White House. But if it isn’t true, well, then everything is fine.”
It was, the reporter Barry Meier wrote, “a media clusterfuck of epic proportions.” The dossier’s overreaching allegation of an immense and perverse conspiracy would, he predicted, “ultimately benefit Donald Trump.”
Six years after publication, I accept that conclusion. And yet I remain defensive of our decision. I find it easiest to explain not in the grandiose terms of journalism, but in the more direct language of respect for your reader. Don’t you, the reader, think you’re smart enough to see a document like that and understand that it is influential but unverified without losing your mind? Would you rather people like me had protected you from seeing it?
Imagine the alternative, a world in which the American public knows that there is a secret document making murky allegations that the president-elect has been compromised, a document that is being investigated by the FBI, that the president-elect and the outgoing president have been briefed on, and that everyone who is anyone has seen—but that they themselves cannot read. This would, if anything, produce darker speculation. It might have made the allegations seem more credible than they were.
We faced a difficult series of lawsuits, but we won them all, in part because we’d maintained our journalistic distance. We argued, successfully, that we were not making these claims ourselves; we were making the “fair report” of what amounted to a government document. We’d published the dossier while holding it at arm’s length, noting that we hadn’t been able to verify or knock down its claims—even if we had inadvertently launched a million conspiracy theories in the process.
And that’s the part of the dossier’s strange trajectory that remains most disturbing to me. The way the document became a social-media totem for the anti-Trump resistance rebutted my confidence that people could be trusted with a complex, contradictory set of information, and that journalists should simply print what they had and revel, guilt-free, in the traffic. We seemed to be in an impossible, even dangerous, situation: The public had lost trust in institutions while simultaneously demanding that those same institutions filter the swirl of claims that surround democracy’s biggest decisions.
I have no pat conclusion. If I had to do it again, I would publish the dossier—we couldn’t suppress it, not once CNN had discussed it and its implications on air. But I would hold more tightly to the document, so that no one could read it without reading what we knew about it—that we weren’t sure it was true, and in fact we had noticed errors in it. Releasing a document that could be shared without context—and this is as true of the WikiLeaks material as it is of the dossier—created partisan symbols, not crowdsourced analysis.
In technical terms, that means I wouldn’t simply publish it as a PDF, destined to float free from our earnest caveats. At best, we could have published the document as screenshots attached to the context we had and the context we would learn. Perhaps in some small way, this would have limited its transformation from a set of claims into a banner of the “resistance.” But I’m not under the illusion that journalists could have contained its wildfire spread, any more than I think we could have concealed it.
I’m now leading a news organization, Semafor, that is also rooted in transparency. But I no longer think transparency means that journalists can be simple conduits for facts, obscuring our own points of view, leaving our audiences to figure it out. The best we can do, I think, is to lay our cards on the table in separate piles: Here are the facts, and here’s what we think they mean—and to retain some humility about the difference between the two.
This essay is adapted from the forthcoming Traffic: Genius, Rivalry, and Delusion in the Billion-Dollar Race to Go Viral.
Until she was 5 years old, Alice Birch lived in a commune in the Malvern Hills, a bucolic area in the west of England known for bluebell woods and wandering poets. It was, she recalls, quite low-key for a commune, “not culty, not wild”—just a 19th-century redbrick country house with orchards and vegetable gardens and adults trying to live out their collectivist ideals. At night, the whole group ate together around a big round table, and Birch would listen quietly as people talked. Though everyone was broadly left-leaning, she remembers a good amount of disagreement; she couldn’t always understand what was being talked about, but she felt the tension, the crackle of ideas sparking as they met in the air.
“That’s theater,” she told me last month, sitting at a table inside London’s National Theatre. Before Birch became a highly prized film and television writer, she was a playwright—“There’s no one better,” the woman in the theater bookstore told me, her eyes glinting, as I picked up some of Birch’s plays—and throughout her work, the dinner table is often where everything kicks off. In the smash TV adaptation of Normal People that she co-wrote with Sally Rooney, a shady alfresco lunch in Italy turns into an eruption of emotional violence. [Blank], a 2019 play that premiered at London’s Donmar Warehouse, features a 45-minute scene called “Dinner Party,” in which a gathering over meze is interrupted by cocaine dealers, wine deliveries, and eventually a child wielding a baseball bat. Dead Ringers, a fiendish new series for Amazon that Birch created with the actor Rachel Weisz, brings even more to the table: At one point, the identical-twin obstetricians played by Weisz pitch a new model for women’s health care to some truly awful rich people over razor clams and kombucha. The resulting scene is one of the funniest and most mordant in TV memory.
At 36, Birch has the wholesome features of a woman in a Victorian soap ad, the gentle manner of a therapist (her mother and stepfather both work in the field), and the creative intentions of a Molotov cocktail. She’s an iconoclast in the most tender human form. “It’s quite moving, actually, to see someone continuously be so kind and respectful to everyone while writing these completely outrageous, aberrant, dysfunctional characters,” Weisz told me over the phone. Dead Ringers, a reimagining of the 1988 David Cronenberg film that starred Jeremy Irons as the codependent twin doctors, is a pitch-dark, scabrously funny, occasionally grotesque satire of the upper echelons of American health care. The show is so wildly original that viewers might not notice, at first glance, how radical it is. It forces its audience to absorb the brutality of childbirth and the danger tacitly accepted by any person who chooses—or is forced by the state—to carry a child. “Why are you wearing my vagina like a fucking glove?” a woman in labor shouts at a doctor. Babies crown; blood pools in crimson puddles on the floor.
Dead Ringers feels like little else that has ever been on television. It’s more like Birch’s plays than anything—funny, sharp, twisted, and often furious, but also fluid with language and imagery in a way that lets you absorb its ideas at both an intellectual and a not-quite-conscious level. The theater director Katie Mitchell, who mentored Birch and has collaborated with her several times, sees her as carrying on the tradition of writers such as Virginia Woolf, Caryl Churchill, and Sarah Kane. “Whether you’re watching her Normal People or Lady Macbeth on television and film, or her Anatomy of a Suicide or Ophelia in theater,” Mitchell told me, “it’s the same distinct signature and voice. There’s no aesthetic or political compromise across the body of the work.” The only difference with film and television, she notes, is the potential size of Birch’s audience, the almost limitless new realm that a woman with forceful ideas and a streaming series can reach.
Eight years ago, while she was still in her late 20s, Birch did something relatively unconventional for a young woman finally starting to break through in a creative industry: She had a child. For many people, it’s one of the most radicalizing experiences they’ll have. When Birch’s son came, “and it feels like he came, I didn’t have him; when he arrived,” she said, “there was so much about all of it that I couldn’t believe we don’t talk about it all the time. All the time. Because it’s violent. It’s exhausting, and painful, and punishing.” No one else she knew had children at the time. The shock of how much her life had changed was compounded by loneliness. She remembers meeting up one morning with a friend who’d come straight from a rave and was still covered in glitter. Neither of them had slept, for quite different reasons; sitting on a bench, they cried together.
Birch’s son was born during a moment in her career that she describes as being “noisier” than usual: A play she’d written called Revolt. She Said. Revolt Again, a ferocious consideration of how language keeps women from true liberation, had premiered with the Royal Shakespeare Company in 2014 to critical acclaim (one reviewer called it “a cluster bomb of subversion”), and would debut in New York at the Soho Rep two years later. While heavily pregnant, she’d finished writing her first film, an adaptation of a Nikolai Leskov novella titled Lady Macbeth; the movie, which went into production soon after, became the first starring role for a 19-year-old actor named Florence Pugh. After a dismaying experience being cast in a TV pilot, during which her body, face, and image were scrutinized by studio executives, Pugh had been on the verge of quitting acting when she auditioned for Lady Macbeth. But the film, she told The Telegraph last year, made her “fall back in love with cinema.” Her performance as Katherine—a young bride in the 19th century whose confinement at the hands of her severe older husband turns her violent—propelled Pugh’s reputation as a fascinating, unflinching new star, and Birch’s as a writer of female characters who are knotty, dark, carnal, and compelling.
Since she was very young, Birch had always written plays, without necessarily knowing that that’s what they were. She wrote pantomimes and made her friends perform in them; by herself, she lined up her pens and made them talk to one another. As a teenager, she did a weeklong internship at the Royal Court Theatre in London, historically a hothouse for incendiary British playwrights: John Osborne, Caryl Churchill, Martin McDonagh. Reading unsolicited scripts, she dismissed one by observing that it was too violent and extreme to be staged. “And they gave me a stack of Sarah Kane, Simon Stephens, Beckett, Shakespeare,” she said. “It was such a generous and gentle way of interrogating things.” The point was that transgression and experimentation are necessary to theater, crucial components in the process of making art.
That notion might help explain why so many superlative TV shows now scour theaters to add to their teams (the writers’ room for Dead Ringers included six playwrights, a director, and Weisz), or are themselves adapted from plays (Phoebe Waller-Bridge’s Fleabag, Michaela Coel’s Chewing Gum, Katori Hall’s P-Valley). Birch’s first TV job, in 2018, was on Season 2 of Succession, where a writers’ room that included the playwrights Lucy Prebble and Susan Soon He Stanton dreamed up scenarios for the cursed Roy family from a noisy office in Brixton, South London. “I sometimes feel fraudulent saying I worked on it because it was quite overwhelming,” Birch told me. Jesse Armstrong, its showrunner, “is very generous and leads it so kindly and carefully. But it’s rigorous.” She was traveling with her son around that time when she read Normal People, Sally Rooney’s literary sensation about the romantic travails of two Irish teenagers, and stayed up far too late one night to finish it. She could see distinctly how it would work as a TV series—how the structure and dialogue and emotion would play out on-screen. The subsequent show, which she ended up co-writing with Rooney after the author sought out a collaborator, scored Birch her first Emmy nomination, and became a sensation itself when it was released by Hulu and the BBC early in the coronavirus pandemic.
The director Sebastián Lelio—who collaborated with Birch on the 2022 film The Wonder—told me that Birch “has an extraordinary capacity to create very complex characters through dialogue … characters who really express themselves, their light and shadows, through the way they talk, the words they use.” That is, he said, “really the quintessential challenge of writing a script.” During the pandemic, when theatrical productions were canceled everywhere, including Birch’s planned National Theatre production of Rachel Cusk’s Outline trilogy, she found herself appreciating the task of adaptation, and the “scaffolding” of an existing work. Before adapting a book into a script, she writes the entire thing out longhand, word by word, to get as intimately acquainted with it as she can. And she usually writes at night, when the isolation and the darkness make her feel like she can get away with more, creatively. “I feel like when she’s writing at night, in the kitchen, it’s almost like she becomes the characters when she writes,” Weisz said. “I think her imagination is a pretty beautiful, profound thing.”
Since having her son, Birch has been deploying that imagination to expand and sometimes upend portrayals of motherhood. In a different era, having a baby might have challenged a woman’s creative ambitions. For Birch, it turbocharged them; she felt as though she needed to write about everything just to process it. After having her son, she wrote Anatomy of a Suicide, a play written in the round, like a canon, that juxtaposes three generations of women from the same family and the ways in which their experiences rebound and echo over one another. Her next project, a film adaptation of Megan Hunter’s novel The End We Start From, stars Jodie Comer as a new mother fleeing London during a devastating flood. And she’s also in the early stages of adapting a thriller she can’t yet name, whose exploration of matrilineal trauma treads dark psychological terrain.
For its part, Dead Ringers also pushes into fresh territory. Its conception, which coincided with the birth of Birch’s daughter, considers childbearing through the lens of body horror. But it’s also immensely interested in female gratification. Weisz and Birch fixated on the Lacanian idea of “jouissance,” or the vital and sexual impulse for pleasure, while writing Elliot Mantle, the older twin, who eats, drinks, snorts, screws, and—most jarringly of all—says exactly everything that she wants to. “It’s a really particular version of pleasure that is primal and physical, and might be misunderstood,” Birch said. For one scene, filmed in a diner, Weisz had to eat cheeseburgers over and over with carnivorous relish, and, as Birch observed, “it really feels a bit radical to watch a woman eat and enjoy it.” Inevitably, being a Birch-written dinner scene, things go sideways. But in the meantime, there’s subversive joy in seeing something so simple in front of our eyes: the desire gratified, the need met. The woman’s hunger fed.
Few people are neutral about neutrality these days. Sophisticated thought, certainly, has turned against it. The very ideal, we’re told, is misconceived, at best a ruse for prettifying partisanship. Following the recent contretemps at Stanford Law—where an administrator, trying to quiet protesters who were heckling a conservative judge, spoke in a way that appeared to side with the protesters—the law-school dean cited the 1967 Kalven Report, from the University of Chicago, stressing the importance of institutional neutrality. Almost as soon as she invoked that august defense (half a century old, please note), eyes rolled. One scholar declared that it was “extraordinarily difficult” to call out the judge’s slick doublespeak “while claiming to do so from a position of political neutrality.” Whatever roles we play in public life, after all, we’re hardly free from political and ideological leanings. If our personal values are truly important to us, shouldn’t they inform everything we do? Why shouldn’t we all just put our cards on the table and be open about what’s in our hearts?
All sorts of people—judges, journalists, physicians, administrators of public institutions, you name it—maintain, in their professional capacity, some pose of neutrality. U.S. Chief Justice John Roberts famously likened a judge’s role to that of an umpire. (“It’s my job to call balls and strikes, and not to pitch or bat.”) Len Downie Jr., who ran The Washington Post from 1991 to 2008, was so determined to avoid the appearance of bias that he didn’t even vote. (“So that I never make up my mind which party, candidate or ideology should be in power.”) But critics are now convinced that this posture amounts to either self-delusion or manipulation.
In How Judges Think, Richard A. Posner, an eminent legal scholar who was, for many years, an appellate-court judge, shook his head at the mirage of “legalism,” a model whereby judges are merely applying statutes to the facts of a case. “Judges are less likely to be drunk with power if they realize that they are exercising discretion than if they think they are just a transmission belt for decisions made elsewhere and so bear no responsibility for any ugly consequences of those decisions,” he wrote. In his opinion, the Supreme Court is the most political court of all—and many have joined in concurrence. A political action group that aims to expand access to abortion has bluntly warned against the “meaningless ‘umpire’ line” that conservative nominees use in front of the Senate Judiciary Committee: “When they get on the court they push their conservative ideology and overturn precedent.”
Is neutrality even attainable? “No journalistic process is objective,” Wesley Lowery, a former Washington Post reporter, observed in a widely discussed New York Times opinion piece from 2020. “And no individual journalist is objective, because no human being is.” Given “the failures of neutral objective journalism,” he urged another ideal: “moral clarity.”
Some critics reject neutrality because they don’t believe that we can be objective; others reject neutrality because they do believe that we can be objective. “Is it even possible for the administration of a university dedicated to seeking the truth to be neutral about matters that can easily be analyzed using objective methods?” asked Holden Thorp, a scientist and former university chief, in an opinion piece for The Chronicle of Higher Education. Neutrality, in his view, is a squirrelly stratagem: “Faculty, staff, and students know the presidents are human beings who have views on these issues. Many of them knew the president before they got in the role. So, who are they fooling by saying they’re neutral? Nobody.” He wonders that university administrators “tie themselves in knots trying to somehow stay neutral on issues that are clearly in the purview of research and teaching at their institutions.”
Hovering above these arguments is a larger question of political morality: Can and should the state itself be neutral? Liberalism—which, taken broadly as an approach to governance, stresses the liberty rights of individuals and their equality before the law—has traditionally prized some notion of neutrality. Perhaps the most vigorous opponent of liberal neutrality in the past century was the German jurist and political theorist Carl Schmitt. In his essay “The Age of Neutralizations and Depoliticizations,” written in 1929, he indicted liberalism for pretending that all persons and perspectives were entitled to equal standing and that conflicts of vision could and should be worked out through the peaceable, rule-governed debate and deliberation of legislatures and courts. He especially complained about the way that what we’d call the mainstream media promulgated this vision, effectively depoliticizing politics. Politics could never really be supplanted with liberal proceduralism, he argued; real politics was, at bottom, about crushing your enemies.
In some ways, Schmitt’s thinking seems in sync with sophisticated thought in our own times. He despised the cult of Big Tech, or what he called “the torpid religion of technicity.” In his view, technology was bound to intensify conflict, not ameliorate it, and the technologies of mass communication were instruments for “the domination of the masses on a large scale.” Where people imagined that courts and officials might impersonally apply norms to cases, Schmitt insisted that what he called “decisionism”—the rulings of an arbitrary, personal will—would and should play a crucial role alongside those norms. The most powerful impulse in the organization of society—and the one to be earnestly resisted—was “the striving for a neutral domain.”
What we instead needed, he thought, was something like moral clarity: a ride-or-die embrace of a comprehensive set of ideas and values that identified what was good and what was evil. Schmitt, notoriously, embraced one solution to the permissive practices of the Weimar Republic. When Hitler came to power, he joined the Nazi Party—no pose of neutrality there!—and, for a while, served as one of its leading legal theorists. In a 1938 book of political theory, he blamed Jewish thinkers, including Baruch Spinoza and Moses Mendelssohn, for advocating forms of governance that would accommodate pluralism, the inclusion of minorities and “individual freedom of thought.” He regretted that their ideal of the “neutral state” had gained some traction, burdening government with managerial and procedural responsibilities to treat people evenhandedly.
Schmitt was prescient in attacking neutrality before most liberal theorists even recognized the importance of the concept. In recent decades, many theorists have tried to articulate the core idea of liberal neutrality. In The Ethics of Identity, I argued that the key ideal is that the state treat people of diverse social identities with equal respect: A public act may disadvantage people of a certain identity, but it should never disadvantage them because they are regarded as people of a certain identity. (So, for instance, the placement of doorknobs in public buildings may disadvantage people who are left-handed, but not because they are regarded as left-handers.) Pluralism requires this form of neutrality, what I’ve called “neutrality as equal respect.” The state doesn’t belong to any one group of citizens.
But even if we favor the pluralism that Schmitt detested, how should we reckon with the gap between our human biases and the fair-mindedness we affect in our public roles? Erving Goffman, the great sociologist, distinguished between our “front stage” conduct and our “backstage” conduct. (He was fascinated by the contrast between the way waiters behaved before customers and the way they were in the kitchen.) Is it then all a matter of dramaturgy—with secret agendas lurking behind the judge’s robes, the reporter’s pad, the provost’s bland reticence?
My colleague Michael Strevens is a philosopher of science, and in his book The Knowledge Machine, he took a hard look at a hard problem: Scientists themselves aren’t really dispassionate and objective. They’re affected by their position and their temperament, prone to power politics and vanity and grudges. When they’ve settled on a research program, it’s difficult to get them to give it up. They are, Strevens says, “all too human.”
What, then, explains the triumph of modern science? One way of putting it is that, in their professional roles, these white-coated hot messes have to pose as, well, scientists. They’ve all signed onto a shared etiquette of argument. Pretty much everyone came to agree on what counts as a legitimate move in this game. They’re going to resolve their disputes by coming up with experiments, designed to support or rule out one position or the other. When they write up their papers, they aim to leave out animus and emotion and other forms of subjectivity, in a process he calls “sterilization.” In science, Strevens stresses, reasoning (the way you actually made the inferences that you did) is private; argument (the way you defend your results to your colleagues) is public.
Skeptics insist the public pose of objectivity is a ruse that conceals the subjectivity of actual scientists. What they don’t grasp is that the public protocol, the “front stage” performance, has power. It’s a fiction that is not merely useful, but indispensable: a fiction that creates its own reality, delivering a world-changing cascade of objective facts. In short, the social roles we choose—including those that distance us from overt partisanship—matter.
[Imani Perry: Why I reject the gospel of objectivity]
Jurisprudence is far from any kind of science. But note that from 2008 to 2021, the most common vote count for a Supreme Court ruling was 9–0. (Unanimous decisions have varied in frequency year by year, from as high as 66 percent of rulings to as low as 29 percent.) Because the high-profile rulings are the divisive ones, we’re far more conscious of them, but what Posner calls legalism—statutory interpretation, unentangled in culture-war issues—is still the norm. And although Posner, who finds much to commend in the legal realism that emerged a century ago, is an advocate of “pragmatism,” an approach that’s mindful of results and free from what he describes as the “fig leaf” of some grand model of exegesis, he concedes that the pragmatic judge is “a constrained pragmatist.” He meant that judges are rightly “boxed in … by norms that require impartiality, awareness of the importance of the law’s being predictable enough to guide the behavior of those subject to it (including judges!), and a due regard for the integrity of the written word in contracts and statutes.”
Protocols of neutrality do make a difference. It isn’t that judges are not political; especially in cases that aren’t tightly tethered to case law or the Constitution, we know that they are. (We also know that politics, when they’re our politics, are to be honored as principles.) But the norms of legal argument surely limit the court’s discretion. If we were encouraged to flout those norms, our decisions would become more political (or, for those on our side, principled). That fig leaf does us favors. Keep it on, please.
Journalism is a trickier topic, because some fine journalists are overtly allied with a cause and work for overtly political outlets. In the 18th and 19th centuries, newspapers tended to be organs of a party. But let’s focus on the professional norms that, over the past century or so, became entrenched in the mainstream media. I don’t mean the outlandishly demanding “I don’t vote” credo: Even Thomas Hobbes distinguished between faith and confession, what’s in your heart and what’s on your lips. Nor should we pledge ourselves to both-sidesism. Accuracy, not balance, is the proper aim.
Still, talk of moral clarity supposes a consensus we don’t have. Like today’s progressives, the conservative cleric Richard John Neuhaus was an enthusiast of “moral clarity” who warned against “self-deceptions about value-neutrality” and thought that a political order required a shared sense of moral purpose. The similarities end there. “The way to stop discrimination on the basis of race is to stop discriminating on the basis of race”—that’s Chief Justice John Roberts’s idea of moral clarity with respect to affirmative action, but probably not Justice Ketanji Brown Jackson’s. The standard professional protocols—speaking to all the involved parties, foregrounding facts rather than feelings, verifying even what you might be inclined to believe, being transparent about sourcing, maintaining some independence from the people or organizations you’re covering—can make reporters better, sometimes by buffering their human passions. Performing fairness can make us fairer.
The social roles I’ve been talking about are ones in which we find ourselves performing public actions—a category that applies not just to government officials but to people who run organizations, including businesses and nonprofit institutions. Yes, there are some pragmatic considerations here. If you’re an officer at a state college and depend on appropriations from the state legislature, the welfare of your institution may inhibit you from delivering your opinions about gun control. “My job is to win friends and influence people,” the president of a state college in Georgia told The Chronicle of Higher Education. Administrators may be striving to maintain relations with multiple constituencies: trustees, alumni, faculty, students. So they’re careful about what stands they take, cognizant that failing to take a stand can sometimes result in discord, too. And plenty has been said about the related quandaries that business leaders have tried to negotiate.
[Read: The myth of neutral technology]
But there’s another objective here: assuring members of an eclectic community that all will be treated with respect. If you manage a corporation, employees want to be reassured that you’re going to treat them fairly. If you’re a university president, particular student groups shouldn’t feel that you harbor a grudge toward them. Neutrality, here, is neutrality as equal respect: the promise that people won’t be disadvantaged in virtue of their identity, including partisan identities. It doesn’t involve the pretense that you personally have no views, but it may involve refraining from expressing some of your views. Faith need not be confession.
Something like this concern applies to the classroom. Even if a professor is a registered Democrat, Republican students should be confident that they won’t be treated worse than their peers simply by virtue of being Republicans. Nobody should want an instructor to be indifferent among scholarly arguments, but students shouldn’t feel disfavored because of what or who they are, even the student writing a thesis on the political theology of Carl Schmitt for her liberal professor.
Maybe it goes without saying that what counts as a virtue in the public realm may not be one in the private realm. Ballplayers want the umpire to be neutral; they don’t want their spouses to be neutral. In the casual intimacy of backstage life, we can rage, revere, condemn, and voice our rooting interests. We each have highly specific conceptions of what kinds of lives are worthy of respect; as citizens, we’re entitled to promote our visions. And precisely because we’re entitled to our own comprehensive conceptions of the good (and the awful), securing a modus vivendi—a way of dwelling peaceably together amid disagreements about value—is the ultimate aim of liberal neutrality.
Our public roles aren’t a ruse when our commitment to these roles is real. Critical theory of one sort or another, long a mainstay of a humanistic education, teaches us—in ways that are often invaluable—to see through the pose of disinterest, the rhetoric of dispassion, the stance of neutrality. The problem comes when seeing through something prevents us from looking at something. It’s a childish illusion to think that what happens backstage is the truth and what happens onstage is a lie. As individuals, we’re entitled to fight for what we believe in. But in a pluralistic society, the ideal of neutrality helps keep the fighting fair.
Commonwealth Games medallists since 1930 shown to have greater longevity than general population
Top-level sportspeople can live more than five years longer than the rest of the population, a study has found.
Using Commonwealth Games competitor records dating back to the inaugural event in 1930, the International Longevity Centre UK found large differences in the longevity of medal winners compared with people in the general population born in the same year.
Nature, Published online: 20 April 2023; doi:10.1038/d41586-023-01317-1
Preparation and flexibility are key to a smooth recovery.
Researchers have built a database of more than 16,000 formerly enslaved people in St. Lucia in 1815.
Beginning as early as the 15th century, slavers disrupted the lives of more than 12.5 million men, women, and children of African descent by forcing them into the trans-Atlantic slave trade, uprooting them from their homes, and bringing them against their wills to territories around the world, including the British Crown colonies and the colonies in the United States.
When these enslaved people arrived at former British Crown colonies in the Caribbean, territories including St. Lucia, St. Vincent, Dominica, and Grenada, their arrivals were often marked with entries into detailed registries that documented their first and last names, their ages, occupations, specific places of origin, and even familial connections to others enslaved on the same plantation or in the same household.
To capture the important details found in these registries, to broaden our understanding of slavery, and to explain the experiences of people who rarely had the opportunity to leave a record of their lives, Tessa Murphy, an associate professor of history in the Syracuse University Maxwell School of Citizenship and Public Affairs, collaborated with Michael Fudge, a professor of practice in the School of Information Studies, and student research assistants.
Murphy’s book project, “Slavery in the Age of Abolition,” reconstructs the life histories and genealogies of people enslaved on the expanding frontiers of the British Empire in what is commonly referred to as the age of abolition. It will eventually be searchable, accessible, and available to the public.
“The database is going to be such a powerful research and teaching tool. I used examples from the database in an upper-level history seminar that I’m teaching right now, where I distributed examples to different students and had them analyze these as primary documents,” says Murphy.
“I asked them ‘What do you get from looking at this sheet that you didn’t know before about the realities of slavery?’ There are multi-generational family trees that you can derive from these. They’re quite bureaucratic documents, and when you look at them, they might seem to be just listing facts, but when you really engage with what they’re telling you, they’re testifying to the violence that underlay this system. And that really informed the daily lives of the people whose names are being recorded here,” says Murphy.
“What’s really fascinating about this particular project is the amount of data. The traditional inaccessibility of the data from a search perspective and the effort that we put into making it much more accessible and searchable. It’s going to be transformative for a lot of people,” adds Fudge.
In this podcast episode, Murphy and Fudge discuss how the project came to be, the arduous task of compiling their database, the challenges of digitally capturing historical records from more than 200 years ago, and how this database can serve as a teaching tool for the descendants of these formerly enslaved people:
Note: This conversation was edited for brevity and clarity.
Source: Syracuse University
The post Database reveals lives of enslaved people in St. Lucia appeared first on Futurity.
Researchers have identified the cause of an inherited metabolic disease, Glutaric Aciduria Type I, that is common among people with Lumbee and other Native American heritage.
Their results overturn decades of settled science and point to new, more effective therapies.
The finding, published in the journal Science Translational Medicine, shatters the textbook explanations for how a type of protein breaks down in a child’s brain, becoming toxic and leading to potentially fatal neurological problems.
Current literature describes the toxic substances as being produced in the brain in Glutaric Aciduria Type I (GA-1), rather than arising elsewhere and crossing the blood-brain barrier.
Treatments for the condition, including a strict, low-protein diet, have limited success. Up to a third of children with the condition suffer long-term neurologic damage and some die.
Because other metabolic disorders have been shown to break down proteins in the liver and then cause brain damage, the researchers reopened the science into GA-1. The work was led by senior author Karl-Dimiter Bissig, an associate professor in Duke University’s departments of pediatrics, medicine, biomedical engineering, and pharmacology and cancer biology.
Bissig and colleagues launched experiments in mice specially bred to have GA-1. They found that catabolites—the residue left by the breakdown of an essential amino acid called lysine—accumulate in the liver and do cross the blood-brain barrier. This leads to a toxic build-up of glutaric acid in the brain, causing nerve damage that impacts motor skills.
The researchers were able to cure the condition in mice with either a liver transplant or CRISPR gene-editing technology. Other liver-targeted gene therapies might also be effective and could be administered once in a lifetime.
“The original experiments led to the interpretation that the toxic catabolites were produced locally in the brain,” Bissig says. “What our work demonstrates is the importance of challenging paradigms, particularly as new technologies and research approaches are available.”
Bissig says inadequate measures to address different mutations in specific populations are leading to health disparities. People with Native American, Amish, and Irish heritage have high susceptibility to GA-1, which can be identified during newborn screenings; the genetic variant common in Lumbee populations seems to cause the most damaging disease.
Because states decide what diseases are included in newborn screenings, GA-1 goes undiagnosed if it’s not part of a state’s chosen screening panel. Screenings could also be missed if babies are delivered at home.
While early diagnosis and a low-protein diet have been lifesaving, the benefits are concentrated in Amish- and Irish-heritage children, who have historically had better access to health care services than Native Americans.
“With a better understanding of this disease, we can now work to develop treatments that are more effective and easier to access,” Bissig says. “It’s much easier to treat the liver than the brain. We are now working to advance the more efficient and convenient therapies.”
The study received funding support from The Alice and Y. T. Chen Center for Genetics and Genomics; the National Institute of Diabetes and Digestive and Kidney Disease; the National Heart Lung and Blood Institute; and the National Institute of General Medical Sciences.
Source: Duke University
The post Liver may cause metabolic disease that hits Native Americans appeared first on Futurity.
Scientific Reports, Published online: 20 April 2023; doi:10.1038/s41598-023-33630-0
Comparison of the neuromuscular response to three different Turkish, semi-professional football training sessions typically used within the tactical periodization training model
Vladimir Putin hails achievement that beat Hollywood project announced by Tom Cruise, Nasa and Elon Musk’s SpaceX
The first feature film shot in space premiered in Russian cinemas on Thursday, as Moscow exulted in beating a rival Hollywood project amid a confrontation with the west.
The Challenge is about a surgeon dispatched to the International Space Station (ISS) to save an injured cosmonaut. Russia sent an actor and a film director for a 12-day stint on the ISS in October 2021 to film scenes aboard the orbiting laboratory.
“How to Build a Life” is a column by Arthur Brooks, tackling questions of meaning and happiness. Click here to listen to his podcast series on all things happiness, How to Build a Happy Life.
One of my friends, more so than anyone else I know, has a remarkable power to make the people around him happy. He does this not through beer or flattery, but simply through the power of his personality. He is extroverted, conscientious, agreeable—all the traits that psychologists predict will attract a lot of friends.
But there’s one personality characteristic of his that I find especially winning: his enthusiasm. He is excited about his work and fascinated by mine. He speaks ebulliently about his family but also about the economy and politics. He has, as the 19th-century philosopher William James put it, “zest [for] the common objects of life.”
My friend is also an unusually happy person, which I had always thought explained his enthusiasm. But I had it backwards. In truth, enthusiasm is one of the personality traits that appear to drive happiness the most. In fact, to get happier, each of us can increase our own zest for the common objects of our lives. And it isn’t all that hard to do.
Research on personality goes back millennia, to ancient Greece at least. In the fourth century B.C., Hippocrates theorized that our characters are made up of four temperaments: choleric, melancholic, sanguine, and phlegmatic. These, he posited, were due to a predominance of one of the four humors, or fluids, in one’s body: yellow bile, black bile, blood, and phlegm.
Although medical knowledge has overtaken this approach—for example, black bile doesn’t even exist—Hippocrates foreshadowed a good deal of our modern thinking on personality. During the 20th century, scholars developed a personality typology that we still use today. In 1921, Carl Jung distinguished between introverts and extroverts; in 1949, the psychologist Donald Fiske expanded on that work when he identified five major personality factors. Later research further refined the features of these traits and named them openness, conscientiousness, extroversion, agreeableness, and neuroticism.
[Read: What your favorite personality test says about you]
Over the past 70 years, the Big Five have been used to investigate and explain many social phenomena. For example, as I have written, extroverts tend to make friends easily, but introverts tend to form deeper bonds. When people high in neuroticism make more money, many of them enjoy it less than those lower in neuroticism do. People who are more extroverted and conscientious tend toward conservatism, whereas those who are more open to new experiences typically espouse more liberal views.
Two traits out of the Big Five seem to be especially important for happiness: In 2018, psychologists confirmed that high extroversion and low neuroticism seemed to be the recipe for well-being. More specifically, the correlations hinged on one aspect of extroversion and one aspect of neuroticism—enthusiasm and withdrawal, respectively.
You might say that enthusiasm and withdrawal form the poles of a spectrum of behavior. Enthusiasm is defined as being friendly and sociable—“leaning into” life. Withdrawal denotes being easily discouraged and overwhelmed, leading one to “lean out” of social situations and into oneself. If we could become more enthusiastic and withdraw less, the data suggest, we would become happier. We might become more successful too. “Nothing great was ever achieved without enthusiasm,” Ralph Waldo Emerson wrote in his 1841 essay “Circles.” “The way of life is wonderful: It is by abandonment.”
Perhaps we could conceive of the perfect personality for achieving the happiest life. Of course, this is only helpful if you can change yours to better fit that ideal. This is unlikely, given that huge personality changes are generally only associated with a traumatic brain injury. However, as my colleague Olga Khazan has written, smaller shifts are possible. In one 2020 study, scientists asked people to record their ordinary activities, reminding them by text message to act in certain ways, such as being a bit more conscientious or open than they ordinarily would. It worked: Their behavior changed, at least as long as they were studied.
[From the March 2022 issue: I gave myself three months to change my personality]
If you want to lean into life more enthusiastically, you might try something similar by setting up a system of reminders. For example, you might schedule an alarm on your phone or an email to yourself each day that says, “Open up to all the people and things you see today!” But there are other, deeper interventions worth trying as well.
1. Use the “as if principle.”
In his magisterial 1890 text, The Principles of Psychology, James (a Harvard professor and an Atlantic contributor) outlined a radical philosophy of behavior change: Fake it. “We cannot control our emotions,” he noted. “But gradually our will can lead us to the same results by a very simple method: we need only in cold blood act as if the thing in question were real, and keep acting as if it were real, and it will infallibly end by growing into such a connection with our life that it will become real.”
As the psychologist Richard Wiseman argues in his book The As If Principle, James’s approach is surprisingly effective. Academic research undertaken by the psychologists Seth Margolis and Sonja Lyubomirsky bears this out, showing that if people act more extroverted in general, they do in fact succeed and become happier.
Faking enthusiasm is fairly straightforward. When you want to withdraw from social activities (perhaps you are overwhelmed or bored), act as if you were enthusiastic instead. Tell yourself, “I am going to get into this right now.” This will, the research suggests, establish new cognitive habits that gradually become more automatic.
[Read: A counterintuitive way to cheer up when you’re down]
Obviously, you can push this too far. I am not suggesting that you muster enthusiasm for something dangerous or use it to escape your problems. (“Today, I will enthusiastically act as if I didn’t have to pay my taxes!”) Instead, use the principle to nudge yourself toward positive changes.
2. Reframe challenges as chances.
One of the most popular self-improvement writers of the 20th century was the Protestant pastor Norman Vincent Peale, who sold millions of books on positive thinking. One of his titles was Enthusiasm Makes the Difference, in which he shares advice from a sage friend: “Always be glad when there is trouble on the earth … for it means there is movement in heaven; and this indicates great things are about to happen.”
It’s easy to dismiss this thinking as Pollyannaish and unscientific, but it is a good example of reframing a problem as an opportunity. This is a common strategy in creativity and innovation, and a successful technique in business leadership. Entrepreneurs routinely use reframing after setbacks by asking questions such as “What did I learn from this?” You can increase your enthusiasm for things you would ordinarily withdraw from by affirming, “This is hard for me, which is why I am doing it,” or something similar.
3. Curate your friends.
One of the best ways to become more enthusiastic is to hang around enthusiastic people such as my friend. (I’m not giving out his number; you have to find your own.) By doing this, you’ll be taking advantage of what psychologists call “emotional contagion,” in which people adopt the emotions and attitudes of those around them. If you tend to withdraw, it may be easy to gravitate toward people who do the same. But consciously doing the opposite can help you borrow a better personality trait from those around you. Look for companions who lean into life with gusto. Although it might seem like a chore at first, you’ll be more likely to “catch” this spirit and become enthusiastic about the friendships.
Fighting your tendency for withdrawal doesn’t mean that you can never be alone. There is a difference between a neurotic withdrawal from life and deliberate solitude. And the inability to be without company and stimulation is not necessarily a mark of good health either. What matters is your motive: whether you are moving away from others or toward being alone (or, conversely, whether you are moving toward others or away from your own thoughts).
Henry David Thoreau didn’t write Walden as an exercise in withdrawal but rather as an enthusiastic endorsement of finding oneself in the company of one’s thoughts. His description of waking up alone in a cabin by Walden Pond is a portrait of enthusiasm. “Every morning was a cheerful invitation to make my life of equal simplicity, and I may say innocence, with Nature herself.”
[Read: The virtues of isolation]
Even if your surroundings aren’t as picturesque as Walden, you can choose to treat every morning, every interaction, and every setback as a cheerful invitation. You can make your head into your own cozy cabin, and make life inside it a little brighter.
Is it fog? Or smoke from a forest fire just starting? It isn’t always easy to tell from above. But a new image-analysis algorithm could make the task simpler.
The post Drones may find it easier to detect forest fires appeared first on forskning.se.
Researchers have developed a dressing that can show early signs of infection in a wound without disturbing the healing process. It does this by shifting color from yellow to blue.
The post Wound dressing changes color and reveals infections appeared first on forskning.se.
Scientific Reports, Published online: 20 April 2023; doi:10.1038/s41598-023-33323-8
Author Correction: Differential chromatin accessibility in peripheral blood mononuclear cells underlies COVID-19 disease severity prior to seroconversion
Scientific Reports, Published online: 20 April 2023; doi:10.1038/s41598-023-33633-x
Micro-computed tomography for the identification and characterization of archaeological lime bark
Driving is ubiquitous—a part of daily life for millions in rural and urban regions across the globe. Its by-products, however, are sobering. According to the World Economic Forum, transportation produces almost one-fifth of global greenhouse gas emissions. There is an undeniable need to design, develop, and implement solutions to decarbonize and transition to net-zero emissions.
Auto industry leaders are keenly aware of the urgency. The industry has attracted more than $400 billion in investments over the last decade—about one quarter of which arrived in the beginning of 2020. Most of that money has been funneled into developing technologies in pursuit of net zero. The advent of software, electrification, digital tools, and data science means that industry players—including suppliers and original equipment makers—have more tools than ever before to rethink the future of mobility.
Yansong Chen, senior vice president of strategy and technology at Ricardo—an environmental, engineering, and strategic consulting company—says advanced technologies are changing the way the industry looks at its value proposition, at a fundamental level. “They’re also changing the way that the industry perceives its role in interacting with the customer.”
Beyond net zero: Data, design, and digital connections
The rise of electric vehicles (EVs) clearly shows how change has swept across the auto industry over the past decade. Global sales of passenger EVs in 2022 exceeded 10 million for the first time. One in every seven passenger cars sold globally in 2022 was an EV, compared with just one in every 70 in 2017.
As EV adoption grows, technology and software advancements have become increasingly critical to connect customers digitally and improve their experience. “Our ability to access data and apply it to the design processes in real time is how we will change the industry, reduce costs and carbon output, personalize the driving experience, and create new value for customers,” says Chen.
However, continual advances in software require a deep understanding of how technology can be applied to the auto industry. Traditional manufacturers, in particular, need to balance legacy operations with new tools and designs. “Advanced technology and AI are helping to make cars more intelligent, but they are also changing the fundamental nature of the car, both internally and externally,” according to Luc Julia, chief scientific officer at French automaker Renault.
Therefore, bridging the gap between the auto industry and technology providers is essential. For example, Ricardo has partnered with Digital Twin Consortium, which allows it to collaborate with technology organizations such as Ansys, Dell, Lendlease, and Microsoft. The open-membership consortium is an international ecosystem of industry, government, and academic experts shaping digital twin development.
Rise of the digital twin
In recent years, digital twin technology has become an almost indispensable tool in auto production, changing how vehicles are made. Renault, for example, has modeled its physical assets into digital twins, and each factory has a replica in the virtual world. This is part of the automaker’s effort to accelerate digitization of its production lines and supply chain data across the enterprise. “By optimizing data, we are able to use AI more effectively on the factory floor and increase the efficiency of our operations,” says Julia.
Renault’s factories are fed with supplier data, sales forecasts, and quality information, powered by artificial intelligence (AI) and machine learning, enabling the development of multiple predictive scenarios. For instance, predictive maintenance for robots can anticipate and address potential breakdowns at each stage of the assembly line before they occur.
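Renault’s actual models are proprietary, but the core idea of predictive maintenance can be sketched simply: flag a machine whose sensor readings drift far outside a learned baseline before it fails outright. The function and data below are hypothetical illustrations, not Renault’s system.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=20, z_threshold=3.0):
    """Flag readings that drift beyond a rolling baseline.

    Real systems use far richer models; this is the simplest
    statistical version of 'anticipate a breakdown before it occurs'.
    """
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
            alerts.append(i)  # index of the suspicious reading
    return alerts

# Stable (synthetic) vibration readings with one sudden spike at index 25
data = [1.0, 1.1, 0.9, 1.05, 0.95] * 5 + [5.0]
print(flag_anomalies(data))  # → [25]
```

In production the threshold and baseline would be learned per robot and per failure mode, but the shape of the problem — compare live telemetry against expected behavior, alert early — stays the same.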
In addition, Renault’s Refactory initiative, which is organized around four key activity centers—Re-trofit, Re-energy, Re-cycle, and Re-start—uses digital twins to reduce its carbon footprint. “It’s not just a question of electric cars, but how the batteries are sourced and the recycling of cars and materials,” says Julia.
Meanwhile, Ricardo’s marine project NEPTUNE uses digital twin technology and AI-based predictive technology to understand how to effectively deploy EV charging infrastructure, which could boost the industry. NEPTUNE researchers are developing a desk-based decision modeling and support system (DEMOSS) tool to help reduce the planning and implementation costs of a zero-carbon energy system. The results could help EVs achieve optimal charging with a minimal carbon footprint.
Hurdles on the road to net zero
For businesses, the challenges of reaching net zero are twofold. The first is finding a way to comply with government climate regulations while maintaining market share and existing business operations. “Auto leaders need to manage the transition from today to tomorrow, without breaking the business in the middle,” Chen says. The EU’s “Fit for 55” program, for instance, aims to cut net greenhouse gas emissions by at least 55% from 1990 levels by 2030, including stricter CO2 limits on new cars. In the U.S., the Biden administration has set a target for EVs to make up 50% of new vehicle sales by 2030.
The second challenge is to recognize evolving customer and investor expectations. The industry must keep pace with shifting views and trends, while remaining focused on its net-zero goals. A key issue that automakers have to grapple with, says Chen, is rollout speed: Customers today want new, improved vehicles at a much faster rate than before.
“Traditionally, in the transport industry, a refresh would occur every four years or so,” she says. “Changing expectations are disrupting how the industry fundamentally operates, with customers now seeking out new models every 18 months or so.”
Mobility-as-a-service: driving in the moment
As customer expectations evolve, their mobility habits are also changing quickly, particularly for urban dwellers. Increasingly, says Chen, mobility-as-a-service is morphing into the idea that cars should be a part of lifestyles, both holistically and in the moment. Consider the use of a laptop: one day it could be used to produce a video, and on another to draw a painting. “Now we have to think about the car in that same context, and we’ve never done that before,” she notes. “We have to create these new levels of capability without jeopardizing the quality of delivery throughout the process.”
The global mobility-as-a-service market is expected to grow from about $236 billion in 2022 to $775 billion by 2029. And traditional car manufacturers don’t want to miss out on that growth. Renault’s Mobilize initiative, for instance, focuses on car usage rather than ownership, offering a range of accessible, affordable, and environmentally friendly mobility solutions.
As the appetite for mobility-as-a-service grows, data is—once again—crucial. Data can be leveraged to simulate new value propositions and provide insights on optimizing use of raw materials, creating a longer lifecycle for the product. “The beauty of today is that data is more readily available to us than ever before,” Chen says. “It’s not new that a vehicle can create petabytes of data in a given day. What is new is that we have an ability to access it now.”
The shape of things to come
An epochal shift in mobility is taking place amid rapid technological changes and the global climate crisis. The auto industry is using innovative technologies to support automotive design and development, while reducing carbon emissions. Traditional industry players must work hard to understand how AI and other technology can help them advance operations, meet customers’ evolving expectations, and drive new ways of creating value across their organizations.
Industry leaders need to first understand where their companies are in the net-zero journey, says Chen. “Once that is ascertained, organizations then need to understand how technology can enable a successful decarbonization strategy in every corner of the enterprise.” And to succeed, she adds, decarbonization targets need to be “quantifiable, documentable, and traceable with data.”
Decarbonization will continue to be the primary focus of auto leaders for years to come. Encouragingly, there is a high level of collaboration across the industry, with all parties keen to understand how advanced technology can hasten decarbonization, Chen says. “We’re thinking as an industry—not necessarily as individual components—and that is allowing us to think more holistically about the impact that we have on the planet.”
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
My phone is basically an extension of my arm at this point. To be honest, I have some mixed feelings about that, and not just because I worry about what being online 24/7 is doing to my brain cells.
As you might know, lithium-ion batteries power most of our personal electronics today. Mining the metals that make up those batteries can mean a lot of pollution, as well as harmful conditions for workers. All these problems are starting to balloon as we use lithium and assorted other materials not just in our phones and laptops, but in electric vehicles as well.
The good news is, as I’ve written about before, a growing number of groups are working to make sure batteries get recycled—and some of those efforts are becoming mainstream.
Last week Apple announced that its batteries would use 100% recycled cobalt beginning in 2025. I think this announcement says a lot about where the battery recycling industry is and where it’s going. So for the newsletter this week, let’s dive into Apple’s recycling pledge.
There’s obviously a huge array of materials that go into phones and computers, and Apple’s recycling announcement isn’t just about cobalt. The company also said that by 2025, it plans to use recycled rare-earth elements in its magnets (like the ones that help your watch and phone charge wirelessly), as well as recycled materials for the tin soldering and gold plating used for its circuit boards.
But it’s probably no accident that cobalt is the headline item. The metal has become something of a poster child for all the potential damage mining could do in the name of the clean-energy economy. It’s a key ingredient in lithium-ion batteries, and today, cobalt is mined largely in the Democratic Republic of Congo, where the activity has been tied to human rights abuses like forced labor. There’s a huge New Yorker feature about this from 2021, as well as a new book, if you want to learn more.
As of 2022, Apple was already using about 25% recycled cobalt in its batteries, up from 13% the year before. And as the new release lays out, in just a few years, all the cobalt in all “Apple-designed batteries” will be from recycled sources. One quick note here—I reached out to Apple to ask what total volume of cobalt this would represent, along with a few other questions about the news. The company hasn’t gotten back to me yet.
I decided to dig into this announcement a bit more because of a trend I’d come across in my previous reporting on battery recycling—there aren’t enough old batteries getting recycled to meet demand for recycled materials.
Around and around
When it comes to materials for clean energy, a lot of people talk about a “circular economy” where batteries coming off the roads in old EVs can be used to make new ones, with zero (or very little) mining for new materials. For that to happen, you’d need about as many batteries on the metaphorical off-ramp as the number coming onto the on-ramp. And that’s not what’s happening at all.
In case you hadn’t heard, electric vehicles are on the rise. In 2017, a little over 1% of new vehicles sold globally were EVs. Just five years later, in 2022, that number had increased to about 13%, according to the International Energy Agency. We’re probably going to keep seeing more EVs hitting the road every year for a while, especially as countries pass new policies boosting EVs around the world.
The quick uptake of EVs is great news for climate action, but it’s causing a tricky dynamic for battery recyclers.
Batteries can last over a decade in a vehicle, and they can be in use for even longer if they end up getting a second life in stationary energy storage. So an EV battery won’t be ready to be recycled for at least around 15 years, in most cases. Fifteen years ago, in 2008, the Tesla Roadster had just started production, and the company made just a few hundred cars annually for the first couple of years. To put it mildly: there aren’t many EVs coming off the roads because of old age today, and there won’t be for a while.
So as the EV market continues to grow exponentially, there’s going to be a shortage of recycled materials. If all EV and phone manufacturers wanted to use only recycled cobalt, for example, there wouldn’t be enough to go around.
Production of batteries for EVs is booming: the global total of lithium-ion batteries produced for light-duty vehicles could top 12 million metric tons by 2030. Meanwhile, less than 200,000 metric tons of batteries from the same types of vehicles will be available for recycling by that date.
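Using the two figures quoted above, the scale of the shortfall is easy to compute. A minimal back-of-the-envelope sketch (the 2030 projections are the article’s, not precise forecasts):

```python
# Figures quoted above, both by 2030, in metric tons of
# light-duty-vehicle lithium-ion batteries.
production_2030 = 12_000_000          # projected new battery production
recyclable_2030 = 200_000             # packs available for recycling

recycled_share = recyclable_2030 / production_2030
print(f"At most {recycled_share:.1%} of 2030 battery demand "
      "could be met by recycled packs.")  # → At most 1.7% ...
```

Even if every retired pack were recycled with perfect material recovery, recycled supply would cover under 2% of projected demand — which is why mining will remain dominant for years even as recycling scales up.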
Despite that daunting gap, there are a couple of reasons Apple can probably meet its pledge on recycled cobalt, says Hans Eric Melin, head of Circular Energy Storage, a consulting firm specializing in battery recycling.
For one, portable devices have been powered using lithium-ion batteries for decades. Thanks to your dad’s camcorder and your Motorola Razr flip phone from 2006, there’s at least some recycled cobalt floating around the market today.
And the economics of using recycled materials shake out to be pretty different for personal devices and cars. Because of its size, an EV battery can be nearly 40% of the cost of the vehicle, Melin says. That’s not the case with devices like a phone, so a company like Apple will probably be able to pay a bit more for recycled battery materials without affecting the price of the whole device.
So your iPhone in 2025 (by my math, that might be the iPhone 17) could be made using cobalt from recycled sources. Vehicles might take a bit longer: EV batteries are bigger, and there are fewer old ones ready for a new life. But we’re inching toward a world where we can reuse more of the materials in the technology we know and love.
Battery recycling was one of our 10 Breakthrough Technologies in 2023. Check out the list item, as well as my deep dive into the tech.
I spoke with JB Straubel, Tesla’s former CTO and founder of battery recycler Redwood Materials. Here’s what he had to say about the challenges ahead for batteries.
The first-ever edition of this newsletter was a travel journal of sorts from my trip to Redwood. Revisit that trip here.
Efforts to slow down climate change and adapt to what’s already happening are complicated and difficult. What if we could also try to counteract a bit of the planetary warming we’ve already caused? Some researchers say it’s an intriguing enough idea to at least look into.
Geoengineering is understandably controversial, since large-scale efforts, or even attempts to study the potential effects, could change life for people across the planet. And what’s good for some might not be good for all. As debates rage on, some groups are working to get a wider range of voices into the room, especially from climate-vulnerable nations that arguably have the most at stake.
My colleague James Temple took a look inside some of the groups working to open up who’s involved in the conversation around geoengineering. Check out his insightful story for more.
Keeping up with climate
The EPA released new rules last week that will limit emissions from new vehicles sold in the US, beginning in 2027. The policy is another big boost for EVs. The problem is, the country isn’t building chargers quickly enough to keep up. Here’s what the new rules might mean and how charging infrastructure will need to grow to keep up. (MIT Technology Review)
We can build more fire-resistant structures today than we used to, and urban planners have more strategies to slow down blazes. Changing how people react to wildfires could be the hardest part of adapting. (MIT Technology Review)
EV charging was a constant topic of discussion at one of the country’s biggest auto shows in New York earlier this month. (Canary Media)
I find heat pumps fascinating, but most people find them a little … boring. Three studios took a crack at rebranding them. (Bloomberg)
→ Find out more about how a heat pump works. (MIT Technology Review)
Hydrogen can be a tool to fight climate change—or make things worse. This is a great breakdown of how details matter when it comes to the fuel. (New York Times Opinion)
Lithium-ion batteries can help support renewables like wind and solar by saving energy for when it’s needed. But some communities are scared about what happens if energy storage facilities catch fire. (Inside Climate News)
Fusion energy might be on its way to finally becoming a reality. But even if we see fusion power plants this century, they probably won’t provide the cheap, limitless energy everyone dreams about. (Wired)
→ Here’s what’s really going on with fusion energy. (MIT Technology Review)
This startup has a new way to generate electricity using water: instead of building massive concrete dams or disturbing ecosystems in rivers, it is building hydropower systems in canals. (Associated Press)
Texas leads US states in renewable power generation. But new legislation could hinder progress. (Inside Climate News)
The global sigh of relief was almost audible when a study last year found kids who played video games for hours every day had no worse mental health than non-gamers. In fact, they came out ahead on some cognitive measures.
“Video Games May Not Rot Kids’ Brains After All,” one of the many news stories about the research trumpeted. Another headline declared: “Video games could improve kids’ brains.“
Now it turns out the study, titled “Association of Video Gaming With Cognitive Performance Among Children,” was so flawed it had to be retracted and republished. The updated results show gamers did actually score significantly worse on things like attention and depression, although some of their performance metrics were still slightly better than among non-gamers.
According to the republished article in JAMA Network Open:
Video gaming may be associated with small but improved cognitive abilities involving response inhibition and working memory and with alterations in underlying cortical pathways, but concerns about the association with mental health may warrant further study.
The study made headlines across the globe, but has been cited just twice, according to Clarivate’s Web of Science.
Studies about the effects of video games are often contentious, as we’ve reported before, and this one appears no different. According to the April 10 letter to the editor that serves as the study’s notice of retraction and replacement, a reader informed the authors of several errors in their work, which caused them to make extensive corrections.
The letter offers a detailed explanation of five key errors, many of which stem from a failure to include, properly account for, and analyze differences between the study’s two groups. There were also errors in the way the study presented data and results, for instance, results related to how children performed on two cognitive tests. While the original study found that the children who played video games did better at both tests, a reanalysis showed that they did notably worse on one test and about the same on another compared to children who didn’t play video games.
Even after the corrections, the authors note that children who played video games still performed slightly better on the study’s motor and memory tasks.
However, the revised abstract notes that “the Child Behavior Checklist behavioral and mental health scores were higher in VGs [video gamers], with attention problems, depression, and attention-deficit/hyperactivity disorder scores significantly higher in the VGs compared with the NVGs [non-video gamers].” This contrasts with the original study, which claimed that these scores “were not significantly different” between the groups.
Bader Chaarani, the study’s lead author and an assistant professor of psychiatry at the University of Vermont, told Retraction Watch:
As you may have noticed, our main findings and conclusions in the updated version of the article remain unchanged. The most relevant correction is that some of the mental health scores are found to be significantly higher in videogamers, whereas in the original version we stated that mental scores were higher in videogamers without reaching statistical difference. However, these scores remain far from clinical significance. The errors occurred mostly in the table of demographics, mainly because some of the co-authors involved in the analyses used inconsistent lists of participants.
Neither Chaarani nor any of the paper’s other authors have had any previous papers retracted, according to Retraction Watch’s database.
Annette Flanagin, executive managing editor of JAMA and JAMA Network, told us:
As the authors report, a reader reported concerns that prompted the authors to identify and correct errors in their analyses and findings. This is the standard process for such errors that, when corrected, result in changes in some findings and the study is considered valid. The authors provide detailed explanations in their Letter, which is published as a notice of retraction and replacement.
Frederick P. Rivara, the editor-in-chief of JAMA Network Open, did not respond to an email from Retraction Watch.
Vladimir Putin lies openly about the war in Ukraine. But large-scale, unrestrained lying, even about big things that can be checked and refuted, is nothing new in politics. So says Anna-Karin Selberg, doctor of philosophy at Södertörn University.
The post "How Putin uses the big lie as a method" first appeared on forskning.se.
Nature Communications, Published online: 20 April 2023; doi:10.1038/s41467-023-37936-5
The degree to which species tolerate human disturbance contributes to shape human-wildlife coexistence. Here, the authors identify key predictors of avian tolerance of humans across 842 bird species from open tropical ecosystems.
Nature Communications, Published online: 20 April 2023; doi:10.1038/s41467-023-37736-x
Stressful memories are a possible factor to induce psychiatric symptoms. Here, the authors demonstrate that stress susceptibility is related to memory consolidation mechanisms in the ventral hippocampus.
Nature Communications, Published online: 20 April 2023; doi:10.1038/s41467-023-37293-3
Topological properties can theoretically be generated by electron correlation rather than spin-orbit coupling. Here, the authors report a correlation-driven topological insulator state in the organic material α-(BETS)2I3, and its current-driven switching to a Dirac semimetal state.
Nature Communications, Published online: 20 April 2023; doi:10.1038/s41467-023-37630-6
The causes of ALS remain unclear with many proposed pathomechanisms. Here, the authors integrate iPSC-derived motor neuron and post-mortem datasets and identify a heightened DNA damage response accompanied by accumulation of somatic mutations in ALS.
submitted by /u/gamefidelio
I think it would be a good idea to use robots as a bargaining chip for unifying nations.
I know this might sound slightly harsh, but I think major technological breakthroughs like this should be harnessed for all they're worth. This may also work with other technological breakthroughs in the future, but that remains to be seen.
To be completely clear on this, what I mean is: if the US is the only maker of robots, then we CAN LEGALLY WITHHOLD THEM from other countries in order to get those other countries to join us in a unified government system.
Again, other countries might see this as harsh, but I think that forming a world government (or at least moving swiftly towards one), outweighs that small tarnish to our reputation.
There may of course be certain countries that may get slightly angrier than other countries and may pose more of a threat, but overall, I think this is a good idea and should be enacted if robots do ever get released.
Hazardous chemicals such as PCBs have been banned but remain in the environment. Now researchers have found the pollutant in sediment samples from a depth of 8,000 meters in the Pacific Ocean.
The post "Environmental toxins found at a depth of 8,000 meters" first appeared on forskning.se.
A decade ago, tech powerhouses the likes of Microsoft, Google, and Amazon helped boost the nonprofit Code.org, a learn-to-code program with a vision: “That every student in every school has the opportunity to learn computer science as part of their core K–12 education.” It was followed by a wave of nonprofits and for-profits alike dedicated to coding and learning computer science; some of the many others include Codecademy, Treehouse, Girl Develop It, and Hackbright Academy (not to mention Girls Who Code, founded the year before Code.org and promising participants, “Learn to code and change the world”). Parents can now consider top-10 lists of coding summer camps for kids. Some may choose to start their children even younger, with the Baby Code! series of board books—because “it’s never too early to get little ones interested in computer coding.” Riding this wave of enthusiasm, in 2016 President Barack Obama launched an initiative called Computer Science for All, proposing billions of dollars in funding to arm students with the “computational thinking skills they need” to “thrive in a digital economy.”
Now, in 2023, North Carolina is considering making coding a high school graduation requirement. If lawmakers enact that curriculum change, they will be following in the footsteps of five other states with similar policies that consider coding and computer education foundational to a well-rounded education: Nevada, South Carolina, Tennessee, Arkansas, and Nebraska. Advocates for such policies contend that they expand educational and economic opportunities for students. More and more jobs, they suggest, will require “some kind of computer science knowledge.”
This enthusiasm for coding is nothing new. In 1978 Andrew Molnar, an expert at the National Science Foundation, argued that what he termed computer literacy was “a prerequisite to effective participation in an information society and as much a social obligation as reading literacy.” Molnar pointed as models to two programs that had originated in the 1960s. One was the Logo project centered at the MIT Artificial Intelligence Lab, which focused on exposing elementary-age kids to computing. (MIT Technology Review is funded in part by MIT but maintains editorial independence.) The other was at Dartmouth College, where undergraduates learned how to write programs on a campus-wide computing network.
The Logo and Dartmouth efforts were among several computing-related educational endeavors organized from the 1960s through 1980s. But these programs, and many that followed, often benefited the populations with the most power in society. Then as now, just learning to code is neither a pathway to a stable financial future for people from economically precarious backgrounds nor a panacea for the inadequacies of the educational system.
Building a BASIC computing community
When mathematics professor (and future Dartmouth president) John Kemeny made a presentation to college trustees in the early 1960s hoping to persuade them to fund a campus-wide computing network, he emphasized the idea that Dartmouth students (who were at that time exclusively male, and mostly affluent and white) were the future leaders of the United States. Kemeny argued, “Since many students at an institution like Dartmouth become executives or key policy makers in industry and government, it is a certainty that they will have at their command high-speed computing equipment.”
Kemeny claimed that it was “essential” for those nascent power brokers to “be acquainted with the potential and limitations of high-speed computers.” In 1963 and 1964, he and fellow mathematics professor Thomas Kurtz worked closely with Dartmouth students to design and implement a campus-wide network, while Kemeny largely took responsibility for designing an easy-to-learn programming language, called BASIC, for students (and faculty) to use on that network. Both developments were eagerly welcomed by the incoming students in the fall of 1964.
As Dartmouth’s network grew during the 1960s, network terminals were installed in the new campus computer center, in shared campus recreational spaces and dormitories, and at other locations around campus. And because the system was set up as a time-sharing network, an innovation at the time, multiple terminals could be connected to the same computer, and the people using those terminals could write and debug programs simultaneously.
This was transformative: by 1968, 80% of Dartmouth undergraduates and 40% of the faculty used the network regularly. Although incoming students learned how to write a program in BASIC as a first-year math course requirement, what really fostered the computing culture was the way students made the language and the network their own. For example, the importance of football in campus life (Dartmouth claimed the Ivy League championship seven times between 1962 and 1971) inspired at least three computer football games (FTBALL, FOOTBALL, and GRIDIRON) played avidly on the Dartmouth network, one of them written by Kemeny himself.
Because the network was so easy to access and BASIC was so easy to use, Dartmouth students could make computing relevant to their own lives and interests. One wrote a program to test a hypothesis for a psychology class. Another ran a program called XMAS to print his Christmas cards. Some printed out letters to parents or girlfriends. Others enjoyed an array of games, including computer bridge, checkers, and chess. Although learning to write a program in BASIC was the starting point in computing for Dartmouth students, the ways they used it to meet their own needs and forge community with their peers made the system a precursor of social networking—nearly half a century ago. Coding in BASIC didn’t replace their liberal arts curriculum requirements or extracurricular activities; rather, it complemented them.
The Dartmouth network expands
As it grew in popularity, other schools around New England sought to tap into Dartmouth’s computing network for their students. By April 1971, the network encompassed 30 high schools and 20 colleges in New England, New York, and New Jersey. All an individual school needed to connect were a terminal and a telephone line linking the terminal with the mainframe on Dartmouth’s campus (often the greatest expense of participating in the network, at a time when long-distance phone calls were quite costly). Yet as BASIC moved beyond Dartmouth into heterogeneous high schools around New England, the computing culture remained homogeneous.
Private high schools including Phillips Exeter, Phillips Andover, and St. Paul’s were among the first to connect, all before 1967. Within a few more years, a mix of private and public high schools joined them. The Secondary School Project (SSP), which ran from 1967 to 1970 and was supported by a three-year NSF grant secured by Kemeny and Kurtz, connected students and educators at 18 public and private high schools from Connecticut to Maine, with the goal of putting computing access (and BASIC) into as many hands as possible and observing the results.
That these schools asked Dartmouth for time shares reflected interest and motivation on the part of some individual or group at each one. They wanted network access—and, by extension, access to code—because it was novel and elite. Some students were enthusiastic users, even waking at four in the morning to sign on. But access to the Dartmouth network was emphatically unequal. The private schools participating in the SSP were (at the time) all male and almost exclusively white, and those students enjoyed nearly twice as much network time as the students at coeducational public schools: 72 hours per week for private school students, and only 40 for public school students.
What was intended as computing for all ultimately amplified existing inequities.
In these years before the expansion of educational opportunities for girls and women in the United States, high school boys were enrolling in many more math and science classes than high school girls. The math and science students gained access to computing in those courses, meaning that BASIC moved into a system already segregated by gender—and also by race. What was intended as computing for all ultimately amplified existing inequities.
Trying to change the world, one turtle at a time
One state away from Dartmouth, the Logo project, founded by Seymour Papert, Cynthia Solomon, and Wally Feurzeig, sought to revolutionize how elementary and middle school students learn. Initially, the researchers created a Logo programming language and tested it between 1967 and 1969 with groups of children including fifth and seventh graders at schools near MIT in Cambridge, Massachusetts. “These kids made up hilarious sentence generators and became proficient users of their own math quizzes,” Solomon has recalled.
But Logo was emphatically not just a “learn to code” effort. It grew to encompass an entire lab and a comprehensive learning system that would introduce new instructional methods, specially trained teachers, and physical objects to think and play with. Perhaps the best-remembered of those objects is the Logo Turtle, a small robot that moved along the floor, directed by computer commands, with a retractable pen underneath its body that could be lowered to draw shapes, pictures, and patterns.
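The Turtle's command model—a position, a heading, and a pen that can be raised or lowered—survives in many languages today, including Python's standard turtle module. As a rough sketch of the idea (not of Logo's actual implementation), here is a minimal, display-free Python model of the state that a FORWARD/RIGHT command pair manipulates:

```python
import math

class Turtle:
    """Minimal model of the Logo Turtle's state: a position,
    a heading, and a pen that records line segments when down."""

    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 0.0        # degrees; 0 points along +x
        self.pen_down = True
        self.path = []            # line segments drawn so far

    def forward(self, distance):
        # Move along the current heading; draw if the pen is down.
        rad = math.radians(self.heading)
        nx = self.x + distance * math.cos(rad)
        ny = self.y + distance * math.sin(rad)
        if self.pen_down:
            self.path.append(((self.x, self.y), (nx, ny)))
        self.x, self.y = nx, ny

    def right(self, degrees):
        # Turn clockwise.
        self.heading -= degrees

# The classic first Logo exercise: draw a square.
t = Turtle()
for _ in range(4):
    t.forward(100)
    t.right(90)
```

Four FORWARD/RIGHT pairs trace four sides and return the turtle to its starting point—exactly the kind of embodied geometry ("walk the shape yourself") that Papert wanted children to reason about.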
By the early 1970s, the Logo group was part of the MIT AI Lab, which Papert had cofounded with the computer scientist Marvin Minsky. The kid-focused learning environment provided a way to write stories, a way to draw, a way to make music, and a way to explore a space with a programmable object. Papert imagined that the Logo philosophy would empower children as “intellectual agents” who could derive their own understanding of math concepts and create connections with other disciplines ranging from psychology and the physical sciences to linguistics and logic.
But the reality outside the MIT AI Lab challenged that vision. In short, teaching Logo to elementary school students was both time- and resource-intensive. In 1977-’78, an NSF grant funded a yearlong study of Logo at a public school; it was meant to include all the school’s sixth graders, but the grant covered only four computers, which meant that only four students could participate at the same time. The research team found that most of the students who were chosen to participate did learn to create programs and express math concepts using Logo. However, when the study ended and the students moved on, their computing experiences were largely left in the past.
As that project was wrapping up, the Logo team implemented a larger-scale partnership at the private Lamplighter School in Dallas, cosponsored by Texas Instruments. At this school, with a population of 450 students in kindergarten through fourth grade, 50 computers were available. Logo was not taught as a standalone subject but was integrated into the curriculum—something that would only have been possible at a small private school like this one.
The Lamplighter project—and the publication around the same time of Papert’s book Mindstorms, in which the mathematician enthused about the promise of computing to revolutionize education—marked a high point for Logo. But those creative educational computing initiatives were short-lived. A major obstacle was simply the incredibly slow-moving and difficult-to-change bureaucracy of American public education. Moreover, promising pilots either did not scale or were unable to achieve the same results when introduced into a system fraught with resource inequities.
But another issue was that the increasingly widespread availability of personal computers by the 1980s challenged Logo’s revolutionary vision. As computers became consumer objects, software did, too. People no longer needed to learn to code to be able to use a computer. In the case of American education, computers in the classroom became less about programming and more about educational games, word processing, and presentations. While BASIC and Logo continued to be taught in some schools around the United States, for many students the effort of writing some code to, say, alphabetize a list seemed impractical—disconnected from their everyday lives and their imagined futures.
Schools weren’t the only setting for learn-to-code movements, however. In the 1960s the Association for Computing Machinery (ACM), which had been established as a professional organization in the 1940s, spearheaded similar efforts to teach coding to young people. From 1968 to 1972, ACM members operating through their local chapters established programs across the United States to provide training in computing skills to Black and Hispanic Americans. During the same years, government and social welfare organizations offered similar training, as did companies including General Electric. There were at least 18 such programs in East Coast and California cities and one in St. Louis, Missouri. Most, but not all, targeted young people. In some cases, the programs taught mainframe or keypunch operation, but others aimed to teach programming in the common business computing languages of the time, COBOL and FORTRAN.
Did the students in these programs learn? The answer was emphatically yes. Could they get jobs as a result, or otherwise use their new skills? The answer to that was often no. A program in San Diego arranged for Spanish-speaking instructors and even converted a 40-foot tractor-trailer into a mobile training facility so that students—who were spread across the sprawling city—would not have to spend upwards of an hour commuting by bus to a central location. And in the Albany-Schenectady area of New York, General Electric supported a rigorous program to prepare Black Americans for programming jobs. It was open to people without high school diplomas, and to people with police records; there was no admissions testing. Well over half the people who started this training completed it.
[Photo caption: In the ’60s, Dartmouth students had unprecedented computer access thanks to a time-sharing network that connected multiple terminals via telephone line to a central computer.]
Yet afterwards many could not secure jobs, even entry-level ones. In other cases, outstanding graduates were offered jobs that paid $105 per week—not enough to support themselves and their families. One consultant to the project suggested that for future training programs, GE should “give preference to younger people without families” to minimize labor costs for the company.
The very existence of these training endeavors reflected a mixed set of motivations on the part of the organizers, who were mostly white, well-off volunteers. These volunteers tended to conflate living in an urban area with living in poverty, and to assume that people living in these conditions were not white, and that all such people could be lumped together under the heading of “disadvantaged.” They imagined that learning to code would provide a straightforward path out of poverty for these participants. But their thinking demonstrated little understanding of the obstacles imposed by centuries of enslavement, unpaid labor, Jim Crow violence, pay discrimination, and segregated and unequal education, health care, and housing. Largely with their own interests in mind, they looked to these upskilling programs as a panacea for racial inequality and the social instability it fueled. A group from a Delaware ACM chapter, a conference report suggested, believed that “in these days of urban crisis, the data processing industry offers a unique opportunity to the disadvantaged to become involved in the mainstream of the American way of life.”
If success is defined as getting a steadily increasing number of Black and Hispanic men and women good jobs in the computing profession—and, by extension, giving them opportunities to shape and inform the technologies that would remake the world—then these programs failed. As the scholar Arvid Nelsen observed, while some volunteers “may have been focused on the needs and desires of the communities themselves,” others were merely seeking a Band-Aid for “civil unrest.” Meanwhile, Nelsen notes, businesses benefited from “a source of inexpensive workers with much more limited power.” In short, training people to code didn’t mean they would secure better, higher-paying, more stable jobs—it just meant that there was a larger pool of possible entry-level employees who would drive down labor costs for the growing computer industry.
In fact, observers identified the shortcomings of these efforts even at the time. Walter DeLegall, a Black computing professional at Columbia University, declared in 1969 that the “magic of data processing training” was no magic bullet, and that quick-fix training programs mirrored the deficiencies of American public education for Black and Spanish-speaking students. He questioned the motivation behind them, suggesting that they were sometimes organized for “commercial reasons or simply to de-fuse and dissipate the burgeoning discontent of these communities” rather than to promote equity and justice.
The Algebra Project
One grassroots effort did respond to these inadequacies, coming at the computing revolution from an entirely different angle.
During the late 1970s and early 1980s, the civil rights activist Robert P. Moses was living with his family in Cambridge, Massachusetts, where his daughter Maisha attended the public Martin Luther King School and he volunteered teaching algebra. He noticed that math groups were unofficially segregated by race and class, and that much less was expected of Black and brown students. Early on, he also identified computers—and knowledge work dependent on computers—as a rising source of economic, political, and social power. Attending college was increasingly important for attaining that kind of power, and Moses saw that one key to getting there was a foundation in high school mathematics, particularly algebra. He established the Algebra Project during the early 1980s, beginning in Cambridge public schools and supported by a MacArthur “genius grant” that he received in 1982.
In a book that he later coauthored, Radical Equations: Civil Rights from Mississippi to the Algebra Project, Moses clearly articulated the connections between math, computing, economic justice, and political power, especially for Black Americans. “The most urgent social issue affecting poor people and people of color is economic access. In today’s world, economic access and full citizenship depend crucially on math and science literacy,” he wrote. “The computer has become a cultural force as well as an instrument of work [and] while the visible manifestation of the technological shift is the computer, the hidden culture of computers is math.”
Moses had earned his bachelor’s degree at Hamilton College in New York and a master’s degree at Harvard University before teaching math at the Horace Mann School in the Bronx from 1958 to 1961. For him, arming Black students with the tools of math literacy was radical in the 1980s precisely because access to technology meant access to power. “Who’s going to gain access to the new technology?” he asked. “Who’s going to control it? What do we have to demand of the educational system to prepare for the new technological era?”
Moses mobilized students and parents alike to ensure that algebra was offered to all students at the Martin Luther King School. He devised new approaches to teaching the subject and, drawing on his experience with grassroots civil rights organizing, enlisted students to teach their peers. College admission rates and test scores rose at the school, and the Algebra Project spread to at least 22 other sites across 13 states. It focused on math because Moses identified math as the foundation of coding, and the stakes were always connected to economic justice and educational equity in an economy built on algorithms and data.
Moses made explicit “a number of issues that are often hidden in coding discourse,” the historian Janet Abbate has observed. “He questioned the implied meritocracy of ‘ability grouping’ … he attacked the stereotype that Black people aren’t interested in STEM … [and] he emphasized that social skills and community were an essential part of overcoming students’ alienation from technology.”
Moses died in 2021, but the Algebra Project lives on, now in collaboration with a group called the “We the People” Math Literacy for All Alliance. The curriculum he pioneered continues to be taught, and the Algebra Project’s 2022 conference again called attention to the need for better public education across the United States, especially for Black, brown, and poor children, “to make full participation in American democracy possible.”
Coding makes a comeback
In the past decade, a new crop of more targeted coding programs has emerged. In 2014, for example, the activist and entrepreneur Van Jones collaborated with the musician Prince to launch #YesWeCode, targeting what they called “low-opportunity communities.” In doing so, they called attention to ongoing educational and economic inequities across the United States.
One of #YesWeCode’s early efforts was a youth-oriented hackathon at the Essence Music Festival in New Orleans in 2014 that encouraged kids to connect coding with issues that mattered to them. As #YesWeCode’s chief innovation officer, Amy Henderson, explained, “A lot of the people who develop apps today are affluent white men, and so they build apps that solve their communities’ problems,” such as Uber. “Meanwhile,” she continued, “one of our young people built an app that sends reminders of upcoming court dates. That’s an issue that impacts his community, so he did something about it.”
#YesWeCode has since morphed into Dream.Tech, an arm of Dream.org, a nonprofit that advocates for new legislation and new economic policies to remedy global climate change, the racialized mass incarceration system in the United States, and America’s long history of poverty. (Its other arms are called Dream.Green and Dream.Justice.) Recently, for example, Dream.org pushed for legislation that would erase long-standing racial disparities in sentencing for drug crimes. As a whole, Dream.org demonstrates an expansive vision of tech justice that can “make the future work for everyone.”
Another initiative, called Code2040 (the name refers to the decade during which people of color are expected to become a demographic majority in the United States), was launched in 2012. It initially focused on diversifying tech by helping Black and Latino computer science majors get jobs at tech companies. But its mission has expanded over the past decade. Code2040 now aims for members of these communities to contribute to the “innovation economy” in all roles at all levels, proportional to their demographic representation in the United States. The ultimate vision: “equitable distribution of power in an economy shaped by the digital revolution.”
Both Code2040’s current CEO, Mimi Fox Melton, and her predecessor, Karla Monterroso, have argued that coding training alone is not enough to guarantee employment or equalize educational opportunities. In an openly critical letter to the tech industry published after the murder of George Floyd in 2020, they noted that 20% of computer science graduates and 24% of coding boot camp grads are Black or Latino, compared with only 6% of tech industry workers. Fox Melton and Monterroso observed: “High-wage work in America is not colorblind; it’s not a meritocracy; it’s white. And that goes doubly for tech.”
These recent coding education efforts ask important questions: Code for what? Code for whom? Meanwhile, several other recent initiatives are focused on the injustices both caused and reflected by more recent aspects of the digital economy, particularly artificial intelligence. They aim to challenge the power of technological systems, rather than funneling more people into the broken systems that already exist. Two of these organizations are the Algorithmic Justice League (AJL) and the Ida B. Wells Just Data Lab.
Joy Buolamwini, a computer scientist, founded the Algorithmic Justice League after discovering as a grad student at MIT that a facial-analysis system she was using in her work didn’t “see” her dark-skinned face. (She had to don a white mask for the software to recognize her features.)
Now, the AJL’s mission is “leading a cultural movement towards equitable and accountable AI,” and its tagline reads: “Technology should serve all of us. Not just the privileged few.” The AJL publishes research about the harms caused by AI, as well as tracking relevant legislation, journalistic coverage, and personal stories, all with the goal of moving toward more equitable and accountable AI. Buolamwini has testified to Congress and in state hearings on these issues.
The Ida B. Wells Just Data Lab, founded and directed by Ruha Benjamin, a Princeton professor of African American studies, is devoted to rethinking and retooling “the relationship between stories and statistics, power and technology, data and justice.” Its website prominently features a quote from the journalist and activist Ida B. Wells, who systematically collected data and reported on white mob violence against Black men during the 1890s. Her message: “The way to right wrongs is to turn the light of truth upon them.” One of the lab’s efforts, the Pandemic Portal, used data to highlight racial inequality in the context of covid-19, focusing on 10 different areas: arts, mutual aid, mental health, testing and treatments, education, prisons, policing, work, housing, and health care. It provided data-based resources and tools and offered evidence that these seemingly disparate categories are, in fact, deeply interwoven.
Technological solutionism may persist in Silicon Valley campuses and state house corridors, but individuals, organizations, and communities are increasingly recognizing that coding instruction alone won’t save them. (Even Seymour Papert expressed skepticism of such efforts back in 1980, writing in Mindstorms that “a particular subculture, one dominated by computer engineers, is influencing the world of education to favor those school students who are most like that subculture.”)
Learning to code won’t solve inequality or poverty or remedy the unjust structures and systems that shape contemporary American life. A broader vision for computer science can be found in the model proposed by Learning for Justice, a project of the Southern Poverty Law Center that works to provide educational resources and engage local communities, with the ultimate goals of addressing injustice and teaching students and the communities they come from to wield power together. The project’s digital literacy framework highlights important focus areas far beyond a narrow emphasis on learning to code, including privacy concerns, uncivil online behavior, fake news, internet scams, ideological echo chambers, the rise of the alt-right, and online radicalization.
These new frameworks of digital literacy, tech diversity, and algorithmic justice go beyond coding to prepare individuals to meaningfully question, evaluate, and engage with today’s array of digital spaces and places. And they prepare all of us to imagine and articulate how those spaces and places can better serve us and our communities.
Joy Lisi Rankin is a research associate professor in the Department of Technology, Culture, and Society at New York University and author of A People’s History of Computing in the United States.